WorldWideScience

Sample records for sparse partial equilibrium

  1. Explicit integration of extremely stiff reaction networks: partial equilibrium methods

    International Nuclear Information System (INIS)

    Guidry, M W; Hix, W R; Billings, J J

    2013-01-01

    In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
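    For readers who want a concrete picture of the explicit asymptotic update that the partial equilibrium method builds on, the Python sketch below shows one common form of the algebraically stabilized step for a species obeying dy/dt = F+ − k·y. The function names, the stiffness test, and the fallback to forward Euler are illustrative assumptions, not the authors' code.

        import numpy as np

        def asymptotic_step(y, dt, fplus, k):
            """One explicit asymptotic update for dy/dt = F+ - k*y (hedged sketch).

            Species with k*dt large use the algebraically stabilized asymptotic
            formula; the rest use forward Euler. fplus(y) and k(y) are assumed to
            return aggregated production rates and effective depletion constants.
            """
            Fp, kk = fplus(y), k(y)
            stiff = kk * dt > 1.0                     # illustrative stiffness test
            euler = y + dt * (Fp - kk * y)            # ordinary explicit update
            asym = (y + dt * Fp) / (1.0 + dt * kk)    # asymptotic update
            return np.where(stiff, asym, euler)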

  2. A Partial Equilibrium Theory for Drops and Capillary Liquids

    International Nuclear Information System (INIS)

    Searcy, Alan W.; Beruto, Dario T.; Barberis, Fabrizio

    2006-01-01

    The two-century-old theory of Young and Laplace retains a powerful influence on surface and interface studies because it quantitatively predicts the height of rise of capillary liquids from the contact angles of drops. But the classical theory does not acknowledge that equilibrium requires separate minimization of partial free energies of one-component liquids bonded to immiscible solids. We generalize a theorem of Gibbs and Curie to obtain a partial equilibrium (PE) theory that does so and that also predicts the height of capillary rise from contact angles of drops. Published observations and our own measurements of contact angles of water bonded to glass and Teflon surfaces support the conclusion of PE theory that contact angles of meniscuses and of drops are different dependent variables. PE theory provides thermodynamic and kinetic guidance to nanoscale processes that the classical theory obscures, as illustrated by examples in our concluding section.

  3. Improved Sparse Channel Estimation for Cooperative Communication Systems

    Directory of Open Access Journals (Sweden)

    Guan Gui

    2012-01-01

    Full Text Available Accurate channel state information (CSI) is necessary at the receiver for coherent detection in amplify-and-forward (AF) cooperative communication systems. To estimate the channel, traditional methods, that is, least squares (LS) and least absolute shrinkage and selection operator (LASSO), are based on assumptions of either a dense channel or a globally sparse channel. However, the LS-based linear method neglects the inherent sparse structure information, while the LASSO-based sparse channel method cannot take full advantage of the prior information. Based on the partial sparse assumption of the cooperative channel model, we propose an improved channel estimation method with a partial sparse constraint. First, by using sparse decomposition theory, channel estimation is formulated as a compressive sensing problem. Second, the cooperative channel is reconstructed by LASSO with the partial sparse constraint. Finally, numerical simulations are carried out to confirm the superiority of the proposed method over global sparse channel estimation methods.
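    As a rough illustration of the partial sparse constraint described above, the Python sketch below runs an ISTA-style LASSO iteration in which only the taps flagged as belonging to the sparse part of the cooperative channel are soft-thresholded. The matrix name, penalty value, and mask are assumptions chosen for the example, not the authors' formulation.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def partial_sparse_lasso(Phi, y, sparse_mask, lam=0.05, iters=500):
            """ISTA sketch of channel estimation with a partial sparse constraint.

            Only taps where sparse_mask is True receive the l1 penalty; the
            remaining (dense) taps are updated without thresholding.
            """
            h = np.zeros(Phi.shape[1])
            L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
            for _ in range(iters):
                z = h - Phi.T @ (Phi @ h - y) / L     # gradient step on the data fit
                h = np.where(sparse_mask, soft(z, lam / L), z)
            return h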

  4. Learning partial differential equations via data discovery and sparse optimization.

    Science.gov (United States)

    Schaeffer, Hayden

    2017-01-01

    We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.
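    The sketch below illustrates the general recipe this abstract describes: build a library of candidate terms from spatial derivatives of the data, then select a few of them by sparse regression. The particular library, the finite-difference derivatives, and the threshold-based selector are stand-ins for the paper's sparse optimization, chosen only to keep the example short.

        import numpy as np

        def learn_pde_terms(u, dx, dt, thresh=1e-3):
            """Identify active PDE terms from snapshots u[x, t] (hedged sketch).

            Candidate features: u, u_x, u_xx, u*u_x. The time derivative u_t is
            regressed on this library, and small coefficients are pruned by a
            simple sequential thresholding loop.
            """
            u_t = np.gradient(u, dt, axis=1)
            u_x = np.gradient(u, dx, axis=0)
            u_xx = np.gradient(u_x, dx, axis=0)
            lib = np.column_stack([f.ravel() for f in (u, u_x, u_xx, u * u_x)])
            rhs = u_t.ravel()
            c = np.linalg.lstsq(lib, rhs, rcond=None)[0]
            for _ in range(10):                        # prune small terms and refit
                keep = np.abs(c) >= thresh
                c = np.zeros_like(c)
                if keep.any():
                    c[keep] = np.linalg.lstsq(lib[:, keep], rhs, rcond=None)[0]
            return c                                   # coefficients of [u, u_x, u_xx, u*u_x]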

  5. Sparse grid spectral methods for the numerical solution of partial differential equations with periodic boundary conditions

    International Nuclear Information System (INIS)

    Kupka, F.

    1997-11-01

    This thesis deals with the extension of sparse grid techniques to spectral methods for the solution of partial differential equations with periodic boundary conditions. A review on boundary and initial-boundary value problems and a discussion on numerical resolution is used to motivate this research. Spectral methods are introduced by projection techniques, and by three model problems: the stationary and the transient Helmholtz equations, and the linear advection equation. The approximation theory on the hyperbolic cross is reviewed and its close relation to sparse grids is demonstrated. This approach extends to non-periodic problems. Various Sobolev spaces with dominant mixed derivative are introduced to provide error estimates for Fourier approximation and interpolation on the hyperbolic cross and on sparse grids by means of Sobolev norms. The theorems are immediately applicable to the stability and convergence analysis of sparse grid spectral methods. This is explicitly demonstrated for the three model problems. A variant of the von Neumann condition is introduced to simplify the stability analysis of the time-dependent model problems. The discrete Fourier transformation on sparse grids is discussed together with its software implementation. Results on numerical experiments are used to illustrate the performance of the new method with respect to the smoothness properties of each example. The potential of the method in mathematical modelling is estimated and generalizations to other sparse grid methods are suggested. The appendix includes a complete Fortran90 program to solve the linear advection equation by the sparse grid Fourier collocation method and a third-order Runge-Kutta routine for integration in time. (author)
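    To make the model problem concrete, here is a minimal full-grid Fourier collocation solver for the linear advection equation with third-order Runge-Kutta time stepping, written in Python rather than the Fortran90 of the appendix. It omits the sparse grid entirely and is only meant to show the kind of spectral discretization the thesis builds on.

        import numpy as np

        def advect_fourier_rk3(u0, a=1.0, L=2 * np.pi, dt=1e-3, steps=1000):
            """Solve u_t + a*u_x = 0 on a periodic grid by Fourier collocation.

            Spatial derivatives are taken in Fourier space; time stepping uses
            the three-stage strong-stability-preserving Runge-Kutta scheme.
            """
            n = u0.size
            ik = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # i * wavenumber
            rhs = lambda u: -a * np.real(np.fft.ifft(ik * np.fft.fft(u)))
            u = u0.astype(float).copy()
            for _ in range(steps):
                u1 = u + dt * rhs(u)
                u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
                u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
            return u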

  6. The energy balance of a plasma in partial local thermodynamic equilibrium

    NARCIS (Netherlands)

    Kroesen, G.M.W.; Schram, D.C.; Timmermans, C.J.; de Haas, J.C.M.

    1990-01-01

    The energy balance for electrons and heavy particles constituting a plasma in partial local thermodynamic equilibrium is derived. The formulation of the energy balance used allows for evaluation of the source terms without knowledge of the particle and radiation transport situation, since most of

  7. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
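    A heavily reduced Python sketch of the alternating scheme outlined in this abstract is given below: sparse codes from one ISTA sweep, a linear classifier fitted on the labeled codes, label propagation to the unlabeled samples, and a codebook update. The manifold-regularization term and the exact update order of the paper are omitted; all names and parameter values here are illustrative.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def semi_supervised_sparse_coding(X, y, labeled, k=32, lam=0.1, iters=30):
            """Alternate between codebook D, codes S, labels y and classifier w."""
            rng = np.random.default_rng(0)
            d, n = X.shape
            D = rng.standard_normal((d, k))
            D /= np.linalg.norm(D, axis=0)
            S = np.zeros((k, n))
            for _ in range(iters):
                L = np.linalg.norm(D, 2) ** 2
                S = soft(S - D.T @ (D @ S - X) / L, lam / L)      # sparse codes (one ISTA sweep)
                A = S[:, labeled]
                w = np.linalg.solve(A @ A.T + 1e-3 * np.eye(k), A @ y[labeled])
                y = np.where(labeled, y, np.sign(S.T @ w))        # propagate class labels
                D = X @ np.linalg.pinv(S)                         # codebook update
                D /= np.linalg.norm(D, axis=0) + 1e-12
            return D, S, w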

  8. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  9. Partial chemical equilibrium in fluid dynamics

    International Nuclear Information System (INIS)

    Ramshaw, J.D.

    1980-01-01

    An analysis is given for the flow of a multicomponent fluid in which an arbitrary number of chemical reactions may occur, some of which are in equilibrium while the others proceed kinetically. The primitive equations describing this situation are inconvenient to use because the progress rates ω̇_s for the equilibrium reactions are determined implicitly by the associated equilibrium constraint conditions. Two alternative equivalent equation systems that are more pleasant to deal with are derived. In the first system, the ω̇_s are eliminated by replacing the transport equations for the chemical species involved in the equilibrium reactions with transport equations for the basic components of which these species are composed. The second system retains the usual species transport equations, but eliminates the nonlinear algebraic equilibrium constraint conditions by deriving an explicit expression for the ω̇_s. Both systems are specialized to the case of an ideal gas mixture. Considerations involved in solving these equation systems numerically are discussed briefly.

  10. Sparse dynamics for partial differential equations.

    Science.gov (United States)

    Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D; Osher, Stanley

    2013-04-23

    We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms.
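    The mechanism is simple enough to show in a few lines. Below, one explicit time step of an arbitrary PDE right-hand side is followed by soft thresholding of the Fourier coefficients, which is the compression step the abstract refers to; the choice of basis, threshold, and time integrator here are assumptions made for illustration.

        import numpy as np

        def sparse_dynamics_step(u, rhs, dt, lam):
            """Advance u by forward Euler, then soft-threshold its Fourier modes."""
            u_new = u + dt * rhs(u)                   # any explicit update works here
            c = np.fft.fft(u_new)
            mag = np.abs(c)
            shrink = np.maximum(1.0 - lam / np.maximum(mag, 1e-300), 0.0)
            return np.real(np.fft.ifft(c * shrink))   # keep only the essential modes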

  11. Partially wrong? Partial equilibrium and the economic analysis of public health emergencies of international concern.

    Science.gov (United States)

    Beutels, P; Edmunds, W J; Smith, R D

    2008-11-01

    We argue that traditional health economic analysis is ill-equipped to estimate the cost effectiveness and cost benefit of interventions that aim at controlling and/or preventing public health emergencies of international concern (such as pandemic influenza or severe acute respiratory syndrome). The implicit assumption of partial equilibrium within both the health sector itself and--if a wider perspective is adopted--the economy as a whole would be violated by such emergencies. We propose an alternative, with the specific aim of accounting for the behavioural changes and capacity problems that are expected to occur when such an outbreak strikes. Copyright (c) 2008 John Wiley & Sons, Ltd.

  12. Numerical method for partial equilibrium flow

    International Nuclear Information System (INIS)

    Ramshaw, J.D.; Cloutman, L.D. (Los Alamos, New Mexico 87545)

    1981-01-01

    A numerical method is presented for chemically reactive fluid flow in which equilibrium and nonequilibrium reactions occur simultaneously. The equilibrium constraints on the species concentrations are established by a quadratic iterative procedure. If the equilibrium reactions are uncoupled and of second or lower order, the procedure converges in a single step. In general, convergence is most rapid when the reactions are weakly coupled. This can frequently be achieved by a judicious choice of the independent reactions. In typical transient calculations, satisfactory accuracy has been achieved with about five iterations per time step
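    The flavour of the iterative equilibrium constraint can be seen in the single-reaction Python sketch below, which adjusts the reaction extent by Newton iteration until the mass-action constraint is satisfied. The paper's procedure treats several, possibly coupled, reactions and differs in detail; the log-form constraint used here is only an illustration.

        import numpy as np

        def equilibrate_one_reaction(c, nu, K, iters=20):
            """Drive a single reaction to its equilibrium constraint.

            c:  initial species concentrations
            nu: stoichiometric coefficients (products > 0, reactants < 0)
            K:  equilibrium constant, so that prod_i c_i**nu_i = K at equilibrium
            """
            xi = 0.0                                   # reaction extent
            for _ in range(iters):
                cc = np.maximum(c + nu * xi, 1e-30)
                g = np.sum(nu * np.log(cc)) - np.log(K)
                dg = np.sum(nu**2 / cc)                # derivative of g w.r.t. xi
                xi -= g / dg
            return c + nu * xi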

  13. Imaging the equilibrium state and magnetization dynamics of partially built hard disk write heads

    Energy Technology Data Exchange (ETDEWEB)

    Valkass, R. A. J., E-mail: rajv202@ex.ac.uk; Yu, W.; Shelford, L. R.; Keatley, P. S.; Loughran, T. H. J.; Hicken, R. J. [School of Physics, University of Exeter, Stocker Road, Exeter EX4 4QL (United Kingdom); Cavill, S. A. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot OX11 0DE (United Kingdom); Department of Physics, University of York, Heslington, York YO10 5DD (United Kingdom); Laan, G. van der; Dhesi, S. S. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot OX11 0DE (United Kingdom); Bashir, M. A.; Gubbins, M. A. [Research and Development, Seagate Technology, 1 Disc Drive, Springtown Industrial Estate, Derry BT48 0BF (United Kingdom); Czoschke, P. J.; Lopusnik, R. [Recording Heads Operation, Seagate Technology, 7801 Computer Avenue South, Bloomington, Minnesota 55435 (United States)

    2015-06-08

    Four different designs of partially built hard disk write heads with a yoke comprising four repeats of NiFe (1 nm)/CoFe (50 nm) were studied by both x-ray photoemission electron microscopy (XPEEM) and time-resolved scanning Kerr microscopy (TRSKM). These techniques were used to investigate the static equilibrium domain configuration and the magnetodynamic response across the entire structure, respectively. Simulations and previous TRSKM studies have made proposals for the equilibrium domain configuration of similar structures, but no direct observation of the equilibrium state of the writers has yet been made. In this study, static XPEEM images of the equilibrium state of writer structures were acquired using x-ray magnetic circular dichroism as the contrast mechanism. These images suggest that the crystalline anisotropy dominates the equilibrium state domain configuration, but competition with shape anisotropy ultimately determines the stability of the equilibrium state. Dynamic TRSKM images were acquired from nominally identical devices. These images suggest that a longer confluence region may hinder flux conduction from the yoke into the pole tip: the shorter confluence region exhibits clear flux beaming along the symmetry axis, whereas the longer confluence region causes flux to conduct along one edge of the writer. The observed variations in dynamic response agree well with the differences in the equilibrium magnetization configuration visible in the XPEEM images, confirming that minor variations in the geometric design of the writer structure can have significant effects on the process of flux beaming.

  14. Cotton Trade Liberalizations and Domestic Agricultural Policy Reforms: A Partial Equilibrium Analysis

    OpenAIRE

    Pan, Suwen; Fadiga, Mohamadou L.; Mohanty, Samarendu; Welch, Mark

    2006-01-01

    This paper analyzed the effects of trade liberalizing reforms in the world cotton market using a partial equilibrium model. The simulation results indicated that a removal of domestic subsidies and border tariffs for cotton would increase the amount of world cotton trade by an average of 4% in the next five years and world cotton prices by an average of 12% over the same time horizon. The findings indicated that under the liberalization policy, the United States would lose part of its export ...

  15. Partial equilibrium in induced redox reactions of plutonium

    Energy Technology Data Exchange (ETDEWEB)

    Nikol'skii, B P; Posvol'skii, M V; Krylov, L I; Morozova, Z P

    1975-01-01

    A study was made of oxidation-reduction reactions of Pu in buffer solutions containing bichromate and a reducing agent which reacted with hexavalent chromium at pH=3.5. In most cases sodium nitrite was used. A rather slow reduction of Pu(6) with NaNO2, in the course of which tetravalent plutonium was formed via the disproportionation reaction of plutonium(5), became very rapid upon the addition of bichromate to the solution. The yield of tetravalent plutonium increased with an increase in the concentration of NaNO2 and the bichromate but never reached 100%. This was due to a simultaneous occurrence of the induced oxidation reaction of Pu(4), leading to a partial equilibrium between the valence forms of plutonium in the nitrite-bichromate system, which on the whole was in a nonequilibrium state. It was shown that in the series of reactions leading to the reduction of plutonium the presence of bivalent chromium was a necessary link.

  16. Sparse distributed memory overview

    Science.gov (United States)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.

  17. The equilibrium structures of the 90° partial dislocation in silicon

    International Nuclear Information System (INIS)

    Valladares, Alexander; Sutton, A P

    2005-01-01

    We consider the free energies of the single-period (SP) and double-period (DP) core reconstructions of the straight 90° partial dislocation in silicon. The vibrational contributions are calculated with a harmonic model. It is found that it leads to a diminishing difference between the free energies of the two core reconstructions with increasing temperature. The question of the relative populations of SP and DP reconstructions in a single straight 90° partial dislocation is solved by mapping the problem onto a one-dimensional Ising model in a magnetic field. The model contains only two parameters and is solved analytically. It leads to the conclusion that for the majority of the published energy differences between the SP and DP reconstructions the equilibrium core structure is dominated by the DP reconstruction at all temperatures up to the melting point. We review whether it is possible to distinguish between the SP and DP reconstructions experimentally, both in principle and in practice. We conclude that aberration corrected transmission electron microscopy should be able to distinguish between these two core reconstructions, but published high resolution micrographs do not allow the distinction to be made.

  18. Statistical approach to partial equilibrium analysis

    Science.gov (United States)

    Wang, Yougui; Stanley, H. E.

    2009-04-01

    A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, named willingness price, is highlighted and constitutes the whole theory. The supply and demand functions are formulated as the distributions of corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of excess demand function are analyzed and the necessary conditions for the existence and uniqueness of equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can strictly prove that a market is efficient in the state of equilibrium.
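    A toy Python version of this picture is given below: demand at a price is the number of buyers whose willingness price is at or above it, supply is the number of sellers whose willingness price is at or below it, and the equilibrium price is where the two curves cross. The discretization and the equality-of-counts criterion are assumptions made for the example.

        import numpy as np

        def willingness_equilibrium(buyer_wtp, seller_wtp, n_grid=1000):
            """Find the price where demand and supply, read off the willingness-price
            distributions, are (approximately) equal."""
            lo = min(buyer_wtp.min(), seller_wtp.min())
            hi = max(buyer_wtp.max(), seller_wtp.max())
            prices = np.linspace(lo, hi, n_grid)
            demand = np.array([(buyer_wtp >= p).sum() for p in prices])
            supply = np.array([(seller_wtp <= p).sum() for p in prices])
            i = np.argmin(np.abs(demand - supply))
            return prices[i], int(demand[i]), int(supply[i])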

  19. Fast sparsely synchronized brain rhythms in a scale-free neural network.

    Science.gov (United States)

    Kim, Sang-Yoon; Lim, Woochang

    2015-08-01

    We consider a directed version of the Barabási-Albert scale-free network model with symmetric preferential attachment with the same in- and out-degrees and study the emergence of sparsely synchronized rhythms for a fixed attachment degree in an inhibitory population of fast-spiking Izhikevich interneurons. Fast sparsely synchronized rhythms with stochastic and intermittent neuronal discharges are found to appear for large values of J (synaptic inhibition strength) and D (noise intensity). For an intensive study we fix J at a sufficiently large value and investigate the population states by increasing D. For small D, full synchronization with the same population-rhythm frequency fp and mean firing rate (MFR) fi of individual neurons occurs, while for large D partial synchronization with fp>〈fi〉 (〈fi〉: ensemble-averaged MFR) appears due to intermittent discharge of individual neurons; in particular, the case of fp>4〈fi〉 is referred to as sparse synchronization. For the case of partial and sparse synchronization, MFRs of individual neurons vary depending on their degrees. As D passes a critical value D* (which is determined by employing an order parameter), a transition to unsynchronization occurs due to the destructive role of noise to spoil the pacing between sparse spikes. For D<D*, only in the case of partial and sparse synchronization do contributions of individual neuronal dynamics to population synchronization change depending on their degrees, unlike in the case of full synchronization. Consequently, dynamics of individual neurons reveal the inhomogeneous network structure for the case of partial and sparse synchronization, which is in contrast to the case of statistically homogeneous networks.

  20. Fast sparsely synchronized brain rhythms in a scale-free neural network

    Science.gov (United States)

    Kim, Sang-Yoon; Lim, Woochang

    2015-08-01

    We consider a directed version of the Barabási-Albert scale-free network model with symmetric preferential attachment with the same in- and out-degrees and study the emergence of sparsely synchronized rhythms for a fixed attachment degree in an inhibitory population of fast-spiking Izhikevich interneurons. Fast sparsely synchronized rhythms with stochastic and intermittent neuronal discharges are found to appear for large values of J (synaptic inhibition strength) and D (noise intensity). For an intensive study we fix J at a sufficiently large value and investigate the population states by increasing D. For small D, full synchronization with the same population-rhythm frequency fp and mean firing rate (MFR) fi of individual neurons occurs, while for large D partial synchronization with fp>〈fi〉 (〈fi〉: ensemble-averaged MFR) appears due to intermittent discharge of individual neurons; in particular, the case of fp>4〈fi〉 is referred to as sparse synchronization. For the case of partial and sparse synchronization, MFRs of individual neurons vary depending on their degrees. As D passes a critical value D* (which is determined by employing an order parameter), a transition to unsynchronization occurs due to the destructive role of noise to spoil the pacing between sparse spikes. For D<D*, only in the case of partial and sparse synchronization do contributions of individual neuronal dynamics to population synchronization change depending on their degrees, unlike in the case of full synchronization. Consequently, dynamics of individual neurons reveal the inhomogeneous network structure for the case of partial and sparse synchronization, which is in contrast to the case of statistically homogeneous networks.

  1. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin

    2015-04-03

    © 2015, © American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America. Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high-dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross-validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares, and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared with the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplementary materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

  2. Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald

    2013-01-01

    The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PLSR ...

  3. Sparse Power-Law Network Model for Reliable Statistical Predictions Based on Sampled Data

    Directory of Open Access Journals (Sweden)

    Alexander P. Kartun-Giles

    2018-04-01

    Full Text Available A projective network model is a model that enables predictions to be made based on a subsample of the network data, with the predictions remaining unchanged if a larger sample is taken into consideration. An exchangeable model is a model that does not depend on the order in which nodes are sampled. Despite a large variety of non-equilibrium (growing) and equilibrium (static) sparse complex network models that are widely used in network science, how to reconcile sparseness (constant average degree) with the desired statistical properties of projectivity and exchangeability is currently an outstanding scientific problem. Here we propose a network process with hidden variables which is projective and can generate sparse power-law networks. Despite the model not being exchangeable, it can be closely related to exchangeable uncorrelated networks as indicated by its information theory characterization and its network entropy. The use of the proposed network process as a null model is here tested on real data, indicating that the model offers a promising avenue for statistical network modelling.
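    For orientation, the Python sketch below generates a sparse power-law network from hidden variables in the generic way (edge probability proportional to the product of the endpoints' hidden variables, scaled by 1/N so the mean degree stays constant). It is not the projective process proposed in the paper, only a reminder of what a sparse hidden-variable construction looks like.

        import numpy as np

        def hidden_variable_graph(n, gamma=2.5, theta_min=1.0, seed=0):
            """Sparse power-law graph from Pareto-distributed hidden variables."""
            rng = np.random.default_rng(seed)
            theta = theta_min * (rng.pareto(gamma - 1.0, size=n) + 1.0)
            p = np.minimum(np.outer(theta, theta) / (n * theta.mean()), 1.0)
            np.fill_diagonal(p, 0.0)
            upper = np.triu(rng.random((n, n)) < p, k=1)   # sample each pair once
            return upper | upper.T                         # symmetric adjacency matrix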

  4. The equilibrium structures of the 90° partial dislocation in silicon

    Energy Technology Data Exchange (ETDEWEB)

    Valladares, Alexander; Sutton, A P [Materials Modelling Laboratory, Department of Materials, University of Oxford, OX1 3PH (United Kingdom)

    2005-12-07

    We consider the free energies of the single-period (SP) and double-period (DP) core reconstructions of the straight 90° partial dislocation in silicon. The vibrational contributions are calculated with a harmonic model. It is found that it leads to a diminishing difference between the free energies of the two core reconstructions with increasing temperature. The question of the relative populations of SP and DP reconstructions in a single straight 90° partial dislocation is solved by mapping the problem onto a one-dimensional Ising model in a magnetic field. The model contains only two parameters and is solved analytically. It leads to the conclusion that for the majority of the published energy differences between the SP and DP reconstructions the equilibrium core structure is dominated by the DP reconstruction at all temperatures up to the melting point. We review whether it is possible to distinguish between the SP and DP reconstructions experimentally, both in principle and in practice. We conclude that aberration corrected transmission electron microscopy should be able to distinguish between these two core reconstructions, but published high resolution micrographs do not allow the distinction to be made.

  5. Coordinating choice in partial cooperative equilibrium

    NARCIS (Netherlands)

    Mallozzi, L.; Tijs, S.H.

    2009-01-01

    In this paper we consider symmetric aggregative games and investigate partial cooperation between a portion of the players that sign a cooperative agreement and the rest of the players. Existence results of partial cooperative equilibria are obtained when the players who do not sign the agreement

  6. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun

    2013-10-01

    When objects undergo large pose change, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail in tracking them. To address this issue, in this study, the authors propose an online algorithm by combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift because of the accumulative errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamical MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.

  7. Structural phase diagram and equilibrium oxygen partial pressure of YBa2Cu3O6+x

    DEFF Research Database (Denmark)

    Andersen, N.H.; Lebech, B.; Poulsen, H.F.

    1990-01-01

    An experimental technique by which in-situ gas volumetric measurements are carried out on a neutron powder diffractometer is presented and used for simultaneous studies of the oxygen equilibrium partial pressure and the structural phase diagram of YBa2Cu3O6+x, including the ordering of oxygen. Experimental data were collected under near-equilibrium conditions at 350 points in (x,T)-space with 0.15 < x < 0.92, the oxygen content being determined by use of the ideal gas law in connection with iodometric titration and structural analyses. The oxygen equilibrium partial pressure shows significant variations with temperature and concentration which indicate that x = 0.15 and x = 0.92 are the minimum and maximum oxygen concentrations. Measurements of oxygen in-diffusion flow show relaxation-type behaviour.

  8. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun; Tan, Jieqing; Chen, Peng; Zhang, Jie; Helg, Lei

    2013-01-01

    The key idea in the method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background.

  9. SparseM: A Sparse Matrix Package for R *

    Directory of Open Access Journals (Sweden)

    Roger Koenker

    2003-02-01

    Full Text Available SparseM provides some basic R functionality for linear algebra with sparse matrices. Use of the package is illustrated by a family of linear model fitting functions that implement least squares methods for problems with sparse design matrices. Significant performance improvements in memory utilization and computational speed are possible for applications involving large sparse matrices.
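    SparseM itself is an R package; purely as an illustration of the same kind of workload, namely least squares with a large sparse design matrix, the following Python/scipy snippet performs the analogous computation. It does not reproduce SparseM's API.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        X = sparse.random(10000, 200, density=0.01, format="csr", random_state=0)
        beta = rng.standard_normal(200)
        y = X @ beta + 0.01 * rng.standard_normal(10000)

        beta_hat = lsqr(X, y)[0]        # sparse least squares without densifying X
        print(np.abs(beta_hat - beta).max())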

  10. Diagnosis and prognosis of Osteoarthritis by texture analysis using sparse linear models

    DEFF Research Database (Denmark)

    Marques, Joselene; Clemmensen, Line Katrine Harder; Dam, Erik

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and sparse feature transformation methods in a fully automatic framework. We compare the performances of a partial least squares (PLS) forward feature selection strategy to a hard threshold sparse PLS algorithm and a sparse linear discriminant model. The texture analysis framework was applied to diagnosis of knee osteoarthritis (OA) and prognosis of cartilage loss. For this investigation, a generic texture feature bank was extracted from magnetic resonance images of tibial knee bone. The features were used as input to the sparse algorithms, which defined the best features to retain in the model. To cope with the limited number of samples, the data was evaluated using 10-fold cross-validation (CV). The diagnosis evaluation using sparse PLS reached a generalization area-under-the-ROC curve (AUC) of 0...

  11. Sparse multi-block PLSR for biomarker discovery when integrating data from LC-MS and NMR metabolomics

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Nørskov, Natalja; Yde, Christian Clement

    2015-01-01

    The objective of this study was to implement a multivariate method which analyzes multi-block metabolomics data and performs variable selection in order to discover potential biomarkers, simultaneously. We call this method sparse multi-block partial least squares regression (Sparse MBPLSR). ...

  12. Value Added Tax and price stability in Nigeria: A partial equilibrium analysis

    Directory of Open Access Journals (Sweden)

    Marius Ikpe

    2013-12-01

    Full Text Available The economic impact of Value Added Tax (VAT that was implemented in Nigeria in 1994 has generated much debate in recent times, especially with respect to its effect on the level of aggregate prices. This study empirically examines the influence of VAT on price stability in Nigeria using partial equilibrium analysis. We introduced the VAT variable in the framework of a combination of structuralist, monetarist and fiscalist approaches to inflation modelling. The analysis was carried out by applying multiple regression analysis in static form to data for the 1994-2010 period. The results reveal that VAT exerts a strong upward pressure on price levels, most likely due to the burden of VAT on intermediate outputs. The study rules out the option of VAT exemptions for intermediate outputs as a solution, due to the difficulty in distinguishing between intermediate and final outputs. Instead, it recommends a detailed post-VAT cost-benefit analysis to assess the social desirability of VAT policy in Nigeria.

  13. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    Science.gov (United States)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  14. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    International Nuclear Information System (INIS)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation

  15. Partial local thermal equilibrium in a low-temperature hydrogen plasma

    International Nuclear Information System (INIS)

    Hey, J.D.; Chu, C.C.; Rash, J.P.S.

    1999-01-01

    If the degree of ionisation is sufficient, competition between de-excitation by electron collisions and radiative decay determines the smallest principal quantum number (the so-called 'thermal limit') above which partial local thermodynamic equilibrium (PLTE) holds under the particular conditions of electron density and temperature. The LTE (PLTE) criteria of Wilson (JQSRT 1962;2:477-90), Griem (Phys Rev 1963;131:1170-6; Plasma Spectroscopy. New York: McGraw-Hill, 1964), Drawin (Z Physik 1969;228: 99-119), Hey (JQSRT 1976;16:69-75), and Fujimoto and McWhirter (Phys Rev A 1990;42:6588-601) are examined as regards their applicability to neutral atoms. For these purposes, we consider for simplicity an idealised, steady-state, homogeneous and primarily optically thin plasma, with some additional comments and numerical estimates on the roles of opacity and of atom-atom collisions. Particularly for atomic states of lower principal quantum number, the first two of the above criteria should be modified quite appreciably before application to neutral radiators in plasmas of low temperature, because of the profoundly different nature of the near-threshold collisional cross-sections for atoms and ions, while the most recent criterion should be applied with caution to PLTE of atoms in cold plasmas in ionisation balance. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  16. Sparse orthogonal population representation of spatial context in the retrosplenial cortex.

    Science.gov (United States)

    Mao, Dun; Kandler, Steffen; McNaughton, Bruce L; Bonin, Vincent

    2017-08-15

    Sparse orthogonal coding is a key feature of hippocampal neural activity, which is believed to increase episodic memory capacity and to assist in navigation. Some retrosplenial cortex (RSC) neurons convey distributed spatial and navigational signals, but place-field representations such as observed in the hippocampus have not been reported. Combining cellular Ca2+ imaging in RSC of mice with a head-fixed locomotion assay, we identified a population of RSC neurons, located predominantly in superficial layers, whose ensemble activity closely resembles that of hippocampal CA1 place cells during the same task. Like CA1 place cells, these RSC neurons fire in sequences during movement, and show narrowly tuned firing fields that form a sparse, orthogonal code correlated with location. RSC 'place' cell activity is robust to environmental manipulations, showing partial remapping similar to that observed in CA1. This population code for spatial context may assist the RSC in its role in memory and/or navigation. Neurons in the retrosplenial cortex (RSC) encode spatial and navigational signals. Here the authors use calcium imaging to show that, similar to the hippocampus, RSC neurons also encode place cell-like activity in a sparse orthogonal representation, partially anchored to the allocentric cues on the linear track.

  17. Sparse data structure design for wavelet-based methods

    Directory of Open Access Journals (Sweden)

    Latu Guillaume

    2011-12-01

    Full Text Available This course gives an introduction to the design of efficient datatypes for adaptive wavelet-based applications. It presents some code fragments and benchmarking techniques useful for learning about the design of sparse data structures and adaptive algorithms. Material and practical examples are given, and they provide a good introduction for anyone involved in the development of adaptive applications. An answer will be given to the question: how to implement and efficiently use the discrete wavelet transform in computer applications? A focus will be made on time-evolution problems, and the use of wavelet-based schemes for adaptively solving partial differential equations (PDEs). One crucial issue is that the benefits of the adaptive method in terms of algorithmic cost reduction must not be wasted by the overheads associated with sparse data management.
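    In the spirit of the course, here is a minimal Python example of a sparse container for wavelet detail coefficients: a plain Haar transform whose significant details are kept in a hash map keyed by (level, position). It assumes a signal length that is a power of two and is only a starting point for the adaptive structures discussed in the notes.

        import numpy as np

        def sparse_haar(signal, eps=1e-3):
            """Return {(level, position): detail} for details with |detail| > eps."""
            coeffs = {}
            s = np.asarray(signal, dtype=float)
            level = 0
            while s.size > 1:                           # assumes len(signal) = 2**k
                avg = (s[0::2] + s[1::2]) / np.sqrt(2.0)
                det = (s[0::2] - s[1::2]) / np.sqrt(2.0)
                for i, d in enumerate(det):
                    if abs(d) > eps:                    # store only significant details
                        coeffs[(level, i)] = d
                s, level = avg, level + 1
            coeffs[("scaling", 0)] = s[0]               # coarsest scaling coefficient
            return coeffs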

  18. Oscillator Neural Network Retrieving Sparsely Coded Phase Patterns

    Science.gov (United States)

    Aoyagi, Toshio; Nomura, Masaki

    1999-08-01

    Little is known theoretically about the associative memory capabilities of neural networks in which information is encoded not only in the mean firing rate but also in the timing of firings. Particularly, in the case of sparsely coded patterns, it is biologically important to consider the timings of firings and to study how such consideration influences storage capacities and quality of recalled patterns. For this purpose, we propose a simple extended model of oscillator neural networks to allow for expression of a nonfiring state. Analyzing both equilibrium states and dynamical properties in recalling processes, we find that the system possesses good associative memory.

  19. In Defense of Sparse Tracking: Circulant Sparse Tracker

    KAUST Repository

    Zhang, Tianzhu; Bibi, Adel Aamer; Ghanem, Bernard

    2016-01-01

    Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework. However, most sparse representation based trackers have high computational cost, less than promising tracking performance, and limited feature representation. To deal with the above issues, we propose a novel circulant sparse tracker (CST), which exploits circulant target templates. Because of the circulant structure property, CST has the following advantages: (1) It can refine and reduce particles using circular shifts of target templates. (2) The optimization can be efficiently solved entirely in the Fourier domain. (3) High dimensional features can be embedded into CST to significantly improve tracking performance without sacrificing much computation time. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that CST performs better than all other sparse trackers and favorably against state-of-the-art methods.
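    The computational core of the circulant trick is that correlating a template with every circular shift of a candidate costs one FFT pair, as in the short Python sketch below; the full tracker's particle refinement and feature embedding are not shown.

        import numpy as np

        def circulant_scores(template, candidate):
            """Score of the template against all circular shifts of the candidate,
            computed entirely in the Fourier domain."""
            T = np.fft.fft(template)
            C = np.fft.fft(candidate)
            return np.real(np.fft.ifft(np.conj(T) * C))   # one score per shift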

  20. In Defense of Sparse Tracking: Circulant Sparse Tracker

    KAUST Repository

    Zhang, Tianzhu

    2016-12-13

    Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework. However, most sparse representation based trackers have high computational cost, less than promising tracking performance, and limited feature representation. To deal with the above issues, we propose a novel circulant sparse tracker (CST), which exploits circulant target templates. Because of the circulant structure property, CST has the following advantages: (1) It can refine and reduce particles using circular shifts of target templates. (2) The optimization can be efficiently solved entirely in the Fourier domain. (3) High dimensional features can be embedded into CST to significantly improve tracking performance without sacrificing much computation time. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that CST performs better than all other sparse trackers and favorably against state-of-the-art methods.

  1. The Roles of Sparse Direct Methods in Large-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-06-27

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence for iterative solvers. We have deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  2. The Roles of Sparse Direct Methods in Large-scale Simulations

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-01-01

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence for iterative solvers. We have deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  3. Speculative segmented sum for sparse matrix-vector multiplication on heterogeneous processors

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2015-01-01

    of the same chip is triggered to re-arrange the predicted partial sums for a correct resulting vector. On three heterogeneous processors from Intel, AMD and nVidia, using 20 sparse matrices as a benchmark suite, the experimental results show that our method obtains significant performance improvement over...

  4. Helical axis stellarator equilibrium model

    International Nuclear Information System (INIS)

    Koniges, A.E.; Johnson, J.L.

    1985-02-01

    An asymptotic model is developed to study MHD equilibria in toroidal systems with a helical magnetic axis. Using a characteristic coordinate system based on the vacuum field lines, the equilibrium problem is reduced to a two-dimensional generalized partial differential equation of the Grad-Shafranov type. A stellarator-expansion free-boundary equilibrium code is modified to solve the helical-axis equations. The expansion model is used to predict the equilibrium properties of Asperators NP-3 and NP-4. Numerically determined flux surfaces, magnetic well, transform, and shear are presented. The equilibria show a toroidal Shafranov shift

  5. GHG Mitigation Potential, Costs and Benefits in Global Forests: A Dynamic Partial Equilibrium Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sathaye, Jayant; Makundi, Willy; Dale, Larry; Chan, Peter; Andrasko, Kenneth

    2005-03-22

    This paper reports on the global potential for carbon sequestration in forest plantations, and the reduction of carbon emissions from deforestation, in response to six carbon price scenarios from 2000 to 2100. These carbon price scenarios cover a range typically seen in global integrated assessment models. The world forest sector was disaggregated into ten regions, four largely temperate, developed regions: the European Union, Oceania, Russia, and the United States; and six developing, mostly tropical, regions: Africa, Central America, China, India, Rest of Asia, and South America. Three mitigation options -- long- and short-rotation forestry, and the reduction of deforestation -- were analyzed using a global dynamic partial equilibrium model (GCOMAP). Key findings of this work are that cumulative carbon gain ranges from 50.9 to 113.2 Gt C by 2100, higher carbon prices early lead to earlier carbon gain and vice versa, and avoided deforestation accounts for 51 to 78 percent of modeled carbon gains by 2100. The estimated present value of cumulative welfare change in the sector ranges from a decline of $158 billion to a gain of $81 billion by 2100. The decline is associated with a decrease in deforestation.

  6. New fundamental equations of thermodynamics for systems in chemical equilibrium at a specified partial pressure of a reactant and the standard transformed formation properties of reactants

    International Nuclear Information System (INIS)

    Alberty, R.A.; Oppenheim, I.

    1993-01-01

    When temperature, pressure, and the partial pressure of a reactant are fixed, the criterion of chemical equilibrium can be expressed in terms of the transformed Gibbs energy G' that is obtained by using a Legendre transform involving the chemical potential of the reactant that is fixed. For reactions of ideal gases, the most natural variables to use in the fundamental equation are T, P', and P_B, where P' is the partial pressure of the reactants other than the one that is fixed and P_B is the partial pressure of the reactant that is fixed. The fundamental equation for G' yields the expression for the transformed entropy S', and a transformed enthalpy can be defined by the additional Legendre transform H' = G' + TS'. This leads to an additional form of the fundamental equation. The calculation of transformed thermodynamic properties and equilibrium compositions is discussed for a simple system and for a general multireaction system. The change, in a reaction, of the binding of the reactant that is at a specified pressure can be calculated using one of the six Maxwell equations of the fundamental equation in G'.
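    In compact form, and with the caveat that the notation here (n_B and mu_B for the amount and chemical potential of the fixed reactant B) is assumed rather than quoted from the paper, the transforms described above can be written as:

        G' = G - n_B\,\mu_B, \qquad
        S' = -\left(\frac{\partial G'}{\partial T}\right)_{P',\,P_B}, \qquad
        H' = G' + T\,S'.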

  7. Semi-blind sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.

  8. Partial Reform Equilibrium in Russia: A Case Study of the Political Interests of and in the Russian Gas and Oil Industry

    Science.gov (United States)

    Everett, Rabekah

    While several theories abound that attempt to explain the obstacles to democracy in Russia, Joel Hellman's partial reform equilibrium model is an institutional theory that illustrates how weak institutions, combined with an instrumentalist cultural approach to the law and authoritarian-minded leadership, allowed the struggle over interests to craft and determine the nature of Russia's political structure. This thesis builds on the work of Hellman by using the partial reform theory to understand the evolution of interest infiltration and their impact on the formation of policies and institutions in favour of the elites or winners from 2004 to the present time period that allow them to wield law as a political weapon. The hypothesis posits that through their vested interests in state politics, the political and economic elites of the oil and gas industry have successfully stalled reform in Russia resulting in partial reform equilibrium. This is illustrated in a case study that was designed to collect the names, backgrounds, and social networks of gas and oil executives in order to determine how many of them have a history of, or are currently working as, ministers in the government or representatives in the Federation Council. The objective being to measure the degree to which gas and oil interests are present in government decision-making and conversely, the degree to which the government is present in the gas and oil industry. The thesis stresses the importance of institutional structure in determining Russia's political evolution, and uses vested interests as a primary source of structural institutional change, while also stressing on the social and international implications of this evolution.

  9. Structural Sparse Tracking

    KAUST Repository

    Zhang, Tianzhu

    2015-06-01

    Sparse representation has been applied to visual tracking by finding the best target candidate with minimal reconstruction error by use of target templates. However, most sparse representation based trackers only consider holistic or local representations and do not make full use of the intrinsic structure among and inside target candidates, thereby making the representation less effective when similar objects appear or under occlusion. In this paper, we propose a novel Structural Sparse Tracking (SST) algorithm, which not only exploits the intrinsic relationship among target candidates and their local patches to learn their sparse representations jointly, but also preserves the spatial layout structure among the local patches inside each target candidate. We show that our SST algorithm accommodates most existing sparse trackers with the respective merits. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed SST algorithm performs favorably against several state-of-the-art methods.

  10. Sparse distributed memory

    Science.gov (United States)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.

  11. Group-wise partial least square regression

    NARCIS (Netherlands)

    Camacho, José; Saccenti, Edoardo

    2018-01-01

    This paper introduces the group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique where the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are

  12. Interferometric interpolation of sparse marine data

    KAUST Repository

    Hanafy, Sherif M.

    2013-10-11

    We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that the seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

  13. Partial Cooperative Equilibria: Existence and Characterization

    Directory of Open Access Journals (Sweden)

    Amandine Ghintran

    2010-09-01

    We study the solution concepts of partial cooperative Cournot-Nash equilibria and partial cooperative Stackelberg equilibria. The partial cooperative Cournot-Nash equilibrium is axiomatically characterized by using notions of rationality, consistency and converse consistency with regard to reduced games. We also establish sufficient conditions for which partial cooperative Cournot-Nash equilibria and partial cooperative Stackelberg equilibria exist in supermodular games. Finally, we provide an application to strategic network formation where such solution concepts may be useful.

  14. On solutions to equilibrium problems for systems of stiffened gases

    OpenAIRE

    Flåtten, Tore; Morin, Alexandre; Munkejord, Svend Tollak

    2011-01-01

    We consider an isolated system of N immiscible fluids, each following a stiffened-gas equation of state. We consider the problem of calculating equilibrium states from the conserved fluid-mechanical properties, i.e., the partial densities and internal energies. We consider two cases; in each case mechanical equilibrium is assumed, but the fluids may or may not be in thermal equilibrium. For both cases, we address the issues of existence, uniqueness, and physical validity of equilibrium solutions ...

  15. A Novel CSR-Based Sparse Matrix-Vector Multiplication on GPUs

    Directory of Open Access Journals (Sweden)

    Guixia He

    2016-01-01

    Sparse matrix-vector multiplication (SpMV) is an important operation in scientific computations. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMVs on graphics processing units (GPUs), for example, CSR-scalar and CSR-vector, usually have poor performance due to irregular memory access patterns. This motivates us to propose a perfect CSR-based SpMV on the GPU that is called PCSR. PCSR involves two kernels and accesses CSR arrays in a fully coalesced manner by introducing a middle array, which greatly alleviates the deficiencies of CSR-scalar (rare coalescing) and CSR-vector (partial coalescing). Test results on a single C2050 GPU show that PCSR fully outperforms CSR-scalar, CSR-vector, and CSRMV and HYBMV in the vendor-tuned CUSPARSE library and is comparable with a recently proposed CSR-based algorithm, CSR-Adaptive. Furthermore, we extend PCSR on a single GPU to multiple GPUs. Experimental results on four C2050 GPUs show that, no matter whether the communication between GPUs is considered or not, PCSR on multiple GPUs achieves good performance and has high parallel efficiency.
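    For orientation, a minimal serial reference of the CSR layout and the scalar SpMV that the GPU kernels above optimize; this Python sketch (array names are my own) makes no attempt to reproduce PCSR's coalesced access pattern:

        import numpy as np

        def csr_spmv(values, col_idx, row_ptr, x):
            """y = A @ x for a matrix A stored in compressed sparse row (CSR) form."""
            n_rows = len(row_ptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):                      # one pass per row (CSR-scalar style)
                for k in range(row_ptr[i], row_ptr[i + 1]):
                    y[i] += values[k] * x[col_idx[k]]
            return y

        # Toy example: A = [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
        values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
        col_idx = np.array([0, 2, 1, 0, 2])
        row_ptr = np.array([0, 2, 3, 5])
        print(csr_spmv(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))  # -> [5. 3. 7.]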

  16. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.

  17. Iterative algorithms for large sparse linear systems on parallel computers

    Science.gov (United States)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.

  18. One-shot 3D scanning by combining sparse landmarks with dense gradient information

    Science.gov (United States)

    Di Martino, Matías; Flores, Jorge; Ferrari, José A.

    2018-06-01

    Scene understanding is one of the most challenging and popular problems in the field of robotics and computer vision and the estimation of 3D information is at the core of most of these applications. In order to retrieve the 3D structure of a test surface we propose a single shot approach that combines dense gradient information with sparse absolute measurements. To that end, we designed a colored pattern that codes fine horizontal and vertical fringes, with sparse corner landmarks. By measuring the deformation (bending) of horizontal and vertical fringes, we are able to estimate surface local variations (i.e. its gradient field). Then corner sparse landmarks are detected and matched to infer sparse absolute information about the test surface height. Local gradient information is combined with the sparse absolute values which work as anchors to guide the integration process. We show that this can be mathematically done in a very compact and intuitive way by properly defining a Poisson-like partial differential equation. Then we address in detail how the problem can be formulated in a discrete domain and how it can be practically solved by straightforward linear numerical solvers. Finally, validation experiments are presented.
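    A rough sketch of the integration step just described, posed here as a sparse least-squares problem rather than the authors' Poisson formulation: finite differences of the unknown heights are matched to the measured gradient field, while a few anchor pixels are softly tied to their absolute heights. Function names and the anchor weighting are my own assumptions.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lsqr

        def integrate_gradients(gx, gy, anchors, weight=10.0):
            """Recover heights h (H x W) from gradient fields gx, gy and a dict of
            sparse absolute anchors {(row, col): height}, in a least-squares sense."""
            H, W = gx.shape
            idx = np.arange(H * W).reshape(H, W)
            rows, cols, vals, rhs = [], [], [], []
            eq = 0
            for i in range(H):                     # h[i, j+1] - h[i, j] ~ gx[i, j]
                for j in range(W - 1):
                    rows += [eq, eq]; cols += [idx[i, j + 1], idx[i, j]]; vals += [1.0, -1.0]
                    rhs.append(gx[i, j]); eq += 1
            for i in range(H - 1):                 # h[i+1, j] - h[i, j] ~ gy[i, j]
                for j in range(W):
                    rows += [eq, eq]; cols += [idx[i + 1, j], idx[i, j]]; vals += [1.0, -1.0]
                    rhs.append(gy[i, j]); eq += 1
            for (i, j), h_abs in anchors.items():  # sparse absolute anchors, softly enforced
                rows.append(eq); cols.append(idx[i, j]); vals.append(weight)
                rhs.append(weight * h_abs); eq += 1
            A = sp.coo_matrix((vals, (rows, cols)), shape=(eq, H * W)).tocsr()
            return lsqr(A, np.array(rhs))[0].reshape(H, W)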

  19. Chemical equilibrium. [maximizing entropy of gas system to derive relations between thermodynamic variables

    Science.gov (United States)

    1976-01-01

    The entropy of a gas system with the number of particles subject to external control is maximized to derive relations between the thermodynamic variables that obtain at equilibrium. These relations are described in terms of the chemical potential, defined as equivalent partial derivatives of entropy, energy, enthalpy, free energy, or free enthalpy. At equilibrium, the change in total chemical potential must vanish. This fact is used to derive the equilibrium constants for chemical reactions in terms of the partition functions of the species involved in the reaction. Thus the equilibrium constants can be determined accurately, just as other thermodynamic properties, from a knowledge of the energy levels and degeneracies for the gas species involved. These equilibrium constants permit one to calculate the equilibrium concentrations or partial pressures of chemically reacting species that occur in gas mixtures at any given condition of pressure and temperature or volume and temperature.
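    As a reminder of the standard statistical-mechanical relation being invoked (my notation: q_i is the molecular partition function of species i with energies measured from that species' ground state, nu_i the signed stoichiometric coefficients, and Delta E_0 the difference in ground-state energies of products and reactants), the ideal-gas equilibrium constants read

        \[
        K_c(T) \;=\; \prod_i \left(\frac{q_i(T,V)}{V}\right)^{\nu_i} e^{-\Delta E_0 / (k_B T)},
        \qquad
        K_p(T) \;=\; K_c(T)\,\bigl(k_B T\bigr)^{\sum_i \nu_i},
        \]

    with K_c expressed in number-density units.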

  20. Study of PrP - I heterogeneous equilibrium

    International Nuclear Information System (INIS)

    Vasil'eva, I.G.; Mironov, K.E.; Tarasenko, A.D.

    1976-01-01

    Using static methods the authors have measured the equilibrium vapor pressure in the system PrP + I2 at different temperatures and different initial iodine concentrations. The equilibrium reactions in the system have been determined. The reaction of PrP with iodine is irreversible. The content of PrI3 and I2 in the gas phase is negligible. The pressure in the system is determined by the partial pressure of phosphorus.

  1. What are the key drivers of MAC curves? A partial-equilibrium modelling approach for the UK

    International Nuclear Information System (INIS)

    Kesicki, Fabian

    2013-01-01

    Marginal abatement cost (MAC) curves are widely used for the assessment of costs related to CO2 emissions reduction in environmental economics, as well as domestic and international climate policy. Several meta-analyses and model comparisons have previously been performed that aim to identify the causes for the wide range of MAC curves. Most of these concentrate on general equilibrium models with a focus on aspects such as specific model type and technology learning, while other important aspects remain almost unconsidered, including the availability of abatement technologies and the level of discount rates. This paper addresses the influence of several key parameters on MAC curves for the United Kingdom and the year 2030. A technology-rich energy system model, UK MARKAL, is used to derive the MAC curves. The results of this study show that MAC curves are robust even to extreme fossil fuel price changes, while uncertainty around the choice of the discount rate, the availability of key abatement technologies and the demand level were singled out as the most important influencing factors. By using a different model type and studying a wider range of influencing factors, this paper contributes to the debate on the sensitivity of MAC curves. - Highlights: ► A partial-equilibrium model is employed to test key sensitivities of MAC curves. ► MAC curves are found to be robust to wide-ranging changes in fossil fuel prices. ► The most influential factors are the discount rate and the availability of key technologies. ► Further important uncertainty in MAC curves is related to demand changes.

  2. Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    KAUST Repository

    Nobile, Fabio; Tempone, Raul; Wolfers, Sören

    2017-01-01

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of different accuracy and computational work of the arguments of this map. We propose and analyze a generalized version of Smolyak’s algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension that appears in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.

  3. Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    KAUST Repository

    Nobile, Fabio

    2017-11-16

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of different accuracy and computational work of the arguments of this map. We propose and analyze a generalized version of Smolyak’s algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension that appears in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.

  4. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    International Nuclear Information System (INIS)

    Gene Golub; Kwok Ko

    2009-01-01

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties in the previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
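    For context, the Hermitian/skew-Hermitian splitting (HSS) iteration usually associated with that name, in the form I recall it (the exact variant researched under the award may differ): write A = H + S with H = (A + A*)/2 and S = (A - A*)/2, choose a shift alpha > 0, and alternate

        \begin{align*}
        (\alpha I + H)\,x^{(k+1/2)} &= (\alpha I - S)\,x^{(k)} + b,\\
        (\alpha I + S)\,x^{(k+1)}   &= (\alpha I - H)\,x^{(k+1/2)} + b.
        \end{align*}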

  5. Organic tank safety project: Equilibrium moisture determination task. FY 1998 annual progress report

    International Nuclear Information System (INIS)

    Scheele, R.D.; Bredt, P.R.; Sell, R.L.

    1998-08-01

    During fiscal year 1998, PNNL investigated the effect of P(H2O) at or near maximum tank waste surface temperatures on the equilibrium water content of selected Hanford waste samples. These studies were performed to determine how dry organic-bearing wastes will become if exposed to environmental Hanford water partial pressures. The samples tested were obtained from Organic Watch List Tanks. At 26 °C, the lowest temperature used, the water partial pressures ranged from 2 to 22 torr. At 41 °C, the highest temperature used, the water partial pressures ranged from 3.5 to 48 torr. When the aliquots exposed to the lowest and highest water partial pressures reached their equilibrium or near-equilibrium water contents, they were exchanged to determine if hysteresis occurred. In some experiments, once equilibrated, aliquots not used in the hysteresis experiments were allowed to equilibrate at room temperature (23 °C) until the hysteresis experiments ended; this provides a measure of the effect of temperature.

  6. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint

    Directory of Open Access Journals (Sweden)

    Zhi Gao

    2018-05-01

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.

  7. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint.

    Science.gov (United States)

    Gao, Zhi; Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Ramesh, Bharath; Zhai, Ruifang

    2018-05-06

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.
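    A minimal sketch of sparse coding against a fixed, pre-learned dictionary, using plain ISTA (iterative soft-thresholding) for the lasso problem min_x 0.5*||D x - y||^2 + lam*||x||_1. This is a generic illustration, not the ridge-dictionary construction or the particular solver used in the paper.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista_sparse_code(D, y, lam=0.05, n_iter=300):
            """Sparse code of signal y over a fixed dictionary D (columns = atoms)."""
            L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
            x = np.zeros(D.shape[1])
            for _ in range(n_iter):
                grad = D.T @ (D @ x - y)           # gradient of 0.5*||Dx - y||^2
                x = soft_threshold(x - grad / L, lam / L)
            return x

        # Toy usage: recover a 3-sparse code from a noisy measurement
        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
        x_true = np.zeros(256); x_true[[5, 50, 200]] = [1.0, -2.0, 0.5]
        y = D @ x_true + 0.01 * rng.standard_normal(64)
        print(np.flatnonzero(np.abs(ista_sparse_code(D, y)) > 0.1))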

  8. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.

  9. Parallel preconditioning techniques for sparse CG solvers

    Energy Technology Data Exchange (ETDEWEB)

    Basermann, A.; Reichel, B.; Schelthoff, C. [Central Institute for Applied Mathematics, Juelich (Germany)

    1996-12-31

    Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular, for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iterations of the simply diagonally scaled CG and are shown to be well suited for massively parallel machines.
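    For reference, a serial sketch of the diagonally scaled (Jacobi-preconditioned) CG that serves as the baseline above; the polynomial or incomplete-Cholesky preconditioners investigated in the paper would replace the M_inv step. Dense NumPy is used purely for brevity.

        import numpy as np

        def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
            """Conjugate gradients for SPD A with diagonal (Jacobi) preconditioning."""
            M_inv = 1.0 / np.diag(A)                # preconditioner: inverse of diag(A)
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # 1D Poisson test problem
        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x = jacobi_pcg(A, np.ones(n))
        print(np.linalg.norm(A @ x - np.ones(n)))   # residual should be small (~1e-10)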

  10. Decentralized modal identification using sparse blind source separation

    International Nuclear Information System (INIS)

    Sadhu, A; Hazra, B; Narasimhan, S; Pandey, M D

    2011-01-01

    Popular ambient vibration-based system identification methods process information collected from a dense array of sensors centrally to yield the modal properties. In such methods, the need for a centralized processing unit capable of satisfying large memory and processing demands is unavoidable. With the advent of wireless smart sensor networks, it is now possible to process information locally at the sensor level, instead. The information at the individual sensor level can then be concatenated to obtain the global structure characteristics. A novel decentralized algorithm based on wavelet transforms to infer global structure mode information using measurements obtained using a small group of sensors at a time is proposed in this paper. The focus of the paper is on algorithmic development, while the actual hardware and software implementation is not pursued here. The problem of identification is cast within the framework of under-determined blind source separation invoking transformations of measurements to the time–frequency domain resulting in a sparse representation. The partial mode shape coefficients so identified are then combined to yield complete modal information. The transformations are undertaken using stationary wavelet packet transform (SWPT), yielding a sparse representation in the wavelet domain. Principal component analysis (PCA) is then performed on the resulting wavelet coefficients, yielding the partial mixing matrix coefficients from a few measurement channels at a time. This process is repeated using measurements obtained from multiple sensor groups, and the results so obtained from each group are concatenated to obtain the global modal characteristics of the structure

  11. Decentralized modal identification using sparse blind source separation

    Science.gov (United States)

    Sadhu, A.; Hazra, B.; Narasimhan, S.; Pandey, M. D.

    2011-12-01

    Popular ambient vibration-based system identification methods process information collected from a dense array of sensors centrally to yield the modal properties. In such methods, the need for a centralized processing unit capable of satisfying large memory and processing demands is unavoidable. With the advent of wireless smart sensor networks, it is now possible to process information locally at the sensor level, instead. The information at the individual sensor level can then be concatenated to obtain the global structure characteristics. A novel decentralized algorithm based on wavelet transforms to infer global structure mode information using measurements obtained using a small group of sensors at a time is proposed in this paper. The focus of the paper is on algorithmic development, while the actual hardware and software implementation is not pursued here. The problem of identification is cast within the framework of under-determined blind source separation invoking transformations of measurements to the time-frequency domain resulting in a sparse representation. The partial mode shape coefficients so identified are then combined to yield complete modal information. The transformations are undertaken using stationary wavelet packet transform (SWPT), yielding a sparse representation in the wavelet domain. Principal component analysis (PCA) is then performed on the resulting wavelet coefficients, yielding the partial mixing matrix coefficients from a few measurement channels at a time. This process is repeated using measurements obtained from multiple sensor groups, and the results so obtained from each group are concatenated to obtain the global modal characteristics of the structure.

  12. Lifting the US crude oil export ban: A numerical partial equilibrium analysis

    International Nuclear Information System (INIS)

    Langer, Lissy; Huppmann, Daniel; Holz, Franziska

    2016-01-01

    The upheaval in global crude oil markets and the boom in shale oil production in North America brought scrutiny on the US export ban for crude oil from 1975. The ban was eventually lifted in early 2016. This paper examines the shifts of global trade flows and strategic refinery investments in a spatial, game-theoretic partial equilibrium model. We consider detailed oil supply chain infrastructure with multiple crude oil types, distinct oil products, as well as specific refinery configurations and modes of transport. Prices, quantities produced and consumed, as well as infrastructure and refining capacity investments are endogenous to the model. We compare two scenarios: an insulated US crude oil market, and a counter-factual with lifted export restrictions. We find a significant expansion of US sweet crude exports with the lift of the export ban. In the US refinery sector, more (imported) heavy sour crude is transformed. Countries importing US sweet crude gain from higher product output, while avoiding costly refinery investments. Producers of heavy sour crude (e.g. the Middle East) are incentivised to climb up the value chain to defend their market share and maintain their dominant position. - Highlights: • We study the impacts of lifting the US crude ban on global oil flows and investments. • We find massive expansion of US sweet crude oil exports. • We analyze the resulting welfare effects for US producers, refiners and consumers. • We indicate the changes on global trade patterns. • We conclude that lifting the ban is the right policy for the US and the global economy.

  13. Internalisation of external costs in the Polish power generation sector: A partial equilibrium model

    International Nuclear Information System (INIS)

    Kudelko, Mariusz

    2006-01-01

    This paper presents a methodical framework that forms the basis for an economic analysis of mid-term development planning for the Polish energy system. The partial equilibrium model is described and its results are presented for the different scenarios applied. The model predicts the generation, investment and pricing of mid-term decisions that refer to the Polish electricity and heat markets. The current structure of the Polish energy sector is characterised by interactions between the supply and demand sides of the energy sector. The supply side regards possibilities to deliver fuels from domestic and import sources and their conversion through transformation processes. Public power plants, public CHP plants, industry CHP plants and municipal heat plants represent the main producers of energy in Poland. Demand is characterised by the major energy consumers, i.e. industry and construction, transport, agriculture, trade and services, individual consumers and export. The relationships between the domestic electricity and heat markets are modelled taking into account external cost estimates. The volume and structure of energy production, electricity and heat prices, emissions, external costs and social welfare for the different scenarios are presented. Results of the model demonstrate that the internalisation of external costs through the increase in energy prices implies a significant improvement in social welfare.

  14. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object can be represented as a sparse linear combination of all other objects, with the combination coefficients regarded as a similarity measure between objects and used to regularize their ranking scores.

  15. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.

  16. Turbulent flows over sparse canopies

    Science.gov (United States)

    Sharma, Akshath; García-Mayoral, Ricardo

    2018-04-01

    Turbulent flows over sparse and dense canopies exerting a similar drag force on the flow are investigated using Direct Numerical Simulations. The dense canopies are modelled using a homogeneous drag force, while for the sparse canopy, the geometry of the canopy elements is represented. It is found that on using the friction velocity based on the local shear at each height, the streamwise velocity fluctuations and the Reynolds stress within the sparse canopy are similar to those from a comparable smooth-wall case. In addition, when scaled with the local friction velocity, the intensity of the off-wall peak in the streamwise vorticity for sparse canopies also recovers a value similar to a smooth-wall. This indicates that the sparse canopy does not significantly disturb the near-wall turbulence cycle, but causes its rescaling to an intensity consistent with a lower friction velocity within the canopy. In comparison, the dense canopy is found to have a higher damping effect on the turbulent fluctuations. For the case of the sparse canopy, a peak in the spectral energy density of the wall-normal velocity, and Reynolds stress is observed, which may indicate the formation of Kelvin-Helmholtz-like instabilities. It is also found that a sparse canopy is better modelled by a homogeneous drag applied on the mean flow alone, and not the turbulent fluctuations.

  17. Two-step superresolution approach for surveillance face image through radial basis function-partial least squares regression and locality-induced sparse representation

    Science.gov (United States)

    Jiang, Junjun; Hu, Ruimin; Han, Zhen; Wang, Zhongyuan; Chen, Jun

    2013-10-01

    Face superresolution (SR), or face hallucination, refers to the technique of generating a high-resolution (HR) face image from a low-resolution (LR) one with the help of a set of training examples. It aims at transcending the limitations of electronic imaging systems. Applications of face SR include video surveillance, in which the individual of interest is often far from cameras. A two-step method is proposed to infer a high-quality and HR face image from a low-quality and LR observation. First, we establish the nonlinear relationship between LR face images and HR ones, according to radial basis function and partial least squares (RBF-PLS) regression, to transform the LR face into the global face space. Then, a locality-induced sparse representation (LiSR) approach is presented to enhance the local facial details once all the global faces for each LR training face are constructed. A comparison of some state-of-the-art SR methods shows the superiority of the proposed two-step approach, RBF-PLS global face regression followed by LiSR-based local patch reconstruction. Experiments also demonstrate the effectiveness under both simulation conditions and some real conditions.

  18. Partial biotinidase deficiency associated with Coffin-Siris syndrome.

    Science.gov (United States)

    Burlina, A B; Sherwood, W G; Zacchello, F

    1990-06-01

    Coffin-Siris syndrome is an infrequent condition characterised by mental retardation, nail hypoplasia or absence with fifth digit involvement and feeding problems. In addition, sparse scalp hair and chronic intractable eczema have been described in this syndrome. We report a 26-month-old girl with the disease and partial biotinidase deficiency.

  19. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.; Bensmail, H.; Yao, N.; Gao, Xin

    2013-01-01

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  20. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.

    2013-09-26

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  1. In-place sparse suffix sorting

    DEFF Research Database (Denmark)

    Prezza, Nicola

    2018-01-01

    information regarding the lexicographical order of a size-b subset of all n text suffixes is often needed. Such information can be stored space-efficiently (in b words) in the sparse suffix array (SSA). The SSA and its relative sparse LCP array (SLCP) can be used as a space-efficient substitute of the sparse suffix tree. Very recently, Gawrychowski and Kociumaka [11] showed that the sparse suffix tree (and therefore SSA and SLCP) can be built in asymptotically optimal O(b) space with a Monte Carlo algorithm running in O(n) time. The main reason for using the SSA and SLCP arrays in place of the sparse suffix tree is, however, their reduced space of b words each. This leads naturally to the quest for in-place algorithms building these arrays. Franceschini and Muthukrishnan [8] showed that the full suffix array can be built in-place and in optimal running time. On the other hand, finding sub-quadratic in...
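    To make the two objects concrete: the sparse suffix array lists a chosen set of b suffix start positions in the lexicographic order of the corresponding suffixes, and the sparse LCP array stores the longest-common-prefix length between lexicographically adjacent entries. A naive sketch (neither in-place nor O(n)-time, purely for illustration):

        def sparse_suffix_array(text, positions):
            """Start positions from `positions`, sorted by the lexicographic order of their suffixes."""
            return sorted(positions, key=lambda i: text[i:])

        def sparse_lcp_array(text, ssa):
            """LCP length between each suffix and its predecessor in the SSA (first entry 0)."""
            def lcp(i, j):
                k = 0
                while i + k < len(text) and j + k < len(text) and text[i + k] == text[j + k]:
                    k += 1
                return k
            return [0] + [lcp(ssa[t - 1], ssa[t]) for t in range(1, len(ssa))]

        text = "mississippi"
        ssa = sparse_suffix_array(text, [0, 3, 5, 8])      # a size-4 subset of all suffixes
        print(ssa, sparse_lcp_array(text, ssa))            # [0, 8, 3, 5] [0, 0, 0, 1]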

  2. Algorithms for sparse, symmetric, definite quadratic lambda-matrix eigenproblems

    International Nuclear Information System (INIS)

    Scott, D.S.; Ward, R.C.

    1981-01-01

    Methods are presented for computing eigenpairs of the quadratic lambda-matrix, Mλ² + Cλ + K, where M, C, and K are large and sparse, and have special symmetry-type properties. These properties are sufficient to ensure that all the eigenvalues are real and that theory analogous to the standard symmetric eigenproblem exists. The methods employ some standard techniques such as partial tri-diagonalization via the Lanczos Method and subsequent eigenpair calculation, shift-and-invert strategy and subspace iteration. The methods also employ some new techniques such as Rayleigh-Ritz quadratic roots and the inertia of symmetric, definite, quadratic lambda-matrices.
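    A small dense illustration of the problem class (not of the sparse Lanczos-based machinery the report develops): the quadratic eigenproblem (Mλ² + Cλ + K)x = 0 can be solved, for modest sizes, through the standard companion linearization A z = λ B z with z = (x, λx).

        import numpy as np
        from scipy.linalg import eig

        def quadratic_eigenvalues(M, C, K):
            """Eigenvalues of (M*lam**2 + C*lam + K) x = 0 via companion linearization."""
            n = M.shape[0]
            I, Z = np.eye(n), np.zeros((n, n))
            A = np.block([[Z, I], [-K, -C]])    # row 1 of the pencil: y = lam * x
            B = np.block([[I, Z], [Z, M]])      # row 2: -K x - C y = lam M y, i.e. the quadratic equation
            lam, _ = eig(A, B)
            return lam

        # Small overdamped example with symmetric, definite matrices
        M = np.diag([1.0, 2.0])
        C = 5.0 * np.eye(2)
        K = np.array([[2.0, -1.0], [-1.0, 2.0]])
        lams = quadratic_eigenvalues(M, C, K)
        # residual check: smallest singular value of M*lam^2 + C*lam + K should be ~0 for every lam
        print(max(np.linalg.svd(M * l**2 + C * l + K, compute_uv=False)[-1] for l in lams))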

  3. Generalized multivalued equilibrium-like problems: auxiliary principle technique and predictor-corrector methods

    Directory of Open Access Journals (Sweden)

    Vahid Dadashi

    2016-02-01

    This paper is dedicated to the introduction of a new class of equilibrium problems named generalized multivalued equilibrium-like problems, which includes the classes of hemiequilibrium problems, equilibrium-like problems, equilibrium problems, hemivariational inequalities, and variational inequalities as special cases. By utilizing the auxiliary principle technique, some new predictor-corrector iterative algorithms for solving them are suggested and analyzed. The convergence analysis of the proposed iterative methods requires either partially relaxed monotonicity or joint pseudomonotonicity of the bifunctions involved in the generalized multivalued equilibrium-like problem. Results obtained in this paper include several new and known results as special cases.

  4. Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding

    Science.gov (United States)

    Moody, Daniela; Wohlberg, Brendt

    2018-01-02

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.

  5. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  6. Solving Sparse Polynomial Optimization Problems with Chordal Structure Using the Sparse, Bounded-Degree Sum-of-Squares Hierarchy

    NARCIS (Netherlands)

    Marandi, Ahmadreza; de Klerk, Etienne; Dahl, Joachim

    The sparse bounded degree sum-of-squares (sparse-BSOS) hierarchy of Weisser, Lasserre and Toh [arXiv:1607.01151,2016] constructs a sequence of lower bounds for a sparse polynomial optimization problem. Under some assumptions, it is proven by the authors that the sequence converges to the optimal

  7. Sparse tensor spherical harmonics approximation in radiative transfer

    International Nuclear Information System (INIS)

    Grella, K.; Schwab, Ch.

    2011-01-01

    The stationary monochromatic radiative transfer equation is a partial differential transport equation stated on a five-dimensional phase space. To obtain a well-posed problem, boundary conditions have to be prescribed on the inflow part of the domain boundary. We solve the equation with a multi-level Galerkin FEM in physical space and a spectral discretization with harmonics in solid angle and show that the benefits of the concept of sparse tensor products, known from the context of sparse grids, can also be leveraged in combination with a spectral discretization. Our method allows us to include high spectral orders without incurring the 'curse of dimension' of a five-dimensional computational domain. Neglecting boundary conditions, we find analytically that for smooth solutions, the convergence rate of the full tensor product method is retained in our method up to a logarithmic factor, while the number of degrees of freedom grows essentially only as fast as for the purely spatial problem. For the case with boundary conditions, we propose a splitting of the physical function space and a conforming tensorization. Numerical experiments in two physical and one angular dimension show evidence for the theoretical convergence rates to hold in the latter case as well.

  8. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...

  9. The structural phase diagram and oxygen equilibrium partial pressure of YBa2Cu3O6+x studied by neutron powder diffraction and gas volumetry

    Science.gov (United States)

    Andersen, N. H.; Lebech, B.; Poulsen, H. F.

    1990-12-01

    An experimental technique based on neutron powder diffraction and gas volumetry is presented and used to study the structural phase diagram of YBa2Cu3O6+x under equilibrium conditions in an extended part of the (x, T) phase diagram (0.15 < x < 0.92 and 25 °C < T < 725 °C). Our experimental observations lend strong support to a recent two-dimensional anisotropic next-nearest-neighbour Ising model calculation (the ASYNNNI model) of the basal plane oxygen ordering based on first-principles interaction parameters. Simultaneous measurements of the oxygen equilibrium partial pressure show anomalies, one of which proves the thermodynamic stability of the orthorhombic OII double cell structure. Striking similarity with the predictions of recent model calculations supports interpreting another anomaly as resulting from local one-dimensional fluctuations in the distribution of oxygen atoms in the basal plane of tetragonal YBCO. Our pressure data also indicate that x = 0.92 is the maximum obtainable oxygen concentration for oxygen pressures below 760 Torr.

  10. Composition and partition functions of partially ionized hydrogen plasma in Non-Local Thermal Equilibrium (Non-LThE) and Non-Local Chemical Equilibrium (Non-LChE)

    International Nuclear Information System (INIS)

    Chen Kuan; Eddy, T.L.

    1993-01-01

    A GTME (Generalized MultiThermodynamic Equilibrium) plasma model is developed for plasmas in both Non-LThE (Non-Local Thermal Equilibrium) and Non-LChE (Non-Local Chemical Equilibrium). The model uses multitemperatures for thermal nonequilibrium and non-zero chemical affinities as a measure of the deviation from chemical equilibrium. The plasma is treated as an ideal gas with the Debye-Hueckel approximation employed for pressure correction. The proration method is used when the cutoff energy level is between two discrete levels. The composition and internal partition functions of a hydrogen plasma are presented for electron temperatures ranging from 5000 to 35000 K and pressures from 0.1 to 1000 kPa. Number densities of 7 different species of hydrogen plasma and internal partition functions of different energy modes (rotational, vibrational, and electronic excitation) are computed for three affinity values. The results differ from other plasma properties in that they 1) are not based on equilibrium properties; and 2) are expressed as a function of different energy distribution parameters (temperatures) within each energy mode of each species as appropriate. The computed number densities and partition functions are applicable to calculating the thermodynamic, transport, and radiation properties of a hydrogen plasma not in thermal and chemical equilibria. The nonequilibrium plasma model and plasma compositions presented in this paper are very useful to the diagnosis of high-speed and/or low-pressure plasma flows in which the assumptions of local thermal and chemical equilibrium are invalid. (orig.)

  11. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan; Cui, Xuefeng; Yu, Ge; Guo, Lili; Gao, Xin

    2017-01-01

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role.

  12. Nonflat equilibrium liquid shapes on flat surfaces.

    Science.gov (United States)

    Starov, Victor M

    2004-01-15

    The hydrostatic pressure in thin liquid layers differs from the pressure in the ambient air. This difference is caused by the actions of surface forces and capillary pressure. The manifestation of the surface force action is the disjoining pressure, which has a very special S-shaped form in the case of partial wetting (aqueous thin films and thin films of aqueous electrolyte and surfactant solutions, both free films and films on solid substrates). In thin flat liquid films the disjoining pressure acts alone and determines their thickness. However, if the film surface is curved then both the disjoining and the capillary pressures act simultaneously. In the case of partial wetting their simultaneous action results in the existence of nonflat equilibrium liquid shapes. It is shown that in the case of an S-shaped disjoining pressure isotherm, microdrops, microdepressions, and equilibrium periodic films exist on flat solid substrates. Criteria are found for both the existence and the stability of these nonflat equilibrium liquid shapes. It is shown that a transition from thick films to thinner films can go via intermediate nonflat states, microdepressions and periodic films, which both can be more stable than flat films within some range of hydrostatic pressure. Experimental investigations of the shapes of the predicted nonflat layers can open new possibilities for determining the disjoining pressure in the thickness range in which flat films are unstable.
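    The equilibrium condition underlying this discussion is, as far as I can reconstruct it and up to sign conventions, the augmented Young-Laplace equation: for a two-dimensional liquid profile of local thickness h(x), the capillary pressure and the disjoining pressure together balance the constant excess pressure P_e in the film,

        \[
        \gamma\,\frac{h''}{\left(1 + h'^2\right)^{3/2}} \;+\; \Pi(h) \;=\; P_e ,
        \]

    where gamma is the liquid-vapour surface tension and Pi(h) the S-shaped disjoining pressure isotherm; flat films correspond to h' = h'' = 0, i.e. Pi(h) = P_e.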

  13. Sparse Image Reconstruction in Computed Tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer

    In recent years, increased focus on the potentially harmful effects of x-ray computed tomography (CT) scans, such as radiation-induced cancer, has motivated research on new low-dose imaging techniques. Sparse image reconstruction methods, as studied for instance in the field of compressed sensing...... applications. This thesis takes a systematic approach toward establishing quantitative understanding of conditions for sparse reconstruction to work well in CT. A general framework for analyzing sparse reconstruction methods in CT is introduced and two sets of computational tools are proposed: 1...... contributions to a general set of computational characterization tools. Thus, the thesis contributions help advance sparse reconstruction methods toward routine use in...

  14. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin; Luo, Ruiyan; Carroll, Raymond J.; Zhao, Hongyu

    2015-01-01

    predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths

  15. Regret Theory and Equilibrium Asset Prices

    Directory of Open Access Journals (Sweden)

    Jiliang Sheng

    2014-01-01

    Regret theory is a behavioral approach to decision making under uncertainty. In this paper we assume that there are two representative investors in a frictionless market, a representative active investor who selects his optimal portfolio based on regret theory and a representative passive investor who invests only in the benchmark portfolio. In a partial equilibrium setting, the objective of the representative active investor is modeled as minimization of the regret about final wealth relative to the benchmark portfolio. In equilibrium this optimal strategy gives rise to a behavioral asset pricing model. We show that the market beta and the benchmark beta that is related to the investor's regret are the determinants of equilibrium asset prices. We also extend our model to a market with multibenchmark portfolios. Empirical tests using stock price data from Shanghai Stock Exchange show strong support to the asset pricing model based on regret theory.

  16. Sparse decompositions in 'incoherent' dictionaries

    DEFF Research Database (Denmark)

    Gribonval, R.; Nielsen, Morten

    2003-01-01

    a unique sparse representation in such a dictionary. In particular, it is proved that the result of Donoho and Huo, concerning the replacement of a combinatorial optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may...

  17. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduces sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  18. A sparse-grid isogeometric solver

    KAUST Repository

    Beck, Joakim; Sangalli, Giancarlo; Tamellini, Lorenzo

    2018-01-01

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90’s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance to the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.
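    For reference, the combination-technique formula alluded to above, in its standard form and my notation (conventions for the level index vary): in d dimensions, the level-n sparse-grid approximation combines anisotropic full-grid (here, tensor-product spline) solutions u_l, indexed by level multi-indices l, as

        \[
        u_n^{\mathrm{CT}} \;=\; \sum_{q=0}^{d-1} (-1)^{q} \binom{d-1}{q} \sum_{\lvert \boldsymbol{\ell} \rvert_1 \,=\, n - q} u_{\boldsymbol{\ell}} .
        \]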

  19. A sparse-grid isogeometric solver

    KAUST Repository

    Beck, Joakim

    2018-02-28

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  20. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such a case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such a possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.

  1. Conformational stability and self-association equilibrium in biologics.

    Science.gov (United States)

    Clarkson, Benjamin R; Schön, Arne; Freire, Ernesto

    2016-02-01

    Biologics exist in equilibrium between native, partially denatured, and denatured conformational states. The population of any of these states is dictated by their Gibbs energy and can be altered by changes in physical and solution conditions. Some conformations have a tendency to self-associate and aggregate, an undesirable phenomenon in protein therapeutics. Conformational equilibrium and self-association are linked thermodynamic functions. Given that any associative reaction is concentration dependent, conformational stability studies performed at different protein concentrations can provide early clues to future aggregation problems. This analysis can be applied to the selection of protein variants or the identification of better formulation solutions. In this review, we discuss three different aggregation situations and their manifestation in the observed conformational equilibrium of a protein. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Joint Group Sparse PCA for Compressed Hyperspectral Imaging.

    Science.gov (United States)

    Khan, Zohaib; Shafait, Faisal; Mian, Ajmal

    2015-12-01

    A sparse principal component analysis (PCA) seeks a sparse linear combination of input features (variables), so that the derived features still explain most of the variations in the data. A group sparse PCA introduces structural constraints on the features in seeking such a linear combination. Collectively, the derived principal components may still require measuring all the input features. We present a joint group sparse PCA (JGSPCA) algorithm, which forces the basis coefficients corresponding to a group of features to be jointly sparse. Joint sparsity ensures that the complete basis involves only a sparse set of input features, whereas the group sparsity ensures that the structural integrity of the features is maximally preserved. We evaluate the JGSPCA algorithm on the problems of compressed hyperspectral imaging and face recognition. Compressed sensing results show that the proposed method consistently outperforms sparse PCA and group sparse PCA in reconstructing the hyperspectral scenes of natural and man-made objects. The efficacy of the proposed compressed sensing method is further demonstrated in band selection for face recognition.

  3. Estimating Equilibrium Effects of Job Search Assistance

    DEFF Research Database (Denmark)

    Gautier, Pieter; Muller, Paul; van der Klaauw, Bas

    that the nonparticipants in the experiment regions find jobs more slowly after the introduction of the activation program (relative to workers in other regions). We then estimate an equilibrium search model. This model shows that a large-scale roll-out of the activation program decreases welfare, while a standard partial...... microeconometric cost-benefit analysis would conclude the opposite....

  4. Gyrokinetic Magnetohydrodynamics and the Associated Equilibrium

    Science.gov (United States)

    Lee, W. W.; Hudson, S. R.; Ma, C. H.

    2017-10-01

    A proposed scheme for the calculations of gyrokinetic MHD and its associated equilibrium is discussed in relation to a recent paper on the subject. The scheme is based on the time-dependent gyrokinetic vorticity equation and parallel Ohm's law, as well as the associated gyrokinetic Ampere's law. This set of equations, in terms of the electrostatic potential, ϕ, and the vector potential, A, supports both spatially varying perpendicular and parallel pressure gradients and their associated currents. The MHD equilibrium can be reached when ϕ -> 0 and A becomes constant in time, which, in turn, gives ∇·(J∥ + J⊥) = 0 and the associated magnetic islands. Examples in simple cylindrical geometry will be given. The present work is partially supported by US DoE Grant DE-AC02-09CH11466.

  5. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...

  6. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-01

    Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
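
    The abstract contrasts accumulator data structures; the following minimal Python sketch (an illustration of Gustavson's row-wise formulation with a hash-map accumulator, not the kkSpGEMM implementation) shows where the accumulator enters:

        def spgemm_csr(a_indptr, a_indices, a_data, b_indptr, b_indices, b_data):
            # Row-wise C = A * B on CSR inputs; 'acc' is the per-row accumulator
            # whose data structure (hash map, dense array, ...) drives performance.
            c_indptr, c_indices, c_data = [0], [], []
            for i in range(len(a_indptr) - 1):
                acc = {}
                for jj in range(a_indptr[i], a_indptr[i + 1]):
                    j, a_ij = a_indices[jj], a_data[jj]
                    for kk in range(b_indptr[j], b_indptr[j + 1]):
                        k = b_indices[kk]
                        acc[k] = acc.get(k, 0.0) + a_ij * b_data[kk]
                for k in sorted(acc):
                    c_indices.append(k)
                    c_data.append(acc[k])
                c_indptr.append(len(c_indices))
            return c_indptr, c_indices, c_data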

  7. A consistent model for the equilibrium thermodynamic functions of partially ionized flibe plasma with Coulomb corrections

    International Nuclear Information System (INIS)

    Zaghloul, Mofreh R.

    2003-01-01

    Flibe (2LiF-BeF2) is a molten salt that has been chosen as the coolant and breeding material in many design studies of the inertial confinement fusion (ICF) chamber. Flibe plasmas are to be generated in the ICF chamber in a wide range of temperatures and densities. These plasmas are more complex than the plasma of any single chemical species. Nevertheless, the composition and thermodynamic properties of the resulting flibe plasmas are needed for the gas dynamics calculations and the determination of other design parameters in the ICF chamber. In this paper, a simple consistent model for determining the detailed plasma composition and thermodynamic functions of high-temperature, fully dissociated and partially ionized flibe gas is presented and used to calculate different thermodynamic properties of interest to fusion applications. The computed properties include the average ionization state; kinetic pressure; internal energy; specific heats; adiabatic exponent, as well as the sound speed. The presented results are computed under the assumptions of local thermodynamic equilibrium (LTE) and electro-neutrality. A criterion for the validity of the LTE assumption is presented and applied to the computed results. Other attempts in the literature are assessed with their implied inaccuracies pointed out and discussed
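
    The abstract does not reproduce the underlying relations; LTE composition models of this kind are usually built on the Saha equation for each ionization stage z of each species (stated here as standard background, with U_z the internal partition functions and χ_z the ionization energy, possibly lowered by the Coulomb corrections mentioned above):

        \frac{n_{z+1}\, n_e}{n_z} \;=\; \frac{2\,U_{z+1}}{U_z} \left( \frac{2\pi m_e k T}{h^2} \right)^{3/2} e^{-\chi_z / kT}.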

  8. Iteration scheme for implicit calculations of kinetic and equilibrium chemical reactions in fluid dynamics

    International Nuclear Information System (INIS)

    Ramshaw, J.D.; Chang, C.H.

    1995-01-01

    An iteration scheme for the implicit treatment of equilibrium chemical reactions in partial equilibrium flow has previously been described. Here we generalize this scheme to kinetic reactions as well as equilibrium reactions. This extends the applicability of the scheme to problems with kinetic reactions that are fast in regions of the flow field but slow in others. The resulting scheme thereby provides a single unified framework for the implicit treatment of an arbitrary number of coupled equilibrium and kinetic reactions in chemically reacting fluid flow. 10 refs., 2 figs

  9. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  10. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
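
    As a hedged illustration of why the FFT helps (consistent with, but not quoted from, the abstract): writing \hat{D}, \hat{x}, \hat{s} for the DFTs of the dictionary, coefficient maps and signal, the ADMM coefficient update decouples into one small linear system per frequency,

        (\hat{D}^{H}\hat{D} + \rho I)\,\hat{x} \;=\; \hat{D}^{H}\hat{s} + \rho(\hat{y} - \hat{u}),

    and, for a single image, each per-frequency system involves a rank-one matrix and can be solved in closed form (e.g., via the Sherman-Morrison formula), which is consistent with the O(MN log N) cost quoted above.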

  11. Hyperspectral Unmixing with Robust Collaborative Sparse Regression

    Directory of Open Access Journals (Sweden)

    Chang Li

    2016-07-01

    Full Text Available Recently, sparse unmixing (SU) of hyperspectral data has received particular attention for analyzing remote sensing images. However, most SU methods are based on the commonly admitted linear mixing model (LMM), which ignores the possible nonlinear effects (i.e., nonlinearity). In this paper, we propose a new method named robust collaborative sparse regression (RCSR) based on the robust LMM (rLMM) for hyperspectral unmixing. The rLMM takes the nonlinearity into consideration, and the nonlinearity is merely treated as an outlier, which has the underlying sparse property. The RCSR simultaneously takes the collaborative sparse property of the abundance and the sparsely distributed additive property of the outlier into consideration, which can be formulated as a robust joint sparse regression problem. The inexact augmented Lagrangian method (IALM) is used to optimize the proposed RCSR. The qualitative and quantitative experiments on synthetic datasets and real hyperspectral images demonstrate that the proposed RCSR is efficient for solving the hyperspectral SU problem compared with the other four state-of-the-art algorithms.

  12. Image fusion using sparse overcomplete feature dictionaries

    Science.gov (United States)

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  13. Examining the Competition for Forest Resources in Sweden Using Factor Substitution Analysis and Partial Equilibrium Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Olsson, Anna

    2011-07-01

    The overall objective of the thesis is to analyse the procurement competition for forest resources in Sweden. The thesis consists of an introductory part and two self-contained papers. In paper I a translog cost function approach is used to analyse the factor substitution in the sawmill industry, the pulp and paper industry and the heating industry in Sweden over the period 1970 to 2008. The estimated parameters are used to calculate the Allen and Morishima elasticities of substitution as well as the price elasticities of input demand. The utilisation of forest resources in the energy sector has been increasing and this increase is believed to continue. The increase is, to a large extent, caused by economic policies introduced to reduce the emission of greenhouse gases. Such policies could lead to an increase in the procurement competition between the forest industries and the energy sector. The calculated substitution elasticities indicate that it is easier for the heating industry to substitute between by-products and logging residues than it is for the pulp and paper industry to substitute between by-products and roundwood. This suggests that the pulp and paper industry could suffer from an increase in the procurement competition. However, overall the substitution elasticities estimated in our study are relatively low. This indicates that substitution possibilities could be rather limited due to rigidities in input prices. This result suggests that competition for forest resources might also be relatively limited. In paper II a partial equilibrium model is constructed in order to assess the effects of an increasing utilisation of forest resources in the energy sector. The increasing utilisation of forest fuel is, to a large extent, caused by economic policies introduced to reduce the emission of greenhouse gases. In countries where forests already are highly utilised such policies will lead to an increase in the procurement competition between the forest sector and

  14. Manifold regularization for sparse unmixing of hyperspectral images.

    Science.gov (United States)

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin

    2016-01-01

    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance, unmixing of each mixed pixel in the scene is to find an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure in the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization to the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both the simulated and real hyperspectral data sets have demonstrated the effectiveness of our proposed model.
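
    The abstract does not state the objective explicitly; a plausible form of the manifold regularized collaborative sparse regression problem, given here only as an assumption-labelled sketch (D the spectral library, Y the observed pixels, A the abundance matrix, L the graph Laplacian over pixels), is

        \min_{A \ge 0}\; \tfrac{1}{2}\|Y - DA\|_F^2 \;+\; \lambda\,\|A\|_{2,1} \;+\; \tfrac{\mu}{2}\,\mathrm{tr}\!\left(A L A^{\top}\right),

    where the \ell_{2,1} term encourages collaborative (row-wise) sparsity and the trace term enforces the locally geometrical structure; as stated above, such problems are typically solved with ADMM-type algorithms.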

  15. A note on partial vertical integration

    NARCIS (Netherlands)

    G.W.J. Hendrikse (George); H.J.M. Peters (Hans)

    1989-01-01

    textabstractA simple model is constructed to show how partial vertical integration may emerge as an equilibrium market structure in a world characterized by rationing, differences in the reservation prices of buyers, and in the risk attitudes of buyers and sellers. The buyers with the high

  16. Enhancing Scalability of Sparse Direct Methods

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia, Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-01-01

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome scalability bottleneck of direct methods, in both time and memory. These include parallelizing symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers

  17. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

  18. Sparse adaptive filters for echo cancellation

    CERN Document Server

    Paleologu, Constantin

    2011-01-01

    Adaptive filters with a large number of coefficients are usually involved in both network and acoustic echo cancellation. Consequently, it is important to improve the convergence rate and tracking of the conventional algorithms used for these applications. This can be achieved by exploiting the sparseness character of the echo paths. Identification of sparse impulse responses was addressed mainly in the last decade with the development of the so-called ``proportionate''-type algorithms. The goal of this book is to present the most important sparse adaptive filters developed for echo cancellati

  19. Parallel sparse direct solver for integrated circuit simulation

    CERN Document Server

    Chen, Xiaoming; Yang, Huazhong

    2017-01-01

    This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to be high-performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques. · Introduces complicated algorithms of sparse linear solvers, using concise principles and simple examples, without complex theory or lengthy derivations; · Describes a parallel sparse direct solver that can be adopted to accelerate any SPICE-like integrated circuit simulato...

  20. Non-equilibrium physics at a holographic chiral phase transition

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Nick; Kim, Keun-young [Southampton Univ. (United Kingdom). School of Physics and Astronomy; Kavli Institute for Theoretical Physics China, Beijing (China); Kalaydzhyan, Tigran; Kirsch, Ingo [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-11-15

    The D3/D7 system holographically describes an N=2 gauge theory which spontaneously breaks a chiral symmetry by the formation of a quark condensate in the presence of a magnetic field. At finite temperature it displays a first order phase transition. We study out of equilibrium dynamics associated with this transition by placing probe D7 branes in a geometry describing a boost-invariant expanding or contracting plasma. We use an adiabatic approximation to track the evolution of the quark condensate in a heated system and reproduce the phase structure expected from equilibrium dynamics. We then study solutions of the full partial differential equation that describes the evolution of out of equilibrium configurations to provide a complete description of the phase transition including describing aspects of bubble formation. (orig.)

  1. Development of a Thermal Equilibrium Prediction Algorithm

    International Nuclear Information System (INIS)

    Aviles-Ramos, Cuauhtemoc

    2002-01-01

    A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)

  2. Biclustering via Sparse Singular Value Decomposition

    KAUST Repository

    Lee, Mihee

    2010-02-16

    Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets. © 2010, The International Biometric Society.
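
    The following Python sketch illustrates the alternating, soft-thresholded rank-1 decomposition idea described above; it is a simplification with fixed thresholds rather than the adaptive penalties and data-driven parameter selection discussed in the abstract:

        import numpy as np

        def soft(x, lam):
            # Soft-thresholding operator used to sparsify the singular vectors.
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def ssvd_rank1(X, lam_u=0.1, lam_v=0.1, n_iter=100):
            # Simplified rank-1 SSVD sketch: alternate thresholded updates of u and v.
            u0, _, vt = np.linalg.svd(X, full_matrices=False)
            u, v = u0[:, 0], vt[0]
            for _ in range(n_iter):
                u = soft(X @ v, lam_u)
                if np.linalg.norm(u) > 0:
                    u /= np.linalg.norm(u)
                v = soft(X.T @ u, lam_v)
                if np.linalg.norm(v) > 0:
                    v /= np.linalg.norm(v)
            d = float(u @ X @ v)  # scale of the rank-1 "checkerboard" layer d * u * v^T
            return u, d, v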

  3. Evaluating equilibrium and non-equilibrium transport of bromide and isoproturon in disturbed and undisturbed soil columns

    Science.gov (United States)

    Dousset, S.; Thevenot, M.; Pot, V.; Šimunek, J.; Andreux, F.

    2007-12-01

    In this study, displacement experiments of isoproturon were conducted in disturbed and undisturbed columns of a silty clay loam soil under similar rainfall intensities. Solute transport occurred under saturated conditions in the undisturbed soil and under unsaturated conditions in the sieved soil because of a greater bulk density of the compacted undisturbed soil compared to the sieved soil. The objective of this work was to determine transport characteristics of isoproturon relative to bromide tracer. Triplicate column experiments were performed with sieved (structure partially destroyed to simulate conventional tillage) and undisturbed (structure preserved) soils. Bromide experimental breakthrough curves were analyzed using convective-dispersive and dual-permeability (DP) models (HYDRUS-1D). Isoproturon breakthrough curves (BTCs) were analyzed using the DP model that considered either chemical equilibrium or non-equilibrium transport. The DP model described the bromide elution curves of the sieved soil columns well, whereas it overestimated the tailing of the bromide BTCs of the undisturbed soil columns. A higher degree of physical non-equilibrium was found in the undisturbed soil, where 56% of total water was contained in the slow-flow matrix, compared to 26% in the sieved soil. Isoproturon BTCs were best described in both sieved and undisturbed soil columns using the DP model combined with the chemical non-equilibrium. Higher degradation rates were obtained in the transport experiments than in batch studies, for both soils. This was likely caused by hysteresis in sorption of isoproturon. However, it cannot be ruled out that higher degradation rates were due, at least in part, to the adopted first-order model. Results showed that for similar rainfall intensity, physical and chemical non-equilibrium were greater in the saturated undisturbed soil than in the unsaturated sieved soil. Results also suggested faster transport of isoproturon in the undisturbed soil due

  4. Robust Face Recognition Via Gabor Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Hao Yu-Juan

    2016-01-01

    Full Text Available Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results. However, face feature extraction based on sparse representation alone is too simple, and the resulting coefficients are not truly sparse. In this paper, we improve the classification algorithm by fusing sparse representation with Gabor features; the improved Gabor feature extraction overcomes the problem of high feature dimensionality, reduces the computation and storage cost, and enhances the robustness of the algorithm to changes in the environment. Since the classification efficiency of sparse representation is largely determined by the collaborative representation, we simplify the L1-norm sparsity constraint to a least-squares constraint, which makes the coefficients positive and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression and pose variations in face recognition, and that the recognition rate of the algorithm is improved.
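
    A minimal sketch of the least-squares (collaborative representation) coding and class-wise residual rule described above, assuming Gabor features have already been extracted; the positivity of the coefficients mentioned in the abstract is omitted here for brevity:

        import numpy as np

        def crc_classify(X, labels, y, lam=1e-3):
            # X: d x n matrix whose columns are training feature vectors,
            # labels: length-n array of class labels, y: test feature vector.
            # Ridge-regularized least-squares coding replaces the L1 sparse coding.
            alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
            best_class, best_res = None, np.inf
            for c in np.unique(labels):
                mask = labels == c
                res = np.linalg.norm(y - X[:, mask] @ alpha[mask])
                if res < best_res:
                    best_class, best_res = c, res
            return best_class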

  5. Physics of partially ionized plasmas

    CERN Document Server

    Krishan, Vinod

    2016-01-01

    Plasma is one of the four fundamental states of matter; the other three being solid, liquid and gas. Several components, such as molecular clouds, diffuse interstellar gas, the solar atmosphere, the Earth's ionosphere and laboratory plasmas, including fusion plasmas, constitute the partially ionized plasmas. This book discusses different aspects of partially ionized plasmas including multi-fluid description, equilibrium and types of waves. The discussion goes on to cover the reionization phase of the universe, along with a brief description of high discharge plasmas, tokamak plasmas and laser plasmas. Various elastic and inelastic collisions amongst the three particle species are also presented. In addition, the author demonstrates the novelty of partially ionized plasmas using many examples; for instance, in partially ionized plasma the magnetic induction is subjected to the ambipolar diffusion and the Hall effect, as well as the usual resistive dissipation. Also included is an observation of kinematic dynam...

  6. Sparse Learning with Stochastic Composite Optimization.

    Science.gov (United States)

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

    In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high probability bounds can only attain O(√{log(1/δ)/T}), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel powerful sparse online-to-batch conversion to the general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experiment results show that our methods can really outperform the existing methods in the ability of sparse learning, and at the same time we can improve the high probability bound to approximately O(log(log(T)/δ)/λT).

  7. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....

  8. Comments on equilibrium, transient equilibrium, and secular equilibrium in serial radioactive decay

    International Nuclear Information System (INIS)

    Prince, J.R.

    1979-01-01

    Equations describing serial radioactive decay are reviewed along with published descriptions of transient and secular equilibrium. It is shown that terms describing equilibrium are not used in the same way by various authors. Specific definitions are proposed; they suggest that secular equilibrium is a subset of transient equilibrium

  9. Partial Pressures of Te2 and Thermodynamic Properties of Ga-Te System

    Science.gov (United States)

    Su, Ching-Hua; Curreri, Peter A. (Technical Monitor)

    2001-01-01

    The partial pressures of Te2 in equilibrium with Ga(1-x)Te(x) samples were measured by an optical absorption technique from 450 to 1100 °C for compositions, x, between 0.333 and 0.612. To establish the relationship between the partial pressure of Te2 and the measured optical absorbance, calibration runs with a pure Te sample were also conducted to determine the Beer's Law constants. The partial pressures of Te2 in equilibrium with the GaTe(s) and Ga2Te3(s) compounds, or the so-called three-phase curves, were established. These partial pressure data imply the existence of the Ga3Te4(s) compound. From the partial pressures of Te2 over the Ga-Te melts, the partial molar enthalpy and entropy of mixing for Te were derived and they agree reasonably well with the published data. The activities of Te in the Ga-Te melts were also derived from the measured partial pressures of Te2. These data agree well with most of the previous results. The possible reason for the high activity of Te measured for x less than 0.60 is discussed.
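
    As background (standard thermodynamics, not taken from the abstract): with the equilibrium Te(in melt) = 1/2 Te2(g), the Te activity and partial molar Gibbs energy of mixing follow from the measured pressures as

        a_{Te} = \left( \frac{p_{Te_2}}{p_{Te_2}^{\circ}} \right)^{1/2}, \qquad \Delta \bar{G}_{Te} = RT \ln a_{Te} = \tfrac{1}{2} RT \ln\!\left( \frac{p_{Te_2}}{p_{Te_2}^{\circ}} \right),

    where p°_{Te2} denotes the equilibrium pressure over pure Te at the same temperature.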

  10. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  11. A sparse version of IGA solvers

    KAUST Repository

    Beck, Joakim; Sangalli, Giancarlo; Tamellini, Lorenzo

    2017-01-01

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  12. A sparse version of IGA solvers

    KAUST Repository

    Beck, Joakim

    2017-07-30

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  13. Language Recognition via Sparse Coding

    Science.gov (United States)

    2016-09-08

    explanation is that sparse coding can achieve a near-optimal approximation of a much more complicated nonlinear relationship through local and piecewise linear...training examples, where x(i) ∈ RN is the ith example in the batch. Optionally, X can be normalized and whitened before sparse coding for better results...normalized input vectors are then ZCA-whitened [20]. Empirically, we choose ZCA-whitening over PCA-whitening, and there is no dimensionality reduction

  14. Equilibrium and stability properties of relativistic electron rings and E-layers

    International Nuclear Information System (INIS)

    Uhm, H.

    1976-01-01

    Equilibrium and stability properties of a magnetically confined, partially neutralized thin electron ring and E-layer are investigated using the Vlasov-Maxwell equations. The analysis is carried out within the context of the assumption that the minor dimensions (a,b) of the system are much less than the collisionless skin depth (c/ω̄_p). The equilibrium configuration of the E-layer is assumed to be an infinitely long, azimuthally symmetric hollow electron beam which is aligned parallel to a uniform axial magnetic field. On the other hand, the electron ring is located at the midplane of an externally imposed mirror field which acts to confine the ring both axially and radially. The equilibrium properties of the E-layer and electron ring are obtained self-consistently for several choices of equilibrium electron distribution function. The negative-mass instability analysis is carried out for the relativistic E-layer equilibrium in which all of the electrons have the same transverse energy and a spread in canonical angular momentum, assuming a fixed ion background. The ion resonance instability properties are investigated for a relativistic nonneutral E-layer aligned parallel to a uniform magnetic field and located between two grounded coaxial cylindrical conductors. The stability properties of a nonrelativistic electron ring are investigated within the framework of the linearized Vlasov-Poisson equations. The dispersion relation is obtained for the self-consistent electron distribution function in which all electrons have the same value of energy and the same value of canonical angular momentum. The positive ions in the electron ring are assumed to form an immobile partially neutralizing background. The stability criteria as well as the instability growth rates are derived and discussed including the effect of geometrical configuration of the system. Equilibrium space-charge effects play a significant role in stability behavior

  15. Sparse seismic imaging using variable projection

    NARCIS (Netherlands)

    Aravkin, Aleksandr Y.; Tu, Ning; van Leeuwen, Tristan

    2013-01-01

    We consider an important class of signal processing problems where the signal of interest is known to be sparse, and can be recovered from data given auxiliary information about how the data was generated. For example, a sparse Green's function may be recovered from seismic experimental data using

  16. Tunable Sparse Network Coding for Multicast Networks

    DEFF Research Database (Denmark)

    Feizi, Soheil; Roetter, Daniel Enrique Lucani; Sørensen, Chres Wiant

    2014-01-01

    This paper shows the potential and key enabling mechanisms for tunable sparse network coding, a scheme in which the density of network coded packets varies during a transmission session. At the beginning of a transmission session, sparsely coded packets are transmitted, which benefits decoding...... complexity. At the end of a transmission, when receivers have accumulated degrees of freedom, coding density is increased. We propose a family of tunable sparse network codes (TSNCs) for multicast erasure networks with a controllable trade-off between completion time performance and decoding complexity...... a mechanism to perform efficient Gaussian elimination over sparse matrices going beyond belief propagation but maintaining low decoding complexity. Supporting simulation results are provided showing the trade-off between decoding complexity and completion time....

  17. Sparse PCA with Oracle Property.

    Science.gov (United States)

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a [Formula: see text] statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.

  18. Structural Sparse Tracking

    KAUST Repository

    Zhang, Tianzhu; Yang, Ming-Hsuan; Ahuja, Narendra; Ghanem, Bernard; Yan, Shuicheng; Xu, Changsheng; Liu, Si

    2015-01-01

    candidate. We show that our SST algorithm accommodates most existing sparse trackers with the respective merits. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed SST algorithm performs

  19. Technique detection software for Sparse Matrices

    Directory of Open Access Journals (Sweden)

    KHAN Muhammad Taimoor

    2009-12-01

    Full Text Available Sparse storage formats are techniques for storing and processing the sparse matrix data efficiently. The performance of these storage formats depends upon the distribution of non-zeros within the matrix in different dimensions. In order to have better results we need a technique that best suits the organization of data in a particular matrix. So the decision of selecting a better technique is the main step towards improving the system's results; otherwise the efficiency can decrease. The purpose of this research is to help identify the best storage format, in terms of reduced storage size and high processing efficiency, for a sparse matrix.
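
    For concreteness, one of the storage formats in question, compressed sparse row (CSR), can be built as in the following minimal sketch (illustrative only; the software described above selects among several such formats based on the non-zero distribution):

        def dense_to_csr(A):
            # CSR keeps the non-zero values, their column indices, and row pointers.
            data, indices, indptr = [], [], [0]
            for row in A:
                for j, v in enumerate(row):
                    if v != 0:
                        data.append(v)
                        indices.append(j)
                indptr.append(len(data))
            return data, indices, indptr

        # Example: dense_to_csr([[1, 0, 2], [0, 0, 3]])
        # -> data=[1, 2, 3], indices=[0, 2, 2], indptr=[0, 2, 3]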

  20. Sparse Representations of Hyperspectral Images

    KAUST Repository

    Swanson, Robin J.

    2015-11-23

    Hyperspectral image data has long been an important tool for many areas of science. The addition of spectral data yields significant improvements in areas such as object and image classification, chemical and mineral composition detection, and astronomy. Traditional capture methods for hyperspectral data often require each wavelength to be captured individually, or by sacrificing spatial resolution. Recently there have been significant improvements in snapshot hyperspectral captures using, in particular, compressed sensing methods. As we move to a compressed sensing image formation model, the need for strong image priors to shape our reconstruction, as well as for sparse bases, becomes more important. Here we compare several methods for representing hyperspectral images including learned three-dimensional dictionaries, sparse convolutional coding, and decomposable nonlocal tensor dictionaries. Additionally, we further explore their parameter space to identify which parameters provide the most faithful and sparse representations.

  1. Sparse Representations of Hyperspectral Images

    KAUST Repository

    Swanson, Robin J.

    2015-01-01

    Hyperspectral image data has long been an important tool for many areas of science. The addition of spectral data yields significant improvements in areas such as object and image classification, chemical and mineral composition detection, and astronomy. Traditional capture methods for hyperspectral data often require each wavelength to be captured individually, or by sacrificing spatial resolution. Recently there have been significant improvements in snapshot hyperspectral captures using, in particular, compressed sensing methods. As we move to a compressed sensing image formation model, the need for strong image priors to shape our reconstruction, as well as for sparse bases, becomes more important. Here we compare several methods for representing hyperspectral images including learned three-dimensional dictionaries, sparse convolutional coding, and decomposable nonlocal tensor dictionaries. Additionally, we further explore their parameter space to identify which parameters provide the most faithful and sparse representations.

  2. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  3. Structure-aware Local Sparse Coding for Visual Tracking

    KAUST Repository

    Qi, Yuankai

    2018-01-24

    Sparse coding has been applied to visual tracking and related vision problems with demonstrated success in recent years. Existing tracking methods based on local sparse coding sample patches from a target candidate and sparsely encode these using a dictionary consisting of patches sampled from target template images. The discriminative strength of existing methods based on local sparse coding is limited as spatial structure constraints among the template patches are not exploited. To address this problem, we propose a structure-aware local sparse coding algorithm which encodes a target candidate using templates with both global and local sparsity constraints. For robust tracking, we show that local regions of a candidate region should be encoded only with the corresponding local regions of the target templates that are the most similar from the global view. Thus, a more precise and discriminative sparse representation is obtained to account for appearance changes. To alleviate the issues with tracking drifts, we design an effective template update scheme. Extensive experiments on challenging image sequences demonstrate the effectiveness of the proposed algorithm against numerous state-of-the-art methods.

  4. The equilibrium crystal shape of nickel

    International Nuclear Information System (INIS)

    Meltzman, Hila; Chatain, Dominique; Avizemer, Dan; Besmann, Theodore M.; Kaplan, Wayne D.

    2011-01-01

    Highlights: → The ECS of pure Ni is completely facetted with both dense and high-index planes. → The partial pressure of oxygen has a significant effect on the surface anisotropy. → The addition of Fe decreased the anisotropy and de-stabilized high-index planes. → During solid dewetting nucleation barriers prevent equilibration of the top facet. - Abstract: The crystal shape of Ni particles, dewetted in the solid state on sapphire substrates, was examined as a function of the partial pressure of oxygen (P(O2)) and iron content using scanning and transmission electron microscopy. The chemical composition of the surface was characterized by atom-probe tomography. Unlike other face-centered cubic (fcc) equilibrium crystal shapes, the Ni crystals containing little or no impurities exhibited a faceted shape, indicating large surface anisotropy. In addition to the {1 1 1}, {1 0 0} and {1 1 0} facets, which are usually present in the equilibrium crystal shape of fcc metals, high-index facets were identified such as {1 3 5} and {1 3 8} at low P(O2), and {0 1 2} and {0 1 3} at higher P(O2). The presence of iron altered the crystal shape into a truncated sphere with only facets parallel to denser planes. The issue of particle equilibration is discussed specifically for the case of solid-state dewetting.

  5. Sparse Frequency Waveform Design for Radar-Embedded Communication

    Directory of Open Access Journals (Sweden)

    Chaoyun Mai

    2016-01-01

    Full Text Available Motivated by tag applications that require covert communication, a method for sparse frequency waveform design based on radar-embedded communication is proposed. Firstly, sparse frequency waveforms are designed based on power spectral density fitting and a quasi-Newton method. Secondly, the eigenvalue decomposition of the sparse frequency waveform sequence is used to obtain the dominant subspace. Finally, the communication waveforms are designed by projecting orthogonal pseudorandom vectors onto the subspace orthogonal to the dominant one. Compared with the linear frequency modulation waveform, the sparse frequency waveform can further improve the bandwidth occupation of communication signals, thus achieving a higher communication rate. A certain correlation exists between the mutually orthogonal communication signal samples and the sparse frequency waveform, which guarantees a low SER (signal error rate) and LPI (low probability of intercept). The simulation results verify the effectiveness of this method.

  6. Massive Asynchronous Parallelization of Sparse Matrix Factorizations

    Energy Technology Data Exchange (ETDEWEB)

    Chow, Edmond [Georgia Inst. of Technology, Atlanta, GA (United States)

    2018-01-08

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then solving these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the factorization that is desired.
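
    The abstract does not name a specific factorization; for an incomplete LU factorization, one common target of this approach, the bilinear constraint equations take the form (with S the prescribed sparsity pattern and the convention \ell_{ii} = 1)

        \sum_{k=1}^{\min(i,j)} \ell_{ik}\, u_{kj} \;=\; a_{ij}, \qquad (i,j) \in S,

    and each unknown \ell_{ij} or u_{ij} can be updated from the current values of the others, so fixed-point sweeps over all (i,j) in S can proceed asynchronously across many cores.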

  7. Physico-chemical investigation of cement carbonation in aqueous solution in equilibrium with calcite and with a controlled CO2 partial pressure at 25 and 50 deg. C

    International Nuclear Information System (INIS)

    Chomat, Laure; Trepy, Nadia; Le Bescop, Patrick; Dauzeres, Alexandre; Monguillon, Corinne

    2012-01-01

    In the framework of radioactive waste geological disposal, structural concretes have to be adapted to underground chemical conditions. For concrete in water saturated medium, it is believed that carbonation will have a major impact on the interaction between concrete and the geological medium. So, to understand the complex degradation of the cement paste in that context, it is interesting to study a simplified system such as degradation in carbonated water solution. This solution must be at equilibrium with a CO2 partial pressure 30 times higher than the atmospheric pCO2, to reproduce underground natural conditions of Callovo-Oxfordian clayey rock of Bure (France). In this study, the behaviour of a new low pH material (CEM I + silica fume + fly ashes) is compared with a CEM I cement paste, both of them being submitted to carbonation in aqueous solution in equilibrium with calcite and with a pCO2 equal to 1.32 kPa (1.3 × 10^-2 atm). Two different temperatures, 25 and 50 °C, are considered. To realize these experiments, two different original types of devices were developed

  8. Development of chemical equilibrium analysis code 'CHEEQ'

    International Nuclear Information System (INIS)

    Nagai, Shuichiro

    2006-08-01

    The 'CHEEQ' code, which calculates the partial pressures and the masses of a system consisting of ideal gases and pure condensed-phase compounds, was developed. The characteristics of the 'CHEEQ' code are as follows. All the chemical equilibrium equations were described by the formation reactions from the mono-atomic gases in order to simplify the code structure and input preparation. The chemical equilibrium conditions Σνᵢμᵢ = 0 for the gaseous compounds and precipitated condensed-phase compounds, and Σνᵢμᵢ > 0 for the non-precipitated condensed-phase compounds, were applied, where νᵢ and μᵢ are the stoichiometric coefficient and chemical potential of component i. A virtual solid model was introduced to perform calculations under a constant partial pressure condition. 'CHEEQ' consists of the following 3 parts: (1) the analysis code, zc132.f; (2) the thermodynamic data base, zmdb01; and (3) the input data file, zindb. The 'CHEEQ' code can calculate systems consisting of elements (max. 20), condensed-phase compounds (max. 100) and gaseous compounds (max. 200). The thermodynamic data base zmdb01 contains about 1000 elements and compounds, and 200 of them are actinide elements and their compounds. This report describes the basic equations, the outline of the solution procedure, and instructions to prepare the input data and to evaluate the calculation results. (author)

  9. Non-equilibrium blunt body flows in ionized gases

    International Nuclear Information System (INIS)

    Nishida, Michio

    1981-01-01

    The behaviors of electrons and electronically excited atoms in non-equilibrium, partially ionized blunt-body flows are described. The formulation is made separately in the shock layer and in the free stream, and the free-stream solution is then connected with the shock-layer solution by matching the two solutions at the shock-layer edge. The method of this matching is described here. The partially ionized gas is considered to be composed of neutral atoms, ions and electrons. Furthermore, the neutral atoms are divided into atoms in different excited levels. Therefore, the electron energy spent on excitation and that gained through de-excitation are both taken into account in the electron energy balance. Thus, the electron energy equation including these contributions is solved, coupled with the continuity equations of the excited atoms and the electrons. The electron temperature distribution from the free stream to the blunt-body wall has been investigated for a case in which the electrons are in thermal non-equilibrium with the heavy particles in the free stream. In addition, the distributions of the excited-atom density are discussed in the present analysis. (author)

  10. Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2012-11-29

    The numerical approximation of parametric partial differential equations is a computational challenge, in particular when the number of parameters involved is large. This paper considers a model class of second order, linear, parametric, elliptic PDEs on a bounded domain D with diffusion coefficients depending on the parameters in an affine manner. For such models, it was shown in [9, 10] that under very weak assumptions on the diffusion coefficients, the entire family of solutions to such equations can be simultaneously approximated in the Hilbert space V = H_0^1(D) by multivariate sparse polynomials in the parameter vector y with a controlled number N of terms. The convergence rate in terms of N does not depend on the number of parameters, which may be arbitrarily large or countably infinite, thereby breaking the curse of dimensionality. However, these approximation results do not describe the concrete construction of these polynomial expansions, and should therefore rather be viewed as a benchmark for the convergence analysis of numerical methods. The present paper presents an adaptive numerical algorithm for constructing a sequence of sparse polynomials that is proved to converge toward the solution with the optimal benchmark rate. Numerical experiments are presented in large parameter dimension, which confirm the effectiveness of the adaptive approach. © 2012 EDP Sciences, SMAI.

  11. Storage of sparse files using parallel log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-11-07

    A sparse file is stored without holes by storing a data portion of the sparse file using a parallel log-structured file system; and generating an index entry for the data portion, the index entry comprising a logical offset, physical offset and length of the data portion. The holes can be restored to the sparse file upon a reading of the sparse file. The data portion can be stored at a logical end of the sparse file. Additional storage efficiency can optionally be achieved by (i) detecting a write pattern for a plurality of the data portions and generating a single patterned index entry for the plurality of the patterned data portions; and/or (ii) storing the patterned index entries for a plurality of the sparse files in a single directory, wherein each entry in the single directory comprises an identifier of a corresponding sparse file.
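
    A small sketch of the bookkeeping described in this record (an index entry holding a logical offset, a physical offset and a length, with holes re-created as zeros when the sparse file is read back) is given below; the class and method names are illustrative and are not the actual PLFS data structures.

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class IndexEntry:
        logical_offset: int    # position of the data portion in the sparse logical file
        physical_offset: int   # position of the data portion in the append-only log
        length: int

    class SparseFileLog:
        """Store only data portions; restore holes as zero bytes on read."""

        def __init__(self) -> None:
            self.log = bytearray()            # log-structured backing store (append only)
            self.index: List[IndexEntry] = []

        def write(self, logical_offset: int, data: bytes) -> None:
            # The data portion is appended at the logical end of the log.
            self.index.append(IndexEntry(logical_offset, len(self.log), len(data)))
            self.log.extend(data)

        def read(self, size: int) -> bytes:
            buf = bytearray(size)             # unwritten ranges (holes) stay zero
            for e in self.index:
                end = min(e.logical_offset + e.length, size)
                if end > e.logical_offset:
                    n = end - e.logical_offset
                    buf[e.logical_offset:end] = self.log[e.physical_offset:e.physical_offset + n]
            return bytes(buf)

    f = SparseFileLog()
    f.write(0, b"header")
    f.write(4096, b"payload")                 # leaves a hole between offsets 6 and 4096
    print(len(f.read(5000)), f.read(5000)[4096:4103])
    ```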

  12. Computing diffusivities from particle models out of equilibrium

    Science.gov (United States)

    Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia

    2018-04-01

    A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.
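
    The fluctuation-based estimator of the paper is not reproduced here; for the simplest of its three test cases (independent random walkers, whose macroscopic diffusivity is a constant D), the sketch below merely checks the textbook relation MSD(t) = 2Dt in one dimension. The walker count, step size and true D are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_walkers, n_steps, dt = 20_000, 400, 1e-3
    D_true = 0.5

    # Independent 1-D Brownian walkers: dx = sqrt(2 D dt) * N(0, 1) per step.
    steps = np.sqrt(2.0 * D_true * dt) * rng.standard_normal((n_steps, n_walkers))
    x = np.cumsum(steps, axis=0)

    # For Brownian motion MSD(t) = 2 D t, so D can be read off at the final time.
    t_final = n_steps * dt
    D_est = np.mean(x[-1] ** 2) / (2.0 * t_final)
    print(f"true D = {D_true}, estimated D = {D_est:.4f}")
    ```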

  13. Sparse reconstruction using distribution agnostic bayesian matching pursuit

    KAUST Repository

    Masood, Mudassir

    2013-11-01

    A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery. This method performs Bayesian estimates of sparse signals even when the signal prior is non-Gaussian or unknown. It is agnostic on signal statistics and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. The method utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean-square error (MMSE) estimate of the sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator. © 2013 IEEE.
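
    The Bayesian metrics and the noise/sparsity-rate estimation of this method are not reproduced here; the sketch below only shows the greedy, support-by-support structure that such recovery shares with plain orthogonal matching pursuit, on synthetic data with arbitrary sizes.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Plain orthogonal matching pursuit: greedily select k columns of A."""
        support, r = [], y.copy()
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ r)))        # column most correlated with residual
            if j not in support:
                support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ x_s                # update residual on current support
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(0)
    m, n, k = 40, 120, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    x_hat = omp(A, y, k)
    print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
    ```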

  14. Thermal non-equilibrium in porous medium adjacent to vertical plate: ANN approach

    Science.gov (United States)

    Ahmed, N. J. Salman; Ahamed, K. S. Nazim; Al-Rashed, Abdullah A. A. A.; Kamangar, Sarfaraz; Athani, Abdulgaphur

    2018-05-01

    Thermal non-equilibrium in a porous medium refers to a temperature discrepancy between the solid matrix and the fluid of the porous medium. This type of flow is complex, requiring a complex set of partial differential equations to govern the flow behavior. The current work is undertaken to predict the thermal non-equilibrium behavior of a porous medium adjacent to a vertical plate using an artificial neural network. A set of neurons in three layers is trained to predict the heat transfer characteristics. It is found that the thermal non-equilibrium heat transfer behavior, in terms of the Nusselt number of the fluid as well as of the solid phase, can be predicted accurately by a well-trained neural network.

  15. Image understanding using sparse representations

    CERN Document Server

    Thiagarajan, Jayaraman J; Turaga, Pavan; Spanias, Andreas

    2014-01-01

    Image understanding has been playing an increasingly crucial role in several inverse problems and computer vision. Sparse models form an important component in image understanding, since they emulate the activity of neural receptors in the primary visual cortex of the human brain. Sparse methods have been utilized in several learning problems because of their ability to provide parsimonious, interpretable, and efficient models. Exploiting the sparsity of natural signals has led to advances in several application areas including image compression, denoising, inpainting, compressed sensing, blin

  16. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
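
    SpaRSA itself is not reproduced in this record; the sketch below solves the same kind of l1-regularized model, min ||y - Dc||_2^2 + lambda*||c||_1, with plain iterative soft-thresholding (ISTA), using a random matrix as a stand-in for the product of the transfer function and the dictionary.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(D, y, lam, n_iter=500):
        """Minimize ||y - D c||_2^2 + lam * ||c||_1 by iterative soft-thresholding."""
        L = np.linalg.norm(D, 2) ** 2              # squared spectral norm of D
        c = np.zeros(D.shape[1])
        for _ in range(n_iter):
            c = soft_threshold(c + D.T @ (y - D @ c) / L, lam / (2.0 * L))
        return c

    rng = np.random.default_rng(3)
    m, n = 200, 400
    D = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for transfer function x dictionary
    c_true = np.zeros(n)
    c_true[rng.choice(n, 6, replace=False)] = rng.uniform(1.0, 3.0, 6)  # sparse force coefficients
    y = D @ c_true + 0.01 * rng.standard_normal(m)

    c_hat = ista(D, y, lam=0.05)
    print("nonzero coefficients found:", int(np.count_nonzero(np.abs(c_hat) > 1e-3)))
    ```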

  17. Regularized Partial Least Squares with an Application to NMR Spectroscopy

    OpenAIRE

    Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletic-Savatic, Mirjana

    2012-01-01

    High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexi...

  18. Sparse inpainting and isotropy

    Energy Technology Data Exchange (ETDEWEB)

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V. [Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT (United Kingdom); Marinucci, Domenico; Cammarota, Valentina [Department of Mathematics, University of Rome Tor Vergata, via della Ricerca Scientifica 1, Roma, 00133 (Italy); Wandelt, Benjamin D., E-mail: s.feeney@imperial.ac.uk, E-mail: marinucc@axp.mat.uniroma2.it, E-mail: jason.mcewen@ucl.ac.uk, E-mail: h.peiris@ucl.ac.uk, E-mail: wandelt@iap.fr, E-mail: cammarot@axp.mat.uniroma2.it [Kavli Institute for Theoretical Physics, Kohn Hall, University of California, 552 University Road, Santa Barbara, CA, 93106 (United States)

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  19. Object tracking by occlusion detection via structured sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2013-06-01

    Sparse representation based methods have recently drawn much attention in visual tracking due to good performance against illumination variation and occlusion. They assume the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios these errors are not truly pixel-wise sparse but rather sparsely distributed in a structured way. In fact, pixels in error constitute contiguous regions within the object's track. This is the case when significant occlusion occurs. To accommodate for non-sparse occlusion in a given frame, we assume that occlusion detected in previous frames can be propagated to the current one. This propagated information determines which pixels will contribute to the sparse representation of the current track. In other words, pixels that were detected as part of an occlusion in the previous frame will be removed from the target representation process. As such, this paper proposes a novel tracking algorithm that models and detects occlusion through structured sparse learning. We test our tracker on challenging benchmark sequences, such as sports videos, which involve heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our tracker consistently outperforms the state-of-the-art. © 2013 IEEE.

  20. A field-theoretic approach to non-equilibrium work identities

    International Nuclear Information System (INIS)

    Mallick, Kirone; Orland, Henri; Moshe, Moshe

    2011-01-01

    We study non-equilibrium work relations for a space-dependent field with stochastic dynamics (model A). Jarzynski's equality is obtained through symmetries of the dynamical action in the path-integral representation. We derive a set of exact identities that generalize the fluctuation-dissipation relations to non-stationary and far-from-equilibrium situations. These identities are amenable to experimental verification. Furthermore, we show that a well-studied invariance of the Langevin equation under supersymmetry, which is known to be broken when the external potential is time dependent, can be partially restored by adding to the action a term which is precisely Jarzynski's work. The work identities can then be retrieved as consequences of the associated Ward-Takahashi identities.

  1. Thermodynamic quantities and defect equilibrium in La2-xSrxNiO4+δ

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Yashiro, Keiji; Sato, Kazuhisa; Mizusaki, Junichiro

    2009-01-01

    In order to elucidate the relation between the thermodynamic quantities, the defect structure, and the defect equilibrium in La2-xSrxNiO4+δ, a statistical thermodynamic calculation is carried out and the calculated results are compared to those obtained from experimental data. The partial molar enthalpy of oxygen and the partial molar entropy of oxygen are obtained from the δ-P(O2)-T relation by using the Gibbs-Helmholtz equation. The statistical thermodynamic model is derived from the defect equilibrium models proposed previously by the authors, a localized electron model and a delocalized electron model, which could well explain the variation of the oxygen content of La2-xSrxNiO4+δ. Although the assumed defect species and their equilibria are different, the results of the thermodynamic calculations by the localized electron model and the delocalized electron model show only minor differences. The calculated results from both models agree with the thermodynamic quantities obtained from the oxygen nonstoichiometry of La2-xSrxNiO4+δ. - Graphical abstract: In order to elucidate the relation between the thermodynamic quantities, the defect structure, and the defect equilibrium in La2-xSrxNiO4+δ, a statistical thermodynamic calculation is carried out and the calculated results are compared to those obtained from experimental data.

  2. Sparse Vector Distributions and Recovery from Compressed Sensing

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the extent of these effects has yet to be explored, and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgement of the recovery performance of one algorithm over that of another must be prefaced by the conditions for which this is observed to be true, including sparse vector distributions, and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.

  3. Equilibrium p(CO) measurements over V-C-O system

    International Nuclear Information System (INIS)

    Sayi, Y.S.; Khan, M.I.; Radhakrishna, J.; Shankaran, P.S.; Yadav, C.S.; Chhapru, G.C.; Shukla, N.K.; Prasad, R.; Sood, D.D.

    1986-01-01

    The equilibrium partial pressure of CO over the hyperstoichiometric U-C-O system has been measured in the temperature range 1300-1700 °C. The slope of the curve log p(CO) vs. 1/T(K) changes at about 1450 °C, indicating some change in the reaction involved in the formation of CO. The enthalpy changes for the possible reactions are also determined. (author)

  4. Exhaustive Search for Sparse Variable Selection in Linear Regression

    Science.gov (United States)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
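
    The replica-exchange machinery of AES-K is not sketched here; the ES-K idea itself, scoring every K-sparse combination of explanatory variables by least squares, can be illustrated directly on synthetic data (the data, K and noise level below are arbitrary).

    ```python
    import itertools
    import numpy as np

    def es_k(X, y, k):
        """Exhaustively score every K-sparse combination of columns by least squares."""
        best_rss, best_combo = np.inf, None
        for combo in itertools.combinations(range(X.shape[1]), k):
            coef, *_ = np.linalg.lstsq(X[:, combo], y, rcond=None)
            rss = float(np.sum((y - X[:, combo] @ coef) ** 2))
            if rss < best_rss:
                best_rss, best_combo = rss, combo
        return best_rss, best_combo

    rng = np.random.default_rng(7)
    n_samples, n_features, k = 60, 12, 3
    X = rng.standard_normal((n_samples, n_features))
    true_vars = (1, 4, 9)
    y = X[:, true_vars] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.standard_normal(n_samples)

    rss, combo = es_k(X, y, k)
    print("selected variables:", combo, " RSS:", round(rss, 3))
    ```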

  5. Equilibrium Droplets on Deformable Substrates: Equilibrium Conditions.

    Science.gov (United States)

    Koursari, Nektaria; Ahmed, Gulraiz; Starov, Victor M

    2018-05-15

    Equilibrium conditions of droplets on deformable substrates are investigated, and it is proven using Jacobi's sufficient condition that the obtained solutions really provide equilibrium profiles of both the droplet and the deformed support. At equilibrium, the excess free energy of the system should have a minimum value, which means that both necessary and sufficient conditions of the minimum should be fulfilled. Only in this case do the obtained profiles provide the minimum of the excess free energy. The necessary condition of the equilibrium means that the first variation of the excess free energy should vanish, and the second variation should be positive. Unfortunately, these two conditions alone do not prove that the obtained profiles correspond to the minimum of the excess free energy, and they cannot. It is necessary to check whether the sufficient condition of the equilibrium (Jacobi's condition) is satisfied. To the best of our knowledge, Jacobi's condition has never been verified for any previously published equilibrium profiles of both the droplet and the deformable substrate. A simple model of the equilibrium droplet on the deformable substrate is considered, and it is shown that the deduced profiles of the equilibrium droplet and deformable substrate satisfy Jacobi's condition, that is, really provide the minimum of the excess free energy of the system. To simplify the calculations, a simplified linear disjoining/conjoining pressure isotherm is adopted. It is shown that both necessary and sufficient conditions for equilibrium are satisfied. For the first time, the validity of Jacobi's condition is verified. The latter proves that the developed model really provides (i) the minimum of the excess free energy of the system droplet/deformable substrate and (ii) equilibrium profiles of both the droplet and the deformable substrate.

  6. A Sparse Approximate Inverse Preconditioner for Nonsymmetric Linear Systems

    Czech Academy of Sciences Publication Activity Database

    Benzi, M.; Tůma, Miroslav

    1998-01-01

    Roč. 19, č. 3 (1998), s. 968-994 ISSN 1064-8275 R&D Projects: GA ČR GA201/93/0067; GA AV ČR IAA230401 Keywords: large sparse systems * iterative methods * preconditioning * approximate inverse * sparse linear systems * sparse matrices * incomplete factorizations * conjugate gradient-type methods Subject RIV: BA - General Mathematics Impact factor: 1.378, year: 1998

  7. Sparse-matrix factorizations for fast symmetric Fourier transforms

    International Nuclear Information System (INIS)

    Sequel, J.

    1987-01-01

    This work proposes new fast algorithms for computing the discrete Fourier transform of certain families of symmetric sequences. Sequences commonly found in problems of structure determination by x-ray crystallography and in numerical solutions of boundary-value problems in partial differential equations are dealt with. In the algorithms presented, the redundancies in the input and output data, due to the presence of symmetries in the input data sequence, were eliminated. Using ring-theoretical methods, a matrix representation is obtained for the remaining calculations, which factors as the product of a complex block-diagonal matrix and an integral matrix. A basic two-step algorithm scheme arises from this factorization, with a first step consisting of pre-additions and a second step containing the calculations involved in computing with the blocks of the block-diagonal factor. These blocks are structured as block-Hankel matrices, and two sparse-matrix factoring formulas are developed in order to diminish their arithmetic complexity.

  8. Iterative solution of large sparse systems of equations

    CERN Document Server

    Hackbusch, Wolfgang

    2016-01-01

    In the second edition of this classic monograph, complete with four new chapters and updated references, readers will now have access to content describing and analysing classical and modern methods with emphasis on the algebraic structure of linear iteration, which is usually ignored in other literature. The necessary amount of work increases dramatically with the size of systems, so one has to search for algorithms that most efficiently and accurately solve systems of, e.g., several million equations. The choice of algorithms depends on the special properties the matrices in practice have. An important class of large systems arises from the discretization of partial differential equations. In this case, the matrices are sparse (i.e., they contain mostly zeroes) and well-suited to iterative algorithms. The first edition of this book grew out of a series of lectures given by the author at the Christian-Albrecht University of Kiel to students of mathematics. The second edition includes quite novel approaches.

  9. Structure-based bayesian sparse reconstruction

    KAUST Repository

    Quadeer, Ahmed Abdul

    2012-12-01

    Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical information (Gaussian or otherwise) to obtain near optimal estimates. In addition, we make use of the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast sparse recovery algorithm. The computational complexity of the proposed algorithm is very low compared with the widely used convex relaxation methods as well as greedy matching pursuit techniques, especially at high sparsity. © 1991-2012 IEEE.

  10. Generalizations of the Nash Equilibrium Theorem in the KKM Theory

    Directory of Open Access Journals (Sweden)

    Sehie Park

    2010-01-01

    The partial KKM principle for an abstract convex space is an abstract form of the classical KKM theorem. In this paper, we derive generalized forms of the Ky Fan minimax inequality, the von Neumann-Sion minimax theorem, the von Neumann-Fan intersection theorem, the Fan-type analytic alternative, and the Nash equilibrium theorem for abstract convex spaces satisfying the partial KKM principle. These results are compared with previously known cases for G-convex spaces. Consequently, our results unify and generalize most of the previously known particular cases of the same nature. Finally, we add some detailed historical remarks on related topics.

  11. Thermodynamic chemical energy transfer mechanisms of non-equilibrium, quasi-equilibrium, and equilibrium chemical reactions

    International Nuclear Information System (INIS)

    Roh, Heui-Seol

    2015-01-01

    Chemical energy transfer mechanisms at finite temperature are explored by a chemical energy transfer theory which is capable of investigating various chemical mechanisms of non-equilibrium, quasi-equilibrium, and equilibrium. Gibbs energy fluxes are obtained as a function of chemical potential, time, and displacement. Diffusion, convection, internal convection, and internal equilibrium chemical energy fluxes are demonstrated. The theory reveals that there are chemical energy flux gaps and broken discrete symmetries at the activation chemical potential, time, and displacement. The statistical, thermodynamic theory is the unification of diffusion and internal convection chemical reactions which reduces to the non-equilibrium generalization beyond the quasi-equilibrium theories of migration and diffusion processes. The relationship between kinetic theories of chemical and electrochemical reactions is also explored. The theory is applied to explore non-equilibrium chemical reactions as an illustration. Three variable separation constants indicate particle number constants and play key roles in describing the distinct chemical reaction mechanisms. The kinetics of chemical energy transfer accounts for the four control mechanisms of chemical reactions such as activation, concentration, transition, and film chemical reactions. - Highlights: • Chemical energy transfer theory is proposed for non-, quasi-, and equilibrium. • Gibbs energy fluxes are expressed by chemical potential, time, and displacement. • Relationship between chemical and electrochemical reactions is discussed. • Theory is applied to explore nonequilibrium energy transfer in chemical reactions. • Kinetics of non-equilibrium chemical reactions shows the four control mechanisms

  12. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved ... solutions. Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification application may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection...

  13. A sparse matrix based full-configuration interaction algorithm

    International Nuclear Information System (INIS)

    Rolik, Zoltan; Szabados, Agnes; Surjan, Peter R.

    2008-01-01

    We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. The main achievements of the presented sparse FCI (SFCI) algorithm are (i) the development of an iteration procedure that avoids the storage of FCI-size vectors and (ii) the development of an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations, which may otherwise be a bottleneck of the procedure, can be skipped. For point (ii), we progress by adapting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] to the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of an SFCI calculation depends only on the dropout thresholds for the sparse vectors, and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm permits FCI calculations to be performed on single-node workstations for systems previously accessible only to supercomputers.

  14. Sparse canonical methods for biological data integration: application to a cross-platform study

    Directory of Open Access Journals (Sweden)

    Robert-Granié Christèle

    2009-01-01

    Background In the context of systems biology, few sparse approaches have been proposed so far to integrate several data sets. It is however an important and fundamental issue that will be widely encountered in post genomic studies, when simultaneously analyzing transcriptomics, proteomics and metabolomics data using different platforms, so as to understand the mutual interactions between the different data sets. In this high dimensional setting, variable selection is crucial to give interpretable results. We focus on a sparse Partial Least Squares approach (sPLS) to handle two-block data sets, where the relationship between the two types of variables is known to be symmetric. Sparse PLS has been developed either for a regression or a canonical correlation framework and includes a built-in procedure to select variables while integrating data. To illustrate the canonical mode approach, we analyzed the NCI60 data sets, where two different platforms (cDNA and Affymetrix chips) were used to study the transcriptome of sixty cancer cell lines. Results We compare the results obtained with two other sparse or related canonical correlation approaches: CCA with Elastic Net penalization (CCA-EN) and Co-Inertia Analysis (CIA). The latter does not include a built-in procedure for variable selection and requires a two-step analysis. We stress the lack of statistical criteria to evaluate canonical correlation methods, which makes biological interpretation absolutely necessary to compare the different gene selections. We also propose comprehensive graphical representations of both samples and variables to facilitate the interpretation of the results. Conclusion sPLS and CCA-EN selected highly relevant genes and complementary findings from the two data sets, which enabled a detailed understanding of the molecular characteristics of several groups of cell lines. These two approaches were found to bring similar results, although they highlighted the same

  15. An in-depth study of sparse codes on abnormality detection

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2016-01-01

    Sparse representation has been applied successfully in abnormal event detection, in which the baseline is to learn a dictionary accompanied by sparse codes. While much emphasis is put on discriminative dictionary construction, there are no comparative studies of sparse codes regarding abnormality ... are carried out from various angles to better understand the applicability of sparse codes, including computation time, reconstruction error, sparsity, detection accuracy, and their performance combining various detection methods. The experimental results show that combining OMP codes with maximum coordinate...

  16. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims ... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...

  17. Sparse reconstruction using distribution agnostic bayesian matching pursuit

    KAUST Repository

    Masood, Mudassir; Al-Naffouri, Tareq Y.

    2013-01-01

    A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery. This method performs Bayesian estimates of sparse signals even when the signal prior is non-Gaussian or unknown. It is agnostic on signal statistics

  18. On the vapor-liquid equilibrium in hydroprocessing reactors

    Energy Technology Data Exchange (ETDEWEB)

    Chen, J.; Munteanu, M.; Farooqi, H. [National Centre for Upgrading Technology, Devon, AB (Canada)

    2009-07-01

    When petroleum distillates undergo hydrotreating and hydrocracking, the feedstock and hydrogen pass through trickle-bed catalytic reactors at high temperatures and pressures with large hydrogen flow. As such, the oil is partially vaporized and the hydrogen is partially dissolved in liquid to form a vapor-liquid equilibrium (VLE) system with both vapor and liquid phases containing oil and hydrogen. This may result in considerable changes in flow rates, physical properties and chemical compositions of both phases. Flow dynamics, mass transfer, heat transfer and reaction kinetics may also be modified. Experimental observations of VLE behaviours in distillates with different feedstocks under a range of operating conditions were presented. In addition, VLE was predicted along with its effects on distillates in pilot and commercial scale plants. tabs., figs.

  19. User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.

    Science.gov (United States)

    Reddy, C. J.

    2000-01-01

    PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse matrix linear equations. PCSMS converts complex matrices into real matrices and uses real, sparse direct matrix solvers to factor and solve the real matrices. The solution vector is reconverted to complex numbers. Though this utility is written for Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to make the user acquainted with the installation and operation of the code. Driver routines are given to aid users in integrating PCSMS routines into their own codes.
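
    The record does not show the conversion PCSMS uses; one standard real-equivalent formulation, [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b], is sketched below with SciPy's real sparse LU standing in for the SGI solver routines.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    def solve_complex_with_real_lu(A, b):
        """Solve the complex sparse system A x = b using only a real sparse direct solver."""
        Ar, Ai = A.real, A.imag
        # Real-equivalent block system: [[Ar, -Ai], [Ai, Ar]] [xr; xi] = [br; bi]
        K = sp.bmat([[Ar, -Ai], [Ai, Ar]], format="csc")
        sol = splu(K).solve(np.concatenate([b.real, b.imag]))
        n = A.shape[0]
        return sol[:n] + 1j * sol[n:]

    rng = np.random.default_rng(0)
    n = 200
    A = (sp.random(n, n, density=0.02, random_state=0)
         + 1j * sp.random(n, n, density=0.02, random_state=1)
         + 10.0 * sp.eye(n)).tocsc()
    b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    x = solve_complex_with_real_lu(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
    ```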

  20. Parallel transposition of sparse data structures

    DEFF Research Database (Denmark)

    Wang, Hao; Liu, Weifeng; Hou, Kaixi

    2016-01-01

    Many applications in computational sciences and social sciences exploit sparsity and connectivity of acquired data. Even though many parallel sparse primitives such as sparse matrix-vector (SpMV) multiplication have been extensively studied, some other important building blocks, e.g., parallel transposition, ... transposition in the latest vendor-supplied library on an Intel multicore CPU platform, and the MergeTrans approach achieves an average of 3.4-fold (up to 11.7-fold) speedup on an Intel Xeon Phi many-core processor.

  1. Numerical solution of large sparse linear systems

    International Nuclear Information System (INIS)

    Meurant, Gerard; Golub, Gene.

    1982-02-01

    This note is based on one of the lectures given at the 1980 CEA-EDF-INRIA Numerical Analysis Summer School, whose aim is the study of large sparse linear systems. The main topics are solving least squares problems by orthogonal transformation, fast Poisson solvers, and the solution of sparse linear systems by iterative methods, with a special emphasis on the preconditioned conjugate gradient method. [fr]
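
    As a small illustration of one of the topics mentioned, the preconditioned conjugate gradient method, the sketch below solves a 1-D Poisson system with SciPy; the Jacobi (diagonal) preconditioner is only a convenient stand-in for whatever preconditioners the lectures actually discuss.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    # 1-D Poisson matrix (symmetric positive definite) as a representative sparse system.
    n = 2000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Jacobi (diagonal) preconditioner wrapped as a LinearOperator.
    d = A.diagonal()
    M = LinearOperator((n, n), matvec=lambda v: v / d, dtype=float)

    x, info = cg(A, b, M=M, maxiter=5000)
    print("converged:", info == 0, " residual norm:", np.linalg.norm(b - A @ x))
    ```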

  2. The substitution of mineral fertilizers by compost from household waste in Cameroon: economic analysis with a partial equilibrium model.

    Science.gov (United States)

    Jaza Folefack, Achille Jean

    2009-05-01

    This paper analyses the possibility of substitution between compost and mineral fertilizer in order to assess the impact on the foreign exchange savings in Cameroon of increasing the use of compost. In this regard, a partial equilibrium model was built up and used as a tool for policy simulations. The review of the existing literature already suggests that the commercial value of compost, i.e. its substitution value (33,740 FCFA tonne⁻¹), is higher than the real price of compost (30,000 FCFA tonne⁻¹), showing that it could be profitable to substitute mineral fertilizer with compost. Further results from the scenarios used in the modelling exercise show that increasing compost availability is the most favourable policy for substituting mineral fertilizer with compost. This policy helps to save about 18.55% of the annual imported mineral fertilizer quantity and thus to avoid approximately 8.47% of the yearly total import expenditure in Cameroon. The policy of decreasing the transport rate of compost in regions that are far from the city is also favourable to the substitution. Therefore, in order to encourage the substitution of mineral fertilizer by compost, programmes of popularization of compost should be highlighted and be among the top priorities in the agricultural policy of the Cameroon government.

  3. Sparse Source EEG Imaging with the Variational Garrote

    DEFF Research Database (Denmark)

    Hansen, Sofie Therese; Stahlhut, Carsten; Hansen, Lars Kai

    2013-01-01

    EEG imaging, the estimation of the cortical source distribution from scalp electrode measurements, poses an extremely ill-posed inverse problem. Recent work by Delorme et al. (2012) supports the hypothesis that distributed source solutions are sparse. We show that direct search for sparse solutions...

  4. Low-count PET image restoration using sparse representation

    Science.gov (United States)

    Li, Tao; Jiang, Changhui; Gao, Juan; Yang, Yongfeng; Liang, Dong; Liu, Xin; Zheng, Hairong; Hu, Zhanli

    2018-04-01

    In the field of positron emission tomography (PET), reconstructed images are often blurry and contain noise. These problems are primarily caused by the low resolution of projection data. Solving this problem by improving hardware is an expensive solution, and therefore, we attempted to develop a solution based on optimizing several related algorithms in both the reconstruction and image post-processing domains. As sparse technology is widely used, sparse prediction is increasingly applied to solve this problem. In this paper, we propose a new sparse method to process low-resolution PET images. Two dictionaries (D1 for low-resolution PET images and D2 for high-resolution PET images) are learned from a group of real PET image data sets. Among these two dictionaries, D1 is used to obtain a sparse representation for each patch of the input PET image. Then, a high-resolution PET image is generated from this sparse representation using D2. Experimental results indicate that the proposed method exhibits a stable and superior ability to enhance image resolution and recover image details. Quantitatively, this method achieves better performance than traditional methods. This proposed strategy is a new and efficient approach for improving the quality of PET images.
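
    The paper's training procedure for D1 and D2 is not given in this record; the sketch below follows the generic coupled-dictionary idea from sparse-coding super-resolution (learn one joint dictionary on stacked low/high-resolution patches, code a low-resolution patch with the D1 half, reconstruct with the D2 half) on synthetic patches, so all sizes, the degradation model and the sparsity level are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode

    rng = np.random.default_rng(0)

    # Synthetic data: "high-resolution" patches that are sparse in a hidden dictionary,
    # and "low-resolution" patches obtained by a toy linear degradation. Real patches
    # would instead be extracted from paired PET images.
    n_patches, low_dim, high_dim, n_atoms = 600, 16, 64, 32
    D_hidden = rng.standard_normal((n_atoms, high_dim))
    codes_true = np.zeros((n_patches, n_atoms))
    for row in codes_true:
        row[rng.choice(n_atoms, 3, replace=False)] = rng.standard_normal(3)
    H = codes_true @ D_hidden
    blur = rng.standard_normal((high_dim, low_dim)) / np.sqrt(high_dim)
    Lo = H @ blur

    # Learn one joint dictionary on stacked (low, high) patches, then split it,
    # so that a single sparse code is shared by both representations.
    joint = np.hstack([Lo, H])
    dl = DictionaryLearning(n_components=32, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5, max_iter=20, random_state=0)
    dl.fit(joint)
    D1 = dl.components_[:, :low_dim]     # low-resolution half
    D2 = dl.components_[:, low_dim:]     # high-resolution half

    # At test time: sparse-code a low-resolution patch with D1, reconstruct with D2.
    codes = sparse_encode(Lo[:5], D1, algorithm="omp", n_nonzero_coefs=5)
    recon = codes @ D2
    print("relative reconstruction error:",
          round(float(np.linalg.norm(recon - H[:5]) / np.linalg.norm(H[:5])), 3))
    ```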

  5. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  6. An approach of partial control design for system control and synchronization

    International Nuclear Information System (INIS)

    Hu Wuhua; Wang Jiang; Li Xiumin

    2009-01-01

    In this paper, a general approach to partial control design for system control and synchronization is proposed. It turns control problems into simpler ones by reducing the number of control variables. This is realized by utilizing the dynamical relations between variables, which are described by the dynamical relation matrix and the dependence-influence matrix. By adopting partial control theory, the presented approach provides a simple and general way to stabilize systems to their partial or whole equilibria, or to synchronize systems with their partial or whole states. Further, based on this approach, the controllers can be simplified. Two examples of synchronizing chaotic systems are given to illustrate its effectiveness.

  7. Integration of sparse multi-modality representation and geometrical constraint for isointense infant brain segmentation.

    Science.gov (United States)

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2013-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and the ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, when brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.

  8. Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.

    Science.gov (United States)

    Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang

    2017-07-01

    It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary built from the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with three other common transforms, namely, the discrete Fourier transform, the discrete cosine transform, and the discrete wavelet transform. The performance of the proposed dictionary was analyzed via simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.

  9. A sparse electromagnetic imaging scheme using nonlinear landweber iterations

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2015-01-01

    Development and use of electromagnetic inverse scattering techniques for imagining sparse domains have been on the rise following the recent advancements in solving sparse optimization problems. Existing techniques rely on iteratively converting

  10. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  11. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but it is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise smooth way. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  12. Local posterior concentration rate for multilevel sparse sequences

    NARCIS (Netherlands)

    Belitser, E.N.; Nurushev, N.

    2017-01-01

    We consider empirical Bayesian inference in the many normal means model in the situation when the high-dimensional mean vector is multilevel sparse, that is, most of the entries of the parameter vector are some fixed values. For instance, the traditional sparse signal is a particular case (with one

  13. Sparse modeling of spatial environmental variables associated with asthma.

    Science.gov (United States)

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.
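
    SASEA combines sparse principal components with spatial thin-plate spline terms; the spatial part is omitted in the sketch below, which only shows the two generic steps of the pipeline (sparse PCA for dimension reduction, then a logistic model on the components) using scikit-learn on synthetic stand-in data.

    ```python
    import numpy as np
    from sklearn.decomposition import SparsePCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for block-group environmental variables (rows = block groups)
    # and a binary outcome loosely playing the role of an asthma indicator.
    n_blocks, n_vars = 800, 50
    X = rng.standard_normal((n_blocks, n_vars))
    y = (X[:, 3] - 0.8 * X[:, 17] + 0.3 * rng.standard_normal(n_blocks) > 0).astype(int)

    # Step 1: dimension reduction with sparse principal components.
    spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
    Z = spca.fit_transform(X)
    print("nonzero loadings per component:",
          [int(np.count_nonzero(c)) for c in spca.components_])

    # Step 2: relate the sparse components to the outcome (a plain logistic model
    # in place of the logistic thin-plate regression spline used in the paper).
    clf = LogisticRegression().fit(Z, y)
    print("in-sample accuracy:", round(clf.score(Z, y), 3))
    ```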

  14. Analog system for computing sparse codes

    Science.gov (United States)

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition that solve a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
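
    The patent describes analog hardware; the sketch below is only a discrete-time numerical simulation of the same locally competitive dynamics (leaky integration driven by the input, lateral inhibition through Phi^T Phi, and a soft threshold as one member of the family of thresholding functions). The dictionary size, step size and threshold are arbitrary.

    ```python
    import numpy as np

    def lca(Phi, y, lam=0.1, dt=0.01, n_steps=1000):
        """Locally competitive algorithm: thresholded nodes with lateral inhibition."""
        n_atoms = Phi.shape[1]
        b = Phi.T @ y                        # constant drive from the input
        G = Phi.T @ Phi - np.eye(n_atoms)    # lateral inhibition weights
        u = np.zeros(n_atoms)                # internal (membrane) states
        a = np.zeros(n_atoms)
        for _ in range(n_steps):
            a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft-threshold activations
            u += dt * (b - u - G @ a)                           # leaky integration + competition
        return a

    rng = np.random.default_rng(0)
    m, n = 64, 256
    Phi = rng.standard_normal((m, n))
    Phi /= np.linalg.norm(Phi, axis=0)       # unit-norm overcomplete dictionary
    a_true = np.zeros(n)
    a_true[rng.choice(n, 6, replace=False)] = rng.uniform(1.0, 2.0, 6)
    y = Phi @ a_true

    a_hat = lca(Phi, y)
    print("active nodes:", int(np.count_nonzero(a_hat)),
          " residual norm:", round(float(np.linalg.norm(y - Phi @ a_hat)), 3))
    ```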

  15. Efficient Pseudorecursive Evaluation Schemes for Non-adaptive Sparse Grids

    KAUST Repository

    Buse, Gerrit; Pflü ger, Dirk; Jacob, Riko

    2014-01-01

    In this work we propose novel algorithms for storing and evaluating sparse grid functions, operating on regular (not spatially adaptive), yet potentially dimensionally adaptive grid types. Besides regular sparse grids our approach includes truncated

  16. Occlusion detection via structured sparse learning for robust object tracking

    KAUST Repository

    Zhang, Tianzhu

    2014-01-01

    Sparse representation based methods have recently drawn much attention in visual tracking due to good performance against illumination variation and occlusion. They assume the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios, these errors are not truly pixel-wise sparse but rather sparsely distributed in a structured way. In fact, pixels in error constitute contiguous regions within the object’s track. This is the case when significant occlusion occurs. To accommodate for nonsparse occlusion in a given frame, we assume that occlusion detected in previous frames can be propagated to the current one. This propagated information determines which pixels will contribute to the sparse representation of the current track. In other words, pixels that were detected as part of an occlusion in the previous frame will be removed from the target representation process. As such, this paper proposes a novel tracking algorithm that models and detects occlusion through structured sparse learning. We test our tracker on challenging benchmark sequences, such as sports videos, which involve heavy occlusion, drastic illumination changes, and large pose variations. Extensive experimental results show that our proposed tracker consistently outperforms the state-of-the-art trackers.

  17. Sparse Representation Based SAR Vehicle Recognition along with Aspect Angle

    Directory of Open Access Journals (Sweden)

    Xiangwei Xing

    2014-01-01

    As a method of representing the test sample with few training samples from an overcomplete dictionary, sparse representation classification (SRC) has recently attracted much attention in synthetic aperture radar (SAR) automatic target recognition (ATR). In this paper, we develop a novel SAR vehicle recognition method based on sparse representation classification along with aspect information (SRCA), in which the correlation between the vehicle's aspect angle and the sparse representation vector is exploited. The detailed procedure presented in this paper can be summarized as follows. Initially, the sparse representation vector of a test sample is solved by a sparse representation algorithm with a principal component analysis (PCA) feature-based dictionary. Then, the coefficient vector is projected onto a sparser one within a certain range of the vehicle's aspect angle. Finally, the vehicle is classified into a certain category that minimizes the reconstruction error with the novel sparse representation vector. Extensive experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset and the results demonstrate that the proposed method performs robustly under the variations of depression angle and target configurations, as well as incomplete observation.

  18. Equilibrium and non-equilibrium phenomena in arcs and torches

    NARCIS (Netherlands)

    Mullen, van der J.J.A.M.

    2000-01-01

    A general treatment of non-equilibrium plasma aspects is obtained by relating transport fluxes to equilibrium-restoring processes in so-called disturbed Bilateral Relations. The (non-)equilibrium stage of a small microwave-induced plasma serves as a case study.

  19. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning pr...

  20. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir

    2013-05-01

    A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimates of block-sparse signals even when the distribution of active blocks is non-Gaussian or unknown. It is agnostic to the distribution of active blocks in the signal and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data and no user intervention is required. The method requires a priori knowledge of block partition and utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator. © 2013 IEEE.

  1. Preconditioned Inexact Newton for Nonlinear Sparse Electromagnetic Imaging

    KAUST Repository

    Desmal, Abdulla

    2014-05-04

    Newton-type algorithms have been extensively studied in nonlinear microwave imaging due to their quadratic convergence rate and ability to recover images with high contrast values. In the past, Newton methods have been implemented in conjunction with smoothness promoting optimization/regularization schemes. However, this type of regularization scheme is known to perform poorly when applied in imaging domains with sparse content or sharp variations. In this work, an inexact Newton algorithm is formulated and implemented in conjunction with a linear sparse optimization scheme. A novel preconditioning technique is proposed to increase the convergence rate of the optimization problem. Numerical results demonstrate that the proposed framework produces sharper and more accurate images when applied in sparse/sparsified domains.

  2. Preconditioned Inexact Newton for Nonlinear Sparse Electromagnetic Imaging

    KAUST Repository

    Desmal, Abdulla

    2014-01-06

    Newton-type algorithms have been extensively studied in nonlinear microwave imaging due to their quadratic convergence rate and ability to recover images with high contrast values. In the past, Newton methods have been implemented in conjunction with smoothness promoting optimization/regularization schemes. However, this type of regularization scheme is known to perform poorly when applied in imaging domains with sparse content or sharp variations. In this work, an inexact Newton algorithm is formulated and implemented in conjunction with a linear sparse optimization scheme. A novel preconditioning technique is proposed to increase the convergence rate of the optimization problem. Numerical results demonstrate that the proposed framework produces sharper and more accurate images when applied in sparse/sparsified domains.

  3. Electromagnetic Formation Flight (EMFF) for Sparse Aperture Arrays

    Science.gov (United States)

    Kwon, Daniel W.; Miller, David W.; Sedwick, Raymond J.

    2004-01-01

    Traditional methods of actuating spacecraft in sparse aperture arrays use propellant as a reaction mass. For formation flying systems, propellant becomes a critical consumable which can be quickly exhausted while maintaining relative orientation. Additional problems posed by propellant include optical contamination, plume impingement, thermal emission, and vibration excitation. For these missions where control of relative degrees of freedom is important, we consider using a system of electromagnets, in concert with reaction wheels, to replace the consumables. Electromagnetic Formation Flight sparse apertures, powered by solar energy, are designed differently from traditional propulsion systems, which are based on Δv. This paper investigates the design of sparse apertures both inside and outside the Earth's gravity field.
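    For context only (this formula is standard dipole physics, not taken from the abstract): in the far field, the axial force between two coaxial magnetic dipoles with moments m1 and m2 separated by distance d scales as F = 3·μ0·m1·m2 / (2π·d⁴), which is the basic scaling that makes electromagnets a plausible replacement for propellant in relative-position control. A small helper evaluating it:

      import math

      MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

      def coaxial_dipole_force(m1, m2, d):
          """Far-field force (N) between two coaxial magnetic dipoles with
          moments m1, m2 (A*m^2) separated by distance d (m)."""
          return 3.0 * MU0 * m1 * m2 / (2.0 * math.pi * d**4)

      # Example: two 1e5 A*m^2 coils 10 m apart -> about 0.6 N
      # coaxial_dipole_force(1e5, 1e5, 10.0)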

  4. Preconditioned Inexact Newton for Nonlinear Sparse Electromagnetic Imaging

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2014-01-01

    Newton-type algorithms have been extensively studied in nonlinear microwave imaging due to their quadratic convergence rate and ability to recover images with high contrast values. In the past, Newton methods have been implemented in conjunction with smoothness promoting optimization/regularization schemes. However, this type of regularization schemes are known to perform poorly when applied in imagining domains with sparse content or sharp variations. In this work, an inexact Newton algorithm is formulated and implemented in conjunction with a linear sparse optimization scheme. A novel preconditioning technique is proposed to increase the convergence rate of the optimization problem. Numerical results demonstrate that the proposed framework produces sharper and more accurate images when applied in sparse/sparsified domains.

  5. A comprehensive study of sparse codes on abnormality detection

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2017-01-01

    Sparse representation has been applied successfully in abnormal event detection, in which the baseline is to learn a dictionary accompanied by sparse codes. While much emphasis is put on discriminative dictionary construction, there are no comparative studies of sparse codes regarding abnormality detection. We comprehensively study two types of sparse code solutions - greedy algorithms and convex L1-norm solutions - and their impact on abnormality detection performance. We also propose our framework of combining sparse codes with different detection methods. Our comparative experiments are carried...

  6. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir; Al-Naffouri, Tareq Y.

    2013-01-01

    priori knowledge of block partition and utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal

  7. Time-dependent free boundary equilibrium and resistive diffusion in a tokamak plasma

    International Nuclear Information System (INIS)

    Selig, G.

    2012-12-01

    In a tokamak, in order to create the necessary conditions for nuclear fusion to occur, a plasma is maintained by applying magnetic fields. Under the hypothesis of axial symmetry of the tokamak, the study of the magnetic configuration at equilibrium is done in two dimensions, and is deduced from the poloidal flux function. This function is the solution of a system of nonlinear partial differential equations, known as the equilibrium problem. This thesis presents the time-dependent free boundary equilibrium problem, where the circuit equations in the tokamak coils and passive conductors are solved together with the Grad-Shafranov equation to produce a dynamic simulation of the plasma. In this framework, the Finite Element equilibrium code CEDRES has been improved in order to solve the aforementioned dynamic problem. Consistency tests and comparisons with the DINA-CH code on an ITER vertical instability case have validated the results. Then, the resistive diffusion of the plasma current density has been simulated using a coupling between CEDRES and the averaged one-dimensional diffusion equation, and it has been successfully compared with the integrated modeling code CRONOS. (author)
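    For reference, the axisymmetric equilibrium mentioned above is governed by the Grad-Shafranov equation for the poloidal flux ψ(R, Z); its standard form is quoted here for context (the CEDRES-specific treatment of coil and passive-conductor currents on the right-hand side outside the plasma is not reproduced):

      \Delta^{*}\psi \;\equiv\; R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial\psi}{\partial R}\right)
        + \frac{\partial^{2}\psi}{\partial Z^{2}}
        \;=\; -\mu_{0} R^{2}\, p'(\psi) \;-\; F(\psi)\, F'(\psi)

    where p(ψ) is the plasma pressure profile and F(ψ) = R B_φ is the diamagnetic function.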

  8. On a class of quantum Langevin equations and the question of approach to equilibrium

    International Nuclear Information System (INIS)

    Maassen, J.D.M.

    1982-01-01

    This thesis is concerned with a very simple 'open' quantum system, i.e. one in contact with the outside world. It is asked whether the motion of this system shows frictional behaviour in that it tends to thermal equilibrium. A partial positive answer is given to this question, more precisely, to the question whether the solution of the quantum-mechanical Langevin equation describing the Lamb model (a harmonic oscillator damped by coupling to a string) approaches an equilibrium state. In two sections, the classical and quantum Langevin equations are treated analogously. (Auth.)

  9. An approximate method for calculating composition of the non-equilibrium explosion products of hydrocarbons and oxygen

    International Nuclear Information System (INIS)

    Shargatov, V A; Gubin, S A; Okunev, D Yu

    2016-01-01

    We develop a method for calculating the changes in composition of the explosion products in the case where complete chemical equilibrium is absent but the bimolecular reactions are in quasi-equilibrium, with the exception of bimolecular reactions involving one of the components of the mixture. We investigate the possibility of using the method of 'quasi-equilibrium' for mixtures of hydrocarbons and oxygen. The method is based on the assumption of the existence of partial chemical equilibrium in the explosion products. Without significant loss of accuracy, the solution of the stiff differential equations of the detailed kinetic mechanism can be replaced by one or two differential equations and a system of algebraic equations. This method is always consistent with the detailed mechanism and can be used separately or in conjunction with the solution of a stiff system for chemically non-equilibrium mixtures, replacing it when the bimolecular reactions are near equilibrium. (paper)
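    A toy illustration of this quasi-equilibrium idea (a generic sketch, not the authors' hydrocarbon/oxygen mechanism; the species names, the constant K_EQ, and the slow sink D -> E are all hypothetical): a fast reversible reaction A + B <=> C + D is replaced by the algebraic constraint [C][D]/([A][B]) = K_EQ, so that only the slow reaction is integrated and the fast pair is re-equilibrated after every step.

      from scipy.optimize import brentq

      K_EQ, K_SLOW, DT = 10.0, 0.05, 0.01     # illustrative constants

      def equilibrate(a, b, c, d):
          """Shift the fast reaction A+B <=> C+D by an extent x such that
          (c+x)(d+x) = K_EQ*(a-x)(b-x), with x bracketed by mass conservation."""
          f = lambda x: (c + x) * (d + x) - K_EQ * (a - x) * (b - x)
          x = brentq(f, -min(c, d) + 1e-12, min(a, b) - 1e-12)
          return a - x, b - x, c + x, d + x

      def step(a, b, c, d, e):
          removed = K_SLOW * d * DT            # slow sink D -> E, explicit Euler
          d, e = d - removed, e + removed
          a, b, c, d = equilibrate(a, b, c, d) # fast pair held in partial equilibrium
          return a, b, c, d, e

      state = (1.0, 1.0, 0.0, 0.0, 0.0)        # initial [A], [B], [C], [D], [E]
      for _ in range(1000):
          state = step(*state)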

  10. Thermal equilibrium properties of an intense relativistic electron beam

    International Nuclear Information System (INIS)

    Davidson, R.C.; Uhm, H.S.

    1979-01-01

    The thermal equilibrium properties of an intense relativistic electron beam with distribution function f⁰_b = Z_b⁻¹ exp[−(H − β_b c P_z − ω_b P_θ)/T] are investigated. This choice of f⁰_b allows for a mean azimuthal rotation of the beam electrons (when ω_b ≠ 0), and corresponds to an important generalization of the distribution function first analyzed by Bennett. Beam equilibrium properties, including the axial velocity profile V⁰_zb(r), azimuthal velocity profile V⁰_θb(r), beam temperature profile T⁰_b(r), beam density profile n⁰_b(r), and equilibrium self-field profiles, are calculated for a broad range of system parameters. For an appropriate choice of the beam rotation velocity ω_b, it is found that radially confined equilibrium solutions [with n⁰_b(r→∞) = 0] exist even in the absence of a partially neutralizing ion background that weakens the repulsive space-charge force. The necessary and sufficient conditions for radially confined equilibria are of the form ω⁻_b < ω_b < ω⁺_b, with the limiting rotation frequencies determined by the parameter (ω²_bp/ω²_bc)(1 − f − β²_b).

  11. Selectivity and sparseness in randomly connected balanced networks.

    Directory of Open Access Journals (Sweden)

    Cengiz Pehlevan

    Full Text Available Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random connectivity, driven by random projections from an input layer of stimulus selective neurons. In this architecture, the stimulus-to-stimulus and neuron-to-neuron modulation of total synaptic input is weak compared to the mean input. Surprisingly, we show that in the balanced state the network can still support high stimulus selectivity and sparse population response. In the balanced state, strong synapses amplify the variation in synaptic input and recurrent inhibition cancels the mean. Functional specificity in connectivity emerges due to the inhomogeneity caused by the generative statistical rule used to build the network. We further elucidate the mechanism behind and evaluate the effects of model parameters on population sparseness and stimulus selectivity. Network response to mixtures of stimuli is investigated. It is shown that a balanced state with unselective inhibition can be achieved with densely connected input to inhibitory population. Balanced networks exhibit the "paradoxical" effect: an increase in excitatory drive to inhibition leads to decreased inhibitory population firing rate. We compare and contrast selectivity and sparseness generated by the balanced network to randomly connected unbalanced networks. Finally, we discuss our results in light of experiments.

  12. SPARSE ELECTROMAGNETIC IMAGING USING NONLINEAR LANDWEBER ITERATIONS

    KAUST Repository

    Desmal, Abdulla

    2015-07-29

    A scheme for efficiently solving the nonlinear electromagnetic inverse scattering problem on sparse investigation domains is described. The proposed scheme reconstructs the (complex) dielectric permittivity of an investigation domain from fields measured away from the domain itself. Least-squares data misfit between the computed scattered fields, which are expressed as a nonlinear function of the permittivity, and the measured fields is constrained by the L0/L1-norm of the solution. The resulting minimization problem is solved using nonlinear Landweber iterations, where at each iteration a thresholding function is applied to enforce the sparseness-promoting L0/L1-norm constraint. The thresholded nonlinear Landweber iterations are applied to several two-dimensional problems, where the "measured" fields are synthetically generated or obtained from actual experiments. These numerical experiments demonstrate the accuracy, efficiency, and applicability of the proposed scheme in reconstructing sparse profiles with high permittivity values.
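    A linearized sketch of the thresholded Landweber idea (the scheme above applies it to the nonlinear scattering operator; here A and y are just a generic measurement matrix and data vector): take gradient steps on the data misfit and soft-threshold after each step to promote sparsity.

      import numpy as np

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def thresholded_landweber(A, y, lam=0.1, n_iter=200):
          """Sparse solution of A x ~ y via Landweber (gradient) steps on the
          least-squares misfit followed by soft thresholding (ISTA-like)."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from spectral norm
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = soft_threshold(x + step * A.T @ (y - A @ x), lam * step)
          return x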

  13. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them into a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) (generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in the color image analysis and processing domain.

  14. Fast convolutional sparse coding using matrix inversion lemma

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Šroubek, Filip

    2016-01-01

    Roč. 55, č. 1 (2016), s. 44-51 ISSN 1051-2004 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Convolutional sparse coding * Feature learning * Deconvolution networks * Shift-invariant sparse coding Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.337, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/sorel-0459332.pdf

  15. Structure-based bayesian sparse reconstruction

    KAUST Repository

    Quadeer, Ahmed Abdul; Al-Naffouri, Tareq Y.

    2012-01-01

    Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical

  16. Analyzing the dependence of oxygen incorporation current density on overpotential and oxygen partial pressure in mixed conducting oxide electrodes.

    Science.gov (United States)

    Guan, Zixuan; Chen, Di; Chueh, William C

    2017-08-30

    The oxygen incorporation reaction, which involves the transformation of an oxygen gas molecule to two lattice oxygen ions in a mixed ionic and electronic conducting solid, is a ubiquitous and fundamental reaction in solid-state electrochemistry. To understand the reaction pathway and to identify the rate-determining step, near-equilibrium measurements have been employed to quantify the exchange coefficients as a function of oxygen partial pressure and temperature. However, because the exchange coefficient contains contributions from both forward and reverse reaction rate constants and depends on both oxygen partial pressure and oxygen fugacity in the solid, unique and definitive mechanistic assessment has been challenging. In this work, we derive a current density equation as a function of both oxygen partial pressure and overpotential, and consider both near and far from equilibrium limits. Rather than considering specific reaction pathways, we generalize the multi-step oxygen incorporation reaction into the rate-determining step, preceding and following quasi-equilibrium steps, and consider the number of oxygen ions and electrons involved in each. By evaluating the dependence of current density on oxygen partial pressure and overpotential separately, one obtains the reaction orders for oxygen gas molecules and for solid-state species in the electrode. We simulated the oxygen incorporation current density-overpotential curves for praseodymium-doped ceria for various candidate rate-determining steps. This work highlights a promising method for studying the exchange kinetics far away from equilibrium.

  17. Binary Sparse Phase Retrieval via Simulated Annealing

    Directory of Open Access Journals (Sweden)

    Wei Peng

    2016-01-01

    Full Text Available This paper presents the Simulated Annealing Sparse PhAse Recovery (SASPAR) algorithm for reconstructing sparse binary signals from the phaseless magnitudes of their Fourier transform. A greedy-strategy version, which is parameter-free, is also proposed for comparison. Numerical simulations indicate that our method is quite effective and suggest that the binary model is robust. The SASPAR algorithm appears competitive with existing methods in terms of efficiency and recovery rate, even with fewer Fourier measurements.
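    A stripped-down sketch of the simulated-annealing search over binary signals (the move set, cooling schedule, and parameters are illustrative, not the SASPAR settings): flip one bit at a time, accept with the Metropolis rule on the magnitude-mismatch cost, and cool geometrically.

      import numpy as np

      def sa_binary_phase_retrieval(mag, n, n_iter=20000, t0=1.0, cool=0.9995, rng=None):
          """mag: observed |FFT| magnitudes of an unknown binary signal of length n."""
          rng = np.random.default_rng() if rng is None else rng
          x = rng.integers(0, 2, n).astype(float)
          cost = lambda v: np.linalg.norm(np.abs(np.fft.fft(v)) - mag)
          c, t = cost(x), t0
          for _ in range(n_iter):
              i = rng.integers(n)
              x[i] = 1.0 - x[i]                    # propose a single bit flip
              c_new = cost(x)
              if c_new <= c or rng.random() < np.exp(-(c_new - c) / t):
                  c = c_new                        # accept the move
              else:
                  x[i] = 1.0 - x[i]                # reject: undo the flip
              t *= cool                            # geometric cooling
          return x, c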

  18. Confidence of model based shape reconstruction from sparse data

    DEFF Research Database (Denmark)

    Baka, N.; de Bruijne, Marleen; Reiber, J. H. C.

    2010-01-01

    Statistical shape models (SSM) are commonly applied for plausible interpolation of missing data in medical imaging. However, when fitting a shape model to sparse information, many solutions may fit the available data. In this paper we derive a constrained SSM to fit noisy sparse input landmarks...

  19. Proportionate Minimum Error Entropy Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Zongze Wu

    2015-08-01

    Full Text Available Sparse system identification has received a great deal of attention due to its broad applicability. The proportionate normalized least mean square (PNLMS algorithm, as a popular tool, achieves excellent performance for sparse system identification. In previous studies, most of the cost functions used in proportionate-type sparse adaptive algorithms are based on the mean square error (MSE criterion, which is optimal only when the measurement noise is Gaussian. However, this condition does not hold in most real-world environments. In this work, we use the minimum error entropy (MEE criterion, an alternative to the conventional MSE criterion, to develop the proportionate minimum error entropy (PMEE algorithm for sparse system identification, which may achieve much better performance than the MSE based methods especially in heavy-tailed non-Gaussian situations. Moreover, we analyze the convergence of the proposed algorithm and derive a sufficient condition that ensures the mean square convergence. Simulation results confirm the excellent performance of the new algorithm.

  20. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera

    2014-12-31

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  1. Ordering sparse matrices for cache-based systems

    International Nuclear Information System (INIS)

    Biswas, Rupak; Oliker, Leonid

    2001-01-01

    The Conjugate Gradient (CG) algorithm is the oldest and best-known Krylov subspace method used to solve sparse linear systems. Most of the floating-point operations within each CG iteration are spent performing sparse matrix-vector multiplication (SPMV). We examine how various ordering and partitioning strategies affect the performance of CG and SPMV when different programming paradigms are used on current commercial cache-based computers. However, a multithreaded implementation on the cacheless Cray MTA demonstrates high efficiency and scalability without any special ordering or partitioning
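    Because SPMV dominates each CG iteration, its memory-access pattern is what such orderings try to improve. A plain CSR multiply (a generic reference kernel, independent of any particular ordering or machine) makes the pattern explicit: values and column indices stream contiguously, while the gather into the dense vector is indirect and ordering-dependent.

      import numpy as np

      def csr_spmv(indptr, indices, data, x):
          """y = A @ x for A in CSR form (row pointers, column indices, values).
          The gather x[indices[lo:hi]] is the cache-sensitive part that
          row/column orderings aim to make more local."""
          y = np.zeros(len(indptr) - 1)
          for row in range(len(y)):
              lo, hi = indptr[row], indptr[row + 1]
              y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
          return y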

  2. A flexible framework for sparse simultaneous component based data integration

    Directory of Open Access Journals (Sweden)

    Van Deun Katrijn

    2011-11-01

    Full Text Available Abstract Background: High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. Results: We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, sparse group lasso, and elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Conclusion: Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such

  3. A flexible framework for sparse simultaneous component based data integration.

    Science.gov (United States)

    Van Deun, Katrijn; Wilderjans, Tom F; van den Berg, Robert A; Antoniadis, Anestis; Van Mechelen, Iven

    2011-11-15

    High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches as the lasso, the ridge penalty, the elastic net, the group lasso, sparse group lasso, and elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows to take the block structure in different ways into account. As such, structures can be found that are exclusively tied to one data platform
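    As a minimal illustration of the shrink-to-zero mechanism these methods share (a one-block sketch, not the authors' multi-block algorithm or any of the specific penalties listed above), the snippet below extracts a single sparse component by alternating a power-iteration update with soft thresholding of the loading vector.

      import numpy as np

      def sparse_component(X, lam=0.1, n_iter=100, seed=0):
          """One sparse loading vector for a data block X (samples x variables):
          alternate scores t = X v with soft-thresholded, renormalized loadings v."""
          rng = np.random.default_rng(seed)
          v = rng.standard_normal(X.shape[1])
          v /= np.linalg.norm(v)
          for _ in range(n_iter):
              t = X @ v                            # component scores
              v = X.T @ t                          # unnormalized loadings
              thr = lam * np.abs(v).max()          # relative threshold
              v = np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)
              norm = np.linalg.norm(v)
              if norm == 0.0:
                  break
              v /= norm
          return v, X @ v                          # sparse loadings and scores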

  4. P-SPARSLIB: A parallel sparse iterative solution package

    Energy Technology Data Exchange (ETDEWEB)

    Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)

    1994-12-31

    Iterative methods are gaining popularity in engineering and sciences at a time where the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability. In the long run, the author wishes to ensure that this package is portable on a variety of platforms, including SIMD environments and shared memory environments.

  5. Feature selection and multi-kernel learning for sparse representation on a manifold

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.

  6. Feature selection and multi-kernel learning for sparse representation on a manifold.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Sparse representation, modeling and learning in visual recognition theory, algorithms and applications

    CERN Document Server

    Cheng, Hong

    2015-01-01

    This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen

  8. Design Patterns for Sparse-Matrix Computations on Hybrid CPU/GPU Platforms

    Directory of Open Access Journals (Sweden)

    Valeria Cardellini

    2014-01-01

    Full Text Available We apply object-oriented software design patterns to develop code for scientific software involving sparse matrices. Design patterns arise when multiple independent developments produce similar designs which converge onto a generic solution. We demonstrate how to use design patterns to implement an interface for sparse matrix computations on NVIDIA GPUs starting from PSBLAS, an existing sparse matrix library, and from existing sets of GPU kernels for sparse matrices. We also compare the throughput of the PSBLAS sparse matrix–vector multiplication on two platforms exploiting the GPU with that obtained by a CPU-only PSBLAS implementation. Our experiments exhibit encouraging results regarding the comparison between CPU and GPU executions in double precision, obtaining a speedup of up to 35.35 on NVIDIA GTX 285 with respect to AMD Athlon 7750, and up to 10.15 on NVIDIA Tesla C2050 with respect to Intel Xeon X5650.

  9. An Efficient GPU General Sparse Matrix-Matrix Multiplication for Irregular Data

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2014-01-01

    General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method, breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM algorithm has to handle extra...... irregularity from three aspects: (1) the number of the nonzero entries in the result sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the result sparse matrix dominate the execution time, and (3) load balancing must account for sparse data in both input....... Load balancing builds on the number of the necessary arithmetic operations on the nonzero entries and is guaranteed in all stages. Compared with the state-of-the-art GPU SpGEMM methods in the CUSPARSE library and the CUSP library and the latest CPU SpGEMM method in the Intel Math Kernel Library, our...

  10. Comparison of Methods for Sparse Representation of Musical Signals

    DEFF Research Database (Denmark)

    Endelt, Line Ørtoft; la Cour-Harbo, Anders

    2005-01-01

    by a number of sparseness measures and results are shown on the ℓ1 norm of the coefficients, using a dictionary containing a Dirac basis, a Discrete Cosine Transform, and a Wavelet Packet. Evaluated only on the sparseness Matching Pursuit is the best method, and it is also relatively fast....

  11. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

    Full Text Available Sparse matrix reconstruction has wide applications, such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. We then put forward the Joint-2D-SL0 algorithm, which solves the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.

  12. An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling

    Science.gov (United States)

    Kane, Patrick; Zollman, Kevin J. S.

    2015-01-01

    The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the “hybrid equilibrium,” to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith’s Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory. PMID:26348617

  13. Discussion of CoSA: Clustering of Sparse Approximations

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Derek Elswick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-07

    The purpose of this talk is to discuss the possible applications of CoSA (Clustering of Sparse Approximations) to the exploitation of HSI (HyperSpectral Imagery) data. CoSA is presented by Moody et al. in the Journal of Applied Remote Sensing (“Land cover classification in multispectral imagery using clustering of sparse approximations over learned feature dictionaries”, Vol. 8, 2014) and is based on machine learning techniques.

  14. Application Research of the Sparse Representation of Eigenvector on the PD Positioning in the Transformer Oil

    Directory of Open Access Journals (Sweden)

    Qing Xie

    2016-01-01

    Full Text Available The partial discharge (PD) detection of electrical equipment is important for the safe operation of the power system. The ultrasonic signal generated by a PD in the oil is a broadband signal. However, most array signal processing methods currently target narrowband signals, and the performance of some of them on wideband signals is not satisfactory. Therefore, it is necessary to find new broadband signal processing methods to improve the detection ability for the PD source. In this paper, a direction of arrival (DOA) estimation method based on sparse representation of eigenvectors is proposed, which can further reduce noise interference. The simulation results show that this direction finding method is feasible for broadband signals and thus improves the subsequent positioning accuracy of the three-array localization method. Experimental results verify that the direction finding method based on sparse representation of eigenvectors is feasible for the ultrasonic array, achieving accurate estimation of the direction of arrival and improving the subsequent positioning accuracy. This can provide important guidance for equipment maintenance in practical applications.

  15. Equilibrium and pre-equilibrium emissions in proton-induced ...

    Indian Academy of Sciences (India)

    necessary for the domain of fission-reactor technology for the calculation of nuclear transmutation ... reactions occur in three stages: INC, pre-equilibrium and equilibrium (or compound ... In the evaporation phase of the reaction, the ...

  16. Shape characteristics of equilibrium and non-equilibrium fractal clusters.

    Science.gov (United States)

    Mansfield, Marc L; Douglas, Jack F

    2013-07-28

    It is often difficult in practice to discriminate between equilibrium and non-equilibrium nanoparticle or colloidal-particle clusters that form through aggregation in gas or solution phases. Scattering studies often permit the determination of an apparent fractal dimension, but both equilibrium and non-equilibrium clusters in three dimensions frequently have fractal dimensions near 2, so that it is often not possible to discriminate on the basis of this geometrical property. A survey of the anisotropy of a wide variety of polymeric structures (linear and ring random and self-avoiding random walks, percolation clusters, lattice animals, diffusion-limited aggregates, and Eden clusters) based on the principal components of both the radius of gyration and electric polarizability tensor indicates, perhaps counter-intuitively, that self-similar equilibrium clusters tend to be intrinsically anisotropic at all sizes, while non-equilibrium processes such as diffusion-limited aggregation or Eden growth tend to be isotropic in the large-mass limit, providing a potential means of discriminating these clusters experimentally if anisotropy could be determined along with the fractal dimension. Equilibrium polymer structures, such as flexible polymer chains, are normally self-similar due to the existence of only a single relevant length scale, and are thus anisotropic at all length scales, while non-equilibrium polymer structures that grow irreversibly in time eventually become isotropic if there is no difference in the average growth rates in different directions. There is apparently no proof of these general trends and little theoretical insight into what controls the universal anisotropy in equilibrium polymer structures of various kinds. This is an obvious topic of theoretical investigation, as well as a matter of practical interest. To address this general problem, we consider two experimentally accessible ratios, one between the hydrodynamic and gyration radii, the other
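    As an aside on how such shape anisotropy is quantified in practice (a generic sketch based on the gyration tensor alone, not the authors' combined radius-of-gyration and polarizability analysis), the principal components of the gyration tensor of a cluster's monomer coordinates give its aspect ratios directly:

      import numpy as np

      def gyration_anisotropy(coords):
          """coords: (N, 3) monomer positions. Returns the eigenvalues of the
          gyration tensor (ascending) and the largest/smallest ratio as a
          simple anisotropy measure."""
          r = coords - coords.mean(axis=0)
          S = r.T @ r / len(r)                  # 3x3 gyration tensor
          eig = np.sort(np.linalg.eigvalsh(S))
          return eig, eig[-1] / eig[0]

      # Example: an ideal random walk of 1000 steps
      # walk = np.cumsum(np.random.default_rng(1).standard_normal((1000, 3)), axis=0)
      # gyration_anisotropy(walk)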

  17. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), sparse representation-based classifier (SRC), nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better than the other methods on facial expression recognition tasks.
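    A compact sketch of the classification rule described above (feature extraction omitted; scipy's nnls stands in for the NNLS solver, and the dictionary and label names are hypothetical): code the test feature against the training dictionary with non-negative least squares and assign the class with the smallest class-wise reconstruction residual.

      import numpy as np
      from scipy.optimize import nnls

      def nnls_classify(y, D, labels):
          """y: test feature vector; D: (d, n) training dictionary; labels: (n,)."""
          x, _ = nnls(D, y)                     # non-negative coding of the test sample
          errs = {}
          for c in np.unique(labels):
              mask = labels == c
              errs[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
          return min(errs, key=errs.get)        # class with smallest residual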

  18. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  19. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  20. Joint sparse representation for robust multimodal biometrics recognition.

    Science.gov (United States)

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.

  1. Gaussian random bridges and a geometric model for information equilibrium

    Science.gov (United States)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T , 0) -bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.

  2. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu

    2014-06-19

    Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive since temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.

  3. Efficient collaborative sparse channel estimation in massive MIMO

    KAUST Repository

    Masood, Mudassir

    2015-08-12

    We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.

  4. Efficient collaborative sparse channel estimation in massive MIMO

    KAUST Repository

    Masood, Mudassir; Afify, Laila H.; Al-Naffouri, Tareq Y.

    2015-01-01

    We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.
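    One standard way to exploit a support shared across antennas (a simultaneous-OMP sketch under generic assumptions, not necessarily the authors' collaborative scheme) is to select, at each iteration, the tap whose aggregate correlation over all antennas is largest and then re-fit every antenna's channel on the common support:

      import numpy as np

      def somp_channels(Phi, Y, k):
          """Phi: (m, n) pilot/measurement matrix shared by all antennas;
          Y: (m, r) received data, one column per antenna; k: common sparsity."""
          support, R = [], Y.copy()
          for _ in range(k):
              corr = np.abs(Phi.T @ R).sum(axis=1)      # aggregate over antennas
              corr[support] = 0.0                       # do not re-pick chosen taps
              support.append(int(np.argmax(corr)))
              X_s, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
              R = Y - Phi[:, support] @ X_s             # joint residual update
          H = np.zeros((Phi.shape[1], Y.shape[1]), dtype=X_s.dtype)
          H[support, :] = X_s                           # channel taps on the common support
          return H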

  5. Sparse dictionary learning of resting state fMRI networks.

    Science.gov (United States)

    Eavani, Harini; Filipovych, Roman; Davatzikos, Christos; Satterthwaite, Theodore D; Gur, Raquel E; Gur, Ruben C

    2012-07-02

    Research in resting state fMRI (rsfMRI) has revealed the presence of stable, anti-correlated functional subnetworks in the brain. Task-positive networks are active during a cognitive process and are anti-correlated with task-negative networks, which are active during rest. In this paper, based on the assumption that the structure of the resting state functional brain connectivity is sparse, we utilize sparse dictionary modeling to identify distinct functional sub-networks. We propose two ways of formulating the sparse functional network learning problem that characterize the underlying functional connectivity from different perspectives. Our results show that the whole-brain functional connectivity can be concisely represented with highly modular, overlapping task-positive/negative pairs of sub-networks.

  6. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  7. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  8. Regularized generalized eigen-decomposition with applications to sparse supervised feature extraction and sparse discriminant analysis

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2015-01-01

    We propose a general technique for obtaining sparse solutions to generalized eigenvalue problems, and call it Regularized Generalized Eigen-Decomposition (RGED). For decades, Fisher's discriminant criterion has been applied in supervised feature extraction and discriminant analysis, and it is for...

  9. A performance study of sparse Cholesky factorization on INTEL iPSC/860

    Science.gov (United States)

    Zubair, M.; Ghose, M.

    1992-01-01

    The problem of Cholesky factorization of a sparse matrix has been very well investigated on sequential machines. A number of efficient codes exist for factorizing large unstructured sparse matrices. However, there is a lack of such efficient codes on parallel machines in general, and distributed machines in particular. Some of the issues that are critical to the implementation of sparse Cholesky factorization on a distributed memory parallel machine are ordering, partitioning and mapping, load balancing, and ordering of various tasks within a processor. Here, we focus on the effect of various partitioning schemes on the performance of sparse Cholesky factorization on the Intel iPSC/860. Also, a new partitioning heuristic for structured as well as unstructured sparse matrices is proposed, and its performance is compared with other schemes.

  10. Long-term measurements of equilibrium factor with electrochemically etched CR-39 SSNTD

    International Nuclear Information System (INIS)

    Ng, F.M.F.; Nikezic, D.; Yu, K.N.

    2007-01-01

    Recently, our group proposed a method (proxy equilibrium factor method) using a bare LR 115 detector for long-term monitoring of the equilibrium factor. Due to the presence of an upper alpha-particle energy threshold for track formation in the LR 115 detector, the partial sensitivities to 222Rn, 218Po and 214Po were the same, which made possible measurements of a proxy equilibrium factor F_p that was well correlated with the equilibrium factor. In the present work, the method is extended to CR-39 detectors which have better-controlled etching properties but do not have an upper energy threshold. An exposed bare CR-39 detector is first pre-etched in 6.25 N NaOH solution at 70 °C for 6 h, and then etched electrochemically in a 6.25 N NaOH solution with an AC voltage of 400 V (peak to peak) at 5 kHz applied across the detectors for 1 h at room temperature. Under these conditions, for tracks corresponding to incident angles larger than or equal to 50 deg., the treeing efficiency is 0% and 100% for incident energies smaller than and larger than 4 MeV, respectively. A simple method is then proposed to obtain the total number of tracks formed below the upper energy threshold of 4 MeV, from which the proxy equilibrium factor method can apply

  11. l1- and l2-Norm Joint Regularization Based Sparse Signal Reconstruction Scheme

    Directory of Open Access Journals (Sweden)

    Chanzi Liu

    2016-01-01

    Full Text Available Many problems in signal processing and statistical inference involve finding a sparse solution to an underdetermined linear system of equations. This is also the setting of compressive sensing (CS), which recovers the sparse solution from far fewer measurements than the length of the original signal. In this paper, we propose an l1- and l2-norm joint regularization based reconstruction framework to approach the original l0-norm based, sparseness-inducing constrained sparse signal reconstruction problem. Firstly, it is shown that, by employing a simple conjugate gradient algorithm, the new formulation provides an effective framework for approximating the solution of the original sparse signal reconstruction problem with its l0-norm regularization term. Secondly, an upper bound on the reconstruction error is presented for the proposed framework, and it is shown that in most cases the proposed scheme achieves a smaller reconstruction error than l1-norm relaxation approaches. Finally, simulation results are presented to validate the proposed sparse signal reconstruction approach.
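
    The l1- plus l2-norm penalty described above can be prototyped with off-the-shelf tools. The sketch below is only an assumption-laden stand-in for the paper's conjugate-gradient-based solver: it recovers a synthetic sparse signal from underdetermined noisy measurements by minimizing a joint l1/l2 regularized least-squares objective via scikit-learn's ElasticNet, with problem sizes and penalty weights chosen as placeholders.

      import numpy as np
      from sklearn.linear_model import ElasticNet

      rng = np.random.default_rng(0)
      n, m, k = 256, 100, 10                 # signal length, measurements, sparsity

      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      A = rng.standard_normal((m, n)) / np.sqrt(m)   # underdetermined sensing matrix
      y = A @ x_true + 0.01 * rng.standard_normal(m)

      # ElasticNet minimizes (1/2m)||y - Ax||^2 + alpha*l1_ratio*||x||_1
      #                      + 0.5*alpha*(1 - l1_ratio)*||x||_2^2,
      # i.e. exactly a joint l1/l2 penalty on the reconstruction.
      model = ElasticNet(alpha=1e-3, l1_ratio=0.9, fit_intercept=False, max_iter=10000)
      model.fit(A, y)
      x_hat = model.coef_

      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))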

  12. With timing options and heterogeneous costs, the lognormal diffusion is hardly an equilibrium price process for exhaustible resources

    International Nuclear Information System (INIS)

    Lund, D.

    1992-01-01

    The report analyses whether the lognormal diffusion process can be an equilibrium spot price process for an exhaustible resource. A partial equilibrium model is used under the assumption that the resource deposits have different extraction costs. Two separate problems are pointed out. Under full certainty, when the process reduces to an exponentially growing price, the equilibrium places a very strong restriction on the relationship between the demand function and the cost density function. Under uncertainty there is an additional problem: during periods in which the price is lower than its previously recorded high, no new deposits will start extraction. 30 refs., 1 fig

  13. Image fusion via nonlocal sparse K-SVD dictionary learning.

    Science.gov (United States)

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images captured via various sensors of the same scene to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering a great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), and abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images, and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
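
    A minimal sketch of the sparse-coding side of such a fusion scheme is given below. It encodes two stand-in source patches over a fixed overcomplete DCT dictionary with orthogonal matching pursuit and fuses them with a max-absolute-coefficient rule; the learned NL_SK_SVD dictionary and the simultaneous OMP used in this record are not reproduced, and all sizes and names here are assumptions.

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def overcomplete_dct_dictionary(patch=8, atoms=16):
          # Separable overcomplete DCT dictionary: 64 x 256 for 8x8 patches.
          D1 = np.cos(np.outer(np.arange(patch), np.arange(atoms)) * np.pi / atoms)
          D1[:, 1:] -= D1[:, 1:].mean(axis=0)
          D = np.kron(D1, D1)
          return D / np.linalg.norm(D, axis=0)

      rng = np.random.default_rng(1)
      D = overcomplete_dct_dictionary()
      p1 = rng.standard_normal(64)          # stand-ins for two co-registered source patches
      p2 = rng.standard_normal(64)

      c1 = orthogonal_mp(D, p1, n_nonzero_coefs=8)
      c2 = orthogonal_mp(D, p2, n_nonzero_coefs=8)

      # "Max absolute coefficient" fusion rule on the sparse codes, then reconstruct.
      fused = D @ np.where(np.abs(c1) >= np.abs(c2), c1, c2)
      print(fused.shape)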

  14. Detection of Pitting in Gears Using a Deep Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Yongzhi Qu

    2017-05-01

    Full Text Available In this paper, a new method for gear pitting fault detection is presented. The presented method is developed based on a deep sparse autoencoder. The method integrates dictionary learning in sparse coding into a stacked autoencoder network. Sparse coding with dictionary learning is viewed as an adaptive feature extraction method for machinery fault diagnosis. An autoencoder is an unsupervised machine learning technique. A stacked autoencoder network with multiple hidden layers is considered to be a deep learning network. The presented method uses a stacked autoencoder network to perform the dictionary learning in sparse coding and extract features from raw vibration data automatically. These features are then used to perform gear pitting fault detection. The presented method is validated with vibration data collected from gear tests with pitting faults in a gearbox test rig and compared with an existing deep learning-based approach.

  15. In-Storage Embedded Accelerator for Sparse Pattern Processing

    OpenAIRE

    Jun, Sang-Woo; Nguyen, Huy T.; Gadepally, Vijay N.; Arvind

    2016-01-01

    We present a novel architecture for sparse pattern processing, using flash storage with embedded accelerators. Sparse pattern processing on large data sets is the essence of applications such as document search, natural language processing, bioinformatics, subgraph matching, machine learning, and graph processing. One slice of our prototype accelerator is capable of handling up to 1TB of data, and experiments show that it can outperform C/C++ software solutions on a 16-core system at a fracti...

  16. Process Knowledge Discovery Using Sparse Principal Component Analysis

    DEFF Research Database (Denmark)

    Gao, Huihui; Gajjar, Shriram; Kulahci, Murat

    2016-01-01

    As the goals of ensuring process safety and energy efficiency become ever more challenging, engineers increasingly rely on data collected from such processes for informed decision making. During recent decades, extracting and interpreting valuable process information from large historical data sets...... SPCA approach that helps uncover the underlying process knowledge regarding variable relations. This approach systematically determines the optimal sparse loadings for each sparse PC while improving interpretability and minimizing information loss. The salient features of the proposed approach...
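
    To make the idea of sparse loadings concrete, the sketch below runs generic scikit-learn SparsePCA (not the specific SPCA variant of this record) on synthetic process data in which two latent factors each drive only a few variables; the data, dimensions, and penalty are placeholder assumptions.

      import numpy as np
      from sklearn.decomposition import SparsePCA

      rng = np.random.default_rng(0)

      # Synthetic process data: 500 samples of 10 variables driven by 2 latent factors,
      # each factor loading only on a small subset of variables.
      latent = rng.standard_normal((500, 2))
      loadings = np.zeros((2, 10))
      loadings[0, :3] = [1.0, 0.8, 0.6]      # factor 1 -> variables 0-2
      loadings[1, 5:8] = [1.0, -0.7, 0.5]    # factor 2 -> variables 5-7
      X = latent @ loadings + 0.1 * rng.standard_normal((500, 10))

      spca = SparsePCA(n_components=2, alpha=1.0, random_state=0)
      spca.fit(X)

      # Sparse loadings make it easy to read off which variables move together.
      print(np.round(spca.components_, 2))

    Each sparse component loads on only a handful of variables, which is what makes the underlying variable relations easy to interpret.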

  17. Interpreting equilibrium-conductivity and conductivity-relaxation measurements to establish thermodynamic and transport properties for multiple charged defect conducting ceramics.

    Science.gov (United States)

    Zhu, Huayang; Ricote, Sandrine; Coors, W Grover; Kee, Robert J

    2015-01-01

    A model-based interpretation of measured equilibrium conductivity and conductivity relaxation is developed to establish thermodynamic, transport, and kinetics parameters for multiple charged defect conducting (MCDC) ceramic materials. The present study focuses on 10% yttrium-doped barium zirconate (BZY10). In principle, using the Nernst-Einstein relationship, equilibrium conductivity measurements are sufficient to establish thermodynamic and transport properties. However, in practice it is difficult to establish unique sets of properties using equilibrium conductivity alone. Combining equilibrium and conductivity-relaxation measurements serves to significantly improve the quantitative fidelity of the derived material properties. The models are developed using a Nernst-Planck-Poisson (NPP) formulation, which enables the quantitative representation of conductivity relaxations caused by very large changes in oxygen partial pressure.

  18. Massively parallel sparse matrix function calculations with NTPoly

    Science.gov (United States)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
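
    The core idea, computing a matrix function through sparse matrix-matrix products alone, can be sketched in a few lines. The snippet below evaluates a truncated Taylor series of exp(A) for a small sparse symmetric matrix and checks it against SciPy's expm; it only illustrates the polynomial-expansion principle, not NTPoly's algorithms or interface, and the test matrix and series length are assumptions.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Random sparse symmetric matrix with small norm so a short Taylor series converges.
      A = sp.random(200, 200, density=0.02, random_state=0)
      A = 0.05 * (A + A.T)

      def expm_taylor(A, terms=20):
          # exp(A) via truncated Taylor series, using only sparse mat-mat products.
          result = sp.identity(A.shape[0], format='csr')
          term = sp.identity(A.shape[0], format='csr')
          for k in range(1, terms + 1):
              term = (term @ A) / k
              result = result + term
          return result

      approx = expm_taylor(A.tocsr())
      exact = spla.expm(A.tocsc())
      print("max abs error:", abs((approx - exact).toarray()).max())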

  19. Deformable segmentation via sparse representation and dictionary learning.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Sparseness- and continuity-constrained seismic imaging

    Science.gov (United States)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.

  1. Combinatorial Algorithms for Computing Column Space Bases That Have Sparse Inverses

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali; Chow, Edmond; Pothen, Alex

    2005-03-18

    This paper presents a combinatorial study on the problem of constructing a sparse basis for the null-space of a sparse, underdetermined, full rank matrix, A. Such a null-space is suitable for solving many saddle point problems. Our approach is to form a column space basis of A that has a sparse inverse, by selecting suitable columns of A. This basis is then used to form a sparse null-space basis in fundamental form. We investigate three different algorithms for computing the column space basis: two greedy approaches that rely on matching, and a third employing a divide and conquer strategy implemented with hypergraph partitioning followed by the greedy approach. We also discuss the complexity of selecting a column basis when it is known that a block diagonal basis exists with a small given block size.
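
    The "fundamental form" null-space basis mentioned above can be written down directly once a column basis B has been chosen: up to a column permutation it is [-B^{-1}N; I]. The dense NumPy sketch below uses column-pivoted QR to pick B, purely as a stand-in for the matching and hypergraph-partitioning heuristics studied in the paper, and the random test matrix is an assumption.

      import numpy as np
      from scipy.linalg import qr, solve

      rng = np.random.default_rng(0)
      m, n = 6, 10                       # A is full row rank and underdetermined
      A = rng.standard_normal((m, n))

      # Column-pivoted QR picks m well-conditioned columns to serve as the basis B
      # (a stand-in for the matching / hypergraph heuristics studied in the paper).
      _, _, piv = qr(A, pivoting=True)
      basis_cols, free_cols = piv[:m], piv[m:]
      B, N = A[:, basis_cols], A[:, free_cols]

      # Fundamental-form null-space basis: columns are [-B^{-1} N; I], with the
      # column permutation undone so that A @ V == 0.
      V = np.zeros((n, n - m))
      V[basis_cols, :] = -solve(B, N)
      V[free_cols, :] = np.eye(n - m)

      print("residual ||A V||:", np.linalg.norm(A @ V))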

  2. Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Detian Huang

    2018-01-01

    Full Text Available Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.
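
    The zero-phase component analysis (ZCA) whitening step mentioned above is compact enough to sketch directly: it decorrelates the joint training matrix while keeping the result in the original feature space. The patch data, sizes, and epsilon below are placeholder assumptions, not values from the paper.

      import numpy as np

      def zca_whiten(X, eps=1e-5):
          # ZCA-whiten rows of X: zero mean, (near) identity covariance, same feature space.
          X = X - X.mean(axis=0)
          cov = X.T @ X / X.shape[0]
          U, S, _ = np.linalg.svd(cov)            # cov is symmetric positive semidefinite
          W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
          return X @ W, W

      rng = np.random.default_rng(0)
      patches = rng.standard_normal((1000, 64))   # stand-in for vectorized joint HR/LR patches
      white, W = zca_whiten(patches)
      print(np.round(np.cov(white, rowvar=False)[:3, :3], 2))   # approximately an identity block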

  3. Efficient implementations of block sparse matrix operations on shared memory vector machines

    International Nuclear Information System (INIS)

    Washio, T.; Maruyama, K.; Osoda, T.; Doi, S.; Shimizu, F.

    2000-01-01

    In this paper, we propose vectorization and shared memory-parallelization techniques for block-type random sparse matrix operations in finite element (FEM) applications. Here, a block corresponds to unknowns on one node in the FEM mesh and we assume that the block size is constant over the mesh. First, we discuss some basic vectorization ideas (the jagged diagonal (JAD) format and the segmented scan algorithm) for the sparse matrix-vector product. Then, we extend these ideas to the shared memory parallelization. After that, we show that the techniques can be applied not only to the sparse matrix-vector product but also to the sparse matrix-matrix product, the incomplete or complete sparse LU factorization and preconditioning. Finally, we report the performance evaluation results obtained on an NEC SX-4 shared memory vector machine for linear systems in some FEM applications. (author)
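
    A plain-NumPy sketch of the jagged diagonal (JAD) idea is shown below: rows are sorted by decreasing number of nonzeros and the matrix-vector product is accumulated one jagged diagonal at a time, which is the long vector operation a machine like the SX-4 would execute natively; the Python loop only emulates that, and the random test matrix is an assumption.

      import numpy as np
      import scipy.sparse as sp

      def to_jad(A_csr):
          # Convert a CSR matrix to jagged-diagonal (JAD) storage.
          A = A_csr.tocsr()
          n = A.shape[0]
          counts = np.diff(A.indptr)
          perm = np.argsort(-counts, kind='stable')      # rows sorted by nnz, descending
          rows = [A.indices[A.indptr[i]:A.indptr[i + 1]] for i in perm]
          vals = [A.data[A.indptr[i]:A.indptr[i + 1]] for i in perm]
          jdiags = []
          for d in range(counts.max() if n else 0):
              length = np.sum(counts[perm] > d)          # rows long enough for diagonal d
              jdiags.append((np.array([vals[r][d] for r in range(length)]),
                             np.array([rows[r][d] for r in range(length)])))
          return perm, jdiags

      def jad_matvec(perm, jdiags, x):
          y = np.zeros(len(perm))
          for vals, cols in jdiags:                      # one long vector op per jagged diagonal
              y[:len(vals)] += vals * x[cols]
          out = np.empty_like(y)
          out[perm] = y                                  # undo the row permutation
          return out

      A = sp.random(200, 200, density=0.05, format='csr', random_state=0)
      x = np.random.default_rng(1).standard_normal(200)
      perm, jdiags = to_jad(A)
      print(np.allclose(jad_matvec(perm, jdiags, x), A @ x))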

  4. A Projected Conjugate Gradient Method for Sparse Minimax Problems

    DEFF Research Database (Denmark)

    Madsen, Kaj; Jonasson, Kristjan

    1993-01-01

    A new method for nonlinear minimax problems is presented. The method is of the trust region type and based on sequential linear programming. It is a first order method that only uses first derivatives and does not approximate Hessians. The new method is well suited for large sparse problems...... as it only requires that software for sparse linear programming and a sparse symmetric positive definite equation solver are available. On each iteration a special linear/quadratic model of the function is minimized, but contrary to the usual practice in trust region methods the quadratic model is only...... with the method are presented. In fact, we find that the number of iterations required is comparable to that of state-of-the-art quasi-Newton codes....

  5. Identification of MIMO systems with sparse transfer function coefficients

    Science.gov (United States)

    Qiu, Wanzhi; Saleem, Syed Khusro; Skafidas, Efstratios

    2012-12-01

    We study the problem of estimating transfer functions of multivariable (multiple-input multiple-output--MIMO) systems with sparse coefficients. We note that subspace identification methods are powerful and convenient tools in dealing with MIMO systems since they neither require nonlinear optimization nor impose any canonical form on the systems. However, subspace-based methods are inefficient for systems with sparse transfer function coefficients since they work on state space models. We propose a two-step algorithm where the first step identifies the system order using the subspace principle in a state space format, while the second step estimates coefficients of the transfer functions via L1-norm convex optimization. The proposed algorithm retains good features of subspace methods with improved noise-robustness for sparse systems.
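
    The second step, the L1-penalized estimate of sparse transfer function coefficients, can be sketched for a single-input single-output FIR channel as below; the subspace-based order selection of the first step is not reproduced, and the channel, tap positions, and regularization weight are illustrative assumptions.

      import numpy as np
      from scipy.linalg import toeplitz
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)

      # Sparse FIR "transfer function": 40 taps, only 4 of them nonzero.
      h_true = np.zeros(40)
      h_true[[2, 7, 19, 33]] = [1.0, -0.6, 0.3, 0.2]

      u = rng.standard_normal(500)                        # input sequence
      y = np.convolve(u, h_true)[:500] + 0.01 * rng.standard_normal(500)

      # Regression matrix whose k-th column is the input delayed by k samples.
      U = toeplitz(u, np.r_[u[0], np.zeros(39)])

      lasso = Lasso(alpha=1e-2, fit_intercept=False, max_iter=10000)
      lasso.fit(U, y)
      print("estimated nonzero taps:", np.flatnonzero(np.abs(lasso.coef_) > 0.05))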

  6. MULTISCALE SPARSE APPEARANCE MODELING AND SIMULATION OF PATHOLOGICAL DEFORMATIONS

    Directory of Open Access Journals (Sweden)

    Rami Zewail

    2017-08-01

    Full Text Available Machine learning and statistical modeling techniques have drawn much interest within the medical imaging research community. However, clinically-relevant modeling of anatomical structures continues to be a challenging task. This paper presents a novel method for multiscale sparse appearance modeling in medical images, with application to the simulation of pathological deformations in X-ray images of the human spine. The proposed appearance model benefits from the non-linear approximation power of Contourlets and their ability to capture higher order singularities to achieve a sparse representation while preserving the accuracy of the statistical model. Independent Component Analysis is used to extract statistically independent modes of variation from the sparse Contourlet-based domain. The new model is then used to simulate clinically-relevant pathological deformations in radiographic images.

  7. Phase equilibrium constraints on the origin of basalts, picrites, and komatiites

    Science.gov (United States)

    Herzberg, C.; O'Hara, M. J.

    1998-07-01

    Experimental phase equilibrium studies at pressures ranging from 1 atm to 10 GPa are sufficient to constrain the origin of igneous rocks formed along oceanic ridges and in hotspots. The major element geochemistry of MORB is dominated by partial crystallization at low pressures in the oceanic crust and uppermost mantle, forcing compliance with liquid compositions in low-pressure cotectic equilibrium with olivine, plagioclase and often augite too; parental magmas to MORB formed by partial melting, mixing, and pooling have not survived these effects. Similarly, picrites and komatiites can transform to basalts by partial crystallization in the crust and lithosphere. However, parental picrites and komatiites that were successful in erupting to the surface typically have compositions that can be matched to experimentally-observed anhydrous primary magmas in equilibrium with harzburgite [L+Ol+Opx] at 3.0 to 4.5 GPa. This pressure is likely to represent an average for pooled magmas that collected at the top of a plume head as it flattened below the lithosphere. There is substantial uniformity in the normative olivine content of primary magmas at all depths in a plume melt column, and this results in pooled komatiitic magmas that are equally uniform in normative olivine. However, the imposition of pressure above 3 GPa produces picrites and komatiites with variations in normative enstatite and Al2O3 that reveal plume potential temperature and depths of initial melting. Hotter plumes begin to melt deeper than cooler plumes, yielding picrites and komatiites that are enriched in normative enstatite and depleted in Al2O3 because of a deeper column within which orthopyroxene can dissolve during decompression. Pressures of initial melting span the 4 to 10 GPa range, increasing in the following order: Iceland, Hawaii, Gorgona, Belingwe, Barberton. Parental komatiites and picrites from a single plume also exhibit internal variability in normative enstatite and Al2O3

  8. An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient

    KAUST Repository

    Nobile, Fabio

    2016-03-18

    In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points, and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable to drive the adaptation process. We then use this algorithm to solve an important test case in Uncertainty Quantification, namely the Darcy equation with lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we have proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids have performances similar to those of the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation also makes it possible to tackle problems with rough coefficients efficiently, significantly improving the performance of a standard Monte Carlo scheme.
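
    The control-variate idea in the last sentences can be illustrated with a one-dimensional toy: a cheap surrogate with analytically known mean stands in for the sparse-grid approximation and is used to reduce the variance of a plain Monte Carlo estimate. The integrand and surrogate below are arbitrary assumptions, chosen only so that the surrogate's mean is exact.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000
      x = rng.random(N)

      f = np.exp(np.sin(x))                # "expensive" quantity of interest (placeholder)
      g = 1.0 + np.sin(x)                  # cheap surrogate with a known mean over U(0, 1)
      g_mean = 1.0 + (1.0 - np.cos(1.0))   # exact: 1 + integral of sin(x) from 0 to 1

      # Optimal control-variate coefficient and corrected estimator.
      beta = np.cov(f, g)[0, 1] / np.var(g)
      plain = f.mean()
      cv = (f - beta * (g - g_mean)).mean()

      print("plain MC estimate:   ", plain)
      print("control-variate MC:  ", cv)
      print("variance reduction x ", np.var(f) / np.var(f - beta * (g - g_mean)))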

  9. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.

  10. Inference algorithms and learning theory for Bayesian sparse factor analysis

    International Nuclear Information System (INIS)

    Rattray, Magnus; Sharp, Kevin; Stegle, Oliver; Winn, John

    2009-01-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  11. Inference algorithms and learning theory for Bayesian sparse factor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rattray, Magnus; Sharp, Kevin [School of Computer Science, University of Manchester, Manchester M13 9PL (United Kingdom); Stegle, Oliver [Max-Planck-Institute for Biological Cybernetics, Tuebingen (Germany); Winn, John, E-mail: magnus.rattray@manchester.ac.u [Microsoft Research Cambridge, Roger Needham Building, Cambridge, CB3 0FB (United Kingdom)

    2009-12-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  12. Isospin equilibrium and non-equilibrium in heavy-ion collisions at intermediate energies

    International Nuclear Information System (INIS)

    Chen Liewen; Ge Lingxiao; Zhang Xiaodong; Zhang Fengshou

    1997-01-01

    The equilibrium and non-equilibrium of the isospin degree of freedom are studied in terms of an isospin-dependent QMD model, which includes isospin-dependent symmetry energy, Coulomb energy, N-N cross sections and Pauli blocking. It is shown that there exists a transition from isospin equilibrium to non-equilibrium as the incident energy increases from below to above a threshold energy in central, asymmetric heavy-ion collisions. Meanwhile, it is found that the phenomenon results from the co-existence and competition of different reaction mechanisms, namely, the isospin degree of freedom reaches an equilibrium if the incomplete fusion (ICF) component is dominant and does not reach equilibrium if the fragmentation component is dominant. Moreover, it is also found that the isospin-dependent N-N cross sections and symmetry energy are crucial for the equilibrium of the isospin degree of freedom in heavy-ion collisions around the Fermi energy. (author)

  13. Universal Regularizers For Robust Sparse Coding and Modeling

    OpenAIRE

    Ramirez, Ignacio; Sapiro, Guillermo

    2010-01-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding...

  14. Deep ensemble learning of sparse regression models for brain disease diagnosis.

    Science.gov (United States)

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2017-04-01

    Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data but with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have been making great successes by outperforming the state-of-the-art performances in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with different values of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which thus we call 'Deep Ensemble Sparse Regression Network.' To our best knowledge, this is the first work that combines sparse regression models with deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared with the previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.

  16. Efficient coordinated recovery of sparse channels in massive MIMO

    KAUST Repository

    Masood, Mudassir

    2015-01-01

    This paper addresses the problem of estimating sparse channels in massive MIMO-OFDM systems. Most wireless channels are sparse in nature with large delay spread. In addition, these channels as observed by multiple antennas in a neighborhood have approximately common support. The sparsity and common support properties are attractive when it comes to the efficient estimation of large number of channels in massive MIMO systems. Moreover, to avoid pilot contamination and to achieve better spectral efficiency, it is important to use a small number of pilots. We present a novel channel estimation approach which utilizes the sparsity and common support properties to estimate sparse channels and requires a small number of pilots. Two algorithms based on this approach have been developed that perform Bayesian estimates of sparse channels even when the prior is non-Gaussian or unknown. Neighboring antennas share among each other their beliefs about the locations of active channel taps to perform estimation. The coordinated approach improves channel estimates and also reduces the required number of pilots. Further improvement is achieved by the data-aided version of the algorithm. Extensive simulation results are provided to demonstrate the performance of the proposed algorithms.

  17. Non-Equilibrium Liouville and Wigner Equations: Moment Methods and Long-Time Approximations

    Directory of Open Access Journals (Sweden)

    Ramon F. Álvarez-Estrada

    2014-03-01

    Full Text Available We treat the non-equilibrium evolution of an open one-particle statistical system, subject to a potential and to an external “heat bath” (hb with negligible dissipation. For the classical equilibrium Boltzmann distribution, Wc,eq, a non-equilibrium three-term hierarchy for moments fulfills Hermiticity, which allows one to justify an approximate long-time thermalization. That gives partial dynamical support to Boltzmann’s Wc,eq, out of the set of classical stationary distributions, Wc;st, also investigated here, for which neither Hermiticity nor that thermalization hold, in general. For closed classical many-particle systems without hb (by using Wc,eq, the long-time approximate thermalization for three-term hierarchies is justified and yields an approximate Lyapunov function and an arrow of time. The largest part of the work treats an open quantum one-particle system through the non-equilibrium Wigner function, W. Weq for a repulsive finite square well is reported. W’s (< 0 in various cases are assumed to be quasi-definite functionals regarding their dependences on momentum (q. That yields orthogonal polynomials, HQ,n(q, for Weq (and for stationary Wst, non-equilibrium moments, Wn, of W and hierarchies. For the first excited state of the harmonic oscillator, its stationary Wst is a quasi-definite functional, and the orthogonal polynomials and three-term hierarchy are studied. In general, the non-equilibrium quantum hierarchies (associated with Weq for the Wn’s are not three-term ones. As an illustration, we outline a non-equilibrium four-term hierarchy and its solution in terms of generalized operator continued fractions. Such structures also allow one to formulate long-time approximations, but make it more difficult to justify thermalization. For large thermal and de Broglie wavelengths, the dominant Weq and a non-equilibrium equation for W are reported: the non-equilibrium hierarchy could plausibly be a three-term one and possibly not

  18. Agonists and partial agonists of rhodopsin: retinal polyene methylation affects receptor activation.

    Science.gov (United States)

    Vogel, Reiner; Lüdeke, Steffen; Siebert, Friedrich; Sakmar, Thomas P; Hirshfeld, Amiram; Sheves, Mordechai

    2006-02-14

    Using Fourier transform infrared (FTIR) difference spectroscopy, we have studied the impact of sites and extent of methylation of the retinal polyene with respect to position and thermodynamic parameters of the conformational equilibrium between the Meta I and Meta II photoproducts of rhodopsin. Deletion of methyl groups to form 9-demethyl and 13-demethyl analogues, as well as addition of a methyl group at C10 or C12, shifted the Meta I/Meta II equilibrium toward Meta I, such that the retinal analogues behaved like partial agonists. This equilibrium shift resulted from an apparent reduction of the entropy gain of the transition of up to 65%, which was only partially offset by a concomitant reduction of the enthalpy increase. The analogues produced Meta II photoproducts with relatively small alterations, while their Meta I states were significantly altered, which accounted for the aberrant transitions to Meta II. Addition of a methyl group at C14 influenced the thermodynamic parameters but had little impact on the position of the Meta I/Meta II equilibrium. Neutralization of the residue 134 in the E134Q opsin mutant increased the Meta II content of the 13-demethyl analogue, but not of the 9-demethyl analogue, indicating a severe impairment of the allosteric coupling between the conserved cytoplasmic ERY motif involved in proton uptake and the Schiff base/Glu 113 microdomain in the 9-demethyl analogue. The 9-methyl group appears therefore essential for the correct positioning of retinal to link protonation of the cytoplasmic motif with protonation of Glu 113 during receptor activation.

  19. Robust Fringe Projection Profilometry via Sparse Representation.

    Science.gov (United States)

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using the sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail to perform if the captured fringe images have a complex scene, such as having multiple and occluded objects. It introduces great difficulty to the phase unwrapping process of an FPP system that can result in serious distortion in the final reconstructed 3D model. For the proposed algorithm, it encodes the period order information, which is essential to phase unwrapping, into some texture patterns and embeds them to the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. It is then used to assist the phase unwrapping process to deal with the different artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well no matter the fringe images have a simple or complex scene, or are affected due to the ambient lighting of the working environment.

  20. Sparse DOA estimation with polynomial rooting

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren

    2015-01-01

    Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve highresol......Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve...... highresolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods...

  1. A General Sparse Tensor Framework for Electronic Structure Theory.

    Science.gov (United States)

    Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I; Head-Gordon, Martin

    2017-03-14

    Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. However, the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.

  2. Equilibrium solubility of carbon dioxide in the amine solvent system of (triethanolamine + piperazine + water)

    International Nuclear Information System (INIS)

    Chung, P.-Y.; Soriano, Allan N.; Leron, Rhoda B.; Li, M.-H.

    2010-01-01

    In this study, a new set of data for the equilibrium solubility of carbon dioxide in the amine solvent system that consists of triethanolamine (TEA), piperazine (PZ), and water is presented. Equilibrium solubility values were obtained at T = (313.2, 333.2, and 353.2) K and pressures up to 153 kPa using the vapour-recirculation equilibrium cell. The TEA concentrations in the considered ternary (solvent) mixture were (2 and 3) kmol·m⁻³ and those of PZ were (0.5, 1.0, and 1.5) kmol·m⁻³. The solubility data (CO2 loading in the amine solution) obtained were correlated as a function of CO2 partial pressure, system temperature, and amine composition via the modified Kent-Eisenberg model. Results showed that the model applied is generally satisfactory in representing the CO2 absorption into mixed aqueous solutions of TEA and PZ.

  3. Low-rank and sparse modeling for visual analysis

    CERN Document Server

    Fu, Yun

    2014-01-01

    This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. Contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applic

  4. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method to reconstruct iteratively both the coil sensitivities and MR image simultaneously based on their prior information. Parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint in the image as done in compressed sensing, but is different from compressed sensing in that the sensing matrix is unknown and additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLind Iterative Parallel reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLind Iterative Parallel algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  5. Hopf bifurcation in a partial dependent predator-prey system with delay

    International Nuclear Information System (INIS)

    Zhao Huitao; Lin Yiping

    2009-01-01

    In this paper, a partial dependent predator-prey model with time delay is studied by using the theory of functional differential equations and Hassard's method; the conditions under which a positive equilibrium exists and a Hopf bifurcation occurs are given. Finally, numerical simulations are performed to support the analytical results, and chaotic behaviors are observed.

  6. Real-time SPARSE-SENSE cardiac cine MR imaging: optimization of image reconstruction and sequence validation.

    Science.gov (United States)

    Goebel, Juliane; Nensa, Felix; Bomas, Bettina; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-12-01

    Improved real-time cardiac magnetic resonance (CMR) sequences have recently been introduced, but so far only limited practical experience exists. This study aimed at image reconstruction optimization and clinical validation of a new highly accelerated real-time cine SPARSE-SENSE sequence. Left ventricular (LV) short-axis stacks of a real-time free-breathing SPARSE-SENSE sequence with high spatiotemporal resolution and of a standard segmented cine SSFP sequence were acquired at 1.5 T in 11 volunteers and 15 patients. To determine the optimal iterations, all volunteers' SPARSE-SENSE images were reconstructed using 10-200 iterations, and contrast ratios, image entropies, and reconstruction times were assessed. Subsequently, the patients' SPARSE-SENSE images were reconstructed with the clinically optimal iterations. LV volumetric values were evaluated and compared between both sequences. Sufficient image quality and acceptable reconstruction times were achieved when using 80 iterations. Bland-Altman plots and Passing-Bablok regression showed good agreement for all volumetric parameters. 80 iterations are recommended for iterative SPARSE-SENSE image reconstruction in clinical routine. Real-time cine SPARSE-SENSE yielded volumetric results comparable to the current standard SSFP sequence. Due to its intrinsically low image acquisition times, real-time cine SPARSE-SENSE imaging with iterative image reconstruction seems to be an attractive alternative for LV function analysis. • A highly accelerated real-time CMR sequence using SPARSE-SENSE was evaluated. • SPARSE-SENSE allows free breathing in real-time cardiac cine imaging. • For clinically optimal SPARSE-SENSE image reconstruction, 80 iterations are recommended. • Real-time SPARSE-SENSE imaging yielded volumetric results comparable to the reference SSFP sequence. • The fast SPARSE-SENSE sequence is an attractive alternative to standard SSFP sequences.

  7. Security-enhanced phase encryption assisted by nonlinear optical correlation via sparse phase

    International Nuclear Information System (INIS)

    Chen, Wen; Chen, Xudong; Wang, Xiaogang

    2015-01-01

    We propose a method for security-enhanced phase encryption assisted by a nonlinear optical correlation via a sparse phase. Optical configurations are established based on a phase retrieval algorithm for embedding an input image and the secret data into phase-only masks. We found that when one or a few phase-only masks generated during data hiding are sparse, it is possible to integrate these sparse masks into those phase-only masks generated during the encoding of the input image. Synthesized phase-only masks are used for the recovery, and sparse distributions (i.e., binary maps) for generating the incomplete phase-only masks are considered as additional parameters for the recovery of secret data. It is difficult for unauthorized receivers to know that a useful phase has been sparsely distributed in the finally generated phase-only masks for secret-data recovery. Only when the secret data are correctly verified can the input image obtained with valid keys be claimed as targeted information. (paper)

  8. Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.

    Science.gov (United States)

    Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen

    2015-04-01

    In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multi-objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms.

  9. Low-rank sparse learning for robust visual tracking

    KAUST Repository

    Zhang, Tianzhu

    2012-01-01

    In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1]. © 2012 Springer-Verlag.

  10. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    Science.gov (United States)

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of the space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be enforced small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
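
    The workhorse of group-sparse coding is the group (block) soft-thresholding operator, which shrinks whole groups of coefficients toward zero together. The minimal sketch below applies it to a vector with fixed, hand-chosen groups; the DL-GSGR dictionary update and graph regularization described above are not reproduced, and the group layout and threshold are assumptions.

      import numpy as np

      def group_soft_threshold(c, groups, lam):
          # Prox of lam * sum_g ||c_g||_2: shrink each group toward zero as a block.
          out = np.zeros_like(c)
          for g in groups:
              norm = np.linalg.norm(c[g])
              if norm > lam:
                  out[g] = (1.0 - lam / norm) * c[g]
          return out

      rng = np.random.default_rng(0)
      c = rng.standard_normal(12)
      groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]

      shrunk = group_soft_threshold(c, groups, lam=1.5)
      # Groups whose norm falls below lam are zeroed out entirely.
      print([np.round(np.linalg.norm(shrunk[g]), 2) for g in groups])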

  11. Sparse electromagnetic imaging using nonlinear iterative shrinkage thresholding

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2015-01-01

    A sparse nonlinear electromagnetic imaging scheme is proposed for reconstructing dielectric contrast of investigation domains from measured fields. The proposed approach constructs the optimization problem by introducing the sparsity constraint to the data misfit between the scattered fields expressed as a nonlinear function of the contrast and the measured fields and solves it using the nonlinear iterative shrinkage thresholding algorithm. The thresholding is applied to the result of every nonlinear Landweber iteration to enforce the sparsity constraint. Numerical results demonstrate the accuracy and efficiency of the proposed method in reconstructing sparse dielectric profiles.

  12. Sparse electromagnetic imaging using nonlinear iterative shrinkage thresholding

    KAUST Repository

    Desmal, Abdulla

    2015-04-13

    A sparse nonlinear electromagnetic imaging scheme is proposed for reconstructing dielectric contrast of investigation domains from measured fields. The proposed approach constructs the optimization problem by introducing the sparsity constraint to the data misfit between the scattered fields expressed as a nonlinear function of the contrast and the measured fields and solves it using the nonlinear iterative shrinkage thresholding algorithm. The thresholding is applied to the result of every nonlinear Landweber iteration to enforce the sparsity constraint. Numerical results demonstrate the accuracy and efficiency of the proposed method in reconstructing sparse dielectric profiles.
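
    For a linear forward operator, the Landweber-plus-thresholding loop described in these two records reduces to the classical iterative shrinkage-thresholding algorithm (ISTA). The sketch below solves a small linear sparse-recovery problem in that spirit; the records' electromagnetic problem is nonlinear, which this sketch does not attempt, and the sizes, step size, and threshold are assumptions.

      import numpy as np

      def ista(A, y, lam=0.05, n_iter=200):
          # Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
          step = 1.0 / np.linalg.norm(A, 2) ** 2                    # Landweber step size
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x - step * A.T @ (A @ x - y)                      # gradient (Landweber) update
              x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((80, 200)) / np.sqrt(80)
      x_true = np.zeros(200)
      x_true[rng.choice(200, 8, replace=False)] = 1.0
      y = A @ x_true + 0.01 * rng.standard_normal(80)

      x_hat = ista(A, y)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))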

  13. On the definition of equilibrium and non-equilibrium states in dynamical systems

    OpenAIRE

    Akimoto, Takuma

    2008-01-01

    We propose a definition of equilibrium and non-equilibrium states in dynamical systems on the basis of the time average. We show numerically that there exists a non-equilibrium non-stationary state in the coupled modified Bernoulli map lattice.

  14. Uranium mineral - groundwater equilibrium at the Palmottu natural analogue study site, Finland

    International Nuclear Information System (INIS)

    Ahonen, L.; Ruskeeniemi, T.; Blomqvist, R.; Ervanne, H.; Jaakkola, T.

    1993-01-01

    The redox-potential, pH, chemical composition of fracture waters, and uraninite alteration associated with the Palmottu uranium mineralization (a natural analogue study site for radioactive waste disposal in southwestern Finland), have been studied. The data have been interpreted by means of thermodynamic calculations. The results indicate equilibrium between uraninite, ferric hydroxide and groundwater in the bedrock of the study site. Partially oxidized uraninite (UO2.33) and ferric hydroxide are in equilibrium with a fresh, slightly acidic and oxidized water type, while primary uraninite is stable with deeper waters that have a higher pH and lower Eh. Measured Eh-pH values of groundwater cluster within a relatively narrow range, indicating buffering by heterogeneous redox processes. A good consistency between measured Eh and analyzed uranium oxidation states was observed.

  15. On the Automatic Parallelization of Sparse and Irregular Fortran Programs

    Directory of Open Access Journals (Sweden)

    Yuan Lin

    1999-01-01

    Full Text Available Automatic parallelization is usually believed to be less effective at exploiting implicit parallelism in sparse/irregular programs than in their dense/regular counterparts. However, not much is really known because there have been few research reports on this topic. In this work, we have studied the possibility of using an automatic parallelizing compiler to detect the parallelism in sparse/irregular programs. The study with a collection of sparse/irregular programs led us to some common loop patterns. Based on these patterns new techniques were derived that produced good speedups when manually applied to our benchmark codes. More importantly, these parallelization methods can be implemented in a parallelizing compiler and can be applied automatically.

  16. Split-Bregman-based sparse-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Vandeghinste, Bert; Vandenberghe, Stefaan [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Goossens, Bart; Pizurica, Aleksandra; Philips, Wilfried [Ghent Univ. (Belgium). Image Processing and Interpretation Research Group (IPI); Beenhouwer, Jan de [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Antwerp Univ., Wilrijk (Belgium). The Vision Lab; Staelens, Steven [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Antwerp Univ., Edegem (Belgium). Molecular Imaging Centre Antwerp

    2011-07-01

    Total variation minimization has been extensively researched for image denoising and sparse-view reconstruction. These methods show superior denoising performance for simple images with little texture, but result in texture information loss when applied to more complex images. It could thus be beneficial to use other regularizers within medical imaging. We propose a general regularization method based on a split-Bregman approach. We show results for this framework combined with a total variation denoising operator, in comparison to ASD-POCS. We show that sparse-view reconstruction and noise regularization are possible. This general method will allow us to investigate other regularizers in the context of regularized CT reconstruction, and to decrease the acquisition times in μCT. (orig.)

  17. Equilibrium and generators

    International Nuclear Information System (INIS)

    Balter, H.S.

    1994-01-01

    This work studies the behaviour of radionuclides as they undergo disintegration and decay and produce stable isotopes. It gives definitions of the equilibrium between the activity of the parent and the activity of the daughter, radioactive decay, stable isotopes, transient equilibrium, and the time of maximum activity. Some consideration is given to generators, which permit the separation of two radioisotopes in equilibrium, and to their good performance. Tabs

  18. Approximate thermodynamic state relations in partially ionized gas mixtures

    International Nuclear Information System (INIS)

    Ramshaw, John D.

    2004-01-01

    Thermodynamic state relations for mixtures of partially ionized nonideal gases are often approximated by artificially partitioning the mixture into compartments or subvolumes occupied by the pure partially ionized constituent gases, and requiring these subvolumes to be in temperature and pressure equilibrium. This intuitively reasonable procedure is easily shown to reproduce the correct thermal and caloric state equations for a mixture of neutral (nonionized) ideal gases. The purpose of this paper is to point out that (a) this procedure leads to incorrect state equations for a mixture of partially ionized ideal gases, whereas (b) the alternative procedure of requiring that the subvolumes all have the same temperature and free electron density reproduces the correct thermal and caloric state equations for such a mixture. These results readily generalize to the case of partially degenerate and/or relativistic electrons, to a common approximation used to represent pressure ionization effects, and to two-temperature plasmas. This suggests that equating the subvolume electron number densities or chemical potentials instead of pressures is likely to provide a more accurate approximation in nonideal plasma mixtures

  19. A framework for general sparse matrix-matrix multiplication on GPUs and heterogeneous processors

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2015-01-01

    General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method (AMG), breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM implementation has to handle...... extra irregularity from three aspects: (1) the number of nonzero entries in the resulting sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the resulting sparse matrix dominate the execution time, and (3) load balancing must account for sparse data...... memory space and efficiently utilizes the very limited on-chip scratchpad memory. Parallel insert operations of the nonzero entries are implemented through the GPU merge path algorithm that is experimentally found to be the fastest GPU merge approach. Load balancing builds on the number of necessary...
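
    A small scipy-based sketch (unrelated to the paper's GPU implementation) illustrates the first source of irregularity: the number of nonzeros of C = AB is unknown before the multiplication, which is why SpGEMM codes often run a symbolic pass on the sparsity patterns before the numeric pass. Matrix sizes and densities below are arbitrary.

      import numpy as np
      import scipy.sparse as sp

      # toy matrices; sizes and densities are arbitrary
      A = sp.random(1000, 1000, density=0.005, format="csr", random_state=0)
      B = sp.random(1000, 1000, density=0.005, format="csr", random_state=1)

      # "symbolic" pass: multiply the sparsity patterns only to get the structure of C
      patA = A.astype(bool).astype(np.int32)
      patB = B.astype(bool).astype(np.int32)
      print("structural nnz of C:", (patA @ patB).nnz)

      # numeric pass: the actual multiplication; its nnz matches the structural count
      C = A @ B
      print("nnz of computed C: ", C.nnz)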

  20. Partial stabilization and control of distributed parameter systems with elastic elements

    CERN Document Server

    Zuyev, Alexander L

    2015-01-01

     This monograph provides a rigorous treatment of problems related to partial asymptotic stability and controllability for models of flexible structures described by coupled nonlinear ordinary and partial differential equations or equations in abstract spaces. The text is self-contained, beginning with some basic results from the theory of continuous semigroups of operators in Banach spaces. The problem of partial asymptotic stability with respect to a continuous functional is then considered for a class of abstract multivalued systems on a metric space. Next, the results of this study are applied to the study of a rotating body with elastic attachments. Professor Zuyev demonstrates that the equilibrium cannot be made strongly asymptotically stable in the general case, motivating consideration of the problem of partial stabilization with respect to the functional that represents “averaged” oscillations. The book’s focus moves on to spillover analysis for infinite-dimensional systems with finite-dimensio...

  1. Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.

    Science.gov (United States)

    Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen

    2017-08-29

    In recent years, sparse representation based classification (SRC) has been one of the most successful methods and has shown impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC will be degraded significantly. To address this problem, in this paper we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which the discriminative features of data in the two domains are simultaneously learned with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed based on the decision rule of SRC, with the objective of learning coupled projection matrices and a common discriminative dictionary such that the between-class sparse reconstruction residuals of data from both domains are maximized, and the within-class sparse reconstruction residuals of data are minimized in the projected low-dimensional space. Thus, the resulting representations can well fit SRC and simultaneously have better discriminant ability. In addition, our method can be easily extended to multiple domains and can be kernelized to deal with the nonlinear structure of data. The optimal solution for the proposed method can be efficiently obtained by following an alternating optimization procedure. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.

  2. Equilibrium Total Pressure and CO2 Solubility in Binary and Ternary Aqueous Solutions of 2-(Diethylamino)ethanol (DEEA) and 3-(Methylamino)propylamine (MAPA)

    DEFF Research Database (Denmark)

    Waseem Arshad, Muhammad; Svendsen, Hallvard Fjøsne; Fosbøl, Philip Loldrup

    2014-01-01

    Equilibrium total pressures were measured and equilibrium CO2 partial pressures were calculated from the measured total pressure data in binary and ternary aqueous solutions of 2-(diethylamino)ethanol (DEEA) and 3-(methylamino)propylamine (MAPA). The measurements were carried out in a commercially...... available calorimeter used as an equilibrium cell. The examined systems were the binary aqueous solutions of 5 M DEEA, 2 M MAPA, and 1 M MAPA and the ternary aqueous mixtures of 5 M DEEA + 2 M MAPA (5D2M) and 5 M DEEA + 1 M MAPA (5D1M), which gave liquid–liquid phase split upon CO2 absorption. The total...... pressures were measured and the CO2 partial pressures were calculated as a function of CO2 loading at three different temperatures 40 °C, 80 °C, and 120 °C. All experiments were reproduced with good repeatability. The measurements were carried out for 30 mass % MEA solutions to validate the experimental...

  3. Two-dimensional sparse wavenumber recovery for guided wavefields

    Science.gov (United States)

    Sabeti, Soroosh; Harley, Joel B.

    2018-04-01

    The multi-modal and dispersive behavior of guided waves is often characterized by their dispersion curves, which describe their frequency-wavenumber behavior. In prior work, compressive sensing based techniques, such as sparse wavenumber analysis (SWA), have been capable of recovering dispersion curves from limited data samples. A major limitation of SWA, however, is the assumption that the structure is isotropic. As a result, SWA fails when applied to composites and other anisotropic structures. There have been efforts to address this issue in the literature, but they either are not easily generalizable or do not sufficiently express the data. In this paper, we enhance the existing approaches by employing a two-dimensional wavenumber model to account for direction-dependent velocities in anisotropic media. We integrate this model with tools from compressive sensing to reconstruct a wavefield from incomplete data. Specifically, we create a modified two-dimensional orthogonal matching pursuit algorithm that takes an undersampled wavefield image, with specified unknown elements, and determines its sparse wavenumber characteristics. We then recover the entire wavefield from the sparse representations obtained with our small number of data samples.
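
    The greedy core of this kind of recovery is orthogonal matching pursuit; the sketch below shows a generic single-dictionary OMP on synthetic data, not the authors' modified two-dimensional variant or their wavenumber dictionary.

      import numpy as np

      def omp(D, y, k):
          # greedy recovery of a k-sparse code of y over the columns of D
          residual = y.copy()
          support = []
          x = np.zeros(D.shape[1])
          for _ in range(k):
              j = int(np.argmax(np.abs(D.T @ residual)))     # most correlated atom
              support.append(j)
              coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ coef            # orthogonalized residual
          x[support] = coef
          return x

      rng = np.random.default_rng(3)
      D = rng.standard_normal((64, 256))
      D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms
      x_true = np.zeros(256)
      x_true[[10, 100, 200]] = [2.0, -1.0, 0.5]
      x_hat = omp(D, D @ x_true, k=3)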

  4. Chemical Principles Revisited: Chemical Equilibrium.

    Science.gov (United States)

    Mickey, Charles D.

    1980-01-01

    Describes: (1) Law of Mass Action; (2) equilibrium constant and ideal behavior; (3) general form of the equilibrium constant; (4) forward and reverse reactions; (5) factors influencing equilibrium; (6) Le Chatelier's principle; (7) effects of temperature, changing concentration, and pressure on equilibrium; and (8) catalysts and equilibrium. (JN)
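
    For readers who want a concrete number, the snippet below evaluates the general equilibrium-constant expression K = [C]^c[D]^d / ([A]^a[B]^b) for an example reaction; the reaction and the equilibrium concentrations are invented for illustration.

      # toy evaluation of an equilibrium constant; reaction and concentrations are made up
      def equilibrium_constant(conc, coeff):
          # coeff: stoichiometric coefficients, negative for reactants, positive for products
          K = 1.0
          for species, nu in coeff.items():
              K *= conc[species] ** nu
          return K

      # N2 + 3 H2 <=> 2 NH3, with assumed equilibrium concentrations in mol/L
      conc = {"N2": 0.50, "H2": 1.50, "NH3": 0.20}
      coeff = {"N2": -1, "H2": -3, "NH3": +2}
      print(equilibrium_constant(conc, coeff))   # approximately 0.024 M**-2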

  5. Sparse matrix test collections

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.

  6. Para-equilibrium phase diagrams

    International Nuclear Information System (INIS)

    Pelton, Arthur D.; Koukkari, Pertti; Pajarre, Risto; Eriksson, Gunnar

    2014-01-01

    Highlights: • A rapidly cooled system may attain a state of para-equilibrium. • In this state rapidly diffusing elements reach equilibrium but others are immobile. • Application of the Phase Rule to para-equilibrium phase diagrams is discussed. • A general algorithm to calculate para-equilibrium phase diagrams is described. - Abstract: If an initially homogeneous system at high temperature is rapidly cooled, a temporary para-equilibrium state may result in which rapidly diffusing elements have reached equilibrium but more slowly diffusing elements have remained essentially immobile. The best known example occurs when homogeneous austenite is quenched. A para-equilibrium phase assemblage may be calculated thermodynamically by Gibbs free energy minimization under the constraint that the ratios of the slowly diffusing elements are the same in all phases. Several examples of calculated para-equilibrium phase diagram sections are presented and the application of the Phase Rule is discussed. Although the rules governing the geometry of these diagrams may appear at first to be somewhat different from those for full equilibrium phase diagrams, it is shown that in fact they obey exactly the same rules with the following provision. Since the molar ratios of non-diffusing elements are the same in all phases at para-equilibrium, these ratios act, as far as the geometry of the diagram is concerned, like “potential” variables (such as T, pressure or chemical potentials) rather than like “normal” composition variables which need not be the same in all phases. A general algorithm to calculate para-equilibrium phase diagrams is presented. In the limit, if a para-equilibrium calculation is performed under the constraint that no elements diffuse, then the resultant phase diagram shows the single phase with the minimum Gibbs free energy at any point on the diagram; such calculations are of interest in physical vapor deposition when deposition is so rapid that phase

  7. Behavior of corroded bonded partially prestressed concrete beams

    Directory of Open Access Journals (Sweden)

    Mohamed Moawad

    2018-04-01

    Full Text Available Prestressed concrete is widely used in building construction, and corrosion of reinforcing steel is one of the most important and prevalent deterioration mechanisms for concrete structures; consequently, the capacity of post-tensioned elements decreases after exposure to corrosion. This study presents the results of an experimental investigation of the performance and behavior of partially prestressed beams with 40 and 80 MPa compressive strength exposed to corrosion. The experimental program consisted of six partially prestressed beams with overall dimensions of 150 × 400 × 4500 mm. The variables considered were the concrete compressive strength and the corrosion location. The mode of failure, the strain of the steel reinforcement, the cracking, yield and ultimate loads with the corresponding deflections of each beam, and the crack widths and distribution were recorded. The results showed that the partially prestressed beam with 80 MPa compressive strength has a higher resistance to corrosion exposure than the partially prestressed beam with 40 MPa compressive strength. No large difference in deterioration under full versus partial corrosion exposure was found between partially prestressed beams of the same compressive strength. Most of the deterioration in a partially prestressed beam occurs in the non-prestressed steel reinforcement, because the bonded tendons are less likely to corrode: the cement grout and duct act as a barrier to moisture and chloride penetration, especially a plastic duct without splices and connections. A theoretical analysis based on strain compatibility and force equilibrium gave a good prediction of the deformational behavior of high- and normal-strength partially prestressed beams. Keywords: Beam, Corrosion, Deterioration, Partially prestressed, High strength concrete

  8. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure

    KAUST Repository

    Labschutz, Matthias

    2015-08-12

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  9. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure

    KAUST Repository

    Labschutz, Matthias; Bruckner, Stefan; Groller, M. Eduard; Hadwiger, Markus; Rautek, Peter

    2015-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  10. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    Science.gov (United States)

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  11. Uncovering Transcriptional Regulatory Networks by Sparse Bayesian Factor Model

    Directory of Open Access Journals (Sweden)

    Qi Yuan(Alan

    2010-01-01

    Full Text Available Abstract The problem of uncovering transcriptional regulation by transcription factors (TFs) based on microarray data is considered. A novel Bayesian sparse correlated rectified factor model (BSCRFM) is proposed that models the unknown TF protein level activity, the correlated regulations between TFs, and the sparse nature of TF-regulated genes. The model admits prior knowledge from existing databases regarding TF-regulated target genes through a sparse prior, and through a developed Gibbs sampling algorithm, a context-specific transcriptional regulatory network specific to the experimental condition of the microarray data can be obtained. The proposed model and the Gibbs sampling algorithm were evaluated on simulated systems, and the results demonstrated the validity and effectiveness of the proposed approach. The proposed model was then applied to breast cancer microarray data of patients with Estrogen Receptor positive (ER+ status and Estrogen Receptor negative (ER- status, respectively.

  12. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package "DensParcorr" can be downloaded from CRAN for implementing the proposed statistical methods.
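
    The DensParcorr package itself is in R; the Python sketch below only illustrates the underlying relationship between a sparsely estimated precision (inverse covariance) matrix and partial correlations, using scikit-learn's graphical lasso as a stand-in for whichever sparse tuning the paper employs. Data, dimensions and the penalty value are arbitrary.

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(4)
      X = rng.standard_normal((200, 10))       # 200 "time points", 10 "brain regions"

      P = GraphicalLasso(alpha=0.1).fit(X).precision_   # sparse precision estimate
      d = np.sqrt(np.diag(P))
      partial_corr = -P / np.outer(d, d)       # standard precision-to-partial-correlation map
      np.fill_diagonal(partial_corr, 1.0)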

  13. Two-proton correlation functions for equilibrium and non-equilibrium emission

    International Nuclear Information System (INIS)

    Gong, W.G.; Gelbke, C.K.; Carlin, N.; De Souza, R.T.; Kim, Y.D.; Lynch, W.G.; Murakami, T.; Poggi, G.; Sanderson, D.; Tsang, M.B.; Xu, H.M.; Michigan State Univ., East Lansing; Fields, D.E.; Kwiatkowski, K.; Planeta, R.; Viola, V.E. Jr.; Yennello, S.J.; Indiana Univ., Bloomington; Indiana Univ., Bloomington; Pratt, S.

    1990-01-01

    Two-proton correlation functions are compared for equilibrium and non-equilibrium emission processes investigated, respectively, in ''reverse kinematics'' for the reactions ¹²⁹Xe + ²⁷Al and ¹²⁹Xe + ¹²²Sn at E/A = 31 MeV and in ''forward kinematics'' for the reaction ¹⁴N + ¹⁹⁷Au at E/A = 75 MeV. Observed differences in the shapes of the correlation functions are understood in terms of the different time scales for equilibrium and preequilibrium emission. Transverse and longitudinal correlation functions are very similar. (orig.)

  14. Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning

    Directory of Open Access Journals (Sweden)

    Li Jun-Bao

    2017-06-01

    Full Text Available Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medical and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing the radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body. Thus, in practical applications the hardware-based imaging resolution reaches its limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a dictionary-optimized sparse learning framework for MR super-resolution, which addresses the problem of sample selection for the dictionary learning used in sparse reconstruction. A textural complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that the dictionary-optimized sparse learning improves the performance of sparse representation.

  15. Compact data structure and scalable algorithms for the sparse grid technique

    KAUST Repository

    Murarasu, Alin

    2011-01-01

    The sparse grid discretization technique enables a compressed representation of higher-dimensional functions. In its original form, it relies heavily on recursion and complex data structures, thus being far from well-suited for GPUs. In this paper, we describe optimizations that enable us to implement compression and decompression, the crucial sparse grid algorithms for our application, on Nvidia GPUs. The main idea consists of a bijective mapping between the set of points in a multi-dimensional sparse grid and a set of consecutive natural numbers. The resulting data structure consumes a minimum amount of memory. For a 10-dimensional sparse grid with approximately 127 million points, it consumes up to 30 times less memory than trees or hash tables which are typically used. Compared to a sequential CPU implementation, the speedups achieved on GPU are up to 17 for compression and up to 70 for decompression, respectively. We show that the optimizations are also applicable to multicore CPUs. Copyright © 2011 ACM.

  16. The effect of additional equilibrium stress functions on the three-node hybrid-mixed curved beam element

    International Nuclear Information System (INIS)

    Kim, Jin Gon; Park, Yong Kuk

    2008-01-01

    To develop an effective hybrid-mixed element, it is extremely critical as to how to assume the stress field. This research article demonstrates the effect of additional equilibrium stress functions to enhance the numerical performance of the locking-free three-node hybrid-mixed curved beam element, proposed in Saleeb and Chang's previous work. It is exceedingly complicated or even infeasible to determine the stress functions to satisfy fully both the equilibrium conditions and suppression of kinematic deformation modes in the three-node hybrid-mixed formulation. Accordingly, the additional stress functions to satisfy partially or fully equilibrium conditions are incorporated in this study. Several numerical examples for static and dynamic problems confirm that the newly proposed element with these additional stress functions is highly effective regardless of the slenderness ratio and curvature of arches in static and dynamic analyses

  17. Multisnapshot Sparse Bayesian Learning for DOA

    DEFF Research Database (Denmark)

    Gerstoft, Peter; Mecklenbrauker, Christoph F.; Xenaki, Angeliki

    2016-01-01

    The directions of arrival (DOA) of plane waves are estimated from multisnapshot sensor array data using sparse Bayesian learning (SBL). The prior for the source amplitudes is assumed independent zero-mean complex Gaussian distributed with hyperparameters, the unknown variances (i.e., the source...

  18. Continuous speech recognition with sparse coding

    CSIR Research Space (South Africa)

    Smit, WJ

    2009-04-01

    Full Text Available generative model. The spike train is classified by making use of a spike train model and dynamic programming. It is computationally expensive to find a sparse code. We use an iterative subset selection algorithm with quadratic programming for this process...

  19. A density functional for sparse matter

    DEFF Research Database (Denmark)

    Langreth, D.C.; Lundqvist, Bengt; Chakarova-Kack, S.D.

    2009-01-01

    forces in molecules, to adsorbed molecules, like benzene, naphthalene, phenol and adenine on graphite, alumina and metals, to polymer and carbon nanotube (CNT) crystals, and hydrogen storage in graphite and metal-organic frameworks (MOFs), and to the structure of DNA and of DNA with intercalators......Sparse matter is abundant and has both strong local bonds and weak nonbonding forces, in particular nonlocal van der Waals (vdW) forces between atoms separated by empty space. It encompasses a broad spectrum of systems, like soft matter, adsorption systems and biostructures. Density-functional...... theory (DFT), long since proven successful for dense matter, seems now to have come to a point, where useful extensions to sparse matter are available. In particular, a functional form, vdW-DF (Dion et al 2004 Phys. Rev. Lett. 92 246401; Thonhauser et al 2007 Phys. Rev. B 76 125112), has been proposed...

  20. Sparse learning of stochastic dynamical equations

    Science.gov (United States)

    Boninsegna, Lorenzo; Nüske, Feliks; Clementi, Cecilia

    2018-06-01

    With the rapid increase of available data for complex systems, there is great interest in the extraction of physically relevant information from massive datasets. Recently, a framework called Sparse Identification of Nonlinear Dynamics (SINDy) has been introduced to identify the governing equations of dynamical systems from simulation data. In this study, we extend SINDy to stochastic dynamical systems which are frequently used to model biophysical processes. We prove the asymptotic correctness of stochastic SINDy in the infinite data limit, both in the original and projected variables. We discuss algorithms to solve the sparse regression problem arising from the practical implementation of SINDy and show that cross validation is an essential tool to determine the right level of sparsity. We demonstrate the proposed methodology on two test systems, namely, the diffusion in a one-dimensional potential and the projected dynamics of a two-dimensional diffusion process.
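
    The sparse-regression step at the heart of SINDy-type methods is often solved with sequentially thresholded least squares; the toy sketch below applies it to a hand-built library for a deterministic one-dimensional example, which is far simpler than the stochastic setting of the paper. Library, threshold and data are illustrative.

      import numpy as np

      def stlsq(Theta, dxdt, threshold=0.05, n_iter=10):
          # sequentially thresholded least squares: zero small coefficients, refit the rest
          xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
          for _ in range(n_iter):
              small = np.abs(xi) < threshold
              xi[small] = 0.0
              big = ~small
              if big.any():
                  xi[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)
          return xi

      # library [1, x, x^2, x^3] for data generated by dx/dt = -x + 0.1*x^3
      x = np.linspace(-2.0, 2.0, 400)
      dxdt = -x + 0.1 * x**3
      Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
      print(stlsq(Theta, dxdt))   # recovers coefficients close to [0, -1, 0, 0.1]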

  1. A novel method to design sparse linear arrays for ultrasonic phased array.

    Science.gov (United States)

    Yang, Ping; Chen, Bin; Shi, Ke-Ren

    2006-12-22

    In ultrasonic phased array testing, a sparse array can increase the resolution by enlarging the aperture without adding system complexity. Designing a sparse array involves choosing the best or a better configuration from a large number of candidate arrays. We first designed sparse arrays using a genetic algorithm, but found that the resulting arrays have poor performance and poor consistency. A method based on the Minimum Redundancy Linear Array was therefore adopted: some elements are first fixed by the minimum-redundancy array to ensure spatial resolution, and a genetic algorithm is then used to optimize the remaining elements. Sparse arrays designed by this method have much better performance and consistency than arrays designed by a genetic algorithm alone. Both simulation and experiment confirm the effectiveness of the approach.

  2. MHD equilibrium with toroidal rotation

    International Nuclear Information System (INIS)

    Li, J.

    1987-03-01

    The present work attempts to formulate the equilibrium of an axisymmetric plasma with purely toroidal flow within ideal MHD theory. In general, the inertial term ρ(v·∇)v caused by plasma flow is so complicated that the equilibrium equation is completely different from the Grad-Shafranov equation. However, in the case of purely toroidal flow the equilibrium equation can be simplified so that it resembles the Grad-Shafranov equation. Generally one arbitrary two-variable function and two arbitrary single-variable functions, instead of only four single-variable functions, are allowed in the new equilibrium equations. Also, the boundary conditions of the rotating equilibrium (with purely toroidal fluid flow) are the same as those of the static equilibrium (without any fluid flow). So numerically one can calculate the rotating equilibrium in the same way as a static equilibrium. (author)

  3. Sparse synthetic aperture with Fresnel elements (S-SAFE) using digital incoherent holograms

    Science.gov (United States)

    Kashter, Yuval; Rivenson, Yair; Stern, Adrian; Rosen, Joseph

    2015-01-01

    Creating a large-scale synthetic aperture makes it possible to break the resolution boundaries dictated by the wave nature of light of common optical systems. However, their implementation is challenging, since the generation of a large size continuous mosaic synthetic aperture composed of many patterns is complicated in terms of both phase matching and time-multiplexing duration. In this study we present an advanced configuration for an incoherent holographic imaging system with super resolution qualities that creates a partial synthetic aperture. The new system, termed sparse synthetic aperture with Fresnel elements (S-SAFE), enables significantly decreasing the number of the recorded elements, and it is free from positional constrains on their location. Additionally, in order to obtain the best image quality we propose an optimal mosaicking structure derived on the basis of physical and numerical considerations, and introduce three reconstruction approaches which are compared and discussed. The super-resolution capabilities of the proposed scheme and its limitations are analyzed, numerically simulated and experimentally demonstrated. PMID:26367947

  4. Equilibrium and non equilibrium in fragmentation

    International Nuclear Information System (INIS)

    Dorso, C.O.; Chernomoretz, A.; Lopez, J.A.

    2001-01-01

    Full text: In this communication we present recent results regarding the interplay of equilibrium and non-equilibrium in the process of fragmentation of excited finite Lennard-Jones drops. Because the general features of such a potential resemble those of the nuclear interaction (a fact reinforced by the similarity between the EOS of both systems), these studies are not only relevant from a fundamental point of view but also shed light on the problem of nuclear multifragmentation. We focus on the microscopic analysis of the state of the fragmenting system at fragmentation time. We show that the caloric curve (i.e. the functional relationship between the temperature of the system and the excitation energy) is of the rise-plateau type, with no vapor branch. The usual rise-plateau-rise pattern is only recovered when equilibrium is artificially imposed. This result seriously calls into question the validity of the freeze-out hypothesis. This feature is independent of the dimensionality or excitation mechanism. Moreover, we explore the behavior of magnitudes which can help us determine the degree of the assumed phase transition. It is found that no clear-cut criterion is presently available. (Author)

  5. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  6. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  7. Discriminative object tracking via sparse representation and online dictionary learning.

    Science.gov (United States)

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

    We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching schema. This algorithm consists of two parts: the local sparse coding with an online-updated discriminative dictionary for tracking (SOD part), and the keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes the information of both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during the tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching schema to improve the performance of the SOD. With the help of sparse representation and the online-updated discriminative dictionary, the KP part is more robust than the traditional method at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.

  8. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin; Weidendorfer, Josef

    2012-01-01

    bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation

  9. Non-equilibrium Economics

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2007-02-01

    Full Text Available A microeconomic, agent-based framework for dynamic economics is formulated in a materialist approach. An axiomatic foundation of a non-equilibrium microeconomics is outlined. Economic activity is modelled as the transformation and transport of commodities (materials) owned by the agents. The rates of transformation (production intensity) and the rates of transport (trade) are defined by the agents. Economic decision rules are derived from the observed economic behaviour. The non-linear equations are solved numerically for a model economy. Numerical solutions for simple model economies suggest that some of the results of general equilibrium economics are consequences only of the equilibrium hypothesis. We show that perfect competition of selfish agents does not guarantee the stability of economic equilibrium; cooperativity is needed, too.

  10. A Strategic-Equilibrium Based

    Directory of Open Access Journals (Sweden)

    Gabriel J. Turbay

    2011-03-01

    Full Text Available The strategic equilibrium of an N-person cooperative game with transferable utility is a system composed of a cover collection of subsets of N and a set of extended imputations attainable through such an equilibrium cover. The system describes a state of coalitional bargaining stability where every player has a bargaining alternative against any other player to support his corresponding equilibrium claim. Any coalition in the stable system may form and divide the characteristic value function of the coalition as prescribed by the equilibrium payoffs. If syndicates are allowed to form, a formed coalition may become a syndicate, using the equilibrium payoffs as disagreement values in bargaining for a part of the incremental value that the complementary coalition contributes to the grand coalition when it forms. The emergent well-known constant-sum derived game in partition function form is described in terms of parameters that result from incumbent binding agreements. The strategic equilibrium corresponding to the derived game gives an equal value claim to all players. This surprising result is alternatively explained in terms of strategic-equilibrium-based possible outcomes arising from a sequence of bargaining stages: when the binding agreements are in the right sequential order, von Neumann and Morgenstern (vN-M) non-discriminatory solutions emerge. In these solutions a branch preferred by a sufficient number of players is identified: the weaker players syndicate against the stronger player. This condition is referred to as the stronger player paradox. A strategic alternative available to the stronger player to overcome the anticipated undesirable results is to voluntarily lower his bargaining equilibrium claim. In doing so, the original strategic equilibrium is modified and vN-M discriminatory solutions may occur, but a different stronger player may also emerge who eventually will have to lower his equilibrium claim. A sequence of such measures converges to the equal

  11. Jointly-check iterative decoding algorithm for quantum sparse graph codes

    International Nuclear Information System (INIS)

    Jun-Hu, Shao; Bao-Ming, Bai; Wei, Lin; Lin, Zhou

    2010-01-01

    For quantum sparse graph codes with stabilizer formalism, the unavoidable girth-four cycles in their Tanner graphs greatly degrade the iterative decoding performance with a standard belief-propagation (BP) algorithm. In this paper, we present a jointly-check iterative algorithm suitable for decoding quantum sparse graph codes efficiently. Numerical simulations show that this modified method outperforms the standard BP algorithm with an obvious performance improvement. (general)

  12. Ion exchange equilibrium constants

    CERN Document Server

    Marcus, Y

    2013-01-01

    Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and

  13. Rotational image deblurring with sparse matrices

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Nagy, James G.; Tigkos, Konstantinos

    2014-01-01

    We describe iterative deblurring algorithms that can handle blur caused by a rotation along an arbitrary axis (including the common case of pure rotation). Our algorithms use a sparse-matrix representation of the blurring operation, which allows us to easily handle several different boundary...

  14. Normalization for sparse encoding of odors by a wide-field interneuron.

    Science.gov (United States)

    Papadopoulou, Maria; Cassenaer, Stijn; Nowotny, Thomas; Laurent, Gilles

    2011-05-06

    Sparse coding presents practical advantages for sensory representations and memory storage. In the insect olfactory system, the representation of general odors is dense in the antennal lobes but sparse in the mushroom bodies, only one synapse downstream. In locusts, this transformation relies on the oscillatory structure of antennal lobe output, feed-forward inhibitory circuits, intrinsic properties of mushroom body neurons, and connectivity between antennal lobe and mushroom bodies. Here we show the existence of a normalizing negative-feedback loop within the mushroom body to maintain sparse output over a wide range of input conditions. This loop consists of an identifiable "giant" nonspiking inhibitory interneuron with ubiquitous connectivity and graded release properties.

  15. Sparse Representation Denoising for Radar High Resolution Range Profiling

    Directory of Open Access Journals (Sweden)

    Min Li

    2014-01-01

    Full Text Available The radar high resolution range profile has attracted considerable attention in radar automatic target recognition. In practice, the radar return is usually contaminated by noise, which results in profile distortion and recognition performance degradation. To deal with this problem, a novel denoising method based on sparse representation is proposed in this paper to remove additive white Gaussian noise. The return is sparsely described over a redundant Fourier dictionary, and the denoising problem is cast as a sparse representation model. The noise level of the return, which is crucial to the denoising performance but often unknown, is estimated by applying a subspace method to the sliding-subsequence correlation matrix. The sliding-window process enables noise-level estimation using only one observation sequence, which not only guarantees estimation efficiency but also avoids the influence of profile time-shift sensitivity. Experimental results show that the proposed method can effectively improve the signal-to-noise ratio of the return, leading to a high-quality profile.

  16. The Real-Valued Sparse Direction of Arrival (DOA Estimation Based on the Khatri-Rao Product

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2016-05-01

    Full Text Available When the direction of arrival (DOA) of a sparse signal is estimated from the array covariance matrix, complex-valued operations are required, which leads to a heavy computational burden, and the resulting multiple measurement vectors (MMV) model is difficult to solve. In this paper, a real-valued sparse DOA estimation algorithm based on the Khatri-Rao (KR) product, called L1-RVSKR, is proposed. The proposed algorithm is based on the sparse representation of the array covariance matrix. The array covariance matrix is transformed into a real-valued matrix via a unitary transformation so that a real-valued sparse model is obtained. The real-valued sparse model is vectorized and transformed into a single measurement vector (SMV) model, and a new virtual overcomplete dictionary is constructed according to the properties of the KR product. Finally, the sparse DOA estimation is solved by utilizing the idea of a sparse representation of array covariance vectors (SRACV). The simulation results demonstrate the superior performance and the low computational complexity of the proposed algorithm.
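
    The vectorization step these covariance-based methods rely on uses the Khatri-Rao (column-wise Kronecker) product identity vec(A diag(p) A^H) = (A* ⊙ A)p; the sketch below verifies it numerically for a small uniform linear array. The array geometry, angles and source powers are arbitrary, and the paper's real-valued unitary transformation is not included.

      import numpy as np

      def khatri_rao(A, B):
          # column-wise Kronecker product (A and B must have the same number of columns)
          return np.vstack([np.kron(A[:, k], B[:, k]) for k in range(A.shape[1])]).T

      M, angles = 6, np.deg2rad([10.0, -25.0])               # sensors and source angles
      A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))   # ULA steering matrix
      p = np.array([1.0, 0.5])                               # source powers, noise ignored

      R = (A * p) @ A.conj().T                               # covariance A diag(p) A^H
      lhs = R.reshape(-1, order="F")                         # vec(R), column-major
      rhs = khatri_rao(A.conj(), A) @ p                      # virtual-array model
      assert np.allclose(lhs, rhs)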

  17. Integrative analysis of multiple diverse omics datasets by sparse group multitask regression

    Directory of Open Access Journals (Sweden)

    Dongdong eLin

    2014-10-01

    Full Text Available A variety of high throughput genome-wide assays enable the exploration of genetic risk factors underlying complex traits. Although these studies have remarkable impact on identifying susceptible biomarkers, they suffer from issues such as limited sample size and low reproducibility. Combining individual studies of different genetic levels/platforms has the promise to improve the power and consistency of biomarker identification. In this paper, we propose a novel integrative method, namely sparse group multitask regression, for integrating diverse omics datasets, platforms and populations to identify risk genes/factors of complex diseases. This method combines multitask learning with sparse group regularization, which will: (1) treat the biomarker identification in each single study as a task and then combine them by multitask learning; (2) group variables from all studies for identifying significant genes; and (3) enforce a sparsity constraint on groups of variables to overcome the 'small sample, but large variables' problem. We introduce two sparse group penalties, sparse group lasso and sparse group ridge, in our multitask model, and provide an effective algorithm for each model. In addition, we propose a significance test for the identification of potential risk genes. Two simulation studies are performed to evaluate the performance of our integrative method by comparing it with a conventional meta-analysis method. The results show that our sparse group multitask method outperforms the meta-analysis method significantly. In an application to our osteoporosis studies, 7 genes are identified as significant genes by our method and are found to have significant effects in three other independent validation studies. The most significant gene SOD2 has been identified in our previous osteoporosis study involving the same expression dataset. Several other genes such as TREML2, HTR1E and GLO1 are shown to be novel susceptible genes for osteoporosis, as confirmed
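
    For the penalties named above, the sketch below shows the proximal operator of the sparse group lasso (an element-wise soft-threshold followed by a group-wise shrinkage), which is a basic building block of many solvers for such models; it is not the authors' algorithm, and the groups and penalty weights are placeholders.

      import numpy as np

      def prox_sparse_group_lasso(x, groups, lam1, lam2):
          # prox of lam1*||x||_1 + lam2*sum_g ||x_g||_2 (illustration only)
          z = np.sign(x) * np.maximum(np.abs(x) - lam1, 0.0)   # element-wise soft threshold
          out = np.zeros_like(z)
          for g in groups:                                     # then shrink whole groups
              norm_g = np.linalg.norm(z[g])
              if norm_g > lam2:
                  out[g] = (1.0 - lam2 / norm_g) * z[g]
          return out

      x = np.array([0.05, 2.0, -1.5, 0.02, 0.03, -0.04])
      groups = [np.arange(0, 3), np.arange(3, 6)]
      print(prox_sparse_group_lasso(x, groups, lam1=0.1, lam2=0.5))
      # the weak second group is zeroed entirely; the first keeps its large entries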

  18. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    Science.gov (United States)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional synthetic aperture radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and a large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
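
    As background for the modification described above, the sketch below implements the baseline SL0 scheme with the usual Gaussian surrogate and steepest-descent steps followed by projection onto the data constraint; the paper's hyperbolic-tangent surrogate and Newton direction are not reproduced, and all parameters are illustrative.

      import numpy as np

      def sl0(A, y, sigma_decay=0.7, n_outer=15, n_inner=3, mu=2.0):
          # baseline smoothed-l0 recovery (not the paper's modified variant)
          A_pinv = np.linalg.pinv(A)
          x = A_pinv @ y                             # minimum-norm starting point
          sigma = 2.0 * np.max(np.abs(x))
          for _ in range(n_outer):
              for _ in range(n_inner):
                  delta = x * np.exp(-x**2 / (2 * sigma**2))   # gradient of the Gaussian surrogate
                  x = x - mu * delta
                  x = x - A_pinv @ (A @ x - y)       # project back onto {x : Ax = y}
              sigma *= sigma_decay
          return x

      rng = np.random.default_rng(5)
      A = rng.standard_normal((30, 80))
      x_true = np.zeros(80)
      x_true[[3, 40, 77]] = [1.0, -0.8, 0.6]
      x_hat = sl0(A, A @ x_true)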

  19. Porting of the DBCSR library for Sparse Matrix-Matrix Multiplications to Intel Xeon Phi systems

    OpenAIRE

    Bethune, Iain; Gloess, Andeas; Hutter, Juerg; Lazzaro, Alfio; Pabst, Hans; Reid, Fiona

    2017-01-01

    Multiplication of two sparse matrices is a key operation in the simulation of the electronic structure of systems containing thousands of atoms and electrons. The highly optimized sparse linear algebra library DBCSR (Distributed Block Compressed Sparse Row) has been specifically designed to efficiently perform such sparse matrix-matrix multiplications. This library is the basic building block for linear scaling electronic structure theory and low scaling correlated methods in CP2K. It is para...

  20. Fast Solution in Sparse LDA for Binary Classification

    Science.gov (United States)

    Moghaddam, Baback

    2010-01-01

    An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add/delete to/from its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change versus no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes as compared to days or weeks as taken by the prior art. Sparse LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques. The present algorithm exploits this analytic
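
    A hedged illustration of why the binary case admits an analytic shortcut: for two classes the generalized eigenproblem of Fisher LDA reduces to a closed-form expression, so a candidate feature subset can be scored without an iterative eigensolver. The data and the chosen subsets below are synthetic and unrelated to the datasets mentioned in the record.

      import numpy as np

      def binary_lda_score(X0, X1):
          # closed-form Fisher discriminant value for two classes:
          # proportional to the top generalized eigenvalue (m1-m0)^T Sw^{-1} (m1-m0)
          m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
          Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)   # within-class scatter
          diff = m1 - m0
          return diff @ np.linalg.solve(Sw, diff)

      rng = np.random.default_rng(6)
      X0 = rng.standard_normal((50, 5)) + np.array([0.0, 0.0, 2.0, 0.0, 0.0])
      X1 = rng.standard_normal((50, 5))

      # score two hypothetical feature subsets
      print(binary_lda_score(X0[:, [2, 3]], X1[:, [2, 3]]))   # includes the informative feature
      print(binary_lda_score(X0[:, [0, 1]], X1[:, [0, 1]]))   # uninformative subset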

  1. Non-equilibrium fluctuation-induced interactions

    International Nuclear Information System (INIS)

    Dean, David S

    2012-01-01

    We discuss non-equilibrium aspects of fluctuation-induced interactions. While the equilibrium behavior of such interactions has been extensively studied and is relatively well understood, the study of these interactions out of equilibrium is relatively new. We discuss recent results on the non-equilibrium behavior of systems whose dynamics is of the dissipative stochastic type and identify a number of outstanding problems concerning non-equilibrium fluctuation-induced interactions.

  2. Phase equilibrium engineering

    CERN Document Server

    Brignole, Esteban Alberto

    2013-01-01

    Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than its engineering applications. This book changes the focus from the use of thermodynamics relationships to compute phase equilibria to the design and control of the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and

  3. Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2015-09-01

    Full Text Available Node localization is a core problem in wireless sensor networks. It can be solved with the help of powerful beacons, which are equipped with global positioning system devices and therefore know their own locations. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to solve the problem that the measurement matrix does not meet the restricted isometry property. Later, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the 1-sparse vector is only approximately sparse, a weighted Centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and a lower requirement for the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it has great localization performance under low SNR; thus, the proposed scheme is robust.

  4. Robust visual tracking via multiscale deep sparse networks

    Science.gov (United States)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features. It has achieved significant success in solving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process and exploits robust and powerful features effectively only through online training of limited labeled data. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker selects the matched tracking network adaptively in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  5. Efficient MATLAB computations with sparse and factored tensors.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
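
    A minimal sketch of the coordinate (COO) storage scheme described above; the class name and interface are hypothetical (the Tensor Toolbox itself is MATLAB), shown here in Python:

      # Coordinate (COO) storage for a sparse N-way tensor
      import numpy as np

      class CooTensor:
          def __init__(self, subs, vals, shape):
              self.subs = np.asarray(subs)               # nnz x N array of multi-indices
              self.vals = np.asarray(vals, dtype=float)  # the nnz nonzero values
              self.shape = tuple(shape)

          def to_dense(self):
              T = np.zeros(self.shape)
              T[tuple(self.subs.T)] = self.vals
              return T

      # a 3 x 3 x 4 tensor holding only three nonzero entries
      X = CooTensor(subs=[[0, 0, 0], [1, 2, 3], [2, 1, 0]],
                    vals=[1.0, -2.0, 0.5], shape=(3, 3, 4))
      print(X.to_dense()[1, 2, 3])                       # -2.0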

  6. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace based blind sparse channel estimation method using ℓ1-ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve...

  7. Sparse grid techniques for particle-in-cell schemes

    Science.gov (United States)

    Ricketson, L. F.; Cerfon, A. J.

    2017-02-01

    We propose the use of sparse grids to accelerate particle-in-cell (PIC) schemes. By using the so-called ‘combination technique’ from the sparse grids literature, we are able to dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error. The resulting increase in cell size allows us to reduce the statistical noise in the simulation without increasing total particle number. We present initial proof-of-principle results from test cases in two and three dimensions that demonstrate the new scheme’s efficiency, both in terms of computation time and memory usage.
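
    For reference, a compact statement of the combination technique invoked above (a standard formula from the sparse grids literature; index conventions vary with the chosen minimum level):

      \[
        f_n^{(c)} \;=\; \sum_{q=0}^{d-1} (-1)^q \binom{d-1}{q} \sum_{|\mathbf{l}|_1 = n+(d-1)-q} f_{\mathbf{l}},
      \]

    which in two dimensions amounts to adding the anisotropic component grids on the diagonal l1 + l2 = n + 1 and subtracting those on l1 + l2 = n.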

  8. Feature selection and multi-kernel learning for sparse representation on a manifold

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-01-01

    combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity

  9. Group sparse canonical correlation analysis for genomic data integration.

    Science.gov (United States)

    Lin, Dongdong; Zhang, Jigang; Li, Jingyao; Calhoun, Vince D; Deng, Hong-Wen; Wang, Yu-Ping

    2013-08-12

    The emergence of high-throughput genomic datasets from different sources and platforms (e.g., gene expression, single nucleotide polymorphisms (SNP), and copy number variation (CNV)) has greatly enhanced our understanding of the interplay of these genomic factors as well as their influence on complex diseases. It is challenging to explore the relationships between these different types of genomic data sets. In this paper, we focus on a multivariate statistical method, canonical correlation analysis (CCA), for this problem. The conventional CCA method does not work effectively if the number of data samples is significantly less than the number of biomarkers, which is a typical case for genomic data (e.g., SNPs). Sparse CCA (sCCA) methods were introduced to overcome this difficulty, mostly using penalizations with the l1 norm (CCA-l1) or a combination of the l1 and l2 norms (CCA-elastic net). However, they overlook the structural or group effects within genomic data, which often exist and are important (e.g., SNPs spanning a gene interact and work together as a group). We propose a new group sparse CCA method (CCA-sparse group) along with an effective numerical algorithm to study the mutual relationship between two different types of genomic data (i.e., SNP and gene expression). We then extend the model to a more general formulation that can include the existing sCCA models. We apply the model to feature/variable selection from two data sets and compare our group sparse CCA method with existing sCCA methods on both simulated data and two real datasets (human gliomas data and NCI60 data). We use a graphical representation of the samples with a pair of canonical variates to demonstrate the discriminating characteristic of the selected features. Pathway analysis is further performed for biological interpretation of those features. The CCA-sparse group method incorporates group effects of features into the correlation analysis while performing individual feature

  10. Information filtering in sparse online systems: recommendation via semi-local diffusion.

    Science.gov (United States)

    Zeng, Wei; Zeng, An; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2013-01-01

    With the rapid growth of the Internet and the overwhelming amount of information and choices that people are confronted with, recommender systems have been developed to effectively support users' decision-making process in online systems. However, many recommendation algorithms suffer from the data sparsity problem, i.e. the user-object bipartite networks are so sparse that algorithms cannot accurately recommend objects for users. This data sparsity problem makes many well-known recommendation algorithms perform poorly. To solve the problem, we propose a recommendation algorithm based on the semi-local diffusion process on the user-object bipartite network. The simulation results on two sparse datasets, Amazon and Bookcross, show that our method significantly outperforms the state-of-the-art methods, especially for small-degree users. Two personalized semi-local diffusion methods are proposed which further improve the recommendation accuracy. Finally, our work indicates that sparse online systems are essentially different from dense online systems, so it is necessary to reexamine former algorithms and conclusions based on dense data in sparse systems.
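
    A minimal sketch of diffusion-based scoring on the user-object bipartite network, with extra propagation steps standing in for the semi-local idea described above; the weighting and damping are illustrative assumptions, not the paper's exact rule (Python):

      # Mass-diffusion recommendation with additional (semi-local) propagation steps
      import numpy as np

      def diffusion_scores(A, user, steps=2, damping=0.8):
          """A: users x objects 0/1 rating matrix; returns scores for uncollected objects."""
          k_user = np.maximum(A.sum(axis=1, keepdims=True), 1)   # user degrees
          k_obj = np.maximum(A.sum(axis=0, keepdims=True), 1)    # object degrees
          W = (A / k_user).T @ (A / k_obj)       # object-to-object diffusion matrix
          f = A[user].astype(float)              # unit resource on collected objects
          score = np.zeros_like(f)
          for t in range(1, steps + 1):
              f = W @ f                          # one more diffusion step on the network
              score += damping ** (t - 1) * f    # accumulate longer-range contributions
          score[A[user] > 0] = -np.inf           # never re-recommend collected objects
          return score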

  11. Codesign of Beam Pattern and Sparse Frequency Waveforms for MIMO Radar

    Directory of Open Access Journals (Sweden)

    Chaoyun Mai

    2015-01-01

    Full Text Available Multiple-input multiple-output (MIMO) radar takes advantage of the high degrees of freedom for beam pattern design and waveform optimization, because each antenna in a centralized MIMO radar system can transmit a different signal waveform. When the continuous band is divided into several pieces, sparse frequency radar waveforms play an important role due to the special pattern of the sparse spectrum. In this paper, we start from the covariance matrix of the transmitted waveform and extend the concept of sparse frequency design to the study of the MIMO radar beam pattern. With this idea in mind, we first solve the semidefinite-constrained problem with optimization tools and obtain the desired covariance matrix of the ideal beam pattern. Then, we use the acquired covariance matrix and generalize the objective function by adding constraints on both the constant modulus of the signals and the corresponding spectrum. Finally, we solve the objective function by the cyclic algorithm and obtain sparse frequency MIMO radar waveforms with the desired beam pattern. The simulation results verify the effectiveness of this method.

  12. Quantity Constrained General Equilibrium

    NARCIS (Netherlands)

    Babenko, R.; Talman, A.J.J.

    2006-01-01

    In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets. In

  13. High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction

    Science.gov (United States)

    Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming

    2017-12-01

    The denoising capabilities of the H-matrix and cyclic S-matrix based on the sparse reconstruction, employed in the Pixel of Focal Plane Coded Visible Spectrometer for spectrum measurement are investigated, where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device plays an important role, which implements the Hadamard coding. In contrast with Hadamard transform spectrometry, based on the shift invariability, this spectrometer may have the advantage of a high efficiency. Simulations and experiments show that the nonlinear solution with a sparse reconstruction has a better signal-to-noise ratio than the linear solution and the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.
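
    A minimal sketch of the encoded-measurement-plus-sparse-recovery pipeline described above, using a Sylvester Hadamard matrix and a simple iterative soft-thresholding solver; the sizes, noise level and solver are assumptions, not the instrument's actual parameters (Python):

      # Hadamard-encoded measurement and a linear vs. sparse (ISTA) reconstruction
      import numpy as np

      def hadamard(n):                           # Sylvester construction, n a power of 2
          H = np.array([[1.0]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      n = 64
      H = hadamard(n)
      x = np.zeros(n); x[[5, 20, 41]] = [1.0, 0.6, 0.3]     # a sparse spectrum
      y = H @ x + 0.05 * np.random.randn(n)                 # encoded, noisy measurement

      x_linear = np.linalg.solve(H, y)                      # linear (direct) solution
      x_sparse, L = np.zeros(n), np.linalg.norm(H, 2) ** 2  # nonlinear sparse solution
      for _ in range(200):
          z = x_sparse + H.T @ (y - H @ x_sparse) / L                 # gradient step
          x_sparse = np.sign(z) * np.maximum(np.abs(z) - 0.01, 0.0)   # soft threshold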

  14. Comparison of sparse point distribution models

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus

    2010-01-01

    This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior m...

  15. Non-equilibrium versus equilibrium emission of complex fragments from hot nuclei

    International Nuclear Information System (INIS)

    Viola, V.E.; Kwiatkowski, K.; Yennello, S.; Fields, D.E.

    1989-01-01

    The relative contributions of equilibrium and non-equilibrium mechanisms for intermediate-mass fragment emission have been deduced for Z = 3-14 fragments formed in ³He- and ¹⁴N-induced reactions on Ag and Au targets. Complete inclusive excitation function measurements have been performed for ³He projectiles from E/A = 67 to 1200 MeV and for ¹⁴N from E/A = 20 to 50 MeV. The data are consistent with a picture in which equilibrated emission is important at the lowest energies, but with increasing bombarding energy the cross sections are increasingly dominated by non-equilibrium processes. Non-equilibrium emission is also shown to be favored for light fragments relative to heavy fragments. These results are supported by coincidence studies of intermediate-mass fragments tagged by linear momentum transfer measurements.

  16. Non-equilibrium supramolecular polymerization.

    Science.gov (United States)

    Sorrenti, Alessandro; Leira-Iglesias, Jorge; Markvoort, Albert J; de Greef, Tom F A; Hermans, Thomas M

    2017-09-18

    Supramolecular polymerization has been traditionally focused on the thermodynamic equilibrium state, where one-dimensional assemblies reside at the global minimum of the Gibbs free energy. The pathway and rate to reach the equilibrium state are irrelevant, and the resulting assemblies remain unchanged over time. In the past decade, the focus has shifted to kinetically trapped (non-dissipative non-equilibrium) structures that heavily depend on the method of preparation (i.e., pathway complexity), and where the assembly rates are of key importance. Kinetic models have greatly improved our understanding of competing pathways, and shown how to steer supramolecular polymerization in the desired direction (i.e., pathway selection). The most recent innovation in the field relies on energy or mass input that is dissipated to keep the system away from the thermodynamic equilibrium (or from other non-dissipative states). This tutorial review aims to provide the reader with a set of tools to identify different types of self-assembled states that have been explored so far. In particular, we aim to clarify the often unclear use of the term "non-equilibrium self-assembly" by subdividing systems into dissipative, and non-dissipative non-equilibrium states. Examples are given for each of the states, with a focus on non-dissipative non-equilibrium states found in one-dimensional supramolecular polymerization.

  17. Galaxy redshift surveys with sparse sampling

    International Nuclear Information System (INIS)

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-01-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys

  18. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and that the proposed multiobjective model can learn useful sparse features.

  19. Uniform sparse bounds for discrete quadratic phase Hilbert transforms

    Science.gov (United States)

    Kesler, Robert; Arias, Darío Mena

    2017-09-01

    For each α \\in T consider the discrete quadratic phase Hilbert transform acting on finitely supported functions f : Z → C according to H^{α }f(n):= \\sum _{m ≠ 0} e^{iα m^2} f(n - m)/m. We prove that, uniformly in α \\in T , there is a sparse bound for the bilinear form for every pair of finitely supported functions f,g : Z→ C . The sparse bound implies several mapping properties such as weighted inequalities in an intersection of Muckenhoupt and reverse Hölder classes.

  20. Sparse Matrix for ECG Identification with Two-Lead Features

    Directory of Open Access Journals (Sweden)

    Kuo-Kun Tseng

    2015-01-01

    Full Text Available Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods.

  1. Partial molar volumes of hydrogen and deuterium in niobium and vanadium

    International Nuclear Information System (INIS)

    Herro, H.M.

    1979-01-01

    Lattice dilation studies and direct pressure experiments gave comparable values for the partial molar volumes of hydrogen and deuterium in niobium and vanadium. Small isotope effects in the partial molar volume of hydrogen were measured in both metals by the differential isotope method. Hydrogen had a larger partial molar volume than deuterium in niobium, but the reverse was true in vanadium. The isotope effect measured in niobium can be attributed to the larger vibrational amplitude of the hydrogen atom compared with the deuterium atom in the metal lattice. Since hydrogen has a larger mean displacement from the equilibrium position than deuterium, the average force hydrogen exerts on the metal atoms is greater than the force deuterium exerts. The isotope effect in vanadium is likely a result of anharmonic effects in the lattice and local vibrational modes

  2. Survival analysis with functional covariates for partial follow-up studies.

    Science.gov (United States)

    Fang, Hong-Bin; Wu, Tong Tong; Rapoport, Aaron P; Tan, Ming

    2016-12-01

    Predictive or prognostic analysis plays an increasingly important role in the era of personalized medicine to identify subsets of patients whom the treatment may benefit the most. Although various time-dependent covariate models are available, such models require that covariates be followed over the whole follow-up period. This article studies a new class of functional survival models where the covariates are only monitored in a time interval that is shorter than the whole follow-up period. The paper is motivated by the analysis of a longitudinal study on advanced myeloma patients who received stem cell transplants and T cell infusions after the transplants. The absolute lymphocyte cell counts were collected serially during hospitalization. Those patients are still followed up if they are alive after hospitalization, but their absolute lymphocyte cell counts cannot be measured after that. Another complication is that absolute lymphocyte cell counts are sparsely and irregularly measured. The conventional method using the Cox model with time-varying covariates is not applicable because of the different lengths of observation periods. Analysis based on each single observation obviously underutilizes available information and, more seriously, may yield misleading results. This so-called partial follow-up study design represents an increasingly common predictive modeling problem where we have serial multiple biomarkers up to a certain time point, which is shorter than the total length of follow-up. We therefore propose a solution to the partial follow-up design. The new method combines functional principal components analysis and survival analysis with selection of those functional covariates. It also has the advantage of handling sparse and irregularly measured longitudinal observations of covariates and measurement errors. Our analysis based on functional principal components reveals that it is the patterns of the trajectories of absolute lymphocyte cell counts, instead of

  3. A comparative study of transfer coefficient of Iodine from grass to cow milk under equilibrium and postulated accidental scenario

    International Nuclear Information System (INIS)

    Geetha, P.V.; Karunakara, N.; Prabhu, Ujwal; Yashodhara, I.; Ravi, P.M.; Dileep, B.N.; Karpe, Rupali

    2014-01-01

    Extensive studies on the transfer of ¹³¹I through the grass-cow-milk pathway were reported after the Chernobyl accident. However, under normal operational conditions of a power reactor, ¹³¹I is not present in measurable concentrations in environmental matrices around a nuclear power generating station. Hence, data on ¹³¹I transfer coefficients for the grass-cow-milk pathway under equilibrium conditions in the environment of a nuclear power plant are sparse. One method to estimate the equilibrium transfer coefficient is to use stable iodine, which is naturally present at very low levels in environmental matrices. By measuring the stable iodine concentration in grass and cow milk, the grass-to-milk transfer coefficient of iodine can be estimated. Since the metabolism of stable iodine and radioiodine is the same, the transfer coefficient obtained for stable iodine can be used to predict the transfer of radioiodine to cow milk. The measurement of stable iodine in environmental samples is very challenging because of its extremely low concentration. Neutron Activation Analysis (NAA) can be used to estimate stable iodine in environmental matrices after suitably optimizing the conditions to minimize interferences. This paper presents the results of a systematic study on the transfer coefficients of iodine for the grass-cow-milk pathway under normal (equilibrium) conditions as well as for a postulated (simulated) emergency condition in the Kaiga region

  4. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit

    2012-06-01

    The name sparse grids denotes a highly space-efficient, grid-based numerical technique to approximate high-dimensional functions. Although employed in a broad spectrum of applications from different fields, there have been only a few attempts to use it in real-time visualization (e.g. [1]), due to complex data structures and long algorithm runtimes. In this work we present a novel approach inspired by principles of I/O-efficient algorithms. Locally applied coefficient permutations lead to improved cache performance and facilitate the use of vector registers for our sparse grid benchmark problem, hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations on modern multi-core systems by a factor of 37 for a grid size of 127 million points. For larger problems the speedup increases further, and with execution times below 1 s, sparse grids are well-suited for visualization applications. Furthermore, we point out how a broad class of sparse grid algorithms can benefit from our approach. © 2012 IEEE.

  5. Application of ultra-small-angle X-ray scattering / X-ray photon correlation spectroscopy to relate equilibrium or non-equilibrium dynamics to microstructure

    Science.gov (United States)

    Allen, Andrew; Zhang, Fan; Levine, Lyle; Ilavsky, Jan

    2013-03-01

    Ultra-small-angle X-ray scattering (USAXS) can probe microstructures over the nanometer-to-micrometer scale range. Through use of a small instrument entrance slit, X-ray photon correlation spectroscopy (XPCS) exploits the partial coherence of an X-ray synchrotron undulator beam to provide unprecedented sensitivity to the dynamics of microstructural change. In USAXS/XPCS studies, the dynamics of local structures in a scale range of 100 nm to 1000 nm can be related to an overall hierarchical microstructure extending from 1 nm to more than 1000 nm. Using a point-detection scintillator mode, the equilibrium dynamics at ambient temperature of small particles (which move more slowly than nanoparticles) in aqueous suspension have been quantified directly for the first time. Using a USAXS-XPCS scanning mode for non-equilibrium dynamics incipient processes within dental composites have been elucidated, prior to effects becoming detectable using any other technique. Use of the Advanced Photon Source, an Office of Science User Facility operated for the United States Department of Energy (U.S. DOE) Office of Science by Argonne National Laboratory, was supported by the U.S. DOE under Contract No. DE-AC02-06CH11357.

  6. Data-driven discovery of partial differential equations.

    Science.gov (United States)

    Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2017-04-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
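
    A minimal sketch of the sparsity-promoting regression step described above, using sequentially thresholded least squares to pick a few library terms that explain the time derivative; the library construction and threshold are illustrative assumptions, not the paper's exact algorithm (Python):

      # Sequentially thresholded least squares for u_t = Theta(u) @ xi
      import numpy as np

      def stls(Theta, ut, threshold=0.1, iters=10):
          """Theta: (samples x candidate terms) library, ut: sampled time derivative."""
          xi = np.linalg.lstsq(Theta, ut, rcond=None)[0]
          for _ in range(iters):
              small = np.abs(xi) < threshold             # prune negligible coefficients
              xi[small] = 0.0
              big = ~small
              if big.any():                              # refit on the surviving terms
                  xi[big] = np.linalg.lstsq(Theta[:, big], ut, rcond=None)[0]
          return xi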

  7. Single image super-resolution based on compressive sensing and improved TV minimization sparse recovery

    Science.gov (United States)

    Vishnukumar, S.; Wilscy, M.

    2017-12-01

    In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as the compressed version of the high-resolution (HR) image. Dictionary training and sparse recovery are the two phases of the method. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of the LR image patches with respect to the trained dictionary are derived using the Improved TV Minimization method. The HR image can then be reconstructed as a linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.

  8. Electron-Impact Excitation Cross Sections for Modeling Non-Equilibrium Gas

    Science.gov (United States)

    Huo, Winifred M.; Liu, Yen; Panesi, Marco; Munafo, Alessandro; Wray, Alan; Carbon, Duane F.

    2015-01-01

    In order to provide a database for modeling hypersonic entry in a partially ionized gas under non-equilibrium, the electron-impact excitation cross sections of atoms have been calculated using perturbation theory. The energy levels covered in the calculation are retrieved from the level list in the HyperRad code. The downstream flow-field is determined by solving a set of continuity equations for each component. The individual structure of each energy level is included. These equations are then complemented by the Euler system of equations. Finally, the radiation field is modeled by solving the radiative transfer equation.

  9. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics

    Science.gov (United States)

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-01-01

    Motivation: RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n^6). Subsequently, numerous faster ‘Sankoff-style’ approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (≥ quartic time). Results: Breaking this barrier, we introduce the novel Sankoff-style algorithm ‘sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)’, which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff’s original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. Availability and implementation: SPARSE is freely available at http://www.bioinf.uni-freiburg.de/Software/SPARSE. Contact: backofen@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25838465

  10. The intrinsic role of nanoconfinement in chemical equilibrium: evidence from DNA hybridization.

    Science.gov (United States)

    Rubinovich, Leonid; Polak, Micha

    2013-05-08

    Recently we predicted that when a reaction involving a small number of molecules occurs in a nanometric-scale domain entirely segregated from the surrounding media, the nanoconfinement can shift the position of equilibrium toward products via reduced reactant-product mixing. In this Letter, we demonstrate how recently reported single-molecule fluorescence measurements of partial hybridization of ssDNA confined within nanofabricated chambers provide the first experimental confirmation of this entropic nanoconfinement effect. Thus, focusing separately on each occupancy-specific equilibrium constant quantitatively reveals extra stabilization of the product upon decreasing the chamber occupancy or size. Namely, DNA hybridization under nanoconfined conditions is significantly favored over the identical reaction occurring in bulk media with the same reactant concentrations. This effect, now directly verified for DNA, can be relevant to actual biological processes, as well as to diverse reactions occurring within molecular capsules, nanotubes, and other functional nanospaces.

  11. Nucleon-nucleon partial-wave analysis to 1100 MeV

    International Nuclear Information System (INIS)

    Arndt, R.A.; Hyslop, J.S. III; Roper, L.D.

    1987-01-01

    Comprehensive analyses of nucleon-nucleon elastic-scattering data below 1100 MeV laboratory kinetic energy are presented. The data base from which an energy-dependent solution and 22 single-energy solutions are obtained consists of 7223 pp and 5474 np data. A resonancelike structure is found to occur in the ¹D₂, ³F₃, ³P₂-³F₂, and ³F₄-³H₄ partial waves; this behavior is associated with poles in the complex energy plane. The pole positions and residues are obtained by analytic continuation of the ''production'' piece of the T matrix obtained in the energy-dependent solution. The new phases differ somewhat from previously published VPI&SU solutions, especially in I = 0 waves above 500 MeV, where np data are very sparse. The partial waves are, however, based upon a significantly larger data base and reflect correspondingly smaller errors. The full data base and solution files can be obtained through a computer scattering analysis interactive dial-in (SAID) system at VPI&SU, which also exists at many institutions around the world and which can be transferred to any site with a suitable computer system. The SAID system can be used to modify solutions, plan experiments, and obtain any of the multitude of predictions which derive from partial-wave analyses of the world data base

  12. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    Science.gov (United States)

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
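
    A minimal sketch of a greedy, nonnegativity-aware simultaneous recovery loop of the kind described above; the selection rule and the use of per-column nonnegative least squares are assumptions, not the authors' exact algorithm (Python):

      # Greedy joint support selection with nonnegative least-squares refits
      import numpy as np
      from scipy.optimize import nnls

      def simultaneous_nn_greedy(A, Y, k):
          """A: (m x n) dictionary, Y: (m x L) measurements, k: target row sparsity."""
          support, R = [], Y.copy()
          for _ in range(k):
              corr = A.T @ R                                # correlations with residuals
              scores = np.maximum(corr, 0.0).sum(axis=1)    # reward shared nonnegative fit
              scores[support] = -np.inf                     # do not pick an index twice
              support.append(int(np.argmax(scores)))
              X = np.zeros((len(support), Y.shape[1]))
              for l in range(Y.shape[1]):                   # nonnegative LS on the support
                  X[:, l], _ = nnls(A[:, support], Y[:, l])
              R = Y - A[:, support] @ X
          return support, X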

  13. SPARSE ELECTROMAGNETIC IMAGING USING NONLINEAR LANDWEBER ITERATIONS

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2015-01-01

    minimization problem is solved using nonlinear Landweber iterations, where at each iteration a thresholding function is applied to enforce the sparseness-promoting L0/L1-norm constraint. The thresholded nonlinear Landweber iterations are applied to several two
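
    A minimal sketch of the thresholded Landweber idea described above, shown for a linear forward operator A as a stand-in for the nonlinear scattering operator; the step size and threshold are illustrative (Python):

      # Landweber iterations with a sparsity-enforcing soft threshold
      import numpy as np

      def thresholded_landweber(A, y, lam=0.05, iters=300):
          mu = 1.0 / np.linalg.norm(A, 2) ** 2              # step size from the spectral norm
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              x = x + mu * A.T @ (y - A @ x)                # Landweber (gradient) update
              x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # enforce sparseness
          return x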

  14. Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation

    Science.gov (United States)

    Kim, Sunwoo

    This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.

  15. Effects of sparse sampling schemes on image quality in low-dose CT

    International Nuclear Information System (INIS)

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-01-01

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT) that can potentially reduce a health-risk related to radiation dose. Particularly, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promises in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data property, and compare effects of the sampling schemes on the image quality.Methods: Data properties of several sampling schemes are analyzed with respect to the CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling has been realized numerically based on the CBCT data.Results: It is found that both sampling density and data incoherence affect the image quality in the CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with similar image quality compared to the reference image and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction.Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic

  16. Pulse-Width-Modulation of Neutral-Point-Clamped Sparse Matrix Converter

    DEFF Research Database (Denmark)

    Loh, P.C.; Blaabjerg, Frede; Gao, F.

    2007-01-01

    input current and output voltage can be achieved with minimized rectification switching loss, rendering the sparse matrix converter as a competitive choice for interfacing the utility grid to (e.g.) defense facilities that require a different frequency supply. As an improvement, sparse matrix converter...... with improved waveform quality. Performances and practicalities of the designed schemes are verified in simulation and experimentally using an implemented laboratory prototype with some representative results captured and presented in the paper....

  17. Identifying apparent local stable isotope equilibrium in a complex non-equilibrium system.

    Science.gov (United States)

    He, Yuyang; Cao, Xiaobin; Wang, Jianwei; Bao, Huiming

    2018-02-28

    Although out of equilibrium, biomolecules in organisms have the potential to approach isotope equilibrium locally because enzymatic reactions are intrinsically reversible. A rigorous approach that can describe the isotope distribution among biomolecules and their apparent deviation from the equilibrium state has been lacking, however. Applying the concept of the distance matrix in graph theory, we propose that apparent local isotope equilibrium among a subset of biomolecules can be assessed using an apparent fractionation difference (|Δα|) matrix, in which the differences between the observed isotope composition (δ') and the calculated equilibrium fractionation factor (1000lnβ) can be evaluated more rigorously than with a previous approach for multiple biomolecules. We tested our |Δα| matrix approach by re-analyzing published data on different amino acids (AAs) in potato and in green alga. Our re-analysis shows that biosynthesis pathways could be the reason for an apparently close-to-equilibrium relationship inside AA families in potato leaves. Different biosynthesis/degradation pathways in tubers may have led to the observed difference in isotope distribution between potato leaves and tubers. The analysis of data from green algae does not support the conclusion that AAs are further from equilibrium in glucose-cultured green algae than in the autotrophic ones. Application of the |Δα| matrix can help us to locate potential reversible reactions or reaction networks in a complex system such as a metabolic system. The same approach can be broadly applied to all complex systems that have multiple components, e.g. geochemical or atmospheric systems of early Earth or other planets. Copyright © 2017 John Wiley & Sons, Ltd.
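
    A minimal sketch of the apparent fractionation difference matrix described above: each entry compares the observed pairwise difference in δ' values with the difference expected at equilibrium from the calculated 1000lnβ values; the numbers below are purely illustrative (Python):

      # |Delta alpha| matrix: small entries flag pairs close to apparent equilibrium
      import numpy as np

      def delta_alpha_matrix(delta_prime, ln_beta_1000):
          d = np.asarray(delta_prime, dtype=float)
          b = np.asarray(ln_beta_1000, dtype=float)
          observed = d[:, None] - d[None, :]     # observed pairwise differences
          expected = b[:, None] - b[None, :]     # differences expected at equilibrium
          return np.abs(observed - expected)

      # hypothetical delta' and 1000*ln(beta) values for three biomolecules
      print(delta_alpha_matrix([-25.0, -20.0, -18.0], [10.0, 15.0, 16.5]))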

  18. Effects of intravenous solutions on acid-base equilibrium: from crystalloids to colloids and blood components.

    Science.gov (United States)

    Langer, Thomas; Ferrari, Michele; Zazzeron, Luca; Gattinoni, Luciano; Caironi, Pietro

    2014-01-01

    Intravenous fluid administration is a medical intervention performed worldwide on a daily basis. Nevertheless, only a few physicians are aware of the characteristics of intravenous fluids and their possible effects on plasma acid-base equilibrium. According to Stewart's theory, pH is independently regulated by three variables: partial pressure of carbon dioxide, strong ion difference (SID), and total amount of weak acids (ATOT). When fluids are infused, plasma SID and ATOT tend toward the SID and ATOT of the administered fluid. Depending on their composition, fluids can therefore lower, increase, or leave pH unchanged. As a general rule, crystalloids having a SID greater than plasma bicarbonate concentration (HCO₃-) cause an increase in plasma pH (alkalosis), those having a SID lower than HCO₃- cause a decrease in plasma pH (acidosis), while crystalloids with a SID equal to HCO₃- leave pH unchanged, regardless of the extent of the dilution. Colloids and blood components are composed of a crystalloid solution as solvent, and the abovementioned rules partially hold true also for these fluids. The scenario is however complicated by the possible presence of weak anions (albumin, phosphates and gelatins) and their effect on plasma pH. The present manuscript summarises the characteristics of crystalloids, colloids, buffer solutions and blood components and reviews their effect on acid-base equilibrium. Understanding the composition of intravenous fluids, along with the application of simple physicochemical rules best described by Stewart's approach, are pivotal steps to fully elucidate and predict alterations of plasma acid-base equilibrium induced by fluid therapy.
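
    A minimal sketch of the qualitative rule stated above, relating the SID of an infused crystalloid to the expected direction of the plasma pH change; the plasma bicarbonate value and the example SIDs are illustrative (Python):

      # Expected acid-base effect of a crystalloid from its SID vs. plasma HCO3-
      def expected_ph_effect(fluid_sid_meq_l, plasma_hco3_meq_l=24.0):
          if fluid_sid_meq_l > plasma_hco3_meq_l:
              return "alkalosis (plasma pH tends to rise)"
          if fluid_sid_meq_l < plasma_hco3_meq_l:
              return "acidosis (plasma pH tends to fall)"
          return "pH essentially unchanged"

      print(expected_ph_effect(0.0))    # e.g. 0.9% saline (SID = 0) -> dilutional acidosis
      print(expected_ph_effect(24.0))   # a fluid with SID equal to HCO3- leaves pH unchanged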

  19. Nonlinear spike-and-slab sparse coding for interpretable image encoding.

    Directory of Open Access Journals (Sweden)

    Jacquelyn A Shelton

    Full Text Available Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, as well as a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components at any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.

  20. Efficient sparse matrix-matrix multiplication for computing periodic responses by shooting method on Intel Xeon Phi

    Science.gov (United States)

    Stoykov, S.; Atanassov, E.; Margenov, S.

    2016-10-01

    Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves simultaneous operations with sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin's plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one developed by Intel MKL, and it is shown that better algorithms can be developed by considering the properties of the sparse matrix.
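
    A minimal, unoptimized reference sketch of the sparse-by-dense kernel discussed above, with the sparse factor in compressed sparse row (CSR) form; this is a plain illustration, not the vectorized Xeon Phi implementation (Python):

      # C = A @ B with A stored as CSR (indptr, indices, data) and B dense
      import numpy as np

      def csr_dense_matmul(indptr, indices, data, B):
          n_rows = len(indptr) - 1
          C = np.zeros((n_rows, B.shape[1]))
          for i in range(n_rows):
              for p in range(indptr[i], indptr[i + 1]):     # nonzeros of row i
                  C[i, :] += data[p] * B[indices[p], :]     # accumulate row contributions
          return C

      # A = [[1, 0, 2], [0, 3, 0]] in CSR form
      print(csr_dense_matmul([0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0], np.eye(3)))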

  1. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms ableto cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  2. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    Science.gov (United States)

    Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  3. Magnetar giant flares in multipolar magnetic fields. I. Fully and partially open eruptions of flux ropes

    International Nuclear Information System (INIS)

    Huang, Lei; Yu, Cong

    2014-01-01

    We propose a catastrophic eruption model for the enormous energy release of magnetars during giant flares, in which a toroidal and helically twisted flux rope is embedded within a force-free magnetosphere. The flux rope stays in stable equilibrium states initially and evolves quasi-statically. Upon the loss of equilibrium, the flux rope cannot sustain the stable equilibrium states and erupts catastrophically. During the process, the magnetic energy stored in the magnetosphere is rapidly released as the result of destabilization of global magnetic topology. The magnetospheric energy that could be accumulated is of vital importance for the outbursts of magnetars. We carefully establish the fully open fields and partially open fields for various boundary conditions at the magnetar surface and study the relevant energy thresholds. By investigating the magnetic energy accumulated at the critical catastrophic point, we find that it is possible to drive fully open eruptions for dipole-dominated background fields. Nevertheless, it is hard to generate fully open magnetic eruptions for multipolar background fields. Given the observational importance of the multipolar magnetic fields in the vicinity of the magnetar surface, it would be worthwhile to explore the possibility of the alternative eruption approach in multipolar background fields. Fortunately, we find that flux ropes may give rise to partially open eruptions in the multipolar fields, which involve only partial opening of background fields. The energy release fractions are greater for cases with central-arcaded multipoles than those with central-caved multipoles that emerged in background fields. Eruptions would fail only when the centrally caved multipoles become extremely strong.

  4. Dose-shaping using targeted sparse optimization

    International Nuclear Information System (INIS)

    Sayre, George A.; Ruan, Dan

    2013-01-01

    Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present our findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of our method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. As a result, the proposed objective E_tot^sparse improves

  5. Dose-shaping using targeted sparse optimization.

    Science.gov (United States)

    Sayre, George A; Ruan, Dan

    2013-07-01

    Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present our findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of our method. In designing the energy minimization objective (E_tot^(sparse)), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. As a result, the proposed objective E_tot^(sparse) improves tradeoff between

  6. Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data

    Science.gov (United States)

    Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information is becoming an increasingly pressing issue. Recent studies have tried to address this with dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent the resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First, we sampled the whole brain's signals via different sampling methods; then the sampled signals were aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain's signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can speed up the reconstruction of concurrent brain networks by a factor of ten without losing much information. Experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
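
    A minimal sketch of the sampling-then-dictionary-learning idea, using scikit-learn's MiniBatchDictionaryLearning and SparseCoder on random stand-in data; the sampling fraction, dictionary size, and sparsity penalty are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

# X: time-by-voxel rs-fMRI matrix (random data as a stand-in for real signals).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2000))        # 200 time points, 2000 voxels

# 1) Sample a subset of voxel signals (random sampling; the paper compares several schemes).
idx = rng.choice(X.shape[1], size=300, replace=False)
X_sampled = X[:, idx].T                     # samples-by-features layout for sklearn

# 2) Learn a temporal dictionary from the sampled signals only.
dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0, random_state=0)
dico.fit(X_sampled)

# 3) Sparsely code every voxel's signal with the learned dictionary; the
#    coefficient maps would then be inspected as candidate resting state networks.
coder = SparseCoder(dictionary=dico.components_,
                    transform_algorithm="lasso_lars", transform_alpha=1.0)
codes = coder.transform(X.T)                # voxels-by-atoms coefficient matrix
print(codes.shape)
```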

  7. Vapor-liquid equilibrium ratio of trace furfural in water+1-butanol system; Mizu+1-butanorukei ni okeru biryo no furufuraru no kieki heikohi

    Energy Technology Data Exchange (ETDEWEB)

    Ikari, A.; Hatate, Y.; Aikou, R. [Kagoshima Univ. (Japan). Faculty of Engineering]

    1997-11-01

    Vapor-liquid equilibria of a water + 1-butanol system containing a trace amount of furfural were measured at atmospheric pressure using an Iino-type still for systems of limited miscibility. Vapor-liquid compositions for the major components (water and 1-butanol) are shown to be nearly coincident with those of the binary system. In the partially miscible region, the vapor-liquid equilibrium ratios of the trace component (furfural) at the bubble point were found to be 2.5 and 0.46. Consequently, the partition coefficient of the trace component between the two liquid phases is 5.4. The equilibrium ratio curve of the trace component is presented, in which the calculated curve within the partially miscible region is shown to be coincident with the experimental data. 5 refs., 3 figs., 1 tab.
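
    For orientation, the quoted partition coefficient is consistent with the two bubble-point equilibrium ratios, since both liquid phases coexist with the same vapor; which K-value belongs to which liquid phase is an assumption made here purely for illustration.

```python
# Furfural distributes between two liquid phases that share one vapor phase, so the
# liquid-liquid partition coefficient equals the ratio of the two equilibrium ratios.
K_water_rich, K_butanol_rich = 2.5, 0.46   # vapor-liquid equilibrium ratios at the bubble point
partition_coefficient = K_water_rich / K_butanol_rich
print(round(partition_coefficient, 1))      # ~5.4, matching the value quoted above
```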

  8. Equilibrium and off-equilibrium trap-size scaling in one-dimensional ultracold bosonic gases

    International Nuclear Information System (INIS)

    Campostrini, Massimo; Vicari, Ettore

    2010-01-01

    We study some aspects of equilibrium and off-equilibrium quantum dynamics of dilute bosonic gases in the presence of a trapping potential. We consider systems with a fixed number of particles and study their scaling behavior with increasing trap size. We focus on one-dimensional bosonic systems, such as gases described by the Lieb-Liniger model and its Tonks-Girardeau limit of impenetrable bosons, and gases constrained in optical lattices as described by the Bose-Hubbard model. We study their quantum (zero-temperature) behavior at equilibrium and off equilibrium during the unitary time evolution arising from changes of the trapping potential, which may be instantaneous or described by a power-law time dependence, starting from the equilibrium ground state for an initial trap size. Renormalization-group scaling arguments and analytical and numerical calculations show that the trap-size dependence of the equilibrium and off-equilibrium dynamics can be cast in the form of a trap-size scaling in the low-density regime, characterized by universal power laws of the trap size, in dilute gases with repulsive contact interactions and lattice systems described by the Bose-Hubbard model. The scaling functions corresponding to several physically interesting observables are computed. Our results are of experimental relevance for systems of cold atomic gases trapped by tunable confining potentials.

  9. Isotope effects in the equilibrium and non-equilibrium vaporization of tritiated water and ice

    International Nuclear Information System (INIS)

    Baumgaertner, F.; Kim, M.-A.

    1990-01-01

    The vaporization isotope effect of the HTO/H2O system has been measured at various temperatures and pressures under equilibrium as well as non-equilibrium conditions. The isotope effect values measured in equilibrium sublimation or distillation are in good agreement with the theoretical values based on the harmonic oscillator model. In non-equilibrium vaporization at low temperatures (below 0 °C), the isotope effect decreases rapidly with decreasing system pressure and becomes negligible when the system pressure is lowered to below one tenth of the equilibrium vapor pressure. At higher temperatures, the isotope effect decreases very slowly with decreasing system pressure. The discussion is extended to the application of the present results to the study of biological enrichment of tritium. (author)

  10. Feature based omnidirectional sparse visual path following

    OpenAIRE

    Goedemé, Toon; Tuytelaars, Tinne; Van Gool, Luc; Vanacker, Gerolf; Nuttin, Marnix

    2005-01-01

    Goedemé T., Tuytelaars T., Van Gool L., Vanacker G., Nuttin M., ''Feature based omnidirectional sparse visual path following'', Proceedings IEEE/RSJ international conference on intelligent robots and systems - IROS2005, pp. 1003-1008, August 2-6, 2005, Edmonton, Alberta, Canada.

  11. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit; Pfluger, Dirk; Murarasu, Alin; Jacob, Riko

    2012-01-01

    performance and facilitate the use of vector registers for our sparse grid benchmark problem hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations

  12. New methods for sampling sparse populations

    Science.gov (United States)

    Anna Ringvall

    2007-01-01

    To improve surveys of sparse objects, methods that use auxiliary information have been suggested. Guided transect sampling uses prior information, e.g., from aerial photographs, for the layout of survey strips. Instead of being laid out straight, the strips will wind between potentially more interesting areas. 3P sampling (probability proportional to prediction) uses...

  13. On generalized operator quasi-equilibrium problems

    Science.gov (United States)

    Kum, Sangho; Kim, Won Kyu

    2008-09-01

    In this paper, we will introduce the generalized operator equilibrium problem and generalized operator quasi-equilibrium problem which generalize the operator equilibrium problem due to Kazmi and Raouf [K.R. Kazmi, A. Raouf, A class of operator equilibrium problems, J. Math. Anal. Appl. 308 (2005) 554-564] into multi-valued and quasi-equilibrium problems. Using a Fan-Browder type fixed point theorem in [S. Park, Foundations of the KKM theory via coincidences of composites of upper semicontinuous maps, J. Korean Math. Soc. 31 (1994) 493-519] and an existence theorem of equilibrium for 1-person game in [X.-P. Ding, W.K. Kim, K.-K. Tan, Equilibria of non-compact generalized games with L*-majorized preferences, J. Math. Anal. Appl. 164 (1992) 508-517] as basic tools, we prove new existence theorems on generalized operator equilibrium problem and generalized operator quasi-equilibrium problem which includes operator equilibrium problems.

  14. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael; Duursma, Iwan; Dau, Hoang; Hassibi, Babak

    2017-01-01

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.

  15. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael

    2017-08-29

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.

  16. Equilibrium and out-of-equilibrium thermodynamics in supercooled liquids and glasses

    International Nuclear Information System (INIS)

    Mossa, S; Nave, E La; Tartaglia, P; Sciortino, F

    2003-01-01

    We review the inherent structure thermodynamical formalism and the formulation of an equation of state (EOS) for liquids in equilibrium based on the (volume) derivatives of the statistical properties of the potential energy surface. We also show that, under the hypothesis that during ageing the system explores states associated with equilibrium configurations, it is possible to generalize the proposed EOS to out-of-equilibrium (OOE) conditions. The proposed formulation is based on the introduction of one additional parameter which, in the chosen thermodynamic formalism, can be chosen as the local minimum where the slowly relaxing OOE liquid is trapped

  17. Fall Back Equilibrium

    NARCIS (Netherlands)

    Kleppe, J.; Borm, P.E.M.; Hendrickx, R.L.P.

    2008-01-01

    Fall back equilibrium is a refinement of the Nash equilibrium concept. In the underlying thought experiment each player faces the possibility that, after all players decided on their action, his chosen action turns out to be blocked. Therefore, each player has to decide beforehand on a back-up

  18. The application of sparse linear prediction dictionary to compressive sensing in speech signals

    Directory of Open Access Journals (Sweden)

    YOU Hanxu

    2016-04-01

    Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing is one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary serving as the sparse basis of the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis and reconstruction algorithm on CS performance. Results show that the sparse linear prediction dictionary improves the performance of speech signal reconstruction compared with the discrete cosine transform (DCT) matrix.
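
    A minimal sketch of the compressive-sensing pipeline described above, with a random stand-in for the K-SVD dictionary and scikit-learn's OMP as the recovery step; the frame length, sparsity level, and measurement count are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, k, m = 256, 10, 80          # frame length, assumed sparsity, measurements

# Stand-in for a learned sparse linear prediction dictionary (random, for illustration).
D = rng.standard_normal((n, n))
D /= np.linalg.norm(D, axis=0)

# Synthesize a speech frame that is k-sparse in D.
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
frame = D @ coeffs

# Random Gaussian sensing matrix and compressed measurements.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ frame

# OMP recovers the sparse coefficients with respect to the effective dictionary Phi @ D.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ D, y)
frame_hat = D @ omp.coef_
print(np.linalg.norm(frame - frame_hat) / np.linalg.norm(frame))   # relative error
```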

  19. Mutation rules and the evolution of sparseness and modularity in biological systems.

    Directory of Open Access Journals (Sweden)

    Tamar Friedlander

    Full Text Available Biological systems exhibit two structural features on many levels of organization: sparseness, in which only a small fraction of possible interactions between components actually occur; and modularity--the near decomposability of the system into modules with distinct functionality. Recent work suggests that modularity can evolve in a variety of circumstances, including goals that vary in time such that they share the same subgoals (modularly varying goals), or when connections are costly. Here, we studied the origin of modularity and sparseness, focusing on the nature of the mutation process rather than on connection cost or variations in the goal. We use simulations of evolution with different mutation rules. We found that commonly used sum-rule mutations, in which interactions are mutated by adding random numbers, do not lead to modularity or sparseness except in special situations. In contrast, product-rule mutations in which interactions are mutated by multiplying by random numbers--a better model for the effects of biological mutations--led to sparseness naturally. When the goals of evolution are modular, in the sense that specific groups of inputs affect specific groups of outputs, product-rule mutations also lead to modular structure; sum-rule mutations do not. Product-rule mutations generate sparseness and modularity because they tend to reduce interactions, and to keep small interaction terms small.
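
    A toy illustration of the two mutation operators themselves (with no selection or goal, so it is not a reproduction of the paper's simulations): multiplicative product-rule updates tend to shrink interactions toward zero, while additive sum-rule updates do not.

```python
import numpy as np

rng = np.random.default_rng(1)
n_interactions, n_mutations = 50, 40_000

def sparseness_after(mutate):
    """Apply random single-interaction mutations and return the fraction of
    interactions that end up near zero (a crude proxy for sparseness)."""
    w = rng.standard_normal(n_interactions)
    for _ in range(n_mutations):
        i = rng.integers(n_interactions)
        w[i] = mutate(w[i])
    return np.mean(np.abs(w) < 1e-2)

sum_rule = lambda x: x + rng.normal(0.0, 0.1)              # add a random number
product_rule = lambda x: x * (1.0 + rng.normal(0.0, 0.1))  # multiply by a random factor

print("sum-rule fraction near zero:    ", sparseness_after(sum_rule))
print("product-rule fraction near zero:", sparseness_after(product_rule))
```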

  20. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    Science.gov (United States)

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    Science.gov (United States)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the point of view of every market agent resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework. This model embodies binary

  2. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions

    International Nuclear Information System (INIS)

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C.; Brooks, Scott C; Pace, Molly; Kim, Young Jin; Jardine, Philip M.; Watson, David B.

    2007-01-01

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N_E equilibrium reactions and a set of reactive transport equations of M - N_E kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.
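
    One way to see the decomposition, sketched here with sympy on an invented two-equilibrium-reaction toy network (not a network from the paper): kinetic-variables are species combinations lying in the left null space of the equilibrium-reaction stoichiometric submatrix, so their balance equations contain no equilibrium rates. The Gauss-Jordan column reduction used by the authors produces an equivalent set of combinations.

```python
import sympy as sp

# Stoichiometric matrix of the equilibrium reactions only (species x reactions).
# Illustrative 4-species, 2-equilibrium-reaction network:  A <-> B  and  B + C <-> D
S_eq = sp.Matrix([
    [-1,  0],   # A
    [ 1, -1],   # B
    [ 0, -1],   # C
    [ 0,  1],   # D
])

# Kinetic-variables are combinations w of species with w^T S_eq = 0, i.e. the left
# null space of S_eq; with M = 4 species and N_E = 2 equilibrium reactions there
# are M - N_E = 2 such combinations.
for w in S_eq.T.nullspace():
    print(w.T)   # each printed row is one kinetic-variable (combination of A, B, C, D)
```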

  3. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions.

    Science.gov (United States)

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C; Brooks, Scott C; Pace, Molly N; Kim, Young-Jin; Jardine, Philip M; Watson, David B

    2007-06-16

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.

  4. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    Directory of Open Access Journals (Sweden)

    Haruo Hosoya

    2017-07-01

    Full Text Available Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.

  5. Equilibrium relationships in the system Ni-U-O

    International Nuclear Information System (INIS)

    Mansour, N.A.L.

    1980-01-01

    Phase relationships were established in air and in oxygen. Mixtures of NiO and U3O8 oxidize to NiUO4. The results of analysis of NiUO4 were identical to those previously published for NiU3O10. The uranate dissociates to NiO and U3O8 at a temperature higher than that of their oxidation back to the uranate on cooling, because of the difficulty of oxygen diffusion and uranate nucleation. Accordingly, the dissociation temperature was taken to represent equilibrium and was used to roughly calculate ΔH and ΔS for the dissociation reaction. In the presence of NiO, U3O8 melts partially and does not dissociate to the lower oxide. (orig.) [de]

  6. Non-equilibrium phase transitions

    CERN Document Server

    Henkel, Malte; Lübeck, Sven

    2009-01-01

    This book describes two main classes of non-equilibrium phase-transitions: (a) statics and dynamics of transitions into an absorbing state, and (b) dynamical scaling in far-from-equilibrium relaxation behaviour and ageing. The first volume begins with an introductory chapter which recalls the main concepts of phase-transitions, set for the convenience of the reader in an equilibrium context. The extension to non-equilibrium systems is made by using directed percolation as the main paradigm of absorbing phase transitions, and in view of the richness of the known results an entire chapter is devoted to it, including a discussion of recent experimental results. Scaling theories and a large set of both numerical and analytical methods for the study of non-equilibrium phase transitions are thoroughly discussed. The techniques used for directed percolation are then extended to other universality classes and many important results on model parameters are provided for easy reference.

  7. Influence of collective excitations on pre-equilibrium and equilibrium processes

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.; Lunev, V.P.

    1990-01-01

    The influence of collective state excitations on equilibrium and preequilibrium processes in reactions is discussed. It is shown that, for a consistent description of the contributions of preequilibrium and equilibrium compound processes, collective states should be taken into account in the level density calculations. The microscopic and phenomenological approaches for the level density calculations are discussed. 13 refs.; 8 figs

  8. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA, which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.
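
    For concreteness, a minimal numpy sketch of the kind of criterion being minimized (low-rank logits with an L1 penalty on the loadings); the intercept term, the exact penalty form, and the Majorization-Minimization updates used in the paper are omitted here.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def penalized_neg_loglik(X, U, V, lam):
    """Penalized Bernoulli negative log-likelihood for binary data X (n x d):
    the logits are modeled by a low-rank product U @ V.T, and an L1 penalty on
    the loadings V encourages sparse principal components."""
    theta = U @ V.T                       # n x d matrix of logits
    p = sigmoid(theta)
    eps = 1e-12                           # guard against log(0)
    nll = -np.sum(X * np.log(p + eps) + (1 - X) * np.log(1 - p + eps))
    return nll + lam * np.sum(np.abs(V))

# Tiny random example: 30 binary observations, 10 variables, 2 components.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(30, 10)).astype(float)
U = rng.standard_normal((30, 2))
V = rng.standard_normal((10, 2))
print(penalized_neg_loglik(X, U, V, lam=0.5))
```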

  9. Are the Concepts of Dynamic Equilibrium and the Thermodynamic Criteria for Spontaneity, Nonspontaneity, and Equilibrium Compatible?

    Science.gov (United States)

    Silverberg, Lee J.; Raff, Lionel M.

    2015-01-01

    Thermodynamic spontaneity-equilibrium criteria require that in a single-reaction system, reactions in either the forward or reverse direction at equilibrium be nonspontaneous. Conversely, the concept of dynamic equilibrium holds that forward and reverse reactions both occur at equal rates at equilibrium to the extent allowed by kinetic…

  10. Implementing an Equilibrium Law Teaching Sequence for Secondary School Students to Learn Chemical Equilibrium

    Science.gov (United States)

    Ghirardi, Marco; Marchetti, Fabio; Pettinari, Claudio; Regis, Alberto; Roletto, Ezio

    2015-01-01

    A didactic sequence is proposed for the teaching of chemical equilibrium law. In this approach, we have avoided the kinetic derivation and the thermodynamic justification of the equilibrium constant. The equilibrium constant expression is established empirically by a trial-and-error approach. Additionally, students learn to use the criterion of…

  11. Transition from equilibrium ignition to non-equilibrium burn for ICF capsules surrounded by a high-Z pusher

    International Nuclear Information System (INIS)

    Li, Ji W.; Chang, Lei; Li, Yun S.; Li, Jing H.

    2011-01-01

    For an ICF capsule surrounded by a high-Z pusher, which traps the radiation and confines the hot fuel, the fuel is first ignited in thermal equilibrium with the radiation at a much lower temperature than in hot-spot ignition; this is also referred to as low-temperature ignition. Because of the lower areal density of such ICF capsules, the equilibrium ignition must develop into a non-equilibrium burn to shorten the reaction time and lower the drive energy. In this paper, the transition from equilibrium ignition to non-equilibrium burn is discussed, and the energy that must be deposited by α particles for equilibrium ignition and non-equilibrium burn to occur is estimated.

  12. The free energies of partially open coronal magnetic fields

    Science.gov (United States)

    Low, B. C.; Smith, D. F.

    1993-01-01

    A simple model of the low corona is examined in terms of a static polytropic atmosphere in equilibrium with a global magnetic field. The question posed is whether magnetostatic states with partially open magnetic fields may contain magnetic energies in excess of those in fully open magnetic fields. Based on the analysis presented here, it is concluded that the cross-field electric currents in the pre-eruption corona are a viable source of the bulk of the energies in a mass ejection and its associated flare.

  13. Sparse linear systems: Theory of decomposition, methods, technology, applications and implementation in Wolfram Mathematica

    Energy Technology Data Exchange (ETDEWEB)

    Pilipchuk, L. A., E-mail: pilipchik@bsu.by [Belarussian State University, 220030 Minsk, 4, Nezavisimosti avenue, Republic of Belarus (Belarus)]; Pilipchuk, A. S., E-mail: an.pilipchuk@gmail.com [The Natural Resources and Environmental Protection Ministry of the Republic of Belarus, 220004 Minsk, 10 Kollektornaya Street, Republic of Belarus (Belarus)]

    2015-11-30

    In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing the solutions of sparse linear systems. One of the applications is the Sensor Location Problem for the symmetric graph in the case when the split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and investigate its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.

  14. Sparse linear systems: Theory of decomposition, methods, technology, applications and implementation in Wolfram Mathematica

    International Nuclear Information System (INIS)

    Pilipchuk, L. A.; Pilipchuk, A. S.

    2015-01-01

    In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing the solutions of sparse linear systems. One of the applications is the Sensor Location Problem for the symmetric graph in the case when the split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and investigate its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.

  15. Examples of equilibrium and non-equilibrium behavior in evolutionary systems

    Science.gov (United States)

    Soulier, Arne

    With this thesis, we want to shed some light into the darkness of our understanding of simply defined statistical mechanics systems and the surprisingly complex dynamical behavior they exhibit. We will do so by presenting in turn one equilibrium and then one non-equilibrium system with evolutionary dynamics. In part 1, we will present the seceder-model, a newly developed system that cannot equilibrate. We will then study several properties of the system and obtain an idea of the richness of the dynamics of the seceder model, which is particular impressive given the minimal amount of modeling necessary in its setup. In part 2, we will present extensions to the directed polymer in random media problem on a hypercube and its connection to the Eigen model of evolution. Our main interest will be the influence of time-dependent and time-independent changes in the fitness landscape viewed by an evolving population. This part contains the equilibrium dynamics. The stochastic models and the topic of evolution and non-equilibrium in general will allow us to point out similarities to the various lines of thought in game theory.

  16. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Qi Jia

    2015-03-01

    Full Text Available In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of the patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
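
    The sparse-representation step described above follows the usual sparse-representation classification (SRC) pattern; the sketch below uses scikit-learn's Lasso on random stand-ins for the weighted LBP patch features, so the dimensions, penalty, and class layout are illustrative assumptions rather than the paper's multi-layer pipeline.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, alpha=0.01):
    """Sparse-representation classification: code the test feature y over all
    training features A (columns), then pick the class whose atoms explain y best."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(A, y)
    x = lasso.coef_
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)          # keep only class-c coefficients
        residual = np.linalg.norm(y - A @ x_c)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# Toy usage with random stand-ins for weighted LBP patch histograms.
rng = np.random.default_rng(0)
A = rng.random((256, 60))                 # 60 training faces, 256-dim features
A /= np.linalg.norm(A, axis=0)
labels = np.repeat(np.arange(6), 10)      # 6 expression classes, 10 samples each
y = A[:, 7] + 0.05 * rng.standard_normal(256)   # noisy copy of a class-0 sample
print(src_classify(A, labels, y))
```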

  17. Face recognition via sparse representation of SIFT feature on hexagonal-sampling image

    Science.gov (United States)

    Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong

    2018-04-01

    This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic feature used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process more efficient, we extract SIFT keypoints in hexagonal-sampling images. Instead of matching SIFT features, first the sparse representation of each SIFT keypoint is computed with respect to the constructed dictionary; second, these sparse vectors are quantized according to the dictionary; finally, each face image is represented by a histogram and these so-called Bag-of-Words vectors are classified by an SVM. Due to the use of local features, the proposed method achieves better results even when the number of training samples is small. In the experiments, the proposed method gave higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.

  18. Low-rank sparse learning for robust visual tracking

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Ahuja, Narendra

    2012-01-01

    In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm

  19. An algorithm for 3D target scatterer feature estimation from sparse SAR apertures

    Science.gov (United States)

    Jackson, Julie Ann; Moses, Randolph L.

    2009-05-01

    We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.

  20. An Improved Information Hiding Method Based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Minghai Yao

    2015-01-01

    Full Text Available A novel biometric authentication information hiding method based on sparse representation is proposed for enhancing the security of biometric information transmitted over the network. In order to make good use of the abundant information of the cover image, the sparse representation method is adopted to exploit the correlation between the cover and biometric images. Thus, the biometric image is divided into two parts: the first part is the reconstructed image, and the other part is the residual image. The biometric authentication image cannot be restored from either part alone. The residual image and the sparse representation coefficients are embedded into the cover image. Then, to attract less attention from attackers, a visual attention mechanism is employed to select the embedding location and embedding sequence of the secret information. Finally, a reversible watermarking algorithm based on histograms is used to embed the secret information. To verify the validity of the algorithm, the PolyU multispectral palmprint and the CASIA iris databases are used as biometric information. The experimental results show that the proposed method exhibits good security, invisibility, and high capacity.

  1. Equilibrium models and variational inequalities

    CERN Document Server

    Konnov, Igor

    2007-01-01

    The concept of equilibrium plays a central role in various applied sciences, such as physics (especially mechanics), economics, engineering, transportation, sociology, chemistry, biology and other fields. If one can formulate the equilibrium problem in the form of a mathematical model, solutions of the corresponding problem can be used for forecasting the future behavior of very complex systems and, also, for correcting the current state of the system under control. This book presents a unifying look on different equilibrium concepts in economics, including several models from related sciences.- Presents a unifying look on different equilibrium concepts and also the present state of investigations in this field- Describes static and dynamic input-output models, Walras, Cassel-Wald, spatial price, auction market, oligopolistic equilibrium models, transportation and migration equilibrium models- Covers the basics of theory and solution methods both for the complementarity and variational inequality probl...

  2. Mechanism of alkalinity lowering and chemical equilibrium model of high fly ash silica fume cement

    International Nuclear Information System (INIS)

    Hoshino, Seiichi; Honda, Akira; Negishi, Kumi

    2014-01-01

    The mechanism of alkalinity lowering of a High Fly ash Silica fume Cement (HFSC) under liquid/solid ratio conditions where the pH is largely controlled by the soluble alkali components (Region I) has been studied. This mechanism was incorporated in the chemical equilibrium model of HFSC. As a result, it is suggested that the dissolution and precipitation behavior of SO4^2- partially contributes to alkalinity lowering of HFSC in Region I. A chemical equilibrium model of HFSC incorporating alkali (Na, K) adsorption, which was presumed as another contributing factor of the alkalinity lowering effect, was also developed, and an HFSC immersion experiment was analyzed using the model. The results of the developed model showed good agreement with the experiment results. From the above results, it was concluded that the alkalinity lowering of HFSC in Region I was attributed to both the dissolution and precipitation behavior of SO4^2- and alkali adsorption, in addition to the absence of Ca(OH)2. A chemical equilibrium model of HFSC incorporating alkali and SO4^2- adsorption was also proposed. (author)

  3. Sparse reconstruction by means of the standard Tikhonov regularization

    International Nuclear Information System (INIS)

    Lu Shuai; Pereverzev, Sergei V

    2008-01-01

    It is a common belief that the Tikhonov scheme with an L2-norm penalty fails in sparse reconstruction. We show, however, that this standard regularization can help if the stability, measured in the L1-norm, is properly taken into account in the choice of the regularization parameter. The crucial point is that such a stability bound may depend on the bases with respect to which the solution of the problem is assumed to be sparse. We discuss how this stability can be estimated numerically and present the results of computational experiments giving evidence of the reliability of our approach.
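
    A minimal numerical sketch of the standard Tikhonov solution with a scan over the regularization parameter while monitoring the L1 norm of the reconstruction; the actual parameter-choice rule analyzed in the paper is more refined than this illustrative scan.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 120, 5

# Sparse ground truth and noisy data for an underdetermined problem.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def tikhonov(A, y, lam):
    """Standard Tikhonov (L2-penalized) solution (A^T A + lam I)^{-1} A^T y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Scan the regularization parameter and monitor the L1 norm of the solution,
# the quantity the stability analysis above is phrased in.
for lam in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    x_lam = tikhonov(A, y, lam)
    print(f"lam={lam:<6}  ||x||_1={np.abs(x_lam).sum():6.2f}  "
          f"error={np.linalg.norm(x_lam - x_true):5.2f}")
```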

  4. Sparse modeling applied to patient identification for safety in medical physics applications

    Science.gov (United States)

    Lewkowitz, Stephanie

    Every scheduled treatment at a radiation therapy clinic involves a series of safety protocols to ensure the utmost patient care. Despite these protocols, on rare occasions an entirely preventable medical event, an accident, may occur. Delivering a treatment plan to the wrong patient is preventable, yet it is still a clinically documented error. This research describes a computational method to identify patients with a novel machine learning technique to combat misadministration. The patient identification program stores face and fingerprint data for each patient. New, unlabeled data from those patients are categorized according to the library. The categorization of data by this face-fingerprint detector is accomplished with new machine learning algorithms based on Sparse Modeling that have already begun transforming the foundation of Computer Vision. Previous patient recognition software required special subroutines for faces and differently tailored subroutines for fingerprints. In this research, the exact same model is used for both fingerprints and faces, without any additional subroutines and even without adjusting the two hyperparameters. Sparse modeling is a powerful tool that has already shown utility in the areas of super-resolution, denoising, inpainting, demosaicing, and sub-Nyquist sampling, i.e., compressed sensing. Sparse Modeling is possible because natural images are inherently sparse in some bases, owing to their inherent structure. This research chooses datasets of face and fingerprint images to test the patient identification model. The model stores the images of each dataset as a basis (library). One image at a time is removed from the library and is classified by a sparse code in terms of the remaining library. The Locally Competitive Algorithm, a truly neurally inspired Artificial Neural Network, solves the computationally difficult task of finding the sparse code for the test image. The components of the sparse representation vector are summed by ℓ1 pooling
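
    A minimal sketch of the Locally Competitive Algorithm in its usual form (leaky-integrator dynamics with a soft threshold); the dictionary, step size, and threshold below are illustrative, and the face/fingerprint pipeline and the ℓ1 pooling stage are not reproduced here.

```python
import numpy as np

def lca(Phi, y, lam=0.1, step=0.01, n_iter=500):
    """Locally Competitive Algorithm for the sparse code of y in the dictionary Phi:
    leaky-integrator dynamics on internal states u with a soft-threshold
    nonlinearity producing the active coefficients a."""
    gram = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition between atoms
    drive = Phi.T @ y                           # feed-forward input
    u = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
        u += step * (drive - u - gram @ a)                   # Euler integration step
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Toy usage: recover a 3-sparse code from a random normalized dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 200))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(200)
a_true[rng.choice(200, 3, replace=False)] = 1.0
y = Phi @ a_true
a_hat = lca(Phi, y)
print(np.flatnonzero(np.abs(a_hat) > 0.1))   # indices of the recovered active atoms
```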

  5. Deviations from thermal equilibrium in plasmas

    International Nuclear Information System (INIS)

    Burm, K.T.A.L.

    2004-01-01

    A plasma system in local thermal equilibrium can usually be described with only two parameters. To describe deviations from equilibrium two extra parameters are needed. However, it will be shown that deviations from temperature equilibrium and deviations from Saha equilibrium depend on one another. As a result, non-equilibrium plasmas can be described with three parameters. This reduction in parameter space will ease the plasma describing effort enormously

  6. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho; Huang, Jianhua Z.; Hu, Jianhua

    2010-01-01

    with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated

  7. Sparse-View Ultrasound Diffraction Tomography Using Compressed Sensing with Nonuniform FFT

    Directory of Open Access Journals (Sweden)

    Shaoyan Hua

    2014-01-01

    Full Text Available Accurate reconstruction of the object from sparse-view sampling data is an important issue for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on the compressed sensing framework for sparse-view UDT. Due to the piecewise uniform characteristics of anatomical structures, the total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is iteratively solved by conjugate gradient with a nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly represented with only 16 views. Compared to interpolation and multiband methods, the proposed method provides higher resolution and fewer artifacts for the same number of views. The robustness to noise and the computational complexity are also discussed.

  8. A model for non-equilibrium, non-homogeneous two-phase critical flow

    International Nuclear Information System (INIS)

    Bassel, Wageeh Sidrak; Ting, Daniel Kao Sun

    1999-01-01

    Critical two-phase flow is a very important phenomenon in nuclear reactor technology for the analysis of loss-of-coolant accidents. Several recent papers, Lee and Shrock (1990), Dagan (1993) and Downar (1996), among others, treat the phenomenon using complex models which require heuristic parameters such as relaxation constants or interfacial transfer models. In this paper a mathematical model for one-dimensional non-equilibrium, non-homogeneous two-phase flow in a constant-area duct is developed. The model consists of three conservation equations: mass, momentum and energy. Two important variables are defined in the model: the equilibrium constant in the energy equation and the impulse function in the momentum equation. In the energy equation, the enthalpy of the liquid phase is determined by a linear interpolation function between the liquid phase enthalpy at the inlet condition and the saturated liquid enthalpy at the local pressure. The interpolation coefficient is the equilibrium constant. The momentum equation is expressed in terms of the impulse function. It is considered that there is slip between the liquid and vapor phases, that the liquid phase is in a metastable state and that the vapor phase is in a saturated stable state. The model is not heuristic in nature and does not require complex interface transfer models. It is proved numerically that at the critical condition the partial derivative of the two-phase pressure drop with respect to the local pressure or to the phase velocity must be zero. This criterion is demonstrated by numerical examples. The experimental work of Fauske (1962) and Jeandey (1982) was analyzed, resulting in estimated numerical values for important parameters such as slip ratio, equilibrium constant and two-phase frictional drop. (author)

  9. LP Well-Posedness for Bilevel Vector Equilibrium and Optimization Problems with Equilibrium Constraints

    OpenAIRE

    Khanh, Phan Quoc; Plubtieng, Somyot; Sombut, Kamonrat

    2014-01-01

    The purpose of this paper is to introduce several types of Levitin-Polyak well-posedness for bilevel vector equilibrium and optimization problems with equilibrium constraints. Based on criteria and characterizations for these types of Levitin-Polyak well-posedness, we argue in terms of diameters and Kuratowski’s, Hausdorff’s, or Istrǎtescu’s measures of noncompactness of approximate solution sets under suitable conditions, and we prove the Levitin-Polyak well-posedness for bilevel vector equilibrium and op...

  10. On Sparse Multi-Task Gaussian Process Priors for Music Preference Learning

    DEFF Research Database (Denmark)

    Nielsen, Jens Brehm; Jensen, Bjørn Sand; Larsen, Jan

    In this paper we study pairwise preference learning in a music setting with multitask Gaussian processes and examine the effect of sparsity in the input space as well as in the actual judgments. To introduce sparsity in the inputs, we extend a classic pairwise likelihood model to support sparse...... simulation shows the performance on a real-world music preference dataset which motivates and demonstrates the potential of the sparse Gaussian process formulation for pairwise likelihoods....

  11. Linear Regression on Sparse Features for Single-Channel Speech Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Olsson, Rasmus Kongsgaard

    2007-01-01

    In this work we address the problem of separating multiple speakers from a single microphone recording. We formulate a linear regression model for estimating each speaker based on features derived from the mixture. The employed feature representation is a sparse, non-negative encoding of the speech...... mixture in terms of pre-learned speaker-dependent dictionaries. Previous work has shown that this feature representation by itself provides some degree of separation. We show that the performance is significantly improved when regression analysis is performed on the sparse, non-negative features, both...

  12. Quasi optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient

    KAUST Repository

    Tamellini, Lorenzo

    2016-01-05

    In this talk we discuss possible strategies to minimize the impact of the curse of dimensionality effect when building sparse-grid approximations of a multivariate function u = u(y_1, ..., y_N). More precisely, we present a knapsack approach, in which we estimate the cost and the error reduction contribution of each possible component of the sparse grid, and then we choose the components with the highest error reduction/cost ratio. The estimates of the error reduction are obtained by either a mixed a-priori/a-posteriori approach, in which we first derive a theoretical bound and then tune it with some inexpensive auxiliary computations (resulting in the so-called quasi-optimal sparse grids), or by a fully a-posteriori approach (obtaining the so-called adaptive sparse grids). This framework is very general and can be used to build quasi-optimal/adaptive sparse grids on bounded and unbounded domains (e.g. u depending on uniform and normal random distributions for y_n), using both nested and non-nested families of univariate collocation points. We present some theoretical convergence results as well as numerical results showing the efficiency of the proposed approach for the approximation of the solution of elliptic PDEs with random diffusion coefficients. In this context, to treat the case of rough permeability fields in which a sparse grid approach may not be suitable, we propose to use the sparse grids as a control variate in a Monte Carlo simulation.
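
    The knapsack idea can be sketched as a greedy selection by error-reduction-to-cost ratio under a work budget; the multi-indices and estimates below are illustrative placeholders, not values produced by the a-priori/a-posteriori estimators discussed above.

```python
# Greedy "knapsack" selection of sparse-grid components: take components in
# decreasing order of (estimated error reduction) / (estimated cost) until a
# work budget is exhausted.
components = [
    # (multi-index, estimated error reduction, estimated cost)
    ((1, 1), 1.00, 1.0),
    ((2, 1), 0.40, 2.0),
    ((1, 2), 0.35, 2.0),
    ((2, 2), 0.10, 4.0),
    ((3, 1), 0.08, 4.0),
]

budget = 6.0
chosen, work = [], 0.0
for idx, gain, cost in sorted(components, key=lambda c: c[1] / c[2], reverse=True):
    if work + cost <= budget:
        chosen.append(idx)
        work += cost
print(chosen)   # components of the quasi-optimal grid under this budget
```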

  13. Sparse Channel Estimation Including the Impact of the Transceiver Filters with Application to OFDM

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Pedersen, Niels Lovmand; Manchón, Carles Navarro

    2014-01-01

    Traditionally, the dictionary matrices used in sparse wireless channel estimation have been based on the discrete Fourier transform, following the assumption that the channel frequency response (CFR) can be approximated as a linear combination of a small number of multipath components, each one......) and receive (demodulation) filters. Hence, the assumption of the CFR being sparse in the canonical Fourier dictionary may no longer hold. In this work, we derive a signal model and subsequently a novel dictionary matrix for sparse estimation that account for the impact of transceiver filters. Numerical...... results obtained in an OFDM transmission scenario demonstrate the superior accuracy of a sparse estimator that uses our proposed dictionary rather than the classical Fourier dictionary, and its robustness against a mismatch in the assumed transmit filter characteristics....
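
    A minimal sketch of a delay dictionary that folds in a combined transmit/receive filter response, in contrast to the plain Fourier dictionary; the filter shape, subcarrier spacing, and delay grid are illustrative assumptions, not the model derived in the paper.

```python
import numpy as np

# Build a delay dictionary for CFR estimation that includes the combined
# transmit/receive filter response G(f); here G is a toy raised-cosine-like
# magnitude response standing in for the actual transceiver filters.
n_sub = 64                              # OFDM subcarriers
df = 15e3                               # subcarrier spacing (Hz), illustrative
f = (np.arange(n_sub) - n_sub / 2) * df
G = np.cos(np.pi * f / (2 * f.max())) ** 2      # combined Tx/Rx filter response

delays = np.linspace(0.0, 5e-6, 128)    # candidate path delays (s)
# Each column: filter response times the complex exponential of one candidate delay.
Psi = G[:, None] * np.exp(-2j * np.pi * f[:, None] * delays[None, :])

# A sparse estimator (e.g. OMP or a sparse Bayesian learner) would now be run
# with Psi instead of the plain Fourier dictionary exp(-j 2 pi f tau).
print(Psi.shape)    # (64, 128): subcarriers x candidate delays
```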

  14. Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer.

    Science.gov (United States)

    Hu, Yujing; Gao, Yang; An, Bo

    2015-07-01

    An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions. Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.
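
    The reuse condition can be sketched as follows for a two-player game with pure strategies (the paper treats the general mixed-strategy case and defines transfer loss and the transfer condition more carefully); the payoffs and threshold are illustrative.

```python
import numpy as np

def max_deviation_incentive(payoffs, strategy):
    """Largest gain any agent could obtain by unilaterally deviating from the given
    joint (pure) strategy in a two-player normal-form game.
    payoffs: array of shape (2, n_actions_1, n_actions_2)."""
    a1, a2 = strategy
    gain1 = payoffs[0, :, a2].max() - payoffs[0, a1, a2]
    gain2 = payoffs[1, a1, :].max() - payoffs[1, a1, a2]
    return max(gain1, gain2)

def reuse_or_recompute(payoffs, cached_equilibrium, eps, solve_equilibrium):
    """Equilibrium transfer in spirit: reuse the previously computed equilibrium of a
    similar one-shot game when no agent can gain more than eps by deviating;
    otherwise fall back to the (expensive) equilibrium solver."""
    if max_deviation_incentive(payoffs, cached_equilibrium) <= eps:
        return cached_equilibrium                 # transfer: skip the computation
    return solve_equilibrium(payoffs)             # recompute from scratch

# Toy usage: a coordination game whose cached equilibrium still (almost) holds.
game = np.array([[[3.0, 0.0], [0.0, 2.0]],        # player 1 payoffs
                 [[3.0, 0.0], [0.0, 2.0]]])       # player 2 payoffs
print(reuse_or_recompute(game, cached_equilibrium=(0, 0), eps=0.1,
                         solve_equilibrium=lambda g: "recomputed"))
```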

  15. A Multiperiod Equilibrium Pricing Model

    Directory of Open Access Journals (Sweden)

    Minsuk Kwak

    2014-01-01

    Full Text Available We propose an equilibrium pricing model in a dynamic multiperiod stochastic framework with uncertain income. The market contains one tradable risky asset (stock/commodity), one nontradable underlying (temperature), and a contingent claim (weather derivative) written on the tradable risky asset and the nontradable underlying. The contingent claim is priced in equilibrium by the optimal strategies of a representative agent and the market clearing condition. The risk preferences are of exponential type with a stochastic coefficient of risk aversion. Both the subgame perfect strategy and the naive strategy are considered and the corresponding equilibrium prices are derived. From the numerical results we examine how the equilibrium prices vary in response to changes in model parameters and highlight the importance of our equilibrium pricing principle.

  16. Measured MHD equilibrium in Alcator C

    International Nuclear Information System (INIS)

    Pribyl, P.A.

    1986-03-01

    A method of processing data from a set of partial Rogowski loops is developed to study the MHD equilibrium in Alcator C. Time-dependent poloidal fields in the vicinity of the plasma are calculated from measured currents, with field penetration effects being accounted for. Fields from eddy currents induced by the plasma in the tokamak structure are estimated as well. Each of the set of twelve B_θ measurements can then be separated into a component from the plasma current and a component from currents external to the pickup loops. Harmonic solutions to Maxwell's equations in toroidal coordinates are fit to these measurements in order to infer the fields everywhere in the vacuum region surrounding the plasma. Using this diagnostic, the plasma current, position, shape, and the Shafranov term Λ = β_p + l_i/2 - 1 may be computed, and systematic studies of these plasma parameters are undertaken for Alcator C plasmas.

  17. Preconditioned Inexact Newton for Nonlinear Sparse Electromagnetic Imaging

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2014-01-01

    with smoothness-promoting optimization/regularization schemes. However, this type of regularization scheme is known to perform poorly when applied in imaging domains with sparse content or sharp variations. In this work, an inexact Newton algorithm

  18. Equilibrium studies of helical axis stellarators

    International Nuclear Information System (INIS)

    Hender, T.C.; Carreras, B.A.; Garcia, L.; Harris, J.H.; Rome, J.A.; Cantrell, J.L.; Lynch, V.E.

    1984-01-01

    The equilibrium properties of helical axis stellarators are studied with a 3-D equilibrium code and with an average method (2-D). The helical axis ATF is shown to have a toroidally dominated equilibrium shift and good equilibria up to at least 10% peak beta. Low aspect ratio heliacs, with relatively large toroidal shifts, are shown to have low equilibrium beta limits (approx. 5%). Increasing the aspect ratio and number of field periods proportionally is found to improve the equilibrium beta limit. Alternatively, increasing the number of field periods at fixed aspect ratio, which raises and lowers the toroidal shift, improves the equilibrium beta limit.

  19. Dose-shaping using targeted sparse optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sayre, George A.; Ruan, Dan [Department of Radiation Oncology, University of California - Los Angeles School of Medicine, 200 Medical Plaza, Los Angeles, California 90095 (United States)

    2013-07-15

    Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present our findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of our method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs at risk (OARs); and (3) a total variation energy, i.e., the L_1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. As a result, the proposed objective E_tot

  20. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer; Ghanem, Bernard

    2017-01-01

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images

  1. A Novel Design of Sparse Prototype Filter for Nearly Perfect Reconstruction Cosine-Modulated Filter Banks

    Directory of Open Access Journals (Sweden)

    Wei Xu

    2018-05-01

    Full Text Available Cosine-modulated filter banks play a major role in digital signal processing. Sparse FIR filter banks have lower implementation complexity than full filter banks, while keeping a good performance level. This paper presents a fast design paradigm for sparse nearly perfect-reconstruction (NPR) cosine-modulated filter banks. First, an approximation function is introduced to reduce the non-convex quadratically constrained optimization problem to a linearly constrained optimization problem. Then, the desired sparse linear-phase FIR prototype filter is derived through the orthogonal matching pursuit (OMP) performed under the weighted l_2 norm. The simulation results demonstrate that the proposed scheme is an effective paradigm to design sparse NPR cosine-modulated filter banks.
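
    For readers unfamiliar with the greedy step named above, a plain orthogonal matching pursuit loop is sketched below; the weighted l_2 norm and the filter-bank constraints of the paper are omitted, so this is only an illustrative, simplified stand-in.

        import numpy as np

        def omp(A, y, k):
            """Plain orthogonal matching pursuit: pick k columns of A greedily and
            re-fit the coefficients on the selected support by least squares."""
            residual, support = y.copy(), []
            x = np.zeros(A.shape[1])
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                x[:] = 0.0
                x[support] = coef
                residual = y - A @ x
            return x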

  2. The Equilibrium Rule--A Personal Discovery

    Science.gov (United States)

    Hewitt, Paul G.

    2016-01-01

    Examples of equilibrium are evident everywhere and the equilibrium rule provides a reasoned way to view all things, whether in static (balancing rocks, steel beams in building construction) or dynamic (airplanes, bowling balls) equilibrium. Interestingly, the equilibrium rule applies not just to objects at rest but whenever any object or system of…

  3. Efficient Pseudorecursive Evaluation Schemes for Non-adaptive Sparse Grids

    KAUST Repository

    Buse, Gerrit

    2014-01-01

    In this work we propose novel algorithms for storing and evaluating sparse grid functions, operating on regular (not spatially adaptive), yet potentially dimensionally adaptive grid types. Besides regular sparse grids our approach includes truncated grids, both with and without boundary grid points. Similar to the implicit data structures proposed in Feuersänger (Dünngitterverfahren für hochdimensionale elliptische partielle Differentialgleichungen. Diploma Thesis, Institut für Numerische Simulation, Universität Bonn, 2005) and Murarasu et al. (Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming. Cambridge University Press, New York, 2011, pp. 25–34) we also define a bijective mapping from the multi-dimensional space of grid points to a contiguous index, such that the grid data can be stored in a simple array without overhead. Our approach is especially well-suited to exploit all levels of current commodity hardware, including cache-levels and vector extensions. Furthermore, this kind of data structure is extremely attractive for today’s real-time applications, as it gives direct access to the hierarchical structure of the grids, while outperforming other common sparse grid structures (hash maps, etc.) which do not match with modern compute platforms that well. For dimensionality d ≤ 10 we achieve good speedups on a 12 core Intel Westmere-EP NUMA platform compared to the results presented in Murarasu et al. (Proceedings of the International Conference on Computational Science—ICCS 2012. Procedia Computer Science, 2012). As we show, this also holds for the results obtained on Nvidia Fermi GPUs, for which we observe speedups over our own CPU implementation of up to 4.5 when dealing with moderate dimensionality. In high-dimensional settings, in the order of tens to hundreds of dimensions, our sparse grid evaluation kernels on the CPU outperform any other known implementation.

  4. Noniterative MAP reconstruction using sparse matrix representations.

    Science.gov (United States)

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations, which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.

  5. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    Science.gov (United States)

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L_2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.

  6. Fast solution of elliptic partial differential equations using linear combinations of plane waves.

    Science.gov (United States)

    Pérez-Jordá, José M

    2016-02-01

    Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax = b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods in order to solve it. This sparseness problem can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(N log N) memory and executing an iteration in O(N log^2 N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
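
    The central trick above, replacing an explicit dense matrix by a fast FFT-based matrix-vector product inside an iterative solver, can be mimicked with SciPy's LinearOperator and GMRES. The circulant stand-in operator below is an assumption for demonstration only; it is not the plane-wave operator of the paper and no multigrid preconditioner is used.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        # Stand-in: a circulant matrix applied via the FFT, so the matrix is
        # never formed explicitly (mimicking the O(N log N) matvec idea).
        n = 256
        kernel = np.exp(-np.arange(n) / 10.0)
        kernel_hat = np.fft.fft(kernel)

        def matvec(x):
            return np.real(np.fft.ifft(kernel_hat * np.fft.fft(x)))

        A = LinearOperator((n, n), matvec=matvec)
        b = np.random.rand(n)
        x, info = gmres(A, b)                     # default tolerances
        print(info, np.linalg.norm(matvec(x) - b))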

  7. Ionic diffusion through confined geometries: from Langevin equations to partial differential equations

    International Nuclear Information System (INIS)

    Nadler, Boaz; Schuss, Zeev; Singer, Amit; Eisenberg, R S

    2004-01-01

    Ionic diffusion through and near small domains is of considerable importance in molecular biophysics in applications such as permeation through protein channels and diffusion near the charged active sites of macromolecules. The motion of the ions in these settings depends on the specific nanoscale geometry and charge distribution in and near the domain, so standard continuum type approaches have obvious limitations. The standard machinery of equilibrium statistical mechanics includes microscopic details, but is also not applicable, because these systems are usually not in equilibrium due to concentration gradients and to the presence of an external applied potential, which drive a non-vanishing stationary current through the system. We present a stochastic molecular model for the diffusive motion of interacting particles in an external field of force and a derivation of effective partial differential equations and their boundary conditions that describe the stationary non-equilibrium system. The interactions can include electrostatic, Lennard-Jones and other pairwise forces. The analysis yields a new type of Poisson-Nernst-Planck equations, that involves conditional and unconditional charge densities and potentials. The conditional charge densities are the non-equilibrium analogues of the well studied pair correlation functions of equilibrium statistical physics. Our proposed theory is an extension of equilibrium statistical mechanics of simple fluids to stationary non-equilibrium problems. The proposed system of equations differs from the standard Poisson-Nernst-Planck system in two important aspects. First, the force term depends on conditional densities and thus on the finite size of ions, and second, it contains the dielectric boundary force on a discrete ion near dielectric interfaces. Recently, various authors have shown that both of these terms are important for diffusion through confined geometries in the context of ion channels

  8. SOLGAS refined: A computerized thermodynamic equilibrium calculation tool

    International Nuclear Information System (INIS)

    Trowbridge, L.D.; Leitnaker, J.M.

    1993-11-01

    SOLGAS, an early computer program for calculating equilibrium in a chemical system, has been made more user-friendly, and several 'bells and whistles' have been added. The necessity to include elemental species has been eliminated. The input of large numbers of starting conditions has been automated. A revised format for entering data simplifies and reduces chances for error. Calculated errors by SOLGAS are flagged, and several programming errors are corrected. Auxiliary programs are available to assemble and partially automate plotting of large amounts of data. Thermodynamic input data can be changed 'on line.' The program can be operated with or without a co-processor. Copies of the program, suitable for the IBM-PC or compatible with at least 384 bytes of low RAM, are available from the authors

  9. A Low Delay and Fast Converging Improved Proportionate Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Benesty Jacob

    2007-01-01

    Full Text Available A sparse system identification algorithm for network echo cancellation is presented. This new approach exploits both the fast convergence of the improved proportionate normalized least mean square (IPNLMS algorithm and the efficient implementation of the multidelay adaptive filtering (MDF algorithm inheriting the beneficial properties of both. The proposed IPMDF algorithm is evaluated using impulse responses with various degrees of sparseness. Simulation results are also presented for both speech and white Gaussian noise input sequences. It has been shown that the IPMDF algorithm outperforms the MDF and IPNLMS algorithms for both sparse and dispersive echo path impulse responses. Computational complexity of the proposed algorithm is also discussed.
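
    As background for the proportionate idea referenced above, a bare IPNLMS-style tap update is sketched below; the frequency-domain multidelay (MDF) structure of the actual IPMDF algorithm is omitted, and the step size, regularization and proportionality constant are illustrative choices.

        import numpy as np

        def ipnlms_update(w, x_buf, d, mu=0.2, alpha=0.0, delta=1e-6):
            """One IPNLMS-style adaptive-filter update. w: current taps,
            x_buf: the most recent input samples (same length as w), d: desired
            sample. Larger taps receive proportionally larger step sizes."""
            L = len(w)
            e = d - np.dot(w, x_buf)                       # a priori error
            g = (1 - alpha) / (2 * L) \
                + (1 + alpha) * np.abs(w) / (2 * np.sum(np.abs(w)) + delta)
            norm = np.dot(g * x_buf, x_buf) + delta        # gain-weighted input power
            return w + mu * e * g * x_buf / norm, e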

  10. Robust visual tracking via multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Ahuja, Narendra

    2012-01-01

    In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates

  11. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning.

    Science.gov (United States)

    Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-02-22

    The miniaturization of spectrometer can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation by utilizing broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm of spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer.
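
    The "system of linear equations plus sparsity prior" formulation described above can be mimicked in a few lines: synthetic filter transmissions F, a dictionary D, and a LASSO solve for the sparse coefficients. All sizes, the random F and D, and the regularization weight are placeholders, not the authors' calibrated prototype.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_filters, n_wavelengths, n_atoms = 16, 64, 64

        F = rng.random((n_filters, n_wavelengths))         # broadband filter transmissions
        D = rng.standard_normal((n_wavelengths, n_atoms))  # dictionary (e.g. learned)
        c_true = np.zeros(n_atoms)
        c_true[[3, 17, 40]] = [1.0, 0.6, 0.3]              # sparse coefficients
        s_true = D @ c_true                                # underlying spectrum
        y = F @ s_true                                     # filter measurements

        lasso = Lasso(alpha=1e-3, max_iter=50000).fit(F @ D, y)
        s_hat = D @ lasso.coef_
        print(np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))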

  12. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available Both L_{1/2} and L_{2/3} are typical non-convex regularizations of L_p (0 < p < 1), which can be employed to obtain a sparser solution than the L_1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L_1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative L_p thresholding algorithm and then proposes a sparse adaptive iteratively-weighted L_p thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based L_p regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L_1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based L_p case. Moreover, we conduct some applications on sparse image recovery and obtain good results by comparison with related work.

  13. Shape prior modeling using sparse representation and online dictionary learning.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts from constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-Ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
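
    A hedged sketch of the online-update idea follows, using scikit-learn's MiniBatchDictionaryLearning.partial_fit as a stand-in for the K-SVD initialization plus block-coordinate descent updates described above; the "shape" vectors are random placeholders rather than real segmentations.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(0)
        d = 60                                    # length of a vectorized training shape
        dico = MiniBatchDictionaryLearning(n_components=20, alpha=0.5,
                                           transform_algorithm='lasso_lars',
                                           random_state=0)

        # Shapes arrive in batches over time; update the dictionary incrementally
        # instead of re-training it from scratch each time.
        for _ in range(5):
            new_shapes = rng.standard_normal((30, d))       # placeholder shapes
            dico.partial_fit(new_shapes)

        codes = dico.transform(rng.standard_normal((1, d))) # sparse composition weights
        print(np.count_nonzero(codes))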

  14. Monotonous and oscillation instability of mechanical equilibrium of isothermal three-components mixture with zero-gradient density

    International Nuclear Information System (INIS)

    Zhavrin, Yu.I.; Kosov, V.N.; Kul'zhanov, D.U.; Karataev, K.K.

    2000-01-01

    The presence of two types of instability of the mechanical equilibrium of a mixture is shown experimentally for isothermal diffusion in a multicomponent system with a zero density gradient. It is proved theoretically that, for partial Rayleigh numbers R_1 and R_2 of different signs, there are two regions, one with monotonic and one with oscillatory instability. The experimental data confirm the presence of these regions and are satisfactorily described by the presented theory. (author)

  15. DIAGNOSIS OF FINANCIAL EQUILIBRIUM

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-04-01

    Full Text Available The analysis based on the balance sheet tries to identify the state of equilibrium (or disequilibrium) that exists in a company. The easiest way to determine the state of equilibrium is by looking at the balance sheet and at the information it offers. Because the balance sheet contains elements that do not reflect their real value, the one established on the market, they must be readjusted, and those elements which are not related to the ordinary operating activities must be eliminated. The diagnosis of financial equilibrium takes into account two components: financing sources (ownership equity, loaned, temporarily attracted). An efficient financial equilibrium must respect two fundamental requirements: permanent sources represented by ownership equity and loans for more than one year should finance permanent needs, and temporary resources should finance the operating cycle.

  16. Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.

    Science.gov (United States)

    Sajda, Paul

    2010-01-01

    In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
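
    The decoder described above (linear summation followed by a sigmoid, with few non-zero synaptic weights) corresponds to L1-penalized logistic regression; the sketch below illustrates the idea on placeholder spike-count features rather than the V1 model output used in the talk.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n_trials, n_neurons = 400, 200

        # Placeholder spike counts: only the first few neurons carry the stimulus.
        X = rng.poisson(2.0, size=(n_trials, n_neurons)).astype(float)
        y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, n_trials) > 10).astype(int)

        # The L1 penalty drives most decoding weights to exactly zero (sparse decoder).
        decoder = LogisticRegression(penalty='l1', solver='liblinear', C=0.1).fit(X, y)
        print("non-zero weights:", np.count_nonzero(decoder.coef_))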

  17. Multi-Frequency Polarimetric SAR Classification Based on Riemannian Manifold and Simultaneous Sparse Representation

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2015-07-01

    Full Text Available Normally, polarimetric SAR classification is a high-dimensional nonlinear mapping problem. In the realm of pattern recognition, sparse representation is a very efficacious and powerful approach. As classical descriptors of polarimetric SAR, covariance and coherency matrices are Hermitian semidefinite and form a Riemannian manifold. Conventional Euclidean metrics are not suitable for a Riemannian manifold, and hence, normal sparse representation classification cannot be applied to polarimetric SAR directly. This paper proposes a new land cover classification approach for polarimetric SAR. There are two principal novelties in this paper. First, a Stein kernel on a Riemannian manifold instead of Euclidean metrics, combined with sparse representation, is employed for polarimetric SAR land cover classification. This approach is named Stein-sparse representation-based classification (Stein-SRC). Second, using simultaneous sparse representation and reasonable assumptions on the correlation of representations among different frequency bands, Stein-SRC is generalized to simultaneous Stein-SRC for multi-frequency polarimetric SAR classification. These classifiers are assessed using polarimetric SAR images from the Airborne Synthetic Aperture Radar (AIRSAR) sensor of the Jet Propulsion Laboratory (JPL) and the Electromagnetics Institute Synthetic Aperture Radar (EMISAR) sensor of the Technical University of Denmark (DTU). Experiments on single-band and multi-band data both show that these approaches acquire more accurate classification results in comparison to many conventional and advanced classifiers.

  18. An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.

    Science.gov (United States)

    Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi

    2016-02-01

    Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. This seizure detection method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from the original ictal and interictal training samples with an online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44% with a false detection rate of 0.23/h and an average latency of -5.14 s have been achieved with our proposed method.
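
    The final decision rule of the method, comparing reconstruction residuals over the ictal and interictal sub-dictionaries, can be written compactly as below; the dictionaries, the plain OMP coding step and the sparsity level are illustrative stand-ins for the online-learned, elastic-net-constrained version used in the paper.

        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        def classify_by_residual(D_seizure, D_normal, x, n_nonzero=10):
            """Sparse-representation classification: code x over the concatenated
            dictionary, then compare per-class reconstruction residuals."""
            D = np.hstack([D_seizure, D_normal])
            coef = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
            k = D_seizure.shape[1]
            r_seiz = np.linalg.norm(x - D_seizure @ coef[:k])
            r_norm = np.linalg.norm(x - D_normal @ coef[k:])
            return "seizure" if r_seiz < r_norm else "nonseizure"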

  19. Sparse canonical correlation analysis: new formulation and algorithm.

    Science.gov (United States)

    Chu, Delin; Liao, Li-Zhi; Ng, Michael K; Zhang, Xiaowei

    2013-12-01

    In this paper, we study canonical correlation analysis (CCA), which is a powerful tool in multivariate data analysis for finding the correlation between two sets of multidimensional variables. The main contributions of the paper are: 1) to reveal the equivalent relationship between a recursive formula and a trace formula for the multiple CCA problem, 2) to obtain the explicit characterization for all solutions of the multiple CCA problem even when the corresponding covariance matrices are singular, 3) to develop a new sparse CCA algorithm, and 4) to establish the equivalent relationship between the uncorrelated linear discriminant analysis and the CCA problem. We test several simulated and real-world datasets in gene classification and cross-language document retrieval to demonstrate the effectiveness of the proposed algorithm. The performance of the proposed method is competitive with the state-of-the-art sparse CCA algorithms.

  20. System and method for acquiring and inverting sparse-frequency data

    KAUST Repository

    Alkhalifah, Tariq Ali

    2017-01-01

    A method of imaging an object includes generating a plurality of mono-frequency waveforms and applying the plurality of mono-frequency waveforms to the object to be modeled. In addition, sparse mono-frequency data is recorded in response to the plurality of mono-frequency waveforms applied to the object to be modeled. The sparse mono-frequency data is cross-correlated with one or more source functions each having a frequency approximately equal to each of the plurality of mono-frequency waveforms to obtain monochromatic frequency data. The monochromatic frequency data is utilized in an inversion to converge a model to a minimum value.

  1. System and method for acquiring and inverting sparse-frequency data

    KAUST Repository

    Alkhalifah, Tariq Ali

    2017-11-30

    A method of imaging an object includes generating a plurality of mono-frequency waveforms and applying the plurality of mono-frequency waveforms to the object to be modeled. In addition, sparse mono-frequency data is recorded in response to the plurality of mono-frequency waveforms applied to the object to be modeled. The sparse mono-frequency data is cross-correlated with one or more source functions each having a frequency approximately equal to each of the plurality of mono-frequency waveforms to obtain monochromatic frequency data. The monochromatic frequency data is utilized in an inversion to converge a model to a minimum value.

  2. l0TV: A Sparse Optimization Method for Impulse Noise Image Restoration

    KAUST Repository

    Yuan, Ganzhao; Ghanem, Bernard

    2017-01-01

    Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. This paper focuses on total variation for removing impulse noise in image restoration. This type of noise frequently arises in data acquisition and transmission due to many reasons, e.g. a faulty sensor or analog-to-digital converter errors. Removing this noise is an important task in image restoration. State-of-the-art methods such as Adaptive Outlier Pursuit(AOP), which is based on TV with l02-norm data fidelity, only give sub-optimal performance. In this paper, we propose a new sparse optimization method, called l0TV-PADMM, which solves the TV-based restoration problem with l0-norm data fidelity. To effectively deal with the resulting non-convex non-smooth optimization problem, we first reformulate it as an equivalent biconvex Mathematical Program with Equilibrium Constraints (MPEC), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our l0TV-PADMM method finds a desirable solution to the original l0-norm optimization problem and is proven to be convergent under mild conditions. We apply l0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that l0TV-PADMM outperforms state-of-the-art image restoration methods.

  3. l0TV: A Sparse Optimization Method for Impulse Noise Image Restoration

    KAUST Repository

    Yuan, Ganzhao

    2017-12-18

    Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. This paper focuses on total variation for removing impulse noise in image restoration. This type of noise frequently arises in data acquisition and transmission due to many reasons, e.g. a faulty sensor or analog-to-digital converter errors. Removing this noise is an important task in image restoration. State-of-the-art methods such as Adaptive Outlier Pursuit(AOP), which is based on TV with l02-norm data fidelity, only give sub-optimal performance. In this paper, we propose a new sparse optimization method, called l0TV-PADMM, which solves the TV-based restoration problem with l0-norm data fidelity. To effectively deal with the resulting non-convex non-smooth optimization problem, we first reformulate it as an equivalent biconvex Mathematical Program with Equilibrium Constraints (MPEC), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our l0TV-PADMM method finds a desirable solution to the original l0-norm optimization problem and is proven to be convergent under mild conditions. We apply l0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that l0TV-PADMM outperforms state-of-the-art image restoration methods.

  4. Dynamic Representations of Sparse Graphs

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    1999-01-01

    We present a linear space data structure for maintaining graphs with bounded arboricity—a large class of sparse graphs containing e.g. planar graphs and graphs of bounded treewidth—under edge insertions, edge deletions, and adjacency queries. The data structure supports adjacency queries in worst-case O(c) time, and edge insertions and edge deletions in amortized O(1) and O(c + log n) time, respectively, where n is the number of nodes in the graph, and c is the bound on the arboricity.

  5. Global Asymptotic Stability of Impulsive CNNs with Proportional Delays and Partially Lipschitz Activation Functions

    Directory of Open Access Journals (Sweden)

    Xueli Song

    2014-01-01

    Full Text Available This paper researches the global asymptotic stability of impulsive cellular neural networks with proportional delays and partially Lipschitz activation functions. First, by means of the transformation v_i(t) = u_i(e^t), the impulsive cellular neural networks with proportional delays are transformed into impulsive cellular neural networks with variable coefficients and constant delays. Second, we provide novel criteria for the uniqueness and exponential stability of the equilibrium point of the latter by a relative nonlinear measure and prove that the exponential stability of the equilibrium point of the latter implies the asymptotic stability of that of the former. We furthermore obtain a sufficient condition for the uniqueness and global asymptotic stability of the equilibrium point of the former. Our method does not require conventional assumptions on global Lipschitz continuity, boundedness, and monotonicity of activation functions. Our results are generalizations and improvements of some existing ones. Finally, an example and its simulations are provided to illustrate the correctness of our analysis.
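
    The change of variables quoted above is the standard one for proportional-delay systems; written out (with q denoting the proportional delay factor, a symbol chosen here since the abstract does not fix one):

        \[
          v_i(t) = u_i(e^{t})
          \quad\Longrightarrow\quad
          u_i(q\,e^{t}) = u_i\big(e^{\,t+\ln q}\big) = v_i(t+\ln q),
          \qquad 0 < q < 1,
        \]

    so a term with proportional delay qt becomes a term with the constant delay -ln q > 0, at the price of time-varying coefficients, which is exactly the transformed system analyzed above.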

  6. Numerical computation of FCT equilibria by inverse equilibrium method

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki

    1986-11-01

    FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)

  7. A thermodynamic and kinetic study of the de- and rehydration of Ca(OH){sub 2} at high H{sub 2}O partial pressures for thermo-chemical heat storage

    Energy Technology Data Exchange (ETDEWEB)

    Schaube, F.; Koch, L. [German Aerospace Center, Institute of Technical Thermodynamics, Pfaffenwaldring 38-40, 70569 Stuttgart (Germany); Woerner, A., E-mail: antje.woerner@dlr.de [German Aerospace Center, Institute of Technical Thermodynamics, Pfaffenwaldring 38-40, 70569 Stuttgart (Germany); Mueller-Steinhagen, H. [German Aerospace Center, Institute of Technical Thermodynamics, Pfaffenwaldring 38-40, 70569 Stuttgart (Germany)

    2012-06-20

    Highlights: • Investigation of the thermodynamic equilibrium and reaction enthalpy of Ca(OH)₂ ⇌ CaO + H₂O. • Investigation of the reaction kinetics of the dehydration of Ca(OH)₂ at partial pressures up to 956 mbar. • Investigation of the reaction kinetics of the rehydration of Ca(OH)₂ at partial pressures up to 956 mbar. - Abstract: Heat storage technologies are used to improve the energy efficiency of power plants and the recovery of process heat. Storing thermal energy by reversible thermo-chemical reactions offers a promising option for high storage capacities, especially at high temperatures. Due to its low material cost, the use of the reversible reaction Ca(OH)₂ ⇌ CaO + H₂O has been proposed. This paper reports on the physical properties such as heat capacity, thermodynamic equilibrium, reaction enthalpy and kinetics. To achieve high reaction temperatures, high H₂O partial pressures are required. Therefore the cycling stability is confirmed for H₂O partial pressures up to 95.6 kPa and the dehydration and hydration kinetics are studied. Quantitative data are collected and expressions are derived which are in good agreement with the presented measurements. At 1 bar H₂O partial pressure the expected equilibrium temperature is 505 °C and the reaction enthalpy is 104.4 kJ/mol.
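
    As a quick illustration of how the reported numbers can be used, the sketch below evaluates a van 't Hoff estimate of the equilibrium H₂O pressure as a function of temperature, anchored at the stated values (104.4 kJ/mol, equilibrium at 505 °C under 1 bar of steam) and assuming a temperature-independent reaction enthalpy; this is a back-of-the-envelope check, not the authors' fitted expression.

        import numpy as np

        R = 8.314              # J/(mol K)
        dH = 104.4e3           # J/mol, reaction enthalpy reported above (assumed constant)
        T_ref = 505 + 273.15   # K, equilibrium temperature at p_ref = 1 bar H2O
        p_ref = 1.0            # bar

        def p_eq(T):
            """Equilibrium H2O partial pressure (bar) from the van 't Hoff relation."""
            return p_ref * np.exp(-dH / R * (1.0 / T - 1.0 / T_ref))

        for T_C in (400, 450, 505, 550):
            print(f"{T_C} degC: p_eq ~ {p_eq(T_C + 273.15):.2f} bar")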

  8. Recursive nearest neighbor search in a sparse and multiscale domain for comparing audio signals

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Daudet, Laurent

    2011-01-01

    We investigate recursive nearest neighbor search in a sparse domain at the scale of audio signals. Essentially, to approximate the cosine distance between the signals we make pairwise comparisons between the elements of localized sparse models built from large and redundant multiscale dictionaries...

  9. Nonuniform Sparse Data Clustering Cascade Algorithm Based on Dynamic Cumulative Entropy

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-01-01

    Full Text Available A small amount of prior knowledge and randomly chosen initial cluster centers have a direct impact on the accuracy of an iterative clustering algorithm. In this paper we propose a new algorithm to compute initial cluster centers for k-means clustering and the best number of clusters with little prior knowledge, and to optimize the clustering result. It constructs a Euclidean distance control factor based on the aggregation density sparse degree to select the initial cluster centers of nonuniform sparse data and obtains initial data clusters by a multidimensional diffusion density distribution. A multiobjective clustering approach based on dynamic cumulative entropy is adopted to optimize the initial data clusters and the best number of clusters. The experimental results show that the newly proposed algorithm performs well in obtaining the initial cluster centers for the k-means algorithm and effectively improves the clustering accuracy of nonuniform sparse data by about 5%.

  10. 2D sparse array transducer optimization for 3D ultrasound imaging

    International Nuclear Information System (INIS)

    Choi, Jae Hoon; Park, Kwan Kyu

    2014-01-01

    A 3D ultrasound image is desired in many medical examinations. However, the implementation of a 2D array, which is needed for a 3D image, is challenging with respect to fabrication, interconnection and cabling. A 2D sparse array, which needs fewer elements than a dense array, is a realistic way to achieve 3D images. Because the number of ways the elements can be placed in an array is extremely large, a method for optimizing the array configuration is needed. Previous research placed the target point far from the transducer array, making it impossible to optimize the array in the operating range. In our study, we focused on optimizing a 2D sparse array transducer for 3D imaging by using a simulated annealing method. We compared the far-field optimization method with the near-field optimization method by analyzing a point-spread function (PSF). The resolution of the optimized sparse array is comparable to that of the dense array.

  11. Sparse and smooth canonical correlation analysis through rank-1 matrix approximation

    Science.gov (United States)

    Aïssa-El-Bey, Abdeldjalil; Seghouane, Abd-Krim

    2017-12-01

    Canonical correlation analysis (CCA) is a well-known technique used to characterize the relationship between two sets of multidimensional variables by finding linear combinations of variables with maximal correlation. Sparse CCA and smooth or regularized CCA are two widely used variants of CCA because of the improved interpretability of the former and the better performance of the latter. So far, the cross-matrix product of the two sets of multidimensional variables has been widely used for the derivation of these variants. In this paper, two new algorithms for sparse CCA and smooth CCA are proposed. These algorithms differ from the existing ones in their derivation, which is based on penalized rank-1 matrix approximation and the orthogonal projectors onto the space spanned by the two sets of multidimensional variables instead of the simple cross-matrix product. The performance and effectiveness of the proposed algorithms are tested in simulated experiments. These results show that they outperform state-of-the-art sparse CCA algorithms.

  12. Superresolution radar imaging based on fast inverse-free sparse Bayesian learning for multiple measurement vectors

    Science.gov (United States)

    He, Xingyu; Tong, Ningning; Hu, Xiaowei

    2018-01-01

    Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block-sparse structure of the target image, sparse solutions for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration. Its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.

  13. Structure-aware Local Sparse Coding for Visual Tracking

    KAUST Repository

    Qi, Yuankai; Qin, Lei; Zhang, Jian; Zhang, Shengping; Huang, Qingming; Yang, Ming-Hsuan

    2018-01-01

    with the corresponding local regions of the target templates that are the most similar from the global view. Thus, a more precise and discriminative sparse representation is obtained to account for appearance changes. To alleviate the issues with tracking drifts, we

  14. Aliasing-free wideband beamforming using sparse signal representation

    NARCIS (Netherlands)

    Tang, Z.; Blacquière, G.; Leus, G.

    2011-01-01

    Sparse signal representation (SSR) is considered to be an appealing alternative to classical beamforming for direction-of-arrival (DOA) estimation. For wideband signals, the SSR-based approach constructs steering matrices, referred to as dictionaries in this paper, corresponding to different

  15. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning

    Science.gov (United States)

    Zhang, Shang; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-01-01

    The miniaturization of spectrometer can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation by utilizing broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm of spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer. PMID:29470406

  16. Face Image Retrieval of Efficient Sparse Code words and Multiple Attribute in Binning Image

    Directory of Open Access Journals (Sweden)

    Suchitra S

    2017-08-01

    Full Text Available In photography, face recognition and face retrieval play an important role in many applications such as security, criminology and image forensics. Advancements in face recognition make identity matching of an individual with attributes easier. The latest developments in computer vision technologies enable us to extract facial attributes from the input image and provide similar image results. In this paper, we propose a novel LOP and sparse codewords method to provide similar matching results with respect to the input query image. To improve the accuracy of the image results for an input image with dynamic facial attributes, the local octal pattern algorithm (LOP) and sparse codewords are applied in both offline and online stages. The offline and online procedures of the face image binning technique are applied with sparse codewords. Experimental results on the PubFig dataset show that the proposed LOP along with sparse codewords is able to provide matching results with an increased accuracy of 90%.

  17. Non-Equilibrium Properties from Equilibrium Free Energy Calculations

    Science.gov (United States)

    Pohorille, Andrew; Wilson, Michael A.

    2012-01-01

    Calculating free energy in computer simulations is of central importance in the statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to access dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it might be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested on the example of the electrodiffusion equation. The conductance of model ion channels has been calculated directly by counting the number of ion crossing events observed during long molecular dynamics simulations and has been compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, thus demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.
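
    For context, the "diffusion in a potential of mean force" picture invoked above is usually expressed through a Nernst-Planck flux; the form below is the standard textbook one (not a formula quoted from the paper):

        \[
          \mathbf{J}_i = -D_i\left(\nabla c_i + \frac{z_i e}{k_B T}\, c_i\, \nabla\phi\right),
          \qquad \nabla\cdot\mathbf{J}_i = 0 \ \ \text{(stationary state)},
        \]

    with the potential \(\phi\) obtained from Poisson's equation; the channel current then follows by integrating the ionic fluxes over the pore cross-section, which is the quantity compared against the direct counting of crossing events.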

  18. The geometry of finite equilibrium sets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    2009-01-01

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely noncollinear.

  19. ℓ0 -based sparse hyperspectral unmixing using spectral information and a multi-objectives formulation

    Science.gov (United States)

    Xu, Xia; Shi, Zhenwei; Pan, Bin

    2018-07-01

    Sparse unmixing aims at recovering pure materials from hyperspectral images and estimating their abundance fractions. Sparse unmixing is actually an ℓ0 problem, which is NP-hard, and a relaxation is often used. In this paper, we attempt to deal with the ℓ0 problem directly via a multi-objective based method, which is a non-convex approach. The characteristics of hyperspectral images are integrated into the proposed method, which leads to a new spectra and multi-objective based sparse unmixing method (SMoSU). In order to solve the ℓ0 norm optimization problem, the spectral library is encoded in a binary vector, and a bit-wise flipping strategy is used to generate new individuals in the evolution process. However, a multi-objective method usually produces a number of non-dominated solutions, while sparse unmixing requires a single solution. How to make the final decision for sparse unmixing is challenging. To handle this problem, we integrate the spectral characteristics of hyperspectral images into SMoSU. By considering the spectral correlation in hyperspectral data, we improve the Tchebycheff decomposition function in SMoSU via a new regularization term. This regularization term is able to enforce individual divergence in the evolution process of SMoSU. In this way, the diversity and convergence of the population are further balanced, which is beneficial to the concentration of individuals. In the experimental part, three synthetic datasets and one real-world dataset are used to analyse the effectiveness of SMoSU, and several state-of-the-art sparse unmixing algorithms are compared.

  20. Catalytic partial oxidation of pyrolysis oils

    Science.gov (United States)

    Rennard, David Carl

    2009-12-01

    This thesis explores the catalytic partial oxidation (CPO) of pyrolysis oils to syngas and chemicals. First, an exploration of model compounds and their chemistries under CPO conditions is considered. Then CPO experiments of raw pyrolysis oils are detailed. Finally, plans for future development in this field are discussed. In Chapter 2, organic acids such as propionic acid and lactic acid are oxidized to syngas over Pt catalysts. Equilibrium production of syngas can be achieved over Rh-Ce catalysts; alternatively mechanistic evidence is derived using Pt catalysts in a fuel rich mixture. These experiments show that organic acids, present in pyrolysis oils up to 25%, can undergo CPO to syngas or for the production of chemicals. As the fossil fuels industry also provides organic chemicals such as monomers for plastics, the possibility of deriving such species from pyrolysis oils allows for a greater application of the CPO of biomass. However, chemical production is highly dependent on the originating molecular species. As bio oil comprises up to 400 chemicals, it is essential to understand how difficult it would be to develop a pure product stream. Chapter 3 continues the experimentation from Chapter 2, exploring the CPO of another organic functionality: the ester group. These experiments demonstrate that equilibrium syngas production is possible for esters as well as acids in autothermal operation with contact times as low as tau = 10 ms over Rh-based catalysts. Conversion for these experiments and those with organic acids is >98%, demonstrating the high reactivity of oxygenated compounds on noble metal catalysts. Under CPO conditions, esters decompose in a predictable manner: over Pt and with high fuel to oxygen, non-equilibrium products show a similarity to those from related acids. A mechanism is proposed in which ethyl esters thermally decompose to ethylene and an acid, which decarbonylates homogeneously, driven by heat produced at the catalyst surface. Chapter 4

  1. Relevance of equilibrium in multifragmentation

    International Nuclear Information System (INIS)

    Furuta, Takuya; Ono, Akira

    2009-01-01

    The relevance of equilibrium in a multifragmentation reaction of very central ⁴⁰Ca + ⁴⁰Ca collisions at 35 MeV/nucleon is investigated by using simulations of antisymmetrized molecular dynamics (AMD). Two types of ensembles are compared. One is the reaction ensemble of the states at each reaction time t in collision events simulated by AMD, and the other is the equilibrium ensemble prepared by solving the AMD equation of motion for a many-nucleon system confined in a container for a long time. The comparison of the ensembles is performed for the fragment charge distribution and the excitation energies. Our calculations show that there exists an equilibrium ensemble that well reproduces the reaction ensemble at each reaction time t for the investigated period 80 ≤ t ≤ 300 fm/c. However, there are some other observables that show discrepancies between the reaction and equilibrium ensembles. These may be interpreted as dynamical effects in the reaction. The usual static equilibrium at each instant is not realized since any equilibrium ensemble with the same volume as that of the reaction system cannot reproduce the fragment observables.

  2. Sparse Linear Solver for Power System Analysis Using FPGA

    National Research Council Canada - National Science Library

    Johnson, J. R; Nagvajara, P; Nwankpa, C

    2005-01-01

    .... Numerical solutions to the load flow equations are typically computed using Newton-Raphson iteration, and the most time-consuming component of the computation is the solution of a sparse linear system...
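
    For orientation only (this is not the record's FPGA design), the computational pattern described above is sketched below in Python: each Newton-Raphson step is dominated by one sparse linear solve, and the toy system F and Jacobian J are placeholders for the power-flow mismatch equations.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import spsolve

      def newton_sparse(F, J, x0, tol=1e-8, max_iter=20):
          """Generic Newton-Raphson loop; each step solves J(x) dx = -F(x) sparsely."""
          x = x0.astype(float).copy()
          for _ in range(max_iter):
              f = F(x)
              if np.linalg.norm(f, np.inf) < tol:
                  break
              dx = spsolve(J(x), -f)      # the dominant cost of a load-flow iteration
              x = x + dx
          return x

      # Toy 2x2 nonlinear system standing in for the mismatch equations of a network.
      F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
      J = lambda x: sp.csc_matrix([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
      print(newton_sparse(F, J, np.array([0.5, 0.5])))   # converges to [1.0, 1.0]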

  3. Sparse Machine Learning Methods for Understanding Large Text Corpora

    Data.gov (United States)

    National Aeronautics and Space Administration — Sparse machine learning has recently emerged as a powerful tool for obtaining models of high-dimensional data with a high degree of interpretability, at low computational...
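
    As a generic illustration of the approach (not the project's code; the corpus and labels below are placeholders), an L1-penalized linear model over TF-IDF features keeps only a few non-zero weights, which act as interpretable keywords:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      docs = ["engine anomaly detected in telemetry", "routine crew report, all nominal",
              "thermal anomaly in sensor data", "nominal status update from ground"]
      labels = [1, 0, 1, 0]

      X = TfidfVectorizer().fit_transform(docs)          # high-dimensional sparse features
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, labels)
      print((clf.coef_ != 0).sum(), "non-zero weights")  # sparsity gives interpretability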

  4. Better Size Estimation for Sparse Matrix Products

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen; Campagna, Andrea; Pagh, Rasmus

    2010-01-01

    We consider the problem of doing fast and reliable estimation of the number of non-zero entries in a sparse Boolean matrix product. Let n denote the total number of non-zero entries in the input matrices. We show how to compute a 1 ± ε approximation (with small probability of error) in expected t...
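
    The record's estimator is more sophisticated (it achieves the 1 ± ε guarantee without materializing the product); the toy sketch below only illustrates the general idea of hashing output coordinates and estimating the number of distinct non-zeros from a small sample rather than storing them all:

      import random

      def estimate_nnz_product(A, B, p=0.05, seed=1):
          """A, B: Boolean sparse matrices as dict-of-sets (row -> set of columns).
          Returns an estimate of the number of non-zero entries in the product A*B."""
          salt = random.Random(seed).getrandbits(32)
          sampled = set()                                  # distinct sampled coordinates
          for i, js in A.items():
              for j in js:
                  for c in B.get(j, ()):
                      u = (hash((i, c, salt)) & 0xFFFFFFFF) / 2**32   # pseudo-uniform
                      if u < p:                            # keep a p-fraction of outputs
                          sampled.add((i, c))
          return len(sampled) / p                          # unbiased distinct-count estimate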

  5. From Wang-Chen System with Only One Stable Equilibrium to a New Chaotic System Without Equilibrium

    Science.gov (United States)

    Pham, Viet-Thanh; Wang, Xiong; Jafari, Sajad; Volos, Christos; Kapitaniak, Tomasz

    2017-06-01

    The Wang-Chen system, with only one stable equilibrium and the coexistence of hidden attractors, has attracted increasing interest due to its striking features. In this work, the effect of state feedback on the Wang-Chen system is investigated by introducing a further state variable. It is worth noting that a new chaotic system without equilibrium is obtained. We believe that the system is an interesting example to illustrate the conversion of hidden attractors with one stable equilibrium to hidden attractors without equilibrium.

  6. Multi scales based sparse matrix spectral clustering image segmentation

    Science.gov (United States)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between the pixels, which may have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm greatly increase. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves computational efficiency. Compared with the traditional spectral clustering algorithm, experimental results on image segmentation show that our algorithm achieves better accuracy and robustness.
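
    A rough sketch of the two ingredients described above (library routines, not the authors' implementation): multi-scale pixel features and a sparse k-nearest-neighbour similarity graph fed to spectral clustering.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.cluster import SpectralClustering

      def segment(image, n_segments=3, scales=(1.0, 2.0, 4.0), n_neighbors=10):
          h, w = image.shape
          feats = [gaussian_filter(image, s).ravel() for s in scales]   # multi-scale features
          X = np.stack(feats + [image.ravel()], axis=1)
          sc = SpectralClustering(n_clusters=n_segments,
                                  affinity="nearest_neighbors",   # sparse similarity matrix
                                  n_neighbors=n_neighbors,
                                  assign_labels="kmeans",
                                  random_state=0)
          return sc.fit_predict(X).reshape(h, w)

      labels = segment(np.random.rand(32, 32))   # replace with a real grayscale image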

  7. Immunity by equilibrium.

    Science.gov (United States)

    Eberl, Gérard

    2016-08-01

    The classical model of immunity posits that the immune system reacts to pathogens and injury and restores homeostasis. Indeed, a century of research has uncovered the means and mechanisms by which the immune system recognizes danger and regulates its own activity. However, this classical model does not fully explain complex phenomena, such as tolerance, allergy, the increased prevalence of inflammatory pathologies in industrialized nations and immunity to multiple infections. In this Essay, I propose a model of immunity that is based on equilibrium, in which the healthy immune system is always active and in a state of dynamic equilibrium between antagonistic types of response. This equilibrium is regulated both by the internal milieu and by the microbial environment. As a result, alteration of the internal milieu or microbial environment leads to immune disequilibrium, which determines tolerance, protective immunity and inflammatory pathology.

  8. Seismic detection method for small-scale discontinuities based on dictionary learning and sparse representation

    Science.gov (United States)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei

    2017-02-01

    Studying small-scale geologic discontinuities, such as faults, cavities and fractures, plays a vital role in analyzing the inner conditions of reservoirs, as these geologic structures and elements can provide storage spaces and migration pathways for petroleum. However, these geologic discontinuities have weak energy, are easily contaminated with noise, and are therefore difficult to extract from seismic data. In this paper, a method for detecting small-scale discontinuities using dictionary learning and sparse representation is proposed that can extract high-resolution information by sparse coding. A K-SVD (K-means clustering via Singular Value Decomposition) sparse representation model, consisting of a two-stage iterative procedure of sparse coding and dictionary updating, is suggested for mathematically expressing these seismic small-scale discontinuities. Generally, the orthogonal matching pursuit (OMP) algorithm is employed for sparse coding. However, that method can only update one dictionary atom at a time. In order to improve calculation efficiency, a regularized version of the OMP algorithm is presented that updates a number of atoms simultaneously. Two numerical experiments demonstrate the validity of the developed method for clarifying and enhancing small-scale discontinuities. A field example from carbonate reservoirs further demonstrates its effectiveness in revealing masked tiny faults and small-scale cavities.
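
    The following sketch reproduces the generic sparse-coding/dictionary-update loop with off-the-shelf routines; the paper itself uses K-SVD together with a regularized OMP for the coding stage, and real seismic patches would replace the random data used here.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      patches = np.random.randn(500, 64)                 # e.g. 8x8 patches of a seismic section
      dl = MiniBatchDictionaryLearning(n_components=128,             # overcomplete dictionary
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,  # sparsity per patch
                                       alpha=1.0, random_state=0)
      codes = dl.fit(patches).transform(patches)         # sparse codes, mostly exact zeros
      D = dl.components_                                 # learned dictionary atoms
      reconstruction = codes @ D                         # denoised approximation of the patches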

  9. Grinding kinetics and equilibrium states

    Science.gov (United States)

    Opoczky, L.; Farnady, F.

    1984-01-01

    The temporary and permanent equilibrium occurring during the initial stage of cement grinding does not indicate the end of comminution, but rather an increased energy consumption during grinding. The constant dynamic equilibrium occurs after a long grinding period indicating the end of comminution for a given particle size. Grinding equilibrium curves can be constructed to show the stages of comminution and agglomeration for certain particle sizes.

  10. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  11. Existence of equilibrium states of hollow elastic cylinders submerged in a fluid

    Directory of Open Access Journals (Sweden)

    M. B. M. Elgindi

    1992-01-01

    This paper is concerned with the existence of equilibrium states of a thin-walled elastic cylindrical shell fully or partially submerged in a fluid. This problem serves as a model for many problems of engineering importance. Previous studies on the deformation of the shell have assumed that the pressure due to the fluid is uniform. This paper takes the non-uniformity of the pressure into consideration by accounting for the effect of gravity. The presence of a pressure gradient introduces additional parameters to the problem, which in turn lead to the consideration of several boundary value problems.

  12. A sparse neural code for some speech sounds but not for others.

    Directory of Open Access Journals (Sweden)

    Mathias Scharinger

    The precise neural mechanisms underlying speech sound representations are still a matter of debate. Proponents of 'sparse representations' assume that, on the level of speech sounds, only contrastive or otherwise unpredictable information is stored in long-term memory. Here, in a passive oddball paradigm, we challenge the neural foundations of such a 'sparse' representation; we use words that differ only in their penultimate consonant ("coronal" [t] vs. "dorsal" [k] place of articulation) and, for example, distinguish between the German nouns Latz ([lats]; bib) and Lachs ([laks]; salmon). Changes from standard [t] to deviant [k] and vice versa elicited a discernible Mismatch Negativity (MMN) response. Crucially, however, the MMN for the deviant [lats] was stronger than the MMN for the deviant [laks]. Source localization showed this difference to be due to enhanced brain activity in right superior temporal cortex. These findings reflect a difference in phonological 'sparsity': coronal [t] segments, but not dorsal [k] segments, are based on sparser representations and elicit less specific neural predictions; sensory deviations from this prediction are more readily 'tolerated' and accordingly trigger weaker MMNs. The results support the neurocomputational reality of 'representationally sparse' models of speech perception that are compatible with more general predictive mechanisms in auditory perception.

  13. Sparse coding reveals greater functional connectivity in female brains during naturalistic emotional experience.

    Directory of Open Access Journals (Sweden)

    Yudan Ren

    Functional neuroimaging is widely used to examine changes in brain function associated with age, gender or neuropsychiatric conditions. fMRI (functional magnetic resonance imaging) studies employ either laboratory-designed tasks that engage the brain with abstracted and repeated stimuli, or resting-state paradigms with little behavioral constraint. Recently, novel neuroimaging paradigms using naturalistic stimuli have been gaining increasing attention, as they offer an ecologically valid condition for approximating brain function in real life. Wider application of naturalistic paradigms in exploring individual differences in brain function, however, awaits further advances in statistical methods for modeling dynamic and complex datasets. Here, we developed a novel data-driven strategy that employs group sparse representation to assess gender differences in brain responses during naturalistic emotional experience. Compared with independent component analysis (ICA), sparse coding considers the intrinsic sparsity of neural coding and thus could be more suitable for modeling dynamic whole-brain fMRI signals. An online dictionary learning and sparse coding algorithm was applied to the aggregated fMRI signals from both groups, which were subsequently factorized into a common time-series signal dictionary matrix and the associated weight coefficient matrix. Our results demonstrate that group sparse representation can effectively identify gender differences in functional brain networks during natural viewing, with improved sensitivity and reliability over the ICA-based method. Group sparse representation hence offers a superior data-driven strategy for examining brain function during naturalistic conditions, with great potential for clinical application in neuropsychiatric disorders.
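
    In generic terms (the precise algorithm is the online dictionary learning referenced above), the factorization described can be written as

      \min_{\mathbf{D},\,\boldsymbol{\alpha}}\ \tfrac{1}{2}\,\lVert \mathbf{X}-\mathbf{D}\boldsymbol{\alpha}\rVert_F^2+\lambda\,\lVert\boldsymbol{\alpha}\rVert_1,

    where X holds the aggregated fMRI signals from both groups (time points by voxels), D is the common time-series dictionary matrix and α the associated sparse weight coefficient matrix whose rows are compared between the gender groups; λ controls the sparsity level.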

  14. Sparse Linear Identifiable Multivariate Modeling

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2011-01-01

    In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully... and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable...

  15. Sparse Matrices in Frame Theory

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Krahmer, Felix; Kutyniok, Gitta

    2014-01-01

    Frame theory is closely intertwined with signal processing through a canon of methodologies for the analysis of signals using (redundant) linear measurements. The canonical dual frame associated with a frame provides a means for reconstruction by a least squares approach, but other dual frames yield alternative reconstruction procedures. The novel paradigm of sparsity has recently entered the area of frame theory in various ways. Of those different sparsity perspectives, we will focus on the situations where frames and (not necessarily canonical) dual frames can be written as sparse matrices...
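
    For context, the least-squares reconstruction via the canonical dual frame mentioned above takes the standard textbook form (not specific to this paper)

      Sx=\sum_k \langle x,\varphi_k\rangle\,\varphi_k,\qquad
      \tilde{\varphi}_k=S^{-1}\varphi_k,\qquad
      x=\sum_k \langle x,\varphi_k\rangle\,\tilde{\varphi}_k=\sum_k \langle x,\tilde{\varphi}_k\rangle\,\varphi_k,

    where S is the frame operator of the frame (φ_k); replacing the canonical dual (φ̃_k) by any other dual frame yields an alternative reconstruction, and the question studied is when such frames and dual frames admit sparse matrix representations.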

  16. Programming for Sparse Minimax Optimization

    DEFF Research Database (Denmark)

    Jonasson, K.; Madsen, Kaj

    1994-01-01

    We present an algorithm for nonlinear minimax optimization which is well suited for large and sparse problems. The method is based on trust regions and sequential linear programming. On each iteration, a linear minimax problem is solved for a basic step. If necessary, this is followed by the determination of a minimum norm corrective step based on a first-order Taylor approximation. No Hessian information needs to be stored. Global convergence is proved. This new method has been extensively tested and compared with other methods, including two well-known codes for nonlinear programming...
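
    A minimal sketch of the subproblem structure (standard trust-region sequential linear programming for minimax, consistent with but not copied from the paper):

      \min_{x}\ \max_{i} f_i(x)
      \quad\longrightarrow\quad
      \min_{\lVert d\rVert_\infty\le\Delta}\ \max_i\ \bigl(f_i(x)+\nabla f_i(x)^{\mathsf T} d\bigr),

    where the linearized minimax problem is solved for the basic step d within a trust region of radius Δ, followed if necessary by a minimum-norm corrective step; sparsity of the gradients keeps the linear programs tractable even for large problems.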

  17. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer

    2017-12-25

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images independently. However, learning multidimensional dictionaries and sparse codes for the reconstruction of multi-dimensional data is very important, as it examines correlations among all the data jointly. This provides more capacity for the learned dictionaries to better reconstruct data. In this paper, we propose a generic and novel formulation of the CSC problem that can handle an arbitrary-order tensor of data. Backed by experimental results, our proposed formulation not only tackles applications that are not possible with standard CSC solvers, including colored video reconstruction (5D tensors), but also performs favorably in reconstruction with far fewer parameters compared to naive extensions of standard CSC to multiple features/channels.
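
    As a reference point, the standard single-feature 2D CSC objective that the paper generalizes to arbitrary-order tensors reads

      \min_{\{d_k\},\{z_k\}}\ \tfrac{1}{2}\Bigl\lVert x-\sum_{k=1}^{K} d_k * z_k\Bigr\rVert_2^2+\lambda\sum_{k=1}^{K}\lVert z_k\rVert_1,

    where * denotes convolution, d_k are the dictionary filters and z_k the sparse feature maps; in the tensor formulation the signal, the filters and the codes become higher-order tensors, so that correlations across channels and features are modeled jointly.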

  18. High-Order Sparse Linear Predictors for Audio Processing

    DEFF Research Database (Denmark)

    Giacobello, Daniele; van Waterschoot, Toon; Christensen, Mads Græsbøll

    2010-01-01

    Linear prediction has generally failed to make a breakthrough in audio processing, as it has done in speech processing. This is mostly due to its poor modeling performance, since an audio signal is usually an ensemble of different sources. Nevertheless, linear prediction comes with a whole set of interesting features that make the idea of using it in audio processing not far-fetched, e.g., the strong ability to model the spectral peaks that play a dominant role in perception. In this paper, we provide some preliminary conjectures and experiments on the use of high-order sparse linear predictors in audio processing. These predictors, successfully implemented in modeling the short-term and long-term redundancies present in speech signals, will be used to model tonal audio signals, both monophonic and polyphonic. We will show how the sparse predictors are able to model efficiently the different...
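
    In generic form (the exact norm choices follow the authors' work on sparse linear prediction and are not specified in this abstract), a high-order sparse linear predictor solves

      \hat{x}(n)=\sum_{k=1}^{K} a_k\,x(n-k),\qquad
      \min_{a}\ \lVert x-\mathbf{X}a\rVert_p^p+\gamma\,\lVert a\rVert_1,

    where the order K is large enough to span both the short-term (spectral envelope) and long-term (pitch and harmonic) redundancies, and the ℓ1 penalty keeps only a few non-zero prediction coefficients.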

  19. Sparse Covariance Matrix Estimation by DCA-Based Algorithms.

    Science.gov (United States)

    Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham

    2017-11-01

    This letter proposes a novel approach using ℓ0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the ℓ0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the ℓ0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithms), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and corresponding DCA schemes developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed on simulated and real data sets to study the performance of the proposed algorithms. Numerical results show their efficiency and superiority compared with seven state-of-the-art methods.
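
    To illustrate the machinery with a generic example (the capped-ℓ1 surrogate shown here is one common choice; the paper develops its own approximations and two DCA schemes), a DC decomposition of the ℓ0 term and the resulting DCA iteration look like

      \lVert x\rVert_0\ \approx\ \sum_i \min\bigl(1,\theta\lvert x_i\rvert\bigr)
      =\underbrace{\theta\lVert x\rVert_1}_{g_0(x)}-\underbrace{\sum_i \max\bigl(0,\theta\lvert x_i\rvert-1\bigr)}_{h_0(x)},

      \min_x\ g(x)-h(x):\qquad y^{t}\in\partial h(x^{t}),\qquad
      x^{t+1}\in\arg\min_{x}\ \bigl\{g(x)-\langle y^{t},x\rangle\bigr\},

    that is, at each iteration the concave part −h is linearized at the current point and the remaining convex problem is solved.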

  20. A novel multiphysic model for simulation of swelling equilibrium of ionized thermal-stimulus responsive hydrogels

    Science.gov (United States)

    Li, Hua; Wang, Xiaogui; Yan, Guoping; Lam, K. Y.; Cheng, Sixue; Zou, Tao; Zhuo, Renxi

    2005-03-01

    In this paper, a novel multiphysic mathematical model is developed for the simulation of the swelling equilibrium of ionized temperature-sensitive hydrogels exhibiting a volume phase transition; it is termed the multi-effect-coupling thermal-stimulus (MECtherm) model. The model consists of the steady-state Nernst-Planck equation, the Poisson equation and a swelling equilibrium governing equation based on Flory's mean-field theory, in which two types of polymer-solvent interaction parameters, expressed as functions of temperature and polymer-network volume fraction, are specified with or without consideration of hydrogen-bond interactions. In order to examine the MECtherm model, which consists of nonlinear partial differential equations, a meshless Hermite-Cloud method is used for the numerical solution of the one-dimensional swelling equilibrium of thermal-stimulus responsive hydrogels immersed in a bathing solution. The computed results are in very good agreement with experimental data for the variation of volume swelling ratio with temperature. The influences of salt concentration and initial fixed-charge density on the volume swelling ratio of the hydrogels, the mobile-ion concentrations and the electric potential in both the interior hydrogel and the exterior bathing solution are discussed in detail.
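
    For reference, the first two field equations of such multi-effect-coupling models are usually written in the following standard form (sign and unit conventions may differ from the paper's):

      \nabla\!\cdot\!\Bigl[D_i\Bigl(\nabla c_i+\frac{z_i F}{RT}\,c_i\,\nabla\psi\Bigr)\Bigr]=0,
      \qquad
      \nabla^2\psi=-\frac{F}{\varepsilon}\Bigl(\sum_i z_i c_i+z_f c_f\Bigr),

    where c_i, z_i and D_i are the concentration, valence and diffusivity of mobile ion species i, ψ is the electric potential, c_f and z_f are the fixed-charge density and valence of the hydrogel network, F is the Faraday constant, R the gas constant, T the temperature and ε the permittivity; these equations are coupled to the Flory-type swelling equilibrium condition for the polymer network.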