– R. Gribonval (INRIA Rennes): Sparse dictionary learning in the presence of noise and outliers


Date/time
Date(s) - April 25, 2013



Sparse dictionary learning in the presence of noise and outliers.
By Rémi Gribonval, INRIA Rennes – Bretagne Atlantique.

A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in fields ranging from image to audio processing, few theoretical arguments support this empirical evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not yet been fully analyzed. Considering a probabilistic model of sparse signals, we show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study covers the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the noise level, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.

This is joint work with Rodolphe Jenatton & Francis Bach.

Download slides
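For readers unfamiliar with the setup, the optimization problem at stake is the standard l1-penalized dictionary learning objective: given n training signals X, jointly fit an over-complete dictionary D with unit-norm atoms and sparse codes A minimizing ||X - A D||_F^2 + lambda ||A||_1, a problem convex in each variable separately but non-convex jointly. The Python sketch below illustrates this on synthetic data matching the abstract's generative model (sparse combinations of reference atoms plus Gaussian noise); the choice of scikit-learn and all parameter values are illustrative assumptions, not taken from the talk.

# Synthetic sparse-coding experiment: signals are sparse combinations of
# atoms from a reference dictionary, plus Gaussian noise. Parameter values
# below are illustrative assumptions, not taken from the talk.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
d, n_atoms, n, sparsity, noise = 20, 30, 500, 3, 0.01  # over-complete: n_atoms > d

# Reference dictionary with unit-norm atoms (rows).
D_ref = rng.standard_normal((n_atoms, d))
D_ref /= np.linalg.norm(D_ref, axis=1, keepdims=True)

# Each signal is a combination of `sparsity` atoms with random weights.
A_ref = np.zeros((n, n_atoms))
for i in range(n):
    support = rng.choice(n_atoms, size=sparsity, replace=False)
    A_ref[i, support] = rng.standard_normal(sparsity)
X = A_ref @ D_ref + noise * rng.standard_normal((n, d))

# Sparse dictionary learning: alternate l1-penalized sparse coding and
# dictionary updates -- a non-convex procedure overall, hence the interest
# in characterizing its local minima around the reference dictionary.
learner = DictionaryLearning(n_components=n_atoms, alpha=0.1,
                             max_iter=50, random_state=0)
codes = learner.fit_transform(X)    # sparse codes, shape (n, n_atoms)
D_hat = learner.components_         # learned dictionary, shape (n_atoms, d)
print("average nonzeros per code:", np.count_nonzero(codes) / n)

Whether a procedure of this kind admits a local minimum near D_ref, and how that depends on coherence, noise level, sparsity and sample size, is precisely the question the talk addresses.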
