– Full Signal and Machine Learning afternoon session for welcoming new members.
Date: 8 November 2012

13h30 Optimization of High Dimensional Functions: Application to a Pulse Shaping Problem, Mattias Gybels, LIF.
14h Nonlinear functional data analysis with reproducing kernels, Hachem Kadri, LIF.
14h30 Confused Multiclass Relevance Vector Machine, Ugo Louche, LIF.
15h Automatic Drum Transcription with informed NMF, Antoine Bonnefoy, LIF.
15h30 Coffee break.
16h Proximal methods for multiple removal in seismic data, Caroline Chaux, LATP.
16h30 Cosparse analysis model and uncertainty principle: some basics and challenges, Sangnam Nam, LATP.
17h On the accuracy of fiber tractography, Sebastiano Barbieri, LATP.
17h30 End of the scientific part.

Optimization of High Dimensional Functions: Application to a Pulse Shaping Problem, by Mattias Gybels, LIF.
In this talk, I will present the work accomplished during my Master's degree internship. After a quick overview of the main concepts of optimization, I will detail the optimization problem raised by the Laser-matter interaction research team of the Hubert Curien Laboratory (Saint-Etienne). Finally, I will explain the chosen solution and detail some of our results.

Nonlinear functional data analysis with reproducing kernels, by Hachem Kadri, LIF.
Recent statistical and machine learning studies have revealed the potential benefit of adopting a functional data analysis (FDA) point of view to improve learning when data are objects in infinite-dimensional Hilbert spaces. However, nonlinear modeling of such data (aka functional data) is a topic that has not been sufficiently investigated, especially when response data are functions. Reproducing kernel methods provide powerful tools for nonlinear learning problems, but to date they have been used more to learn scalar- or vector-valued functions than function-valued functions.
Consequently, reproducing kernels for functional data and their associated function-valued RKHS have remained mostly unknown and poorly studied. This work describes a learning methodology for nonlinear FDA based on extending the widely used scalar-valued RKHS framework to the functional response setting. It introduces a set of rigorously defined reproducing operator-valued kernels suitable for functional response data, which can be valuably applied to take into account relationships between samples and the functional nature of the data. Finally, it shows experimentally that the nonlinear FDA framework is particularly relevant for speech and audio processing applications, where attributes are truly functions and dependent on each other.

Confused Multiclass Relevance Vector Machine, by Ugo Louche, LIF.
The Relevance Vector Machine (RVM, Tipping 2001) is a Bayesian method for machine learning. It is closely related to the well-known support vector machine (SVM, Vapnik 1995): RVMs can take advantage of kernel embeddings and they compute sparse solutions (which is beneficial from both the statistical and computational points of view). Unlike SVMs, though, RVMs do not require any hyperparameter settings, thanks to their Bayesian formulation, and they compute predictions with probabilistic outputs.
RVMs have recently been extended to the problem of multiclass prediction with composite kernels (mRVM, Damoulas and Girolami, 2009), where it has been shown that their good properties still hold.
In this work, we present a quick overview of the RVM/mRVM method and the Variational Bayesian Expectation Maximization approximation (VBEM, Beal and Ghahramani, 2003), as the latter is used to overcome intractability in the mRVM model.
We then propose a new multiclass RVM approach capable of handling the case where there might be mislabellings in the training data, as may be the case in many real-world applications.
Based on the idea that we are provided with a confusion matrix, we derive a learning algorithm that computes a multiclass predictor showing extreme robustness to confused labels. The crux of our work is to provide the various learning equations arising from the need to resort to the VBEM approximation in order to solve the full Bayesian, intractable learning problem posed by the mRVM model in the case of mislabelled data.

Automatic Drum Transcription with informed NMF, by Antoine Bonnefoy, LIF.
Extracting structured data from a musical signal is an active subject of research (Music Information Retrieval). In this context, the drum kit carries an important part of the information: it contains the rhythmic part of the music. NMF is a powerful tool for source separation; using this particularity, one can apply it to separate the sound into several tracks, each one containing only one element of the kit, so as to extract the drum score. We used an NMF method and added to the algorithm some prior information, based on physical and statistical characteristics of drum playing, in order to improve the results.

Proximal methods for multiple removal in seismic data, by Caroline Chaux, LATP.
Joint work with Diego Gragnaniello, Mai Quyen Pham, Jean-Christophe Pesquet, and Laurent Duval.
During the acquisition of seismic data, undesirable coherent seismic events, such as multiples, are also recorded, often resulting in a degradation of the signal of interest. The complexity of these data has historically contributed to the development of several efficient signal processing tools, for instance wavelets or robust l1-based sparse restoration. The objective of this work is to propose an original approach to the multiple removal problem. A variational framework is adopted here, but instead of assuming some knowledge of the kernel, we assume that a template is available.
Consequently, it turns out that the problem reduces to estimating Finite Impulse Response filters, which are assumed to vary slowly along time. We assume that the characteristics of the signal of interest are appropriately described through a prior statistical model in a basis of signals, e.g. a wavelet basis. The data fidelity term thus takes into account the statistical properties of the basis coefficients (one can take an l1-norm to favour sparsity), the regularization term models prior information available on the filters, and a last constraint modelling the smooth variations of the filters along time is added. The resulting minimization is achieved using the PPXA+ method, which belongs to the class of parallel proximal splitting approaches.

Cosparse analysis model and uncertainty principle: some basics and challenges, by Sangnam Nam, LATP.
The sparse synthesis model has been studied extensively and intensely over recent years and has found an impressive number of successful applications. In this talk, we discuss an alternative, but similar-looking, model called the cosparse analysis model. As basics, we show why we think the model is different from the sparse model, and then discuss the uniqueness property in the compressive sensing framework. Next, we look at the challenging task of analysis operator learning.
The uncertainty principle is an important (but rather unfortunate) concept in signal processing (and other fields). Roughly speaking, it says that we cannot achieve simultaneous localization in both time and frequency to arbitrary precision. While the formulation in the continuous domain is beautiful and can be proved elegantly, there appear to be many challenges when we move to the discrete domain. We will discuss some of these challenges. We will also discuss how the uncertainty principle appears in the analysis model.
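One concrete form of the discrete-domain uncertainty principle discussed above is the Donoho-Stark bound: for a signal of length N, the number of nonzero time samples times the number of nonzero DFT coefficients is at least N. A minimal numerical sketch (an illustration, not material from the talk) using a Dirac comb, which attains the bound with equality:

```python
import numpy as np

N = 64

# Dirac comb: one spike every 8 samples -> 8 nonzeros in time
x = np.zeros(N)
x[::8] = 1.0

# The DFT of a Dirac comb is again a comb: nonzero only at multiples of 8
X = np.fft.fft(x)

n_time = np.count_nonzero(np.abs(x) > 1e-9)
n_freq = np.count_nonzero(np.abs(X) > 1e-9)

# Donoho-Stark: n_time * n_freq >= N; the Dirac comb gives equality
print(n_time, n_freq, n_time * n_freq)  # 8 8 64
```

A single spike (n_time = 1) has a flat spectrum (n_freq = N), the other extreme of the same trade-off: no signal can be highly localized in both domains at once.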
