Blind source separation: from instantaneous to convolutive mixtures

Fangchen FENG
Thesis defended on October 04, 2017, 3:30 PM at CentraleSupelec (Gif-sur-Yvette) Salle des séminaires du L2S

Jury composition

M. Matthieu KOWALSKI      Université Paris-Sud        Thesis advisor
M. Laurent GIRIN          Grenoble-INP, Gipsa-Lab     Reviewer
M. Emmanuel VINCENT       Inria Grand-Est, Loria      Reviewer
M. Roland BADEAU          Télécom ParisTech           Examiner
M. Laurent DAUDET         Univ Paris-Diderot          Examiner
M. Alexandre GRAMFORT     Inria Saclay, Neurospin     Examiner

Keywords: blind source separation, sparsity, Gabor representation, nonnegative matrix factorization, inverse problems, optimization

Abstract:
Blind source separation consists in estimating source signals from the observed mixtures alone. The problem falls into two categories depending on the mixing model: instantaneous mixtures, where delays and reverberation (multipath effects) are not taken into account, and convolutive mixtures, which are more general but more complicated. Moreover, additive noise at the sensors and the underdetermined setting, where there are fewer sensors than sources, make the problem even harder. In this thesis, we first studied the link between two existing methods for instantaneous mixtures: independent component analysis (ICA) and sparse component analysis (SCA). We then proposed a new formulation that works in both the determined and underdetermined cases, with and without noise. Numerical evaluations show the advantage of the proposed approaches. Second, the proposed formulation is generalized to convolutive mixtures of speech signals. By integrating a new approximation model, the proposed algorithms outperform existing methods, especially in noisy and/or highly reverberant scenarios. We then take advantage of the morphological decomposition technique and of structured sparsity, which leads to algorithms that can better exploit the structures of audio signals. Such approaches are tested on underdetermined convolutive mixtures in a non-blind scenario. Finally, building on the NMF (nonnegative matrix factorization) model, we combined the low-rank and sparsity assumptions and proposed new approaches for underdetermined convolutive mixtures. Experiments illustrate the good performance of the proposed algorithms on music signals, especially in highly reverberant scenarios.
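As a minimal NumPy sketch of the two mixing models contrasted above (the mixing matrix, filter length, and sizes are illustrative assumptions, not those used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2 sources, 2 sensors (determined case)
n_src, n_sens, T = 2, 2, 1000
S = rng.standard_normal((n_src, T))      # source signals
A = np.array([[1.0, 0.5], [0.3, 1.0]])   # instantaneous mixing matrix

# Instantaneous model: x(t) = A s(t) -- no delay, no reverberation
X_inst = A @ S

# Convolutive model: each source reaches each sensor through a filter
# (delay and multipath), x_i = sum_j h_ij * s_j
L = 8
H = rng.standard_normal((n_sens, n_src, L))   # mixing impulse responses
X_conv = np.zeros((n_sens, T + L - 1))
for i in range(n_sens):
    for j in range(n_src):
        X_conv[i] += np.convolve(H[i, j], S[j])

# In the determined, noiseless instantaneous case, knowing A recovers
# the sources exactly; blind methods (ICA, SCA) must estimate A too.
S_hat = np.linalg.solve(A, X_inst)
```

The last line only illustrates why the determined instantaneous case is the easiest one; the underdetermined and convolutive settings addressed in the thesis have no such closed-form inverse.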

Electromagnetic modeling and imaging of damage in fiber-reinforced composite laminates

Zicheng LIU
Thesis defended on October 03, 2017, 2:00 PM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40

Proposed jury composition

M. Dominique LESSELIER    CNRS                         Thesis advisor
Mme Amélie LITMAN         Université de Marseille      Reviewer
M. Olivier DAZEL          Université du Maine          Reviewer
Mme Sonia FLISS           ENSTA                        Examiner
M. Philippe LALANNE       CNRS                         Examiner
M. Jean-Philippe GROBY    CNRS                         Examiner
M. André NICOLET          Université de Marseille      Examiner
M. Edouard DEMALDENT      CEA LIST                     Invited member
M. Yu ZHONG               A*STAR Singapore             Invited member

Keywords: electromagnetic modeling, electromagnetic imaging, periodic structures


Abstract:
We are interested in the electromagnetic modeling and imaging of disorganized periodic fibered laminates. The laminates have multiple layers, each layer being built by periodically embedding cylindrical fibers in a homogeneous slab. The fiber material and size may change from layer to layer, but the periods and orientations are required to be identical. Missing, displaced, expanded, shrunk and/or circular fibers destroy the periodicity, and methods dedicated to periodic structures become inapplicable. The supercell methodology provides a fictitious periodic structure, so that the field solution everywhere in space can be accurately modeled, provided that the supercell is large enough. However, the efficiency of the supercell-based approach is not guaranteed, owing to the possibly large size involved. Therefore, an alternative approach based on the equivalence theory is proposed, in which the damage is replaced by equivalent sources inside the originally intact zones. The field is then a synthesis of the responses to the incident wave and to the equivalent sources. Based on the equivalence theory, the damage locations are found by searching for the equivalent sources. With several sources and receivers in use, four reconstruction algorithms, comprising a least-squares solution, a basic matching pursuit solution, MUSIC, and an iterative approach exploiting the joint sparsity of the desired solution, allow the indices of the damaged fibers to be recovered. Various numerical results illustrate the availability and accuracy of the modeling approach and the high-resolution imaging performance.
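The abstract lists MUSIC among the reconstruction algorithms. As a generic illustration of the subspace idea behind MUSIC, here is a standard narrowband direction-finding sketch on a uniform linear array (a textbook setting with illustrative parameters, not the fibered-laminate configuration of the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniform linear array, half-wavelength spacing, two far-field sources
M, K, snapshots, d = 8, 2, 200, 0.5
true_deg = np.array([-20.0, 35.0])

def steering(theta_rad):
    # M x len(theta) matrix of array response vectors
    return np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(theta_rad)))

A = steering(np.deg2rad(true_deg))
S = rng.standard_normal((K, snapshots)) + 1j * rng.standard_normal((K, snapshots))
noise = 0.01 * (rng.standard_normal((M, snapshots))
                + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + noise

# MUSIC: steering vectors of true sources are orthogonal to the noise subspace
R = X @ X.conj().T / snapshots                 # sample covariance
eigvec = np.linalg.eigh(R)[1]                  # ascending eigenvalue order
En = eigvec[:, : M - K]                        # noise-subspace eigenvectors
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
pseudo = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# The two largest local maxima of the pseudospectrum give the estimates
peaks = np.where((pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]))[0] + 1
est_deg = np.sort(np.rad2deg(grid[peaks[np.argsort(pseudo[peaks])[-2:]]]))
```

In the thesis, the same subspace principle is applied to locate equivalent sources, i.e. damaged fibers, rather than plane-wave directions.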

Contributions to multiway data analysis: algorithms and applications

Olga Gisela LECHUGA LOPEZ
Thesis defended on July 03, 2017, 2:00 PM at CentraleSupelec (Gif-sur-Yvette) Amphi Blondel

Statistical methods such as discriminant analysis, logistic regression, Cox regression, and regularized generalized canonical correlation analysis are extended to the multiway data setting, in which each individual is described by several instances of the same variable. The data thus naturally have a tensor structure. In contrast to their standard formulation, a structural constraint is imposed. The interest of this constraint is twofold: on the one hand, it allows a separate study of the influence of the variables and of the influence of the modalities, leading to easier model interpretation; on the other hand, it restricts the number of coefficients to estimate, thereby limiting both the computational complexity and the overfitting phenomenon. Strategies to handle the problems related to the high dimensionality of the data are also discussed. These methods are illustrated on two real datasets: (i) spectroscopy data and (ii) multimodal magnetic resonance imaging data, used to predict the long-term recovery of patients after traumatic brain injury. In both cases the proposed methods give good results compared with those obtained by the standard approaches.
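A minimal sketch of the kind of structural constraint described above: in a bilinear (rank-one) multiway regression, the coefficient matrix is constrained to B = wJ wK^T, separating the influence of the J variables from that of the K modalities and reducing the parameter count from J*K to J+K. The alternating least-squares scheme below is a generic illustration under that assumption, not the exact algorithms of the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

n, J, K = 200, 6, 4                          # individuals, variables, modalities
X = rng.standard_normal((n, J, K))           # one J x K matrix per individual
wJ_true = rng.standard_normal(J)
wK_true = rng.standard_normal(K)
y = np.einsum('njk,j,k->n', X, wJ_true, wK_true)   # noiseless bilinear responses

# Alternating least squares under the rank-one constraint B = wJ wK^T:
# fix one factor, solve an ordinary least-squares problem for the other
wK = np.ones(K)
for _ in range(200):
    Z = np.einsum('njk,k->nj', X, wK)        # fold the modality mode
    wJ = np.linalg.lstsq(Z, y, rcond=None)[0]
    U = np.einsum('njk,j->nk', X, wJ)        # fold the variable mode
    wK = np.linalg.lstsq(U, y, rcond=None)[0]

B_hat = np.outer(wJ, wK)                     # J*K coefficients from J+K parameters
B_true = np.outer(wJ_true, wK_true)
```

Only the product B = wJ wK^T is identifiable (the scale can be exchanged between the two factors), which is why the comparison is made on the outer product.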

Keywords: data analysis, multiway, classification

Proposed jury composition
M. Arthur TENENHAUS       CentraleSupélec              Thesis advisor
M. Hervé ABDI             University of Texas          Reviewer
M. Mohamed HANAFI         Université de Nantes         Reviewer
M. Christophe AMBROISE    Université d'Evry            Examiner
M. Robert SABATIER        Université de Montpellier    Examiner
M. Remy BOYER             CentraleSupelec              Invited member
M. Laurent LE BRUSQUET    CentraleSupelec              Invited member

 

S³ seminar : Recursive State Estimation for Nonlinear Stochastic Systems and Application to a Continuous Glucose Monitoring System

Seminar on June 09, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Alexandros Charalampidis (CentraleSupélec, Rennes)

The talk will start with an introduction to recursive state estimation. It will be shown how the problem can be solved exactly in two important cases (systems with a finite state space and linear Gaussian systems). The difficulties associated with nonlinear systems will be explained and the main techniques will be presented (Extended Kalman Filter, Unscented Kalman Filter, Gauss-Hermite Kalman Filter, Particle Filtering, Gaussian Sums). Then the talk will focus on systems that consist of linear dynamical systems interconnected through static nonlinear characteristics. It will be explained that, for such systems, it is possible to avoid integration over the state space, which may be of high dimension, reducing it to the solution of some linear systems and low-order integration. This way, more accurate calculations can be made. Additionally, a novel quadrature technique, alternative to Gauss-Hermite quadrature and specially designed for nonlinear filters using norm-minimization concepts, will be presented. The proposed techniques are applied to an example and it is shown that they can lead to a significant improvement. The final part of the talk will deal with the application of filters to data from a Continuous Glucose Monitoring System (CGMS). The importance of the CGMS for the construction of an artificial pancreas will be explained. It will be shown that, using simple models of the system dynamics, the application of Kalman and particle filtering to experimental data from ICU patients leads to an important reduction of the glucose estimation error.
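For the linear Gaussian case mentioned above, the exact recursive solution is the Kalman filter. A minimal scalar sketch (the model and its parameters are illustrative, unrelated to the glucose application):

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar linear Gaussian system: x_t = a x_{t-1} + w_t,  y_t = x_t + v_t
a, q, r, T = 0.95, 0.1, 1.0, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=T)

# Kalman filter: exact recursive posterior mean m and variance P
m, P = 0.0, 1.0
est = np.zeros(T)
for t in range(T):
    m, P = a * m, a * a * P + q              # predict
    G = P / (P + r)                          # Kalman gain
    m, P = m + G * (y[t] - m), (1 - G) * P   # update
    est[t] = m
```

The filtered estimate has a markedly lower error than the raw measurements, which is the baseline the nonlinear techniques in the talk generalize.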

S³ seminar : Data inversion in signal and image processing: sparse regularization and L0 minimization algorithms.

Seminar on May 23, 2017, 2:00 PM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Charles SOUSSEN, (Centre de Recherche en Automatique de Nancy (CRAN, UMR CNRS 7039), Université de Lorraine)

In the first part of the talk, I will present several inverse problems I have worked on in recent years and their application contexts: image reconstruction in tomography, analysis of biological and hyperspectral images in microscopy, and data inversion problems in optical spectroscopy with biomedical applications. When the available data are limited in number and only partially informative about the quantity to be estimated (ill-posed inverse problems), taking prior information on the unknowns into account is essential, and is done through regularization techniques. In the second part of the talk, I will focus on sparse regularization of inverse problems, based on minimization of the L0 "norm". The proposed heuristic algorithms are designed to minimize mixed L2-L0 criteria of the form

min_x J(x;lambda) = || y - Ax ||_2^2 + lambda || x ||_0.

This optimization problem is known to be strongly non-convex and NP-hard. The proposed heuristics (so-called "greedy" algorithms) are defined as extensions of Orthogonal Least Squares (OLS). Their development is motivated by the very good empirical behavior of OLS and its derived versions when the matrix A is ill-conditioned. I will present two types of algorithms, to minimize J(x; lambda) for a fixed lambda and for a continuum of lambda values. Finally, I will present some theoretical results aiming to guarantee that greedy algorithms exactly recover the support of a sparse representation y = Ax*, that is, the support of the vector x*.
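As a sketch of the greedy family discussed above, here is Orthogonal Matching Pursuit, a close relative of OLS (the two differ only in the atom-selection rule); the dimensions and support are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def omp(A, y, k):
    """Orthogonal Matching Pursuit for min ||y - Ax||_2^2 s.t. ||x||_0 <= k."""
    support, residual = [], y.copy()
    for _ in range(k):
        # greedy selection: atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # 'orthogonal' step: re-fit all selected atoms jointly
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, sorted(support)

m, n, k = 60, 120, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)               # unit-norm atoms
x_true = np.zeros(n)
x_true[[5, 17, 42, 99]] = [3.0, -2.0, 1.5, 1.0]
y = A @ x_true                               # noiseless sparse representation

x_hat, supp = omp(A, y, k)
```

In this easy noiseless, well-conditioned regime the support is recovered exactly; the talk's theoretical results concern precisely when such exact recovery can be guaranteed.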

Biography: Charles Soussen was born in France in 1972. He graduated from the École Nationale Supérieure en Informatique et Mathématiques Appliquées de Grenoble (ENSIMAG) in 1996. He received his Ph.D. in signal and image processing from the Laboratoire des Signaux et Systèmes (L2S), Université Paris-Sud, Orsay, in 2000, and his Habilitation à Diriger des Recherches from the Université de Lorraine in 2013. He is currently Maître de Conférences at the Université de Lorraine, with the Centre de Recherche en Automatique de Nancy since 2005. His research interests include inverse problems and sparse approximation.

S³ seminar : Two black holes in a haystack: data analysis for gravitational-wave astronomy

Seminar on May 19, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle des séminaires du L2S
Eric Chassande-Mottin, (CNRS, AstroParticule et Cosmologie, Université Paris Diderot)

On September 14, 2015, the two detectors of the Laser Interferometer Gravitational-wave Observatory (LIGO) opened a new era for astrophysics by observing, for the first time, a gravitational wave emitted by the merger of two black holes, each about thirty times the mass of the Sun and located more than a billion light-years away. I will give an overview of this major discovery, with emphasis on the data analysis methods used to extract the signal from the complex noise encountered in these experiments.
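The workhorse for digging such signals out of noise is matched filtering. A toy sketch with a synthetic chirp-like template injected into white noise (real LIGO analyses use large template banks and whitening against colored detector noise):

```python
import numpy as np

rng = np.random.default_rng(5)

# A known chirp-like template injected into white noise at an unknown time
T, L, t0 = 4096, 256, 1500
t = np.arange(L)
template = np.sin(2 * np.pi * (0.01 + 0.0004 * t) * t) * np.hanning(L)
data = rng.standard_normal(T)
data[t0:t0 + L] += 2.0 * template        # a fairly loud injection

# Matched filter: slide the template along the data and take inner products;
# the detection statistic peaks at the injection time
mf = np.correlate(data, template, mode='valid')
t_hat = int(np.argmax(np.abs(mf)))
```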

S³ seminar : Extending Stationarity to Graph Signal Processing: a Model for Stochastic Graph Signals

Seminar on March 31, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Benjamin Girault, (University of Southern California)

During the past few years, graph signal processing has been extending the field of signal processing on Euclidean spaces to irregular spaces represented by graphs. We have seen successes ranging from the Fourier transform, to wavelets, vertex-frequency (time-frequency) decompositions, sampling theory, uncertainty principles, and convolutive filtering. One missing ingredient, though, is a set of tools to study stochastic graph signals, for which randomness introduces its own difficulties. Classical signal processing has introduced a very simple yet very rich class of stochastic signals that is at the core of their study: stationary signals. These are the signals that are statistically invariant under a shift of the origin of time. In this talk, we study two extensions of stationarity to graph signals, one that stems from a new translation operator for graph signals, and another with a more sensible interpretation on the graph. Along the way, we show that alternate definitions of stationarity on graphs from the recent literature are actually equivalent to our first definition. Finally, we look at a real weather dataset and show empirical evidence of stationarity.
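One common way to formalize stationarity on a graph (a sketch of the general idea, not necessarily the speaker's exact definition): a stochastic graph signal is wide-sense stationary when its covariance is diagonalized by the graph Fourier basis, and hence commutes with the Laplacian:

```python
import numpy as np

# Ring graph on N vertices: Laplacian Lap = D - W
N = 12
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
Lap = np.diag(W.sum(axis=1)) - W

# Graph Fourier basis: eigenvectors of the Laplacian
lam, U = np.linalg.eigh(Lap)

# A wide-sense stationary graph signal has a covariance diagonalized by
# the GFT basis: C = U diag(p) U^T for some nonnegative spectral profile p
p = 1.0 / (1.0 + lam)                    # e.g. a low-pass power profile
C = U @ np.diag(p) @ U.T

# Equivalent characterization: the covariance commutes with the Laplacian
commute_error = np.linalg.norm(C @ Lap - Lap @ C)
```

On the ring graph this recovers classical time stationarity (circulant covariance), which is why the ring is the usual sanity check.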

Bio: Benjamin Girault received his License (B.Sc.) and his Master (M.Sc.) in France from École Normale Supérieure de Cachan, France, in 2009 and 2012 respectively, in the field of theoretical computer science. He then received his PhD in computer science from École Normale Supérieure de Lyon, France, in December 2015. His dissertation, entitled "Signal Processing on Graphs - Contributions to an Emerging Field", focused on extending the classical definition of stationary temporal signals to stationary graph signals. Currently, he is a postdoctoral scholar with Antonio Ortega and Shri Narayanan at the University of Southern California, continuing his work on graph signal processing with a focus on applying these tools to understanding human behavior.

S³ seminar : Novel Algorithms for Automated Diagnosis of Neurological and Psychiatric Disorders

Seminar on March 28, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Hojjat ADELI, (The Ohio State University, Columbus, USA)

Novel algorithms are presented for data mining of time-series data and automated electroencephalogram (EEG)-based diagnosis of neurological and psychiatric disorders, based on the adroit integration of three different computing technologies and problem-solving paradigms: neural networks, wavelets, and chaos theory. Examples of the research performed by the author and his associates for automated diagnosis of epilepsy, Alzheimer's disease, attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and Parkinson's disease (PD) are reviewed.

Biography: Hojjat Adeli received his Ph.D. from Stanford University in 1976 at the age of 26. He is Professor of Civil, Environmental, and Geodetic Engineering, and by courtesy Professor of Biomedical Informatics, Biomedical Engineering, Neuroscience, and Neurology at The Ohio State University. He has authored over 550 publications including 15 books. He is the Founder and Editor-in-Chief of the international research journals Computer-Aided Civil and Infrastructure Engineering, now in its 32nd year of publication, and Integrated Computer-Aided Engineering, now in its 25th year of publication, and the Editor-in-Chief of the International Journal of Neural Systems. In 1998 he received the Distinguished Scholar Award from OSU, "in recognition of extraordinary accomplishment in research and scholarship". In 2005, he was elected a Distinguished Member of ASCE "for wide-ranging, exceptional, and pioneering contributions to computing in civil engineering and extraordinary leadership in advancing the use of computing and information technologies in many engineering disciplines throughout the world." In 2010 he was profiled as an Engineering Legend in the ASCE journal Leadership and Management in Engineering, and Wiley established the Hojjat Adeli Award for Innovation in Computing. In 2011 World Scientific established the Hojjat Adeli Award for Outstanding Contributions in Neural Systems. He is a Fellow of the IEEE, the American Association for the Advancement of Science, the American Neurological Association, and the American Institute for Medical and Biological Engineering. Among his numerous awards and honors are a special medal from the Polish Neural Network Society, the Eduardo Renato Caianiello Award for Excellence in Scientific Research from the Italian Society of Neural Networks, the Omar Khayyam Research Excellence Award from Scientia Iranica, an Honorary Doctorate from Vilnius Gediminas Technical University, and corresponding membership of the Spanish Royal Engineering Society.

S³ seminar : Stochastic proximal algorithms with applications to online image recovery

Seminar on March 24, 2017, 11:00 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Jean-Christophe PESQUET, CVN, CentraleSupélec

Stochastic approximation techniques have been used in various contexts in machine learning and adaptive filtering. We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in a Hilbert space. In our general setting, stochastic approximations of the cocoercive operator and perturbations in the evaluation of the resolvents of the set-valued operator are possible. In addition, relaxations and not necessarily vanishing proximal parameters are allowed. Weak almost sure convergence properties of the iterates are established under mild conditions on the underlying stochastic processes. Leveraging on these results, we propose a stochastic version of a popular primal-dual proximal optimization algorithm, and establish its convergence. We finally show the interest of these results in an online image restoration problem.
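In the deterministic, fully-observed special case, forward-backward splitting reduces to the classical proximal-gradient (ISTA) iteration. A minimal sketch on a LASSO problem (sizes and penalty are illustrative; the stochastic setting of the talk replaces the exact gradient with a noisy approximation):

```python
import numpy as np

rng = np.random.default_rng(7)

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Forward-backward (ISTA) for min_x 0.5 ||y - Ax||_2^2 + lam ||x||_1
m, n, lam = 40, 100, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 30, 77]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(m)
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient

x, obj = np.zeros(n), []
for _ in range(500):
    grad = A.T @ (A @ x - y)                         # forward (gradient) step
    x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    obj.append(0.5 * np.linalg.norm(y - A @ x) ** 2 + lam * np.abs(x).sum())
```

Here the proximal operator of the l1 norm is the soft threshold; with step size 1/L the objective decreases monotonically, a property the stochastic analysis of the talk must do without.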

S³ seminar : On Electromagnetic Modeling and Imaging of Defects in Periodic Fibered Laminates

Seminar on March 10, 2017, 12:30 PM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Zicheng LIU, (Inverse problems Group, Signals and Statistics Division, L2S Laboratory)

Composite laminates are commonly used in industry due to advantages such as high stiffness, light weight, and versatility. They usually involve multiple layers, each one consisting of periodically-positioned circular-cylindrical fibers in a given homogeneous matrix. However, defects can affect the structure, and thereupon impact safety and efficiency, so they call for nondestructive testing. Testing by electromagnetic (EM) means requires fast and reliable computational modeling of both sound and damaged laminates, if one wishes to better understand the pluses and minuses of the testing and to derive efficient imaging algorithms for the end user. Both direct modeling and inverse imaging will be introduced in this presentation. For the former, since the periodicity of the structure is destroyed by the defects, methods based on the Floquet theorem are inapplicable. Two modeling approaches are then used: one is the supercell methodology, in which a fictitious periodic structure is fabricated so that the EM field solution everywhere in space can be accurately approximated, provided that the supercell is large enough; the other is based on fictitious source superposition (FSS), in which defects are treated as equivalent sources and the field solution is a summation of the responses to the exterior source and to the equivalent ones. For imaging, with MUSIC and a sparsity-based algorithm, missing fibers can be accurately located.

Biography: Zicheng LIU was born in Puyang, China, in October 1988. He received the M.S. degree in circuit and system from Xidian University, Xi’an, China in March 2014 and is currently pursuing the Ph.D. degree with the benefit of a Chinese Scholarship Council (CSC) grant at the Laboratoire des Signaux et Systèmes, jointly Centre National de la Recherche Scientifique (CNRS), CentraleSupélec, and Université Paris-Sud, Université Paris-Saclay, Paris, France. He will defend his Université Paris-Saclay Ph.D. early Fall 2017. His present work is on the electromagnetic modeling of damaged periodic fiber-based laminates and corresponding imaging algorithms and inversion. His research interests include computational electromagnetics, scattering theory on periodic structures, non-destructive testing, sparsity theory, and array signal processing.

S³ seminar : On Imaging Methods of Material Structures with Different Boundary Conditions

Seminar on March 10, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Xiuzhu YE, (Beihang University, Beijing, China)

This talk is about the two-dimensional inverse scattering problems for different kinds of boundary conditions. Firstly, we propose a perfect electric conductor (PEC) inverse scattering approach, which is able to reconstruct PEC objects of arbitrary number and shape without requiring prior information on the approximate locations or the number of the unknown scatterers. Secondly, the modeling scheme of the T-matrix method is introduced to solve the challenging problem of reconstructing a mixture of both PEC and dielectric scatterers together. Then the method is further extended to the case of scatterers with four boundary conditions together. Last, we propose a method to solve the dielectric and mixed boundary through-wall imaging problem. Various numerical simulations and experiments are carried out to validate the proposed methods.

Biography: Xiuzhu YE was born in Heilongjiang, China, in December 1986. She received the Bachelor degree in Communication Engineering from the Harbin Institute of Technology, China, in July 2008 and the Ph.D. degree from the National University of Singapore, Singapore, in April 2012. From February 2012 to January 2013, she worked in the Department of E.C.E., National University of Singapore, as a Research Fellow. Currently, she is Assistant Professor in the School of Electronic and Information Engineering of Beihang University. She has also been engaged in various capacities with the Ecole Centrale de Pékin (ECPK). She is presently benefiting from an invited professorship position at University Paris-Sud and later this Summer 2017 she will be benefiting from an invited professorship position at CentraleSupélec, both within the Laboratoire des Signaux et Systèmes, jointly Centre National de la Recherche Scientifique (CNRS), CentraleSupélec, and Université Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, France. Her current research interests mainly include fast algorithms for solving inverse scattering problems, near-field imaging, biomedical imaging, and antenna design.

S³ seminar : FastText: A library for efficient learning of word representations and sentence classification

Seminar on February 24, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Piotr Bojanowski, (Facebook AI Research)

In this talk, I will describe FastText, an open-source library that can be used to train word representations or text classifiers. The library is based on our generalization of the famous word2vec model, making it easy to adapt to various applications. I will go over the formulation of the skipgram and cbow models of word2vec and how these were extended to meet the needs of our model. I will describe in detail the two applications of our model, namely document classification and building morphologically-rich word representations. In both applications, our model achieves very competitive performance while being very simple and fast.
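A central ingredient of FastText's morphologically-rich representations is the bag of character n-grams with boundary markers. A small sketch of that extraction step (the 3-to-6 range follows the library's defaults; the hashing of n-grams into buckets is omitted):

```python
def char_ngrams(word, nmin=3, nmax=6):
    """Subword character n-grams with boundary markers '<' and '>'."""
    w = f"<{word}>"
    grams = [w[i:i + n] for n in range(nmin, nmax + 1)
             for i in range(len(w) - n + 1)]
    return grams + [w]                  # the full (marked) word is also kept

# e.g. the 3-grams of 'where' are <wh, whe, her, ere, re>
```

A word vector is then the sum of the vectors of its n-grams, which lets the model build representations for words unseen at training time.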

S³ seminar : Stochastic Quasi-Newton Langevin Monte Carlo

Seminar on February 10, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Umut Şimşekli, (LTCI, Télécom ParisTech)

Recently, Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) methods have been proposed for scaling up Monte Carlo computations to large data problems. Whilst these approaches have proven useful in many applications, vanilla SG-MCMC might suffer from poor mixing rates when random variables exhibit strong couplings under the target densities or big scale differences. In this talk, I will present a novel SG-MCMC method that takes the local geometry into account by using ideas from Quasi-Newton optimization methods. These second order methods directly approximate the inverse Hessian by using a limited history of samples and their gradients. Our method uses dense approximations of the inverse Hessian while keeping the time and memory complexities linear with the dimension of the problem. I will provide formal theoretical analysis where it is shown that the proposed method is asymptotically unbiased and consistent with the posterior expectations. I will finally illustrate the effectiveness of the approach on both synthetic and real datasets. This is a joint work with Roland Badeau, Taylan Cemgil and Gaël Richard. arXiv: https://arxiv.org/abs/1602.03442
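For reference, the vanilla SGLD iteration that such quasi-Newton variants build on, sketched on a toy Gaussian-mean posterior (step size, batch size, and burn-in are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy target: posterior over the mean of N(theta, 1) data, flat prior,
# so the exact posterior is N(mean(data), 1/N)
data = rng.normal(loc=2.0, scale=1.0, size=1000)
N, batch, eps = len(data), 50, 1e-3
post_mean = data.mean()

theta, samples = 0.0, []
for it in range(5000):
    idx = rng.integers(0, N, size=batch)
    # unbiased stochastic gradient of the negative log-posterior
    grad = N * (theta - data[idx].mean())
    # SGLD update: half gradient step plus injected Gaussian noise
    theta += -0.5 * eps * grad + rng.normal(scale=np.sqrt(eps))
    if it >= 1000:                      # discard burn-in
        samples.append(theta)

sgld_mean = float(np.mean(samples))
```

The quasi-Newton method of the talk preconditions both the gradient and the injected noise with an approximate inverse Hessian, which is what improves mixing under strong couplings or scale differences.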

S³-PASADENA seminar : Detecting confounding in multivariate linear models via spectral analysis

Seminar on January 31, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Dominik Janzing, Max Planck Institute for Intelligent Systems, Tuebingen, Germany

We study a model where one target variable Y is correlated with a vector X := (X_1, ..., X_d) of predictor variables that are potential causes of Y. We describe a method that infers to what extent the statistical dependences between X and Y are due to the influence of X on Y, and to what extent they are due to a hidden common cause (confounder) of X and Y. The method is based on an independence assumption stating that, in the absence of confounding, the vector of regression coefficients describing the influence of X on Y has 'generic orientation' relative to the eigenspaces of the covariance matrix of X. For the special case of a scalar confounder, we show that confounding typically spoils this generic orientation in a characteristic way that can be used to quantitatively estimate the amount of confounding.
I also show some encouraging experiments with real data, but the method is work in progress and critical comments are highly appreciated.

Postulating 'generic orientation' is inspired by a more general postulate stating that P(cause) and P(effect|cause) are independent objects of Nature and therefore don't contain information about each other [1,2,3], an idea that has already inspired several causal inference methods, e.g. [4,5].

[1] Janzing, Schoelkopf: Causal inference using the algorithmic Markov condition, IEEE TIT 2010.
[2] Lemeire, Janzing: Replacing causal faithfulness with the algorithmic independence of conditionals, Minds and Machines, 2012.
[3] Schoelkopf et al: On causal and anticausal learning, ICML 2012.
[4] Janzing et al: Telling cause from effect based on high-dimensional observations, ICML 2010.
[5] Shajarisales et al: Telling cause from effect in deterministic linear dynamical systems, ICML 2015.

S³ Seminar: Adapting to unknown noise level in super-resolution

Seminar on January 20, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Claire Boyer (LSTA, UPMC)

We study sparse spike deconvolution over the space of complex-valued measures when the input measure is a finite sum of Dirac masses. We introduce a new procedure to handle spike deconvolution when the noise level is unknown. Prediction and localization results will be presented for this approach. Some insight into the probabilistic tools used in the proofs may also be briefly given.

S³ seminar : Inverse problems for speech production

Seminar on January 20, 2017, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Benjamin Elie (LORIA, IADI)

Studies on speech production are based on the extraction and analysis of the acoustic features of human speech, and on their relationship with the articulatory and phonatory configurations realized by the speaker. An interesting tool for such research, and the topic of this talk, is articulatory synthesis: the numerical simulation of the mechanical and acoustical phenomena involved in speech production. The aim is to numerically reproduce a speech signal that contains the observed acoustic features consistent with the actual articulatory and phonatory gestures of the speaker. The articulatory approach raises several problems that will be tackled in this talk, and possible solutions will be discussed. First, the different articulatory gestures realized in natural speech must be precisely observed. For that purpose, the first part of the talk focuses on methods to acquire articulatory films of the vocal tract by MRI with a fast acquisition rate via sparse techniques (compressed sensing). The aim is, ultimately, to build an articulatory model and a coarticulation model. The investigation of the acoustical phenomena involved in natural speech requires separating the contributions of the different acoustic sources in the speech signal. The periodic/aperiodic decomposition of the speech signal is the subject of the second part of the talk. The challenge is to study the acoustic properties of the frication noise generated during the production of fricatives, and also to quantify the amount of voicing produced during fricatives. Finally, in order to directly use analysis-by-synthesis methods, it is interesting to estimate the articulatory configurations of the speaker from the acoustic signal. This is the aim of acoustic-articulatory inversion for copy synthesis, the third part of the talk.
Direct applications of these problems to the study of speech production and phonetics will be presented.

Performance and methods for compressed sampling: robustness to dictionary mismatch and optimization of the sampling kernel

Stéphanie BERNHARDT
Thesis defended on December 05, 2016, 2:00 PM at CentraleSupelec (Gif-sur-Yvette) Amphi F3-05

Dans cette thèse, nous nous intéressons à deux méthodes promettant de reconstruire un signal parcimonieux largement sous-échantillonné : l’échantillonnage de signaux à taux d’innovation fini et l’acquisition comprimée. Il a été montré récemment qu’en utilisant un noyau de pré-filtrage adapté, les signaux impulsionnels peuvent être parfaitement reconstruits bien qu’ils soient à bande non-limitée. En présence de bruit, la reconstruction est réalisée par une procédure d’estimation de tous les paramètres du signal d’intérêt. Dans cette thèse, nous considérons premièrement l’estimation des amplitudes et retards paramétrisant une somme finie d'impulsions de Dirac filtrée par un noyau quelconque et deuxièmement l’estimation d’une somme d’impulsions de forme quelconque filtrée par un noyau en somme de sinus cardinaux (SoS). Le noyau SoS est intéressant car il est paramétrable par un jeu de paramètres à valeurs complexes et vérifie les conditions nécessaires à la reconstruction. En se basant sur l’information de Fisher Bayésienne relative aux paramètres d’amplitudes et de retards et sur des outils d’optimisation convexe, nous proposons un nouveau noyau d’échantillonnage. L’acquisition comprimée permet d’échantillonner un signal en-dessous de la fréquence d’échantillonnage de Shannon, si le vecteur à échantillonner peut être approximé comme une combinaison linéaire d’un nombre réduit de vecteurs extraits d’un dictionnaire sur-complet. Malheureusement, dans des conditions réalistes, le dictionnaire (ou base) n’est souvent pas parfaitement connu, et est donc entaché d’une erreur (DB). L’estimation par dictionnaire, se basant sur les mêmes principes, permet d’estimer des paramètres à valeurs continues en les associant selon une grille partitionnant l’espace des paramètres. Généralement, les paramètres ne se trouvent pas sur la grille, ce qui induit un erreur d’estimation même à haut rapport signal sur bruit (RSB). C’est le problème de l’erreur de grille (EG). 
In this thesis, we study the consequences of the DB and EG error models in terms of Bayesian performance and show that a bias is introduced even with perfect support estimation and at high SNR. The Bayesian Cramér-Rao bound (BCRB) is derived for the unstructured DB and EG models, which, although very close, are not equivalent in terms of performance. We also give the averaged Cramér-Rao bound (BCRM) in the case of a small grid error and study the analytical expression of the Bayesian mean squared error (BEQM) on the estimation of the grid error at high SNR. The latter is confirmed in practice in the context of frequency estimation for various sparse reconstruction algorithms. We propose two new estimators: the Bias-Correction Estimator (BiCE) and the Off-Grid Error Correction (OGEC), which correct the model error induced by the DB and EG errors, respectively. These two estimators, mainly based on an oblique projection of the measurements, are designed as post-processing stages intended to reduce the estimation bias after a pre-estimation performed by any sparse reconstruction algorithm. The theoretical bias and variance of BiCE and OGEC are derived in order to characterize their statistical efficiency. We show, in the challenging context of sampling non-band-limited pulse signals, that these two estimators considerably reduce the effect of the model error on estimation performance. The BiCE and OGEC estimators are both (i) generic, since they can be combined with any sparse estimator in the literature, (ii) fast, since their computational cost remains low compared to that of sparse estimators, and (iii) endowed with good statistical properties.
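The basic linear-algebra primitive behind such oblique-projection post-processing can be sketched as follows (a hypothetical illustration of the projector itself, not an implementation of BiCE or OGEC): given a signal subspace spanned by H and a model-error subspace spanned by S, the oblique projector E maps onto range(H) along range(S), so it annihilates the error component while leaving the signal part untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

def oblique_projector(H, S):
    """Return E with E @ H = H and E @ S = 0 (H and S with disjoint ranges)."""
    # Orthogonal projector onto the complement of range(S)
    P_S_perp = np.eye(S.shape[0]) - S @ np.linalg.pinv(S)
    return H @ np.linalg.pinv(P_S_perp @ H) @ P_S_perp

H = rng.standard_normal((20, 3))   # signal subspace basis
S = rng.standard_normal((20, 2))   # model-error subspace basis
E = oblique_projector(H, S)

print(np.allclose(E @ H, H), np.allclose(E @ S, np.zeros_like(S)))  # True True
```

Applied to measurements of the form y = H a + S b + n, the projector removes the S b term; the thesis's estimators build on this idea as a cheap correction step after any sparse pre-estimation.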

 

Keywords:

sampling, sparsity, model error, Bayesian bounds, kernels, pulse signals

 

Composition of the jury

M. Rémy BOYER Université Paris-Sud Thesis advisor

Mme Sylvie MARCOS CNRS Thesis co-advisor

M. Pascal LARZABAL Université Paris-Sud Thesis co-supervisor

M. David BRIE Université de Lorraine Reviewer

M. André FERRARI Université Côte d'Azur Reviewer

M. Eric CHAUMETTE ISAE-Supaéro Examiner

M. Ali MOHAMMAD-DJAFARI CNRS Examiner

M. Nicolas DOBIGEON Université de Toulouse Examiner

S³ seminar: High-dimensional sampling with the Unadjusted Langevin Algorithm

Seminar on November 23, 2016, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Alain Durmus (LTCI, Telecom ParisTech)

Recently, the problem of designing MCMC samplers adapted to high-dimensional distributions, with sensible theoretical guarantees, has received a lot of interest. The applications are numerous, including large-scale inference in machine learning, Bayesian nonparametrics, Bayesian inverse problems, and aggregation of experts, among others. When the density is L-smooth (the log-density is continuously differentiable and its derivative is Lipschitz), we will advocate the use of a "rejection-free" algorithm based on the Euler discretization of the Langevin diffusion, with either constant or decreasing step sizes. We will present several new results establishing convergence to stationarity under different conditions on the log-density (from the weakest, bounded oscillation on a compact set with super-exponential tails, to log-concavity).
When the density is strongly log-concave, we also investigate the convergence of an appropriately weighted empirical measure, and we will report bounds on the mean squared error as well as exponential deviation inequalities for Lipschitz functions.
Finally, based on optimization techniques, we will propose new methods to sample from high-dimensional distributions. In particular, we will be interested in densities which are not continuously differentiable. Some Monte Carlo experiments will be presented to support our findings.
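The "rejection-free" algorithm discussed in the talk can be sketched in a few lines (a minimal illustration of ours, not the speaker's code): the Euler discretization of the Langevin diffusion dX_t = -∇U(X_t) dt + √2 dB_t targets π(x) ∝ exp(-U(x)), and no Metropolis accept/reject step is applied, hence "unadjusted". Here U is the strongly log-concave potential of a standard d-dimensional Gaussian, so ∇U(x) = x.

```python
import numpy as np

rng = np.random.default_rng(42)

d = 50                        # dimension
gamma = 0.05                  # constant step size
n_iter = 20000

x = np.zeros(d)
samples = []
for k in range(n_iter):
    # One ULA step: gradient move plus Gaussian noise, no accept/reject
    x = x - gamma * x + np.sqrt(2 * gamma) * rng.standard_normal(d)
    if k >= n_iter // 2:      # discard the first half as burn-in
        samples.append(x.copy())

samples = np.asarray(samples)
print(samples.mean(), samples.var())   # ≈ 0 and ≈ 1, up to an O(gamma) bias
```

The small residual bias in the variance (the chain targets a slightly inflated Gaussian for a constant step size) is precisely the discretization error that the convergence results quantify, and that decreasing step sizes remove asymptotically.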

Gaussian Channels: I-MMSE at Every SNR

Seminar on October 20, 2016, 2:00 PM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Prof. Shlomo Shamai, The Andrew and Erna Viterbi Faculty of Electrical Engineering at the Technion-Israel Institute of Technology

Multi-user information theory presents many open problems, even in the simple Gaussian regime. One such prominent problem is the two-user Gaussian interference channel, which has remained open for over 30 years. We distinguish between two families of multi-user scalar Gaussian settings: a single transmitter (one dimension) and two transmitters (two dimensions), without restricting the number and nature of the receivers. Our first goal is to fully depict the behavior of asymptotically optimal, capacity-achieving codes in one-dimensional settings at every SNR. Such an understanding provides important insight into capacity-achieving schemes and also gives an exact measure of the disturbance such codes cause to unintended receivers.

We first discuss the Gaussian point-to-point channel and enhance some known results. We then consider the Gaussian wiretap channel and the Gaussian broadcast channel (with and without secrecy demands) and reveal MMSE properties that confirm "rules of thumb" used in the achievability proofs of the capacity regions of these channels and provide insight into the design of such codes.
We also include some recent observations that give a graphical interpretation to rate and equivocation in this one dimensional setting.
Our second goal is to employ these observations to the analysis of the two dimensional setting. Specifically, we analyze the two-user Gaussian interference channel, where simultaneous transmissions from two users interfere with each other. We employ our understanding of asymptotically point-to-point optimal code sequences to the analysis of this channel. Our results also resolve the "Costa Conjecture"
(a.k.a. the "missing corner points" conjecture), as recently proved by Polyanskiy and Wu using the Wasserstein continuity of entropy.
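The I-MMSE relation underlying the talk's title states that, for the scalar Gaussian channel Y = √snr·X + N with N ~ N(0,1), the derivative of the mutual information with respect to the SNR equals half the MMSE of estimating X from Y. A quick numerical check of ours, using the Gaussian input X ~ N(0,1), for which both sides are known in closed form (I(snr) = ½ ln(1+snr) in nats, mmse(snr) = 1/(1+snr)):

```python
import numpy as np

def mutual_info(snr):
    # I(X; Y) in nats for a Gaussian input of unit power
    return 0.5 * np.log1p(snr)

def mmse(snr):
    # Minimum mean squared error of estimating X from Y
    return 1.0 / (1.0 + snr)

snr, h = 3.0, 1e-6
derivative = (mutual_info(snr + h) - mutual_info(snr - h)) / (2 * h)
print(derivative, 0.5 * mmse(snr))   # both ≈ 0.125
```

Characterizing this MMSE profile "at every SNR" for asymptotically capacity-achieving codes, rather than for a fixed input distribution, is what the talk makes precise.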

The talk is based on joint studies with R. Bustin, H. V. Poor and R. F. Schaefer.

S³: Material-by-Design for Synthesis, Modeling, and Simulation of Innovative Systems and Devices

Seminar on September 30, 2016, 10:30 AM at CentraleSupelec (Gif-sur-Yvette) Salle du conseil du L2S - B4.40
Giacomo Oliveri (ELEDIA, University of Trento)

Several new devices and architectures have been proposed in the last decade to exploit the unique features of innovative artificially-engineered materials (such as metamaterials, nanomaterials, biomaterials) with important applications in science and engineering. In such a framework, a new set of techniques belonging to the Material-by-Design (MbD) framework [1]-[5] have been recently introduced to synthesize innovative devices comprising task-oriented artificial materials. MbD is an instance of the System-by-Design paradigm [6][7] defined in short as “How to deal with complexity”. More specifically, MbD considers the problem of designing artificial-material enhanced-devices from a completely new perspective, that is "The application-oriented synthesis of advanced systems comprising artificial materials whose constituent properties are driven by the device functional requirements". The aim of this seminar will be to review the fundamentals, features, and potentialities of the MbD paradigm, as well as to illustrate selected state-of-the-art applications of this design framework in sensing and communications scenarios.

Bio: Giacomo Oliveri received the B.S. and M.S. degrees in Telecommunications Engineering and the Ph.D. degree in Space Sciences and Engineering from the University of Genoa, Italy, in 2003, 2005, and 2009, respectively. He is currently a Tenure-Track Associate Professor at the Department of Information Engineering and Computer Science (University of Trento), Professor at CentraleSupélec, member of the Laboratoire des signaux et systèmes (L2S) at CentraleSupélec, and member of the ELEDIA Research Center. He was a visiting researcher at L2S, Gif-sur-Yvette, France, in 2012, 2013, and 2015, and an Invited Associate Professor at the University of Paris-Sud, France, in 2014. In 2016, he was awarded the "Jean d'Alembert" Scholarship by the IDEX Université Paris-Saclay. He is author/co-author of over 250 peer-reviewed papers in international journals and conferences, which have been cited more than 2200 times, and his h-index is 26 (source: Scopus). His research work is mainly focused on electromagnetic direct and inverse problems, system-by-design and metamaterials, compressive sensing techniques and applications to electromagnetics, and antenna array synthesis. Dr. Oliveri serves as an Associate Editor of the International Journal of Antennas and Propagation, of the Microwave Processing journal, and of the International Journal of Distributed Sensor Networks. He is the Chair of the IEEE AP/ED/MTT North Italy Chapter.
