SAVE THE DATE
Thursday, 7 December 2017, 10:00–12:00
Model reduction and sparsity in high dimension
LRI, Bâtiment 660, Rue Noetzlin, 91190 Gif S/Yvette - Amphithéâtre Digitéo
Rémy Boyer (L2S) - Model Reduction and Factor Estimation With Tensor Graph.
In the context of big data, the growing number, denoted here by $D$, of available sensing technologies produces large amounts of heterogeneous measurements. Analyzing each collected data set independently is clearly a suboptimal strategy, because potential hidden "data correlations" are simply ignored. The challenge is to adopt a representation rich and flexible enough to accurately model the problem of interest. The multilinear algebra of tensors is a powerful mathematical framework able to reach this goal. In many practical contexts, $D$ is large, and high-order tensor decompositions then face new challenges in terms of storage cost and algorithmic stability. In this talk, we present new ideas and methods to break this "curse of dimensionality", based on a graph-based model reduction in which the tensor is described by interconnected low-order core tensors.
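One concrete instance of such a graph of interconnected low-order cores is the tensor-train format, where a $D$-way tensor is factored into a chain of 3-way cores via sequential SVDs. The sketch below is illustrative only (the function names and the truncation parameter `max_rank` are our assumptions, not the speaker's method); it shows how the chain is built and how the original tensor is recovered by contracting the cores.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Sketch of TT-SVD: factor a D-way tensor into a chain of 3-way cores.

    Each core has shape (r_{k-1}, n_k, r_k); storage grows linearly in D
    instead of exponentially, which is the point of the model reduction.
    """
    dims = tensor.shape
    D = len(dims)
    cores = []
    r_prev = 1
    C = tensor.reshape(r_prev * dims[0], -1)
    for k in range(D - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, s.size)              # truncate to the target rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the chain of cores back into the full tensor."""
    full = cores[0]                            # shape (1, n_1, r_1)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full[0, ..., 0]                     # drop the boundary ranks
```

With `max_rank` large enough that no singular values are discarded, the reconstruction is exact up to floating-point error; choosing a small `max_rank` trades accuracy for a much cheaper representation.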
Jérôme-Alexis Chevalier (Inria, Parietal) - Statistical control of sparse models in high dimension.
(work with Bertrand Thirion and Joseph Salmon)
In many scientific fields, the dimensionality of the explanatory data has increased while the number of available samples remains limited. Although high-dimensional regression methods such as the Lasso (Tibshirani, 1994) have been popularized in contexts where many features may explain a phenomenon, providing confidence guarantees for the predictive models they produce remains a burning issue.
In this high-dimensional setting, we present three methods that give statistical control of the parameters of linear models. Multi sample-splitting (Meinshausen et al., 2008) is based on repeating a two-step procedure: a screening step followed by a least-squares step. The two other methods, the corrected Ridge (Bühlmann, 2013) and the desparsified Lasso (Zhang and Zhang, 2014), rely on estimating the underlying feature covariance matrix. In our empirical study, the desparsified Lasso appears to have the greatest detection power. A related problem is the estimation of the noise standard deviation in high-dimensional linear models; we present different methods and compare their results in several simulations.
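The two-step procedure behind sample-splitting can be sketched as follows: screen features with the Lasso on one half of the data, then compute ordinary-least-squares p-values for the screened features on the other half. This is a minimal single-split sketch under our own assumptions (the function name, the regularization level `alpha`, and the Bonferroni correction over the screened set are illustrative choices, not the exact procedure of the cited papers, which aggregate over many random splits).

```python
import numpy as np
import scipy.stats as st
from sklearn.linear_model import Lasso

def split_pvalues(X, y, alpha=0.1, rng=None):
    """One split of the sample-splitting idea: Lasso screening, then OLS tests."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    idx = rng.permutation(n)
    i1, i2 = idx[: n // 2], idx[n // 2:]

    # Step 1: screening — keep features with nonzero Lasso coefficients.
    lasso = Lasso(alpha=alpha).fit(X[i1], y[i1])
    S = np.flatnonzero(lasso.coef_)

    # Step 2: least squares on the held-out half, classical t-test p-values.
    pvals = np.ones(p)
    if 0 < S.size < i2.size:
        Xs = X[i2][:, S]
        beta, *_ = np.linalg.lstsq(Xs, y[i2], rcond=None)
        resid = y[i2] - Xs @ beta
        dof = i2.size - S.size
        sigma2 = resid @ resid / dof
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xs.T @ Xs)))
        pvals[S] = 2 * st.t.sf(np.abs(beta / se), dof)

    # Bonferroni correction over the screened set.
    return np.minimum(pvals * max(S.size, 1), 1.0)
```

Because screening and testing use disjoint halves of the data, the p-values on the second half are valid conditionally on the screened set, which is what makes the statistical control possible.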
All information on this webpage: GT PASADENA