
OPALE

Random monotone operators and applications to stochastic optimization


Axis: DataSense, task 4
Subject: Random monotone operators and applications to stochastic optimization
Directors: Walid HACHEM, Université Paris-Est Marne-la-Vallée; Pascal BIANCHI, Telecom ParisTech; Jérémie JAKUBOWICZ, Telecom SudParis
Institutions: LTCI, SAMOVAR
Administrative laboratory: LTCI
PhD student: Adil SALIM
Beginning: autumn 2015
Thesis defence: October or November 2018
Scientific production:
  • Journal papers
    • A. Salim, P. Bianchi, and W. Hachem, Snake: a Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs, accepted for publication in IEEE Transactions on Automatic Control, March 2018.
    • P. Bianchi, W. Hachem, and A. Salim, A constant step Forward-Backward algorithm involving random maximal monotone operators, accepted for publication in Journal of Convex Analysis, March 2018.
    • P. Bianchi, W. Hachem, and A. Salim, Constant Step Stochastic Approximations Involving Differential Inclusions: Stability, Long-Run Convergence and Applications, accepted for publication in Stochastics, May 2018.
  • Conference papers
    • A. Salim, P. Bianchi, and W. Hachem, “A Splitting Algorithm for Minimization under Stochastic Linear Constraints”, ISMP 2018, Bordeaux, France.
    • A. Salim, P. Bianchi, and W. Hachem, “A Constant Step Stochastic Douglas-Rachford Algorithm with Application to Non Separable Regularization”, IEEE ICASSP 2018, Calgary, Canada.
    • A. Salim, P. Bianchi, and W. Hachem, “Snake: a Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs”, CAp 2017, Grenoble, France.
    • P. Bianchi, W. Hachem, and A. Salim, “Convergence d’un algorithme du gradient proximal stochastique à pas constant et généralisation aux opérateurs monotones aléatoires”, GRETSI 2017, Juan-les-Pins, France.
    • R. Mourya, P. Bianchi, A. Salim, and C. Richard, “An Adaptive Distributed Asynchronous Algorithm with Application to Target Localization”, IEEE CAMSAP 2017, Curacao, Dutch Antilles.
    • A. Salim, P. Bianchi, and W. Hachem, “A Stochastic Proximal Point Algorithm for Total Variation Regularization over Large Scale Graphs”, IEEE CDC 2016, Las Vegas, USA.
    • P. Bianchi, W. Hachem, and A. Salim, “Building Stochastic Optimization Algorithms with Random Monotone Operators”, EUCCO 2016, Leuven, Belgium.


Context:
The general objective of the thesis is to study the behaviour of optimization algorithms based on the proximal operator in a noisy framework and, more generally, to evaluate the asymptotic behaviour of the evolution equations generated by random monotone operators. The aim is to build methodological tools that bridge the theory of stochastic approximation and the theory of monotone operators. This theoretical connection opens the way to the construction of new stochastic algorithms, such as primal-dual algorithms, with high application potential in statistical learning and signal processing.
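
As an illustration, the following minimal sketch (in Python with NumPy; the problem, function names, and parameter values are ours, chosen for illustration, and not taken from the thesis) shows the kind of constant-step stochastic proximal gradient iteration studied here: each iteration applies a stochastic gradient (forward) step followed by a proximal (backward) step.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t * ||.||_1 (soft-thresholding).
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def stochastic_proximal_gradient(A, b, lam, gamma, n_iters, rng):
        # Minimize (1/2) E[(a_i . x - b_i)^2] + lam * ||x||_1 by drawing one
        # random row (a_i, b_i) per iteration (i.i.d. statistical model) and
        # applying a forward (gradient) step, then a backward (proximal) step.
        n, d = A.shape
        x = np.zeros(d)
        for _ in range(n_iters):
            i = rng.integers(n)                   # random observation xi_k
            g = (A[i] @ x - b[i]) * A[i]          # stochastic gradient of the smooth part
            x = soft_threshold(x - gamma * g, gamma * lam)  # prox step, constant step size
        return x

    # Hypothetical usage on synthetic sparse-regression data.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.1 * rng.standard_normal(200)
    x_hat = stochastic_proximal_gradient(A, b, lam=0.1, gamma=0.01, n_iters=5000, rng=rng)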

Scientific objective:
The thesis is part of a research programme on random monotone operators recently initiated by members of the project team. Starting from the stochastic proximal point algorithm, in which the sequence of observed random variables (ξ_k)_{k∈ℕ} satisfies a given statistical model (i.i.d., martingale, or Markov chain), we set ourselves the following objectives (the iteration and its limiting dynamics are written out in the sketch after this list):
  • Establish the convergence of the sequence of iterates (x_k), or of their empirical means, towards the set of zeros of the Aumann integral of A, in the case where the step sizes γ_k are decreasing. More generally, show that the trajectories converge towards the solutions of the dynamical system ẋ(t) ∈ -A(x(t)).
  • Consider the case where the step sizes γ are constant. In this setting, the convergence proofs are of a different nature from the case where the steps tend towards zero.
  • Use the results obtained to establish the convergence of algorithms more complex than the proximal point algorithm, such as ADMM, primal-dual algorithms, and coordinate descent algorithms.
  • Demonstrate experimentally the good behaviour of the methods through numerical validations on massive data sets.
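
For reference, the following LaTeX sketch writes out the iteration and its limiting dynamics, assuming the stochastic proximal point form of the papers listed above (the notation, in particular the resolvent J and the measure \mu, is ours):

    % Stochastic proximal point iteration driven by the random maximal
    % monotone operators A(\xi_{k+1}); J denotes the resolvent.
    \[
      x_{k+1} = J_{\gamma_k A(\xi_{k+1})}(x_k),
      \qquad J_{\gamma A} := (I + \gamma A)^{-1}.
    \]
    % Limiting dynamics: a differential inclusion governed by the Aumann
    % integral (the "mean operator") of A with respect to the law \mu of \xi.
    \[
      \dot{x}(t) \in -\mathsf{A}(x(t)),
      \qquad \mathsf{A}(x) := \int A(\xi)(x)\, \mu(d\xi).
    \]
    % The zeros of \mathsf{A} are the equilibria of these dynamics and the
    % targets of the convergence results listed above.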

Perspectives:
Building on strong expertise in stochastic approximation, the project will enable the development of efficient "solvers" better suited to the massive data sets typically encountered in statistical learning problems.