This course covers the theory and the algorithms needed to solve cooperative optimization and learning problems. Cooperative problems arise naturally in many recent applications, such as distributed computing, massive-scale machine learning, and the Internet of Things (IoT). In all these scenarios, the data, and therefore the optimization and learning costs and losses, are distributed in space across multiple devices. Due to communication and privacy constraints, we cannot gather all the data at a single location, and we are “forced” to look for alternative, more advanced algorithms that solve optimization and learning problems in a cooperative fashion.
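To make the setting concrete, here is a minimal, purely illustrative sketch (not an algorithm from the course) of a cooperative problem: several agents each hold a private local dataset, and together they want to minimize the sum of their local least-squares losses by exchanging only gradients, never raw data. All names and problem sizes are invented for the example.

```python
import numpy as np

# Toy cooperative problem: n_agents devices jointly solve
#   minimize over x:  sum_i ||A_i x - b_i||^2
# where each (A_i, b_i) is private to agent i and never leaves it.
rng = np.random.default_rng(0)
n_agents, n_features = 4, 3
x_true = rng.normal(size=n_features)

# Each agent's private data stays local.
local_data = []
for _ in range(n_agents):
    A = rng.normal(size=(20, n_features))
    b = A @ x_true
    local_data.append((A, b))

# Cooperative gradient descent: agents share only their local
# gradients, and everyone takes a step on the averaged gradient.
x = np.zeros(n_features)
step = 0.1
for _ in range(200):
    grads = [2 * A.T @ (A @ x - b) / len(b) for A, b in local_data]
    x = x - step * np.mean(grads, axis=0)

print("max error:", np.max(np.abs(x - x_true)))
```

The point of the sketch is the communication pattern: the raw data `(A_i, b_i)` is never pooled, only gradients are exchanged — the kind of constraint that motivates the algorithms studied in this course.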
In particular, the aim of the course is to answer the following questions:
1. What is a cooperative optimization and learning problem?
2. Given a cooperative problem, which algorithm do I use to solve it? And with which theoretical guarantees?
3. How do we ensure privacy in the algorithms we develop, and what do we mean by privacy?
In order to answer these three questions, we will need to build a theory of cooperative algorithms. Along the way, we will discover some very recent developments in optimization and learning, such as the ADMM algorithm, federated learning, and differential privacy. Some of the algorithms we will study are implemented, in one form or another, by the big players in the field (Google, Microsoft, Meta, ...), and run in your browser and on your smartphone.
The notes and the course take for granted a good knowledge of continuous optimization and algorithms, for example the content of 4OPT1 and 4OPT2 at ENSTA.