The course MAA304 begins with a detailed overview of convergence, both in probability and in distribution, and revisits two key theorems in statistics: the law of large numbers and the central limit theorem. We will then look in detail at asymptotic statistics, including fundamental topics such as the asymptotic properties of maximum likelihood estimators (MLEs), the formulation of asymptotic confidence intervals, and the principles underlying asymptotic test theory.
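As a quick illustration of the central limit theorem, here is a minimal Python sketch (not part of the course material; the exponential distribution and the sample sizes are arbitrary choices) comparing standardized sample means to the standard normal limit:

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 1000, 5000   # sample size and number of replications (arbitrary)
    x = rng.exponential(scale=1.0, size=(reps, n))
    # Standardize the sample means: sqrt(n) * (mean - mu) / sigma, with mu = sigma = 1
    z = np.sqrt(n) * (x.mean(axis=1) - 1.0)
    print(z.mean(), z.std())   # close to 0 and 1, as the CLT predicts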
We will then highlight the crucial role of information theory in statistics, with particular emphasis on the notions of efficiency, Cramér-Rao theory, and sufficiency. Moving to multivariate linear regression, the focus shifts to inference in Gaussian models and model validation, giving students a solid understanding of this important statistical paradigm.
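As a minimal sketch of inference in a Gaussian linear model (simulated data; the design, coefficients and noise level are arbitrary choices, not course material):

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 3
    X = rng.normal(size=(n, p))
    beta = np.array([1.0, -2.0, 0.5])              # true coefficients (arbitrary)
    y = X @ beta + rng.normal(scale=0.3, size=n)   # Gaussian noise
    beta_hat, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2_hat = rss[0] / (n - p)                  # unbiased estimate of the noise variance
    print(beta_hat, sigma2_hat)                    # close to beta and 0.09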
Next, we turn to nonlinear regression and delve into a comprehensive study of logistic regression. The course concludes with a brief introduction to nonparametric statistics, emphasising the importance of distribution-free tests.
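For a first taste of logistic regression, here is a minimal numpy sketch (the model parameters and the plain gradient-ascent fit are illustrative choices, not methods mandated by the course):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(size=n)
    p_true = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))   # P(Y = 1 | x) in a logistic model
    y = rng.binomial(1, p_true)

    # Fit (intercept, slope) by gradient ascent on the average log-likelihood
    X = np.column_stack([np.ones(n), x])
    theta = np.zeros(2)
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        theta += 0.5 * X.T @ (y - p) / n              # gradient of the mean log-likelihood
    print(theta)                                      # roughly (0.5, 2.0)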
This course will cover the fundamentals of convex optimization, with a strong focus on finite-dimensional search spaces. It covers theory, algorithms, and applications.
The syllabus contains: convex sets, convex functions, optimization problems, subdifferential calculus (subgradients), optimality conditions in convex or differentiable optimization with equality and inequality constraints, and duality theory. The last part is an introduction to the optimal control of ordinary differential equations.
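As a minimal sketch of one syllabus item, the subgradient method on a nondifferentiable convex function (the objective and the diminishing step sizes are arbitrary choices made for illustration):

    import numpy as np

    # Minimize f(x) = |x - 1| + 0.5 * x**2, convex but nondifferentiable at x = 1
    def subgrad(x):
        return np.sign(x - 1.0) + x   # a valid subgradient of f at x

    x = 5.0
    for k in range(1, 1001):
        x -= (1.0 / k) * subgrad(x)   # diminishing steps: sum diverges, squared sum converges
    print(x)   # approaches x* = 1, where 0 lies in the subdifferential [0, 2]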
Prerequisites: MAA203
MAA305 presents the basic theory of discrete Markov chains. It starts by introducing the Markov property and then moves on to develop fundamental tools such as transition matrices, recurrence classes and stopping times. With these tools at hand, the strong Markov property is proven and applied to the study of hitting probabilities. In the second part of the course, we introduce the notion of stationary distribution and prove some basic existence and uniqueness results. Finally, the long-time behavior of Markov chains is investigated by proving the ergodic theorem and exponential convergence under Doeblin's condition. The course concludes with a survey of stochastic algorithms whose implementation relies on the construction of an appropriate Markov chain, such as the Metropolis-Hastings algorithm.
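To illustrate the last point, here is a minimal Metropolis-Hastings sketch (the standard-normal target and the random-walk proposal are arbitrary choices for this example); it builds a Markov chain whose stationary distribution is the target:

    import numpy as np

    rng = np.random.default_rng(3)

    def target(x):
        return np.exp(-0.5 * x**2)   # unnormalized density: only ratios are needed

    x, chain = 0.0, []
    for _ in range(10000):
        y = x + rng.normal(scale=1.0)   # symmetric random-walk proposal
        if rng.uniform() < min(1.0, target(y) / target(x)):
            x = y                       # accept the proposed move
        chain.append(x)
    print(np.mean(chain), np.var(chain))   # roughly 0 and 1 for this target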
Digital images are ubiquitous: from professional and smartphone cameras to remote sensing and medical imaging, technology steadily improves, making it possible to obtain ever more accurate images under ever more extreme acquisition conditions (shorter exposures, low-light imaging, finer resolution, and indirect computational imaging methods, to name a few).
This course introduces inverse problems in imaging (also known as image restoration), namely the mathematical models and algorithms that make it possible to obtain high-quality images from partial, indirect or noisy observations. After a short introduction to the physical modeling of image acquisition systems, we introduce the mathematical and computational tools required to achieve that goal. The course is structured in two parts.
The first part deals with well-posed inverse problems, where perfect reconstruction is possible under certain hypotheses. We first introduce the theory of continuous and discrete (fast) Fourier transforms, convolutions, several versions of the Shannon sampling theorem, aliasing, and the Gibbs effect. Then we review how imaging technology ensures the necessary band-limitedness hypothesis, and we survey a few applications, including antialiasing and multi-image super-resolution, exact interpolation and registration for stereo vision, and synthesis of stationary textures.
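As a foretaste of this part, here is a minimal sketch of exact interpolation by zero-padding the DFT spectrum (the signal and grid sizes are arbitrary choices; exactness holds because the signal is band-limited, as in the Shannon theorem):

    import numpy as np

    n, m = 16, 128                      # coarse and fine grid sizes (arbitrary)
    t = np.arange(n) / n
    x = np.sin(2 * np.pi * 3 * t)       # band-limited: frequency 3 < n / 2

    # Zero-pad the spectrum: exact trigonometric interpolation of band-limited signals
    X = np.fft.fft(x)
    Xpad = np.zeros(m, dtype=complex)
    Xpad[:n // 2] = X[:n // 2]
    Xpad[-n // 2:] = X[-n // 2:]
    x_fine = np.real(np.fft.ifft(Xpad)) * (m / n)

    t_fine = np.arange(m) / m
    print(np.max(np.abs(x_fine - np.sin(2 * np.pi * 3 * t_fine))))   # ~ machine precision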
In the second part we deal with ill-posed inverse problems and their variational and Bayesian formulations, leading to regularized optimization problems (for posterior maximization) and to posterior sampling (not covered in this course). This part starts with a review of optimization algorithms, including gradient descent and the simplest splitting and proximal algorithms. Then we review increasingly powerful regularization techniques in historical order: from Wiener filters and Tikhonov regularization, to total variation and non-local self-similarity. The course ends with an overture to recent approaches that use pretrained denoisers as implicit regularizers of inverse problems, via RED and plug-and-play algorithms for posterior maximization. The theory is illustrated by applications to image denoising, deblurring and inpainting.
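As a preview, here is a minimal sketch of Tikhonov-regularized deconvolution, which has a closed form in the Fourier domain (the 1D signal, box blur and regularization weight are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 256
    x = np.zeros(n); x[n // 3: 2 * n // 3] = 1.0   # piecewise-constant test signal

    k = np.zeros(n); k[:5] = 1.0 / 5.0             # box blur kernel (circular convolution)
    y = np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)))
    y += rng.normal(scale=0.01, size=n)            # additive Gaussian noise

    # Tikhonov estimate: argmin ||k * x - y||^2 + lam * ||x||^2, solved per frequency
    K, Y = np.fft.fft(k), np.fft.fft(y)
    lam = 1e-2                                     # regularization weight (arbitrary)
    x_hat = np.real(np.fft.ifft(np.conj(K) * Y / (np.abs(K)**2 + lam)))
    print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))   # relative reconstruction error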
Course contents
• Part 1 - Well-posed inverse problems
– Discrete Fourier Transform 1D & 2D
– Shannon's Sampling Theorem - Exact interpolation
– Applications to remote sensing & subpixel stereo vision
• Part 2 - Ill-posed inverse problems
– Review of Probability & Statistics
– Tikhonov / Wiener Regularization
– Total Variation Regularization - Convex optimization
– Learning-based Regularization - Non-convex optimization
Prerequisites
Essential
• Vector Spaces (MAA206), Mathematical Analysis (MAA102, MAA202 or similar)
• Linear Algebra (MAA101), basic Python programming (CSE101)
Helpful but not strictly required
• Measure theory and Integration (MAA301)
• Numerical Linear Algebra (MAA208), advanced Python programming (CSE102, MAA106)
• Basic concepts of Probability and Statistics (MAA203, MAA204, MAA304 or MAA305)
Schedule
• Lecture 1 : Discrete Fourier Transform (1D) - PL/PE/LS/Q
• Lecture 2 : Discrete Fourier Transform (2D) - PL/LS/Q
• Lecture 3 : Shannon's Sampling Theorem - PL/Q
• Lecture 4 : Continuous Fourier Transforms - PL/LS/HW
• Lecture 5 : Probability & Statistics - PL/HW
• Lecture 6 : Tikhonov / Wiener / TV Regularization - Part 1 - PL/LS
• Lecture 7 : Tikhonov / Wiener / TV Regularization - Part 2 - PL/LS/HW
• Lecture 8 : Applications to remote sensing & subpixel stereo vision - PL/Q
• Lecture 9 : Learning-based Regularization - PL/LS/Q
Abbreviations:
PL = Plenary Lecture, LS = Lab Session, TD = Practical Exercises, HW = Homework Assignment, Q = Quiz
Evaluation
To validate this course, you will need to upload several homework assignments, which count toward the continuous assessment (CA) grade. This is complemented by a final exam (FE).
The weighting of the final grade is as follows: Final grade = 2/3 * CA + 1/3 * FE.
CA = Continuous Assessment, composed of:
• 3 homework assignments (HW)
• Several Quizzes
FE = Final Exam at the end of the course (June 10th)
The final exam will be composed of two parts:
• Part 1: Questions on the course and practical exercises, to be done on paper - no documents allowed
• Part 2: Computer code to complete in a Jupyter Notebook - bring your own laptop - all documents allowed
References
As a complement to the course materials (handouts, slides, assignments), the following references may be useful.
J.M. Morel (2004). Cours de traitement de Signal et de l’Image. ENS Cachan, polycopié.
J.M. Bony (1994). Cours d’analyse - Théorie des distributions et analyse de Fourier. Ecole Polytechnique, polycopié.
C. Gasquet and P. Witomski (1995). Analyse de Fourier et applications. Masson.
S. Mallat (1997). A Wavelet Tour of Signal Processing. Academic Press.
N. Sabater, J.M. Morel and A. Almansa (2011). How Accurate Can Block Matches Be in Stereo Vision? SIAM Journal on Imaging Sciences, 4(1), 472–500.
N. Parikh and S. Boyd (2014). Proximal Algorithms. Foundations and Trends in Optimization, 1(3), 127–239.
A. Almansa, S. Durand, and B. Rougé (2004). Measuring and Improving Image Resolution by Adaptation of the Reciprocal Cell. Journal of Mathematical Imaging and Vision, 21(3), 235–279.
F. Malgouyres and F. Guichard (2001). Edge Direction Preserving Image Zooming: A Mathematical and Numerical Analysis. SIAM Journal on Numerical Analysis, 39(1), 1–37.
A. Buades, B. Coll and J.M. Morel (2006). A review of image denoising algorithms, with a new one. SIAM Multiscale Modeling and Simulation, 4(2), 490–530.
C. Barnes, E. Shechtman, A. Finkelstein and D.B. Goldman (2009). PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics-TOG, 28(3), 24.
L. Raad, A. Davy, A. Desolneux and J.M. Morel (2017). A survey of exemplar-based texture synthesis. Annals of Mathematical Sciences and Applications, 3(1), 89–148.
S. Hurault (2023). Méthodes plug-and-play convergentes pour la résolution de problèmes inverses en imagerie avec régularisation explicite, profonde et non-convexe. PhD thesis, Université de Bordeaux.
E.K. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang and W. Yin (2019). Plug-and-Play Methods Provably Converge with Properly Trained Denoisers. In International Conference on Machine Learning (ICML). http://proceedings.mlr.press/v97/ryu19a.html
R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, and M. Pereyra (2023). On Maximum a Posteriori Estimation with Plug & Play Priors and Stochastic Gradient Descent. Journal of Mathematical Imaging and Vision, 65(1), 140–163.
In APM_3S012_EP “Numerical Methods for ODEs”, we will introduce numerical schemes to simulate ordinary differential equations. We will start with Euler schemes (explicit and implicit) and understand how the notions of stability and consistency can be used to study these methods. We will then consider Runge-Kutta schemes and apply the different methods to particular applications, e.g. the N-body problem.
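As a minimal illustration of the starting point (the test equation y' = -y and the step size are arbitrary choices), the explicit Euler scheme advances the solution by y_{n+1} = y_n + h f(t_n, y_n):

    import numpy as np

    def euler_explicit(f, y0, t0, t1, h):
        # Integrate y' = f(t, y) from t0 to t1 with fixed step h (explicit Euler)
        t, y = t0, y0
        while t < t1 - 1e-12:
            y = y + h * f(t, y)
            t += h
        return y

    # Test equation y' = -y, y(0) = 1, with exact solution exp(-t)
    approx = euler_explicit(lambda t, y: -y, 1.0, 0.0, 1.0, 0.01)
    print(approx, np.exp(-1.0))   # the error shrinks like O(h): a first-order scheme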