Advanced Statistical Inference
Academic Year: 2026
Course Information
This course focuses on the principles of learning from data and the quantification of uncertainty, complementing and enriching the "Introduction to Statistical Learning" and "Machine Learning and Intelligence Systems" courses. The course is divided into two main parts, corresponding to "Inference" and "Prediction".
During the first half, we will focus on the principles of probabilistic inference, and the quantification of uncertainty in the context of machine learning. We will start by drawing connections between optimization and probabilistic inference (e.g., maximum likelihood estimation, maximum a posteriori estimation). We will then introduce the concept of Bayesian inference, and discuss how to perform inference in complex and intractable models using approximate methods (e.g., variational inference, Markov Chain Monte Carlo, Laplace approximation).
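To give a flavor of the connection between optimization and probabilistic inference, here is a minimal NumPy sketch (illustrative only, not part of the official course material) contrasting the maximum likelihood and maximum a posteriori estimates of a Gaussian mean; the toy data, prior variance, and noise variance are all assumptions chosen for the example.

```python
import numpy as np

# Toy data: 20 draws from a Gaussian with true mean 2.0 and unit variance.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=20)

# Maximum likelihood estimate of the mean: the sample mean.
mle = x.mean()

# MAP estimate under a Gaussian prior N(0, tau2) on the mean,
# with known noise variance sigma2: available in closed form.
tau2, sigma2 = 1.0, 1.0
map_est = (tau2 * x.sum()) / (len(x) * tau2 + sigma2)

print(mle, map_est)  # the prior shrinks the MAP estimate toward 0
```

Note how the MAP estimate reduces to the MLE as the prior variance `tau2` grows, a theme we will revisit when relating regularization to priors.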
The second half of the course will focus on prediction and the evaluation of predictive models. We will start by discussing simple predictive models (e.g., linear regression, logistic regression), and then move on to more complex models (e.g., Gaussian processes, neural networks). Notably, we will discuss all these models in the context of probabilistic inference, putting into practice the probabilistic methods introduced in the first half. While mostly focused on supervised learning, we will also briefly take a look at unsupervised learning and generative models.
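As a preview of how inference and prediction meet, the sketch below (again illustrative, with hypothetical values for the prior precision `alpha` and noise precision `beta`) computes the closed-form posterior of conjugate Bayesian linear regression and a predictive variance at a new input.

```python
import numpy as np

# Conjugate Bayesian linear regression: prior w ~ N(0, alpha^-1 I),
# likelihood y ~ N(Xw, beta^-1 I); the posterior over w is Gaussian.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.uniform(-1, 1, 30)])  # intercept + one feature
w_true = np.array([0.5, -1.0])
y = X @ w_true + rng.normal(scale=0.2, size=30)

alpha, beta = 1.0, 25.0                       # prior precision, noise precision
S_inv = alpha * np.eye(2) + beta * X.T @ X    # posterior precision
S = np.linalg.inv(S_inv)                      # posterior covariance
m = beta * S @ X.T @ y                        # posterior mean

# Predictive mean and variance at a new input: uncertainty combines
# observation noise (1/beta) with uncertainty about the weights.
x_new = np.array([1.0, 0.3])
pred_mean = x_new @ m
pred_var = 1.0 / beta + x_new @ S @ x_new
```

The posterior mean coincides with a ridge-regression solution, one concrete instance of the optimization-inference connection discussed in the first half.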
Finally, the course will be complemented by practical sessions, where students will be able to implement and experiment with the methods discussed in class (using Python).
Course Objectives
- Understand the principles of probabilistic inference and quantification of uncertainty.
- Understand the principles of Bayesian inference and how to perform (approximate) inference in complex models.
- Understand how to evaluate and compare predictive models, and how to quantify uncertainty in predictions.
- Apply the principles of probabilistic inference to a variety of models, including linear regression, logistic regression, Gaussian processes, and neural networks.
Prerequisites
- Probability theory and statistics.
- Linear algebra and calculus.
- Basic programming skills (Python).
- Basic knowledge of machine learning.
Evaluation
- Lab sessions (5% bonus): There will be 5 lab sessions. The content of the labs is part of the course, and students are expected to attend and participate. Submitting the completed lab exercises is not mandatory, but it accounts for up to 5% in bonus points added to the final exam grade.
- Assignment (40%): There will be one assignment, due at the end of the course. The assignment consists of reproducing the results of a research paper related to the course topics and writing a report about it. More details about the assignment will be provided during the course.
- Final exam (60%): The final exam (2h) will be held at the end of the course and will cover all the topics discussed during the lectures. The exam will consist of a combination of multiple-choice questions and open-ended questions. The exam has a minimum passing grade of 8/20.
Policies
- Attendance: Attendance is mandatory. Students are expected to attend all lectures and lab sessions. Slides and lecture materials are made available on this website in advance.
- Late submission: Late submissions will not be accepted, except for a valid reason (e.g., medical emergency). In such cases, students should contact the instructor as soon as possible.
- Academic integrity: Students are expected to adhere to the highest standards of academic integrity. Any form of cheating or plagiarism will not be tolerated and will be dealt with according to the university's policies.
Tentative schedule
| Week | Topic |
|---|---|
| Week 01 | Introduction and revision of linear algebra and probability |
| Week 02 | Linear regression: Optimization and Bayesian inference |
| Week 03 | Lab: Bayesian linear regression |
| Week 04 | Approximate inference 1: Monte Carlo methods, Markov Chain Monte Carlo |
| Week 05 | Approximate inference 2: Variational inference, Laplace approximation |
| Week 06 | Classification: Bayesian logistic regression |
| Week 07 | Lab: Bayesian logistic regression with MCMC |
| Week 08 | Gaussian processes |
| Week 09 | Lab: Gaussian processes |
| Week 10 | Neural networks and Bayesian neural networks |
| Week 11 | Lab: Bayesian deep learning |
| Week 12 | Generative models 1 |
| Week 13 | Generative models 2 |
| Week 14 | Lab: Generative models and course revision for exam |