📚 Assignment


EURECOM

🎯 Objectives

  1. Understand a research paper in the field of (probabilistic) machine learning.
  2. Be able to apply the basics of the course to reproduce the results.
  3. Be able to go from equations to code in a more complex setting than the labs.
  4. Be able to write a short report explaining the paper, the results, and the code.

πŸ“ Deliverables

  • A short report (5 pages max) with the NeurIPS format. Use the template at this link (download it and use it locally, or create a copy on Overleaf). The report should include:
    • A summary of the paper (1-2 pages).
    • A description of the results you reproduced (1-2 pages).
    • A discussion of the results and the code (1 page): what worked well, what was challenging, what could be improved, etc.
    • Use an appendix if you want to include additional details
    • Outside of the 5 pages, you must include a section on AI use (see below) and how the work was distributed between the group members (if applicable).
  • A link to a GitHub or GitLab repository with the code and the results (ideally, a Jupyter notebook). The link must be included in the abstract of the report.

🤖 AI use

  • You are allowed to use AI tools (e.g., ChatGPT, GitHub Copilot, etc.) to help you with the assignment. However, you must disclose any use of AI tools in your report and explain how you used them. Failure to disclose AI use will be considered a violation and will be penalized, up to and including a 0 for the assignment.

  • You are responsible for the content of your report and your code, even if you used AI tools. If required, you should be able to explain the code and the results in detail (e.g. answering β€œbecause ChatGPT did it” is not an acceptable answer).

⏰ Timeline

  • Week 4: List of papers available.
  • Week 6: Deadline to choose a paper and form a group.
  • 11:59 PM, June 7th, 2026: Deadline to submit the report and the code.

📖 List of papers

  • Weight Uncertainty in Neural Networks. Blundell et al. 2015.
  • Bayesian Learning via Stochastic Gradient Langevin Dynamics. Welling and Teh. 2011.
  • Sparse Gaussian Processes using Pseudo-inputs. Snelson and Ghahramani. 2006.
  • Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Gal and Ghahramani. 2016.
  • Stochastic gradient Hamiltonian Monte Carlo. Chen et al. 2014.
  • Cyclic Stochastic Gradient MCMC for Bayesian Deep Learning. Zhang et al. 2019.
  • Variational Dropout and the Local Reparameterization Trick. Kingma et al. 2015.
  • Bayesian classification with Gaussian processes. Williams and Barber. 1998.
  • Deep Bayesian Active Learning with Image Data. Gal et al. 2017.
  • Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data. Lawrence. 2004.
  • Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. Lakshminarayanan et al. 2017.
  • Auto-Encoding Variational Bayes. Kingma and Welling. 2014.
  • Practical Variational Inference for Neural Networks. Graves. 2011.
  • Black Box Variational Inference. Ranganath et al. 2014.