📚 Assignment

Author
Affiliation: EURECOM

🎯 Objectives

  1. Understand a research paper in the field of (probabilistic) machine learning.
  2. Be able to apply the basics of the course to reproduce the results.
  3. Be able to go from equations to code in a more complex setting than the labs.
  4. Be able to write a short report explaining the paper, the results, and the code.

๐Ÿ“ Deliverables

  • A short report (4–5 pages max) in the NeurIPS format.
  • A link to a GitHub or GitLab repository with the code and the results (ideally, a Jupyter notebook).

โฐ Timeline

  • Week 5: List of papers available.
  • Week 7: Deadline to choose a paper and form a group.
  • Exam week: Submit the assignment.

📖 List of papers

  • Weight Uncertainty in Neural Networks. Blundell et al. 2015.
  • Bayesian Learning via Stochastic Gradient Langevin Dynamics. Welling and Teh. 2011. (See the illustrative sketch after this list.)
  • Sparse Gaussian Processes using Pseudo-inputs. Snelson and Ghahramani. 2006.
  • Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Gal and Ghahramani. 2016.
  • Stochastic Gradient Hamiltonian Monte Carlo. Chen et al. 2014.
  • Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning. Zhang et al. 2019.
  • Handling Sparsity via the Horseshoe. Carvalho et al. 2009.
  • Variational Dropout and the Local Reparameterization Trick. Kingma et al. 2015.
  • Bayesian Classification with Gaussian Processes. Williams and Barber. 1998.
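
To give a feel for the "equations to code" work expected (objective 3), here is a minimal sketch of the stochastic gradient Langevin dynamics update from Welling and Teh (2011), applied to a toy problem: inferring the mean of a 1D Gaussian under a standard normal prior. The toy model, variable names, hyperparameters, and step-size schedule are illustrative assumptions, not part of the assignment or the paper's experiments.

```python
# Illustrative SGLD sketch (Welling and Teh, 2011) on an assumed toy model:
# infer the mean `theta` of a 1D Gaussian with known noise std `sigma`,
# under a N(0, 1) prior. Hyperparameters below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N observations from N(true_mean, sigma^2)
N, sigma, true_mean = 1000, 1.0, 2.0
x = rng.normal(true_mean, sigma, size=N)

def grad_log_prior(theta):
    # d/dtheta of log N(theta | 0, 1)
    return -theta

def grad_log_lik(theta, batch):
    # d/dtheta of sum_i log N(x_i | theta, sigma^2) over the minibatch
    return np.sum(batch - theta) / sigma**2

batch_size, n_steps = 32, 5000
theta = 0.0
samples = []
for t in range(n_steps):
    eps = 1e-3 * (1 + t) ** -0.55            # decaying step size eps_t
    batch = rng.choice(x, size=batch_size, replace=False)
    # Stochastic gradient of the log posterior, rescaled by N / batch_size
    grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
    noise = rng.normal(0.0, np.sqrt(eps))    # injected noise with variance eps_t
    theta = theta + 0.5 * eps * grad + noise # SGLD update
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1000:]))
```

For the actual assignment, the same pattern applies at a larger scale: the paper's update equations become a training or inference loop, and the paper's reported figures or tables are the results to reproduce.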