📚 Assignment
Part of the grade for this course is based on an assignment due at the end of the semester.
The assignment is research-oriented and consists of reproducing a paper.
- Choose a paper from the list below and reproduce (some of) its results.
- Write a short report explaining the paper and your results.
- Submit both the report and the code.
The assignment can be done in groups of up to two people, but each report is individual.
It will be graded on the quality of the report, the code, and the results, and counts for 40% of the final grade.
🎯 Objectives
- Understand a research paper in the field of (probabilistic) machine learning.
- Be able to apply the basics of the course to reproduce the results.
- Be able to go from equations to code in a setting more complex than the labs.
- Be able to write a short report explaining the paper, the results, and the code.
📝 Deliverables
- A short report (4-5 pages max) in the NeurIPS format. The report should include:
  - A summary of the paper (1-2 pages).
  - A description of the results you reproduced (1-2 pages).
  - A discussion of the results and the code (1 page): what worked well, what was challenging, what could be improved, etc.
- A link to a GitHub or GitLab repository with the code and the results (ideally a Jupyter notebook).
⏰ Timeline
- Week 4: List of papers available.
- Week 6: Deadline to choose a paper and form a group.
- Exam week: Submit the assignment.
📖 List of papers
- Weight Uncertainty in Neural Networks. Blundell et al. 2015.
- Bayesian Learning via Stochastic Gradient Langevin Dynamics. Welling and Teh. 2011.
- Sparse Gaussian Processes using Pseudo-inputs. Snelson and Ghahramani. 2006.
- Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Gal and Ghahramani. 2016.
- Stochastic Gradient Hamiltonian Monte Carlo. Chen et al. 2014.
- Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning. Zhang et al. 2019.
- Variational Dropout and the Local Reparameterization Trick. Kingma et al. 2015.
- Bayesian Classification with Gaussian Processes. Williams and Barber. 1998.
- Deep Bayesian Active Learning with Image Data. Gal et al. 2017.
- Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data. Lawrence. 2004.
- Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. Lakshminarayanan et al. 2017.
- Auto-Encoding Variational Bayes. Kingma and Welling. 2014.
- Practical Variational Inference for Neural Networks. Graves. 2011.
- Black Box Variational Inference. Ranganath et al. 2014.