In this course you will learn how to deal with complex data sets, build predictors and classifiers using state-of-the-art machine learning approaches, and combine different methods to improve results.
Knowledge and understanding: basic and some advanced topics in graphical models, inference, Bayesian methods, kernel-based methods, and deep learning.
Applying knowledge and understanding: the ability to handle a complex dataset, clean it, and build effective predictors by combining several methods of supervised and unsupervised learning.
Communication skills: the ability to explain the basic ideas and communicate the results to both experts and non-experts.
Learning skills: the ability to explore the literature, find alternative approaches, and combine them to solve complex problems.
Basic knowledge of Python and scientific Python. Knowledge of statistics and machine learning, as from the course on Machine Learning and Data Analytics.
1. Graphical models and exact inference.
2. Approximate inference for models with latent variables.
3. Sampling methods.
4. Bayesian linear regression and classification.
5. Kernel-based methods and Gaussian Processes.
6. Deep Learning: classic, recurrent and convolutional neural networks. Regularization and generative models.
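As a flavour of the hands-on side of the course, topic 4 admits a fully closed-form treatment; the sketch below implements the posterior over the weights of a Bayesian linear regression in plain NumPy (cf. Bishop, ch. 3). The prior precision `alpha` and noise precision `beta` are illustrative choices, not values prescribed by the course.

```python
import numpy as np

def weight_posterior(Phi, t, alpha=2.0, beta=25.0):
    """Closed-form Gaussian posterior over the weights, given a design
    matrix Phi, targets t, prior precision alpha and noise precision beta."""
    S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi  # posterior precision
    S = np.linalg.inv(S_inv)                                   # posterior covariance
    m = beta * S @ Phi.T @ t                                   # posterior mean
    return m, S

# Noisy data from a known line t = 0.5 + 2.0 * x
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
t = 0.5 + 2.0 * x + 0.2 * rng.standard_normal(50)
Phi = np.column_stack([np.ones_like(x), x])  # design matrix: bias + slope
m, S = weight_posterior(Phi, t)              # m should be close to (0.5, 2.0)
```

The posterior mean `m` recovers the generating intercept and slope up to noise and prior shrinkage, and `S` quantifies the remaining uncertainty.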
Frontal lectures and hands-on sessions, both individual and in groups. The balance will be roughly 60% frontal lectures and 40% hands-on sessions. Ideally, each lecture will have a part of frontal teaching and a part of hands-on training; the latter may range from getting acquainted with new libraries and tools to analysing complex datasets in groups.
1. Graphical models: Bayesian networks, Markov random fields, factor graphs, exact inference by message passing in tree-like factor graphs.
2. Mixtures of Gaussians, latent variables and expectation maximisation
3. Sequential data and Hidden Markov Models.
4. Bayesian linear regression and classification. Laplace approximation.
5. Kernel-based methods and Gaussian Processes: regression, classification, expectation propagation.
6. Deep Learning: neural networks (NN), recurrent NN, convolutional NN, autoencoders, Boltzmann machines, stochastic optimization, regularization.
Extra topics (time permitting):
Sampling: rejection sampling, importance sampling, Markov Chain Monte Carlo
Kernel PCA and non-linear dimensionality reductions.
Overview of variational inference; VI for Gaussian mixtures.
Ensemble methods: bagging and boosting. Gradient boosting.
The exam will consist of two parts:
1. a group project work, in groups of 2 to 4 students. Each group will have one or more tasks, typically analysing a complex dataset, and will have to write a short report, provide commented code, and give a brief presentation explaining the work done.
2. a short individual presentation of a topic not presented in the course, and studied autonomously by the student.
During the presentations, a few questions will be asked to assess the individual contributions and preparation on the topics of the course.
Bring your own laptop.
1. C. M. Bishop, Pattern recognition and machine learning. New York, NY: Springer, 2009.
2. I. J. Goodfellow, Y. Bengio, and A. C. Courville, Deep Learning. MIT press, 2016.
Other good textbooks:
3. K. P. Murphy, Machine learning: a probabilistic perspective. Cambridge, MA: MIT Press, 2012.
4. J. Friedman, T. Hastie, and R. Tibshirani, The Elements of Statistical Learning. Springer Series in Statistics. Berlin: Springer, 2001.