Course objective
Description of the course:
Graphical models (or probabilistic graphical models) provide a powerful paradigm that jointly exploits probability theory and graph theory to solve complex real-world problems. They are an indispensable component of several research areas, such as statistics, machine learning, and computer vision, where a graph expresses the conditional (probabilistic) dependencies among random variables.
This course will focus on discrete models, that is, cases where the random variables of the graphical model are discrete. After an introduction to the basics of graphical models, the course will focus on problems of representation, inference, and learning in graphical models. We will cover classical as well as state-of-the-art algorithms for these problems. Several applications in machine learning and computer vision will be studied as part of the course.
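To make the notions of a discrete graphical model and of inference concrete, here is a minimal sketch (illustrative only, not course material): a pairwise Markov random field over three binary variables arranged in a chain, with MAP inference done by brute-force enumeration. The variables, edges, and potential values below are made-up numbers chosen for the example.

```python
# Minimal sketch: pairwise MRF on a chain x0 - x1 - x2 with binary labels,
# MAP inference by exhaustive enumeration (only feasible for tiny models).
import itertools
import numpy as np

# Unary potentials: unary[i][s] = cost of assigning label s to variable i.
unary = np.array([[0.0, 2.0],
                  [0.5, 0.4],
                  [2.0, 0.0]])

# Pairwise potential shared by both edges: penalize disagreement (Potts model).
pairwise = np.array([[0.0, 1.0],
                     [1.0, 0.0]])

edges = [(0, 1), (1, 2)]

def energy(x):
    """Total energy of a joint assignment x = (x0, x1, x2)."""
    e = sum(unary[i][x[i]] for i in range(3))
    e += sum(pairwise[x[i]][x[j]] for i, j in edges)
    return e

# MAP estimate = assignment of minimum energy (maximum probability).
best = min(itertools.product([0, 1], repeat=3), key=energy)
print("MAP assignment:", best, "with energy", energy(best))
```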
Organization of sessions
– 7 lectures
– Project defense in March
– All information at http://thoth.inrialpes.fr/people/alahari/disinflearn/
Assessment
Project (report + defense)
References
Convex Optimization, Stephen Boyd and Lieven Vandenberghe
Numerical Optimization, Jorge Nocedal and Stephen J. Wright
Introduction to Operations Research, Frederick S. Hillier and Gerald J. Lieberman
An Analysis of Convex Relaxations for MAP Estimation of Discrete MRFs, M. Pawan Kumar, Vladimir Kolmogorov and Phil Torr
Convergent Tree-reweighted Message Passing for Energy Minimization, Vladimir Kolmogorov
Probabilistic Graphical Models: Principles and Techniques, Daphne Koller and Nir Friedman, MIT Press
Topics covered
– Introduction, graphical models, message-passing methods: belief propagation (see the sketch after this list)
– Graph-cuts (binary energy minimization, multi-label energy minimization)
– Tree-reweighted message passing
– Dual decomposition, convex relaxations, linear programming relaxations
– Causality
– Bayesian Networks
– Deep Learning in graphical models
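As a small taste of the message-passing topic above, the following is a minimal sketch of min-sum belief propagation (the energy-minimization counterpart of max-product) on a chain of discrete variables. The chain length, number of labels, and random potentials are illustrative assumptions, not part of the course material; on a chain this forward pass plus backtracking is exact and coincides with Viterbi-style dynamic programming.

```python
# Minimal sketch: min-sum belief propagation on a chain of discrete variables.
import numpy as np

n_vars, n_labels = 4, 3
rng = np.random.default_rng(0)
unary = rng.random((n_vars, n_labels))       # unary costs theta_i(x_i)
pairwise = rng.random((n_labels, n_labels))  # shared pairwise costs theta_ij(x_i, x_j)

# Forward pass: messages[i] is the message sent from variable i-1 to variable i.
messages = np.zeros((n_vars, n_labels))
for i in range(1, n_vars):
    # For each label of variable i, minimize over the labels of variable i-1.
    cost = unary[i - 1][:, None] + messages[i - 1][:, None] + pairwise
    messages[i] = cost.min(axis=0)

# Backtracking: pick the best label of the last variable, then trace back.
labels = np.zeros(n_vars, dtype=int)
labels[-1] = int(np.argmin(unary[-1] + messages[-1]))
for i in range(n_vars - 2, -1, -1):
    labels[i] = int(np.argmin(unary[i] + messages[i] + pairwise[:, labels[i + 1]]))

print("MAP labelling on the chain:", labels)
print("Energy:", unary[np.arange(n_vars), labels].sum()
      + sum(pairwise[labels[i], labels[i + 1]] for i in range(n_vars - 1)))
```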