Markov Chain Mixing and Applications

Course Description

The goal of this course is to provide an introduction to the theory of Markov chains, with an emphasis on their applications to algorithms, learning and control. The main tool we will focus on is the mixing properties of finite-state, discrete-time, reversible Markov chains, i.e., how “fast” a given chain converges to its stationary distribution from any starting state. This may seem like a narrow question; however, it is central to the analysis of Markov Chain Monte Carlo, with connections to classical questions in statistical physics, numerical integration, randomized algorithms and combinatorial optimization, and increasingly, applications in distributed computing, large-scale graph mining, reinforcement learning and high-dimensional optimization.
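The convergence described above is easy to see numerically. The sketch below is a hypothetical illustration (not from the course materials): a lazy random walk on a 6-cycle is a finite-state, discrete-time, reversible chain, and the total-variation distance between its distribution at time t and the stationary (uniform) distribution shrinks geometrically from any start:

```python
import numpy as np

# Lazy random walk on a 6-cycle: stay put w.p. 1/2, else step to a
# uniformly random neighbor.  Laziness makes the chain aperiodic.
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

pi = np.full(n, 1.0 / n)        # uniform is stationary (P is symmetric)

mu = np.zeros(n)
mu[0] = 1.0                     # start deterministically at state 0
for t in range(50):
    mu = mu @ P                 # one step of the chain: mu_{t+1} = mu_t P

tv = 0.5 * np.abs(mu - pi).sum()   # total-variation distance after 50 steps
print(tv)
```

After 50 steps the distance is already tiny (the second-largest eigenvalue of this chain is 3/4, so the distance decays roughly like (3/4)^t); "mixing time" formalizes how many steps are needed to get within a fixed tolerance.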

Course Information

References

There are two excellent sources for most of the techniques we will cover: the Aldous–Fill monograph Reversible Markov Chains and Random Walks on Graphs, and Markov Chains and Mixing Times by Levin, Peres and Wilmer.

The first is a remarkable unfinished monograph that has been online since the 1990s and inspired much of the work in this field; the second is a more recent book that gives an accessible introduction to many of the topics we will study.

Lecture Notes

  • Set 0: Wilson’s algorithm for sampling spanning trees (Demo)

  • Set 1: Markov chains, ergodic theorem and convergence [notes]

  • Set 2: Coupling and mixing times [notes]

  • Set 3: Convergence of random variables [notes]

  • Set 4: Path coupling [notes]

  • Set 5: Perfect sampling [notes]

  • Set 6: Intro to spectral techniques [notes]

  • Set 7: Cheeger’s inequality and Jerrum-Sinclair [notes]

  • Set 8: Canonical flows [notes]

  • Set 9: Mixing and approximate counting [notes]
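Wilson's algorithm from the Set 0 demo can be sketched compactly: it builds a uniform spanning tree by running loop-erased random walks from unvisited vertices until they hit the tree grown so far. The version below uses the standard successor-array trick (revisiting a vertex overwrites its recorded successor, which performs the loop erasure implicitly); the graph and seed are hypothetical, chosen only for illustration.

```python
import random

def wilson_ust(adj, root=0, rng=None):
    """Sample a uniform spanning tree of an undirected graph, given as an
    adjacency list, via Wilson's loop-erased random walk algorithm.
    Returns a parent array pointing each vertex toward the root."""
    rng = rng or random.Random()
    n = len(adj)
    in_tree = [False] * n
    parent = [-1] * n
    in_tree[root] = True
    for start in range(n):
        if in_tree[start]:
            continue
        # Random walk from `start`; overwriting parent[u] on revisits
        # erases loops as they form.
        u = start
        while not in_tree[u]:
            parent[u] = rng.choice(adj[u])
            u = parent[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while not in_tree[u]:
            in_tree[u] = True
            u = parent[u]
    return parent

# Tiny demo on the 4-cycle 0-1-2-3-0 (an example graph, not from the course):
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
tree = wilson_ust(adj, root=0, rng=random.Random(1))
```

Each non-root vertex ends up with a parent edge, and following parents from any vertex reaches the root, so `tree` encodes a spanning tree; Wilson's theorem says it is exactly uniform over all spanning trees.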

Siddhartha Banerjee
Assistant Professor

Sid Banerjee is an assistant professor in the School of Operations Research and Information Engineering at Cornell, working on topics at the intersection of data-driven decision-making, market design, and algorithms for large-scale networks.