Markov Chains with Applications


Professor Malwina Luczak, The University of Melbourne


Markov chains are widely used as models of real-world processes, especially where it is important to incorporate a degree of randomness in the evolution of the system.

For instance, when modelling the outbreak of a disease in a population, the current state of the system can be captured reasonably accurately by the number of people presently infected and the number susceptible to the disease, but the future course of the epidemic cannot normally be known with any confidence. A few chance events, especially while the number of infectives is small (as at the start or near the end of the outbreak), can have a crucial impact on how long it takes for the epidemic to be contained and on how many people end up being infected.
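As a concrete illustration of such a chain, the sketch below simulates a Reed–Frost chain-binomial epidemic, a standard discrete-time Markov model of the kind described above; the function name, parameters, and the specific model are illustrative choices, not necessarily those used in the course.

```python
import random

def reed_frost(n_susceptible, n_infected, p, seed=0):
    """Simulate one run of the Reed-Frost chain-binomial epidemic.

    The state is (S_t, I_t). In each generation, every susceptible
    independently escapes infection with probability (1 - p)**I_t.
    Returns the trajectory of (S, I) states until no infectives remain.
    """
    rng = random.Random(seed)
    s, i = n_susceptible, n_infected
    history = [(s, i)]
    while i > 0:
        # Probability a given susceptible avoids all i current infectives.
        escape = (1 - p) ** i
        new_infected = sum(1 for _ in range(s) if rng.random() > escape)
        s -= new_infected
        i = new_infected
        history.append((s, i))
    return history

traj = reed_frost(100, 1, 0.02)
final_size = 100 - traj[-1][0]  # susceptibles who were ever infected
```

Repeating the run with different seeds makes the role of chance visible: early generations with few infectives can either extinguish the outbreak quickly or launch a large epidemic.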

This course will cover some important aspects of the theory of Markov chains, in discrete and continuous time. We start with the basics, including a discussion of the convergence of the time-dependent distribution to equilibrium as time goes to infinity, in the case where the state space has a fixed size. We then study concepts and results concerning the long-term and limiting behaviour of large systems; this again involves convergence of the distribution to equilibrium, but now with the size of the state space also growing.
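The fixed-state-space setting can be sketched numerically: for an irreducible, aperiodic chain, the invariant distribution solves pi P = pi, and the time-t distribution mu P^t converges to it. The 3-state transition matrix below is a hypothetical example chosen only for illustration.

```python
import numpy as np

# A small irreducible, aperiodic transition matrix (hypothetical example).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# The invariant distribution pi satisfies pi P = pi with sum(pi) = 1;
# it is the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()

# Convergence to equilibrium: iterate the time-dependent distribution.
mu = np.array([1.0, 0.0, 0.0])  # start deterministically in state 0
for _ in range(200):
    mu = mu @ P
```

After a modest number of steps, mu is numerically indistinguishable from pi, reflecting the geometric rate of convergence for a finite ergodic chain.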

An important phenomenon in that context is that of phase transition, where the behaviour of a large system changes radically as the value of a parameter moves through a critical value. This is best studied by considering a sequence of systems, and letting the parameter value be a function of the system size.

Examples of applications will be given, including some basic epidemic models, as well as models arising in statistical physics and computer science.

Course Overview

  1. Preliminaries from measure-theoretic probability.
    • Probability spaces. Sequences of random variables. Conditional probability and expectation. Martingales and stopping times.
  2. Markov chains in discrete and continuous time.
    • Definitions and examples. Transient evolution. Hitting times. Invariant distributions, and convergence to equilibrium.
  3. Couplings. Total variation distance. Mixing time.
  4. Sequences of Markov chains. Rapid mixing and the cutoff phenomenon. Concentration of measure and approximation by differential equations.
  5. Examples and applications will be given throughout the course. These will include a selection from the following: epidemic models, Glauber dynamics, random walks, load-balancing systems.
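The central definitions in item 3 can be sketched as follows: total variation distance is half the L1 distance between distributions, and the mixing time is the first time the worst-case distance to equilibrium drops below a threshold. The lazy random walk on a cycle is a standard illustrative example; the function names here are assumptions for the sketch.

```python
import numpy as np

def tv_distance(mu, nu):
    """Total variation distance: half the L1 distance between distributions."""
    return 0.5 * np.abs(mu - nu).sum()

def mixing_time(P, pi, eps=0.25):
    """Smallest t such that the worst-case (over starting states)
    TV distance between the time-t distribution and pi is at most eps."""
    n = P.shape[0]
    Pt = np.eye(n)  # P^0
    t = 0
    while max(tv_distance(Pt[i], pi) for i in range(n)) > eps:
        Pt = Pt @ P
        t += 1
    return t

# Lazy simple random walk on an n-cycle; its invariant distribution is uniform.
n = 8
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25
pi = np.full(n, 1.0 / n)

t_mix = mixing_time(P, pi)
```

Running this for a sequence of growing cycle lengths shows the mixing time growing on the order of n squared, the kind of system-size dependence studied in item 4.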


Prerequisites

  • Basic knowledge of probability, including familiarity with notions such as random variables and expectation.
  • Epsilon-delta proofs, at least to the level of a first course in real analysis.
  • Basic knowledge of differential equations.


Assessment

  • Two assignments, each worth 20%.
  • A final examination, worth 60%.

Resources/pre-reading

The following textbooks would be appropriate for preliminary reading, and for sections (1) and (2) of the course.

  • J.R. Norris. Markov chains. CUP 1998.
  • J. Rosenthal. A first look at rigorous probability theory. World Scientific Publishing 2006.

The material in the remaining sections of the course will be largely taken from the following book, available free of charge online:

Professor Malwina Luczak, The University of Melbourne

Malwina Luczak is an Australian Research Council Future Fellow, and Professor in Applied Probability at the University of Melbourne. Prior to that, she was Professor in Applied Probability at Queen Mary University of London and EPSRC Leadership Fellow, having earlier also held appointments at the University of Sheffield and the London School of Economics.

Malwina’s research focusses mostly on Markov chains, especially their long-term behaviour and concentration of measure. She is also interested in random graphs, as well as in applications of the theory of Markov chains and random graphs to the study of queueing networks and of the spread of diseases in populations.