Machine Learning/Deep Learning

Lecturers

Dr Susan Wei, The University of Melbourne
Dr Pavel Krupskiy, The University of Melbourne
Dr Matthew Tam, The University of Melbourne

Synopsis

This course introduces the foundational theory of machine learning with a special focus on deep learning. Participants will be exposed to a range of tools that are equally applicable in academic and industry contexts. The intended learning outcome is that participants will be able to deploy deep learning end-to-end and appreciate the theoretical mysteries that deep learning poses for classical statistical learning theory.

Course overview

Week 1: The basics of machine learning, using linear neural networks as a guiding example

  • Linear neural networks can be regarded as shallow neural networks, which lets us focus on the basics of neural network training without getting bogged down in complex architectures
  • How to choose a loss function, how to train, and how to evaluate performance
  • Linear neural networks for both regression and classification, as in the sketch below
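
For a concrete taste, here is a minimal PyTorch sketch of a linear neural network trained for regression and then reused for classification. The toy data, dimensions, and hyperparameters are illustrative assumptions, not course material.

    # Week 1 in miniature: linear ("shallow") neural networks in PyTorch.
    # All data and sizes below are illustrative assumptions.
    import torch
    from torch import nn

    # Regression: one linear layer trained with squared-error loss.
    X = torch.randn(100, 3)                      # 100 samples, 3 features
    y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(100)
    model = nn.Linear(3, 1)
    loss_fn = nn.MSELoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()                          # gradients via autodiff
        opt.step()

    # Classification: the same architecture scored with cross-entropy;
    # nn.CrossEntropyLoss applies softmax to the raw scores internally.
    labels = (y > 0).long()                      # binary toy labels
    clf = nn.Linear(3, 2)
    print(nn.CrossEntropyLoss()(clf(X), labels))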

Week 2: Stochastic optimisation schemes which serve as the workhorse of modern deep learning

  • Gradient descent, stochastic gradient descent, and minibatch stochastic gradient descent
  • More advanced stochastic optimisation schemes, including momentum, AdaGrad, RMSProp, and AdaDelta (see the sketch below)
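
The sketch below makes the week's point concrete: one minibatch training loop serves all of these schemes, and only the optimiser object changes. The model, data, and hyperparameters are illustrative assumptions.

    # One minibatch loop, many optimisation schemes: uncomment one line
    # below to switch optimisers. Everything here is an illustrative toy.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    X, y = torch.randn(512, 10), torch.randn(512, 1)
    loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)
    model = nn.Linear(10, 1)

    opt = torch.optim.SGD(model.parameters(), lr=0.01)                  # plain SGD
    # opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # momentum
    # opt = torch.optim.Adagrad(model.parameters(), lr=0.01)            # AdaGrad
    # opt = torch.optim.RMSprop(model.parameters(), lr=0.01)            # RMSProp
    # opt = torch.optim.Adadelta(model.parameters())                    # AdaDelta

    for epoch in range(5):
        for xb, yb in loader:                    # one minibatch per update
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(xb), yb)
            loss.backward()
            opt.step()

Setting batch_size to 1 recovers pure stochastic gradient descent, while setting it to the full dataset size recovers ordinary (batch) gradient descent.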

Week 3: How to design and train deep neural networks end-to-end on real datasets

  • The simplest deep network: the multilayer perceptron
  • How automatic differentiation calculates gradients
  • Convolutional neural networks, purpose-built for computer vision tasks and one of the crowning achievements of modern deep learning (see the sketch below)
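
As a rough preview, the sketch below shows automatic differentiation on a scalar function, then a small multilayer perceptron and a small convolutional network; all layer sizes are illustrative assumptions.

    # Automatic differentiation on a scalar: d/dx (x^2 + 3x) = 2x + 3.
    import torch
    from torch import nn

    x = torch.tensor(2.0, requires_grad=True)
    f = x**2 + 3 * x
    f.backward()                                 # autodiff fills in x.grad
    print(x.grad)                                # tensor(7.)

    # The simplest deep network: a multilayer perceptron for 28x28 images.
    mlp = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 256), nn.ReLU(),
        nn.Linear(256, 10),                      # 10 class scores
    )
    print(mlp(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])

    # A small convolutional network, purpose-built for image inputs.
    cnn = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                         # 28x28 -> 14x14
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 10),
    )
    print(cnn(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])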

Week 4: Classical statistical learning theory and its limitations in explaining the unreasonable effectiveness of deep learning

  • Recent advancements in learning theory that look promising for laying a theoretical foundation for deep learning
  • Topics covered include loss landscapes of neural networks and inductive biases due to algorithmic regularisation (a small loss-landscape sketch follows this list)
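
As a small taste of the loss-landscape topic, the hedged sketch below evaluates a toy least-squares loss along the straight line between two parameter vectors, a standard one-dimensional slice of the landscape; the problem and all names are illustrative assumptions.

    # A 1D slice of a loss landscape: loss((1 - a) * w0 + a * w1), a in [0, 1].
    # The least-squares toy problem is an illustrative assumption.
    import torch

    X, y = torch.randn(200, 5), torch.randn(200)

    def loss_at(w):
        return ((X @ w - y) ** 2).mean()

    w0, w1 = torch.randn(5), torch.randn(5)      # two candidate parameter vectors
    for alpha in torch.linspace(0, 1, 5):
        w = (1 - alpha) * w0 + alpha * w1        # a point on the connecting line
        print(f"alpha={alpha:.2f}  loss={loss_at(w):.3f}")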

Prerequisites

  • At a minimum, the successful student will have taken courses in calculus, statistics and linear algebra
  • Familiarity with the basics of programming (an introductory course or equivalent experience in C++, Julia, Python, Matlab, or similar should suffice)
  • Detailed instructions will be provided in advance of the class for participants to install Python and deep learning libraries such as PyTorch; a quick sanity check is sketched after this list
  • Pre-read Chapters 1 and 2 of Dive into Deep Learning (d2l.ai)
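
Once the environment is set up, a quick check along the following lines should run without error (this assumes a standard PyTorch installation; the actual instructions provided may differ):

    # Minimal environment check, assuming PyTorch installed as instructed.
    import torch

    print(torch.__version__)          # installed PyTorch version
    print(torch.cuda.is_available())  # True only if a CUDA GPU is usable
    x = torch.ones(2, 2)
    print(x + x)                      # basic tensor arithmetic works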

Assessment

  • 4 weekly quizzes (20%)
  • 4 weekly assignments (40%)
  • Take-home exam (40%)

Attendance requirements

Participation in all lectures and tutorials is expected.

For those completing the subject for their own knowledge/interest, evidence of at least 80% attendance at lectures and tutorials is required to receive a certificate of attendance.

Resources/pre-reading

Not sure if you should sign up for this course?

Take this quiz to self-evaluate and gauge the key foundational knowledge required.

Dr Susan Wei, University of Melbourne

Susan Wei is a lecturer in the School of Mathematics and Statistics at the University of Melbourne. She currently holds a Discovery Early Career Researcher Award (DECRA) from the Australian Research Council (ARC) and a Visiting Faculty Researcher position at Google DeepMind in Sydney, Australia. Her research interests include statistics, machine learning, and deep learning. She is part of the Melbourne Deep Learning Group.


Dr Pavel Krupskiy, University of Melbourne

Pavel Krupskiy is a lecturer in the School of Mathematics and Statistics at the University of Melbourne. He received his PhD from the University of British Columbia (Canada) in 2014 and worked as a postdoctoral fellow at King Abdullah University of Science and Technology (Saudi Arabia) from 2015 to 2017 and at the University of British Columbia from 2017 to 2018. His research interests include copula models for multivariate data, multivariate extremes, spatial data modelling and nonparametric statistics.

Dr Matthew Tam, University of Melbourne

After receiving his PhD from the University of Newcastle in 2016, Matthew Tam moved to the University of Göttingen, where he was a postdoctoral researcher supported by the RTG-2088 “Discovering structure in complex data: Statistics meets Optimization and Inverse Problems” and the Alexander von Humboldt Foundation. He subsequently joined the faculty there as Junior Professor for Mathematical Optimisation. In 2020, he returned to Australia, joining the School of Mathematics and Statistics at the University of Melbourne.