Course Information

Lecturer:  

Professor Markus Hegland, The Australian National University

Synopsis:

Numerical techniques are used very widely in modern industrial societies: in the design and production of vehicles, in understanding weather and climate, in assessing the effects of tsunamis and the spread of introduced species and diseases, in medical image processing, in the control of autonomous vehicles, in the analysis of big data and in computer games. In this course we will explore some of the fundamental methods which underpin all numerical techniques. We will learn how, and which, mathematics drives innovation in this exciting area of computational technology. We will see that it is often results taught in first year calculus and linear algebra which are used, but occasionally we will get a glimpse of some more advanced mathematics. Essentially, mathematics is applied here to show that numerical techniques work. One of the perks of this area is that, while thinking about the mathematics, we will also have plenty of time to explore some of the techniques by implementing them and assessing their performance in practice.

The main components of any computational procedure are arithmetic operations and decisions. The output of such a procedure is the result of evaluating a typically very long and complex arithmetic expression. In the first part of this course we will learn how such expressions are evaluated. As computers can only represent a finite set of (rational) numbers, any computation involving numbers outside this set has to use approximations. Consequently, all numerical computations are affected by approximation errors, the so-called rounding errors. It turns out that even for some rather complicated expressions these errors can be bounded. We will see that many arithmetic laws, most notably perhaps the associative law for addition, do not hold for numerical computations. As a consequence, the order in which arithmetic operations are performed can have a big influence on the accuracy of the result.
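
As a small illustration (my own example, not taken from the course materials), the following Python snippet shows that floating point addition is not associative: when a very large and a small number are added, the small one can be absorbed entirely, so the order of the operations changes the result.

    # Illustrative example (not from the course notes): floating point
    # addition is not associative, so the order of operations matters.
    a, b, c = 1.0e16, -1.0e16, 1.0

    left = (a + b) + c   # the large terms cancel first, then 1.0 is added
    right = a + (b + c)  # 1.0 is absorbed by -1.0e16 before the cancellation

    print(left)   # prints 1.0
    print(right)  # prints 0.0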

Much of this course is devoted to discussing techniques used to solve equations. This includes linear systems of equations, nonlinear equations and ordinary differential equations. We will cover Gaussian elimination, iterative methods, Newton’s method and the Runge-Kutta methods for ordinary differential equations. Somewhat curiously, the elimination procedure named after Gauss was apparently not his favourite method for solving linear systems of equations! We will investigate some limitations of Gaussian elimination and see how they can be overcome. We will see how a special class of equations can be solved efficiently using (fast) Fourier transforms (which can be traced back to Gauss). We are then ready for the discussion of iterative techniques. We will encounter the fact that an iterative approximate solution to a problem is frequently more cost effective than the so-called exact solution. The solution of ordinary differential equations requires the representation and approximation of functions. The course will mostly cover function approximations based on polynomials. We will discuss polynomial interpolation and cover numerical quadrature, that is, the numerical integration of functions. The section on the solution of ordinary differential equations includes the very important theorem that, in order to perform well, a numerical method has to be stable, so that it does not amplify previous errors, and consistent, so that it does not introduce large new errors.
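
To give a first taste of these iterative techniques, here is a minimal Python sketch of Newton’s method; the example equation x^2 = 2, the starting value and the tolerance are my own illustrative choices, not taken from the course.

    # Minimal sketch of Newton's method for f(x) = x**2 - 2, whose positive
    # root is sqrt(2).  The function, tolerance and starting value are
    # illustrative choices, not taken from the course notes.

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        """Return an approximate root of f, starting from x0."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x = x - step
            if abs(step) < tol:
                return x
        raise RuntimeError("Newton's method did not converge")

    root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
    print(root)  # approximately 1.4142135623730951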

Course Overview:

  • Expression evaluation and floating point arithmetic. Here we introduce a computer’s version of the real numbers, called floating point numbers. Evaluating expressions in floating point arithmetic will typically be affected by rounding errors. We will explore the source and effect of these errors in expression evaluation.
  • Solving linear systems of equations. We will discuss Gaussian elimination with pivoting and several iterative methods, including the Gauss-Seidel method, SOR and the conjugate gradient method (a small Gauss-Seidel sketch appears after this list). We will also discuss the fast Fourier transform and how it is used to solve certain structured systems of equations.
  • Function approximation and quadrature. This section is mostly about polynomial interpolation and its applications. We will cover the existence and uniqueness of the interpolating polynomial as well as the error incurred in polynomial interpolation. Other function systems will be mentioned. Quadrature will include the trapezoidal rule, higher order rules, Gaussian quadrature and Romberg extrapolation (a sketch of the trapezoidal rule appears after this list).
  • Non-linear equations. This section covers the bisection method and Newton’s method. Error bounds and convergence theory will be derived.
  • Ordinary differential equations. Here we consider both explicit and implicit methods, several concepts of stability, and both one-step and multistep methods. The convergence theorem will be presented (a sketch of the explicit Euler method appears after this list).
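
The following sketches are my own illustrative Python examples (Python being the language used in the labs) and are not taken from the course materials. First, the Gauss-Seidel iteration for a linear system; the small test matrix is strictly diagonally dominant, which guarantees convergence.

    import numpy as np

    # Illustrative Gauss-Seidel iteration for A x = b.  The 3x3 test system
    # is strictly diagonally dominant, so the iteration converges; it is an
    # assumed example, not taken from the course notes.

    def gauss_seidel(A, b, x0, num_iter=25):
        x = x0.astype(float).copy()
        n = len(b)
        for _ in range(num_iter):
            for i in range(n):
                # use the newest available entries of x -- this is what
                # distinguishes Gauss-Seidel from the Jacobi method
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
        return x

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])

    x = gauss_seidel(A, b, x0=np.zeros(3))
    print(x)                      # iterative approximation
    print(np.linalg.solve(A, b))  # direct solution for comparison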
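
Next, a minimal composite trapezoidal rule for quadrature; the integrand sin(x) on [0, pi], whose exact integral is 2, and the number of subintervals are again illustrative choices.

    import math

    # Composite trapezoidal rule on [a, b] with n equal subintervals.
    # The integrand and interval are illustrative choices only.

    def trapezoidal(f, a, b, n):
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        return h * total

    approx = trapezoidal(math.sin, 0.0, math.pi, n=100)
    print(approx, abs(approx - 2.0))  # the error behaves like h**2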
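
Finally, the explicit (forward) Euler method for an ordinary differential equation; the test problem y' = -y with y(0) = 1 on [0, 1], whose exact solution is exp(-t), and the step size are illustrative choices.

    import math

    # Explicit Euler method for y' = f(t, y).  The test equation y' = -y
    # with y(0) = 1 and the step size are illustrative choices only.

    def euler(f, t0, y0, h, num_steps):
        t, y = t0, y0
        for _ in range(num_steps):
            y = y + h * f(t, y)   # one explicit Euler step
            t = t + h
        return y

    y_end = euler(lambda t, y: -y, t0=0.0, y0=1.0, h=0.01, num_steps=100)
    print(y_end, math.exp(-1.0))  # the global error shrinks roughly linearly with h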

Contact Hours:

28 hours including several computer lab sessions.

Timetable:

Please see the Timetable for scheduled class times during the AMSI Summer School 2017.
Please note: Computational Mathematics is held concurrently with Geometric Group Theory and students cannot attend both courses in 2017.

Prerequisites:

The feature distinguishing this course from many similar courses introducing numerical algorithms is a stronger focus on mathematical theory. I aim to show how mathematics really drives innovation in computation. The mathematics encountered will include advanced topics from functional and harmonic analysis and algebraic and differential geometry. In order to make this course accessible to a broader audience, all the necessary advanced concepts will be introduced.

The course also requires some familiarity with first year calculus and linear algebra. A basic understanding of Taylor series, finite dimensional linear (vector) spaces, scalar products, norms and matrices is assumed. Suggested reading is the material used in the first year calculus and linear algebra courses at your university.

The hands-on parts of the course will be done using Python, and a computer lab with Linux computers will be available. Some familiarity with Python and Linux would be useful, but participants who are comfortable working on their own laptops can do so under any operating system. There are many introductions to Python and Linux on the web, and I suggest that you have a look at some of the video tutorials available on YouTube.

Assessment:

  • Computer lab assignments
  • Final examination

Resources:

I would suggest any book on introductory numerical analysis. Feel free to browse; an interesting book I found recently is James F. Epperson, An Introduction to Numerical Methods and Analysis. You might also be interested in Michael T. Heath, Scientific Computing: An Introductory Survey. I have learnt a lot from the book by Bulirsch and Stoer, which is to a large extent available on Google Books.

Lecturer Biography

Computational Mathematics

Professor Markus Hegland, The Australian National University

Markus received a PhD in mathematics from ETH Zurich in 1988, after which he was an inaugural research fellow at the ETH Interdisciplinary Project Center for Supercomputing. He came to ANU ‘for one year’ in 1992 to work on parallel FFTs, and has been based there ever since. Markus is now a member of the ANU Mathematical Sciences Institute and is a Hans Fischer Fellow of the Technical University of Munich. He has been the Deputy Director of AMSI since July 2016. He has taught several AMSI summer school courses on computational topics.

Markus’ research covers numerical approximation and algorithms for inverse problems, high-dimensional problems and high performance computing. He sees mathematics as a prime driver of computational innovation. He is particularly interested in exploring new ways in which abstract mathematics can make computations more efficient and provide guarantees on their accuracy.
