Associate Professor Regina Burachik, The University of South Australia


Training in optimization should include (i) theoretical analysis and algorithms, (ii) their relation to practical problems, and (iii) the ability to apply the theory and algorithms to these “real-life” problems.

Hence, we will cover the following general areas:

Modelling: Formulation of a given problem as an optimization problem;

Theory: Study of existence and uniqueness of solutions, and characterization of solutions of optimization problems;

Methods: Development and convergence analysis of algorithms for solving optimization problems;

Implementation: Translation of the algorithm into a form suitable for a technical device (e.g., a digital computer). On a digital computer, implementing a method consists of writing code corresponding to the method and running it on the computer.

All four of these stages will be part of this optimization course. To implement the methods, you will use MATLAB (or a similar language, such as Octave).

Many topics will be demonstrated with MATLAB programs, and you will find satisfaction in being able to solve problems on your own!

Course Objectives: At the end of this course, you should be able to:

  • Identify the main features of a given optimization problem (i.e., whether the problem is constrained or not, differentiable or not, convex or not, etc.).
  • Decide whether the problem has solutions, and whether these solutions can be found analytically.
  • Use numerical algorithms to compute or approximate the solutions.
  • Analyze the convergence of optimization techniques.
  • Model and solve optimization problems using MATLAB.

Course Overview (week by week)

We assume there will be five one-hour lectures and a two-hour tutorial per week. References to sections and chapters follow the textbook, which I will use throughout: Amir Beck, Introduction to Nonlinear Optimization, MOS-SIAM Series on Optimization, 2014.

Week 1: Review of mathematical tools: inner products and norms; matrices, eigenvectors and eigenvalues; topological concepts, open/closed balls, open/closed sets. Optimality conditions for unconstrained optimization: global and local optima, first-order optimality conditions, second-order optimality conditions. Conditions for existence of solutions. Global optimality conditions. The quadratic case. (Chapters 1 and 2 of the textbook.)
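The quadratic case admits a fully explicit check of the first- and second-order conditions: for f(x) = ½xᵀAx − bᵀx with A symmetric, the stationarity condition is Ax = b, and positive definiteness of the Hessian A guarantees a global minimizer. The sketch below illustrates this (in Python/NumPy for brevity, although the course itself uses MATLAB; the matrix A and vector b are made-up data, not from the textbook):

```python
import numpy as np

# Quadratic f(x) = 0.5 x'Ax - b'x with symmetric A (made-up data).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

# First-order condition: grad f(x) = Ax - b = 0.
x_star = np.linalg.solve(A, b)

# Second-order condition: the Hessian of f is A itself; if all
# eigenvalues are positive, x_star is the unique global minimizer.
eigvals = np.linalg.eigvalsh(A)

print(x_star)               # stationary point
print(np.all(eigvals > 0))  # True: A is positive definite
```

The same two checks (solve the stationarity system, then inspect the Hessian's eigenvalues) carry over verbatim to MATLAB with `A\b` and `eig(A)`.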

Week 2: The least squares problem (chapter 3). The gradient method, scaled gradient method, and Gauss-Newton method; Newton's method, the damped Newton method, and Cholesky factorization (chapters 4 and 5).
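To connect the two topics of this week, the gradient method with a constant step size 1/L (L being a Lipschitz constant of the gradient) can be applied directly to a least squares objective. A minimal sketch, with randomly generated data rather than course data, again in Python for illustration (the course would use MATLAB):

```python
import numpy as np

# Least squares: minimize f(x) = ||Ax - b||^2,
# whose gradient is grad f(x) = 2 A'(Ax - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

# Lipschitz constant of the gradient: L = 2 * lambda_max(A'A).
L = 2 * np.linalg.eigvalsh(A.T @ A).max()

x = np.zeros(3)
for _ in range(2000):
    x -= (1.0 / L) * 2 * A.T @ (A @ x - b)  # constant step size 1/L

# Compare with the closed-form solution of the normal equations
# A'A x = A'b.
x_ls = np.linalg.solve(A.T @ A, A.T @ b)
print(np.linalg.norm(x - x_ls))
```

The iterate converges linearly to the normal-equations solution; the contrast between the iterative method and the closed-form solve is exactly the theme of chapters 3 and 4.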

Week 3: The hybrid gradient-Newton method (chapter 5). Convex sets and their topological properties (chapter 6). Convex functions and their connection with global optimality conditions. Continuity and differentiability of convex functions, and other related properties (chapter 7, up to 7.9).
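One common way to realize a hybrid gradient-Newton scheme (the exact safeguard used in chapter 5 may differ) is to take the Newton direction whenever the Hessian is positive definite, detected by attempting a Cholesky factorization, and to fall back to the negative gradient otherwise, combined with an Armijo backtracking line search. A hedged sketch on the Rosenbrock function, a standard nonconvex test problem not taken from the textbook, in Python for illustration:

```python
import numpy as np

# Rosenbrock function, its gradient, and its Hessian.
def f(x):
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

def grad(x):
    return np.array([
        -400 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0]),
        200 * (x[1] - x[0] ** 2),
    ])

def hess(x):
    return np.array([
        [-400 * (x[1] - 3 * x[0] ** 2) + 2, -400 * x[0]],
        [-400 * x[0], 200.0],
    ])

x = np.array([-1.2, 1.0])  # classical starting point
for _ in range(200):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    try:
        # Cholesky succeeds iff the Hessian is positive definite;
        # in that case the Newton direction is a descent direction.
        np.linalg.cholesky(hess(x))
        d = np.linalg.solve(hess(x), -g)
    except np.linalg.LinAlgError:
        d = -g  # fall back to the gradient direction
    # Armijo backtracking line search.
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5
    x = x + t * d

print(x)  # approaches the minimizer (1, 1)
```

The Cholesky attempt doubles as the positive-definiteness test, which is why the factorization appears alongside Newton's method in the chapter 5 material.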

Week 4: Convex optimization: important examples, including linear programming and convex quadratic problems. Examples using the CVX software to solve convex optimization problems (chapter 8). Optimality conditions for problems with a continuously differentiable objective over a convex constraint set, including some special cases (chapter 9). KKT conditions for inequality-constrained problems (chapter 11, up to 11.7).
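As a toy illustration of the KKT conditions (this example is mine, not from chapter 11): for the problem min x² subject to g(x) = 1 − x ≤ 0, the candidate x* = 1 with multiplier λ = 2 satisfies stationarity, primal feasibility, dual feasibility (λ ≥ 0), and complementary slackness. The numerical check below, in Python for illustration:

```python
import numpy as np

# KKT check for: minimize x^2  subject to  g(x) = 1 - x <= 0.
# Candidate solution x* = 1 with multiplier lam = 2 (dual feasible).
x_star, lam = 1.0, 2.0

# Stationarity: d/dx [x^2 + lam * (1 - x)] = 2x - lam at x = x*.
stationarity = 2 * x_star - lam

# Primal feasibility: g(x*) <= 0.
primal_feasible = (1 - x_star) <= 0

# Complementary slackness: lam * g(x*) = 0.
complementarity = lam * (1 - x_star)

print(stationarity, primal_feasible, complementarity)  # 0.0 True 0.0
```

Here the constraint is active at x*, so a strictly positive multiplier is consistent with complementary slackness; had the constraint been inactive, λ would have to be zero.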

Contact hours: 28

Prerequisite Knowledge

  • Multivariate calculus (e.g., gradient, Hessian, and Jacobian).
  • Linear algebra: linear functions, matrices, subspaces, dimension theorem, spectral decomposition theorem for symmetric matrices.
  • Basic knowledge of MATLAB.


One assignment, based on exercises solved during the tutorial hours of weeks 1, 2 and 3. This assignment will be posted on the Wednesday of week 2 and is due on the Wednesday of week 4. It is to be submitted as a hard copy in person to the lecturer, or by email as a single PDF file of reasonable size. This assignment is worth 30% of the total marks.

Three 30-minute quizzes in the tutorials of weeks 1, 2 and 3, worth 5% each; hence, the quizzes contribute 15% of the total marks.

There will be a final exam worth 55% of the total marks. The exam is 2 hours long and will be held at the students' home institution.

Late submission policy: Due to the short time frame, late submissions will not be accepted.


This course is based on the book: Amir Beck, Introduction to Nonlinear Optimization, MOS-SIAM Series on Optimization, 2014, ISBN 978-1-611973-64-8. This book is available as an e-book.


Chapters 1 and 2 of Amir Beck, Introduction to Nonlinear Optimization, MOS-SIAM Series on Optimization, 2014, ISBN 978-1-611973-64-8.




Regina Burachik is an Associate Professor of Optimization at the University of South Australia. She publishes extensively in nonsmooth, convex, and nonconvex optimization and in set-valued analysis, and co-authored the Springer research-level book “Set-Valued Analysis and Monotone Mappings”. Her interests span theoretical and practical aspects of optimization, including variational inequalities, maximal monotone maps, and duality theory. She is an associate editor of around 10 international journals, and regularly gives keynote and plenary talks at national and international conferences. In 2018, she was a plenary speaker at the Annual Meeting of the Australian Mathematical Society and at the Annual Meeting of the Argentinian Mathematics Union.
