## Math 273, Section 1, Fall 2014

## Optimization, Calculus of Variations, and Control Theory

**Lecture Meeting Time:** Wed 3:00pm - 3:50pm and Fri 2:00pm - 3:50pm.

**Lecture Location:** MS 5117.

**Instructor:** Luminita A. Vese

**Office:** MS 7620 D

**Office hours:** Wed and Fri after the class or by appointment.

**E-mail:** lvese@math.ucla.edu

**Discussion Section:** Thu 3:00pm - 3:50pm.

**Discussion Section Location:** MS 5118.

**Teaching Assistant:** Eric Radke

**E-mail:** eric.radke@gmail.com

**General Course Description:** Application of abstract mathematical theory to optimization problems of calculus of variations, including numerical optimization.

**References:**
- J. Nocedal and S.J. Wright, *Numerical Optimization*, Springer Series in Operations Research, Springer, 1999 (1st or 2nd edition; online access to the 2nd edition restricted to UC campuses).
- I. Ekeland and R. Temam, *Convex Analysis and Variational Problems*, SIAM, 1999 (new edition; online access to the 2nd edition restricted to UC campuses).
- E. Zeidler, *Nonlinear Functional Analysis and its Applications*, Vol. III: *Variational Methods and Optimization*, Springer-Verlag, 1984.
- P.E. Gill, W. Murray, and M.H. Wright, *Practical Optimization*, Academic Press, 1981.
- R.T. Rockafellar, *Convex Analysis*, Princeton University Press, 1970.
- J.-B. Hiriart-Urruty and C. Lemarechal, *Fundamentals of Convex Analysis*, Springer, 2001.
- S. Boyd and L. Vandenberghe, *Convex Optimization*, Cambridge University Press, 2004 (especially Chapters 9, 10, and 11).
- M. Giaquinta and S. Hildebrandt, *Calculus of Variations*, Springer, 1996 (two volumes).
- D. Luenberger, *Optimization by Vector Space Methods*, John Wiley & Sons, 1969.
- D.P. Bertsekas, with A. Nedic and A.E. Ozdaglar, *Convex Analysis and Optimization*.
- L.C. Evans, *Partial Differential Equations*, Chapter 8.
- H. Attouch, G. Buttazzo, and G. Michaille, *Variational Analysis in Sobolev and BV Spaces: Applications to PDEs and Optimization*, MPS-SIAM, 2006.

**Specific topics:**

I. The first half of the course will be devoted to numerical (nonlinear) optimization.

Sections on numerical optimization are covered from Nocedal and Wright: 2.1, 3.1, 3.2, 3.3, 7.1, 8.1, 10, 10.2, summary of Ch. 12, 15, 17. Additional topics are covered from Gill-Murray-Wright and from Boyd-Vandenberghe (9.1.2, 9.2, 9.3, 9.4, 9.5).

- Formulation of a general finite-dimensional optimization problem with constraints: objective function, equality and inequality constraints, feasible region; examples.
- Finite-dimensional unconstrained optimization: review of Taylor's theorem; first- and second-order necessary conditions; sufficient conditions for a local minimizer; the case of convex functions and global minimizers (Nocedal and Wright, pages 13-17).
- Examples of non-smooth optimization problems and how to transform them into smooth ones (Gill, Murray, and Wright, *Practical Optimization*, pages 96-98).
- Descent methods: descent directions and steepest descent directions; the step length alpha; computation of the steepest descent direction for various norms (Euclidean, general quadratic norm, and l1-norm); exact line search and backtracking line search (Chapter 9 of *Convex Optimization* and parts of Nocedal-Wright).
- Newton and quasi-Newton methods.
- Constrained optimization: quadratic penalty method, log-barrier method, augmented Lagrangian.
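To make the descent-method ideas above concrete, here is a minimal Python sketch (illustrative, not course material) of steepest descent with a backtracking line search enforcing the Armijo sufficient-decrease condition; the function and parameter names are my own, and the convex quadratic is a toy example:

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo (sufficient decrease) condition
    f(x + alpha*p) <= f(x) + c*alpha*grad_f(x)^T p holds."""
    alpha = alpha0
    fx, gx = f(x), grad_f(x)
    while f(x + alpha * p) > fx + c * alpha * gx.dot(p):
        alpha *= rho
    return alpha

def steepest_descent(f, grad_f, x0, tol=1e-8, max_iter=1000):
    """Minimize f from x0 using the direction p = -grad f(x)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g  # steepest descent direction in the Euclidean norm
        alpha = backtracking_line_search(f, grad_f, x, p)
        x = x + alpha * p
    return x

# Toy example: minimize the convex quadratic f(x) = x1^2 + 10*x2^2,
# whose unique (global) minimizer is the origin.
f = lambda x: x[0]**2 + 10 * x[1]**2
grad_f = lambda x: np.array([2 * x[0], 20 * x[1]])
x_star = steepest_descent(f, grad_f, np.array([3.0, 1.0]))
```

The zigzagging of steepest descent on this ill-conditioned quadratic is the standard motivation for the Newton and quasi-Newton methods that follow.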
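A pure Newton iteration, x ← x − H(x)⁻¹∇f(x), can be sketched in a few lines; the strictly convex test function f(x) = x1² + x2² + exp(x1 + x2) below is an illustrative choice of mine, not taken from the references:

```python
import numpy as np

def grad(x):
    """Gradient of f(x) = x1^2 + x2^2 + exp(x1 + x2)."""
    e = np.exp(x[0] + x[1])
    return np.array([2 * x[0] + e, 2 * x[1] + e])

def hessian(x):
    """Hessian of f; positive definite for all x, so pure Newton is safe here."""
    e = np.exp(x[0] + x[1])
    return np.array([[2 + e, e], [e, 2 + e]])

x = np.zeros(2)
for _ in range(20):
    # Solve H(x) step = grad(x) and take the full Newton step.
    step = np.linalg.solve(hessian(x), grad(x))
    x = x - step
    if np.linalg.norm(step) < 1e-12:
        break
```

On a strictly convex function like this one, the iteration converges quadratically; in general, Newton's method is combined with a line search or trust region, and quasi-Newton methods (e.g. BFGS) replace the exact Hessian with an updated approximation.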
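The quadratic penalty method for constrained optimization can be sketched as follows. The toy equality-constrained problem is my own illustrative choice (not from the course notes), and scipy's BFGS routine is used for the unconstrained subproblems:

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained problem (illustrative choice):
#   minimize f(x) = x1 + x2   subject to   c(x) = x1^2 + x2^2 - 2 = 0,
# whose solution is x* = (-1, -1).

def quadratic_penalty(x, mu):
    """Penalty function Q(x; mu) = f(x) + (mu/2) * c(x)^2."""
    c = x[0]**2 + x[1]**2 - 2.0
    return x[0] + x[1] + 0.5 * mu * c**2

# Start on the side of the constraint where the penalty minimizer tracks
# the true solution (quadratic penalties can have spurious local minima).
x = np.array([-0.5, -0.5])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Solve the unconstrained subproblem min_x Q(x; mu), warm-starting
    # from the previous minimizer as mu increases.
    res = minimize(quadratic_penalty, x, args=(mu,), method='BFGS')
    x = res.x
```

Increasing mu gradually with warm starts, rather than using one huge mu, keeps the subproblems well conditioned; the log-barrier and augmented Lagrangian methods refine this basic idea.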

II. The second half of the course will be devoted to abstract formulations in calculus of variations and applications to minimization problems on Sobolev spaces. Several sections from Ekeland-Temam will be presented.
- Abstract minimization problems: existence of minimizers, applications; duality techniques in the continuous case (Ekeland-Temam); polar functions, Lagrangians, saddle points.
- Duality applied to a particular case of finite-dimensional optimization.
- Several notions of differentiability; characterization of minimizers; computation of the Euler-Lagrange equation for F(u) = ∫_{x0}^{x1} L(x, u, u') dx in one dimension; the associated gradient descent method for a general problem "min F(u)" over u in V, which decreases the objective function (the associated time-dependent Euler-Lagrange equation).
- Applications to abstract minimization problems and to minimization problems on Sobolev spaces; computation of the dual problem.
- Sobolev gradients.
- Shape optimization and other applications from image processing.
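The gradient descent (time-dependent Euler-Lagrange) approach above can be sketched on the simplest model problem. The choice of the 1-D Dirichlet energy below is illustrative, not from the course notes; its gradient flow is the heat equation:

```python
import numpy as np

# Gradient flow for the 1-D Dirichlet energy
#   F(u) = \int_0^1 (1/2) |u'(x)|^2 dx,   u(0) = 0, u(1) = 1,
# whose Euler-Lagrange equation is -u'' = 0. The descent scheme
# u^{k+1} = u^k - dt * F'(u^k) discretizes u_t = u_xx (heat flow).

n = 51                          # grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
dt = 0.4 * h**2                 # explicit-scheme stability: dt <= h^2 / 2

u = np.zeros(n)
u[-1] = 1.0                     # boundary conditions u(0) = 0, u(1) = 1

for _ in range(20000):
    # F'(u) = -u'' at interior nodes, by central differences
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    u[1:-1] += dt * lap         # descend: u <- u - dt * (-u'')

# The minimizer is the linear interpolant u(x) = x.
```

Each step strictly decreases the discrete energy, and the iterates converge to the solution of the Euler-Lagrange equation; replacing the L2 gradient by a Sobolev gradient (a later topic) preconditions this flow and removes the dt ≤ h²/2 restriction.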

**Links:**
- Matlab Optimization Toolbox
- Optimization Online
- Optimization Center at Northwestern University
- SIAM Activity Group on Optimization
- Numerical Recipes
- NEOS Guide
- Convex Analysis and Optimization by Dimitri P. Bertsekas
- Computational Convex Analysis - CCA numerical library by Yves Lucet

**Assignments Policy:**
There will be several homework assignments with theoretical and computational questions.

**Notes:**
- Summary of optimality conditions
- Notes on Stable and Normal Problems (following Ekeland-Temam)
- Connections with the finite-dimensional case
- Duality examples
- Notations for Sobolev spaces

**Homework Assignments, Projects & Practice Problems:**
- Homework #1 (LaTeX file)
- Homework #2 (LaTeX file)
- Homework #3, due on Friday, November 14 (LaTeX file)
- Homework #4 (LaTeX file)
- Homework #5 (LaTeX file)