Math 273B: Calculus of Variations


  • MS 5147, Mon Wed Fri 3-3:50pm

  • Prerequisites: linear algebra, basic functional analysis, basic optimization

  • Online forum for Q&As, homework discussions, and projects


Textbooks

  1. Optimization by Vector Space Methods, Luenberger, 1969.

  2. Convex Analysis and Variational Problems, Ekeland and Temam, SIAM, 1999. (UC campuses have online access.)


Grading

  • 50% homework: about four sets, LaTeX required, 500 points in total

  • 40% reading projects, 400 points

  • 10% classroom and Piazza participation (ask and answer questions, share resources), 100 points

  • total: 100% and 1000 points

Homework / exam policy

No extensions will be granted, and late submissions will not be accepted. No exceptions.

Topics (tentative):

  • Vector space (finite and infinite dimensions)

  • Optimization in vector space, local/global minima, first variation, second variation, optimality conditions

  • Calculus of variations, Euler-Lagrange equation, Hamiltonian

  • If time permits, optimal control and Hamilton-Jacobi-Bellman equation
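
As a brief preview of the central object in the topics above (standard textbook notation, not lecture-specific): minimizing an integral functional leads to the Euler-Lagrange equation.

```latex
% Minimize J[y] = \int_a^b L(x, y(x), y'(x))\,dx over admissible curves y.
% A smooth minimizer y must satisfy the Euler--Lagrange equation:
\frac{\partial L}{\partial y} - \frac{d}{dx}\,\frac{\partial L}{\partial y'} = 0
```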

Project assignments

  • Group size: 1, 2, or 3 students in each group.

  • Requirements:

    • Select one paper below or bring up your own choice (subject to approval)

    • Submit a reading report or a set of slides describing the background, motivation, main methods, numerical results, and future directions. The submitted documents will be read by the instructor. Points will be deducted for missing major parts, missing major novelties or contributions, missing major references, or any significant mathematical errors.

    • Groups of 2 must cover both methodological and numerical results.

    • Groups of 3 must additionally include a brief literature survey.

  • To-dos and deadlines:

    • Group formation and topic selection: email the instructor the list of names (copying all group members) no later than Monday, February 25.

    • In the last week of class, 15-20 minute in-class presentations will be scheduled. We might need additional time in some evenings.

    • Final submission of slides/reports/codes to CCLE by March 22nd (last day of the quarter).

  • List of candidate papers:

    • Michael G. Crandall and Pierre-Louis Lions. Viscosity solutions of Hamilton-Jacobi equations, T. AMS, 1983.

    • M. Bardi and L.C. Evans. On Hopf's formulas for solutions of Hamilton-Jacobi equations, Nonlinear Analysis: Theory, Methods & Applications, 1984.

    • Jonathan Eckstein and Michael Ferris. Operator-Splitting Methods for Monotone Affine Variational Inequalities, with a Parallel Application to Optimal Control, INFORMS J. on Computing, 1998.

    • Brendan O'Donoghue, Giorgos Stathopoulos, and Stephen Boyd. A Splitting Method for Optimal Control, IEEE T. Control Systems Technology, 2013.

    • Biao Luo, Huai-Ning Wu, and Tingwen Huang. Off-Policy Reinforcement Learning for H_\infty Control Design, IEEE T. Cybernetics, 2015.

    • Timothy Lillicrap, et al. Continuous Control With Deep Reinforcement Learning, arXiv:1509.02971, 2015.

    • Bahare Kiumarsi, Kyriakos G. Vamvoudakis, Hamidreza Modares, Frank L. Lewis. Optimal and Autonomous Control Using Reinforcement Learning: A Survey, IEEE T. Neural Networks and Learning Systems, 2017. (Recommended for Groups of 2 or 3.)

    • Yat Tin Chow, Jerome Darbon, Stanley Osher, and Wotao Yin. Algorithm for Overcoming the Curse of Dimensionality For Time-Dependent Non-convex Hamilton–Jacobi Equations Arising From Optimal Control and Differential Games Problems, J. Scientific Computing, 2017.

    • Kyriakos Vamvoudakis and Frank Lewis. Online actor–critic algorithm to solve the continuous-time infinite horizon optimal control problem, Automatica, 2010.

    • Qianxiao Li, Long Chen, Cheng Tai, Weinan E. Maximum principle based algorithms for deep learning, Journal of Machine Learning Research, 2018.

    • Benjamin Recht. A Tour of Reinforcement Learning: The View from Continuous Control, Annual Review of Control, Robotics, and Autonomous Systems, 2019.

    • Gabriel Peyre and Marco Cuturi. Computational Optimal Transport, Foundations and Trends® in Machine Learning, 2019.
