Optimal control theory is a branch of applied mathematics that deals with finding a control law for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in both science and engineering. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy.

Optimal control is a mathematical optimization method for deriving optimal policies and is an extension of the calculus of variations, dating back to the formulation of Johann Bernoulli's brachistochrone problem more than 300 years ago. Optimal control can also be viewed as a control strategy within control theory.

Bernoulli's Challenge

In the June 1696 issue of Acta Eruditorum, Johann Bernoulli posed the following challenge: *"Given two points A and B in the vertical plane, for a moving particle m, how to find the path AmB descending along which by its own gravity and beginning to be urged from the point A, it may in the shortest time reach B?"*

This problem, which sounds much better in Latin than in its English translation, is the so-called brachistochrone problem, named by combining the Greek words for "shortest" (brachistos) and "time" (chronos).
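For concreteness, the problem admits a standard variational formulation (sketched here as background; it is not necessarily the exact formulation the talk will use). Measure y downward from A and let the particle start from rest; energy conservation gives speed v = √(2gy), so the descent time along a curve y(x) from A = (0, 0) to B = (x_B, y_B) is

```latex
T[y] \;=\; \int_{0}^{x_B} \frac{\sqrt{1 + y'(x)^{2}}}{\sqrt{2\,g\,y(x)}}\, dx .
```

Minimizing this functional over curves joining A and B yields a cycloid, parametrized by x = R(θ − sin θ), y = R(1 − cos θ).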

In this AMS seminar talk, we will review some basic calculus and discuss an approach to solving the brachistochrone problem.
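As a small numerical teaser of the result, the sketch below compares the descent time along a straight incline with the descent time along the optimal cycloid for one sample endpoint. It is purely illustrative and not taken from the talk; the function names and the endpoint B = (1, 0.65) are our own choices, and it uses the standard cycloid parametrization x = R(θ − sin θ), y = R(1 − cos θ) with y measured downward.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def cycloid_theta(x_b, y_b):
    """Solve (theta - sin theta) / (1 - cos theta) = x_b / y_b by bisection.

    The left-hand side increases from 0 to infinity on (0, 2*pi), so a
    root exists for any endpoint with x_b, y_b > 0.
    """
    target = x_b / y_b
    lo, hi = 1e-9, 2 * math.pi - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        val = (mid - math.sin(mid)) / (1 - math.cos(mid))
        if val < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


def descent_times(x_b, y_b):
    """Return (straight-line time, cycloid time) from (0, 0) to (x_b, y_b),
    with y measured downward and the particle starting at rest."""
    # Straight incline of length L: constant acceleration g*y_b/L along it,
    # so L = (1/2)*(g*y_b/L)*t^2 gives t = sqrt(2*L^2 / (g*y_b)).
    t_line = math.sqrt(2 * (x_b**2 + y_b**2) / (G * y_b))
    # Cycloid x = R*(t - sin t), y = R*(1 - cos t); choose R and the final
    # parameter theta_f so that the curve passes through B = (x_b, y_b).
    theta_f = cycloid_theta(x_b, y_b)
    R = y_b / (1 - math.cos(theta_f))
    t_cyc = theta_f * math.sqrt(R / G)
    return t_line, t_cyc


t_line, t_cyc = descent_times(1.0, 0.65)
print(f"straight line: {t_line:.3f} s, cycloid: {t_cyc:.3f} s")
```

The cycloid beats the straight line even though it is a longer path, which is exactly the counterintuitive point that made the problem famous.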