Optimal Stochastic Control and an Application to the Management of Public Debt
A talk in the Mathematisches Kolloquium (SFB 1283) series by Giorgio Ferrari from Bielefeld
Optimal control theory plays a fundamental role in various disciplines ranging from Economics and Finance to Aerospace Engineering and Biology.
In this talk we will start by introducing some ideas and methods of (stochastic) optimal control theory, and by revisiting some important applications. We will see how Bellman's dynamic programming principle leads to an equation for the value function (the so-called Hamilton-Jacobi-Bellman equation), and how any suitably regular solution to this equation can be shown to coincide with the value function of the optimal control problem (the so-called verification theorem).
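To fix ideas, here is an illustrative sketch (with notation chosen for this announcement, not necessarily that of the talk): for a one-dimensional controlled diffusion over an infinite horizon with discount rate $\rho > 0$, the dynamic programming principle leads to the following Hamilton-Jacobi-Bellman equation.

```latex
% Controlled dynamics (illustrative): dX_t = b(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t
% Value function: V(x) = \sup_{u} \mathbb{E}_x\Big[ \int_0^\infty e^{-\rho t} f(X_t, u_t)\,dt \Big]
% HJB equation:
\rho V(x) = \sup_{u \in U} \Big\{ b(x,u)\, V'(x) + \tfrac{1}{2}\,\sigma^2(x,u)\, V''(x) + f(x,u) \Big\}
```

If a sufficiently smooth solution to this equation can be found, a verification argument (based on Itô's formula together with a suitable transversality condition) identifies it with the value function, and a maximizer of the right-hand side yields a candidate optimal feedback control.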
We will then move on to an important class of stochastic optimal control problems, namely singular stochastic control problems. In these problems the dynamics of the stochastic system are additively controlled through a process of bounded variation, and the cost of exerting control is linear. Through a classical example, we will see how singular stochastic control problems can be related to free-boundary problems and to questions of optimal stopping.
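As an illustrative sketch of such a problem (again with notation chosen here, not taken from the talk), suppose one minimizes a discounted running cost by means of a nondecreasing control $\xi$ that pushes the state downwards, at a linear unit cost $\kappa > 0$.

```latex
% State (illustrative): dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t - d\xi_t, \quad \xi \text{ nondecreasing}
% Value function:
% V(x) = \inf_{\xi} \mathbb{E}_x\Big[ \int_0^\infty e^{-\rho t} h(X_t)\,dt
%        + \kappa \int_0^\infty e^{-\rho t}\,d\xi_t \Big]
% With generator \mathcal{L}V = b V' + \tfrac{1}{2}\sigma^2 V'',
% the HJB equation takes the form of a variational inequality:
\min\Big\{ \rho V(x) - \mathcal{L}V(x) - h(x),\; \kappa - V'(x) \Big\} = 0
```

The state space then splits along a free boundary into an inaction region, where the first term vanishes and no control is exerted, and an action region, where $V'(x) = \kappa$ and the optimal control keeps the state inside the inaction region. Moreover, the derivative $V'$ can itself be identified with the value of an optimal stopping problem, which is one way the connection to optimal stopping arises.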
This relation is particularly helpful in multi-dimensional settings, and we will illustrate its use in a genuinely two-dimensional singular stochastic control problem motivated by the aim of optimally managing the debt-to-GDP ratio of a country.