April 13, 2016
In this paper we investigate possible approaches to the study of general time-inconsistent optimization problems without assuming the existence of an optimal strategy. This leads immediately to the need to refine the concept of time-consistency, as well as any method based on Pontryagin's Maximum Principle. The fundamental obstacle is the dilemma of having to invoke the {\it Dynamic Programming Principle} (DPP) in a time-inconsistent setting, which is contradictory in nature. Th...
October 3, 2018
We consider a general class of stochastic optimal control problems, where the state process lives in a real separable Hilbert space and is driven by a cylindrical Brownian motion and a Poisson random measure; no special structure is imposed on the coefficients, which are also allowed to be path-dependent; in addition, the diffusion coefficient can be degenerate. For such a class of stochastic control problems, we prove, by means of purely probabilistic techniques based on the...
February 18, 2017
Verification theorems are key results for successfully employing the dynamic programming approach to optimal control problems. In this paper we introduce a new method to prove verification theorems for infinite dimensional stochastic optimal control problems. The method applies in the case of additively controlled Ornstein-Uhlenbeck processes, when the associated Hamilton-Jacobi-Bellman (HJB) equation admits a mild solution. The main methodological novelty of our result lies in...
June 6, 2013
In this paper, we study a stochastic recursive optimal control problem in which the objective functional is described by the solution of a backward stochastic differential equation driven by G-Brownian motion. Under standard assumptions, we establish the dynamic programming principle and the related Hamilton-Jacobi-Bellman (HJB) equation in the framework of G-expectation. Finally, we show that the value function is the viscosity solution of the obtained HJB equation.
November 10, 2014
We derive a new equation for the optimal investment boundary of a general irreversible investment problem under exponential L\'evy uncertainty. The problem is set as an infinite time-horizon, two-dimensional degenerate singular stochastic control problem. In line with the results recently obtained in a diffusive setting, we show that the optimal boundary is intimately linked to the unique optional solution of an appropriate Bank-El Karoui representation problem. Such a relati...
November 3, 2002
We investigate the growth optimal strategy over a finite time horizon for a stock and bond portfolio in an analytically solvable multiplicative Markovian market model. We show that the optimal strategy consists in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon.
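To fix ideas, in the classical frictionless benchmark (a stock following geometric Brownian motion with drift $\mu$ and volatility $\sigma$, and a bond with rate $r$; this benchmark is an illustrative assumption, not necessarily the paper's model), the ideal optimal investment around which such a holding interval is centred is the growth-optimal Kelly--Merton fraction of wealth held in stock,
$$ f^{*} = \frac{\mu - r}{\sigma^{2}} . $$
With proportional transaction costs, one expects no trading while the stock fraction remains in an interval $[f^{*}-\delta,\, f^{*}+\delta]$, with the half-width $\delta$ increasing in the cost intensity, in line with the interval described in the abstract.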
October 9, 2012
We present a methodology for obtaining explicit solutions to infinite time horizon optimal stopping problems involving general one-dimensional It\^o diffusions, payoff functions that need not be smooth, and state-dependent discounting. This is done within a framework based on dynamic programming techniques employing variational inequalities, with links to the probabilistic approaches employing $r$-excessive functions and martingale theory. The aim of this paper is to facilitat...
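In generic notation (an illustration, not the paper's exact formulation): for a one-dimensional It\^o diffusion $X$ with generator $\mathcal{L}$, state-dependent discount rate $r(x) \ge 0$ and payoff $g$, the value function
$$ V(x) = \sup_{\tau} \mathbb{E}_x\!\left[ e^{-\int_0^{\tau} r(X_s)\,ds}\, g(X_\tau) \right] $$
is formally characterised by the variational inequality
$$ \min\bigl\{ r(x)V(x) - (\mathcal{L}V)(x),\; V(x) - g(x) \bigr\} = 0 , $$
so that $V = g$ on the stopping region and $\mathcal{L}V = rV$ on the continuation region.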
July 21, 2016
Stochastic optimal control problems governed by delay equations with delay in the control are usually more difficult to study than those in which the delay appears only in the state. This is particularly true when we look at the associated Hamilton-Jacobi-Bellman (HJB) equation. Indeed, even in the simplified setting (introduced first by Vinter and Kwong for the deterministic case) the HJB equation is an infinite dimensional second order semilinear Partial Differential Equ...
March 21, 2019
In this paper we consider discrete time stochastic optimal control problems over infinite and finite time horizons. We show that for a large class of such problems the Taylor polynomials of the solutions to the associated Dynamic Programming Equations can be computed degree by degree.
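As a minimal sketch of the degree-by-degree idea in the simplest possible setting: for a scalar discrete-time linear-quadratic problem the value function is exactly quadratic, $V(x) = p x^2$, and its degree-2 Taylor coefficient $p$ solves the scalar discrete Riccati fixed-point equation. The dynamics and cost parameters below are illustrative, not taken from the paper.

```python
def riccati_fixed_point(a, b, q, r, tol=1e-12, max_iter=10_000):
    """Compute the degree-2 Taylor coefficient p of V(x) = p*x^2 for the
    scalar LQ problem  V(x) = min_u [ q*x^2 + r*u^2 + V(a*x + b*u) ],
    by iterating the discrete-time Riccati map
        p -> q + a^2 p - (a b p)^2 / (r + b^2 p)
    to a fixed point."""
    p = q
    for _ in range(max_iter):
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# Example: a = b = q = r = 1 gives the fixed-point equation p^2 = 1 + p,
# whose positive root is the golden ratio (1 + sqrt(5)) / 2.
p = riccati_fixed_point(1.0, 1.0, 1.0, 1.0)
```

Higher-degree coefficients of non-quadratic problems would be obtained analogously, by matching Taylor coefficients of both sides of the Dynamic Programming Equation one degree at a time.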
February 2, 2016
We consider a discounted reward control problem in a continuous time stochastic environment where the discount rate may be an unbounded function of the control process. We provide a set of general assumptions ensuring that there exists a smooth classical solution to the corresponding HJB equation. Moreover, verification arguments are provided and a possible extension to dynamic games is discussed. At the end of the paper, consumption-investment problems arising in fi...
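In generic notation (an illustrative assumption; the paper's setting may differ), with controlled generator $\mathcal{L}^u$, running reward $f$ and control-dependent discount rate $r(x,u)$, an HJB equation of the type referred to above reads
$$ \sup_{u \in U} \Bigl\{ (\mathcal{L}^u V)(x) - r(x,u)\,V(x) + f(x,u) \Bigr\} = 0 , $$
and it is the possible unboundedness of $r(\cdot, u)$ in the zeroth-order term that makes classical solvability delicate.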