April 22, 2021
We deal with an infinite-horizon, infinite-dimensional stochastic optimal control problem arising in the study of economic growth in time-space. Such problems have been the object of various papers in the deterministic case, where stochastic disturbances are ignored. Here we propose and solve a stochastic generalization of such models in which the stochastic term, in line with standard stochastic economic growth models, is multiplicative, driven by a cylindrical Wiener process. The problem is studied using the Dynamic Programming approach. We find an explicit solution of the associated HJB equation and, using a verification-type result, we prove that this solution is the value function and derive the optimal feedback strategies. Finally, we use this result to study the asymptotic behavior of the optimal trajectories.
Similar papers
June 26, 2008
We develop the dynamic programming approach for a family of infinite horizon boundary control problems with linear state equation and convex cost. We prove that the value function of the problem is the unique regular solution of the associated stationary Hamilton--Jacobi--Bellman equation and use this to prove existence and uniqueness of feedback controls. The idea of studying this kind of problem comes from economic applications, in particular from models of optimal investme...
February 17, 2023
In this manuscript we consider a class of optimal control problems for stochastic differential delay equations. First, we rewrite the problem in a suitable infinite-dimensional Hilbert space. Then, using the dynamic programming approach, we characterize the value function of the problem as the unique viscosity solution of the associated infinite-dimensional Hamilton-Jacobi-Bellman equation. Finally, we prove a $C^{1,\alpha}$-partial regularity of the value function. We apply thes...
June 5, 2016
The present paper considers a stochastic optimal control problem in which the cost function is defined through a backward stochastic differential equation with infinite horizon driven by G-Brownian motion. We study the regularity of the value function and establish the dynamic programming principle. Moreover, we prove that the value function is the unique viscosity solution of the related HJBI equation.
October 4, 2023
We study optimal control problems governed by abstract infinite dimensional stochastic differential equations using the dynamic programming approach. In the first part, we prove Lipschitz continuity, semiconcavity and semiconvexity of the value function under several sets of assumptions, and thus derive its $C^{1,1}$ regularity in the space variable. Based on this regularity result, we construct optimal feedback controls using the notion of the $B$-continuous viscosity soluti...
July 9, 2009
This paper, which is the natural continuation of a previous paper by the same authors, studies a class of optimal control problems with state constraints where the state equation is a differential equation with delays. This class includes some problems arising in economics, in particular the so-called models with time to build. The problem is embedded in a suitable Hilbert space H and the regularity of the associated Hamilton-Jacobi-Bellman (HJB) equation is studied. Therein ...
August 18, 1999
We study long-term growth-optimal strategies on a simple market with linear proportional transaction costs. We show that several problems of this sort can be solved in closed form, and make explicit the non-analytic dependence of optimal strategies and expected frictional losses on the friction parameter. We present one derivation in terms of invariant measures of drift-diffusion processes (Fokker-Planck approach), and one derivation using the Hamilton-Jacobi-Bellman equation of ...
January 4, 2018
We consider a general class of dynamic resource allocation problems within a stochastic optimal control framework. This class of problems arises in a wide variety of applications, each of which intrinsically involves resources of different types and demand with uncertainty and/or variability. The goal involves dynamically allocating capacity for every resource type in order to serve the uncertain/variable demand, modeled as Brownian motion, and maximize the discounted expecte...
December 31, 2013
This paper examines stochastic optimal control problems in which the state is perfectly known, but the controller's measure of time is a stochastic process derived from a strictly increasing Lévy process. We provide dynamic programming results for continuous-time finite-horizon control and specialize these results to solve a noisy-time variant of the linear quadratic regulator problem and a portfolio optimization problem with random trade activity rates. For the linear quad...
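As background for the abstract above, the deterministic-clock linear quadratic regulator it starts from can be sketched in a few lines. This is a minimal scalar illustration of the classical baseline only; the paper's actual contribution, replacing the clock with an increasing Lévy process, is not modeled here, and all parameter values are illustrative assumptions.

```python
# Classical discrete-time, infinite-horizon LQR with a deterministic clock.
# Scalar dynamics x_{t+1} = a*x_t + b*u_t, stage cost q*x^2 + r*u^2.
# The optimal control is the linear feedback u_t = -k * x_t, where k is
# obtained from the algebraic Riccati equation.

def lqr_gain(a, b, q, r, iters=1000):
    """Solve the scalar discrete algebraic Riccati equation
    p = q + a^2*p - (a*b*p)^2 / (r + b^2*p) by fixed-point iteration
    and return the feedback gain k = a*b*p / (r + b^2*p)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

k = lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
# For these values p solves p^2 - p - 1 = 0, so p is the golden ratio
# and k = p / (1 + p) ~ 0.618; the closed loop x_{t+1} = (a - b*k)*x_t
# is then stable since |a - b*k| < 1.
print(k)
```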
October 17, 2015
We consider the problem of finding optimal strategies that maximize the average growth-rate of multiplicative stochastic processes. For a geometric Brownian motion the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applicatio...
February 1, 2020
We consider an infinite horizon portfolio problem with borrowing constraints, in which an agent receives labor income which adjusts to financial market shocks in a path dependent way. This path-dependency is the novelty of the model, and leads to an infinite dimensional stochastic optimal control problem. We solve the problem completely, and find explicitly the optimal controls in feedback form. This is possible because we are able to find an explicit solution to the associat...