November 15, 2006
We present and analyze a new method for solving optimal control problems for Volterra integral equations, based on approximating the controlled Volterra integral equations by a sequence of systems of controlled ordinary differential equations. The resulting approximating problems can then be solved by dynamic programming methods for ODE controlled systems. Other, more straightforward versions of dynamic programming are not applicable to Volterra integral equations. We also derive the connection between our version of dynamic programming and the Hamiltonian equations for Volterra controlled systems.
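For orientation, the controlled Volterra systems in question have the form

\[
x(t) = \phi(t) + \int_0^t K\bigl(t, s, x(s), u(s)\bigr)\, ds ,
\]

and one standard way of approximating such an equation by ordinary differential equations (a degenerate-kernel sketch; the paper's actual construction may differ in detail) is to expand the kernel as \( K(t,s,x,u) \approx \sum_{i=1}^N a_i(t)\, b_i(s,x,u) \), which gives

\[
x(t) \approx \phi(t) + \sum_{i=1}^N a_i(t)\, y_i(t), \qquad \dot y_i(t) = b_i\bigl(t, x(t), u(t)\bigr), \quad y_i(0) = 0,
\]

a controlled system of N ordinary differential equations to which ODE dynamic programming applies.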
Similar papers
November 8, 2006
We formulate and analyze a new method for solving optimal control problems for systems governed by Volterra integral equations. Our method utilizes discretization of the original Volterra controlled system and a novel type of dynamic programming in which the Hamilton-Jacobi function is parametrized by the control function (rather than the state, as in the case of ordinary dynamic programming). We also derive estimates for the computational cost of our method.
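As a minimal illustration of the discretization (assuming a uniform grid \( t_k = kh \) and a left-endpoint quadrature rule; the paper's scheme may differ), the discretized Volterra system reads

\[
x_k = \phi(t_k) + h \sum_{j=0}^{k-1} K\bigl(t_k, t_j, x_j, u_j\bigr), \qquad k = 1, \dots, N .
\]

Because \( x_k \) does not summarize the history that enters the later equations, the dynamic-programming recursion is indexed by the control values \( u_0, \dots, u_{k-1} \) chosen so far rather than by the current state alone, in line with the control-parametrized Hamilton-Jacobi function described above.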
January 13, 2021
We study linear-quadratic optimal control problems for Volterra systems, and problems that are linear-quadratic in the control but generally nonlinear in the state. In the case of linear-quadratic Volterra control, we obtain sharp necessary and sufficient conditions for optimality. For problems that are linear-quadratic in the control only, we obtain novel necessary conditions in the form of a double Volterra equation; we prove the solvability of such equations.
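A representative problem of the linear-quadratic Volterra class (stated here in a typical form; the paper's exact assumptions may differ) is

\[
\min_u \; J(u) = \int_0^T \Bigl[ x(t)^\top Q(t)\, x(t) + u(t)^\top R(t)\, u(t) \Bigr]\, dt
\]

subject to

\[
x(t) = \phi(t) + \int_0^t \bigl[ A(t,s)\, x(s) + B(t,s)\, u(s) \bigr]\, ds ;
\]

in the problems that are linear-quadratic in the control only, the state dependence in the integrand is instead allowed to be nonlinear.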
June 25, 2019
We formulate and analyze game-theoretic problems for systems governed by integral equations. For Volterra integral equations, we prove necessary and sufficient conditions for linear-quadratic problems, and for problems that are linear-quadratic in the control. We also obtain necessary conditions for one type of pursuit-evasion Volterra game.
April 13, 2019
We analyze optimal control problems for multiple Fredholm and Volterra integral equations. These are non-Pontryaginian optimal control problems, i.e., problems for which an extremum principle of Pontryagin type does not hold. We obtain first-order necessary conditions for optimality, and second-order necessary and sufficient conditions. We illustrate with applications to first-order and second-order Volterra bilinear control problems.
June 18, 2016
We present a number of cases of optimal control of Volterra and Fredholm integral equations that are solvable in the sense that the problem can be reduced to a solvable integral equation. This is conceptually analogous to the role of the Riccati differential system in the optimal control of ordinary differential equations.
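For context, the ODE analogue referred to is the Riccati system of the classical linear-quadratic regulator: for dynamics \( \dot x = A x + B u \) and cost \( \int_0^T \bigl( x^\top Q x + u^\top R u \bigr)\, dt + x(T)^\top Q_T\, x(T) \), the optimal control is \( u^*(t) = -R^{-1} B^\top P(t)\, x(t) \), where \( P \) solves

\[
-\dot P(t) = A^\top P(t) + P(t) A - P(t) B R^{-1} B^\top P(t) + Q, \qquad P(T) = Q_T .
\]

The solvable integral equations obtained here play an analogous role for the Volterra and Fredholm problems.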
September 16, 2021
This paper is concerned with a linear-quadratic optimal control problem for a class of singular Volterra integral equations. Under proper convexity conditions, the optimal control exists uniquely, and it can be characterized either via the Fréchet derivative of the quadratic functional in a Hilbert space or via maximum-principle-type necessary conditions. However, these (equivalent) characterizations have the shortcoming that the current value of the optimal control depends on the future values of ...
May 29, 2007
We analyze an optimal control problem for systems of integral equations of Volterra type with two independent variables. These systems generalize both the hyperbolic control problems for systems of Goursat-Darboux type and the optimal control of ordinary (i.e., with one independent variable) Volterra integral equations. We prove extremal principles akin to Pontryagin's maximum principle.
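A typical controlled Volterra equation with two independent variables (written in one common form as an illustration; the systems treated may be more general) is

\[
x(t,s) = \phi(t,s) + \int_0^t \!\! \int_0^s K\bigl(t, s, \tau, \sigma, x(\tau,\sigma), u(\tau,\sigma)\bigr)\, d\sigma\, d\tau .
\]

When K does not depend on the outer variables (t,s), this is the integrated form of the Goursat-Darboux equation \( x_{ts} = f(t,s,x,u) \) with data prescribed on the coordinate axes and collected in \( \phi \).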
November 11, 2003
A geometric derivation of numerical integrators for optimal control problems is proposed. It is based on the classical technique of generating functions, adapted to the special features of optimal control problems.
February 6, 2008
We obtain necessary conditions of optimality for impulsive Volterra integral equations with switching and impulsive controls, with variable impulse time-instants. The present work continues and complements our previous work on impulsive Volterra control with fixed impulse times.
May 12, 2004
We consider an optimal control problem for a system governed by a Volterra integral equation with impulsive terms. The impulses act on both the state and the control; the control consists of switchings at discrete times. The cost functional includes both an integrated cost rate (continuous part) and switching costs at the discrete impulse times (discrete part). We prove necessary optimality conditions of a form analogous to a discrete maximum principle. For the particular ca...