As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches for solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question to ask is the following:
(Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal control?
There was some research (prior to the 1980s) on the relationship between the two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as the Hamilton-Jacobi-Bellman (HJB) equation.
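To fix ideas, here is a minimal sketch of the two objects just described, in standard notation that is assumed here rather than quoted from the book: for a controlled SDE dX = b(t,X,u) dt + σ(t,X,u) dW with cost J(u) = E[∫f dt + h(X_T)], the value function solves the second-order HJB equation, while the maximum principle introduces an adjoint pair (p,q) solving a backward SDE (one common sign convention):

\[
\begin{aligned}
% Dynamic programming: the second-order HJB equation for the value function V
-V_t(t,x) &= \inf_{u \in U}\Big\{ \tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,u)\,V_{xx}(t,x)\big)
   + \big\langle b(t,x,u),\, V_x(t,x) \big\rangle + f(t,x,u) \Big\}, \\
V(T,x) &= h(x), \\[4pt]
% Maximum principle: the adjoint backward SDE along an optimal pair (\bar X, \bar u);
% together with the state equation and the maximum condition it forms the Hamiltonian system
dp(t) &= -\big( b_x^{\top} p(t) + \sigma_x^{\top} q(t) - f_x \big)\big|_{(t,\bar X_t,\bar u_t)}\,dt
   + q(t)\,dW_t, \qquad p(T) = -h_x(\bar X_T).
\end{aligned}
\]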
Infinite dimensional systems can be used to describe many phenomena in the real world. As is well known, heat conduction, properties of elastic-plastic materials, fluid dynamics, diffusion-reaction processes, etc., all lie within this area. The object that we are studying (temperature, displacement, concentration, velocity, etc.) is usually referred to as the state. We are interested in the case where the state satisfies proper differential equations that are derived from certain physical laws, such as Newton's law, Fourier's law, etc. The space in which the state exists is called the state space, and the equation that the state satisfies is called the state equation. By an infinite dimensional system we mean one whose corresponding state space is infinite dimensional. In particular, we are interested in the case where the state equation is one of the following types: partial differential equation, functional differential equation, integro-differential equation, or abstract evolution equation. The case in which the state equation is a stochastic differential equation is also an infinite dimensional problem, but we will not discuss such a case in this book.
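As a minimal illustration (a standard textbook example, assumed here rather than taken from the text), heat conduction already gives an infinite dimensional system: the heat equation can be recast as an abstract evolution equation whose state lives in the infinite dimensional space L²(Ω):

\[
\begin{aligned}
% Heat conduction (Fourier's law): a PDE state equation on a domain \Omega
&y_t(t,\xi) = \Delta y(t,\xi), \qquad \xi \in \Omega, \qquad y(t,\cdot)\big|_{\partial\Omega} = 0, \\
% Abstract evolution form: the same state, viewed as a curve in the
% infinite dimensional state space L^2(\Omega)
&\dot y(t) = A\,y(t), \qquad y(t) \in L^2(\Omega), \quad
A = \Delta, \quad \mathcal{D}(A) = H^2(\Omega) \cap H_0^1(\Omega).
\end{aligned}
\]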
This book gathers the most essential results, including recent ones, on linear-quadratic optimal control problems, which represent an important aspect of stochastic control. It presents results for two-player differential games and mean-field optimal control problems in the context of finite and infinite horizon problems, and discusses a number of new and interesting issues. Further, the book identifies, for the first time, the interconnections between the existence of open-loop and closed-loop Nash equilibria, solvability of the optimality system, and solvability of the associated Riccati equation, and also explores the open-loop solvability of mean-field linear-quadratic optimal control problems. Although the content is largely self-contained, readers should have a basic grasp of linear algebra, functional analysis and stochastic ordinary differential equations. The book is mainly intended for senior undergraduate and graduate students majoring in applied mathematics who are interested in stochastic control theory. However, it will also appeal to researchers in other related areas, such as engineering, management, finance/economics and the social sciences.
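For orientation, a sketch of the mean-field state equation in question, in standard notation assumed here rather than quoted from the book: the coefficients act on the state and the control as well as on their expectations, and the quadratic cost weights X, E[X], u, and E[u] in the same fashion.

\[
% Mean-field linear state equation: coefficients act on the state and the control
% as well as on their expectations
dX(t) = \big( A X + \bar A\,\mathbb{E}[X] + B u + \bar B\,\mathbb{E}[u] \big)\,dt
      + \big( C X + \bar C\,\mathbb{E}[X] + D u + \bar D\,\mathbb{E}[u] \big)\,dW(t).
\]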
Xunjing Li (1935–2003) was a pioneer of control theory in China. He was well known in the Chinese community of applied mathematics and in the global community of optimal control theory of distributed parameter systems. He made important contributions to the optimal control theory of distributed parameter systems, in particular regarding the first-order necessary conditions (Pontryagin-type maximum principle) for optimal control of nonlinear infinite-dimensional systems. He directed the Seminar of Control Theory at Fudan towards stochastic control theory in the 1980s and mathematical finance in the 1990s, which led to several important subsequent developments in both of these closely interacting fields. These remarkable efforts in scientific research and education, among others, gave birth to the so-called "Fudan School". This proceedings volume is a collection of original research papers and reviews authored or co-authored by Xunjing Li's former students, postdoctoral fellows, and mentored scholars in the areas of control theory, dynamical systems, mathematical finance, and stochastic analysis, among others. Contents: Stochastic Control, Mathematical Finance, and Backward Stochastic Differential Equations: Axiomatic Characteristics for Solutions of Reflected Backward Stochastic Differential Equations (X Bao & S Tang); A Linear Quadratic Optimal Control Problem for Stochastic Volterra Integral Equations (S Chen & J Yong); Stochastic Control and BSDEs with Quadratic Growth (M Fuhrman et al.); Unique Continuation and Observability for Stochastic Parabolic Equations and Beyond (X Zhang); Deterministic Control Systems: Some Counterexamples in Existence Theory of Optimal Control (H Lou); A Generalized Framework for Global Output Feedback Stabilization of Inherently Nonlinear Systems with Uncertainties (J Polendo & C Qian); On Finite-Time Stabilization of a Class of Nonsmoothly Stabilizable Systems (B Yang & W Lin); Dynamics and Optimal Control of Partial Differential Equations: Optimal Control of Quasilinear Elliptic Obstacle Problems (Q Chen & Y Ye); Controllability of a Nonlinear Degenerate Parabolic System with Bilinear Control (P Lin et al.); and other papers. Readership: Researchers and graduate students in the areas of control theory, mathematical finance and dynamical systems.
Mathematical analysis serves as a common foundation for many research areas of pure and applied mathematics. It is also an important and powerful tool used in many other fields of science, including physics, chemistry, biology, engineering, finance, and economics. In this book, some basic theories of analysis are presented, including metric spaces and their properties, limits of sequences, continuous functions, differentiation, the Riemann integral, uniform convergence, and series. After going through a sequence of courses on basic calculus and linear algebra, it is desirable to spend a reasonable length of time (ideally, say, one semester) building a base of analysis sufficient for getting into various research fields other than analysis itself, and/or for stepping into more advanced analysis courses (such as real analysis, complex analysis, differential equations, functional analysis, and stochastic analysis, among others). This book is written to meet such a demand. Readers will find that the treatment of the material is as concise as possible while still maintaining all the necessary details.
This book gathers the most essential results, including recent ones, on linear-quadratic optimal control problems, which represent an important aspect of stochastic control. It presents the results in the context of finite and infinite horizon problems, and discusses a number of new and interesting issues. Further, it precisely identifies, for the first time, the interconnections between three well-known, relevant issues – the existence of optimal controls, solvability of the optimality system, and solvability of the associated Riccati equation. Although the content is largely self-contained, readers should have a basic grasp of linear algebra, functional analysis and stochastic ordinary differential equations. The book is mainly intended for senior undergraduate and graduate students majoring in applied mathematics who are interested in stochastic control theory. However, it will also appeal to researchers in other related areas, such as engineering, management, finance/economics and the social sciences.
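For orientation, a minimal finite-horizon stochastic LQ sketch in standard notation (assumed here, not quoted from the book): a linear state equation and a quadratic cost lead to a Riccati equation whose solution, when it exists, yields the optimal feedback.

\[
\begin{aligned}
% Linear state equation and quadratic cost
dX &= (AX + Bu)\,dt + (CX + Du)\,dW, \\
J(u) &= \mathbb{E}\Big[ \int_0^T \big( \langle QX, X\rangle + \langle Ru, u\rangle \big)\,dt
   + \langle GX_T, X_T\rangle \Big], \\[4pt]
% The associated Riccati equation (one standard form)
0 &= \dot P + PA + A^{\top}P + C^{\top}PC + Q
   - (PB + C^{\top}PD)(R + D^{\top}PD)^{-1}(B^{\top}P + D^{\top}PC), \qquad P(T) = G, \\[4pt]
% When R + D^{\top}PD > 0, the solution P delivers the optimal feedback
\bar u(t) &= -\big(R + D^{\top}P(t)D\big)^{-1}\big(B^{\top}P(t) + D^{\top}P(t)C\big)\,X(t).
\end{aligned}
\]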
This volume is a survey/monograph on the recently developed theory of forward-backward stochastic differential equations (FBSDEs). Basic techniques such as the method of optimal control, the 'Four Step Scheme', and the method of continuation are presented in full. Related topics such as backward stochastic PDEs and many applications of FBSDEs are also discussed in detail. The volume is suitable for readers with basic knowledge of stochastic differential equations and some exposure to stochastic control theory and PDEs. It will be useful to researchers and senior graduate students in the areas of probability, control theory, mathematical finance, and other related fields.
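For readers new to the subject, a coupled FBSDE in its generic form reads as follows (standard notation, assumed here rather than quoted from the volume); the 'Four Step Scheme' seeks a decoupling function θ with Y_t = θ(t, X_t):

\[
\begin{aligned}
% Forward component, with an initial condition
dX_t &= b(t, X_t, Y_t, Z_t)\,dt + \sigma(t, X_t, Y_t, Z_t)\,dW_t, & X_0 &= x, \\
% Backward component, with a terminal condition; the pair (Y, Z) is part of the unknown
dY_t &= -g(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, & Y_T &= \varphi(X_T).
\end{aligned}
\]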
This small volume presents the most basic results for deterministic two-person differential games. The presentation begins with the optimization of a single function, followed by a basic theory for two-person games. For dynamic situations, the author first recalls control theory, which is treated as a single-person differential game. Then a systematic theory of two-person differential games is concisely presented, including evasion and pursuit problems, zero-sum problems, and LQ differential games. The book is intended to be self-contained, assuming that readers have basic knowledge of calculus, linear algebra, and elementary ordinary differential equations. The readership of the book includes junior/senior undergraduate and graduate students with majors related to applied mathematics who are interested in differential games. Researchers in other related areas, such as engineering and the social sciences, will also find the book useful.
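As a minimal sketch of the zero-sum setting just mentioned (a standard formulation, assumed here rather than quoted from the book): two players steer one state equation, one minimizing and the other maximizing a single payoff, and a saddle point plays the role of the optimal control.

\[
\begin{aligned}
% One state equation driven by both players' controls
&\dot y(t) = f\big(t, y(t), u_1(t), u_2(t)\big), \qquad y(0) = y_0, \\
% A single payoff: player 1 minimizes J, player 2 maximizes it
&J(u_1, u_2) = \int_0^T g\big(t, y(t), u_1(t), u_2(t)\big)\,dt + h\big(y(T)\big), \\
% A saddle point (\bar u_1, \bar u_2): neither player gains by deviating unilaterally
&J(\bar u_1, u_2) \;\le\; J(\bar u_1, \bar u_2) \;\le\; J(u_1, \bar u_2)
 \qquad \text{for all admissible } u_1,\ u_2.
\end{aligned}
\]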
The book deals with topics such as the pricing of various contingent claims within different frameworks, risk-sensitive problems, optimal investment, defaultable term structure, etc. It also reflects on some recent developments in certain important aspects of mathematical finance. Contents: Intensity-Based Valuation of Basket Credit Derivatives (T R Bielecki & M Rutkowski); Comonotonicity of Backward Stochastic Differential Equations (Z Chen & X Wang); Some Lookback Option Pricing Problems (X Guo); Optimal Investment and Consumption with Fixed and Proportional Transaction Costs (H Liu); Filtration Consistent Nonlinear Expectations (F Coquet et al.); A Theory of Volatility (A Savine); Discrete Time Markets with Transaction Costs (L Stettner); Options on Dividend Paying Stocks (R Beneder & T Vorst); Risk: From Insurance to Finance (H Yang); Arbitrage Pricing Systems in a Market Driven by an Itô Process (S Luo et al.); and other papers. Readership: Graduate students and researchers in mathematical finance and economics.
The IFIP-TC7, WG 7.2 Conference on Control Theory of Distributed Parameter Systems and Applications was held at Fudan University, Shanghai, China, May 6-9, 1990. The papers presented cover a wide variety of topics, e.g. the theory of identification, optimal control, stabilization, controllability, and stochastic control, as well as applications in heat exchangers, elastic structures, nuclear reactors, meteorology, etc.