This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, much attention is paid to problems with functional constraints and to the realizability of strategies. Three major methods of investigation are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained: a separate chapter is devoted to Markov pure jump processes, the appendices collect the requisite background on real analysis and applied probability, and all the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operational research, statistics, and engineering will find this monograph interesting, useful, and valuable.
This invaluable book provides approximately eighty examples illustrating the theory of controlled discrete-time Markov processes. Apart from applications of the theory to real-life problems such as stock exchange, queues, gambling, and optimal search, the main attention is paid to counter-intuitive, unexpected properties of optimization problems. Such examples illustrate the importance of the conditions imposed in theorems on Markov decision processes. Many of the examples are based upon examples published earlier in journal articles or textbooks, while several others are new. The aim was to collect them together in one reference book, intended as a complement to existing monographs on Markov decision processes. The book is self-contained and unified in presentation. The main theoretical statements and constructions are provided, and particular examples can be read independently of the others. Examples in Markov Decision Processes is an essential source of reference for mathematicians and all those who apply optimal control theory to practical purposes. When studying or using mathematical methods, the researcher must understand what can happen if some of the conditions imposed in rigorous theorems are not satisfied. Many examples confirming the importance of such conditions were published in different journal articles, which are often difficult to find. This book brings together examples based upon such sources, along with several new ones. In addition, it indicates the areas where Markov decision processes can be used. Active researchers can refer to this book when assessing the applicability of mathematical methods and theorems. It is also suitable reading for graduate and research students, who will gain a better understanding of the theory.
World-leading experts give their accounts of the modern mathematical models in the field: Markov Decision Processes, controlled diffusions, piecewise deterministic processes, etc., with a wide range of performance functionals. One of the aims is to give a general view of the state of the art. The authors use Dynamic Programming, the Convex Analytic Approach, several numerical methods, index-based approaches, and so on. Most chapters either contain well-developed examples or are entirely devoted to the application of mathematical control theory to real-life problems from such fields as Insurance, Portfolio Optimization, and Information Transmission. The book will enable researchers, academics, and research students to get a sense of novel results, concepts, models, methods, and applications of controlled stochastic processes.
The present book is the outcome of efforts to introduce topological connectedness as one of the basic tools for the study of necessary conditions for an extremum. Apparently this monograph is the first book in the theory of maxima and minima where topological connectedness is used so widely for this purpose. Its application permits us to obtain new results in this sphere and to consider the classical results from a nonstandard point of view. Regarding the style of the present book, it should be remarked that it is comparatively elementary. The author has made constant efforts to make the book as self-contained as possible. Certainly, familiarity with the basic facts of topology, functional analysis, and the theory of optimization is assumed. The book is written for applied mathematicians and graduate students interested in the theory of optimization and its applications. We present the synthesis of the well-known Dubovitskii-Milyutin approach for the study of necessary conditions for an extremum, based on functional analysis, and topological methods. This synthesis allows us to show that in some cases we have the following important result: if the Euler equation has no nontrivial solution at a point of an extremum, then a certain inclusion is valid for the functionals belonging to the dual space. This general result is obtained for an optimization problem considered in a linear topological space. We also show an application of our result to some problems of nonlinear programming and optimal control.