This book contains three well-written research tutorials that inform the graduate reader about the forefront of current research in multi-agent optimization. These tutorials cover topics that have not yet found their way into standard books and offer the reader the unique opportunity to be guided by major researchers in the respective fields. Multi-agent optimization, lying at the intersection of classical optimization, game theory, and variational inequality theory, is at the forefront of modern optimization and has recently undergone dramatic development. It is therefore timely to provide an overview that describes ongoing research and important trends in detail. The book concentrates on Distributed Optimization over Networks; Differential Variational Inequalities; and Advanced Decomposition Algorithms for Multi-agent Systems. It will appeal to both mathematicians and mathematically oriented engineers and will be a source of inspiration for PhD students and researchers.
A uniquely pedagogical, insightful, and rigorous treatment of the analytical/geometrical foundations of optimization. The book provides a comprehensive development of convexity theory and its rich applications in optimization, including duality, minimax/saddle point theory, Lagrange multipliers, and Lagrangian relaxation/nondifferentiable optimization. It is an excellent supplement to several of our books: Convex Optimization Theory (Athena Scientific, 2009), Convex Optimization Algorithms (Athena Scientific, 2015), Nonlinear Programming (Athena Scientific, 2016), Network Optimization (Athena Scientific, 1998), and Introduction to Linear Optimization (Athena Scientific, 1997). Aside from a thorough account of convex analysis and optimization, the book aims to restructure the theory of the subject by introducing several novel unifying lines of analysis, including:
1) A unified development of minimax theory and constrained optimization duality as special cases of the duality between two simple geometrical problems.
2) A unified development of conditions for the existence of solutions of convex optimization problems, conditions for the minimax equality to hold, and conditions for the absence of a duality gap in constrained optimization.
3) A unification of the major constraint qualifications allowing the use of Lagrange multipliers for nonconvex constrained optimization, using the notion of constraint pseudonormality and an enhanced form of the Fritz John necessary optimality conditions.
Among its features, the book:
a) Develops rigorously and comprehensively the theory of convex sets and functions, in the classical tradition of Fenchel and Rockafellar
b) Provides a geometric, highly visual treatment of convex and nonconvex optimization problems, including existence of solutions, optimality conditions, Lagrange multipliers, and duality
c) Includes an insightful and comprehensive presentation of minimax theory and zero-sum games, and their connection with duality
d) Describes dual optimization and the associated computational methods, including the novel incremental subgradient methods, and applications in linear, quadratic, and integer programming
e) Contains many examples, illustrations, and exercises, with complete solutions (about 200 pages) posted at the publisher's web site http://www.athenasc.com/convexity.html
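As standard background (not a quotation from the book), the minimax equality and duality gap referred to above can be made concrete. For the constrained problem of minimizing f(x) subject to g(x) <= 0, with Lagrangian L(x, mu) = f(x) + mu^T g(x), weak duality is the minimax inequality
\[
\sup_{\mu \ge 0}\, \inf_{x}\, L(x,\mu) \;\le\; \inf_{x}\, \sup_{\mu \ge 0}\, L(x,\mu) \;=\; \inf_{g(x) \le 0} f(x),
\]
where the left-hand side is the optimal dual value. The minimax equality holds, and the duality gap is absent, exactly when the two sides coincide.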
Recent advances in wired and wireless technology have led to the emergence of large-scale networks such as the Internet, wireless mobile ad hoc networks, swarm robotics, the smart grid, and smart-sensor networks. These advances have given rise to new networked applications, including decentralized resource allocation in multi-agent systems, decentralized control of multi-agent systems, collaborative decision making, decentralized learning and estimation, and decentralized in-network signal processing. They have also given birth to new large cyber-physical systems such as sensor and social networks. These network systems are typically spatially distributed over a large area and may consist of anywhere from hundreds of agents, as in smart-sensor networks, to millions of agents, as in social networks. As such, they possess neither a central coordinator nor a central point of access to the complete system information. The lack of a central entity makes traditional (centralized) optimization and control techniques inapplicable, thus necessitating the development of new distributed computational models and algorithms to support efficient operation over such networks. This tutorial provides an overview of the convergence rates of distributed algorithms for coordination, and their relevance to optimization, in a system of autonomous agents embedded in a communication network, where each agent is aware of (and can communicate with) only its local neighbors. The focus is on distributed averaging dynamics for consensus problems and their role in consensus-based gradient methods for convex optimization problems in which the network objective function is separable across the constituent agents; a small sketch of such a method is given below.
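The following is a minimal sketch of a consensus-based gradient method, not code from the tutorial itself. It assumes a ring network of agents, a doubly stochastic mixing matrix W, private quadratic costs f_i(x) = (x - b_i)^2 / 2, and a diminishing step size; all of these choices are illustrative assumptions. Each agent mixes its estimate with those of its neighbors and then takes a step along its local negative gradient, and all estimates approach the minimizer of the separable sum of the f_i, which here is the average of the b_i.

    import numpy as np

    n_agents = 5
    rng = np.random.default_rng(0)
    b = rng.normal(size=n_agents)        # private data held by each agent

    # Ring network: agent i mixes with its two neighbors via a doubly stochastic W.
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i - 1) % n_agents] = 0.25
        W[i, (i + 1) % n_agents] = 0.25

    x = np.zeros(n_agents)               # x[i] is agent i's estimate of the global minimizer
    for k in range(2000):
        alpha = 1.0 / (k + 1)            # diminishing step size (an assumed, classic choice)
        grad = x - b                     # local gradient of f_i(x) = 0.5 * (x - b_i)**2 at x[i]
        x = W @ x - alpha * grad         # consensus (averaging) step plus local gradient step

    print(x)                             # every entry is close to b.mean(), the minimizer of sum_i f_i
    print(b.mean())

With a diminishing step size the estimates converge to the exact minimizer; a constant step size would drive them only to a neighborhood of it, which is the usual trade-off for this family of methods.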