Multi-agent coordination : a reinforcement learning approach / Arup Kumar Sadhu, Amit Konar.
By: Sadhu, Arup Kumar [author.].
Contributor(s): Konar, Amit [author.].
Material type: Book
Publisher: Hoboken, New Jersey : Wiley-IEEE, [2021]
Description: 1 online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9781119699057; 1119699053; 9781119699026; 1119699029; 9781119698999; 1119698995
Subject(s): Reinforcement learning | Multiagent systems
Genre/Form: Electronic books
Additional physical formats: Print version: Multi-agent coordination
DDC classification: 006.3/1
Online resources: Wiley Online Library
Summary: "This book explores the use of reinforcement learning for multi-agent coordination. Chapter 1 introduces the fundamentals of multi-robot coordination. Chapter 2 offers two useful properties developed to speed up the convergence of traditional multi-agent Q-learning (MAQL) algorithms with respect to team-goal exploration, where team-goal exploration refers to the simultaneous exploration of individual goals. Chapter 3 proposes the novel consensus Q-learning (CoQL), which addresses the equilibrium selection problem. Chapter 4 introduces a new dimension to the literature on traditional correlated Q-learning (CQL), in which the correlated equilibrium (CE) is computed partly in the learning phase and the rest in the planning phase, thereby requiring CE computation only once. Chapter 5 proposes an alternative solution to the multi-agent planning problem using meta-heuristic optimization algorithms. Chapter 6 provides concluding remarks based on the principles and experimental results of the previous chapters. Possible future directions of research are also examined briefly at the end of the chapter." -- Provided by publisher.
Includes bibliographical references and index.
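To give a flavor of the multi-agent Q-learning (MAQL) setting the summary refers to, the sketch below shows independent per-agent Q-learning on a toy one-dimensional grid, where each agent learns to reach its own goal cell (the summary's "individual goals"). The environment, grid size, rewards, and hyperparameters are all illustrative assumptions, not taken from the book.

```python
# Illustrative sketch: two agents, each with its own Q-table over
# (own position, action), learning independently on a 1-D grid.
import random

N = 5                 # grid cells 0..4 (assumed toy environment)
GOALS = [0, 4]        # agent 0 heads left, agent 1 heads right
ACTIONS = [-1, +1]    # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

# One Q-table per agent, initialized to zero.
Q = [{(s, a): 0.0 for s in range(N) for a in ACTIONS} for _ in range(2)]

def step(s, a):
    """Move within grid bounds."""
    return min(max(s + a, 0), N - 1)

random.seed(0)
for episode in range(500):
    pos = [2, 2]                      # both agents start mid-grid
    for _ in range(20):
        for i in range(2):
            s = pos[i]
            if s == GOALS[i]:         # this agent is done for the episode
                continue
            # epsilon-greedy action selection
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[i][(s, x)])
            s2 = step(s, a)
            r = 1.0 if s2 == GOALS[i] else 0.0
            # standard Q-learning update
            best_next = max(Q[i][(s2, x)] for x in ACTIONS)
            Q[i][(s, a)] += ALPHA * (r + GAMMA * best_next - Q[i][(s, a)])
            pos[i] = s2

# After training, each agent's greedy action from the start cell
# points toward its own goal.
policy0 = max(ACTIONS, key=lambda a: Q[0][(2, a)])
policy1 = max(ACTIONS, key=lambda a: Q[1][(2, a)])
```

The agents here ignore each other entirely; the MAQL variants discussed in the book coordinate the agents' learning (e.g., via consensus or correlated equilibria) rather than training them in isolation.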
Description based on print version record and CIP data provided by publisher; resource not viewed.