About This Book – Deep Reinforcement Learning in Action

About This Book

Who should read this book

Deep Reinforcement Learning in Action is a course designed to take you from the very foundational concepts in reinforcement learning all the way to implementing the latest algorithms. As a course, each chapter centers around one major project meant to illustrate the topic or concept of that chapter. We’ve designed each project to be something that can be efficiently run on a modern laptop; we don’t expect you to have access to expensive GPUs or cloud computing resources (though access to these resources does make things run faster).

This book is for readers with a programming background, in particular a working knowledge of Python, and at least a basic understanding of neural networks (a.k.a. deep learning). By “basic understanding,” we mean that you have at least tried implementing a simple neural network in Python, even if you didn’t fully understand what was going on under the hood. Although this book focuses on using neural networks for reinforcement learning, you will probably also learn new things about deep learning in general that can be applied to problems outside of reinforcement learning, so you do not need to be an expert at deep learning before jumping into deep reinforcement learning.

How this book is organized: A roadmap

The book has two parts comprising 11 chapters.

Part 1 explains the fundamentals of deep reinforcement learning.

  • Chapter 1 gives a high-level introduction to deep learning, reinforcement learning, and the marriage of the two into deep reinforcement learning.
  • Chapter 2 introduces the fundamental concepts of reinforcement learning that will reappear through the rest of the book. We also implement our first practical reinforcement learning algorithm.
  • Chapter 3 introduces deep Q-learning, one of the two broad classes of deep reinforcement learning algorithms. This is the algorithm that DeepMind used to outperform humans at many Atari 2600 games in 2015.
  • Chapter 4 describes the other major class of deep reinforcement learning algorithms, policy-gradient methods. We use this to train an algorithm to play a simple game.
  • Chapter 5 shows how to combine the deep Q-learning of chapter 3 with the policy-gradient methods of chapter 4 into a class of algorithms called actor-critic algorithms.

Part 2 builds on the foundations laid in part 1 to cover the biggest advances in deep reinforcement learning in recent years.

  • Chapter 6 shows how to implement evolutionary algorithms, which use principles of biological evolution to train neural networks.
  • Chapter 7 describes a method to significantly improve the performance of deep Q-learning by incorporating probabilistic concepts.
  • Chapter 8 introduces a way to give reinforcement learning algorithms a sense of curiosity to explore their environments without any external cues.
  • Chapter 9 shows how to extend what you have learned about training single-agent reinforcement learning algorithms to systems with multiple interacting agents.
  • Chapter 10 describes how to make deep reinforcement learning algorithms more interpretable and efficient by using attention mechanisms.
  • Chapter 11 concludes the book by discussing all the exciting areas in deep reinforcement learning we didn’t have the space to cover but that you may be interested in.

The chapters in part 1 should be read in order, as each chapter builds on the concepts in the previous chapter. The chapters in part 2 can more or less be approached in any order, although we still recommend reading them in order.

About the code

As we noted, this book is a course, so we have included all of the code necessary to run the projects within the main text of the book. In general, shorter code appears inline, formatted in a fixed-width font, while larger code blocks appear as separate, numbered code listings.

At press time we are confident that all the in-text code works, but we cannot guarantee it will remain bug free in the long term (especially for those of you reading this in print), as the deep learning field, and consequently its libraries, are evolving quickly. The in-text code has also been pared down to the minimum necessary to get the projects working, so we highly recommend following the projects using the code in this book’s GitHub repository: http://mng.bz/JzKp. We intend to keep the code on GitHub up to date for the foreseeable future, and it includes additional comments as well as the code we used to generate many of the figures in the book. Hence, it is best to read the book alongside the corresponding Jupyter Notebooks found in the GitHub repository.

We are confident that this book will teach you the concepts of deep reinforcement learning and not just how to narrowly code things in Python. If Python were to somehow disappear after you finish this book, you would still be able to implement all of these algorithms in some other language or framework, since you will understand the fundamentals.

liveBook discussion forum

Purchase of Deep Reinforcement Learning in Action includes free access to a private web forum run by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the authors and from other users. To access the forum, go to https://livebook.manning.com/#!/book/deep-reinforcement-learning-in-action/discussion. You can also learn more about Manning’s forums and the rules of conduct at https://livebook.manning.com/#!/discussion.

Manning’s commitment to our readers is to provide a venue where a meaningful dialogue between individual readers and between readers and the authors can take place. It is not a commitment to any specific amount of participation on the part of the authors, whose contribution to the forum remains voluntary (and unpaid). We suggest you try asking the authors some challenging questions lest their interest stray! The forum and the archives of previous discussions will be accessible from the publisher’s website as long as the book is in print.