After mastering the basics of deep reinforcement learning in part 1, part 2 delves into a variety of more advanced techniques and tackles more sophisticated environments. The chapters in this part can be approached in more or less any order, as they do not rely on each other. However, each chapter tends to be more complex than the previous one, so it may still be better to proceed in sequence.
In chapter 6 we’ll introduce an alternative framework for training neural networks using ideas borrowed from the biological sciences. In particular, we’ll adapt Charles Darwin’s theory of evolution by natural selection for machine learning.
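To give a flavor of the idea, here is a minimal sketch of evolution by natural selection applied to optimization; this is an illustrative toy (the function names, fitness objective, and hyperparameters are assumptions for the example, not code from chapter 6). A population of candidate solutions is repeatedly scored, the fittest are selected as parents, and mutated copies of the parents form the next generation:

```python
import random

def fitness(genome):
    # Toy fitness function (an assumption for this sketch):
    # higher score the closer every gene is to 1.0
    return -sum((g - 1.0) ** 2 for g in genome)

def evolve(pop_size=50, genome_len=5, generations=100, mutation_rate=0.1):
    # Start from a random population of real-valued genomes
    population = [[random.uniform(-5, 5) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest half of the population as parents
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction with mutation: each child is a Gaussian
        # perturbation of a parent genome
        children = [[g + random.gauss(0, mutation_rate) for g in parent]
                    for parent in parents]
        population = parents + children
    # Return the best genome found
    return max(population, key=fitness)

random.seed(0)  # for reproducibility of this toy run
best = evolve()
```

In chapter 6 the same selection-and-mutation loop is adapted to search over neural network parameters instead of a toy genome.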
In chapter 7 we’ll show that most approaches to reinforcement learning are impoverished in how they represent the value of states of the environment, and we’ll fix that by modeling a full probability distribution over rewards rather than a single expected value. In chapter 8 we’ll show you how to imbue reinforcement learning agents with a sense of human-like curiosity. In chapter 9 we’ll extend what we’ve learned training individual reinforcement learning agents to scenarios with dozens or hundreds of agents interacting with one another.
In chapter 10 we’ll tackle one last major project to implement a machine learning model with a crude form of symbolic reasoning. This will allow us to inspect the internal behavior of the neural network and make the model more interpretable. Finally, chapter 11 briefly reviews the core concepts in the book and provides a roadmap for further study.