
Publication

Iterated Q-Network: Beyond the One-Step Bellman Operator

Théo Vincent; Daniel Palenicek; Boris Belousov; Jan Peters; Carlo D'Eramo
In: Computing Research Repository (CoRR), Vol. abs/2403.02107, Pages 1-26, arXiv, 2024.

Abstract

The vast majority of Reinforcement Learning methods are largely impacted by the computation effort and data requirements needed to obtain effective estimates of action-value functions, which in turn determine the quality of the overall performance and the sample-efficiency of the learning procedure. Typically, action-value functions are estimated through an iterative scheme that alternates the application of an empirical approximation of the Bellman operator and a subsequent projection step onto a considered function space. It has been observed that this scheme can be potentially generalized to carry out multiple iterations of the Bellman operator at once, benefiting the underlying learning algorithm. However, until now, it has been challenging to effectively implement this idea, especially in high-dimensional problems. In this paper, we introduce iterated Q-Network (i-QN), a novel principled approach that enables multiple consecutive Bellman updates by learning a tailored sequence of action-value functions where each serves as the target for the next. We show that i-QN is theoretically grounded and that it can be seamlessly used in value-based and actor-critic methods. We empirically demonstrate the advantages of i-QN in Atari 2600 games and MuJoCo continuous control problems. Our code is publicly available at https://github.com/theovincent/i-DQN and the trained models are uploaded at https://huggingface.co/TheoVincent/Atari_i-QN

Further Links