Publication
Diminishing Return of Value Expansion Methods
Daniel Palenicek; Michael Lutter; João Carvalho; Daniel Dennert; Faran Ahmad; Jan Peters
In: Computing Research Repository (CoRR), Vol. abs/2412.20537, Pages 1-21, arXiv, 2024.
Abstract
Model-based reinforcement learning is one approach to increasing sample efficiency. However, the accuracy of the dynamics model and the resulting compounding error over modelled trajectories are commonly regarded as key limitations. A natural question to ask is: how much more sample efficiency can be gained by improving the learned dynamics models? Our paper empirically answers this question for the class of model-based value expansion methods in continuous control problems. Value expansion methods should benefit from increased model accuracy by enabling longer rollout horizons and better value function approximations. Our empirical study, which leverages oracle dynamics models to avoid compounding model errors, shows that (1) longer horizons increase sample efficiency, but the gain decreases with each additional expansion step, and (2) increased model accuracy only marginally improves sample efficiency compared to learned models with identical horizons. Longer horizons and increased model accuracy therefore yield diminishing returns in terms of sample efficiency. These improvements are particularly disappointing when compared to model-free value expansion methods: even though they introduce no computational overhead, we find their performance to be on par with that of model-based value expansion methods. We therefore conclude that the limitation of model-based value expansion methods is not the accuracy of the learned models. While higher model accuracy is beneficial, our experiments show that even a perfect model does not provide unrivalled sample efficiency; the bottleneck lies elsewhere.
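
For context, the H-step value expansion target referred to in the abstract commonly takes a form along these lines (a sketch in standard MVE-style notation; the symbols H, \gamma, \hat{f}, \hat{r}, and V_\phi are introduced here for illustration, and the paper's exact estimator may differ):

\[
\hat{V}_H(s_t) = \sum_{k=0}^{H-1} \gamma^k \hat{r}_{t+k} + \gamma^H V_\phi(\hat{s}_{t+H}),
\qquad
\hat{s}_{t+k+1} = \hat{f}(\hat{s}_{t+k}, a_{t+k}), \quad \hat{s}_t = s_t,
\]

where \hat{f} is the learned (or oracle) dynamics model, \hat{r}_{t+k} are the predicted rewards, \gamma is the discount factor, and V_\phi is the learned value function. Each additional expansion step adds one more model-predicted reward term and pushes the bootstrapped value estimate one step deeper into the modelled rollout, which is why longer horizons are expected to tighten the value targets when the model is accurate.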
