
Publications

Displaying results 221 to 230 of 668.
  1. Guillaume Duret; Mohamed Mahmoud Sayed Shelkamy Ali; Nicolas Cazin; Danylo Mazurak; Anna Samsonenko; Alexandre Chapin; Florence Zara; Emmanuel Dellandréa; Liming Chen; Jan Peters

    FruitBin: A Tunable Large-Scale Dataset for Advancing 6D Pose Estimation in Fruit Bin-Picking Automation

    In: Alessio Del Bue; Cristian Canton; Jordi Pont-Tuset; Tatiana Tommasi (Eds.). Computer Vision - ECCV 2024 Workshops - Milan, Italy, September 29-October 4, 2024, Proceedings, Part I. Computer Vision Systems (CVS), Pages 73-90, Lecture Notes in Computer Science, Vol. 15623, Springer, 2024.

  2. Julen Urain; Ajay Mandlekar; Yilun Du; Mahi Shafiullah; Danfei Xu; Katerina Fragkiadaki; Georgia Chalvatzaki; Jan Peters

    Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations

    In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2408.04380, Pages 1-20, arXiv, 2024.

  3. Luca Lach; Robert Haschke; Davide Tateo; Jan Peters; Helge J. Ritter; Júlia Borràs Sol; Carme Torras

    Zero-Shot Transfer of a Tactile-based Continuous Force Control Policy from Simulation to Robot

    In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2024, Abu Dhabi, United Arab Emirates, October 14-18, 2024. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Pages 725-732, IEEE, 2024.

  4. Piotr Kicki; Puze Liu; Davide Tateo; Haitham Bou-Ammar; Krzysztof Walas; Piotr Skrzypczynski; Jan Peters

    Fast Kinodynamic Planning on the Constraint Manifold With Deep Neural Networks

    In: IEEE Transactions on Robotics (T-RO), Vol. 40, Pages 277-297, IEEE, 2024.

  5. Daniel Palenicek; Florian Vogt; Joe Watson; Ingmar Posner; Jan Peters

    XQC: Well-conditioned Optimization Accelerates Deep Reinforcement Learning

    In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2509.25174, Pages 1-24, arXiv, 2025.

  6. Aryaman Reddi; Gabriele Tiboni; Jan Peters; Carlo D'Eramo

    K-Level Policy Gradients for Multi-Agent Reinforcement Learning

    In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2509.12117, Pages 1-22, arXiv, 2025.

  7. Théo Vincent; Yogesh Tripathi; Tim Lukas Faust; Yaniv Oren; Jan Peters; Carlo D'Eramo

    Bridging the Performance Gap Between Target-Free and Target-Based Reinforcement Learning With Iterated Q-Learning

    In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2506.04398, Pages 1-22, arXiv, 2025.

  8. Théo Vincent; Tim Lukas Faust; Yogesh Tripathi; Jan Peters; Carlo D'Eramo

    Eau De Q-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning

    In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2503.01437, Pages 1-26, arXiv, 2025.

  9. Nico Bohlinger; Jan Peters

    Massively Scaling Explicit Policy-conditioned Value Functions

    In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2502.11949, Pages 1-5, arXiv, 2025.

  10. Daniel Palenicek; Florian Vogt; Jan Peters

    Scaling Off-Policy Reinforcement Learning with Batch and Weight Normalization

    In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2502.07523, Pages 1-23, arXiv, 2025.