Ten Selected Publications

  • Peters, J., Schaal, S. (2008).
    Natural Actor-Critic
    Neurocomputing, 71(7), pp. 1180-1190,
    DOI: 10.1016/j.neucom.2007.11.026

  • Peters, J., Schaal, S. (2008).
    Reinforcement Learning of Motor Skills with Policy Gradients
    Neural Networks, 21(4), pp. 682-697,
    DOI: 10.1016/j.neunet.2008.02.003

  • Kober, J., Peters, J. (2011).
    Policy Search for Motor Primitives in Robotics
    Machine Learning, 84(1), pp. 171-203,
    DOI: 10.1007/s10994-010-5223-6

  • Mülling, K., Kober, J., Krömer, O., Peters, J. (2013).
    Learning to Select and Generalize Striking Movements in Robot Table Tennis
    International Journal of Robotics Research, 32(3), pp. 280-298,
    DOI: 10.1177/0278364912472380

  • Daniel, C., Neumann, G., Kroemer, O., Peters, J. (2016).
    Hierarchical Relative Entropy Policy Search
    Journal of Machine Learning Research (JMLR), 17, pp. 1-50

  • Maeda, G., Ewerton, M., Neumann, G., Lioutikov, R., Peters, J. (2017).
    Phase Estimation for Fast Action Recognition and Trajectory Generation in Human-Robot Collaboration
    International Journal of Robotics Research (IJRR) 

  • Lutter, M., Ritter, C., Peters, J. (2019).
    Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning
    International Conference on Learning Representations (ICLR)
     
  • D'Eramo, C., Tateo, D., Bonarini, A., Restelli, M., Peters, J. (2020).
    Sharing Knowledge in Multi-Task Deep Reinforcement Learning
    International Conference on Learning Representations (ICLR)

  • Watson, J., Lin, J. A., Klink, P., Pajarinen, J., Peters, J. (2021).
    Latent Derivative Bayesian Last Layer Networks
    Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)

  • Akrour, R., Tateo, D., Peters, J. (2022).
    Continuous Action Reinforcement Learning from a Mixture of Interpretable Experts
    IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 44(10), pp. 6795-6806

Cooperation Partners