Günter Neumann

Publications (full list here)

  • Stalin Varanasi, Muhammad Umer Butt, and Günter Neumann (2023) AutoQIR: Auto-Encoding Questions with Retrieval Augmented Decoding for Unsupervised Passage Retrieval and Zero-shot Question Generation, Proceedings of Recent Advances in Natural Language Processing (RANLP-2023), Bulgaria, 2023.
  • Saadullah Amin, Pasquale Minervini, David Chang, Pontus Stenetorp, and Günter Neumann (2022) MedDistant19: Towards an Accurate Benchmark for Broad-Coverage Biomedical Relation Extraction, Proceedings of The 29th International Conference on Computational Linguistics (Coling-2022), October 12-17, 2022, Gyeongju, Republic of Korea
  • Ioannis Dikeoulias, Saadullah Amin, and Günter Neumann (2022) Temporal Knowledge Graph Reasoning with Low-rank and Model-agnostic Representations, Proceedings of the 7th Workshop on Representation Learning for NLP (RepL4NLP-2022) at ACL-2022, Pages 111-120, May 2022.
  • Saadullah Amin, Noon Pokaratsiri, Morgan Wixted, Alejandro García-Rudolph, Catalina Martínez-Costa, and Günter Neumann (2022) Few-Shot Cross-lingual Transfer for Coarse-grained De-identification of Code-Mixed Clinical Texts, Proceedings of the 21st Workshop on Biomedical Language Processing (BioNLP-2022) at ACL-2022, May 22-27, Pages 200-211, May 2022.
  • Stalin Varanasi, Saadullah Amin and Günter Neumann (2021) AutoEQA: Auto-Encoding Questions for Extractive Question Answering, The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP-2021), Nov. 2021.
  • Saadullah Amin and Günter Neumann (2021) T2NER: Transformers based Transfer Learning Framework for Named Entity Recognition, The 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL), demo session, 2021.
  • Ekaterina Loginova, Stalin Varanasi and Günter Neumann (2021) Towards End-to-End Multilingual Question Answering. In the journal Information Systems Frontiers, 23(1): 227-241, 2021.
  • Stalin Varanasi, Saadullah Amin, and Günter Neumann (2020) CopyBERT: A Unified Approach to Question Generation with Self-Attention. In Proceedings of the 2nd Workshop on NLP for Conversational AI, ACL workshop, 2020.
  • Saadullah Amin, Stalin Varanasi, Katherine Dunfield and Günter Neumann (2020) LowFER: Low-rank Bilinear Pooling for Link Prediction. Proceedings of the 37th International Conference on Machine Learning (ICML-2020), 2020.
  • Saadullah Amin, Katherine Dunfield, Anna Vechkaeva and Günter Neumann (2020) A Data-driven Approach for Noise Reduction in Distantly Supervised Biomedical Relation Extraction. In Proceedings of BioNLP-2020 at ACL-2020.
  • Dominik Stammbach and Günter Neumann (2019) Team DOMLIN: Exploiting Evidence Enhancement for the FEVER Shared Task. In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), EMNLP workshop, 2019.
  • Saadullah Amin, Günter Neumann, Katherine Dunfield, Anna Vechkaeva, Kathryn Annette Chapman, and Morgan Kelly Wixted (2019) MLT-DFKI at CLEF eHealth 2019: Multi-label Classification of ICD-10 Codes with BERT. In working notes of CLEF eHealth, 2019.
  • Alejandro Figueroa, Carlos Gómez-Pantoja, and Günter Neumann (2019) Integrating heterogeneous sources for predicting question temporal anchors across Yahoo! Answers. In journal Information Fusion, Volume 50, October 2019, Pages 112-125.
  • Ekaterina Loginova and Günter Neumann (2018) An Interactive Web-Interface for Visualizing the Inner Workings of the Question Answering LSTM. In proceedings of the Conference on Empirical Methods in Natural Language Processing - EMNLP-2018, October 31 – November 4, Brussels, Belgium, 2018.
  • Georg Heigold, Stalin Varanasi, Günter Neumann and Josef van Genabith (2018) How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?. In proceedings of AMTA, March 2018.
  • Georg Heigold, Günter Neumann and Josef van Genabith (2017) An Extensive Empirical Evaluation of Character-Based Morphological Tagging for 14 Languages. In proceedings of EACL, 2017.