How Risky is Multimodal Fake News Detection? A Review of Cross-Modal Learning Approaches under EU AI Act Constraints

Razieh Khamsehashari; Vera Schmitt; Tim Polzehl; Salar Mohtaj; Sebastian Möller
In: Proc. 3rd Symposium on Security and Privacy in Speech Communication. Conference in the Annual Series of Interspeech Events (INTERSPEECH-2023), Pages 1-10, ISCA, 2023.


Manual review methods have become insufficient for combating today’s scale of online fake news, leading researchers to develop AI-based detection models, many of which struggle with challenges such as multimodal conflicts and ambiguity. The most promising models combine images and textual information in a cross-modal learning strategy. This work summarizes current multimodal fake news detection models based on cross-modal learning. In order to evaluate if and how they can be applied in real-world use cases, we analyze the best-performing models with respect to obligations such as risk management, data governance, documentation, transparency, human oversight, and required accuracy, following the European Commission’s AI Act. The analysis shows that the AI Act can be applied only to a certain extent, as the categories and their obligations are vaguely defined, leaving room for interpretation when translating the obligations into technical requirements.
