Publication

Leveraging Visual Question Answering to Improve Text-to-Image Synthesis

Stanislav Frolov; Shailza Jolly; Jörn Hees; Andreas Dengel
In: Proceedings of the Second Workshop on Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN), held at the 28th International Conference on Computational Linguistics (COLING 2020), Barcelona, Spain (online), December 13, 2020. Association for Computational Linguistics.

Abstract

Generating images from textual descriptions has recently attracted a lot of interest. While current models can generate photo-realistic images of individual objects such as birds and human faces, synthesising images with multiple objects is still very difficult. In this paper, we propose an effective way to combine Text-to-Image (T2I) synthesis with Visual Question Answering (VQA) to improve the image quality and image-text alignment of generated images by leveraging the VQA 2.0 dataset. We create additional training samples by concatenating question and answer (QA) pairs and employ a standard VQA model to provide the T2I model with an auxiliary learning signal. We encourage images generated from QA pairs to look realistic and additionally minimize an external VQA loss. Our method lowers the FID from 27.84 to 25.38 and increases the R-precision from 83.82% to 84.79% when compared to the baseline, which indicates that T2I synthesis can successfully be improved using a standard VQA model.
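The abstract describes two mechanisms: forming extra training captions by concatenating QA pairs from VQA 2.0, and adding an external VQA loss on images generated from those pairs. The sketch below illustrates how such a combined generator objective could be wired up in PyTorch; all modules, dimensions, and the adversarial-loss hook are simplified placeholders (not the authors' actual architecture), shown only to make the auxiliary training signal concrete.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-ins for the real components (hypothetical, not the
# authors' implementation): a text encoder, a conditional generator, and
# a pretrained VQA model that stays frozen during T2I training.
class TextEncoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pooled tokens

    def forward(self, token_ids):
        return self.embed(token_ids)  # (batch, dim) sentence embedding

class Generator(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Linear(dim, 3 * 64 * 64)

    def forward(self, text_emb, noise):
        x = self.net(text_emb + noise)
        return torch.tanh(x).view(-1, 3, 64, 64)  # generated image

class VQAModel(nn.Module):
    """Pretrained VQA model: (image, question) -> answer logits."""
    def __init__(self, dim=128, num_answers=3000):
        super().__init__()
        self.img_proj = nn.Linear(3 * 64 * 64, dim)
        self.head = nn.Linear(2 * dim, num_answers)

    def forward(self, image, question_emb):
        img_emb = self.img_proj(image.flatten(1))
        return self.head(torch.cat([img_emb, question_emb], dim=1))

def t2i_step(text_encoder, generator, vqa_model, question_ids, answer_ids,
             answer_label, adv_loss, lambda_vqa=1.0):
    """One generator update on a QA-derived sample: concatenate question
    and answer tokens into a pseudo-caption, generate an image, and add
    an external VQA cross-entropy loss to the adversarial objective."""
    qa_ids = torch.cat([question_ids, answer_ids], dim=1)  # QA concatenation
    cond = text_encoder(qa_ids)
    fake = generator(cond, torch.randn_like(cond))

    # Auxiliary signal: the frozen VQA model should answer the question
    # correctly when shown the generated image.
    q_emb = text_encoder(question_ids)
    loss_vqa = F.cross_entropy(vqa_model(fake, q_emb), answer_label)
    return adv_loss(fake, cond) + lambda_vqa * loss_vqa

# Example usage with dummy data (batch of 2):
enc, gen, vqa = TextEncoder(), Generator(), VQAModel()
for p in vqa.parameters():          # freeze the VQA model; gradients
    p.requires_grad_(False)         # still flow through the image input
q = torch.randint(0, 1000, (2, 8))
a = torch.randint(0, 1000, (2, 2))
label = torch.randint(0, 3000, (2,))
adv = lambda img, cond: img.mean() * 0  # placeholder adversarial term
loss = t2i_step(enc, gen, vqa, q, a, label, adv)
loss.backward()
```

In this sketch the VQA loss only rewards images that let a fixed answering model recover the correct answer, which is one plausible reading of how an external VQA model can serve as an image-text alignment signal.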
