
DeepBIBX: Deep Learning for Image Based Bibliographic Data Extraction

Akansha Bhardwaj; Dominique Mercier; Andreas Dengel; Sheraz Ahmed
In: 24th International Conference on Neural Information Processing (ICONIP-2017), November 14-18, Guangzhou, China, Vol. 10635, Pages 286-293, ISBN 978-3-319-70095-3, Springer, 10/2017.

Abstract

Extraction of structured bibliographic data from document images of non-native-digital academic content is a challenging problem with applications in the automation of library cataloging systems and in reference linking. Existing approaches discard visual cues: they convert the document image to text and then identify citation strings using trained segmentation models. Apart from requiring large amounts of training data, these methods are also language dependent. This paper presents a novel approach, DeepBIBX, which targets the problem from a computer vision perspective and uses deep learning to semantically segment the individual citation strings in a document image. DeepBIBX is based on deep Fully Convolutional Networks and uses transfer learning to extract bibliographic references from document images. Unlike existing approaches, which use textual content to semantically segment bibliographic references, DeepBIBX utilizes image-based contextual information, which makes it applicable to documents in any language. To gauge the performance of the presented approach, a dataset of 286 document images containing 5090 bibliographic references was collected. Evaluation results reveal that DeepBIBX outperforms the state-of-the-art method ParsCit for bibliographic reference extraction, achieving an accuracy of 84.9% compared to 71.7%. Furthermore, on the pixel classification task, DeepBIBX achieved a precision of 96.2% and a recall of 94.4%.
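
As a rough illustration of the pipeline the abstract describes, the sketch below shows how a Fully Convolutional Network initialised from pretrained weights (transfer learning) can label each pixel of a scanned page as citation string or background; individual references can then be recovered as connected components of the foreground mask. This is a minimal sketch under stated assumptions: the torchvision FCN-ResNet50 backbone, the two-class labelling, and the file name are illustrative choices, not details given in the abstract.

    # Minimal sketch, assuming PyTorch/torchvision: an FCN with a pretrained
    # ResNet-50 backbone (transfer learning) is adapted to a two-class
    # segmentation task -- citation-string pixels vs. background. The exact
    # backbone and label set used by DeepBIBX are not specified in the abstract.
    import torch
    from PIL import Image
    from torchvision import transforms
    from torchvision.models.segmentation import fcn_resnet50

    NUM_CLASSES = 2  # assumed labelling: 0 = background, 1 = citation string

    # Start from pretrained weights, then swap the final 1x1 classifier so the
    # network predicts NUM_CLASSES channels per pixel.
    model = fcn_resnet50(weights="DEFAULT")
    model.classifier[4] = torch.nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
    model.eval()  # fine-tuning on annotated page images would precede inference

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def segment_references(page_path: str) -> torch.Tensor:
        """Return an (H, W) map of per-pixel class labels for a scanned page."""
        page = Image.open(page_path).convert("RGB")
        batch = preprocess(page).unsqueeze(0)          # (1, 3, H, W)
        with torch.no_grad():
            logits = model(batch)["out"]               # (1, NUM_CLASSES, H, W)
        return logits.argmax(dim=1).squeeze(0)         # (H, W)

    # Example (hypothetical file name): references would then be separated as
    # connected components of the foreground mask.
    # mask = segment_references("reference_page.png")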