Workshop proceedings are now available at http://ceur-ws.org/Vol-2241/
This workshop will take place on Monday, 8 October 2018, from 14:00 to 18:00. Preliminary versions of the papers and the workshop presentations are linked below until the official proceedings are published.
14:00 - 14:10 | Opening and Welcome [slides]
14:10 - 15:00 | Invited Talk by Marco Rospocher:
Learning Expressive Ontological Concept Descriptions via Neural Networks [slides] [abstract]
15:00 - 15:30 | Muhammad Rahman and Tim Finin:
Understanding and Representing the Semantics of Large Structured Documents [slides] [paper]
15:30 - 16:00 | Coffee break
16:00 - 16:30 | Gengchen Mai, Krzysztof Janowicz and Bo Yan:
Combining Text Embedding and Knowledge Graph Embedding Techniques for Academic Search Engines [slides] [paper]
16:30 - 17:00 | Asan Agibetov and Matthias Samwald:
Global and Local Evaluation of Link Prediction Tasks with Neural Embeddings [slides] [paper]
17:00 - 17:30 | Szymon Wieczorek, Dominik Filipiak and Agata Filipowska:
Semantic Image-Based Profiling of Users' Interests with Neural Networks [slides] [paper]
This paper won the SemDeep-4 best paper award!
17:30 - 17:50 | Michael Cochez, Martina Garofalo, Jérôme Lenßen and Maria Angela Pellegrino:
A First Experiment on Including Text Literals in KGloVe [slides] [paper]
Invited talk: "Learning Expressive Ontological Concept Descriptions via Neural Networks"
Abstract. In this talk I will give an overview of recent work we have done on applying neural networks to learn expressive ontological concept descriptions from natural language text. The intuition behind the work is that the problem of encoding natural language definitions into Description Logic axioms can be framed as a syntactic transformation of the input sentence into a formula. We proposed two different approaches to implement this transformation. The first (presented in [1]) employs a process with two parallel phases (transduction and tagging). The second (submitted work, under review) tackles the problem as a neural machine translation task. Since no pre-existing dataset was available to adequately train neural networks for this task, we designed a data generation pipeline to produce datasets for training and evaluating the proposed architectures.
[1] Giulio Petrucci, Chiara Ghidini, and Marco Rospocher. 2016. Ontology Learning in the Deep. In Proceedings of the 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2016), Eva Blomqvist, Paolo Ciancarini, Francesco Poggi, and Fabio Vitali (Eds.), LNCS Vol. 10024. Springer, 480-495. DOI: https://doi.org/10.1007/978-3-319-49004-5_31