Publication
Evaluating Higher-Level and Symbolic Features in Deep Learning on Time Series: Towards Simpler Explainability
Leonid Schwenke; Till Stückemann; Martin Atzmueller
In: Proceedings of the International Work-Conference on Artificial Neural Networks (IWANN), Springer, 2025.
Abstract
Deep neural networks (DNNs) on time series data are not yet as mature in terms of performance and explainability as in other domains. Hence, more domain-specific approaches are needed, since time series data is less intuitive than natural language or image data. For this reason, non-deep-learning approaches apply standardized preprocessing or feature-extraction frameworks to boost performance and interpretability. While preprocessing has already shown promising results on DNNs, feature-extraction frameworks remain relatively underexplored. Recently, the advantages of symbolic abstraction for the explainability and performance of DNNs have also emerged, showing that disentangled and symbolic concepts are desirable for easier interpretability. In this work, we explore and analyse higher-level features in combination with symbolic approximation approaches on time series data. We perform an in-depth performance evaluation using a comprehensive set of datasets from the UCR/UEA time series repository and argue for the explainability benefits of our approach, which makes the input data more meaningful.
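For readers unfamiliar with symbolic approximation of time series, the minimal sketch below illustrates one common technique of this kind (a SAX-style discretization: z-normalization, piecewise aggregate averaging, and mapping segment means to a small alphabet). It is an illustrative assumption only, with hypothetical helper names, and not necessarily the specific approach evaluated in the paper.

```python
import numpy as np
from scipy.stats import norm

def sax_symbolize(series, n_segments=8, alphabet_size=4):
    """Illustrative SAX-style symbolic approximation (hypothetical helper,
    not the authors' implementation)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)            # z-normalize the series
    segments = np.array_split(x, n_segments)         # Piecewise Aggregate Approximation
    paa = np.array([seg.mean() for seg in segments]) # one mean value per segment
    # Breakpoints splitting the standard normal into equiprobable regions
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.digitize(paa, breakpoints)          # indices 0 .. alphabet_size-1
    return "".join(chr(ord("a") + s) for s in symbols)

# Example: a noisy sine wave reduced to a short symbolic string such as "ccdbaabc"
t = np.linspace(0, 2 * np.pi, 128)
print(sax_symbolize(np.sin(t) + 0.1 * np.random.randn(128)))
```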
