Project

DeeperSense

Deep-Learning for Multimodal Sensor Fusion

The main objective of DeeperSense is to significantly improve the environment-perception capabilities of service robots, thereby increasing their performance and reliability, enabling new functionality, and opening up new applications for robotics. DeeperSense adopts a novel approach: it uses Artificial Intelligence and data-driven Machine Learning / Deep Learning to combine non-visual and visual sensors, with the goal of raising their joint environment-perception capability beyond that of the individual sensors. As one of the most challenging application areas for robot operation and environment perception, underwater robotics was chosen as the domain in which to demonstrate and verify this approach.

The project implements Deep Learning solutions for three use cases that were selected for their societal relevance and are driven by concrete end-user and market needs. During the project, comprehensive training data are generated; the algorithms are trained on these data and verified both in the lab and in extensive field trials. The trained algorithms are optimized to run on the on-board hardware of underwater vehicles, enabling real-time execution in support of autonomous robot behaviour. Both the algorithms and the data will be made publicly available through online repositories embedded in European research infrastructures.

The DeeperSense consortium consists of renowned experts in robotics and marine robotics, artificial intelligence, and underwater sensing. The research and technology partners are complemented by end-users from the three use-case application areas. Among other goals, the dissemination strategy of DeeperSense aims to bridge the gap between the European robotics and AI communities and thus strengthen European science and technology.
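To make the fusion idea concrete, below is a minimal sketch of a late-fusion network in PyTorch that combines a camera image with a sonar image. The two-branch architecture, layer sizes, input shapes, and class count are illustrative assumptions, not the models actually developed in DeeperSense.

import torch
import torch.nn as nn

class SimpleFusionNet(nn.Module):
    """Illustrative two-branch late-fusion model (assumed architecture)."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Separate convolutional encoders, one per sensor modality.
        self.camera_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.sonar_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Late fusion: concatenate per-modality features, then classify.
        self.head = nn.Linear(16 + 16, num_classes)

    def forward(self, camera: torch.Tensor, sonar: torch.Tensor) -> torch.Tensor:
        features = torch.cat(
            [self.camera_encoder(camera), self.sonar_encoder(sonar)], dim=1
        )
        return self.head(features)

# Example forward pass with random tensors standing in for real sensor data.
model = SimpleFusionNet()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 4])

Late fusion is only one possible design; the project's actual networks, modalities, and training objectives are described in the publications listed below.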

Partners

UNIVERSITAT DE GIRONA (ES)
UNIVERSITY OF HAIFA (IL)
KRAKEN ROBOTIK GMBH (DE)
BUNDESMINISTERIUM DES INNERN (THW) (DE)
ISRAEL NATURE AND NATIONAL PARKS PROTECTION AUTHORITY (IL)
TECNOAMBIENTE SL (ES)

Sponsors

EU - European Union

Grant agreement no. 101016958 (H2020-ICT-2020-2, ICT-47-2020)

Publications about the project

Bilal Wehbe; Nimish Shrenik Shah; Miguel Bande Firvida; Christian Backe

In: OCEANS 2022, Hampton Roads. OCEANS MTS/IEEE Conference (OCEANS-2022), October 17-20, Pages 1-9, IEEE, 10/2022.

Alan Preciado-Grijalva; Bilal Wehbe; Miguel Bande Firvida; Matias Valdenegro-Toro

Self-supervised Learning for Sonar Image Classification. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), CVPR-2022, June 19-20, New Orleans, LA, USA, Pages 1498-1507, ISBN 978-1-6654-8739-9, IEEE, 2022.

OCEANS 2022 Hampton Roads (Eds.)

OCEANS MTS/IEEE Conference (OCEANS-2022), IEEE, 2022.
