Polyphonic Audio Event Detection: Multi-Label or Multi-Class Multi-Task Classification Problem?

Huy Phan, Thi Ngoc Tho Nguyen, Philipp Koch, Alfred Mertins

In: Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2022), pages 8877-8881, IEEE, 2022.


Polyphonic events are the main error source of audio event detection (AED) systems. In the deep-learning context, the most common approach to dealing with event overlaps is to treat the AED task as a multi-label classification problem. By doing this, we inherently consider multiple one-vs.-rest classification problems, which are jointly solved by a single (i.e. shared) network. In this work, to better handle polyphonic mixtures, we propose to frame the task as a multi-class classification problem by considering each possible label combination as one class. To circumvent the large number of classes arising from combinatorial explosion, we divide the event categories into multiple groups and construct a multi-task problem in a divide-and-conquer fashion, where each task is a multi-class classification problem. A network architecture is then devised for multi-class multi-task modelling. The network is composed of a backbone subnet and multiple task-specific subnets. The task-specific subnets are designed to learn time-frequency and channel attention masks to extract features for the task at hand from the common feature maps learned by the backbone. Experiments on the TUT-SED Synthetic 2016 dataset, which has a high degree of event overlap, show that the proposed approach yields more favorable performance than the common multi-label approach.
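The grouped label-combination encoding described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes consecutive, equal-sized groups and reads each group's binary sub-vector as an integer class index, so a group of k event categories yields a 2**k-way multi-class task. The function names and group configuration are illustrative assumptions.

```python
# Hedged sketch of mapping a multi-label AED target to per-group
# multi-class targets, as in the divide-and-conquer framing above.
# Group sizes/partitioning here are assumed, not taken from the paper.
import numpy as np

def split_into_groups(num_events, group_size):
    """Partition event category indices 0..num_events-1 into consecutive groups."""
    return [list(range(i, min(i + group_size, num_events)))
            for i in range(0, num_events, group_size)]

def multilabel_to_multiclass(y, groups):
    """Map a binary multi-label vector to one class index per group.

    Each group's binary sub-vector is interpreted as an integer, so a
    group of k events has 2**k classes (including 'no event active').
    """
    targets = []
    for g in groups:
        bits = y[g]
        targets.append(int(sum(int(b) << i for i, b in enumerate(bits))))
    return targets

# Example: 6 event categories split into groups of 3 gives two 8-way tasks.
groups = split_into_groups(6, 3)            # [[0, 1, 2], [3, 4, 5]]
y = np.array([1, 0, 1, 0, 0, 0])            # events 0 and 2 are active
print(multilabel_to_multiclass(y, groups))  # [5, 0]
```

Each per-group target can then be trained with a standard multi-class loss (e.g. cross-entropy) on its task-specific subnet, rather than a set of independent binary one-vs.-rest losses.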

German Research Center for Artificial Intelligence