
Publication

Learning to Intervene on Concept Bottlenecks

David Steinmann; Wolfgang Stammer; Felix Friedrich; Kristian Kersting
In: Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. International Conference on Machine Learning (ICML), OpenReview.net, 2024.

Abstract

While deep learning models often lack interpretability, concept bottleneck models (CBMs) provide inherent explanations via their concept representations. Moreover, they allow users to perform interventional interactions on these concepts by updating the concept values and thus correcting the predictive output of the model. Up to this point, these interventions were typically applied to the model just once and then discarded. To rectify this, we present concept bottleneck memory models (CB2Ms), which keep a memory of past interventions. Specifically, CB2Ms leverage a two-fold memory to generalize interventions to appropriate novel situations, enabling the model to identify errors and reapply previous interventions. This way, a CB2M learns to automatically improve model performance from a few initially obtained interventions. If no prior human interventions are available, a CB2M can detect potential mistakes of the CBM bottleneck and request targeted interventions. Our experimental evaluations on challenging scenarios like handling distribution shifts and confounded data demonstrate that CB2Ms are able to successfully generalize interventions to unseen data and can indeed identify wrongly inferred concepts. Hence, CB2Ms are a valuable tool for users to provide interactive feedback on CBMs, by guiding a user's interaction and requiring fewer interventions.
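The core idea of remembering and reapplying interventions can be illustrated with a minimal sketch. The class and parameter names below are hypothetical and this is not the authors' implementation: it simply stores past (predicted concepts, corrected concepts) pairs and reapplies a stored correction when a new input's predicted concept vector is close enough to a remembered mistake.

```python
# Illustrative sketch of an intervention memory in the spirit of CB2M.
# Hypothetical names; a toy distance-based lookup, not the paper's method.
import math


class InterventionMemory:
    def __init__(self, threshold=0.5):
        self.entries = []           # list of (predicted_concepts, corrected_concepts)
        self.threshold = threshold  # max Euclidean distance for a memory hit

    def add(self, concepts, corrected):
        """Store a human intervention: predicted concepts and their correction."""
        self.entries.append((list(concepts), list(corrected)))

    def _dist(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def apply(self, concepts):
        """Return the correction of the most similar remembered mistake,
        or the input unchanged if nothing is close enough."""
        best, best_d = None, self.threshold
        for stored, corrected in self.entries:
            d = self._dist(concepts, stored)
            if d <= best_d:
                best, best_d = corrected, d
        return best if best is not None else list(concepts)


memory = InterventionMemory(threshold=0.5)
# A user once corrected concept 1 from 0.9 down to 0.0 on this input:
memory.add([0.1, 0.9, 0.2], [0.1, 0.0, 0.2])

# A similar new input triggers the remembered intervention:
print(memory.apply([0.15, 0.85, 0.2]))  # → [0.1, 0.0, 0.2]
# A dissimilar input is left untouched:
print(memory.apply([0.9, 0.1, 0.9]))    # → [0.9, 0.1, 0.9]
```

In the paper, the memory is two-fold (it also supports detecting likely mistakes and requesting interventions); this sketch only shows the reapplication half under a simple distance threshold.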
