Publication
Masked Autoencoders are Efficient Continual Federated Learners
Subarnaduti Paul; Lars-Joel Frey; Roshni Ramanna Kamath; Kristian Kersting; Martin Mundt
In: Vincenzo Lomonaco; Stefano Melacci; Tinne Tuytelaars; Sarath Chandar; Razvan Pascanu (eds.). Conference on Lifelong Learning Agents (CoLLAs), 29 July - 1 August 2024, University of Pisa, Pisa, Italy. Pages 70-85, Proceedings of Machine Learning Research, Vol. 274, PMLR, 2024.
Abstract
Machine learning is typically framed from a perspective of i.i.d. and, more importantly, isolated data.
In part, federated learning lifts this assumption, as it sets out to solve the real-world challenge of
collaboratively learning a shared model from data distributed across clients. However, motivated
primarily by privacy and computational constraints, the fact that data may change, distributions drift,
or even tasks advance individually on clients, is seldom taken into account. The field of continual
learning addresses this separate challenge and first steps have recently been taken to leverage synergies
in distributed settings of a purely supervised nature. Motivated by these prior works, we posit that
such federated continual learning should be grounded in unsupervised learning of representations that
are shared across clients; in the loose spirit of how humans can indirectly leverage others’ experience
without exposure to a specific task. For this purpose, we demonstrate that masked autoencoders for
distribution estimation are particularly amenable to this setup. Specifically, their masking strategy
can be seamlessly integrated with task attention mechanisms to enable selective knowledge transfer
between clients. We empirically corroborate the latter statement through several continual federated
scenarios on both image and binary datasets.
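
The abstract does not spell out the architecture, but the core idea, combining MADE-style autoregressive masking with per-task attention over a shared representation, can be illustrated with a small sketch. The following PyTorch example is a minimal, hypothetical illustration, not the authors' implementation: the HAT-style sigmoid gating, the class name TaskAttentiveMADE, and all layer sizes are assumptions made for exposition.

    # A minimal sketch (assumed, not the paper's code): a MADE-style masked
    # autoencoder whose hidden units are gated by a learned per-task
    # attention vector, so tasks reuse only a subset of the shared model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TaskAttentiveMADE(nn.Module):
        def __init__(self, n_inputs=784, n_hidden=500, n_tasks=5):
            super().__init__()
            self.fc1 = nn.Linear(n_inputs, n_hidden)
            self.fc2 = nn.Linear(n_hidden, n_inputs)
            # Per-task embeddings from which (nearly) binary gates are
            # derived; HAT-style gating is an assumption here.
            self.task_embed = nn.Embedding(n_tasks, n_hidden)
            # MADE connectivity: assign a degree to every input and hidden
            # unit and keep only connections preserving autoregressive order.
            degrees_in = torch.arange(1, n_inputs + 1)
            degrees_h = torch.randint(1, n_inputs, (n_hidden,))
            self.register_buffer(
                "mask1", (degrees_h[:, None] >= degrees_in[None, :]).float())
            self.register_buffer(
                "mask2", (degrees_in[:, None] > degrees_h[None, :]).float())

        def forward(self, x, task_id, s=400.0):
            # Masked linear layer retains MADE's autoregressive property.
            h = torch.relu(
                F.linear(x, self.fc1.weight * self.mask1, self.fc1.bias))
            # Task attention: gate hidden units so that each task (and hence
            # each client working on it) activates only a relevant subset of
            # the shared representation, enabling selective transfer.
            t = torch.tensor(task_id, device=x.device)
            h = h * torch.sigmoid(s * self.task_embed(t))
            # Per-dimension Bernoulli logits for binary data.
            return F.linear(h, self.fc2.weight * self.mask2, self.fc2.bias)

On binary inputs, such a model could be trained with F.binary_cross_entropy_with_logits(model(x, task_id), x); in a federated round, the masked shared weights would be aggregated across clients while the task embeddings keep knowledge transfer selective. This remains a sketch under the stated assumptions; for the actual method, see the paper.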
