
Publication

SafeML: A Privacy-Preserving Byzantine-Robust Framework for Distributed Machine Learning Training

Meghdad Mirabi; René Klaus Nikiel; Carsten Binnig
In: Jihe Wang; Yi He; Thang N. Dinh; Christan Grant; Meikang Qiu; Witold Pedrycz (Eds.). 23rd IEEE International Conference on Data Mining Workshops - ICDMW 2023. International Workshop on Trustworthy Knowledge Discovery and Data Mining (TrustKDD-2023), December 1, Shanghai, China, Pages 207-216, IEEE, 2023.

Abstract

This paper introduces SafeML, a distributed machine learning framework that addresses privacy and Byzantine robustness concerns during model training. It employs secret sharing and data masking techniques to secure all computations, while also utilizing computational redundancy and robust confirmation methods to prevent Byzantine nodes from negatively affecting model updates at each iteration of model training. The theoretical analysis and preliminary experimental results demonstrate the security and correctness of SafeML for model training.
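The abstract mentions secret sharing as one of the techniques used to secure computations. As a rough illustration only (not the paper's actual protocol), additive secret sharing splits a value into random shares that reveal nothing individually and reconstruct the secret only when all shares are combined. The following is a minimal sketch under assumed parameters (field modulus, share count, function names are all hypothetical):

```python
import secrets

# Illustrative sketch of additive secret sharing over a prime field.
# This is NOT SafeML's protocol; modulus and function names are assumptions.
PRIME = 2**61 - 1  # example field modulus

def share(value: int, num_parties: int) -> list[int]:
    """Split `value` into additive shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(num_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recover the secret by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

if __name__ == "__main__":
    secret = 123456789
    shares = share(secret, num_parties=3)
    # Any single share is uniformly random; only the full set reveals the secret.
    assert reconstruct(shares) == secret
```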

Further Links