
Publication

Modular Arithmetic: Language Models Solve Math Digit by Digit

Tanja Bäumel; Daniil Gurgurov; Yusser Al Ghussin; Josef van Genabith; Simon Ostermann
In: Kentaro Inui; Sakriani Sakti; Haofen Wang; Derek F. Wong; Pushpak Bhattacharyya; Biplab Banerjee; Asif Ekbal; Tanmoy Chakraborty; Dhirendra Pratap Singh (Eds.). Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics. International Joint Conference on Natural Language Processing & Asia-Pacific Chapter of the Association for Computational Linguistics (IJCNLP-AACL-2025), Mumbai, India, Pages 1380-1409, ISBN 979-8-89176-303-6, The Asian Federation of Natural Language Processing and The Association for Computational Linguistics, 2025.

Abstract

While recent work has begun to uncover the internal strategies that Large Language Models (LLMs) employ for simple arithmetic tasks, a unified understanding of their underlying mechanisms is still lacking. We extend recent findings showing that LLMs represent numbers digit-wise and present evidence for the existence of digit-position-specific circuits that LLMs use to perform simple arithmetic tasks, i.e., modular subgroups of MLP neurons that operate independently on different digit positions (units, tens, hundreds). Notably, such circuits exist independently of model size and of tokenization strategy, i.e., both for models that encode longer numbers digit by digit and for models that encode them as one token. Using Feature Importance and Causal Interventions, we identify and validate the digit-position-specific circuits, revealing a compositional and interpretable structure underlying how LLMs solve arithmetic problems. Our interventions selectively alter the model's predictions at targeted digit positions, demonstrating the causal role of digit-position circuits in solving arithmetic tasks.
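The digit-wise, position-independent computation described in the abstract can be illustrated with a toy sketch (not the paper's method or models): addition decomposed into per-position steps over units, tens, and hundreds, where each position is handled by its own small computation and only a carry connects them. Function names here (`digits`, `digitwise_add`) are illustrative and not taken from the paper.

```python
def digits(n, width=3):
    """Split a non-negative integer into its digits, least significant first,
    e.g. 123 -> [3, 2, 1] (units, tens, hundreds)."""
    return [(n // 10**i) % 10 for i in range(width)]

def digitwise_add(a, b, width=3):
    """Add two numbers position by position, mimicking a per-digit-position
    computation: each position only sees its own digits plus an incoming carry."""
    da, db = digits(a, width), digits(b, width)
    carry, out = 0, 0
    for i in range(width):           # process units, then tens, then hundreds
        s = da[i] + db[i] + carry    # position-local computation
        out += (s % 10) * 10**i      # write this position's result digit
        carry = s // 10              # the only cross-position signal
    return out

print(digitwise_add(123, 456))  # 579
print(digitwise_add(185, 217))  # 402
```

Note that when no carries occur, the positions are fully independent, which is the kind of compositional structure the paper's causal interventions probe: perturbing one position's circuit should change only the corresponding output digit.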
