Publication
ObscuraCoder: Powering Efficient Code LM Pre-Training Via Obfuscation Grounding
Indraneil Paul; Haoyi Yang; Goran Glavaš; Kristian Kersting; Iryna Gurevych
In: The Thirteenth International Conference on Learning Representations (ICLR 2025), Singapore, April 24-28, 2025. OpenReview.net, 2025.
Abstract
Language models (LMs) have become a staple of the code-writing toolbox. Their pre-training recipe has, however, remained stagnant over recent years, barring occasional changes in data sourcing and filtering strategies. In particular, research exploring modifications to Code-LMs’ pre-training objectives, geared towards improving data efficiency and better disentangling syntax from semantics, has been noticeably sparse, especially compared with corresponding efforts on natural language LMs. In this work, we examine grounding on obfuscated code as a means of helping Code-LMs look beyond surface-form syntax and enhancing their pre-training sample efficiency. To this end, we compile ObscuraX, a dataset of approximately 55M source and obfuscated code pairs spanning seven languages. We then pre-train ObscuraCoder models, ranging in size from 255M to 2.8B parameters, on a 272B-token corpus that includes ObscuraX, and show that our obfuscation-based pre-training recipe yields consistent improvements in Code-LMs’ abilities over both vanilla autoregressive pre-training and existing de-obfuscation (DOBF) objectives. ObscuraCoder achieves sizeable gains across multiple tests of syntactic and semantic code understanding, along with improved capabilities in multilingual code completion, multilingual code commit summarization, and multi-purpose library-oriented code generation.
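
For intuition, the source-obfuscated pairs that obfuscation grounding relies on can be illustrated with DOBF-style identifier renaming, in which function and variable names are replaced by opaque placeholders so that a model must recover meaning from code structure rather than from naming cues. The sketch below is a minimal, Python-only illustration and not the paper's pipeline; the IdentifierObfuscator class and obfuscate helper are hypothetical names, and a fuller implementation would also handle classes, attributes, and the remaining languages.

# Minimal sketch of DOBF-style identifier obfuscation for Python
# (illustrative only; requires Python 3.9+ for ast.unparse).
import ast


class IdentifierObfuscator(ast.NodeTransformer):
    """Rename function names to FUNC_i and variable/argument names to VAR_i."""

    def __init__(self):
        self.func_map, self.var_map = {}, {}

    def _var(self, name):
        return self.var_map.setdefault(name, f"VAR_{len(self.var_map)}")

    def visit_FunctionDef(self, node):
        node.name = self.func_map.setdefault(node.name, f"FUNC_{len(self.func_map)}")
        self.generic_visit(node)
        return node

    def visit_arg(self, node):
        node.arg = self._var(node.arg)
        return node

    def visit_Name(self, node):
        # Rename names bound locally; leave unknown loads (e.g. builtins) intact.
        if isinstance(node.ctx, ast.Store) or node.id in self.var_map:
            node.id = self._var(node.id)
        return node


def obfuscate(source: str) -> str:
    return ast.unparse(IdentifierObfuscator().visit(ast.parse(source)))


original = """
def moving_total(values):
    total = 0.0
    totals = []
    for value in values:
        total += value
        totals.append(total)
    return totals
"""

print(obfuscate(original))
# def FUNC_0(VAR_0):
#     VAR_1 = 0.0
#     VAR_2 = []
#     for VAR_3 in VAR_0:
#         VAR_1 += VAR_3
#         VAR_2.append(VAR_1)
#     return VAR_2

Pairing the original snippet with its obfuscated rendering, as above, is the kind of grounding signal the abstract describes: the identifiers carry no information, so any supervision tied to the pair pushes the model toward the code's structure and semantics.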
