Paper Accepted at ICSE 2026 (Second Round)

I am excited to share that our work, ‘How Do Semantically Equivalent Code Transformations Impact Membership Inference on LLMs for Code?’, has been accepted to the Research Track of ICSE 2026.

Abstract
The success of large language models for code relies on vast amounts of code data, including public open-source repositories such as GitHub and private, confidential code from companies. This raises concerns about intellectual property compliance and the potential unauthorized use of license-restricted code. While membership inference (MI) techniques have been proposed to detect such unauthorized usage, their effectiveness can be undermined by semantically equivalent code transformations, which modify code syntax while preserving semantics. In this work, we systematically investigate whether semantically equivalent code transformation rules can be leveraged to evade MI detection. The results reveal that model accuracy drops by only 1.5% in the worst case for each rule, demonstrating that transformed datasets can effectively serve as substitutes for fine-tuning. Additionally, we find that one of the rules (RenameVariable) reduces MI success by 10.19%, highlighting its potential to obscure the presence of restricted code. To validate these findings, we conduct a causal analysis confirming that variable renaming has the strongest causal effect in disrupting MI detection. Notably, we find that combining multiple transformations does not further reduce MI effectiveness. Our results expose a critical loophole in license compliance enforcement for training large language models for code, showing that MI detection can be substantially weakened by transformation-based obfuscation techniques.
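For readers unfamiliar with such transformations, the sketch below illustrates what a RenameVariable-style rule might look like for Python source using the standard ast module. This is an illustration only, not the paper's actual tooling: the class name RenameVariables, the var_N naming scheme, and the simplifications noted in the docstring are assumptions made for this example.

```python
import ast


class RenameVariables(ast.NodeTransformer):
    """Rename local variables to opaque names (var_0, var_1, ...) while
    leaving program behaviour unchanged.

    Simplifications for this sketch: function parameters, attributes,
    imports, and builtins are left untouched, and no check is made for
    collisions with identifiers already named var_N.
    """

    def __init__(self):
        self.mapping = {}  # original name -> fresh name

    def _fresh(self, old_name):
        if old_name not in self.mapping:
            self.mapping[old_name] = f"var_{len(self.mapping)}"
        return self.mapping[old_name]

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store):
            # An assignment or loop target introduces (or reuses) a fresh name.
            node.id = self._fresh(node.id)
        elif isinstance(node.ctx, ast.Load) and node.id in self.mapping:
            # Later reads reuse the same fresh name, preserving semantics.
            node.id = self.mapping[node.id]
        return node


source = """
def average(values):
    total = 0
    for value in values:
        total += value
    return total / len(values)
"""

# ast.unparse requires Python 3.9+.
transformed = ast.unparse(RenameVariables().visit(ast.parse(source)))
print(transformed)
```

The transformed function computes exactly the same result as the original; transformations of this kind change the token sequence a model sees without changing program behaviour, which is why they can interfere with membership inference.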
Authors

Hua Yang

Alejandro Velasco

Thanh Le-Cong

Md Nazmul Haque

Bowen Xu

Denys Poshyvanyk

Published

December 17, 2025

Full Paper

Download the PDF.