Unveiling Memorization in Code Models

Zhou Yang, Zhipeng Zhao, Chenyu Wang, Jieke Shi, Dongsun Kim, Donggyun Han, David Lo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Scopus citations

Abstract

The availability of large-scale datasets, advanced architectures, and powerful computational resources has led to effective code models that automate diverse software engineering activities. These datasets usually consist of billions of lines of code from both open-source and private repositories. A code model can memorize and reproduce source code verbatim, which may contain vulnerabilities, sensitive information, or code under strict licenses, leading to potential security and privacy issues. This paper investigates an important problem: to what extent do code models memorize their training data? We conduct an empirical study to explore memorization in large pre-trained code models. Our study highlights that simply extracting 20,000 outputs (each having 512 tokens) from a code model can produce over 40,125 code snippets that are memorized from the training data. To provide a better understanding, we build a taxonomy of memorized contents with 3 categories and 14 subcategories. The results show that the prompts sent to the code models affect the distribution of memorized contents. We identify several key factors of memorization. Specifically, given the same architecture, larger models suffer more from the memorization problem, and a code model produces more memorized content when it is allowed to generate longer outputs. We also find a strong positive correlation between the number of times an output occurs in the training data and the number of times it appears in the generated outputs, which indicates that a potential way to reduce memorization is to deduplicate the training data. We then identify effective metrics that accurately infer whether an output contains memorized content. We also offer suggestions for dealing with memorization.
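To make the detection step concrete, the sketch below illustrates one simple way to flag verbatim memorization: treat a generated output as memorized if any window of K consecutive tokens also occurs somewhere in the training corpus. This is an illustrative assumption rather than the paper's exact pipeline; the window length K, the whitespace tokenizer, and the helper names (k_grams, build_training_index, contains_memorization) are all hypothetical.

```python
# Minimal sketch (assumed approach, not the authors' pipeline): flag a generated
# output as containing memorization if any window of K consecutive tokens also
# appears verbatim in the training corpus. K and the whitespace tokenizer are
# illustrative choices.
from typing import Iterable, List, Set, Tuple

K = 50  # assumed window length in tokens


def k_grams(tokens: List[str], k: int = K) -> Set[Tuple[str, ...]]:
    """All contiguous k-token windows of a token sequence."""
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}


def build_training_index(training_texts: Iterable[str], k: int = K) -> Set[Tuple[str, ...]]:
    """Index every k-gram that occurs anywhere in the training data."""
    index: Set[Tuple[str, ...]] = set()
    for text in training_texts:
        index |= k_grams(text.split(), k)
    return index


def contains_memorization(output: str, index: Set[Tuple[str, ...]], k: int = K) -> bool:
    """True if the generated output shares at least one k-gram with the training data."""
    return any(gram in index for gram in k_grams(output.split(), k))


# Toy usage: a 4-token window of the output matches the training data verbatim.
# index = build_training_index(["def add(a, b):\n    return a + b"], k=4)
# print(contains_memorization("def add(a, b):\n    return a + b", index, k=4))  # True
```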

Original language: English
Title of host publication: Proceedings - 2024 ACM/IEEE 46th International Conference on Software Engineering, ICSE 2024
Publisher: IEEE Computer Society
Pages: 867-879
Number of pages: 13
ISBN (Electronic): 9798400702174
DOIs
State: Published - 2024
Event: 46th ACM/IEEE International Conference on Software Engineering, ICSE 2024 - Lisbon, Portugal
Duration: 14 Apr 2024 - 20 Apr 2024

Publication series

Name: Proceedings - International Conference on Software Engineering
ISSN (Print): 0270-5257

Conference

Conference: 46th ACM/IEEE International Conference on Software Engineering, ICSE 2024
Country/Territory: Portugal
City: Lisbon
Period: 14/04/24 - 20/04/24

Keywords

  • Code Generation
  • Computing methodologies → Artificial intelligence
  • Memorization
  • Open-Source Software
  • Security and privacy
  • Software and its engineering → Software development techniques
