On Leakage of Code Generation Evaluation Datasets
Format: Journal Article
Language: English
Published: 10-07-2024
Summary: In this paper, we consider contamination by code generation test sets, in particular in their use in modern large language models. We discuss three possible sources of such contamination and show findings supporting each of them: (i) direct data leakage, (ii) indirect data leakage through the use of synthetic data, and (iii) overfitting to evaluation sets during model selection. To address this, we release Less Basic Python Problems (LBPP): an uncontaminated new benchmark of 161 prompts with their associated Python solutions. LBPP is released at https://huggingface.co/datasets/CohereForAI/lbpp.
DOI: 10.48550/arxiv.2407.07565
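
A minimal sketch of how the released LBPP benchmark might be loaded with the Hugging Face `datasets` library; the split name and column names below are assumptions and should be checked against the dataset card at the URL above.

```python
# Minimal sketch: loading the LBPP benchmark released with the paper.
# Assumptions: a "test" split exists and no config name is required;
# verify against https://huggingface.co/datasets/CohereForAI/lbpp.
from datasets import load_dataset

lbpp = load_dataset("CohereForAI/lbpp", split="test")

print(len(lbpp))        # expected to be on the order of 161 prompts
print(lbpp[0].keys())   # inspect the available fields before relying on any names
```
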