Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions
Format: Journal Article
Language: English
Published: 09-11-2024
Summary: Large-scale generative models have enabled the development of AI-powered code completion tools that assist programmers in writing code. However, like other AI-powered tools, AI code completions are not always accurate and can introduce bugs or even security vulnerabilities if not detected and corrected by a human programmer. One technique that has been proposed and implemented to help programmers identify potential errors is to highlight uncertain tokens. However, no empirical studies have explored the effectiveness of this technique, nor investigated the different, not-yet-agreed-upon notions of uncertainty in the context of generative models. We explore whether conveying information about uncertainty enables programmers to produce code more quickly and accurately when collaborating with an AI-powered code completion tool, and if so, which measure of uncertainty best fits programmers' needs. Through a mixed-methods study with 30 programmers, we compare three conditions: presenting the AI system's code completion alone, highlighting the tokens with the lowest likelihood of being generated by the underlying generative model, and highlighting the tokens with the highest predicted likelihood of being edited by a programmer. We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits, and is subjectively preferred by study participants. In contrast, highlighting tokens according to their probability of being generated provides no benefit over the baseline with no highlighting. We further explore the design space of conveying uncertainty in AI-powered code completion tools, and find that programmers prefer highlights that are granular, informative, interpretable, and not overwhelming.
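The generation-likelihood condition described above can be illustrated with a minimal sketch: given per-token probabilities assigned by a model, flag the tokens that fall below a confidence threshold. The function name, threshold, and the token/probability values here are illustrative assumptions, not taken from the paper's implementation.

```python
# Illustrative sketch of likelihood-based uncertainty highlighting.
# The tokens, probabilities, and threshold below are made-up example
# values, not data from the study.

def low_confidence_tokens(tokens, probs, threshold=0.5):
    """Return tokens whose model-assigned generation probability
    is below the given threshold (candidates for highlighting)."""
    return [t for t, p in zip(tokens, probs) if p < threshold]

completion = ["def", "sort", "(", "arr", ")", ":"]
token_probs = [0.97, 0.31, 0.92, 0.44, 0.95, 0.99]

# Tokens a likelihood-based highlighter would mark as uncertain:
print(low_confidence_tokens(completion, token_probs))  # ['sort', 'arr']
```

Note that, per the study's findings, this generation-probability signal underperformed a learned predictor of which tokens a programmer is likely to edit.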
DOI: 10.48550/arxiv.2302.07248