Static Code Analysis Alarms Filtering Reloaded: A New Real-World Dataset and its ML-Based Utilization

Bibliographic Details
Published in: IEEE Access, Vol. 10, pp. 55090–55101
Main Authors: Hegedűs, Péter; Ferenc, Rudolf
Format: Journal Article
Language:English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
Description
Summary: Even though Static Code Analysis (SCA) tools are integrated into many modern software building and testing pipelines, their practical impact is still seriously hindered by the excessive number of false positive warnings they usually produce. To cope with this problem, researchers have proposed several post-processing methods that aim to filter out false hits (or, equivalently, identify "actionable" warnings) after the SCA tool has produced its results. However, we found that most of these approaches are targeted (i.e., deal with only a few SCA warning types) and evaluated on synthetic benchmarks or small-scale, manually collected datasets (i.e., with typical sample sizes of several hundred). In this paper, we present a dataset containing 224,484 real-world warning samples fixed (true positives) or explicitly ignored (false positives) by the developers, which we collected from 9,958 different open-source Java projects on GitHub using a data mining approach. Additionally, we utilize this rich dataset to train a code embedding-based machine learning model for filtering false positive warnings produced by 160 different rule checks of SonarQube, one of the most widely adopted SCA tools today. This is the most extensive real-world public dataset and study we know of in this area. Our method classifies SonarQube warnings with an accuracy of 91% (best F1-score of 81.3% and AUC of 95.3%).
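The abstract frames warning filtering as binary classification (a warning fixed by developers counts as a true positive, an explicitly ignored one as a false positive) and reports accuracy and F1-score. As a minimal illustration of how those two metrics are computed for such labels, the following sketch uses made-up toy labels and predictions, not data from the paper's dataset:

```python
# Toy illustration of the evaluation metrics used for warning classification.
# The label/prediction vectors below are hypothetical, not from the dataset.

def accuracy(y_true, y_pred):
    """Fraction of warnings classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# 1 = actionable warning (fixed by developers), 0 = false positive (ignored)
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(accuracy(y_true, y_pred))  # 0.75
print(f1_score(y_true, y_pred))  # 0.75
```

AUC (area under the ROC curve), the third metric reported, additionally requires the classifier's continuous scores rather than hard 0/1 predictions, so it is omitted from this sketch.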
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3176865