Limitations of machine learning for building energy prediction: ASHRAE Great Energy Predictor III Kaggle competition error analysis

Bibliographic Details
Published in: HVAC&R Research, Vol. 28, no. 5, pp. 610-627
Main Authors: Miller, Clayton; Picchetti, Bianca; Fu, Chun; Pantelic, Jovan
Format: Journal Article
Language: English
Published: Philadelphia: Taylor & Francis, 27-05-2022
Description
Summary: Research is needed to explore the limitations and potential for improvement of machine learning for building energy prediction. With this aim, the ASHRAE Great Energy Predictor III (GEPIII) Kaggle competition was launched in 2019. This effort was the largest building energy meter machine learning competition of its kind, with 4370 participants who submitted 39,403 predictions. The test dataset included two years of hourly whole building readings from 2380 meters in 1448 buildings at 16 locations. This paper analyzes the various sources and types of residual model error from an aggregation of the competition's top 50 solutions. This analysis reveals the limitations for machine learning using the standard model inputs of historical meter, weather, and basic building metadata. The errors are classified according to timeframe, behavior, magnitude, and incidence in single buildings or across a campus. The results show machine learning models have errors within a range of acceptability (RMSLE_scaled ≤ 0.1) on 79.1% of the test data. Lower magnitude (in-range) model errors (0.1 < RMSLE_scaled ≤ 0.3) occur in 16.1% of the test data. These errors could be remedied using innovative training data from onsite and Web-based sources. Higher magnitude (out-of-range) errors (RMSLE_scaled > 0.3) occur in 4.8% of the test data and are unlikely to be accurately predicted.
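
The error bands quoted in the abstract are expressed in terms of a scaled root mean squared logarithmic error (RMSLE_scaled). The exact scaling used in the paper is not given in this record, so the sketch below computes the standard, unscaled RMSLE and applies the published thresholds only for illustration; the function names and the sample meter values are hypothetical.

import numpy as np

def rmsle(y_true, y_pred):
    # Standard root mean squared logarithmic error; log1p handles zero readings.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

def classify_error(rmsle_scaled):
    # Buckets an error value into the three bands described in the abstract.
    # Note: the paper applies these thresholds to a *scaled* RMSLE, whose
    # normalization is not defined in this record.
    if rmsle_scaled <= 0.1:
        return "acceptable"
    elif rmsle_scaled <= 0.3:
        return "in-range (potentially remediable)"
    else:
        return "out-of-range (unlikely to be predicted accurately)"

# Hypothetical hourly whole-building meter readings (kWh) and model predictions.
actual = [12.0, 15.5, 14.2, 30.1, 28.7]
predicted = [11.5, 16.0, 13.8, 25.0, 29.5]

score = rmsle(actual, predicted)
print(f"RMSLE = {score:.3f} -> {classify_error(score)}")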
ISSN: 2374-4731, 2374-474X
DOI: 10.1080/23744731.2022.2067466