Simulating runoff under changing climatic conditions: Revisiting an apparent deficiency of conceptual rainfall‐runoff models
Published in: Water Resources Research, Vol. 52, No. 3, pp. 1820-1846
Main Authors:
Format: Journal Article
Language: English
Published: Washington: John Wiley & Sons, Inc., 01-03-2016
Summary: Hydrologic models have the potential to be useful tools in planning for future climate variability. However, recent literature suggests that the current generation of conceptual rainfall-runoff models tends to underestimate the sensitivity of runoff to a given change in rainfall, leading to poor performance when evaluated over multiyear droughts. This research revisited that conclusion, investigating whether the observed poor performance could be due to insufficient model calibration and evaluation techniques. We applied an approach based on Pareto optimality to explore trade-offs between model performance in different climatic conditions. Five conceptual rainfall-runoff model structures were tested in 86 catchments in Australia, for a total of 430 Pareto analyses. The Pareto results were then compared with results from a commonly used model calibration and evaluation method, the Differential Split Sample Test. We found that the latter often missed potentially promising parameter sets within a given model structure, giving a false negative impression of the capabilities of the model. This suggests that models may be more capable under changing climatic conditions than previously thought. Of the 282 [347] cases of apparent model failure under the split sample test using the lower [higher] of the two model performance criteria trialed, 155 [120] were false negatives. We discuss potential causes of the remaining model failures, including the role of data errors. Although the Pareto approach proved useful, our aim was not to suggest an alternative calibration strategy but to critically assess existing methods of model calibration and evaluation. We recommend caution when interpreting split sample results.
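The Pareto-optimality idea described in the summary can be illustrated with a minimal sketch: each candidate parameter set is scored against two objectives (model performance in two contrasting climate periods, e.g. a Nash-Sutcliffe efficiency in a dry and a wet period, higher is better), and only the non-dominated sets are kept. The function name and the sample scores below are illustrative assumptions, not the paper's implementation.

```python
def pareto_front(scores):
    """Return indices of non-dominated points when maximizing both objectives.

    scores: list of (dry_period_score, wet_period_score) tuples,
    one per candidate parameter set; higher is better for both.
    """
    front = []
    for i, (a1, a2) in enumerate(scores):
        # A point is dominated if some other point is at least as good
        # on both objectives and strictly better on at least one.
        dominated = any(
            (b1 >= a1 and b2 >= a2) and (b1 > a1 or b2 > a2)
            for j, (b1, b2) in enumerate(scores)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Illustrative (dry-period NSE, wet-period NSE) for four parameter sets
scores = [(0.80, 0.60), (0.70, 0.75), (0.85, 0.50), (0.60, 0.55)]
print(pareto_front(scores))  # → [0, 1, 2]; the last set is dominated
```

A split-sample test, by contrast, would calibrate on one period and pick a single best parameter set, which can miss members of this front that trade a little performance in the calibration period for robustness in the other.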
Key Points:
Models may be more capable under changing climatic conditions than previously thought
Common calibration methods often fail to identify parameter sets that are robust
Caution is needed when interpreting the results of split sample testing
ISSN: 0043-1397; 1944-7973
DOI: 10.1002/2015WR018068