Choosing the Optimal Numerical Precision for Data Assimilation in the Presence of Model Error

Bibliographic Details
Published in: Journal of Advances in Modeling Earth Systems, Vol. 10, No. 9, pp. 2177-2191
Main Authors: Hatfield, Sam, Düben, Peter, Chantry, Matthew, Kondo, Keiichi, Miyoshi, Takemasa, Palmer, Tim
Format: Journal Article
Language: English
Published: Washington: John Wiley & Sons, Inc., 01-09-2018
Description
Summary: The use of reduced numerical precision within an atmospheric data assimilation system is investigated. An atmospheric model with a spectral dynamical core is used to generate synthetic observations, which are then assimilated back into the same model using an ensemble Kalman filter. The effect on the analysis error of reducing precision from 64 bits to only 22 bits is measured and found to depend strongly on the degree of model uncertainty within the system. When the model used to generate the observations is identical to the model used to assimilate observations, the reduced-precision results suffer substantially. However, when model error is introduced by changing the diffusion scheme in the assimilation model or by using a higher-resolution model to generate observations, the difference in analysis quality between the two levels of precision is almost eliminated. Lower-precision arithmetic has a lower computational cost, so lowering precision could free up computational resources in operational data assimilation and allow an increase in ensemble size or grid resolution.

Plain Language Summary: In order to produce a weather forecast, we must have a good estimate of the current state of the atmosphere. We can observe the atmosphere using satellites and other instruments, but observations alone do not give the whole picture. We must combine observational data with computational models in order to estimate the atmospheric state comprehensively. This process is known as data assimilation. Data assimilation is very computationally expensive, as it requires the atmospheric model to be run many times over. This paper proposes a novel method to reduce the computational cost of data assimilation: reduced-precision calculations. Reducing the precision of the calculations inside the atmospheric model might be expected to degrade the data assimilation process. However, our atmospheric models are inherently imperfect, because they do not capture all of the important scales of atmospheric motion. Therefore, the degradation from reducing precision is not significant when compared with this unavoidable error. The computational savings that we make by reducing precision could be reinvested to improve the data assimilation system and therefore the skill of weather forecasts. For example, we could run more simulations (i.e., use a larger ensemble) for no extra cost.

Key Points:
- Lowering precision could accelerate an ensemble Kalman filter
- The level of precision used should fit the level of model error
- We perform tests with a spectral dynamical core
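To illustrate the core idea of the abstract, the sketch below emulates reduced floating-point precision in software by truncating significand bits of IEEE-754 doubles. This is not the paper's actual emulator, and interpreting the quoted bit count as the number of retained significand bits is an assumption made here for illustration; the function name and setup are hypothetical.

```python
import numpy as np

def truncate_significand(x, sbits):
    """Emulate reduced precision by zeroing the lowest (52 - sbits)
    significand bits of each IEEE-754 double. sbits = 52 keeps full
    double precision; smaller values coarsen the representable numbers.
    Note: this truncates rather than rounds, so it is only a rough
    stand-in for a true reduced-precision emulator."""
    x = np.atleast_1d(np.asarray(x, dtype=np.float64))
    bits = x.view(np.uint64)                 # reinterpret the raw bit patterns
    mask = np.uint64(0xFFFFFFFFFFFFFFFF) << np.uint64(52 - sbits)
    return (bits & mask).view(np.float64)    # zeroed low bits, back to floats

# Coarser precision introduces a small, bounded representation error:
x = np.linspace(0.0, 1.0, 5)
err = np.abs(truncate_significand(x, 22) - x)
```

Because only the significand is truncated, the relative error stays below one unit in the last retained place, which is why a model dominated by larger sources of error (model error, observation error) may tolerate such coarsening.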
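The abstract's assimilation method, an ensemble Kalman filter, can be sketched in its simplest stochastic (perturbed-observations) form. This is a generic textbook formulation, not the specific filter configuration used in the paper; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_err_std, rng):
    """One stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_members, n_state) array of background states.
    obs: (n_obs,) observation vector; H: (n_obs, n_state) linear
    observation operator; obs_err_std: observation error std. dev."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)       # state anomalies
    Y = X @ H.T                                # observation-space anomalies
    R = (obs_err_std ** 2) * np.eye(H.shape[0])
    Pyy = Y.T @ Y / (n - 1) + R                # innovation covariance
    Pxy = X.T @ Y / (n - 1)                    # state-obs cross covariance
    K = Pxy @ np.linalg.inv(Pyy)               # Kalman gain
    # Perturb the observations so analysis spread is statistically correct:
    perturbed = obs + rng.normal(0.0, obs_err_std, size=(n, H.shape[0]))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T        # analysis ensemble
```

Every member propagation and every matrix product above is a candidate for reduced-precision arithmetic, which is where the computational savings discussed in the summary would come from.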
ISSN: 1942-2466
DOI: 10.1029/2018MS001341