HEPEX Blog Post #4: Linking flows to flood hazard

The Global Flood Awareness System forecasts when there will be extreme flows within a river, but it doesn’t provide any information about the potential hazard, i.e. the extent and depth of the flood, which are the most relevant pieces of information for end-users. In my HEPEX blog post this month I discussed a hierarchy of ways of linking the flow forecast to the flood hazard, something I have been thinking about recently as part of a paper I am writing.

“With respect to flooding at least, to have value for decision-making we need to link the forecast of a river flow of a particular magnitude with the hazard posed by a flow of that size. When we are tasked with forecasting a flood, what is really meant is that there needs to be some indication of the area that will be flooded, not only of what is going on in the river channel. In practice, this means somehow linking the meteorology to the hydraulics of the floodplain.

How is this currently done?

1. Linking with historical datasets

Collection of historical data can provide a very tangible baseline for decision-makers: e.g. in 2014 the Environment Agency communicated that ‘flood levels on the Thames are likely to reach those of 2007 but not 2003’ in Reading, UK. This works better where there is clear information about the historical hazard (observed inundation extent or levels) rather than about the impact, as the impact of a flood is compounded by many other factors.

2. Linking with offline maps produced by inundation (hydraulic) models

Given an adequate flow record, topographic data (NASA have recently released 30m-pixel resolution global data), and information about channel width and depth, it is possible to create ‘look-up’ maps of flood extent and depth for a particular river flow using offline hydraulic modelling. This approach may not work too well for complex river systems where the dynamics of the flood, i.e. the timing and magnitude of different tributaries, would be important.
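In practice, the ‘look-up’ approach reduces to matching a forecast flow against a set of thresholds for which inundation maps have been modelled offline. A minimal sketch of that matching step, assuming hypothetical thresholds and map filenames (none of these come from a real system):

```python
from bisect import bisect_right

# Hypothetical look-up table: flow threshold (m^3/s) -> precomputed
# inundation map produced offline by a hydraulic model. The values and
# filenames are illustrative only.
LOOKUP = [
    (500.0,  "extent_q500.tif"),
    (1000.0, "extent_q1000.tif"),
    (2000.0, "extent_q2000.tif"),
]

def flood_map_for_flow(flow_m3s):
    """Return the precomputed map for the largest threshold not exceeding
    the forecast flow, or None if no threshold is breached."""
    thresholds = [q for q, _ in LOOKUP]
    i = bisect_right(thresholds, flow_m3s)
    return LOOKUP[i - 1][1] if i > 0 else None
```

Note that a static table like this is exactly why the approach struggles with dynamics: it cannot distinguish two events with the same peak flow but different tributary timing.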

However, even where there is adequate observed data to drive the models, there are questions about how the hydrometeorological and hydraulic models should be linked. Hydromet modellers bypass the potential problem of the model not predicting the correct rainfall magnitude by adopting a model-climatology approach, whereby a 1-in-100-year flood in the model world is assumed to equate to a 1-in-100-year flood in the real world. However, we don’t really know whether this assumption holds true, and we can imagine it isn’t likely to: the model climatology is usually derived from a reforecast dataset, perhaps spanning 30 years, while the observed climatology depends on the length of the observed record. Extrapolating a 1-in-100-year flood from each of these would likely yield very different results.
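The sampling problem above can be illustrated numerically. The sketch below fits a Gumbel distribution to annual-maximum flows by the method of moments and extrapolates the 100-year return level; even drawing a 30-year and a 100-year sample from the *same* parent distribution gives two noticeably different estimates. The parent parameters are made up for illustration and the Gumbel/method-of-moments choice is mine, not something prescribed by GloFAS:

```python
import math
import random
from statistics import mean, stdev

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_return_level(annual_maxima, T=100):
    """Fit a Gumbel distribution by the method of moments and return the
    T-year return level x_T = mu - beta * ln(-ln(1 - 1/T))."""
    s = stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi          # scale parameter
    mu = mean(annual_maxima) - EULER_GAMMA * beta  # location parameter
    return mu - beta * math.log(-math.log(1 - 1 / T))

# Draw two records from the same (invented) Gumbel parent: a 30-year
# "reforecast-length" record and a 100-year "observed-length" record.
rng = random.Random(42)
draw = lambda: 1000 - 200 * math.log(-math.log(rng.random()))
reforecast = [draw() for _ in range(30)]
observed = [draw() for _ in range(100)]

print(gumbel_return_level(reforecast), gumbel_return_level(observed))
```

The gap between the two printed estimates is pure sampling uncertainty; in reality the model and observed climatologies also differ systematically, so the mismatch in the extrapolated 1-in-100-year flood would be larger still.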

What might the gold standard be?

3. Real-time inundation forecasting

The gold-standard aim is to be able to fully couple the meteorological model to the inundation model, thereby enabling real-time forecasts of flood inundation and depth. This would have the benefits over #2 of not only being able to represent dynamics, but also not being so reliant on scenarios that have already been modelled. It is of course computationally expensive and relies on there being adequate topographic and channel structure data for flow routing.



Perhaps most importantly, #2 retains the advantage that linking model climatology to observed climatology bypasses the problem of forecasting the correct rainfall (and hence flow) magnitude. For the coupled model chain to predict anywhere near the correct inundation extent, we would need to be confident that the hydromet model is getting the rainfall magnitudes right, and that there is no drift with lead time.

What should we be focussing on?

As well as the other challenges I have already highlighted, I am concerned that producing forecasts of expected inundation extent will pose communication challenges; providing a very precise estimate often seems to invite overconfidence in the forecast itself.

The inundation map provided would need to communicate the whole accumulation of uncertainty in the model cascade. Perhaps providing statements like the Environment Agency did for the River Thames earlier this year is the best representation of the certainty in our knowledge? Especially for forecasting using the Global Flood Awareness System (GloFAS), can we really be confident that the impact of sub-grid scale topography and human-control of flood defence structures is minimal?

If I’m going to stick my neck out (and try to get some discussion going), operationally I think we should be focussing on #2, providing scenarios of what might happen if particular thresholds were breached. This means we need research on how to link two different model climatologies. I think we need to carry out a lot more research to understand the model climatology in general…

However, though I don’t have a lot of confidence that we are getting the magnitude of the flow correct (this certainly applies for GloFAS, maybe less so for other systems), I do believe that #3 is a grand scientific challenge that is worth considering from a research perspective.”
