HEPEX Blog Post #4: Linking flows to flood hazard

The Global Flood Awareness System forecasts when there will be extreme flows within a river, but doesn’t provide the end-user with any information about the potential hazard, i.e. the extent and depth of the flood, which is the information most relevant to end-users. In this month’s HEPEX blog post I discussed a hierarchy of ways of linking the flow forecast to the flood hazard, something I have been thinking about recently as part of a paper I am writing.

“With respect to flooding at least, to have value for decision-making we need to link the forecast of a river flow of a particular magnitude with the hazard posed by a flow of that size. When we are tasked with forecasting a flood, what is really meant is that there needs to be some indication of the area that will be flooded, and not only of what is going on in the river channel. In practice, this means somehow linking the meteorology to the hydraulics of the floodplain.

How is this currently done?

1. Linking with historical datasets

Collection of historical data can provide a very tangible baseline for decision-makers, e.g. in 2014 the Environment Agency communicated that ‘flood levels on the Thames are likely to reach those of 2007 but not 2003’ in Reading, UK. This works better where there is clear information about the historical hazard (observed inundation extent or levels) rather than about the impact, as the impact of a flood is compounded by many other factors.

2. Linking with offline maps produced by inundation (hydraulic) models

Given an adequate flow record, topographic data (NASA have recently released 30m-pixel resolution global data), and information about channel width and depth, it is possible to create ‘look-up’ maps of flood extent and depth for a particular river flow using offline hydraulic modelling. This approach may not work well for complex river systems, where the dynamics of the flood, i.e. the timing and magnitude of different tributaries, are important.
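The look-up idea can be sketched in a few lines of code. This is a minimal illustration only; the thresholds and file names are hypothetical placeholders for maps that would come from offline hydraulic runs:

```python
import bisect

# Hypothetical pre-computed flood extent maps from offline hydraulic runs,
# keyed by the river flow (m^3/s) each scenario was driven with.
PRECOMPUTED = [
    (500.0, "extent_map_500.tif"),
    (1000.0, "extent_map_1000.tif"),
    (2000.0, "extent_map_2000.tif"),
    (4000.0, "extent_map_4000.tif"),
]

def lookup_extent(forecast_flow):
    """Return the pre-computed map for the highest modelled flow not
    exceeding the forecast, or None if the forecast is below all scenarios."""
    thresholds = [flow for flow, _ in PRECOMPUTED]
    i = bisect.bisect_right(thresholds, forecast_flow)
    if i == 0:
        return None  # forecast flow smaller than any modelled scenario
    return PRECOMPUTED[i - 1][1]
```

Note that such a static look-up can only ever return a scenario that has already been modelled, which is exactly the limitation for dynamic, multi-tributary floods described above.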

However, even where there is adequate observed data to drive the models, there are questions about how the hydrometeorological and hydraulic models should be linked. Hydromet modellers bypass the potential issue of the model not predicting the correct rainfall magnitude by adopting a model climatology approach, whereby a 1 in 100 year flood within the model world is assumed to equate to a 1 in 100 year flood within the real world. However, we don’t really know whether this assumption holds. We can imagine it isn’t likely to; the model climatology is usually derived from a reforecast dataset, perhaps spanning 30 years, while the observed climatology depends on the length of the observed record. Extrapolating a 1 in 100 year flood from each of these would likely yield very different results.
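To see why record length alone makes these extrapolations disagree, here is a minimal sketch with entirely synthetic numbers: a Gumbel distribution is fitted by the method of moments to two samples of annual-maximum flows drawn from the same underlying distribution, one 30 years long (a “reforecast”) and one 100 years long (an “observed record”), and a 100-year return level is read off each:

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_return_level(annual_maxima, return_period):
    """Fit a Gumbel distribution by the method of moments and return the
    flow magnitude with the given return period (e.g. 100 years)."""
    mean = statistics.mean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    beta = std * math.sqrt(6) / math.pi   # scale parameter
    mu = mean - EULER_GAMMA * beta        # location parameter
    p = 1.0 - 1.0 / return_period         # annual non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Synthetic annual maxima drawn from the SAME underlying climate.
random.seed(42)
def sample_gumbel(n, mu=1000.0, beta=200.0):
    return [mu - beta * math.log(-math.log(random.random())) for _ in range(n)]

short_record = sample_gumbel(30)   # reforecast-length sample
long_record = sample_gumbel(100)   # observed-length sample

print(gumbel_return_level(short_record, 100))
print(gumbel_return_level(long_record, 100))
```

Even with no model bias at all, the two 100-year estimates differ through sampling variability alone; add systematic model error and the assumption that a model-world and a real-world 1 in 100 year flood coincide looks shakier still.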

What might the gold standard be?

3. Real-time inundation forecasting

The gold-standard aim is to be able to fully couple the meteorological model to the inundation model, thereby enabling real-time forecasts of flood inundation and depth. This would have the benefits over #2 of not only being able to represent dynamics, but also not being so reliant on scenarios that have already been modelled. It is of course computationally expensive and relies on there being adequate topographic and channel structure data for flow routing.



Perhaps most importantly, the advantage of #2 is that linking model climatology to observed climatology bypasses the problem of forecasting the correct rainfall (and hence flow) magnitude. For #3, we would need to be confident that the hydromet model is getting the rainfall magnitudes correct, and that there is no drift with lead time, to be confident that the coupled model chain would predict anywhere near the correct inundation extent.
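The climatology-linking idea behind #2 can be sketched as a simple empirical quantile (rank) mapping. This is an illustrative toy, not any operational GloFAS procedure: a forecast flow is located by its rank within the model climatology, and the same quantile is then read off the observed climatology:

```python
def quantile_map(value, model_clim, obs_clim):
    """Map a forecast value onto the observed climatology via its rank
    (empirical non-exceedance probability) in the model climatology."""
    model_sorted = sorted(model_clim)
    obs_sorted = sorted(obs_clim)
    # empirical rank of the forecast value within the model climatology
    rank = sum(1 for v in model_sorted if v <= value) / len(model_sorted)
    # read off the same quantile from the observed climatology
    idx = min(int(rank * len(obs_sorted)), len(obs_sorted) - 1)
    return obs_sorted[idx]

# A toy model that runs systematically "wet": its flows are double the
# observed ones, but the quantile mapping recovers a sensible magnitude.
model_clim = [2 * q for q in range(100, 200)]
obs_clim = list(range(100, 200))
print(quantile_map(300, model_clim, obs_clim))
```

The catch, as noted above, is that this only works within the range the climatologies sample well; for the rare events that matter most, both climatologies are thin and the mapping rests on extrapolation.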

What should we be focussing on?

As well as the other challenges that I have already highlighted, I am concerned that producing forecasts of expected inundation extent will pose communication challenges; a very precise estimate often seems to invite overconfidence in the forecast itself.

The inundation map provided would need to communicate the whole accumulation of uncertainty in the model cascade. Perhaps statements like the one the Environment Agency provided for the River Thames earlier this year are the best representation of the certainty of our knowledge? Especially for forecasting with the Global Flood Awareness System (GloFAS), can we really be confident that the impact of sub-grid-scale topography and human control of flood defence structures is minimal?

If I’m going to stick my neck out (and try to get some discussion going), operationally I think we should be focussing on #2, providing scenarios of what might happen if particular thresholds were breached. This means we need research on how to link two different model climatologies. I think we need to carry out a lot more research to understand the model climatology in general…

However, though I don’t have a lot of confidence that we are getting the magnitude of the flow correct (this certainly applies for GloFAS, maybe less so for other systems), I do believe that #3 is a grand scientific challenge that is worth considering from a research perspective.”

HEPEX Blog Post #3: Developing HEPS for humanitarian action

A short piece for the HEPEX blog putting together some thoughts I have gathered over the first year of my fellowship, about the interdisciplinary challenges associated with using flood forecasts for humanitarian action:

Villagers evacuated before Typhoon Phailin, India

“Within the HEPEX community I’m sure we would all like to see our research and forecasting efforts contributing to better preparedness before disasters; hopefully saving lives and livelihoods. We also know that in many situations our forecast models have seen a flood coming well in advance.

But for many disasters, a good forecast has not led to anything more than humanitarian organisations spreading the word that something is imminent, when many might hope that aid could be prepositioned or evacuations carried out – something that the Indian government managed so effectively prior to Typhoon Phailin (though post-disaster recovery was arguably less successful). Moving supplies or evacuating people by road before a flood is much cheaper than using a helicopter during the disaster.

Ultimately, funding is required to evacuate or to preposition supplies. Yet despite the known benefits of acting before a disaster strikes, donors are reluctant to release money when there is no certainty that the disaster will occur, and so a paradigm shift is needed to convince donors of the benefit of acting early even though the event will not materialise on every occasion they do so. Coughlan de Perez et al. have an excellent paper, currently in NHESS discussions, on how the Red Cross are carrying out pilot schemes to effect this shift, but little work has been undertaken elsewhere on how forecasts could be used.

This is perhaps because of another issue surrounding the use of forecasts: humanitarian action usually falls into one of two categories, either response to a disaster that has already happened, or long-term disaster risk reduction. With funding channelled through these two functions, who takes responsibility for forecast-based action?

These are perhaps problems that do not fall within the typical HEPS-scientist skillset, but solving them is vital if we want to see our models used. I believe that we need to do our best from the modelling side to ensure that pilot schemes are successful, but we also need to appreciate that the successful use of forecasts is not solely a scientific problem related to model skill. How would you feel if your charity donation went towards preparing for a disaster that never happened?”

HEPEX Blog Post #1: Hazards, risks, advice, how far should we go?

I wrote this HEPEX blog post in October 2013 to cover a discussion of the same name at the European Meteorological Society annual conference held in Reading. The discussion centred on the question of how to ensure that end-users of forecasts for extreme events were using them effectively, or making the ‘correct’ decisions. I found the discussion about the need for interdisciplinary work on the subject quite interesting:

“The EMS-ECAM meeting was largely a science-focussed event, and it was clear that a ‘social science’ input was missing from the plenary discussion. Additionally, while there were many comments about the need for social science, this ‘need’ remained quite vague and undefined. Perhaps the meteorological and hydrological communities would benefit from both a better understanding and awareness of what social scientists have to offer and, subsequently, from help to better define the questions that need to be answered?”

A video of the panel’s statements is now available online, with another video for the audience’s questions and statements.