HEPEX Blog Post #4: Linking flows to flood hazard

The Global Flood Awareness System forecasts when there will be extreme flows in a river, but it doesn’t provide the end-user with any information about the potential hazard, i.e. the extent and depth of the flood, which is the most relevant information for decision-making. In this month’s HEPEX blog post I discussed a hierarchy of different ways of linking the flow forecast to the flood hazard, something I have been thinking about recently as part of a paper I am writing.


“With respect to flooding at least, to have value for decision-making we need to link the forecast of a particular magnitude of river flow with the hazard posed by a flow of that size. When we are tasked with forecasting a flood, what is really meant is that we need some indication of the area that will be flooded, not only of what is going on in the river channel. In practice, this means somehow linking the meteorology to the hydraulics of the floodplain.

How is this currently done?

1. Linking with historical datasets

A collection of historical data can provide a very tangible baseline for decision-makers: in 2014, for example, the Environment Agency communicated that ‘flood levels on the Thames are likely to reach those of 2007 but not 2003’ in Reading, UK. This works best where there is clear information about the historical hazard (observed inundation extent or levels) rather than about the impact, as the impact of a flood is compounded by many other factors.

2. Linking with offline maps produced by inundation (hydraulic) models

Given an adequate flow record, topographic data (NASA have recently released global data at 30 m pixel resolution), and information about channel width and depth, it is possible to create ‘look-up’ maps of flood extent and depth for a particular river flow using offline hydraulic modelling. This approach may not work well for complex river systems where the dynamics of the flood, i.e. the timing and relative magnitude of flows from different tributaries, are important.
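
The mechanics of such a look-up can be very simple. As a minimal sketch (the file names, return periods, and rounding-down rule here are all invented for illustration), the hydraulic modelling happens offline and the forecast simply indexes into a library of pre-computed maps:

```python
# Hypothetical library of offline hydraulic-model outputs, keyed by
# return period (years) -> pre-computed inundation depth raster.
lookup_maps = {
    5: "depth_rp005.tif",
    20: "depth_rp020.tif",
    100: "depth_rp100.tif",
}

def select_map(forecast_return_period):
    """Pick the map for the largest catalogued return period that does
    not exceed the forecast one; rounding up instead would give a more
    precautionary map."""
    candidates = [rp for rp in lookup_maps if rp <= forecast_return_period]
    return lookup_maps[max(candidates)] if candidates else None

print(select_map(37.0))  # -> "depth_rp020.tif"
```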

However, even where there is adequate observed data to drive the models, there are questions about how the hydrometeorological and hydraulic models should be linked. Hydromet modellers bypass the potential issue of the model not predicting the correct rainfall magnitude by adopting a model climatology approach, whereby a 1 in 100 year flood within the model world is assumed to equate to a 1 in 100 year flood within the real world. However, we don’t really know whether this assumption holds true, and we can imagine that it is unlikely to: the model climatology is usually derived from a reforecast dataset, perhaps spanning 30 years, while the observed climatology is likewise limited by the length of the gauge record. Extrapolating a 1 in 100 year flood from each of these short records would likely yield very different results.
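
To make the record-length point concrete, here is a minimal sketch using entirely synthetic data (the Gumbel fit and the 30- and 60-year record lengths are my own assumptions, not anything taken from GloFAS). Even when both records are drawn from the same underlying climate, the extrapolated 1 in 100 year flows differ simply because the samples are short:

```python
import numpy as np
from scipy.stats import gumbel_r

def return_level(annual_maxima, return_period=100):
    """Fit a Gumbel (EV1) distribution to annual-maximum flows and
    extrapolate the flow exceeded once per `return_period` years."""
    loc, scale = gumbel_r.fit(annual_maxima)
    return gumbel_r.ppf(1.0 - 1.0 / return_period, loc=loc, scale=scale)

rng = np.random.default_rng(1)
true_climate = gumbel_r(loc=1000.0, scale=300.0)  # hypothetical shared 'truth'

model_maxima = true_climate.rvs(size=30, random_state=rng)     # ~30-year reforecast
observed_maxima = true_climate.rvs(size=60, random_state=rng)  # longer gauge record

print(return_level(model_maxima))     # two 1 in 100 year estimates that
print(return_level(observed_maxima))  # differ through sampling alone
```

Any systematic bias between the model and the real world would then sit on top of this sampling uncertainty, which is why matching return periods across the two climatologies remains an open question.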

What might the gold standard be?

3. Real-time inundation forecasting

The gold-standard aim is to fully couple the meteorological model to the inundation model, enabling real-time forecasts of flood inundation extent and depth. Compared with #2, this would not only represent the dynamics of the flood but would also be less reliant on scenarios that have already been modelled. It is of course computationally expensive, and relies on there being adequate topographic and channel structure data for flow routing.


Perhaps most importantly, the advantage of #2 is that linking model climatology to observed climatology bypasses the problem of forecasting the correct rainfall (and hence flow) magnitude. Before we could trust a fully coupled model chain to predict anywhere near the correct inundation extent, we would need to be confident that the hydromet model is getting the rainfall magnitudes right, and that there is no drift with lead time.

What should we be focussing on?

As well as the other challenges I have already highlighted, I am concerned that producing forecasts of expected inundation extent will pose communication challenges; providing a very precise estimate often seems to create overconfidence in the forecast itself.

The inundation map provided would need to communicate the whole accumulation of uncertainty in the model cascade. Perhaps statements like the one the Environment Agency issued for the River Thames earlier this year are the best representation of the certainty in our knowledge? Especially when forecasting with the Global Flood Awareness System (GloFAS), can we really be confident that the impact of sub-grid-scale topography and human control of flood defence structures is minimal?

If I’m going to stick my neck out (and try to get some discussion going), operationally I think we should be focussing on #2, providing scenarios of what might happen if particular thresholds were breached. This means we need research on how to link the model climatology to the observed climatology, and I think we need to carry out a lot more research to understand the model climatology in general…

However, although I don’t have a lot of confidence that we are getting the magnitude of the flow correct (this certainly applies to GloFAS, perhaps less so to other systems), I do believe that #3 is a grand scientific challenge that is worth pursuing from a research perspective.”

HEPEX Blog Post #3: Developing HEPS for humanitarian action

A short piece for the HEPEX blog putting together some thoughts I have gathered over the first year of my fellowship, about the interdisciplinary challenges associated with using flood forecasts for humanitarian action:


Villagers evacuated before Cyclone Phailin, India

“Within the HEPEX community I’m sure we would all like to see our research and forecasting efforts contributing to better preparedness before disasters; hopefully saving lives and livelihoods. We also know that in many situations our forecast models have seen a flood coming well in advance.

But for many disasters, a good forecast has not led to anything more than humanitarian organisations spreading the word that something is imminent, whereas many might hope that aid could be prepositioned or evacuations carried out – something the Indian government managed so effectively prior to Cyclone Phailin (though post-disaster recovery was arguably less successful). Moving supplies or evacuating people by road before a flood is much cheaper than using a helicopter during the disaster.

Ultimately, funding is required to evacuate people or preposition supplies. Despite the known benefits of acting before a disaster strikes, donors are reluctant to release money when there is no certainty that the disaster will occur, so a paradigm shift is needed to convince them of the benefit of acting early even though the event will not materialise on every occasion they do so. Coughlan de Perez et al. have an excellent paper, currently in NHESS Discussions, on how the Red Cross are carrying out pilot schemes to effect this shift, but little work has been undertaken elsewhere to look at how forecasts could be used.

Perhaps this is because humanitarian action usually falls into one of two categories: response to a disaster that has already happened, or long-term disaster risk reduction. With funding channelled through these two functions, who takes responsibility for forecast-based action?

These are perhaps problems that do not fall within the typical HEPS-scientist skillset, but solving them is vital if we want to see our models used. I believe that we need to do our best from the modelling side to ensure that pilot schemes are successful, but we also need to appreciate that the successful use of forecasts is not solely a scientific problem related to model skill. How would you feel if your charity donation went towards preparing for a disaster that never happened?”

HEPEX Blog Post #1: Hazards, risks, advice, how far should we go?

I wrote this HEPEX blog post in October 2013 to cover a discussion of the same name at the European Meteorological Society annual conference held in Reading. The discussion centred on the question of how to ensure that end-users of forecasts for extreme events were using them effectively, or making the ‘correct’ decisions. I found the discussion about the need for interdisciplinary work on the subject quite interesting:

“The EMS-ECAM meeting was largely a science-focussed event, and it was clear that a ‘social science’ input was missing from the plenary discussion. Additionally, while there were many comments about the need for social science, this ‘need’ remained quite vague and undefined. Perhaps the meteorological and hydrological communities would benefit from both a better understanding and awareness of what social scientists have to offer and, subsequently, from help to better define the questions that need to be answered?”

A video of the panel’s statements is now available online, with another video for the audience’s questions and statements.

February visit to the UK Flood Forecasting Centre in Exeter

In February I joined colleagues from the Oxford Martin School, who research the usability of probabilistic forecasts, on a visit to the joint EA / Met Office Flood Forecasting Centre. We learned how the forecasts are produced by shadowing shifts and carrying out semi-structured interviews.

While my Leverhulme Fellowship is focussed on global models for humanitarian response, this was a valuable opportunity to experience the world of flood forecasting outside of the academic research bubble. As scientists it is all too easy to envisage a decision-making process in which every aspect can be quantified and therefore automated, so it is important to gain some perspective and observe the benefits, in terms of trust and understanding, of maintaining a conversation between forecasters and forecast users. Many would perhaps be surprised to learn that issuing a flood warning is a collaborative decision-making process, not solely determined by model output.

The Flood Forecasting Centre

Decision-relevant early-warning thresholds for ensemble flood forecasting systems

I am currently looking into different methods of determining warning thresholds for the Global Flood Awareness System, and exploring the sensitivity of the warning system to the choice of threshold. I will be comparing methods that trigger warnings when thresholds are exceeded in the model climatology with others that can provide a ‘first guess’ of potential impacts for end-users; for example, I am currently looking at integrating information derived from global-scale inundation mapping and population density databases.
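
As an illustration of the two styles of threshold, here is a minimal sketch with made-up numbers (the 20-year threshold, the 50% probability rule, and the toy grids are all hypothetical choices of mine, not GloFAS settings):

```python
import numpy as np

def exceedance_probability(ensemble_flows, threshold):
    """Fraction of ensemble members forecasting flow above a threshold."""
    return float(np.mean(np.asarray(ensemble_flows) > threshold))

# 51 hypothetical ensemble members at one river pixel (m^3/s)
members = np.random.default_rng(0).normal(900.0, 250.0, size=51)

q20_model = 1100.0  # assumed 20-year flow in the model climatology
prob = exceedance_probability(members, q20_model)
climatology_warning = prob >= 0.5  # warn at 50% probability (arbitrary rule)

# 'First guess' of impact: overlay a pre-computed flood extent on a
# population grid and count the people inside the wetted cells.
flood_extent = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]], dtype=bool)
population = np.array([[120, 80, 40], [200, 500, 10], [90, 30, 5]])
exposed = int(population[flood_extent].sum())

print(prob, exposed, climatology_warning and exposed > 100)
```

The appeal of the second trigger is the ‘first guess’ of impacts: the same exceedance probability can warrant a different warning depending on who is in the way of the water.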

I will be attending a meeting of the Global Floods Working Group in early March, and then the European Geosciences Union General Assembly in Vienna at the end of April, where I will hopefully have the opportunity to present some of the results.


‘First guess’ flood warning systems: looking at the potential of integrating flood risk information derived from population density datasets and flood inundation maps at the global scale.

Kick-off Meeting at JRC, Ispra

On the 16th-18th October I visited the Joint Research Centre in Ispra with Florian Pappenberger to meet with Jutta Thielen and others involved in the Global and European Flood Awareness Systems (GloFAS and EFAS). I delivered a seminar to a small group of researchers about my previous research and the work I have proposed to do with GloFAS, and had an interesting discussion with Vera Thiemig about her work on users of flood forecasts and early warning systems in Africa (see here, paywalled), which ties in well with the work I propose to do on understanding end-users’ requirements for lead time and their definitions of a successful forecast.

In an initial meeting with Florian, Jutta Thielen, Peter Salamon and Beatriz Revilla-Romero, we discussed the work recently carried out at JRC and ECMWF on global flood forecasting, and outlined the planned future developments that might provide further opportunities for my fellowship project. We also discussed the availability of observed data now and into the future, and how we could collaborate to produce the best possible datasets for both observations and vulnerability.

The question of how best to represent vulnerability within an early warning system is an interesting one – a small magnitude flood in one location might have a much larger impact than a large magnitude flood in another, and this should perhaps affect how a warning is communicated to the end-user of the forecast. This is a clear example of how engagement with end-users is vital for the development of these early warning systems, and for the success of this project.