Measuring the spatial predictability of rainfall

By Dr. Robert Plant (University of Reading)
23rd May 2014

Here are three snapshots of the rain rate over the southern part of the UK, obtained from the radar network. The first two are for 2nd August 2013, at 9Z and 16Z, while the plot on the right is for 14Z on the following day. At a glance we can see that these days were convectively active, but for the most part this is not simply scattered convection. In particular, there are quite a few linear features and some localized areas of heavy, almost continuous, rain.

[Figure 1: radar-derived rain rates over the southern UK at 9Z and 16Z on 2nd August 2013, and at 14Z on the following day]

Suppose now that we were forecasters looking at plots akin to these and produced from a model. These plots might give us cause for some concern. The last one in particular, with a linear feature along the centre of the south-west peninsula, is somewhat reminiscent of the devastating Boscastle flash flood in 2004. In that case, prolonged, intense convective rainfall was maintained along a near-stationary convergence line that was itself caused by a stalled sea-breeze front. The Boscastle flood was a rather extreme example, of course, but the basic situation is not particularly unusual. Indeed, a recent climatology of heavy, prolonged convective rainfall in the UK (i.e. quasi-stationary convective storms) has been conducted by Rob Warren, and it highlights this mechanism and the south-west peninsula in particular. (It would be remiss of me not to congratulate Rob for submitting his PhD thesis on the day that this blog entry goes live. For details of his climatology work see http://www.met.reading.ac.uk/~hy010960/phd/climatology.html)

Model output is not perfect, of course. What are the model uncertainties that we ought to be mindful of? Well, we can reasonably suppose that the model has not gone completely askew and has at least generated some heavy rain in the south of the country. But that rain may nonetheless be somewhat too intense or too light. Also, the model simulation may have produced the rain in the wrong place. This has potentially profound implications for the forecast of flood risk if it alters the relation between the heavy rainfall and the river catchments. It is for just these reasons that a convective-scale ensemble is so useful: a set of simulations that provides information about the uncertainties in the forecast.

But how should we exploit the ensemble of simulations? In principle, it contains an enormous amount of information, and certainly an enormous amount of data is produced. But we need to be able to extract key aspects quickly and easily. Staring hard at many, many plots from the various simulations can give a good subjective sense of the forecast uncertainty, but is not always the best approach, and is certainly not the most efficient. Another PhD student at Reading, Seonaid Dey, is working on just such issues, not directly as part of FFIR, but obviously closely related. Here is an example of the methods being developed.

[Figure 2: the radar rain rates from Figure 1 (top row) and the corresponding ensemble-derived maps of spatial predictability (bottom row)]

Variations in the intensity of the rainfall between different ensemble simulations are relatively easy to assess, and you can no doubt devise some simple but useful analysis methods without thinking too hard. But the spatial predictability is a tougher proposition. In the figure above, the top line shows the same rainfall data as before, whilst the bottom line shows a measure of spatial predictability produced from the ensemble data for the same time. Let’s talk through the interpretation to show why such analysis is so useful, and then I’ll explain a bit about the calculation methodology for the benefit of interested experts.
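As an illustration of how straightforward the intensity side can be, here is a minimal sketch of the sort of simple diagnostic one might devise: the domain-mean rain rate of each ensemble member, plus the grid-point ensemble mean and spread. The function name and interface are my own invention for illustration, not taken from any operational system.

```python
import numpy as np

def intensity_spread(members):
    """Simple ensemble intensity diagnostics for 2-D rain-rate fields.

    Returns the domain-mean rain rate of each member (how wet each
    simulation is overall), plus the grid-point ensemble mean and
    standard deviation (where the members disagree on intensity).
    """
    stack = np.stack(members)                # (n_members, ny, nx)
    domain_means = stack.mean(axis=(1, 2))   # one number per member
    return domain_means, stack.mean(axis=0), stack.std(axis=0)
```

A large grid-point standard deviation flags locations where the members disagree strongly on how much rain falls, but it says nothing about whether the rain is in the right place, which is why a separate spatial measure is needed.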

The dark reds show the locations with high spatial predictability for rainfall, while light colours indicate little spatial predictability. One thing to notice immediately is that in areas where no rain is forecast by any of the model simulations, it is natural that the spatial predictability of the rainfall should be diagnosed as very low, as indeed is the case. In the first example the rainfall is not very predictable. There is a linear feature across Wales that all the simulations agree on, as well as the scattered showers along the south coast. But the line feature passing close to the Bristol Channel is uncertain in the forecast, as is the rainfall area over East Anglia.

The second example is an interesting case with mixed predictability, which stresses the point that a single spatial predictability measure for the whole domain is not sufficient to give the full picture: we really need to be able to produce such maps. Imagine a NE to SW line passing through the Wash. The rainfall to the north of this line is well captured, with good agreement across the ensemble. However, predictability to the south of the line is very low, and the feature in the radar data just crossing the south coast could not have been forecast with any confidence.

Finally, the last example shows that the rainfall oriented along the south-west peninsula is very spatially predictable across ensemble members, and so tells us very quickly and easily that this is a very probable event that is well worth watching. Fortunately, these storms were not particularly intense and there were no reports of damaging impacts.

And so, as promised, I should close by giving some idea of the calculations. We adapt a verification metric known as the Fractions Skill Score, or FSS, and use it to compare all possible pairs of simulations. For each pair, we determine a skill score by imagining that one of the simulations is being used to predict the other. The score varies with spatial scale: at the grid scale the simulations are unlikely to agree, but considered at larger scales the simulations start to appear more alike. By successively increasing the scale over which the calculation is performed, we can identify the smallest scale at which one simulation has meaningful skill at predicting the other. Average this scale over all possible pairs, and that is what was plotted.
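The pairwise calculation described above can be sketched as follows. This is a minimal, domain-wide version for illustration only: the maps shown earlier require a localized variant of the calculation, and the rain-rate threshold, the FSS target of 0.5, and all function names here are my own choices rather than the exact configuration used in the research.

```python
import numpy as np
from itertools import combinations
from numpy.lib.stride_tricks import sliding_window_view

def neighbourhood_fractions(rain, scale, threshold):
    """Fraction of 'wet' points (rain >= threshold) in a scale x scale
    window around each grid point; scale must be odd."""
    wet = (rain >= threshold).astype(float)
    pad = scale // 2
    padded = np.pad(wet, pad, mode="constant")
    return sliding_window_view(padded, (scale, scale)).mean(axis=(2, 3))

def fss(rain_a, rain_b, scale, threshold=1.0):
    """Fractions Skill Score between two rain fields at one neighbourhood
    scale: 1 is perfect agreement of the smoothed wet fractions, 0 is none."""
    fa = neighbourhood_fractions(rain_a, scale, threshold)
    fb = neighbourhood_fractions(rain_b, scale, threshold)
    mse = np.mean((fa - fb) ** 2)
    ref = np.mean(fa ** 2) + np.mean(fb ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan

def mean_believable_scale(members, scales, threshold=1.0, target=0.5):
    """For each pair of ensemble members, find the smallest scale at which
    one usefully 'predicts' the other (FSS >= target), then average over
    all pairs. A small value indicates high spatial predictability."""
    found = []
    for a, b in combinations(members, 2):
        for n in scales:
            if fss(a, b, n, threshold) >= target:
                found.append(n)
                break
        else:
            found.append(max(scales))  # no useful skill within scales tried
    return float(np.mean(found))
```

Note how the score behaves for a purely displaced feature: two identical rain bands shifted apart score poorly at the grid scale but increasingly well as the neighbourhood widens, which is exactly why the scale at which skill emerges makes a natural measure of spatial predictability.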
