
Time series of flash flood events and associated meteorological conditions

By David Archer (Newcastle University [JBA])
16th December 2014

As a contribution to the SINATRA project, I have compiled a chronology of flash floods and categorised the associated meteorological conditions and impacts. This has now been completed for three regions of England: the southwest, the northeast and the northwest. As the first step in analysis, I have produced time series of events by decade (and by century for earlier periods). The figures below for Southwest England show the number of flash flood events of differing severity, the number of large hail events of differing magnitude or severity, and the number of lightning occurrences with differing severity of impact.

For the Southwest, 301 flash flood events where associated thunder was reported were extracted from a total of over 450 events. Given the availability of newspaper sources, the lists can be considered comprehensive from at least 1850, and perhaps from 1820.

Perhaps the most striking feature of the flash flood series is the contrast between the low numbers of events in the second half of the twentieth century (within our own experience) and the numbers from 1850 to 1940, with a dip around 1900. The pattern and contrast are repeated in the occurrence of large hail and the impact of lightning. The time series for the northeast and northwest show very similar time distributions.

Are these variations all spurious? Can the flash flood variations be explained entirely by changes in reporting or in infrastructure (though the latter could work both ways, with improvements in urban drainage but increases in paved area)? In the case of hail effects, changes in the strength of glass could explain part of the variation but not the observed frequency of occurrence. With respect to lightning, a partial explanation could lie in improvements in lightning protection or in the number of people exposed to events.

Or are these variations real (my belief is yes) – if so, how can the meteorological basis be explained? Comments welcome!

Convective flood events

The first graph is based on an extraction of events where thunder was reported in association with the event (hence convective). These were categorised in terms of the severity of the associated flooding:

1F – events where a large number of properties were flooded, most commonly from pluvial flooding before the runoff reached an established watercourse.
2F – events where a small number of properties was flooded.
Total – the sum of 1F and 2F events.

I have excluded from this analysis events where only road flooding was reported or where only unspecified remarks were made about the occurrence of flooding.

[Figure 1: Number of convective flood events per decade in Southwest England, by severity category]

Occurrences of severe hail

Where hail was reported in association with a flood event, the severity of the hail was categorised as follows:

1H – hail of sufficient size and intensity to break glass in windows and skylights as well as in greenhouses.
2H – large hail or ice reported (often with weight, diameter or circumference) but without reference to the breaking of glass.

I have excluded from the analysis occurrences of hail of normal size, even where it accumulated to a considerable depth.

[Figure 2: Number of large hail events per decade in Southwest England, by severity category]

Occurrence and effects of lightning

Three main categories of lightning effect and damage are noted:

1L – lightning strikes that caused the death of one or more people in the event.
2L – people struck and injured, or animals killed, by lightning.
3L – buildings struck; reports in this category usually refer to damage caused to the buildings by the strike itself or an ensuing fire.

I have excluded from the analysis events where lightning was reported but no specific evidence of impact on people, animals or buildings was noted.

[Figure 3: Number of lightning events per decade in Southwest England, by severity of impact]

Flood forecasting in the UK: what should we learn from the Winter 2013/14 floods?

By Dr. Liz Stephens and Prof. Hannah Cloke (University of Reading)
2nd December 2014

This post was originally published on: http://hepex.irstea.fr/flood-forecasting-in-the-uk-what-should-we-learn-from-the-winter-201314-floods/.

The Summer 2007 floods provided significant impetus for improvement to hydrological ensemble forecasting capabilities, and the events of Winter 2013/14 provided a good testing ground for these improvements. Liz Stephens and I discuss this in an article for the Geographical Journal coming out next week; the full paper can be found here (paywall).

[Image via @alby]

While there were countless newspaper articles devoted to criticism of the government’s role in the damage and disruption caused by last winter’s flooding, the role of forecasting was seen as a success.

Forecasts of upcoming floods contribute to reducing flood risk as a whole by enabling preventative actions, such as the closure of barriers and the placement of temporary flood defences.

With an early ‘heads-up’, emergency responders, including police, fire and rescue, and local authorities, can begin to devote resources to an imminent flood event: checking on critical assets, ensuring that their equipment is in the right places and in good working order, and making sure that enough people are on shift.

Changes to flood forecasting in the UK, brought about in response to the Pitt Review of the 2007 summer floods, have led to considerable improvements not only in how floods are forecasted, but also in the coordination of early warning and emergency response. The formation of the joint Environment Agency / Met Office Flood Forecasting Centre facilitated a radical change in practice and coordination between the two organisations, providing a strategic overview of flood risk across the country from forecasters with both meteorological and hydrological expertise.

The development of a probabilistic storm surge model out to a 7-day lead time came just in time, with the then Secretary of State for Environment, Flood and Rural Affairs, Owen Paterson, stating that “from the earliest signs of a possible surge threat, Government Departments and agencies, local resilience fora and local authorities were making preparations.”

However, the forecasting system for fluvial floods doesn’t currently extend out to the same lead time as that of the surge model, with the Met Office’s ensemble flood forecasting model running out to only 3 days. Given the significant benefits seen from having longer to prepare for the December storm surge, furthering UK capabilities for probabilistic river forecasting should be seen as a key priority if we are to learn a forecast lesson from the winter 2013/14 floods.

Stephens, E., & Cloke, H. (2014). Improving flood forecasts for better flood preparedness in the UK (and beyond). The Geographical Journal. doi: 10.1111/geoj.12103

 

SINATRA meeting

By Prof. Hannah Cloke (University of Reading)
28th November 2014

Last week 45 flood scientists and stakeholders came together from across the UK (and beyond) for the Annual SINATRA science meeting in the ‘Beehive’ at Newcastle University (18-20 November, 2014).
Presentations and scientific discussions were wide ranging and included:

  • progress in flood model development;
  • how to catch floods in the field;
  • the challenges of using remotely sensed hydrological data in data assimilation;
  • new high temporal resolution archives of rainfall and river flow;
  • narratives of flooding from old archives;
  • flood process representation in land surface models;
  • atmospheric drivers of floods in the UK;
  • the impact of the spatial distribution of rainfall on morphological change and flood risk;
  • the challenges of verifying the impacts of flooding;
  • bringing together our knowledge on flooding from intense rainfall.

Keep your eye on this blog for more information on these!

Postdocs and PhDs on SINATRA had the opportunity to showcase their work on investigating catchment susceptibility to flooding from intense rainfall, and it was great to see some new collaborations forming over posters.


Flash flooding in Exeter

By Dr. Albert Chen (University of Exeter)
28th October 2014

On the evening of 16 Oct 2014, a local thunderstorm occurred in Exeter and the torrential downpour led to flash flooding in many parts of the city.

[Map: Locations of flood-related incidents across Exeter during the evening of 16 Oct 2014]

The Met Office reported that 19mm of rainfall was recorded in Exeter between 7pm and 8pm. The fire crews received more than 70 weather-related calls and attended 55 incidents due to internal or external flooding. Some roads were impassable due to water 2 feet deep, and several basement flats were inundated. Elderly and disabled people were rescued and evacuated from their flooded ground floor properties. An infant school was also flooded and had to be closed for clean-up the day after.

Several manhole covers were reported lifted or missing due to the pressure in the sewer pipes, which were overwhelmed by the high discharge. Some homes and businesses also experienced power cuts during the storm.

 

Related links:

Devon and Somerset Fire and Rescue Service

BBC: Exeter flooding sees elderly people evacuated from homes

ITV: Flash flooding in Exeter

Exeter Express & Echo

Exeter hit by flash floods – fire crews receive over 70 calls as inch of rain falls in an hour

Videos and pictures: Flash floods brings chaos to roads, homes and businesses across Exeter

Breaking : Storm leads flash flooding across Exeter

Stories of dramatic rescues emerge as clean-up following flash flood in Exeter continues

Western Morning News: Flash floods as thunderstorms bring sudden downpours

Recent Floods in Montpellier

By Dr. Rob Thompson (University of Reading)
24th October 2014

Recent weeks have seen a number of high profile flash flood events around the world, but perhaps the most striking have been in Montpellier, in central southern France. The city has suffered two severe flash flood events in a week as intense thunderstorms hit just north of the city, causing the River Lez to burst its banks and send water through the city. The city was not the only settlement affected, however: about 60 towns were placed under natural disaster measures.

Image from Mail Online.
Photograph: Sylvain Thomas/AFP/Getty Images

According to Meteo-France, the all-time rainfall record since the site was established in 1979 was broken: 252 mm (9.9 inches) fell between 2pm and 5pm, and of that total, 95 mm (3.74 inches) fell in a single hour. By the end of the day, half the average annual rainfall had fallen. This led to some dramatic damage, with 10 feet of water leaving cars suspended in trees and causing major damage to the city’s football stadium.

Image from Guardian Online.
Photograph: Sylvain Thomas/AFP/Getty Images

Warnings had been in place, with “vigilance rouge” (essentially red alert) warnings for the department due to the forecast of slow moving thunderstorms. The air in the region was warm and moist, ideal for the formation of intense thunderstorms and hence rainfall.

This was the second case of flooding from intense rainfall in Montpellier within a week. The week before, a similar setup had led to large, tall storms, whose intensity can be seen in the image below:

5-hour averaged SEVIRI IR output (a good proxy for cloud-top height)
Image from EUMETSAT.

The meteorological situation is not particularly unusual, and the storms were typical of the setup, forming on the western edge of a blocking anticyclone. The positioning, and hence the impacts, were unusual, however, and the occurrence of two events in quick succession particularly so.

This sort of meteorological situation is, however, predictable, as demonstrated by the red alerts issued by Meteo-France. Better observations and understanding of the small-scale meteorology and hydrology should lead to improved, more precise warnings for such events in the future, thanks to work done by projects such as FRANC (for the meteorology) and SINATRA (for the hydrology), and the later integration of the two sciences into a joined-up system.

Developing a pilot surface water flood forecasting tool for Glasgow and operational use during the Commonwealth Games

By Linda Speight (Senior Hydrometeorologist SEPA / Scottish Flood Forecasting Service)
7th October 2014

This year the 2014 Commonwealth Games were held in Glasgow, and everyone in Scotland was keen for them to be a success. Glasgow, however, has a known history of summer surface water flooding (2002, 2007, 2011, 2012, 2013). The Scottish Flood Forecasting Service (SFFS) is responsible for flood forecasting in Scotland, and we felt that, if heavy rain were to occur, our existing surface water forecasting capabilities would not be sufficient to meet the expected briefing requirements.

Figure 1: Previous surface water flooding in Glasgow

So what could be done? Surface water flood forecasting has until recently been regarded as challenging, difficult and often simply not possible. But with recent improvements to surface water flood mapping and ensemble forecasting for convective events, we were keen to meet the challenge head on.

The SFFS set up a project to develop an operational pilot for a 10km by 10km area of Glasgow and to explore future surface water flood forecasting methods for Scotland in general. The project was largely funded through Scotland’s Centre of Expertise for Waters (CREW), with the Met Office, CEH Wallingford, Deltares and the James Hutton Institute as partners.

Requirements

To ensure the pilot model would meet the needs of end users we set up a steering group of key responders in Glasgow. The steering group provided an excellent forum for scientists and operational responders to discuss the challenges of surface water flood forecasting from different perspectives, to engage end users with the project at an early stage, and to set realistic expectations for the model output.

The steering group’s requirement was to focus on the 6 to 24 hour lead time, as this is when proactive preparations can be made. Responders wanted guidance on event timings, locations that might be affected, possible impacts and severity, and, crucially, a stand-down message when the event is over or the risk level reduces.

From an SFFS perspective the key goal was to have a fully operational model that combined forecasts and impacts and could be integrated with our existing tools.

The model set-up

The pilot model aimed to make the best use of emerging science developments within our existing systems. The methodology built on some of the concepts that the Natural Hazards Partnership have been developing for a national surface water flood forecasting tool but applied them at a smaller scale and integrated them with existing SFFS tools.

The model runs eight times a day using forecast rainfall from the Met Office blended ensemble (based on MOGREPS-UK and STEPS) and nowcast ensemble, giving rainfall on a 2km grid for the next 24 hours. Each rainfall ensemble is run through the CEH Grid-to-Grid model to convert rainfall to surface runoff based on variables such as antecedent conditions, land use, slope etc. It is assumed that the surface runoff from Grid-to-Grid is equivalent to the effective rainfall used to produce SEPA’s new pluvial flood maps. This means that for each 1km grid square the most appropriate flood inundation map and impact assessment from the library can be identified. Thresholds were set for people and property impacts (population, utilities, commercial properties and community services) and transport (road and rail), enabling impacts to be compared to our established severity thresholds. Combining this with the probability of exceeding the thresholds allowed the overall surface water flood risk to be identified within our existing flood risk matrix (Figure 2).

Figure 2: Surface water forecasting methodology used in the pilot model
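
As a purely illustrative sketch of the final step of this workflow (the threshold values, category names and matrix layout below are assumptions chosen for the example, not SEPA's operational settings), the impact assessment for a grid square and the ensemble probability of exceeding a runoff threshold can be combined in a likelihood-impact matrix along these lines:

```python
# Illustrative combination of impact severity and ensemble exceedance
# probability into an overall surface water flood risk level.
# All thresholds and category names are assumed for this sketch; they are
# not the operational SEPA/SFFS values.

def impact_severity(properties_affected, people_affected):
    """Map impact-library counts for a grid square onto a severity class."""
    if properties_affected > 100 or people_affected > 500:
        return "significant"
    if properties_affected > 10 or people_affected > 50:
        return "moderate"
    return "minor"

def exceedance_probability(ensemble_runoff_mm, threshold_mm):
    """Fraction of ensemble members whose surface runoff exceeds the threshold."""
    exceed = sum(1 for r in ensemble_runoff_mm if r >= threshold_mm)
    return exceed / len(ensemble_runoff_mm)

def likelihood_band(probability):
    """Convert an exceedance probability into a likelihood band."""
    return "high" if probability >= 0.6 else "medium" if probability >= 0.2 else "low"

# A simple likelihood x impact matrix, in the spirit of Figure 2
RISK_MATRIX = {
    ("low", "minor"): "very low",      ("low", "moderate"): "low",
    ("low", "significant"): "medium",  ("medium", "minor"): "low",
    ("medium", "moderate"): "medium",  ("medium", "significant"): "high",
    ("high", "minor"): "medium",       ("high", "moderate"): "high",
    ("high", "significant"): "high",
}

def surface_water_flood_risk(ensemble_runoff_mm, threshold_mm, properties, people):
    """Overall risk level for one grid square and forecast period."""
    prob = exceedance_probability(ensemble_runoff_mm, threshold_mm)
    impact = impact_severity(properties, people)
    return RISK_MATRIX[(likelihood_band(prob), impact)]

# Example: 12 ensemble members, 3 of which exceed a 20 mm effective-rainfall threshold
print(surface_water_flood_risk(
    [5, 8, 12, 25, 7, 30, 11, 9, 22, 6, 10, 4], 20.0, properties=40, people=120))
# -> "medium"
```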

The model output is viewed through web reports developed by Deltares utilising the Flood Early Warning System (FEWS) software (Figure 3). The reports show output for a number of parameters over the full 24-hour period, broken down into 6-hour time steps (an example is shown in Figure 4).

Figure 3: FEWS Glasgow home page

FEWS Glasgow allows forecasters to select any forecast from the past 24 hours, allowing consideration of run-to-run variability for convective events.

Operational use

The pilot model has been running throughout the summer. During the two week period of the Commonwealth Games an additional SFFS forecasting hydrologist was on duty alongside the national forecaster. They were responsible for producing the Glasgow Daily Surface Water Flood Forecast which was sent out to responders. An important part of their role was to brief the SEPA Resilience Officer who was advising organisations in the Games Multi Agency Control Centre to ensure SEPA was able to give a consistent and informed message about the flood risk to the Games.

Luckily, the weather during the Games was largely fine; however, there were occasions when the additional surface water guidance provided a real benefit to the organisers and responders. This took two forms: firstly, enabling the SFFS to advise that, although heavy rainfall was forecast in the wider West Central Scotland region, flooding impacts in Glasgow itself were unlikely; and secondly, as was the case on the last weekend of the Games, providing information on the timing, likely impacts and probability of possible flooding in Glasgow.

Figure 4: FEWS Glasgow forecasts for the afternoon of Sunday 3rd August

FEWS Glasgow forecasts for the afternoon of Sunday 3rd August 2014. Photographs show the conditions at the Games at this time. The risk level had been increased as the event approached and the likelihood increased. There were reports of minor impacts across Glasgow over the weekend. (Photo credit: www.eveningtimes.co.uk)

Where next?

The pilot model has shown that it is possible to provide an informed surface water forecasting service, and the feedback received on the service has been positive. The lesson we learnt during the Games is that the staffing requirements to operate such a service are high, owing to the need to continually monitor the forecast because of variability between model runs, and to the briefing required to help end users understand the uncertainty. The data volumes involved in rolling out a similar service to the whole of Scotland would also require careful management. Improving capabilities for surface water flood forecasting is one of the core strategic aims of SEPA’s Flood Warning Strategy for 2012–2016, and we will use the experience gained in this trial to help improve surface water forecasting for other areas of Scotland in the future.

More details
Further details on the development of this project throughout the year are available on the SFFS blog. There is also a report for the initial review stage of the project. The work was presented at the British Hydrology Society conference (Sep 2014) and a research paper and final report will be published in due course so keep an eye out.

Diagnosing correlated observation error statistics

By Dr. Joanne Waller (University of Reading)
10th September 2014

The forecasting of intense rainfall requires a best guess of the current atmospheric state; this is calculated using a data assimilation scheme. Data assimilation techniques combine observations with a model prediction of the state, known as the background, to provide a best estimate of the state, known as the analysis.
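
As a brief sketch in standard notation (the generic linear analysis update, given here for orientation rather than as a description of the specific operational scheme), the analysis is a weighted combination of the background and the observations:

```latex
\mathbf{x}^{a} = \mathbf{x}^{b} + \mathbf{K}\left[\mathbf{y} - H(\mathbf{x}^{b})\right],
\qquad
\mathbf{K} = \mathbf{B}\mathbf{H}^{\top}\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\top} + \mathbf{R}\right)^{-1}
```

Here x^b is the background, y the observations, H the observation operator (with linearisation H), x^a the analysis, and B and R the background and observation error covariance matrices. The weight given to each source of information is set entirely by B and R, which is why the error statistics discussed below matter so much.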

 

For a data assimilation scheme to produce an optimal estimate of the state the errors associated with the observations and the background must be well understood and correctly specified. Previously much attention has been given to the errors associated with the background and a sophisticated representation of these errors is now used in the assimilation. In contrast, less emphasis has been given to understanding the nature of the observation errors.

 

The errors associated with the observations can be attributed to four main sources:

  • Instrument error.
  • Error introduced in the observation operator (a model that allows the predicted quantity to be compared to the observation; for more information see this blog).
  • Errors of representativity – these arise where the observations can resolve spatial scales that the model cannot.
  • Pre-processing errors – errors introduced by pre-processing of the data.

For most observations the instrument error is uncorrelated; the remaining sources of error are likely to introduce correlations which are both state and time dependent. Typically these correlations are not well understood and have been ignored in operational assimilation. Instead, the unknown and unaccounted-for correlations are represented by inflating the error variance. The quantity of observations used is also reduced, hence reducing the error correlations, either by discarding data or by combining data to create superobservations. To improve the use and quantity of observations in the data assimilation scheme it is imperative to understand and accurately represent the errors associated with the observations.
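
As a minimal illustration only (the one-dimensional layout and 5 km box length are assumptions for the example, not an operational configuration), superobservations of the kind mentioned above can be formed by averaging all raw observations that fall within the same coarse interval:

```python
import numpy as np

def make_superobs(range_km, values, box_km=5.0):
    """Average raw observations into one 'superobservation' per box_km
    interval along a coordinate (e.g. range along a radar beam).
    This reduces the quantity of observations and hence the impact of
    error correlations that are neglected in the assimilation."""
    r = np.asarray(range_km, dtype=float)
    v = np.asarray(values, dtype=float)
    boxes = np.floor(r / box_km).astype(int)
    super_r = np.array([r[boxes == b].mean() for b in np.unique(boxes)])
    super_v = np.array([v[boxes == b].mean() for b in np.unique(boxes)])
    return super_r, super_v

# Example: raw observations every 1 km reduced to 5 km superobservations
raw_r = np.arange(0.0, 30.0, 1.0)
raw_v = np.sin(raw_r / 10.0) + 0.1 * np.random.default_rng(1).standard_normal(raw_r.size)
super_r, super_v = make_superobs(raw_r, raw_v, box_km=5.0)
print(raw_r.size, "raw observations ->", super_r.size, "superobservations")
```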


As part of project FRANC we are aiming to estimate, and incorporate into the assimilation, correlated error statistics for a number of observation types. Information on the errors associated with the observations is hidden in the ‘innovation’, which is the difference between the value of the measured observation and the corresponding model equivalent; this may be taken from either the background or the analysis. To extract these observation errors, we use a diagnostic that calculates statistical averages of background and analysis innovations to provide an estimate of the observation error covariance matrix. In theory this method provides an exact estimate of the observation errors. However, in complex operational systems many simplifying assumptions are made and therefore the result of the diagnostic may not be exact, though it is expected to be a reasonable approximation.
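
In its commonly used form (the innovation-based estimate of Desroziers et al., 2005; sketched here in standard notation rather than as the exact operational implementation), the diagnostic takes the statistical expectation of the product of analysis and background innovations:

```latex
\mathbf{d}^{o}_{b} = \mathbf{y} - H(\mathbf{x}^{b}), \qquad
\mathbf{d}^{o}_{a} = \mathbf{y} - H(\mathbf{x}^{a}), \qquad
\mathbf{R} \approx \mathrm{E}\!\left[\mathbf{d}^{o}_{a}\,\left(\mathbf{d}^{o}_{b}\right)^{\top}\right]
```

The equality is exact only when the error covariances assumed in producing the analysis are themselves correct, which is one reason the operational result is a reasonable approximation rather than an exact estimate.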

Our studies have attempted to calculate and understand the errors associated with Doppler radar radial wind measurements. Using the diagnostic, we calculated observation error correlations along the radar beam at different ranges from the radar and at different heights in the atmosphere. Initial results show that Doppler radar radial wind error variances increase with height. Results also show that correlation length scales range between 10km and 40km, and are longer at lower heights and at far range. Now that we have some idea of the observation error statistics associated with Doppler radar radial winds, we must attempt to include them in the assimilation (a challenge in itself!). The correct inclusion of these errors in the assimilation should improve the analysis, which in turn should improve the forecasts of intense rainfall.
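
As an illustrative sketch only (the exponential correlation function and the particular variance and length-scale values below are assumptions chosen for the example, not the diagnosed FRANC statistics), a correlated observation error covariance matrix for observations along a radar beam could be constructed like this:

```python
import numpy as np

def correlated_obs_error_cov(ranges_km, variance, length_scale_km):
    """Observation error covariance matrix R for observations at the given
    ranges along a radar beam, using an exponential correlation function
    that decays with the separation between observation locations."""
    r = np.asarray(ranges_km, dtype=float)
    separation = np.abs(r[:, None] - r[None, :])          # pairwise separation (km)
    correlation = np.exp(-separation / length_scale_km)   # exponential decay
    return variance * correlation

# Example: observations every 6 km out to 60 km, variance 4 m^2 s^-2,
# 20 km correlation length scale (values chosen purely for illustration)
R = correlated_obs_error_cov(np.arange(0.0, 60.0, 6.0), variance=4.0, length_scale_km=20.0)
print(R.shape)             # (10, 10)
print(R[0, :3].round(2))   # covariance decays with range separation
```

With diagnosed length scales of 10km to 40km, neighbouring range gates are clearly far from independent, which is exactly what a diagonal (uncorrelated) R would wrongly assume.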

Can a Climate Model Reproduce Extreme Regional Precipitation Events?

By Dr. Kevin Pearson (University of Reading)
27th August 2014

In a future warmer world, regions in the mid-latitudes like the U.K. will very likely experience more intense and more frequent instances of extreme rainfall. While there are theoretical reasons to expect this to be the case, specific evidence for it is largely based on the results of computer simulations with climate models. So an important question is: how good are these climate models at actually simulating extreme rainfall events? We carried out some comparisons with real events to test this as part of the Storm Risk Mitigation programme funded by the Natural Environment Research Council.

Often heavy rainfall is generated by convection cells in the atmosphere that have a size of a few hundred metres. This is about 100 times smaller than the size of the grid boxes typically used by global climate models on long, multi-decade runs and about 10 times smaller even than the grid boxes often used for day-to-day weather forecasting. This makes them challenging to model accurately. At other times the high rainfall amounts come from a large weather system or a series of systems that produce longer lasting but usually less intense periods of rain. The storm system that affected the UK on 19-20 July 2007 contained both elements with intense local rainfall events embedded in the fronts of a depression. This system moved slowly northward from France on 19 July and was centred over southeast England by midday on the 20th. The previous two months had seen above average rainfall that had left the ground saturated and the storm led to widespread flooding, memorably in the area around Tewkesbury.


Fig 1: Hourly precipitation rate averaged over a box representing England and Wales, comparing the different simulations. Both model runs (starting at 0900 and 1200 on 19 July 2007) are plotted for the global, 12km and 4km simulations.

We simulated this event by running the Met Office’s forecast model in three configurations: a global model with a grid cell resolution of approximately 40km around the UK, and higher resolution local models for just the UK region with 12km and 4km grid spacings. These were each set running from two start times (0900 and 1200) on 19 July. Alongside these, and using the same 0900 starting point, we ran the High-Resolution Global Environmental Monitoring (HiGEM) climate model, which is based on the same underlying computer code as the forecast models. Figure 1 shows a comparison of the hourly rain rate over England and Wales between all the model runs. All of the forecast models have a similar peak rainfall rate and consistent timing, while the HiGEM model is a little more intense and peaks somewhat later. Figure 2 shows a comparison of the accumulated rainfall from the global model and HiGEM over the two days, as well as the rainfall measured by the Met Office’s network of rain gauges. The global model produces similar results to the 4km and 12km models, and all are close to what was actually observed. The peak rainfall from these models appears along a northwest-southeast line that is displaced only slightly to the northeast of the real location. The HiGEM model produces most of its rainfall further north and on a more east-west alignment. This reflects the later time of its peak rainfall intensity, since the system was moving northwards and rotating at this point. Averaged over England and Wales, the HiGEM model overestimated the total amount of rain by 17% compared to the observed value, whereas the forecast models underestimated it by 11%. So, in this particular case and on the scale of England and Wales, the HiGEM model produced a total rainfall amount with comparable accuracy to the forecast models, albeit an over- rather than an under-estimate.


Figure 2: Comparison of the accumulated global (left) and HiGEM (right) model rainfall from 1200 on 19th to 0900 on 21st. The model data have both been interpolated onto the same grid with 12km spacing and the size of the original gridboxes is shown. Also shown are the values measured by the Met Office rain gauge network at various locations.

Another assessment we carried out was to compare the statistics of the rainfall produced in a long run (50 years) of the climate model to the long-term rainfall record for England and Wales (1931 to 2011). Overall, the HiGEM model produced a range of daily rainfall amounts that was very similar to the observed values. The model did tend to produce a slightly larger proportion of light, drizzly rain days compared to reality and to underproduce the fraction of days with extreme rainfall. This difference became more pronounced when the total rainfall amount was compared for longer accumulation times, ranging from a few days up to a month. This suggests that the model has a tendency to miss some of those situations where a series of storm systems pass over the UK in succession. So, while the model is still capable of producing heavy rainfall events, it may underpredict the instances where the ground has become saturated by previous storm systems and there is thus an increased risk of flooding.
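
The comparison over longer accumulation periods can be illustrated with a short sketch (the gamma-distributed daily series below are stand-in data, not the HiGEM output or the England and Wales record; the percentile and window choices are also assumptions):

```python
import numpy as np

def accumulation_percentile(daily_rain_mm, window_days, q=99):
    """q-th percentile of rolling accumulations of length window_days
    for a 1-D series of daily area-averaged rainfall (mm)."""
    rain = np.asarray(daily_rain_mm, dtype=float)
    accumulations = np.convolve(rain, np.ones(window_days), mode="valid")
    return np.percentile(accumulations, q)

# Stand-in daily series to show the shape of the comparison
rng = np.random.default_rng(0)
observed = rng.gamma(shape=0.7, scale=4.0, size=365 * 80)   # ~80-year record
model = rng.gamma(shape=0.8, scale=3.3, size=365 * 50)      # ~50-year model run
for window in (1, 5, 10, 30):
    print(f"{window:2d}-day 99th percentile: "
          f"obs {accumulation_percentile(observed, window):6.1f} mm, "
          f"model {accumulation_percentile(model, window):6.1f} mm")
```

A model that misses runs of successive storm systems would show its shortfall in the extremes growing as the accumulation window lengthens.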

The full article is available at http://onlinelibrary.wiley.com/doi/10.1002/qj.2428/abstract

Integration meeting

By Dr. Rob Thompson (University of Reading)
29th July 2014

A month ago the SINATRA and FRANC projects had a combined meeting in Reading, all set to discuss integration and the future of the FFIR project. The day started with some talks about the more basic aspects of the projects, intended to get the audience up to speed and to place the various elements of FRANC and SINATRA into context. I talked about the weather aspects of intense rainfall, focusing on convection (exactly the sort of thing we’ve seen in the last few weeks in the UK), and then moved on to the current state of operational radar meteorology in the UK. Sue Ballard told us about the state of modelling of convection in particular and the importance and current science of the data assimilation scheme. This was followed by Chris Skinner on morphodynamic hydrology models (certainly a learning curve for me there), and finally Slobodan Djordjevic telling us about state-of-the-art urban flood forecasting – some of those resolutions seem amazing.


Then it was poster time over coffee. Each work task in the two projects had a poster to explain its work to the other members; this is of course vital for integration – we need to know what each other does to be able to see how the tasks fit together. On a personal note, this was my chance to see which work will benefit from the radar improvements we are working on; of course the answer is most of it, but some certainly seems more direct and important. The poster session certainly got people talking and interacting, the first step towards integrating.

Breakout sessions followed, spread around lunch. The sessions were fast paced, but certainly got the participants discussing integration and the future of FFIR. The reports from the different groups showed that there is much data and knowledge to share during FFIR, and we hope that this meeting will be the catalyst to get the integration of the projects really going, and lead to thoughts for further integration as the FFIR project continues.

 

How to best use the initial conditions of the forecast

By Andrew Oxley (University of Surrey)
25th July 2014

This last year some areas of Britain have seen some very damaging floods. Brief periods of intense rainfall can lead to flash flooding, which can cause a lot of damage to properties and even threaten lives. An advance flood warning system is important so preparations can be made to minimise the damage. To improve the predictions of these events, more accurate forecasts of heavy rainfall are needed.

It is important to understand how a weather forecast is produced. Weather forecasting is an initial value problem: we have observed and calculated data at a certain time, and we need to integrate the equations of motion forward in time to produce a forecast. So, two ways to make a forecast more accurate are to increase the accuracy of our initial guess of the state of the atmosphere, and to improve the way this information is used to produce the forecast.
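
As an illustration of what an initial value problem means in practice (using the toy Lorenz 1963 system rather than a real forecast model), the sketch below integrates the same equations from two slightly different initial states and shows how the forecasts diverge; this is why both the accuracy of the initial guess and the way the equations are integrated matter:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) equations, a classic chaotic toy system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(initial_state, dt=0.01, n_steps=1500):
    """Integrate forward in time from an initial state with fourth-order Runge-Kutta."""
    trajectory = [np.array(initial_state, dtype=float)]
    for _ in range(n_steps):
        s = trajectory[-1]
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        trajectory.append(s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
    return np.array(trajectory)

# Two 'forecasts' from almost identical initial conditions
control = integrate([1.0, 1.0, 1.0])
perturbed = integrate([1.0 + 1e-3, 1.0, 1.0])
print(np.abs(control[-1] - perturbed[-1]))  # the tiny initial difference has grown large
```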


Coming from a mathematics background, my PhD is focusing on the latter. We do not have a perfect, computationally viable mathematical model describing what is happening in the atmosphere; one of the major challenges in weather forecasting is to work with the basic equations and reduce them down to systems of equations that can be used to produce a forecast. During this process we inevitably lose information. My research involves looking at models of cloud formation and growth and seeing what kinds of behaviour we are missing by taking the linear, workable case, and how regime changes, such as the transition to deep convective clouds, affect the behaviour, something which hasn’t been studied in great depth.

Currently we are revisiting work conducted by Christopher Bretherton, who derives a linear model of moist convection, and seeing what we can learn from it and how we can extend the theory further.