
Atmospheric Precursors to Flash Floods

By Dr. Adrian Champion (University of Reading)
19th May 2014

One of the most important requirements for improving our prediction of flash floods is to know and understand their atmospheric precursors, i.e. what was the atmosphere doing that caused the flash flood to occur? As mentioned in an earlier post (Summer Intense Rainfall Events vs Winter Flooding, 28th January 2014, Adrian Champion), there are many problems associated with predicting flash floods; however, the main issue in understanding the atmospheric precursors is the scale at which flash floods occur.

Flash floods, floods that last for less than a day, affect a very small area and are caused by weather systems that are only a few tens of kilometres in size. This makes them very difficult to predict and observe: weather forecast models have only recently become able to resolve systems of this size, and ground observations are also sparse at this scale. Detecting the atmospheric features that cause flash floods is therefore extremely challenging.

Atmospheric Rivers

Atmospheric Rivers are areas of high moisture convergence that, when present, stretch back from the UK coastline across the Atlantic for at least 2000 km and are present for at least 18 hours. This represents a significant amount of moisture that has the potential to fall as intense rain over the UK. Atmospheric Rivers have already been shown to be the cause of 60% of all the most extreme winter flooding events in the UK over the past 30 years.
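As a toy illustration, the two criteria quoted above (stretching back at least 2000 km and present for at least 18 hours) can be expressed as a simple classification check. This is only a sketch: the event records and field names below are invented for illustration, not from any real detection algorithm.

```python
# Illustrative sketch: flag candidate atmospheric-river events using the two
# criteria quoted in the post (>= 2000 km in length, persisting >= 18 hours).
# The event records below are hypothetical.

def is_atmospheric_river(length_km: float, duration_hours: float,
                         min_length_km: float = 2000.0,
                         min_duration_h: float = 18.0) -> bool:
    """Return True if a moisture-plume event meets both AR criteria."""
    return length_km >= min_length_km and duration_hours >= min_duration_h

events = [
    {"name": "event_a", "length_km": 2500.0, "duration_hours": 24.0},
    {"name": "event_b", "length_km": 1200.0, "duration_hours": 30.0},  # too short
    {"name": "event_c", "length_km": 2100.0, "duration_hours": 12.0},  # too brief
]

ars = [e["name"] for e in events
       if is_atmospheric_river(e["length_km"], e["duration_hours"])]
print(ars)  # -> ['event_a']
```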

Preliminary findings suggest that Atmospheric Rivers cannot be associated with summer flash flooding events. This is not particularly surprising, as atmospheric rivers provide a continual supply of moisture over a large area for an extended period; this does not match the scale or duration of flash floods. Therefore one of the questions being investigated by SINATRA is whether there is a common atmospheric feature that can be attributed to summer flash floods. This also requires an observational record of intense rain events.


Observing Intense Rain

To associate particular atmospheric features with summer flash floods we need to know when there was intense rain. Part of the work being undertaken at the Department of Meteorology, University of Reading is to create a record of when flash floods occurred using an observational dataset. One dataset being used is the raingauge network: a series of collectors that record every time 0.2 mm of rain has been collected, known as tipping bucket raingauges. The advantage of this dataset is that it observes the amount of rain reaching the ground with a high degree of accuracy. The disadvantage is that the gauges are spread sporadically across the UK and are very susceptible to errors, either mechanical or due to local factors. These errors need to be addressed before the recorded intensities can be used.
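The conversion from tip records to intensity is straightforward: each tip corresponds to 0.2 mm of rain, so counting the tips in a time window and scaling to an hourly rate gives the intensity. A minimal sketch, with hypothetical tip times:

```python
# Sketch: convert tipping-bucket tip times into a rainfall intensity.
# Each tip represents 0.2 mm of collected rain, as described in the post.
# The tip times (in minutes) below are invented for illustration.

TIP_DEPTH_MM = 0.2  # depth of rain per bucket tip

def intensity_mm_per_hr(tip_times_min, window_start_min, window_len_min):
    """Rainfall intensity (mm/hr) from tips falling within a time window."""
    tips = sum(window_start_min <= t < window_start_min + window_len_min
               for t in tip_times_min)
    depth_mm = tips * TIP_DEPTH_MM
    return depth_mm * 60.0 / window_len_min

# A burst of 30 tips in 10 minutes = 6 mm in 10 min = 36 mm/hr
tip_times = [i / 3.0 for i in range(30)]   # one tip every 20 seconds
print(intensity_mm_per_hr(tip_times, 0, 10))  # -> 36.0
```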

A further problem is determining what rainfall intensity may lead to a flash flood. This is highly dependent on the ground conditions prior to the intense rain. One of the causes of the prolonged flooding over the winter was the ground becoming saturated, followed by a series of heavy rainfall events. The ground is still saturated from the winter flooding, and therefore even a relatively low-intensity rain event may cause flash flooding. The opposite is also a problem: during last summer's drought the ground was so dry that it was unable to absorb any rain, resulting in a higher chance of flash flooding.


Summary

It is clear that associating flash floods with a common atmospheric feature is a complex task that has many problems. Whether or not an intense rain event causes a flash flood is dependent on the preceding ground conditions, where the rain falls (i.e. which catchment) and the period over which the rain falls. It is hoped, however, that by finding an atmospheric feature that is commonly associated with summer flash floods, the prediction of these events can be improved.

Tyne project wins national river partnership prize

By Dr. Geoff Parkin and Eleanor Starkey (Newcastle University)
15th May 2014

A project involving a partnership between Tyne Rivers Trust, Newcastle University, and community groups has won the Partnership category in the inaugural England River Prize run by Environment Agency, River Restoration Centre and WWF.

The Haltwhistle Burn Catchment Restoration Fund (CRF) project involves a ‘total catchment’ approach which brings together organisations and individuals to tackle issues of water quality and flood risk management, as well as collectively improving our understanding of the sub-catchment and river processes. To support the CRF project outcomes, Tyne Rivers Trust has part-funded Eleanor Starkey’s PhD within CEG, with supervisors Geoff Parkin, Paul Quinn and Andy Large. Eleanor’s research project, ‘Community Monitoring and Modelling for Catchment Management and Restoration within the UK’, is focussing on engagement with the local community in the Haltwhistle Burn catchment, a tributary of the Tyne, to monitor a range of catchment parameters and issues using low-cost and simple techniques, to develop and test green-engineering catchment management techniques, and to use modelling strategies that can provide meaningful results back to the community. This work builds on existing research in CEG in natural flood management, hydraulic and hydrological model development, and crowd-sourcing methods for data gathering.


This research study is closely aligned with the Sinatra project objectives, in using community-sourced data to provide essential information on rapidly responding catchments, which complement the spatially sparse national observation networks for rainfall and river flows. The study is providing evidence of the value of this information, as well as testing methods for engaging with local communities that will help to support the work of the Sinatra Flood Action Team (FloAT).

Visit http://research.ncl.ac.uk/haltwhistleburn/ for further details.

FFIR at the European Geosciences Union’s General Assembly, 2014

By Dr. Chris Skinner (University of Hull)
9th May 2014

The scientific conference is a vital way for scientists to meet and discuss their research with one another. It is an opportunity both to share your latest research, receiving first-hand feedback and criticism on it, and to catch up on the latest cutting-edge research going on in other institutions. When it comes to conferences, they do not get much bigger than the European Geosciences Union’s General Assembly, affectionately known as EGU.

EGU is an annual, week-long conference held in Vienna around Easter time. For many of us working as part of the FFIR programme it is a vital fixture in our calendars, and that is simply because of its size. It is big, really big. To give you some figures taken from their website, the 2014 meeting was attended by 12,437 scientists from 106 countries, who presented 9,583 posters and gave 4,829 oral presentations. I was but one of 1,120 scientists from the UK. The advantage of it being so big is that a lot of people (important, clever and useful people) attend, and they encompass a wide range of disciplines. It is fertile ground for new ideas and collaborations.


A 360° panoramic from outside the EGU (Austria Centre Vienna)

A typical day at EGU is a long one. The oral sessions begin at 8.30am, each block consisting of six 15-minute presentations before a half-hour coffee break, where one can sample the delights of a Viennese Melange. Two sessions in the morning are followed by an hour and a half for lunch. This is a chance to grab some food, peruse the several large poster halls, and have a look around the displays in the foyer. These range from ESA, to representatives of scientific publishers, to companies promoting the equipment or services they have for sale. My personal favourite was the Earth Engine team from Google, who were demonstrating their beta for an open-source GIS; it is one to keep an eye on. After lunch there are a further two oral session blocks until 5.00pm, followed by a two-hour poster session where you have the opportunity to talk to the posters’ authors. These are always lively and busy. From 7.00pm there are often further things to attend, such as meetings, workshops or debates; my week included a SINATRA team meeting and a function to celebrate the successful first year of the Earth Surface Dynamics open-access journal.

A session I found particularly interesting this year was the “Precipitation: Measurement, Climatology, Remote Sensing, Modelling” session. It featured several presentations regarding the development of the Global Precipitation Measurement mission (GPM), which aims to dramatically increase the coverage of satellite instrumentation that can directly detect rainfall and its relative intensity. The core satellite in the constellation launched earlier in 2014, and from the sessions it is clear that it is working well; the indications are that our ability to observe rainfall from orbit will be greatly improved. This might not have a huge impact on forecasting FFIR in the UK, where we are well served by ground recording instrumentation, but it will have a big impact on areas that are not, such as South America and sub-Saharan Africa.

My favourite presentation of the week, however, was given by Massimiliano Zappa, from the Swiss Federal Research Institute. Zappa’s presentation was one of the invited presentations in the “Ensemble hydro-meteorological forecasting” session, titled “HEPS challenges the wisdom of the crowds: the PEAK-Box Game. Try it yourself!” The PEAK-Box approach is a method for better understanding and communicating the uncertainty around peak flood forecasts, using a visual box to represent the possible range of peak discharges from a forecast ensemble. During the session Zappa had the audience make their own predictions (a bit like the old ‘Spot the Ball’ competitions). He predicted that, through the “wisdom of the crowd”, the average of the audience’s responses should be close to the actual peak flood. I will let you know, once he has collated and distributed the results, whether or not it worked!

This post gives a mere flavour of the activities at EGU. For me personally, the most rewarding and productive aspect is getting to meet, face to face, many of the people I will be working alongside in the SINATRA project, as well as many excellent scientists from around the globe with whom I had beforehand only ever communicated on Twitter or in the blogosphere. And if you get the chance to escape for a few hours, there is always the beautiful city of Vienna to explore.


View over the New Danube at sunset, close to EGU. It’s pretty nice.

You can follow Chris Skinner on Twitter: @cloudskinner

Representing model error in high resolution ensemble forecasts

By Dr. Laura Baker (University of Reading)
14th April 2014

Ensemble weather forecasts are used to represent the uncertainty in the forecast, rather than just giving a single deterministic forecast. In a very predictable system, all the ensemble members typically follow a similar path, while in an unpredictable system, the ensemble may have a large divergence or spread between members.
Schematic diagram of an ensemble system

A simple way to create an ensemble is to perturb the initial conditions of the forecast. Since the atmosphere is a chaotic system, a small perturbation can potentially lead to a large difference in the forecast. However, perturbing the initial conditions alone is sometimes not enough, and these ensembles can often be underspread, meaning that they do not cover the full range of possible states that could occur. The ensemble forecast could then miss what actually occurs in observations. One way to further increase the spread of the ensemble is to add some representation of model error, or model uncertainty, to the forecast. Model uncertainty becomes relatively more important at smaller scales, so in a high-resolution ensemble it is more important to include these effects.
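The growth of spread from perturbed initial conditions can be illustrated with a toy chaotic system. The sketch below uses the Lorenz-63 equations, a standard stand-in for chaotic atmospheric dynamics (it is not the Met Office model), with a simple Euler integration and tiny random initial perturbations:

```python
# Toy initial-condition ensemble on the Lorenz-63 system. Tiny perturbations
# to the starting state grow into a wide ensemble spread, as in the schematic.
import random

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

random.seed(1)
n_members = 20
# perturb only the initial x value, by at most +/- 0.001
members = [(1.0 + random.uniform(-1e-3, 1e-3), 1.0, 1.0)
           for _ in range(n_members)]

spread_early = spread_late = None
for step in range(1500):
    members = [lorenz_step(*m) for m in members]
    xs = [m[0] for m in members]
    spread = max(xs) - min(xs)
    if step == 10:
        spread_early = spread      # spread soon after the start: still tiny
    if step == 1499:
        spread_late = spread       # spread after 15 model time units: large

print(spread_early, spread_late)   # the spread grows enormously
```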

A recent study as part of the DIAMET project (http://www.ncas.ac.uk/index.php/en/diamet-introduction) aimed to investigate the effects of randomly perturbing individual parameters in the forecast model as a way of representing model error. We used a configuration of the Met Office Unified Model with a resolution of 1.5 km and a domain covering the southern part of the UK. We generated an ensemble with one control member and 23 perturbed members. The initial conditions for each ensemble member came from a global ensemble forecast with a lower resolution (60 km). Since our domain is a sub-domain of the global model, the lateral boundary conditions are also derived from the global model forecast, and each ensemble member has perturbed boundary conditions corresponding to its initial-condition perturbations.

We focussed on a single case study which occurred during one of the DIAMET field campaign periods. This case was particularly interesting from an ensemble perspective because it involved the passage of a frontal rain band with an interesting banded structure that was not well represented in the operational forecast. None of the individual ensemble members captured the two separate rain bands, but some of them had rain in the location of the second band.

The left panel shows the radar rain rate at 1500 UTC on 20 September 2011. The right panel shows the control member forecast rain rate at the same time. The model fails to capture the second rain band.
This figure shows each of the members of the ensemble (before the parameter perturbations were applied) at the same time as the figure above. Note the large variability in the position and intensity of the rain band between members.

We perturbed parameters in the boundary-layer and microphysics parameterisation schemes. Sixteen parameters, known by experts to have some uncertainty in their values, were chosen to be perturbed. We perturbed each parameter randomly within a certain range, and each ensemble member had different random perturbations applied to its parameters. We focussed our analysis on near-surface variables (wind speed, temperature and relative humidity), which could be compared with observations from surface stations, and on rainfall rate and accumulation, which could be compared with radar observations. We found that, for the near-surface variables, representing model error using this method improved the forecast skill and increased the spread of the ensemble. In contrast, for the rainfall, the forecast skill and ensemble spread were degraded by this method after the first couple of hours of the forecast.
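The parameter-perturbation idea described above can be sketched in a few lines: each ensemble member receives its own random draw of every uncertain parameter from within an expert-specified range. The parameter names and ranges below are invented for illustration and are not the actual Unified Model values:

```python
# Sketch of random parameter perturbation for an ensemble. Each of the 23
# perturbed members gets an independent draw of each uncertain parameter.
# Parameter names and (min, max) ranges here are hypothetical.
import random

PARAM_RANGES = {
    "droplet_fall_speed_factor": (0.8, 1.2),
    "boundary_layer_mixing_len": (20.0, 80.0),
    "ice_nucleation_scaling":    (0.5, 2.0),
}

def perturbed_member(rng):
    """Draw one random parameter set for a single ensemble member."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(42)
ensemble = [perturbed_member(rng) for _ in range(23)]  # 23 perturbed members

# every draw stays within its expert range, but members differ from each other
assert all(PARAM_RANGES[k][0] <= v <= PARAM_RANGES[k][1]
           for m in ensemble for k, v in m.items())
print(len(ensemble))  # -> 23
```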

This study is a useful first step towards developing a high-resolution ensemble system with a representation of model error. The work was recently published in Nonlinear Processes in Geophysics and can be accessed here: http://www.nonlin-processes-geophys.net/21/19/2014/npg-21-19-2014.html


Modern Weather Radar – Developments for Intense Rainfall

By Dr. Rob Thompson (University of Reading)
4th April 2014

If we are looking to predict flooding from intense rainfall, we are going to need to know just how intense that rainfall really is, and where. The traditional way to measure rainfall is with a raingauge: a collector of around 50 square centimetres that measures the amount of rain falling into it. Of course, a raingauge can only measure the rainfall at its own site; for areal coverage, we turn to radar.

Weather radar developed after the Second World War, when weather echoes had been noticed in aircraft- and ship-tracking radars. Development continued until the 1980s, when weather radar networks were being built, including in the UK. The (slightly out of date) Met Office fact sheet 15 provides an excellent explanation of how weather radar works and of the addition of Doppler radar (something of a misnomer: it does not in fact use the Doppler shift, as the effect would be too small to measure), so I won't repeat that here. Instead I shall discuss the latest development, one currently rolling out across the UK network: dual-polarisation (at the time of writing there are 4 dual-polarisation radars in the UK network).

Thurnham radar, Kent
Doppler Radar Weather Station, Thurnham (1) (Danny Robinson) / CC BY-SA 2.0


What is Dual-Polarisation?

So what is dual-polarisation and what does it give us? A modern dual-polarisation radar treats horizontally and vertically polarised waves separately. Usually they are transmitted simultaneously and then received and separated. This yields a range of new parameters that are of use to the radar meteorologist, and therefore of great benefit. Those benefits are only now being fully researched (including in the FRANC project by me), but they include improved rejection of non-meteorological echoes, better classification of echoes (detecting rain/snow/hail etc.) and, importantly, an improved ability to measure heavy rain quantitatively.

Using Radar in Intense Rainfall

During very heavy rainfall, some of the electromagnetic energy from the radar is absorbed or scattered out of the beam (it is the small proportion scattered back to the radar that forms the radar signal), which means there is less power in the beam beyond the rain. When rain is very heavy, returns from beyond it come from a significantly weakened beam, and so the radar appears to show less rainfall than is truly there.


The figure shows the reflectivity of a very intense rainfall event passing London on 20th July 2007. Warmer colours show higher reflectivities and hence heavier rainfall. The radar is the star on the right side of the image, and it is clear that there is a “hole” in the radar echo behind the intense band of rainfall. This appears as a “searchlight”, and is at its worst when the intense rain band lines up with the radar for maximal absorption of the beam. This particularly extreme case is also labelled with how serious the problem is: without knowledge of this “attenuation” problem, only 4 mm/hr of rainfall would be observed at Heathrow, when gauge measurements suggested it was 68 mm/hr. Without dual-polarisation capability, a conventional radar can estimate the attenuation, but the estimate is unstable and so cannot be used to make large corrections such as this case requires, as it could introduce greater errors than it fixed. That means a conventional radar gives its least certain results just when they are most vital.
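To see why signal loss behind heavy rain translates into such a large underestimate, it helps to remember that rain rate is usually derived from reflectivity via a power law; the sketch below uses the standard Marshall-Palmer relation (Z = 200 R^1.6), which is a common textbook choice rather than necessarily the operational one, and an illustrative 20 dB loss rather than the value from the London case:

```python
# Sketch: how a loss of signal (in dB) behind heavy rain maps to a large
# underestimate of rain rate, via the standard Marshall-Palmer Z-R relation
# Z = 200 * R^1.6. The 20 dB loss here is illustrative only.

def rain_rate_from_dbz(dbz):
    """Rain rate (mm/hr) from reflectivity in dBZ via Marshall-Palmer."""
    z = 10.0 ** (dbz / 10.0)          # linear reflectivity (mm^6 m^-3)
    return (z / 200.0) ** (1.0 / 1.6)

true_dbz = 50.0                        # genuinely heavy rain
attenuated_dbz = true_dbz - 20.0       # 20 dB lost passing through the band

print(rain_rate_from_dbz(true_dbz))        # roughly 49 mm/hr
print(rain_rate_from_dbz(attenuated_dbz))  # roughly 2.7 mm/hr
```

A 20 dB two-way loss thus shrinks an apparent ~49 mm/hr rate to under 3 mm/hr, the same order of discrepancy as the 68 vs 4 mm/hr Heathrow example above.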

Dual-Polarisation to the Rescue

So why will dual-polarisation help? Amongst the parameters gained by the addition of dual-polarisation is the “differential phase shift”. This is measured as the phase difference between the horizontally and vertically polarised returns; the horizontal return is delayed compared to the vertical. That delay is a result of drop shapes (as described in this NASA article, which turns very technical quite suddenly, be warned!): raindrops are not the teardrop shape usually depicted but are spheres when small, becoming more hamburger-shaped as they grow larger. This shape means large drops appear larger to the horizontal wave than to the vertical wave, and so the horizontal wave suffers more delay in phase; the phase difference builds up as the beam passes through large raindrops. And of course larger raindrops generally mean more rain. That gain in differential phase shift goes hand in hand with attenuation, as you can see in the figure below (at the same time as the reflectivity plot above; note that in this extreme case the phase difference wraps around and restarts, which is corrected in the algorithms).

Differential phase shift for the same event as the reflectivity image above.

By using the differential phase shift as an estimator of the attenuation rate, stable corrections to the measured reflectivity can be made. But what is the relationship between differential phase shift and attenuation? In fact it varies quite significantly, so we need another constraint to improve further.

Radiometric Emission

That extra constraint comes from the radiometric emission of the rain at the radar frequency. Kirchhoff's law, known as long ago as 1859, states that anything that absorbs electromagnetic waves will also emit them equally effectively. We use the radar to measure this emission as an increase in the noise where there are no scatterers reflecting the beam. This can be converted into a total attenuation along the beam. Once we have the total attenuation, and the differential phase shift to tell us how to distribute that total, we can make reliable corrections.
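The combination described above can be sketched in simplified form: take the total path attenuation implied by the emission measurement and distribute it along the ray in proportion to the cumulative differential phase shift, then add the accumulated attenuation back onto each range gate. The numbers, and the assumption that attenuation accumulates linearly with phase, are illustrative only, not the operational algorithm:

```python
# Simplified sketch of the constrained correction: the radiometric emission
# gives a total two-way path attenuation (PIA) for the ray, and the
# cumulative differential phase profile says where along the ray to place it.
# All values below are hypothetical.

def correct_reflectivity(measured_dbz, phase_deg, total_pia_db):
    """Correct each range gate by the attenuation accumulated in front of it,
    distributing the total PIA in proportion to cumulative phase shift."""
    total_phase = phase_deg[-1]
    corrected = []
    for dbz, phi in zip(measured_dbz, phase_deg):
        attenuation_so_far = total_pia_db * (phi / total_phase)
        corrected.append(dbz + attenuation_so_far)
    return corrected

# hypothetical ray: reflectivity drops behind a heavy-rain cell as phase rises
measured = [35.0, 48.0, 30.0, 22.0]    # dBZ at successive range gates
phase    = [1.0, 20.0, 58.0, 60.0]     # cumulative differential phase (deg)
pia      = 18.0                        # total path attenuation from emission

print([round(v, 1) for v in correct_reflectivity(measured, phase, pia)])
# -> [35.3, 54.0, 47.4, 40.0]: the gates behind the cell are boosted most
```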

That will lead to more accurate rainfall estimates for intense, flood-producing rainfall: more accurate measurements that can be fed into the computer models of both the weather and the hydrology to predict floods before they happen.

Note on Article in BHS Newsletter ‘Circulation’, 120, February 2014, 5-7

David Archer and Hayley Fowler ‘Do floods cause more loss of life from the peak or from their rate of rise?’

The brief paper argues that the speed of onset of flooding is a key factor in causing loss of life, because it can trap householders in places from which they cannot escape. The paper gives examples of historical flash floods with rapid rates of rise in which multiple deaths occurred, including the well-known flood on the Lud at Louth in 1920 (23 deaths), but also floods at Barnsley in 1838 (27 deaths there and 5 elsewhere) and near Truro in 1846 (39 deaths). The paper also describes a more recent flood in which a ’10 foot wall of water’ swept a boy to his death on the River Gelt in June 1982, and a similar event on the West Allen in Northumberland where a swimmer narrowly escaped. Flood waves generated in the headwaters of the River Tyne can persist with very rapid rates of rise, sometimes with a steepening wave front, for 80 km downstream to the estuary. Such an event occurred in July 2002, and although it caused no loss of life, it is clear that such events pose great risks to river users, notably fishermen, who may stand knee-deep in water some distance from the water’s edge. With the flood discharge rising by 160 m³ s⁻¹ in 15 minutes, they would have less than a minute to escape. The article does not specifically answer the question in its title, but it shows that there is a category of risk different from that associated with the peak flow. The intention of the FRANC and SINATRA projects to address such issues as meteorological forecasting, hydrological modelling and forecasting of such flash flood events is noted.

The 2014 flooding – The current scale and what the future holds

The beginning of 2014 has seen extensive flooding, predominantly in the south of England, due to a series of storms passing over the UK that brought some of the largest rainfall totals on record: the wettest January since records began, twice the January monthly average for southern England, and, together with December, the wettest two-month period since records began (http://metofficenews.wordpress.com/2014/02/06/uks-exceptional-weather-in-context/).

The series of storms was caused by an unusually strong North Atlantic jet stream, which resulted in a continual supply of moisture from the Atlantic over the UK and hence the high rainfall totals. The strong North Atlantic jet stream was in turn driven by exceptionally cold weather in Canada and the USA and by a strong Pacific jet stream, caused by enhanced rainfall over Indonesia due to higher-than-normal ocean temperatures.

The effect of climate change will be for these events, and summer flash flooding, to become more frequent, which means that current methods of dealing with flooding and protecting homes in the event of a flood will not be enough. Fundamental changes will be needed in the design of homes and in where homes are built; we may even have to accept that current housing locations that are susceptible to flooding should be abandoned for more flood-resistant locations (http://www.theguardian.com/commentisfree/2014/feb/11/climate-change-flooding-engineer-somerset).

There have been many arguments about whether dredging the rivers would have prevented the current flooding, and whether it would reduce the probability of flooding in the future. It is important to realise that the current flooding is due to extreme rainfall, and not all of it has come from rivers; there has also been groundwater flooding and storm surges. Dredging is very costly, its benefit to locations such as Somerset is unclear, and it will not be a solution for many areas. Dredging isolated parts of a river will only move the problem further downstream; it is not a miracle cure for the current flooding crisis (http://www.theguardian.com/environment/2014/feb/12/flood-crisis-dredging-climate-change).

There are no quick fixes to the current flooding; however, the UK should be prepared for flooding to become more frequent. Once the current flood-water has receded will be the time to remember the flooding and look towards long-term solutions to a problem that is only going to become more frequent.

Summer Intense Rainfall Events vs Winter Flooding

The UK has recently experienced two different types of flooding event. The first was coastal flooding, caused by the low pressure of an extra-tropical cyclone essentially pushing water from the sea into the rivers and along the coast. The second was caused by the prolonged rainfall associated with extra-tropical cyclones, which typically travel across the Atlantic from the east coast of North America, picking up moisture that then falls as rain as the cyclones pass over raised land. Both of these events are more typical of the winter period (October to March) and last several days, delivering a lot of rain over that time. This leads to long periods of flooding (days), typically over large areas, e.g. the whole of southern England.

The Flooding From Intense Rainfall (FFIR) programme is looking at flooding caused by more short-lived events, such as the convective storms that are more typical of the summer period (March to September), resulting in flooding that lasts hours and affects a much smaller area. One of the most infamous recent examples was the flooding in Boscastle in August 2004 (http://www.metoffice.gov.uk/education/teens/case-studies/boscastle), which saw 75 mm of rain in 2 hours, resulting in the destruction of houses and businesses as well as a significant economic impact. The summer 2007 floods that affected the whole of the UK were also due to short-lived intense rainfall events, but over a much wider area.

The difficulty in predicting these events is that they occur at a much smaller scale, typically less than tens of kilometres, compared with the winter events, which can have scales of hundreds to thousands of kilometres. The timing and location of the rainfall are particularly difficult to predict and have a big impact on how well a specific catchment can cope: large catchments, e.g. the Thames, can cope with a short intense rainfall event much more easily than small catchments, e.g. the Valency, which runs through Boscastle. To predict these events with greater accuracy a number of questions need to be addressed: 1) what are the atmospheric conditions that lead to these rainfall events, 2) how does the hydrology of each catchment vary, and thus its ability to cope with such events, and 3) how do the different catchments respond to different rainfall intensities? These are the research questions of FFIR, which is looking at catchments in the UK and their responses to surface water and flash floods.

Welcome

Flooding From Intense Rainfall (FFIR) is a 5-year NERC-funded programme which aims to reduce the risk of damage and loss of life caused by surface-water and flash floods through improved identification, characterisation and prediction of the interacting meteorological, hydrological and hydro-morphological processes that contribute to flooding associated with high-intensity rainfall events. There are currently two projects within the FFIR programme, both directed by the University of Reading: Project FRANC and Project SINATRA.