What’s in a number?

By Nancy Nichols

Should you care about the numerical accuracy of your computer?  After all, most machines now retain about 16 digits of accuracy, yet only about 3 – 4 significant figures are needed for most applications;  so what’s the worry?   In fact, there have been a number of spectacular disasters due to numerical rounding error.  One of the best known is the failure of a Patriot missile to track and intercept an Iraqi Scud missile in Dhahran, Saudi Arabia, on February 25, 1991, resulting in the deaths of 28 American soldiers.

The failure was ultimately attributable to poor handling of rounding errors.  The computer doing the tracking calculations had an internal clock whose values were truncated when converted to floating-point arithmetic, with a relative error of about 2⁻²⁰.   The clock had run up a time of 100 hours, so the calculated elapsed time was too long by 2⁻²⁰ × 100 hours ≈ 0.3433 seconds, during which time a Scud would be expected to travel more than half a kilometre.

 

(See The Patriot Missile Failure)
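A quick back-of-the-envelope check of that arithmetic, as a minimal Python sketch (the Scud speed used below is an assumed round figure of about 1.7 km/s, not taken from the original report):

```python
# Accumulated clock drift from a truncation error of about 2**-20
rel_error = 2.0 ** -20            # relative timing error due to truncation
elapsed = 100 * 3600              # 100 hours of operation, in seconds
drift = rel_error * elapsed       # accumulated timing error, in seconds
print(f"drift = {drift:.4f} s")   # ~0.3433 s

scud_speed = 1700.0               # assumed Scud speed in m/s (illustrative)
print(f"distance = {drift * scud_speed:.0f} m")   # well over half a kilometre
```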

The same problem arises in other algorithms that accumulate and magnify small round-off errors due to the finite (inexact) representation of numbers in the computer.   Algorithms of this kind are referred to as ‘unstable’ methods.  Many numerical schemes for solving differential equations have been shown to magnify small numerical errors.  It is known, for example, that L.F. Richardson’s original attempts at numerical weather forecasting were essentially scuppered by the unstable methods used to compute the atmospheric flow.   Much time and effort have since been invested in developing and carefully coding methods for solving algebraic and differential equations so as to guarantee stability, and excellent software is publicly available.  Academics and operational weather forecasting centres in the UK have been at the forefront of this research.

Even with stable algorithms, however, it may not be possible to compute an accurate solution to a given problem.   The reason is that the solution may be sensitive to small errors  –  that is, a small error in the data describing the problem causes large changes in the solution.  Such problems are called ‘ill-conditioned’.   Even entering the data of a problem into a computer  –  for example, the initial conditions for a differential equation or the matrix elements of an eigenvalue problem  –   must introduce small numerical errors in the data.  If the problem is ill-conditioned, these then lead to large changes in the computed solution, which no method can prevent.

So how do you know if your problem is sensitive to small perturbations in the data?  Careful analysis can reveal the issue, but for some classes of problems there are measures of the sensitivity, or the ‘conditioning’, of the problem that can be used.   For example, it can be shown that small perturbations in a matrix can lead to large relative changes in the inverse of the matrix if the ‘condition number’ of the matrix is large.  The condition number is measured as the product of the norm of the matrix and the norm of its inverse.  Similarly, small changes in the elements of a matrix will cause its eigenvalues to have large errors if the condition number of the matrix of eigenvectors is large.   Of course, determining a condition number is implicitly a computational problem in itself, but accurate methods for estimating condition numbers are available.
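As a minimal illustration in Python (the matrix here is my own example, not one from the article):

```python
import numpy as np

# Condition number kappa(A) = ||A|| * ||A^{-1}||, here in the 2-norm
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])    # nearly singular, hence ill-conditioned

kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(kappa)                     # ~4e4: data errors can be amplified ~40,000-fold
print(np.linalg.cond(A, 2))      # NumPy's built-in routine agrees
```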

An example of an ill-conditioned matrix is the covariance matrix associated with a Gaussian distribution.   The following figure shows the condition number of a covariance matrix obtained by sampling a Gaussian correlation function at 500 points, using a step size of 0.1, for varying length-scales [1].  The condition number increases rapidly to 10⁷ for length-scales of only size L = 0.2 and, for length-scales larger than 0.28, the condition number exceeds the reciprocal of the machine precision and cannot even be calculated accurately.
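A sketch of the kind of computation behind that figure, assuming a Gaussian correlation of the form exp(−d²/2L²) (the precise form and parameters used in [1] may differ):

```python
import numpy as np

x = np.arange(500) * 0.1            # 500 sample points with step size 0.1

def gaussian_corr(x, L):
    """Gaussian correlation matrix with length-scale L."""
    d = x[:, None] - x[None, :]     # pairwise separations
    return np.exp(-d**2 / (2 * L**2))

for L in (0.1, 0.2, 0.28):
    print(L, np.linalg.cond(gaussian_corr(x, L)))
    # the condition number grows explosively with L; beyond L ~ 0.28 the
    # reported value itself is no longer trustworthy
```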

This result is surprising and very significant for numerical weather prediction (NWP), as the inverses of covariance matrices are used to weight the uncertainty in the model forecast and in the observations used in the analysis phase of weather prediction.  The analysis is achieved by the process of data assimilation, which combines a forecast from a computational model of the atmosphere with physical observations obtained from in situ and remote sensing instruments.  If the weighting matrices are ill-conditioned, then the assimilation problem also becomes ill-conditioned, making it difficult to compute an accurate analysis and subsequently a good forecast [2].  Furthermore, the worse the conditioning of the assimilation problem, the more time it takes to do the analysis. This matters because the forecast needs to be produced in ‘real’ time, so the analysis needs to be done as quickly as possible.

One way to deal with an ill-conditioned system is to rearrange the problem so as to reduce the conditioning whilst retaining the same solution.  A technique for achieving this is to ‘precondition’ the problem using a transformation of the variables.  This is used regularly in NWP operational centres, with the aim of ensuring that the uncertainties in the transformed variables all have a variance of one [1][2].  The table in [1] shows the effect of the length-scale of the error correlations in a data assimilation system on the number of iterations it takes to solve the problem, with and without preconditioning.  The conditioning of the problem is improved and the work needed to solve the problem is significantly reduced.  So checking and controlling the conditioning of a computational problem is always important!
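A minimal sketch of the idea, under simplifying assumptions (direct observations of every variable, H = I, and unit observation error covariance, R = I; this illustrates the control variable transform in general, not the operational scheme):

```python
import numpy as np

n = 100
x = np.arange(n) * 0.1
d = x[:, None] - x[None, :]
B = np.exp(-d**2 / (2 * 0.15**2))   # Gaussian background error covariance

# Hessian of the variational cost function: B^{-1} + H^T R^{-1} H = B^{-1} + I
hessian = np.linalg.inv(B) + np.eye(n)
print(np.linalg.cond(hessian))      # large: inherits the ill-conditioning of B

# Control variable transform x = x_b + B^{1/2} v rescales the background term
# to unit variance; the Hessian in v becomes I + B^{1/2} R^{-1} B^{1/2} = I + B
print(np.linalg.cond(np.eye(n) + B))   # modest: ~1 + largest eigenvalue of B
```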

[1]  S.A. Haben. 2011. Conditioning and Preconditioning of the Minimisation Problem in Variational Data Assimilation. PhD Thesis, Department of Mathematics and Statistics, University of Reading.

[2]  S.A. Haben, A.S. Lawless and N.K. Nichols.  2011. Conditioning of incremental variational data assimilation, with application to the Met Office system, Tellus, 63A, 782–792. (doi:10.1111/j.1600-0870.2011.00527.x)

WMO Symposium on Data Assimilation

by Amos Lawless

In the middle of September scientists from all around the world converged on a holiday resort in Florianópolis, Brazil, for the Seventh World Meteorological Organization Symposium on Data Assimilation. This symposium takes place roughly every four years and brings together data assimilation scientists from operational weather and ocean forecasting centres, research institutes and universities. With 75 talks and four poster sessions, there was a lot of science to fit into the four and a half days spent there.

 

The first day began with presentations of current plans by various operational centres, and both similarities and differences became apparent. It is clear that many centres are moving towards data assimilation schemes that are a mixture of variational and ensemble methods, but the best way of doing this is far from certain. This was apparent from just the first two talks, in which the Met Office and Météo-France set out their different strategies for the next few years. For anyone who thought that science always provides clear-cut answers, here was an example of where the jury is still out! Many other talks covered similar topics, including the best way to get information from small samples of ensemble forecasts in large systems.

 

In a short blog post such as this, it is impossible to cover the wide range of topics discussed in the rest of the week, ranging from theoretical aspects of data assimilation to practical implementations. Subjects included challenges for data assimilation at convective scales in the atmosphere, ocean data assimilation, assimilation of new observation types (including winds from radar observations of insects, lightning and radiances from new satellite instruments) and measuring the impact of observations. Several talks proposed the development of new, advanced data assimilation methods – particle filters, Gaussian filtering and a hierarchical Bayes filter were all covered. Of particular interest was a presentation on data assimilation using neural networks, which achieved results comparable to an ensemble Kalman filter at a small fraction of the computational cost. This led to a long discussion at the end of the day as to whether neural networks may be a way forward for data assimilation. The final session on the last day covered a variety of different applications of data assimilation, including assimilation of soil moisture, atmospheric composition measurements and volcanic ash concentration, as well as application to coupled atmosphere-ocean models and to carbon flux inversion.

 

Outside the scientific programme the coffee breaks (with mountains of Brazilian cheese bread provided!) and the social events, such as the caipirinha tasting evening and the conference dinner, as well as the fact of having all meals together, provided ample opportunity for discussion with old acquaintances and new. I came home excited about the wide range of work being done on data assimilation throughout the world and enthusiastic to continue tackling some of the challenges in our research in Reading.

The full programme with abstracts is available at the conference web site, where presentation slides will also eventually be uploaded:

http://www.cptec.inpe.br/das2017/

Serving society with better weather and climate information.

by Sarah Dance

I have just come back from the European Meteorological Society 2017 conference in Dublin, where I was co-convenor of a session on data assimilation. Its theme was Serving Society with Better Weather and Climate Information. A key challenge for the meteorological communities is how best to harness the wealth of data now available – both observational and modelled – to generate and effectively communicate relevant, tailored and timely information, ensuring the highest quality support for users’ decision-making.  The conference produced some highlight videos that sum up the activities better than I could!

Can cars provide high quality temperature observations for use in weather forecasting?

By Diego de Pablos

I am an undergraduate student at the University of Reading who has recently finished a UROP (Undergraduate Research Opportunities Programme) placement, funded by the University and carried out in partnership with the Met Office. Since I am currently taking the Environmental Physics course in the Meteorology department, this project was of interest to me for two reasons: first, I plan on doing a PhD at Reading and wanted to get a feel for that experience; and secondly, the research topic seemed to have the potential to improve weather forecasting and road safety overall. The project consisted of taking a first look at the temperature observations from the built-in thermometer of a car and comparing them with the UKV model surface temperatures and observations from nearby WOW [1] sites.

Even though the use of vehicles in weather forecasting has been studied before [2], in most cases advanced thermometers were installed on the vehicles to obtain the observations, or other parameters were used instead (e.g. the state of the antilock brakes or windscreen wipers). This project aimed to assess the potential of the native ambient air temperature sensor that most modern cars (less than ten years old) already have, with a view to having these observations available when predicting the road state in the near future.

A series of days of temperature observations registered by a car’s built-in thermometer were studied. The observed temperatures were extracted with an OBD dongle connected to the car’s engine management system via the standard OBD port that cars have installed behind the steering wheel. The dongle sends this information to the driver’s phone via Bluetooth. In the phone app, the observations and other parameters available from the dongle are decoded and then sent to a selected URL over a 3G/4G connection. The data are then stored in MetDB, the observations database used by the Met Office in the UK, and made available for forecasting.
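A rough sketch of the dongle-to-phone-to-server leg of this pipeline, assuming the open-source python-OBD library and an illustrative, made-up upload URL (the actual app and Met Office ingestion details are not described in this post):

```python
import time

import obd        # python-OBD library, talking to an OBD-II Bluetooth adapter
import requests

UPLOAD_URL = "https://example.org/car-obs"   # hypothetical endpoint, for illustration

connection = obd.OBD()    # auto-connect to the paired adapter

while True:
    # query the car's native ambient air temperature sensor
    # (AMBIANT_AIR_TEMP is the library's spelling of the command)
    response = connection.query(obd.commands.AMBIANT_AIR_TEMP)
    if not response.is_null():
        obs = {
            "time": time.time(),
            "air_temperature_C": response.value.magnitude,   # degrees Celsius
        }
        # forward the observation over the phone's 3G/4G connection
        requests.post(UPLOAD_URL, json=obs, timeout=10)
    time.sleep(60)        # reporting interval is an assumption
```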

 

The trial showed a need for further testing of the thermometers, as it was suggested that the sensor readings could have biases with height and speed. However, the potential availability of data is outstanding by sheer quantity alone: around 20 million cars would be available to take part in data collection in the UK.

All in all, using car sensors for weather forecasting seems to have potential and will be studied thoroughly in the near future, in the hope of tying its advancement to that of car technologies.

References:

[1] Weather Observations Website – Met Office. https://wow.metoffice.gov.uk/. Accessed: 10th of August 2017.

[2] W.P. Mahoney III and J.M. O’Sullivan. 2013. Realizing the potential of vehicle-based observations. Bulletin of the American Meteorological Society, 94(7), 1007–1018. (doi:10.1175/BAMS-D-12-00044.1)

UFMRM WG Webinar: “DARE to use CCTV images to improve urban flood forecasts”

It is difficult to accurately predict urban floods; there are many sources of error in urban flood forecasts, due to imperfect model physics, computational limits, input data accuracy, etc. However, many sources of model and input error can be reduced through the use of data assimilation methods – mathematical techniques that combine model predictions with observations to produce more accurate forecasts.

In this talk I will motivate and introduce the idea of using CCTV images as a new and valuable additional source of information in cities for improving urban flood predictions through data assimilation methods. This work is part of the Data Assimilation for the REsilient City (DARE) project.

You can see the whole presentation on YouTube here or view slides here.

Wetropolis flood demonstrator

By Onno Bokhove, School of Mathematics, University of Leeds, Leeds.

  1. What is Wetropolis?

The Wetropolis flood demonstrator is a conceptual, live installation showcasing what an extreme rainfall event is and how such an event can lead to extreme flooding of a city; see Fig. 1 below. A Wetropolis day (wd) is chosen to be 10s, and on average once every 5.5min it rains for 90% of a Wetropolis day, i.e. 9s, in two locations: an upstream reservoir and a porous moor in the middle of the catchment. This is extreme rainfall and it causes extreme flooding in the city. On any given day it can rain for either 10%, 20%, 40% or 90% of the day, and either nowhere, only in the reservoir, only on the porous moor, or in both locations. Rainfall amount and rainfall location are drawn randomly via two asymmetric Galton boards, each with four outcomes; see Fig. 2. Each Wetropolis day, so every 10s, a steel ball falls down each Galton board and determines the outcome, which we can follow visually: at the first split there is a 50% chance of the ball going left and a 50% chance of it going right; at the next two splits, one route can only go right (with 100% chance) while the other splits evenly 50%-50% again; and subsequent splits are even again. The extreme event (90% rainfall in both locations) occurs with probability 7/256, so about 3% of the time; in 100 wd, or 1000s, this amounts to about once every 5.5min on average. When a steel ball rolls through one of the four channels of a Galton board it optically triggers a switch, and via Arduino electronics each Galton board steers pump actions of (1,2,4,9)s, causing it to rain in the reservoir and/or on the porous moor.

Fig. 1. Overview of the Wetropolis flood demonstrator with its winding river channel of circa 5.2m and the slanted flood plains on one side, a reservoir, the porous moor, the (constant) upstream inflow of water, the canal with weirs, the higher city plain, and the outflow into the water tank/bucket with its three pumps. Two of these pumps switch on randomly for (1,2,4,9)s of the 10s ‘Wetropolis day’ (unit: wd). Photo compilation: Luke Barber.

 

Wetropolis’ construction is based on my mathematical design, with a simplified one-dimensional kinematic model representing the winding river, a one-dimensional nonlinear advection-diffusion equation for the rainfall dynamics in the porous moor, and simple time-dependent box models for the canal sections and the reservoir, all coupled together with weir relations. The resulting numerical calculations were approximate but led to the design by providing estimates of the strength of the pumps (1-2l in total for the three aquarium pumps), the length and hence the size of the design, with the river water residence time typically being 15-20s, and the size of the porous moor. The moor visually shows the dynamics of the groundwater level during no, weak or strong rainfall, and how it can delay the throughflow by circa 2-3wd (20-30s) when conditions are dry prior to the rainfall. When the rainfall is strong, e.g., for two consecutive days of extreme Boxing Day rainfall (see the movie in [2]), the moor displays surface water overflow and thus drains nearly instantly into the river channel.
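To give a flavour of the numerical ingredients, here is a minimal Python sketch of one explicit time step for a linear advection-diffusion equation of the kind used for the moor; the grid and parameter values are made up, and the design model itself is nonlinear and coupled to the river and box models. Note the explicit stability limits on the time step:

```python
import numpy as np

nx, dx = 100, 0.01      # grid (values made up for illustration)
u, D = 0.05, 1e-4       # advection speed and diffusivity (assumed)

# Explicit schemes are only stable for small enough time steps:
# the CFL condition for advection and the diffusive limit must both hold.
dt = 0.8 * min(dx / u, 0.5 * dx**2 / D)

xgrid = np.arange(nx) * dx
h = np.exp(-((xgrid - 0.3) ** 2) / 0.005)     # initial hump of groundwater

def step(h):
    """One upwind-advection, centred-diffusion step (periodic boundaries)."""
    hm, hp = np.roll(h, 1), np.roll(h, -1)    # h[i-1], h[i+1]
    adv = -u * (h - hm) / dx                  # first-order upwind (u > 0)
    dif = D * (hp - 2 * h + hm) / dx**2       # centred second difference
    return h + dt * (adv + dif)

for _ in range(500):
    h = step(h)
```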

Fig. 2. Asymmetric Galton board. Every Wetropolis day (10s) a steel ball is released at the top (mechanism not shown here). The 4×4 possible outcomes across two such boards, registered in each by 4 electronic eyes (not shown either), determine the rainfall amount and location in Wetropolis, respectively. Photo: Wout Zweers.

Wetropolis’ development and design were funded as an outreach project of the Maths Foresees EPSRC Living with Environmental Change network [1].

  2. What are its purposes?

Wetropolis was first designed as a flood demonstrator for outreach to the general public. It fits in the back half of a car and can be transported. Comments from everyone, including the public, have been positive. Remarks from scientists and flood practitioners, such as people from the Environment Agency, however, made us realise that Wetropolis can also be used and extended to test models and explore concepts in the science of flooding.

 

  3. Where has Wetropolis been showcased hitherto?

The mathematical design and modelling were done and presented in early June 2016 at a seminar for the Imperial College/University of Reading Mathematics of Planet Earth Doctoral Training Centre. Designer Wout Zweers and I started Wetropolis’ construction a week later. One attempt failed (see the June 2016 posts in [2]) because I made an error in using the Manning coefficient in the calculations, necessitating an increase of the channel length to 5m to obtain sufficient residence time of water in the 1:100 sloped river channel. Over the summer progress was made, with a strong finish late August 2016, so we could showcase Wetropolis at the Maths Foresees General Assembly in Edinburgh [1]. It was subsequently shown at the Leeds Armley Mills Museum public Boxing Day exhibit on December 8th, 2016 and again in March 2017. I gave a presentation on the science of flooding, including Wetropolis, for 140 flood victims at the Churchtown Flood Action Group Workshop in Churchtown, late January 2017. We showcased it further at: the Be Curious public science festival, University of Leeds; the Maths Foresees Study Group (see Fig. 3) at the Turing Gateway to Mathematics, Cambridge; and a workshop of the Canal and River Trust in Liverpool.


Fig. 3. Wetropolis at the Turing Gateway to Mathematics. Duncan Livesey and Robert Long (Fluid Dynamics CDT, Leeds) are explaining matters. Photo: TGM.

  4. What are its strengths and weaknesses?

The strength of Wetropolis is that it is a live visualisation of the combined probability of rainfall and flooding in extreme events, river hydraulics, groundwater flow, and flow control (since the reservoir has valves, we can store and release water interactively). It is a conceptual model of flooding rather than a literal scale model. This is both a weakness and a strength, because one needs to explain the translation of a 1:200 return-period extreme flooding and rainfall event to one with a 1:5.5min return period, and to explain that the moor and reservoir are conceptual valleys where all the rain falls, since rain cannot fall everywhere. This scaling and translation is part of the conceptualisation, which the audience, whether public or scientific, needs to grasp. The visualisations of flooding in the city and of the groundwater level changes will be improved.

  5. Where does Wetropolis go from here?

A revisited Wetropolis is under design to illustrate aspects of Natural Flood Management, such as slowing the flow by inserting or taking out roughness features, leaky dams and the great number of such dams needed to create significant storage volume for flood waters, as well as the risk of their failure. Wetropolis will (likely) be shown alongside my presentation at the DARE international workshop on high-impact weather and flood prediction in Reading, November 20-22, 2017. Finally, analysis of river level gauges combined with the peak discharge of the Boxing Day 2015 floods of the River Aire, which led to the extreme, massive flooding in Kirkstall, Leeds, reveals that the estimated flood-excess volume is about 1 mile by 1 mile by 1.8m deep (see [3] and Fig. 4). Storing all this excess flood volume in 4 to 5 artificially induced and actively controlled flood plains upstream of Leeds seems possible; moreover, it could possibly have prevented the floods. Active control of flood plains via moveable weirs is now being considered, including in a research project with Wetropolis featuring as a conceptual yet real test environment. (PhD and/or DARE postdoc posts will be available soon.)
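As a quick back-of-the-envelope check of the quoted volume (pure arithmetic on the figures in the text):

```python
mile = 1609.34                 # metres
fev = mile * mile * 1.8        # flood-excess volume quoted above, in m^3
print(f"{fev:.2e} m^3")        # ~4.7e6 m^3
print(f"{fev / 5:.2e} m^3 per flood plain, if spread over 5")
```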

Fig. 4. Leeds’ flood levels at Armley Mills Museum: 1866 at the bottom, 2015 at the top (5.21m). Photo: O.B., with Craig Duguid (Fluid Dynamics CDT, Leeds) showcasing Wetropolis.

References and links

[1] Maths Foresees, UK EPSRC LWEC network.

[2] Resurging Flows, public page with movies of experiments, river flows and the Boxing Day 2015 floods in Leeds and Bradford, plus photos and comments on fluid dynamics. Two movies dated 31-08-2016 show Wetropolis in action; in one case two consecutive extreme rainfall events led to a Boxing Day 2015 type of flood. (What is the chance of this happening in Wetropolis?) Recall that record rainfall over 48hrs in Bingley and Bradford, Yorkshire, contributed in large part to the Boxing Day floods in 2015.

[3] ‘Inconvenient Truths’ about flooding, my introduction at the 2017 Study Group.

Coupled atmosphere-ocean data assimilation re-interpreted


By Polly Smith

So my original plan for this blog was to write something about my research on coupled atmosphere-ocean data assimilation, but then my PI Amos Lawless beat me to it with his recent post. I was pondering how I might put a new spin on things when […]

Improving Aircraft Observations using Data Assimilation

By Jeremy Holzke

I am halfway through my six-week summer research placement, which is funded by the EPSRC DARE project. As a second-year Robotics student at the University of Reading, I am interested in collecting data from various sources and processing it so that it can be used by a robot to interact with its environment. I am undertaking this project as it has a very similar goal to a robot sensing its environment, except that here the processed data will be used for better estimates of temperature in our atmosphere. I am also taking part in this summer placement to see whether research might interest me as a future career. I will be investigating how data from aircraft and data from a numerical weather prediction (NWP) model can be combined to give the best estimate of the true temperature at the location of the observation.

Collecting observations from aircraft for meteorological purposes is most definitely not a new concept; in fact, it has been around since World War I. The number of observations collected has grown ever since, especially with the wide range of applications weather nowcasting and forecasting now serve, including military applications, agriculture and, in particular, air traffic management. A main advantage of using aircraft-derived observations in the 21st century is that there are around 13,000–16,000 planes around the world at any time that can transmit valuable meteorological data.

Most commercial airplanes transmit a report called Mode Selective Enhanced Surveillance (Mode-S EHS), which contains data such as the speed, direction and altitude of the plane, as well as the Mach number, from which temperature and horizontal wind observations can be derived. The advantage of using Mode-S EHS reports is that they are transmitted at a high frequency; but because the reports are short, data precision is reduced, and hence large errors can appear in the derived temperature.
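For illustration, temperature can be recovered from the Mach number M and the true airspeed via the speed of sound, a = sqrt(γ R_d T). A hedged Python sketch (the operational derivation, as in the reference below, involves further steps and corrections):

```python
GAMMA = 1.4       # ratio of specific heats for dry air
R_DRY = 287.05    # gas constant for dry air, J kg^-1 K^-1

def temperature_from_mach(tas_ms, mach):
    """Air temperature (K) from true airspeed (m/s) and Mach number.

    Uses M = TAS / a with speed of sound a = sqrt(gamma * R_d * T),
    so T = (TAS / M)**2 / (gamma * R_d).
    """
    return (tas_ms / mach) ** 2 / (GAMMA * R_DRY)

print(temperature_from_mach(230.0, 0.78))    # ~216 K, a plausible cruise value

# The short report quantises Mach coarsely (~0.004 per the literature), and
# that alone shifts the derived temperature by a couple of kelvin:
print(temperature_from_mach(230.0, 0.78) - temperature_from_mach(230.0, 0.784))
```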

The aim of this project is to take aircraft-derived observations and combine them with modelled weather data from the Met Office UKV (UK variable resolution) model to get a better estimate of the temperature observation. A technique known as optimal interpolation, which takes account of the relative uncertainties in the two data sources, was implemented in MATLAB. I have carried out some initial tests of the method using observation data from the National Centre for Atmospheric Science’s research plane, the Facility for Airborne Atmospheric Measurements (FAAM).
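In its simplest scalar form, optimal interpolation weights the two sources by their error variances. A minimal sketch in Python (the project’s MATLAB implementation works with full fields and realistic error statistics; the numbers below are made up):

```python
def optimal_interpolation(background, obs, var_b, var_o):
    """Scalar optimal interpolation of a model background and an observation.

    The gain weights the data by their error variances, so the more certain
    source dominates; the analysis error variance is reduced accordingly.
    """
    gain = var_b / (var_b + var_o)
    analysis = background + gain * (obs - background)
    var_analysis = (1 - gain) * var_b
    return analysis, var_analysis

# made-up numbers: model background 285.0 K (variance 1.0 K^2) and an
# aircraft-derived observation 286.2 K (variance 4.0 K^2)
print(optimal_interpolation(285.0, 286.2, 1.0, 4.0))   # (285.24, 0.8)
```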

References:

A.K. Mirza, S.P. Ballard, S.L. Dance, P. Maisey, G.G. Rooney and E.K. Stone. 2016. Comparison of aircraft-derived observations with in situ research aircraft measurements. Quarterly Journal of the Royal Meteorological Society, 142(701), 2949–2967. (doi:10.1002/qj.2864)

 

Can observations of the ocean help predict the weather?

by Dr Amos Lawless

It has long been recognized that there are strong interactions between the atmosphere and the ocean. For example, the sea surface temperature affects what happens in the lower boundary of the atmosphere, while heat, momentum and moisture fluxes from the atmosphere help determine the ocean state. Such two-way interactions are made use of in forecasting on seasonal or climate time scales, with computational simulations of the coupled atmosphere-ocean system being routinely used. More recently operational forecasting centres have started to move towards representing the coupled system on shorter time scales, with the idea that even for a weather forecast of a few hours or days ahead, knowledge of the ocean can provide useful information.

A big challenge in performing coupled atmosphere-ocean simulations on short time scales is to determine the current state of both the atmosphere and ocean from which to make a forecast. In standard atmospheric or oceanic prediction the current state is determined by combining observations (for example, from satellites) with computational simulations, using techniques known as data assimilation. Data assimilation aims to produce the optimal combination of the available information, taking into account the statistics of the errors in the data and the physics of the problem. This is a well-established science in forecasting for the atmosphere or ocean separately, but determining the coupled atmospheric and oceanic states together is more difficult. In particular, the atmosphere and ocean evolve on very different space and time scales, which is not very well handled by current methods of data assimilation. Furthermore, it is important that the estimated atmospheric and oceanic states are consistent with each other, otherwise unrealistic features may appear in the forecast at the air-sea boundary (a phenomenon known as initialization shock).

However, testing new methods of data assimilation on simulations of the full atmosphere-ocean system is non-trivial, since each simulation uses a lot of computational resources. In recent projects sponsored by the European Space Agency and the Natural Environment Research Council we have developed an idealised system on which to test new ideas. Our system consists of just one single column of the atmosphere (based on the system used at the European Centre for Medium-range Weather Forecasts, ECMWF) coupled to a single column of the ocean, as illustrated in Figure 1.  Using this system we have been able to compare current data assimilation methods with new, intermediate methods currently being developed at ECMWF and the Met Office, as well as with more advanced methods that are not yet technically possible to implement in the operational systems. Results indicate that even with the intermediate methods it is possible to gain useful information about the atmospheric state from observations of the ocean. However, there is potentially more benefit to be gained in moving towards advanced data assimilation methods over the coming years. We can certainly expect that in years to come observations of the ocean will provide valuable information for our daily weather forecasts.

Figure 1. Schematic of the idealised system: a single column of the atmosphere coupled to a single column of the ocean.
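To give a flavour of the time-scale problem, here is a toy coupled system in Python: a fast ‘atmosphere’ variable coupled to a slow ‘ocean’ variable. It is only a cartoon of the idealised single-column system described above, not the ECMWF-based model itself:

```python
# Toy coupled fast-slow system (illustrative only):
#   da/dt = -(a - o) / tau_a    fast 'atmosphere' relaxing towards the ocean
#   do/dt = -(o - a) / tau_o    slow 'ocean' nudged by the atmosphere
tau_a, tau_o = 0.5, 50.0    # very different time scales, as in reality
dt, nsteps = 0.1, 1000

a, o = 1.0, 0.0             # mutually inconsistent initial states
for _ in range(nsteps):
    a, o = (a - dt * (a - o) / tau_a,
            o - dt * (o - a) / tau_o)

# The fast variable adjusts almost immediately while the slow one barely
# moves: estimating both consistently is the crux of coupled assimilation.
print(a, o)
```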


References

Smith, P.J., Fowler, A.M. and Lawless, A.S. (2015), Exploring strategies for coupled 4D-Var data assimilation using an idealised atmosphere-ocean model. Tellus A, 67, 27025, http://dx.doi.org/10.3402/tellusa.v67.27025.

Fowler, A.M. and Lawless, A.S. (2016), An idealized study of coupled atmosphere-ocean 4D-Var in the presence of model error. Monthly Weather Review, 144, 4007-4030, https://doi.org/10.1175/MWR-D-15-0420.1

First recording of surface flooding in London using CCTV cameras

On Friday 2nd of June 2017 the Met Office issued a yellow warning of heavy rain, with possible hail and lightning, over London. The Environment Agency also issued a number of flood alerts for London for the same period. This allowed us to test our newly set-up system for recording open-data CCTV images from Transport for London cameras (aka JamCams).

Following the flood alerts, we set up recording of all Transport for London (TfL) cameras which were within the main flood alert areas; these covered four areas of London.

Figure 1. Areas selected for recording TfL CCTV camera images on 2nd of June 2017, corresponding to flood alerts from the Environment Agency.

This resulted in downloading images from just over 110 CCTV cameras across the marked areas in Figure 1. Downloading started on many cameras at 2:30pm on 2nd of June 2017 and continued for 24h, with an image downloaded every 5min.
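The recording loop itself can be very simple. A minimal sketch of the kind of system we set up (the endpoint and field names reflect our understanding of TfL’s public Unified API and should be treated as assumptions; the real system also filtered cameras to the flood-alert areas):

```python
import time

import requests

# Public listing of TfL JamCams (TfL Unified API); each camera entry carries
# an image URL among its additionalProperties. Treat these field names as
# assumptions based on the public API, not code from the original project.
CAMS_URL = "https://api.tfl.gov.uk/Place/Type/JamCam"

def camera_image_urls():
    for cam in requests.get(CAMS_URL, timeout=30).json():
        for prop in cam.get("additionalProperties", []):
            if prop.get("key") == "imageUrl":
                yield cam["id"], prop["value"]

# poll roughly every 5 minutes for 24 hours, as in the trial described above
for cycle in range(24 * 12):
    for cam_id, url in camera_image_urls():
        image = requests.get(url, timeout=30).content
        with open(f"{cam_id}_{cycle:03d}.jpg", "wb") as f:
            f.write(image)
    time.sleep(5 * 60)
```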

Many of these images showed heavy rain as it passed over London on the afternoon of the 2nd of June 2017; some cameras even captured images of the lightning which was seen over North London, but we didn’t capture any images of flooding in the four coloured areas in Figure 1.

Figure 2. Image of heavy rain at A23 Brixton Rd/Vassall Rd, as seen by one of the CCTV cameras in London on 2nd of June 2017 at 5:19pm.

Figure 3. Image of lightning captured on a London CCTV camera at A12 East Cross Route on 2nd of June 2017 at 4:17pm.

However, following the flood alerts on the Transport for London site allowed us to capture surface flooding that happened on the North Circular road between 4pm and 7pm, resulting in traffic jams in the area.

Figure 4. Map of the surface flooding on the North Circular on 2nd of June 2017

The surface flooding was very localised and only one camera captured it: the one just below the blue circle in Figure 4. We recorded both still and video images from this camera. In the video below you can see the surface flooding affecting the slip road going north.

We are currently setting up similar systems to download live traffic CCTV images from Leeds, Bristol, Exeter, Newcastle, Glasgow, and Tewkesbury.