The Role of Synoptic Meteorology on UK Air Pollution

By Chris Webber

In the past year the issue of air pollution within the UK has risen up the agenda, driven by the loss of life it causes (in 2013, more than 500,000 years of life were lost in the UK due to air pollution 1). Air pollution concentrations within the UK are a function of both pollutant emissions and meteorology. This study set out to determine how synoptic meteorology affects UK concentrations of particulate matter (PM) with an aerodynamic diameter ≤ 10 µm ([PM10]).

The influence of synoptic meteorology on air pollution concentrations is well studied, with anticyclonic conditions over a region often found to be associated with the greatest pollutant concentrations 2,3. Webber et al. (2017) evaluated the impact of synoptic meteorology on UK Midlands [PM10]. They identified Omega block events as the synoptic meteorological condition associated with the most frequent UK daily mean [PM10] threshold exceedance events (episodes). A UK [PM10] episode is defined as a daily mean [PM10] more than 10 µg m-3 above the mean UK Midlands concentration.

This study uses the Met Office HadGEM3-GA4 atmosphere-only climate model to gather information on the flow regimes influencing the UK throughout Omega block events. For this study, temperature and wind velocity are constrained using ERA-Interim reanalysis data, in a process termed nudging. This study uses four inert tracers, emitted throughout Europe (Figure 1), to identify flow regimes from the regions of Europe with the highest PM10 emissions. To enable their transport across Europe, the tracers are designed to replicate the lifetime of sulphate aerosol.


Figure 1. This study’s four tracer emission regions throughout Europe.

This study has identified 28 Omega blocks within the winter months (DJF) between December 1999 and February 2008. The anomalous mean sea level pressure composite for the 28 Omega blocks is shown for the onset day in Figure 2 and bears resemblance to a classical Omega block pattern (Figure 3). The Omega block onset is defined as in Webber et al. (2017), where the western flank of an upper level anticyclone has been detected within the northeast Atlantic/ European region. 

Figure 2. Mean Sea Level Pressure Anomaly for 28 Omega block events on the day of onset, relative to a DJF 1999-2008 dataset mean.

Figure 3. An idealised schematic of an Omega block pattern. The High and Low refer to mean sea level pressure anomalies, while the solid black line represents flow streamlines (Met Office, 2017).

Figure 4 shows this study’s key result: the UK Midlands daily mean concentration of each tracer throughout the evolution of an Omega block. The observed UK Midlands [PM10] and modelled [PM10], the latter generated from the modelled tracers using multiple linear regression, are also shown. Within Figure 4 the solid line is the mean concentration throughout the Omega block minus 1.65 x the standard deviation of that concentration (for a one-tailed statistical test this corresponds to the 95% confidence level). The horizontal dashed lines represent the dataset means (excluding the Omega block events) for each quantity. If the solid black line lies above the horizontal dashed line in any of the panels, this represents a significant increase (p<0.05) in the concentration above the dataset mean.
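As a rough illustration of the method described above, the sketch below builds a multiple linear regression of [PM10] on four tracer concentrations and then applies the one-tailed 1.65 x standard deviation criterion. All of the numbers (tracer statistics, regression weights, the 28-day block sample) are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily concentrations of four tracers (stand-in data, not the study's)
tracers = rng.gamma(2.0, 5.0, size=(200, 4))

# Hypothetical "observed" [PM10] built from the tracers plus noise
true_weights = np.array([0.4, 0.3, 0.2, 0.1])
obs_pm10 = tracers @ true_weights + rng.normal(0.0, 2.0, size=200)

# Multiple linear regression of observed [PM10] on the tracer concentrations
X = np.column_stack([tracers, np.ones(len(tracers))])  # add an intercept column
coef, *_ = np.linalg.lstsq(X, obs_pm10, rcond=None)
modelled_pm10 = X @ coef

# One-tailed significance criterion from the text: the block-event mean minus
# 1.65 standard deviations must still exceed the dataset mean (p < 0.05)
block_days = modelled_pm10[:28]            # stand-in for the 28 Omega-block days
lower_bound = block_days.mean() - 1.65 * block_days.std(ddof=1)
significant = lower_bound > modelled_pm10.mean()
```

With enough days, the recovered regression weights approach the true ones; the significance flag then simply compares the lowered block-event mean against the dataset mean, as in Figure 4.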

Figure 4. Observed PM10, modelled PM10 and tracer concentrations throughout the 9 days of an Omega block event, minus 1.65 x the standard deviation of each concentration (solid lines). Horizontal dashed lines represent the DJF 1999-2008 dataset mean for each tracer or PM10 concentration.

Figure 4 shows that Omega block events result in significant increases in both observed and modelled [PM10] on day +1 relative to the onset of an Omega block event. This is the maximum that Webber et al. (2017) recognised as leading to an elevated probability of UK Midlands PM10 episodes.

The key message from Figure 4 is that the peak in UK [PM10] throughout Omega block events is driven by an increase in both locally sourced pollution and advected European pollution. Omega blocks have previously been thought to elevate [PM10] through the accumulation of locally sourced pollution alone; however, this study is one of the first to show that this is not the whole story. We see a significant influence from European tracers, which coincides with modelled UK [PM10] peaks.


1 EEA, 2016. Exceedance of air quality limit values in urban areas (Indicator CSI 004), European Environment Agency.

2 Barmpadimos, I., Keller, J., Oderbolz, D., Hueglin, C., Prevot, A. S. H., 2012. One decade of parallel fine (PM2.5) and coarse (PM10-PM2.5) particulate matter measurements in Europe: trends and variability. Atmos. Chem. Phys. 12, 3189-3203.

3 McGregor, G. R., Bamzelis, D., 1995. Synoptic Typing and Its Application to the Investigation of Weather Air-Pollution Relationships, Birmingham, United-Kingdom. Theor. Appl. Climatol. 51, 223-236.

4 Altenhoff, A. M., Martius, O., Croci-Maspoli, M. I., Schwierz, C., Davies, H. C., 2008. Linkage of atmospheric blocks and synoptic-scale Rossby waves: a climatological analysis. Tellus A, 60, 1053-1063.

5 Webber, C. P., Dacre, H. F., Collins, W. J., Masato, G., 2017. The dynamical impact of Rossby wave breaking upon UK PM10 concentration. Atmos. Chem. Phys. 17, 867-881.

6 Met Office, 2017. Blocking Patterns. Available online at: [Accessed July 2017].


Posted in Aerosols, Atmospheric chemistry, Boundary layer, Environmental hazards, Urban meteorology

BoBBLE: Air-sea interactions and intraseasonal oscillations in the Bay of Bengal

By Simon Peatman

The Indian Summer Monsoon (ISM) is one of the most significant features of the tropical climate. The heavy rain it brings during boreal summer provides around 80% of the annual precipitation over much of India with over 1 billion people feeling the impact, especially through the effect on agriculture. The typical impression of the monsoon which many members of the public have is of constant, torrential rain during the summer months as though a tap is turned on sometime in June, eventually petering out at the end of the season. The reality, however, is an active/break cycle on intraseasonal timescales.

The chief source of intraseasonal variability in the ISM region during boreal summer is the BSISO (Boreal Summer IntraSeasonal Oscillation). Similar to the famous real-time multivariate MJO (RMM) indices of Wheeler and Hendon (2004), which divide the Madden-Julian Oscillation into eight phases, the BSISO indices were developed by Lee et al. (2013) based on multivariate EOF analysis of outgoing longwave radiation (OLR) and 850 hPa zonal wind. Two cycles were identified (BSISO1 and BSISO2); here we consider the former. BSISO1 is found to have a period of 30–60 days; a composite life cycle of OLR and 850 hPa wind anomalies is shown in Figure 1b in which a nominal period of 48 days has been chosen, with each frame of the animation corresponding to one day. The boreal summer climatology is shown for reference in Figure 1a. Alternate active and suppressed regions of convection begin over the equatorial region of the Indian Ocean, propagating northwards over the Arabian Sea (west of India), India, the Bay of Bengal (BoB; east of India) and south-east Asia. The MJO chiefly consists of slow eastward propagation of large-scale organised convective envelopes from the Indian Ocean, through the Maritime Continent, to the tropical Pacific. The BSISO may be thought of as the MJO’s northward branch, unique to the boreal summer months.
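The phase-binning behind RMM-style indices such as BSISO1 can be illustrated with a short sketch: the two leading principal components define an angle in the (PC1, PC2) plane, which is divided into eight 45° sectors, with an amplitude threshold marking an inactive oscillation. The sector numbering below is a generic convention chosen for illustration; the published phase convention follows Lee et al. (2013).

```python
import numpy as np

def oscillation_phase(pc1, pc2, amp_threshold=1.0):
    """Map a (PC1, PC2) pair to one of 8 phases, RMM/BSISO style.

    Returns (phase, amplitude). Phase 0 means the amplitude is below the
    threshold (a 'weak' or inactive oscillation). The sector numbering
    here is illustrative, not the published convention.
    """
    amplitude = np.hypot(pc1, pc2)
    if amplitude < amp_threshold:
        return 0, amplitude
    angle = np.arctan2(pc2, pc1)                 # radians in (-pi, pi]
    sector = int((angle + np.pi) / (np.pi / 4)) % 8  # eight 45-degree bins
    return sector + 1, amplitude
```

For example, a strong signal with PC1 = -1.5 and PC2 = 0 lands in phase 1 under this numbering, while a weak pair like (0.1, 0.1) is classed as inactive.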


Figure 1: Boreal summer (May to October) OLR from AVHRR and 850 hPa wind from ERA-Interim (1981-2010): (a) climatology; (b) anomaly for each day of a nominal 48-day BSISO1 life cycle, computed by linearly interpolating between composites of the eight phases. (After Adrian Matthews’ MJO diagrams.)

The monsoon onset, withdrawal and active/break events are generally poorly forecast. This is partly due to a poor understanding of the physical mechanisms behind them. One major gap in our knowledge is the effect of air-sea interactions in the BoB. The Bay of Bengal Boundary Layer Experiment (BoBBLE) seeks to improve our understanding of these interactions and their effect on ISM variability. The BoBBLE field campaign took place in June–July 2016 and performed intensive observations in the BoB of atmospheric conditions, air-sea fluxes, and ocean currents and stratification.

In particular, the campaign saw the deployment of Argo floats (which remained in place after the campaign as part of the global Argo network) and Seagliders (Mission 31). Their deployment can be seen in a video clip showing the route of the research vessel Sindhu Sadhana (Figure 2).


As part of the BoBBLE project, we at Reading are using data from the field campaign in a range of hindcast modelling experiments.

Figure 2: Research Vessel Sindhu Sadhana.


Lee, J.-Y., B. Wang, M. C. Wheeler, X. Fu, D. E. Waliser, and I.-S. Kang, 2013. Real-Time Multivariate Indices for the Boreal Summer Intraseasonal Oscillation over the Asian Summer Monsoon Region. Clim. Dyn., 40, 493–509.

Wheeler, M. C., and H. H. Hendon, 2004. An All-Season Real-Time Multivariate MJO Index: Development of an Index for Monitoring and Prediction. Mon. Weather Rev., 132, 1917–1932.

Posted in Climate, Climate modelling, Monsoons, Numerical modelling, Oceans

Responding to the threat of hazardous pollutant releases in cities

By Denise Hertwig

High population density and restricted evacuation options make cities particularly vulnerable to threats posed by air-borne contaminants released into the atmosphere through industrial accidents or terrorist attacks. In order to issue evacuation or sheltering advice to the public and avoid or mitigate impacts on human health, emergency responders need timely information about expected pollutant pathways, concentration levels and associated human exposure risks. Such information can be derived from atmospheric dispersion models, which are key components of emergency response management systems. Concentration predictions from such models need to be as robust and reliable as possible, while at the same time being available in near real-time.

Due to the effect of buildings on local wind fields, the dispersion behaviour in cities is distinctly different from that in rural areas. Pollutant transport is determined to a large degree by the arrangement of buildings and streets. Effects like pollutant channelling along street canyons, plume branching in intersections or pollutant trapping and mixing in low-speed recirculation zones behind buildings make urban dispersion scenarios particularly complex (Figure 1). High-resolution (approx. 1 m), building-resolving simulation methods can reproduce these effects with a high level of accuracy, but the turnaround time for such model output is currently much too long to be usable in emergency events. Instead, simpler fast-running modelling tools are needed.

The EPSRC-funded DIPLOS project (‘Dispersion of Localised Releases in a Street Network’) set out to identify driving urban dispersion mechanisms and to improve their representation in fast emergency-response dispersion models. Based on high-resolution simulations and wind-tunnel experiments, representatives of the street-network dispersion modelling class were evaluated in detail. An example of this relatively new approach is the model SIRANE, which is the only street-network dispersion model currently in operational use.

Figure 1: Flow streamlines in idealised urban settings studied in DIPLOS. (a) Helical motion through elongated streets and low-speed recirculation zones in sheltered street canyons in a uniform geometry. (b) Flow disturbances created by a tall building: downdraft on the windward side, low-speed recirculating updraft regions on the leeward side. Thick arrows show the ambient wind direction.

Models like SIRANE treat urban areas as a network of streets connected at intersections and compute pollutant fluxes through the interfaces between street and intersection volumes (Figure 2). While not resolving buildings explicitly, network models are directly aware of the street layout and thus of possible pollutant pathways. This model feature is crucial as even in simplified urban settings like the ones considered in DIPLOS, the main direction of pollutant transport at pedestrian level can be vastly different from the ambient wind direction because pathways are restricted by the street topology (Figure 3a). Comparisons with high-resolution large-eddy simulations showed that the simple street-network methodology is able to capture this mechanism and also accounts for pollutant exchange with the flow above roof level in a realistic way (Figure 3b). Overall, the comparatively simple street-network models were found to perform as well as more sophisticated stochastic dispersion models while only needing a fraction of the time to run (typically, a few minutes) and minimal wind input information. They outperform analytical Gaussian solutions, which are widely applied for regulatory purposes, but lack any explicit building awareness. Model performance, however, is crucially dependent on suitable modelling of transport velocities along streets. Further improvements of the network-model performance can be achieved by taking into account the delayed dispersion of pollutants trapped in building wakes, effects of isolated tall buildings and turbulent concentration fluctuations.
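The street-network idea can be conveyed with a toy box model (purely illustrative and far simpler than SIRANE): each street segment is a well-mixed box that passes pollutant to its downstream neighbour at an along-street transport velocity and vents it to the flow above roof level at an exchange velocity. All velocities, dimensions and emission rates below are invented for the sketch.

```python
import numpy as np

def step_network(c, u=1.0, L=50.0, w=0.1, H=10.0, source=None, dt=1.0):
    """One explicit-Euler step of a toy chain of street boxes.

    c      : concentrations in the chain of street boxes
    u      : along-street transport velocity (m/s)
    L      : street segment length (m)
    w      : roof-level exchange velocity (m/s)
    H      : building height (m)
    source : emission rate per box (concentration per second)
    """
    c = np.asarray(c, dtype=float)
    upstream = np.concatenate([[0.0], c[:-1]])   # clean air enters the first box
    dcdt = (u / L) * (upstream - c) - (w / H) * c
    if source is not None:
        dcdt = dcdt + source
    return c + dt * dcdt

# Continuous release in the first street box, advected down the network
c = np.zeros(5)
src = np.zeros(5)
src[0] = 1e-6
for _ in range(2000):
    c = step_network(c, source=src)
```

At steady state the first box balances emission against advection plus venting, and concentrations decay monotonically downstream: the network topology, not the ambient wind direction, sets the pollutant pathway.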

The work presented here is the result of a collaboration between the Universities of Reading, Southampton and Surrey and the École Centrale de Lyon in France. DIPLOS is coming to an end in August 2017. An overview of the project, researchers and institutions involved and of science output is available on the DIPLOS website.

Figure 2: (a) Representation of the topology of streets and intersections in street-network dispersion models. (b) Horizontal transport and vertical exchange mechanisms in network models. Background images from Google Earth.

Figure 3: Case of continuous pollutant release at ground-level within an intersection: (a) Plume envelope with colours indicating the plume height. The height of the buildings is H = 10 m (dark red plume areas are at approx. 4.5 H). (b) Street-network model prediction of the mean pollutant concentrations in streets and intersections (right) in comparison to reference results from high-resolution large-eddy simulation (LES; left). Stars mark the location of the source.

Posted in Environmental hazards, Numerical modelling, Urban meteorology

Time scales of atmospheric circulation response to CO2 forcing

By Paulo Ceppi

An important question in current climate change research is, how will atmospheric circulation change as the climate warms? When simulating future climate scenarios, models commonly predict a shift of the midlatitude circulation to higher latitudes in both hemispheres – generally referred to as a “poleward circulation shift”. As an example, under a “business as usual” future emissions scenario the North Atlantic jet stream is predicted to shift northward during the summer months (Figure 1). If true, this would likely affect the average amount of precipitation, wind, and sunshine experienced in the UK and more generally across Western Europe.


Figure 1: Change in eastward wind speed at 850 hPa in the RCP8.5 (“business as usual”) experiment during the 21st century for June-August. The response is calculated as the mean of 2070-2100 minus 1960-1990. Grey contours indicate the wind climatology, while colour shading denotes the change (in m/s). Results are averages over 35 coupled climate models.

Since such circulation shifts are caused by global warming, it is natural to assume that the more the planet warms, the larger the circulation shift. But is that assumption generally true? More specifically: as the planet warms in response to CO2 forcing, do circulation shifts scale with the change in global-mean temperature? Here we are interested in the time evolution of the transient response to CO2 forcing, i.e. the period during which the climate adjusts to the change in CO2. This evolution is best represented in climate model experiments in which CO2 concentrations are increased abruptly and then held constant; since the forcing happens all at once, the various time scales of climate response are cleanly separated. Below I will present results from the so-called “abrupt4xCO2” experiment, in which a set of climate models were subjected to a sudden quadrupling of CO2 concentrations and then run for 150 years.

It turns out that as climate changes following a sudden quadrupling of CO2, circulation shifts do not generally scale with global warming. Instead, two distinct phases of circulation change occur: during the first 5 years or so, the planet warms quickly and the jet streams shift poleward; but thereafter the jets tend to stay at a constant latitude, despite the fact that the planet continues to warm substantially. This is summarised in Figure 2 below, where the curves indicate changes in the latitude of the jet stream (averaged over a set of 28 climate models). In the North Pacific region, the change in jet stream latitude even changes sign over the course of the experiment (Figure 2b).


Figure 2: Change in annual-mean jet stream latitude (measured as the latitude of peak eastward wind at 850 hPa) in climate models during the first 140 years following a quadrupling of atmospheric CO2, as a function of global-mean surface temperature. The curves indicate means across 28 climate models. Shading denotes the 75% range of responses. The jet shifts are in degrees latitude and positive anomalies are defined as poleward. Circles denote individual years until year 10; diamonds denote decadal means between year 11 and year 140. The black crosses indicate the means of years 5-10 and 121-140, respectively.
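The jet-latitude diagnostic used in Figure 2 (the latitude of peak eastward wind at 850 hPa) can be sketched in a few lines. Refining the gridpoint maximum with a local quadratic fit is a common way to obtain sub-gridscale precision; the grid spacing and Gaussian jet profile below are purely illustrative.

```python
import numpy as np

def jet_latitude(lat, u850):
    """Latitude of maximum eastward wind, refined by fitting a parabola
    through the three gridpoints around the discrete maximum."""
    i = int(np.argmax(u850))
    if i == 0 or i == len(lat) - 1:
        return float(lat[i])              # maximum on the boundary: no fit possible
    a, b, _ = np.polyfit(lat[i - 1:i + 2], u850[i - 1:i + 2], 2)
    return float(-b / (2.0 * a))          # vertex of the fitted parabola

# Synthetic Gaussian jet centred at 44N on a 2.5-degree latitude grid
lat = np.arange(20.0, 70.1, 2.5)
u = 12.0 * np.exp(-((lat - 44.0) ** 2) / (2 * 8.0 ** 2))
```

Applied to the synthetic profile, the diagnostic recovers a jet latitude close to 44N even though the true peak falls between gridpoints; differences of such estimates between decades give the shifts plotted in Figure 2.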

How can we explain this peculiar time evolution of circulation shifts? The evolution of changes in atmospheric temperature and circulation is mainly controlled by how the ocean surface warms in response to greenhouse gas forcing. Like the atmosphere, the ocean has its own circulation and in some regions deep, cold water rises to the surface – a process known as “upwelling”. Due to this and other processes, the ocean surface does not warm at the same rate everywhere; in particular, upwelling regions like the Southern Ocean experience delayed warming.

We find that the time scales of ocean surface warming determine the time scales of change in atmospheric circulation, via the changes in atmospheric temperature. In particular, the patterns of ocean surface warming before and after year 5 of the experiment are strikingly different; when imposed in an atmospheric climate model, we obtain circulation changes consistent with the results shown in Figure 2, confirming the role of the ocean surface in controlling the atmospheric response.

While the scenario of abrupt CO2 increase described in Figure 2 is unlikely to happen in the real world, further analysis shows that the two time scales of circulation shift are also present in more realistic scenarios of gradual greenhouse gas increase. This indicates that care must be taken when extrapolating transient circulation shifts to estimate changes in future warmer climates.


Ceppi et al., 2017. Fast and slow components of the extratropical atmospheric circulation response to CO2 forcing. Submitted to Journal of Climate.

Zappa et al., 2015. Improving climate change detection through optimal seasonal averaging: the case of the North Atlantic jet and European precipitation. Journal of Climate, doi: 10.1175/JCLI-D-14-00823.1

Posted in Atmospheric chemistry, Climate, Climate change, Climate modelling, Numerical modelling

What’s in a number?

By Nancy Nichols

Should you care about the numerical accuracy of your computer? After all, most machines now retain about 16 digits of accuracy, while usually only about 3-4 figures of accuracy are needed for most applications; so what’s the worry? To see why it matters, consider that there have been a number of spectacular disasters due to numerical rounding error. One of the best known is the failure of a Patriot missile to track and intercept an Iraqi Scud missile in Dhahran, Saudi Arabia, on 25 February 1991, resulting in the deaths of 28 American soldiers.

The failure was ultimately attributable to poor handling of rounding errors. The computer doing the tracking calculations had an internal clock whose values were truncated when converted to floating-point arithmetic, with a relative error of about 2⁻²⁰. The clock had run up a time of 100 hours, so the calculated elapsed time was too long by 2⁻²⁰ x 100 hours = 0.3433 seconds, during which time a Scud would be expected to travel more than half a kilometre.
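The arithmetic is easy to reproduce. The sketch below truncates 0.1 to 23 fractional binary bits (the register format described in the post-incident analyses) and accumulates the resulting drift over 100 hours of 0.1-second clock ticks; Python's own double-precision 0.1 carries a far smaller representation error, which is negligible here.

```python
# Truncate 0.1 to 23 binary digits after the radix point (chopping, not rounding)
BITS = 23
stored = int(0.1 * 2**BITS) / 2**BITS   # the 24-bit fixed-point value actually held
err_per_tick = 0.1 - stored             # roughly 9.5e-8 seconds lost per 0.1 s tick

ticks = 100 * 3600 * 10                 # 100 hours of 0.1-second clock ticks
drift = err_per_tick * ticks            # roughly a third of a second of clock error
```

The accumulated drift comes out at about 0.34 seconds, matching the figure quoted above.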

The same problem arises in other algorithms that accumulate and magnify small round-off errors due to the finite (inexact) representation of numbers in the computer. Algorithms of this kind are referred to as ‘unstable’ methods. Many numerical schemes for solving differential equations have been shown to magnify small numerical errors. It is known, for example, that L.F. Richardson’s original attempts at numerical weather forecasting were essentially scuppered due to the unstable methods used to compute the atmospheric flow. Much time and effort have now been invested in developing and carefully coding methods for solving algebraic and differential equations so as to guarantee stability. Excellent software is publicly available. Academics and operational weather forecasting centres in the UK have been at the forefront of this research.

Even with stable algorithms, however, it may not be possible to compute an accurate solution to a given problem. The reason is that the solution may be sensitive to small errors  –  that is, a small error in the data describing the problem causes large changes in the solution. Such problems are called ‘ill-conditioned’. Even entering the data of a problem into a computer  –  for example, the initial conditions for a differential equation or the matrix elements of an eigenvalue problem  –   must introduce small numerical errors in the data. If the problem is ill-conditioned, these then lead to large changes in the computed solution, which no method can prevent.   

So how do you know if your problem is sensitive to small perturbations in the data? Careful analysis can reveal the issue, but for some classes of problems there are measures of the sensitivity, or the ‘conditioning’, of the problem that can be used. For example, it can be shown that small perturbations in a matrix can lead to large relative changes in the inverse of the matrix if the ‘condition number’ of the matrix is large. The condition number is measured as the product of the norm of the matrix and the norm of its inverse. Similarly, small changes in the elements of a matrix will cause its eigenvalues to have large errors if the condition number of the matrix of eigenvectors is large. Of course, determining a condition number exactly involves computing the very inverse whose accuracy is in question, but accurate computational methods for estimating condition numbers are available.

An example of an ill-conditioned matrix is the covariance matrix associated with a Gaussian distribution. Figure 2 below shows the condition number of a covariance matrix obtained by taking samples from a Gaussian correlation function at 500 points, using a step size of 0.1, for varying length-scales [1]. The condition number increases rapidly to 10⁷ for length-scales of only L = 0.2 and, for length-scales larger than 0.28, the condition number exceeds the limits of the computer precision and cannot even be calculated accurately.
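The experiment behind Figure 2 is easy to reproduce in outline. The grid and length-scales below follow the numbers quoted in the text, but treat this as a sketch of the idea in [1], not a replication.

```python
import numpy as np

def gaussian_covariance(n=500, dx=0.1, L=0.2):
    """Covariance matrix sampled from a Gaussian correlation function
    exp(-r^2 / (2 L^2)) on a 1-D grid of n points with spacing dx."""
    x = dx * np.arange(n)
    r = x[:, None] - x[None, :]
    return np.exp(-r**2 / (2.0 * L**2))

# The condition number grows by orders of magnitude as the length-scale increases
conds = {L: np.linalg.cond(gaussian_covariance(L=L)) for L in (0.05, 0.1, 0.2)}
```

For short length-scales the matrix is close to the identity and well conditioned; by L = 0.2 the condition number has already climbed by many orders of magnitude, consistent with the rapid growth described above.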

Figure 2

This result is surprising and very significant for numerical weather prediction (NWP), as the inverses of covariance matrices are used to weight the uncertainty in the model forecast and in the observations during the analysis phase of weather prediction. The analysis is achieved by the process of data assimilation, which combines a forecast from a computational model of the atmosphere with physical observations obtained from in situ and remote sensing instruments. If the weighting matrices are ill-conditioned, then the assimilation problem also becomes ill-conditioned, making it difficult to obtain an accurate analysis and subsequently a good forecast [2]. Furthermore, the worse the conditioning of the assimilation problem, the longer the analysis takes. This matters because the forecast needs to be produced in ‘real’ time, so the analysis needs to be done as quickly as possible.

One way to deal with an ill-conditioned system is to rearrange the problem so as to reduce the conditioning whilst retaining the same solution. A technique for achieving this is to ‘precondition’ the problem using a transformation of the variables. This is used routinely in NWP operational centres, with the aim of ensuring that the uncertainties in the transformed variables all have a variance of one [1][2]. In Table 1 we can see the effect of the length-scale of the error correlations in a data assimilation system on the number of iterations needed to solve the problem, with and without preconditioning [1]. The conditioning of the problem is improved and the work needed to solve it is significantly reduced. So checking and controlling the conditioning of a computational problem is always important!
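The effect of such a control-variable transform can be sketched on a small variational problem. With background-error covariance B, observation operator H and observation-error covariance R, transforming to v = B^(-1/2) x turns the Hessian B^-1 + H^T R^-1 H into I + B^(1/2) H^T R^-1 H B^(1/2), whose eigenvalues are bounded below by one. The grid, observation spacing and error variances below are invented for illustration; see [1][2] for the real analysis.

```python
import numpy as np

n, dx, L = 100, 0.1, 0.15            # illustrative grid and correlation length-scale
x = dx * np.arange(n)
B = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * L**2))   # background covariance

H = np.eye(n)[::5]                   # observe every 5th gridpoint
Rinv = np.eye(H.shape[0]) / 0.25     # observation-error variance 0.25

# Raw variational Hessian: B^-1 + H^T R^-1 H
S = np.linalg.inv(B) + H.T @ Rinv @ H

# Preconditioned Hessian after v = B^(-1/2) x: I + B^(1/2) H^T R^-1 H B^(1/2)
w, V = np.linalg.eigh(B)
B_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T  # symmetric square root
S_pre = np.eye(n) + B_half @ H.T @ Rinv @ H @ B_half

cond_raw = np.linalg.cond(S)
cond_pre = np.linalg.cond(S_pre)
```

The preconditioned Hessian's condition number stays modest regardless of how poorly conditioned B is, which is why iterative solvers converge in far fewer iterations after the transform, as Table 1 illustrates.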

Table 1


[1] S.A. Haben, 2011. Conditioning and Preconditioning of the Minimisation Problem in Variational Data Assimilation. PhD thesis, Department of Mathematics and Statistics, University of Reading.

[2]  S.A. Haben, A.S. Lawless and N.K. Nichols,  2011. Conditioning of incremental variational data assimilation, with application to the Met Office system, Tellus, 63A, 782–792. (doi:10.1111/j.1600-0870.2011.00527.x)

Posted in Climate modelling, data assimilation, Numerical modelling, Weather forecasting

A Presidential address …

By Ellie Highwood

I have been President of the Royal Meteorological Society (RMetS) for almost a year now (I will serve two years in total) and people keep asking me “how’s it going?” or “are you enjoying it?” Before I answer those questions let me describe the role.

The role of President of a small society like RMetS is a bit of everything to be honest. Obviously there are the formal things – I chair Council meetings three times a year as well as the Awards Committee and then present the awards at the AGM. Since we are developing our next 3 year strategy there are also meetings and workshops with a group of members and Council to do that. Ahead of each Council there are about 3 hours of work to do on the papers to make sure I understand what is in them – these can be about new initiatives the RMetS wants to do, reports from the other committees or the annual accounts. Some people who have experienced me chairing a Termly Staff Meeting at Reading will perhaps be surprised to learn that all the Council meetings have tended to over-run! It is definitely a challenge to make sure every voice is heard on some quite complex issues without getting bogged down in the detail. In truth, it’s not my favourite part of the job, but it comes in chunks as between times the working groups, committees and Society staff are busy getting on with things. In fact, having served 4 years as Vice President prior to becoming president (not usual but a variety of exceptional conditions led to me “filling in”), I was in more regular committee meetings in that role. Not only did I attend the same meetings as the President, but also chaired the Strategic Programme Board and was a member of the Membership Development Committee.

On top of this, there are the less predictable items – dealing with requests/complaints from members and supporting the Chief Executive, Liz Bentley, in the day to day running of the Society and in strategic planning. Liz runs a small office with 8 or so employees and sometimes it’s good to have someone outside the line management structure to chat things through with. We meet once per month – it’s certainly very handy that RMetS Headquarters is in Reading! Unusually this year there are significant anniversaries of the formation of the Canadian Meteorological and Oceanographic Society and Australian Meteorological and Oceanographic Society from the previous “local” branches of the RMetS. To mark this, I will be travelling to Melbourne in August along with Liz and Brian Golding (outgoing Chair of Meetings Committee) to represent the Society (fitting in a seminar at Monash University on the way). I sent a video birthday message to Canada because the timing didn’t fit with the rest of my life to do both. I will also be playing a reasonably big role at the RMetS national conferences and at some point will have to give a Presidential address both at a national meeting, and in Scotland.

I expect I spend on average half a day per week, if that, on RMetS business unless there is a crisis – which rarely happens due to the excellent work of staff and volunteers. I am also lucky that my predecessor Jennie Campbell had to do the negotiation with our publishers Wiley and that’s not due again for another couple of years (I hope). So, back to the original questions:

How is it going? Well, I think (apart from the length of those Council meetings). I wouldn’t expect an RMetS President to come in and suddenly change everything – it just isn’t that kind of Society, and 2 years is too short a term to do that. Instead I see my role as nudging and encouraging movement in certain directions that may already have started happening, e.g. review of what it means to be a Fellow of the Royal Meteorological Society, tightening up our nominations and awards processes and making them more transparent, and getting discussions about diversity and inclusion happening (well you would expect nothing less given my day job, right?).

Am I enjoying it? Hmm. Interesting. It is certainly a great honour to be President, but somewhat intimidating every time I walk up the staircase at Headquarters and see my picture there alongside centuries of great meteorologists (imposter syndrome klaxon). I am proud to be involved with shaping our learned society and, dare I say, moving it along a little to be fit for the next generation of meteorologists. The volunteers on Council and the various Committees are, every one of them, fascinating. I loved handing out the awards at the AGM, and I love working with the RMetS staff on conferences and such like. It is great fun. But it is weird. It doesn’t feel like a thing most of the time. Which is probably as it should be. We wouldn’t want Presidents to let power go to their head now would we?

If you’d like to get involved with the Royal Meteorological Society there are many ways to do so. I started being involved as a postdoc and got a lot of my formal meeting experience and contacts through the RMetS. Visit the website to see what they are up to and whether you can help, attend a meeting or a conference, or nominate someone for Council or an award.

Posted in Royal Meteorological Society, Women in Science

Belmont Forum: joined-up thinking from science funders

By Vicky Lucas

The Belmont Forum supports ‘international transdisciplinary research providing knowledge for understanding, mitigating and adapting to global environmental change’.

The Belmont Forum funds research on themes including sustainability, climate predictability, ecosystem services and Arctic observing. The group considers research to be part of a value chain which is socially responsible, inclusive and provides innovative solutions. Furthermore, open data policies and principles are considered essential to making informed decisions in the face of rapid changes affecting the Earth’s environment.

Belmont Forum Funded Projects
Andy Turner of NCAS and the University of Reading Meteorology Department leads a Belmont Forum funded project, BITMAP, also jointly funded by JPI Climate. BITMAP stands for ‘Better understanding of Interregional Teleconnections for prediction in the Monsoon and Poles’. The research is an Indo-UK-German collaboration between the Indian National Centre for Medium Range Weather Forecasting and the universities of Reading and Hamburg.

As is regularly the case with Belmont funding calls, a multi-national consortium was required and each of the participating countries contributed support, with NERC the relevant funder in the UK. Andy says that his project is ‘encouraging international collaboration and bridging the gap between academic climate science and the more applied needs of weather forecasting’. The project is going well and, only six months from starting, a paper on an algorithm for tracking storms is already in preparation by Kieran Hunt. Andy observes that, in addition to regular virtual meetings between the three countries, ‘as papers from the individual countries begin to be published, the collaboration on the project will increase and more ideas will be shared’.

Scott Osprey of the University of Oxford leads GOTHAM, the ‘Globally Observed Teleconnections and their role and representation in Hierarchies of Atmospheric Models’, also funded by the Belmont Forum. When asked about the role of the Belmont Forum, Scott pointed to the ability of this international group to encourage ‘new international research communities for tackling large and complex environmental issues beyond the purview of most national research centres’.

Data Intensive Environmental Research
The e-Infrastructure and Data Management sub-group of the Belmont Forum was set up to concentrate on overcoming barriers in data sharing, use and management for environmental and global change research.  By improving data sharing, data intensive research will be accelerated.  The University of Reading has been involved for several years as Robert Gurney co-chairs the e-I&DM group.

The e-I&DM group promotes the FAIR data principles, which in detail include the use of rich metadata, standards for vocabularies and data formats, persistent identifiers, and clear licensing and provenance, to ensure that data are:

  • Findable
  • Accessible
  • Interoperable
  • Reusable

These principles have been embraced by many, including the European Commission and Horizon 2020 funded projects.

The number of countries participating in the e-I&DM group is smaller than the parent Belmont Forum, with the active roles provided by France (ANR), Taiwan (MOST), Japan (JST), US (NSF) and the UK (NERC).

Back-of-the-envelope calculation for a data management plan

The Belmont Forum e-I&DM group is currently developing a template for data plans, intended to be light touch, to highlight issues such as cost, documentation, anticipated restrictions on accessing data, and data management after the lifetime of the project. Organisations such as NERC already have guidelines, but the Belmont Forum can help to standardise the actions of a number of countries. Andy Turner identified that flexibility from funders is key when asking for a data management plan at the proposal stage, since ‘only very rough estimates of data sizes might be possible and it is difficult to say at the outset how much data produced will have long-term value’. Scientists are asked to make projections in the knowledge that a change in the direction of the research during a project might change its data management needs. Nevertheless, considering data management issues from the outset can only help to raise awareness of the value of the data projects produce and to highlight the potential value in reuse.
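As a sketch of what such a back-of-the-envelope data estimate might look like, the grid size, output frequency and variable count below are invented for illustration, not taken from any Belmont Forum or NERC guidance:

```python
# Back-of-the-envelope estimate of the raw data volume a gridded
# modelling project might need to plan for. All numbers here are
# illustrative assumptions.

def estimate_volume_tb(nx, ny, nz, timesteps, variables, bytes_per_value=4):
    """Raw output volume in terabytes for a gridded model run."""
    total_bytes = nx * ny * nz * timesteps * variables * bytes_per_value
    return total_bytes / 1e12

# Example: a 1-degree global grid (360 x 180), 40 levels, 6-hourly
# output for 10 years (no leap days), 20 variables, single precision.
steps = 10 * 365 * 4
print(f"{estimate_volume_tb(360, 180, 40, steps, 20):.1f} TB")  # ~3.0 TB
```

Even such a crude figure is enough to flag whether a project needs to budget for dedicated storage, which is the kind of early awareness the template aims to prompt.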

The Belmont Forum provides the opportunity to produce joined-up thinking from science funders and councils.  The group uses its global reach to influence and fund collaborative research and to work on specific issues for data intensive environmental research.  The data behind this research, which is channelled into discussions, analyses and papers, is also being more widely acknowledged as a valuable resource in itself.  The Belmont Forum is providing leadership and agreement to develop and disseminate best practice for the data themselves.

Vicky Lucas is the Human Dimensions Champion, Belmont Forum e-I&DM & Training Manager, IEA


Soil Moisture retrieval from satellite SAR imagery

By Keith Morrison

Soil moisture retrieval from satellite synthetic aperture radar (SAR) imagery uses the knowledge that the signal reflected from a soil is related to its dielectric properties. For a given soil type, variations in the dielectric properties are controlled solely by changes in moisture content. Thus, a backscatter value at a pixel can be inverted via scattering models to obtain surface moisture. However, this retrieval is complicated by the additional sensitivity of the backscatter to surface roughness and overlying vegetation biomass.

For the simplest cases of bare or lightly vegetated soils, extraction of accurate soil moisture information relies on an accurate model representation of the relative contributions of soil moisture and surface roughness. Models to invert backscatter into soil moisture can be broadly categorised as physical, empirical, or semi-empirical. Empirical models use experimental results to derive explicit relationships between the radar backscattering and moisture. However, these models tend to be site-specific, only being applicable to situations where radar parameters and soil conditions are close to those used in the initial model derivation. Semi-empirical models start with a theoretical description of the scene, and then use simulated or experimental data to direct the implementation of the model. Such models are useful as they provide relatively simple relationships between surface properties and radar observables that capture much of the physics of the radar-soil interaction. Their key advantages are that they are much less site-dependent than empirical models, and can be applied when little or no information about the surface roughness is available. Theoretical, or physical, models are based on a rigorous mathematical description of the radar-soil interaction, and retrieve moisture by inverting a full forward model of the backscatter. Their generality means they are applicable to a wide range of site conditions and sensor characteristics. In practice, however, the large number of input variables they require makes their parameterisation complex and their implementation difficult. As such, semi-empirical models have generally been the most favoured.
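As a concrete illustration, an empirical inversion of the kind described above can be as simple as a fitted linear relation between backscatter (in dB) and volumetric moisture. The coefficients below are hypothetical placeholders, not values from any published model; in practice they would be fitted to field data for one site and sensor configuration:

```python
import numpy as np

# Minimal sketch of an empirical backscatter-to-moisture inversion,
# assuming a fitted linear relation sigma0 = a + b * mv.
A_DB = -14.0  # backscatter (dB) of a dry soil -- hypothetical coefficient
B_DB = 30.0   # sensitivity (dB per unit volumetric moisture) -- hypothetical

def invert_moisture(sigma0_db):
    """Invert backscatter (dB) to volumetric soil moisture (m3/m3)."""
    mv = (np.asarray(sigma0_db) - A_DB) / B_DB
    # Keep the retrieval within physically plausible bounds.
    return np.clip(mv, 0.0, 0.6)

print(invert_moisture([-12.0, -8.0, -5.0]))  # wetter soils scatter more strongly
```

Because such coefficients encapsulate one site's roughness and vegetation state, applying them elsewhere is exactly the site-specificity limitation noted above.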

The approaches outlined above only use the incoherent component – backscatter intensity – to characterise the soil moisture, discarding potentially useful information contained in the phase. Recently, however, a causal link between soil moisture and interferometric phase has been demonstrated, and the development of phase-derived soil products will see increasing attention. The figure below shows the first demonstration of phase-retrieved soil moisture, applied across agricultural fields (De Zan et al., 2014). Here, the differential phase (in degrees) between two SAR images clearly shows delineation along field boundaries, associated with differing moisture states.
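For readers unfamiliar with the interferometric phase, the quantity mapped in the figure is the phase of the complex product of two co-registered SAR images. A minimal sketch with synthetic numbers (not De Zan et al.'s processing chain) shows how it is formed:

```python
import numpy as np

def interferometric_phase(slc1, slc2):
    """Phase (degrees) of the complex interferogram slc1 * conj(slc2)."""
    return np.degrees(np.angle(slc1 * np.conj(slc2)))

# Two synthetic single-look complex pixels whose phases differ by 30 deg,
# as might arise from a moisture-driven change in the soil's dielectric.
slc1 = np.exp(1j * np.deg2rad(10.0))
slc2 = np.exp(1j * np.deg2rad(-20.0))
print(interferometric_phase(slc1, slc2))  # ~30 degrees
```

In real interferograms this phase also contains topographic and atmospheric terms, which is why isolating the moisture signal was a notable result.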


De Zan, F. et al., 2014. IEEE Transactions on Geoscience and Remote Sensing, 52, 418-425.


Can observations of the ocean help predict the weather?

By Amos Lawless

It has long been recognized that there are strong interactions between the atmosphere and the ocean. For example, the sea surface temperature affects what happens in the lower boundary of the atmosphere, while heat, momentum and moisture fluxes from the atmosphere help determine the ocean state. Such two-way interactions are made use of in forecasting on seasonal or climate time scales, with computational simulations of the coupled atmosphere-ocean system being routinely used. More recently, operational forecasting centres have started to move towards representing the coupled system on shorter time scales, with the idea that even for a weather forecast of a few hours or days ahead, knowledge of the ocean can provide useful information.

A big challenge in performing coupled atmosphere-ocean simulations on short time scales is to determine the current state of both the atmosphere and ocean from which to make a forecast. In standard atmospheric or oceanic prediction the current state is determined by combining observations (for example, from satellites) with computational simulations, using techniques known as data assimilation. Data assimilation aims to produce the optimal combination of the available information, taking into account the statistics of the errors in the data and the physics of the problem. This is a well-established science in forecasting for the atmosphere or ocean separately, but determining the coupled atmospheric and oceanic states together is more difficult. In particular, the atmosphere and ocean evolve on very different space and time scales, which is not very well handled by current methods of data assimilation. Furthermore, it is important that the estimated atmospheric and oceanic states are consistent with each other, otherwise unrealistic features may appear in the forecast at the air-sea boundary (a phenomenon known as initialization shock).
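The weighting idea at the heart of data assimilation can be sketched in its simplest scalar form, where a model background and a single observation are combined according to their error variances. This is a toy illustration with made-up numbers, not any operational scheme:

```python
# Scalar "best linear unbiased estimate": combine a model background xb
# with an observation y, each weighted by its error variance.

def assimilate(xb, var_b, y, var_o):
    """Return the optimal scalar analysis and its error variance."""
    k = var_b / (var_b + var_o)   # gain: trust the obs more when var_o is small
    xa = xb + k * (y - xb)        # analysis state
    var_a = (1.0 - k) * var_b     # analysis error variance (always reduced)
    return xa, var_a

# A background SST of 15.0 C (variance 1.0) and a satellite observation of
# 16.0 C (variance 0.25) give an analysis pulled strongly towards the obs.
xa, var_a = assimilate(15.0, 1.0, 16.0, 0.25)
print(round(xa, 2), round(var_a, 2))  # 15.8 0.2
```

The coupled problem is hard precisely because, in the real systems, these variances and gains become matrices spanning both fluids, with very different scales in each.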

However, testing new methods of data assimilation on simulations of the full atmosphere-ocean system is non-trivial, since each simulation uses a lot of computational resources. In recent projects sponsored by the European Space Agency and the Natural Environment Research Council we have developed an idealised system on which to develop new ideas. Our system consists of just one single column of the atmosphere (based on the system used at the European Centre for Medium-range Weather Forecasts, ECMWF) coupled to a single column of the ocean, as illustrated in Figure 1.  Using this system we have been able to compare current data assimilation methods with new, intermediate methods currently being developed at ECMWF and the Met Office, as well as with more advanced methods that are not yet technically possible to implement in the operational systems. Results indicate that even with the intermediate methods it is possible to gain useful information about the atmospheric state from observations of the ocean. However, there is potentially more benefit to be gained in moving towards advanced data assimilation methods over the coming years. We can certainly expect that in years to come observations of the ocean will provide valuable information for our daily weather forecasts.


Smith, P.J., Fowler, A.M. and Lawless, A.S., 2015. Exploring strategies for coupled 4D-Var data assimilation using an idealised atmosphere-ocean model. Tellus A, 67, 27025.

Fowler, A.M. and Lawless, A.S., 2016. An idealized study of coupled atmosphere-ocean 4D-Var in the presence of model error. Monthly Weather Review, 144, 4007-4030.


It melts from the top too …

By David Ferreira

Global sea level is rising at about 3 mm per year. The oceans absorb nearly 90% of the heat trapped in the atmosphere by anthropogenic gases like carbon dioxide. As water warms, it expands: this effect explains about half of the observed sea level rise. The other half is due to the melting of ice stored over land, that is, glaciers, the Greenland ice sheet and the Antarctic ice sheet.
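The thermal-expansion contribution can be illustrated with a rough back-of-the-envelope estimate; the layer depth, warming rate and expansion coefficient below are order-of-magnitude assumptions, not observed values:

```python
# Back-of-the-envelope thermosteric sea level rise: a warming layer of
# depth h expands by roughly alpha * dT * h.

ALPHA = 2.0e-4           # thermal expansion of seawater (1/K), approximate
LAYER_DEPTH_M = 700.0    # depth of the warming upper-ocean layer, assumed
WARMING_K_PER_YR = 0.01  # assumed mean warming rate of that layer (K/yr)

rise_m_per_yr = ALPHA * WARMING_K_PER_YR * LAYER_DEPTH_M
print(f"{rise_m_per_yr * 1000:.1f} mm/yr")  # ~1.4 mm/yr
```

With these illustrative numbers the expansion term comes out at roughly half the observed 3 mm per year, consistent with the split described above.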

Although the latter has been a relatively small contributor, recent estimates suggest an increased mass loss from Antarctica in the last decade. Until now, Antarctica was thought to lose most of its mass at its edges.

The Antarctic ice sheet behaves a bit like a pile of dough that slowly collapses under its own weight. The ice spreads over the whole continent, and then over the oceans as floating ice, known as ice shelves. Ice shelves are usually found at the end of fast ice streams channelled by mountains (there are hundreds of these around the continent). The ice shelves are in contact with the “warm” ocean (~2-4°C) and melt slowly. Occasionally the process is more abrupt: the ice shelves shed icebergs, some of which are many kilometres in size (an iceberg much larger than Greater London is about to break loose from the Larsen ice shelf). On long timescales, the ice loss at the edges is compensated by snow falling on top of the ice sheet. In recent decades, however, the mass loss at the edges has been slightly larger than the gain through snowfall (a transfer of water to the oceans and a contribution to sea level rise). The leading explanation for this recent imbalance is that the rate at which warm water is brought to the ice shelves has increased, possibly because of a strengthening of the winds that drive the ocean currents.

A recent paper brings a new element into the picture: the Antarctic ice sheet does not only melt at the edges but also from the top (Kingslake et al., 2017). The surface melt process was thought to be exclusive to Greenland, as Antarctica is too cold, even in summer, for temperatures to rise above 0°C. So, how is this happening? Melt water in Antarctica seems to originate next to blue ice or exposed rocks. Within the white world of Antarctica, blue ice and rocks are dark. That is, they absorb more sunlight than snow and can (locally) create the conditions for melting. The melt water then gathers into elongated ponds that can grow by kilometres within weeks. Kingslake et al. have documented this process for hundreds of ice streams around Antarctica, sometimes deep into the continent, highlighting a much more widespread phenomenon than previously thought.
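The albedo argument can be made quantitative with a simple sketch: the absorbed solar flux is the incoming flux times one minus the albedo. The insolation and albedo values below are typical textbook numbers, assumed for illustration:

```python
# Why dark surfaces can trigger local melt: absorbed flux = (1 - albedo) * S.
SOLAR_FLUX = 300.0  # W/m2, assumed summer insolation at the surface

def absorbed_flux(albedo, incoming=SOLAR_FLUX):
    """Shortwave flux (W/m2) absorbed by a surface of the given albedo."""
    return (1.0 - albedo) * incoming

for surface, albedo in [("fresh snow", 0.85), ("blue ice", 0.60), ("rock", 0.15)]:
    print(f"{surface}: {absorbed_flux(albedo):.0f} W/m2 absorbed")
```

With these assumed values, exposed rock absorbs several times more energy than the surrounding snow, enough to create a local melt hotspot even when air temperatures stay below freezing.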

What are the possible consequences? These ponds can accelerate the mass loss to the ocean. For example, if they form over land, they may flush to the base of the ice sheet, “lubricate” the ice-ground interface and speed up the ice flow to the coast. If the ponds form over the ice shelves, the added pressure due to the weight of liquid water can help fracture the ice shelves and create icebergs.

Then, the natural question is whether the Antarctic ice sheet is more susceptible to rising temperatures than we think. Unlike the melting at the edges, which involves indirect mechanisms through changing winds and ocean currents, surface melting could be directly influenced by increasing temperatures. How important could that be in terms of sea level rise? This remains to be quantified, as current ice sheet models either do not take this effect into account or underestimate it.


Kingslake et al., 2017. Nature, doi:10.1038/nature22049
