Can we use future data to improve our knowledge of the ocean?

By Chris Thomas

An interesting problem in climate science is working out what happened in the world’s oceans in the last century. How did the temperature change, where were the currents strongest, and how much ice was there at the poles? These questions are interesting for many reasons, including the fact that most global warming is thought to be occurring in the oceans and learning more about when and where this happened will be very useful for both scientists and policymakers.

There are several ways to approach the problem. The first, and maybe the most obvious, is to use the observations that were recorded at the time. For example, there are measurements of the sea surface temperature spanning the entire last century. These measurements were made by, for example, instruments carried on ships, buoys drifting in the ocean and (in recent decades) satellites. This approach is the most direct use of the data, and arguably the purest way to determine what really happened. However, particularly in the ocean, the observations can be thinly scattered, and producing a complete global map of temperature requires making various assumptions which may or may not be valid.

The second approach is to use a computer model. State-of-the-art models contain a huge amount of physics and are typically run on supercomputers due to their size and complexity. Models of the ocean and atmosphere can be guided using our knowledge of factors such as the amount of CO2 in the atmosphere and the intensity of solar radiation received by the Earth. Although contemporary climate models have made many successful predictions and are used extensively to study climate phenomena, the precise evolution of an individual model run will not necessarily reproduce reality particularly closely due to the random variation which often occurs.

The final technique is to combine the first two approaches in what is known as a reanalysis. Reanalysis involves taking observations and combining them with climate models in order to work out what the climate was doing in the past. Large-scale reanalyses usually cover multiple decades of observations. The aim is to build up a consistent picture of the evolution of the climate, using observations to modify the evolution of the model in an optimal way. Reanalyses can yield valuable information about the performance of models (enabling them to be tuned), explore aspects of the climate system which are difficult to observe, explain various observed phenomena, and aid predictions of the future evolution of the climate system. That’s not to say that reanalyses don’t have problems, of course; a common criticism is that various physical quantities are not necessarily conserved (which can happen if the model and observations are radically different). Even so, many meteorological centres around the world have conducted extensive reanalyses of climate data. Examples of recent reanalyses include GloSea5 (Jackson et al. 2016), CERA-20C, MERRA-2 (Gelaro et al. 2017) and JRA-55 (Kobayashi et al. 2015).

When performing a reanalysis the observations are typically divided into consecutive “windows” spanning a few days. The model starts at the beginning of the first window and runs forward in time. The reanalysis procedure pushes the model trajectory towards any observations that fall in each window; the amount by which the model is moved depends on how much we trust the model relative to the observations. A very simplified schematic of the procedure can be found in Figure 1.
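To make the windowed procedure concrete, here is a minimal toy sketch in Python (not an operational assimilation algorithm): a one-variable model runs forward across each window and is then nudged towards that window's observation, with a weight set by how much we trust the model relative to the observation. All names and numbers below are illustrative.

```python
import numpy as np

def run_model(x, steps, a=0.95):
    # Stand-in "model": damped persistence of a single state variable
    for _ in range(steps):
        x = a * x
    return x

def reanalysis(x0, windows, obs, r_model=1.0, r_obs=0.5, a=0.95):
    """Run the model forward window by window, nudging the state towards
    each window's observation. r_model and r_obs are toy error variances:
    the larger r_model is relative to r_obs, the harder the nudge."""
    w = r_model / (r_model + r_obs)   # weight given to the observation
    x, trajectory = x0, []
    for y in obs:
        x = run_model(x, windows, a)  # free forecast across the window
        x = x + w * (y - x)           # analysis: nudge towards the observation
        trajectory.append(x)
    return trajectory
```

With observations fixed at 1.0 and a model initialised at 0, the analysed trajectory sits between the free model run and the data, drifting towards the observations window by window.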

Figure 1: A very simplified schematic of how reanalysis works. The data (black stars) are divided into time windows indicated by the vertical lines. The model, if left to its own devices, would take the blue trajectory. If the data are used in conjunction with the model it follows the orange trajectory.

This takes us to the title of the post. Obviously it’s not actually possible to use data from the future (without a convenient time machine), but the nice aspect of a reanalysis is that all of the data are available for the entire run. Towards the start of the run we have knowledge of the observations in the “future”; if we believe these observations will enable us to push the current model closer to reality it is desirable for us to use them as effectively as possible. One way to do that would be to extend the length of the windows, but that eventually becomes computationally unfeasible (even with the incredible supercomputing power available these days).

The question, therefore, is whether we can use data from the “future” to influence the model at the current time, without having to extend the window to unrealistic lengths. The methodology to do this has been introduced in our paper (Thomas and Haines, 2017). The essential idea is to use a two-stage procedure. The first run is a standard reanalysis which incorporates all data except the observations that appear in the future. The second stage then uses the future data to modify the trajectory again. Two stages are required because the key quantity of interest is the offset between the future observations and the first trajectory; without this, we’d just be guessing how the model would behave and would not be able to exploit the observations as effectively.
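The two-stage idea can be illustrated with a toy ensemble calculation. This is a sketch of the general lagged-covariance principle, not the specific scheme of the paper: the offset between a "future" observation and the first-stage trajectory is regressed back onto the current state, using the covariance between the state now and the model-propagated state at the future time.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy linear model: the state decays by a factor a each step, so the state
# now and the state `lag` steps ahead are correlated. That lagged
# covariance is what lets a "future" observation inform the present state.
a, lag, n_ens = 0.9, 5, 500

# Ensemble of first-stage analyses at the current time (stage one done)
x_now = rng.normal(1.0, 0.5, n_ens)
# Model-propagated states at the future observation time
x_future = a**lag * x_now + rng.normal(0.0, 0.1, n_ens)

# Stage two: regress the offset (innovation) between the future observation
# and the first-stage trajectory back onto the current state
y_future, r_obs = 0.9, 0.1**2
cov = np.cov(x_now, x_future)
gain = cov[0, 1] / (cov[1, 1] + r_obs)          # Kalman-style lagged gain
x_now_updated = x_now + gain * (y_future - x_future)
```

The update both shifts the ensemble mean towards a state consistent with the future observation and reduces its spread, which is exactly why the first-stage trajectory (and hence the offset) is needed.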

Our paper describes a test of the method using a simple system: a sine-wave shape travelling around a ring. Observations are generated at different locations and the model trajectory is modified accordingly. It is found that including the future observations improves the description relative to the first stage; some results are shown in Figure 2. The method has been tested in a variety of situations (including different models) and is reasonably robust even when the model varies considerably through time.

Figure 2: Results obtained when using the new methodology in a simple simulation. The left hand plot shows the results after the first stage, and the right hand plot shows the results after the second stage. In each plot the horizontal axis is space and the vertical axis is time. Values closer to zero (the white areas) indicate the procedure has performed well, whereas values further from zero (the blue and orange areas) indicate it has not been as successful. The second stage has more white areas, showing an improvement over the first. (The labels at the top of each plot indicate where the observations are located.)

We have now implemented the method in a large-scale ocean reanalysis which is currently running on a supercomputer. We are particularly interested in a process known as the AMOC (Atlantic Meridional Overturning Circulation), a north-south movement of water in the Atlantic Ocean (see Figure 3 for a cartoon). It is believed that the behaviour of water in the northernmost reaches of the Atlantic can influence the strength of circulation at tropical latitudes; crucially, this relationship is strongest at a time lag of several years (Polo et al. 2014). Data collected by the RAPID measurement array in the North Atlantic take the role of the “future” data in the reanalysis and are used to modify the model trajectory in the North Atlantic. The incorporation of RAPID data in this way has not been done before and we’re looking forward to the results!

Figure 3: A cartoon of the AMOC and the RAPID array in the North Atlantic. The red and blue curves indicate the movement of water. The yellow circles indicate roughly where the RAPID array is located.


Jackson, L. C., Peterson, K. A., Roberts, C. D. and Wood, R. A. 2016. Recent slowing of Atlantic overturning circulation as a recovery from earlier strengthening. Nat. Geosci. 9(7), 518–522.

Kobayashi, S. et al. 2015. The JRA-55 Reanalysis: General specifications and basic characteristics. J. Meteor. Soc. Japan Ser. II 93(1), 5–48.

Gelaro, R. et al. 2017. The Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2). J. Clim. 30(14), 5419–5454.

Thomas, C. M. and Haines, K. 2017. Using lagged covariances in data assimilation. Accepted for publication in Tellus A.

Polo, I., Robson, J., Sutton, R. and Balmaseda, M. A. 2014. The Importance of Wind and Buoyancy Forcing for the Boundary Density Variations and the Geostrophic Component of the AMOC at 26°N. J. Phys. Oceanogr. 44(9), 2387–2408.


Posted in Climate, Climate modelling, Oceans

Sunny, Windy Sundays

By Daniel Drew

Throughout the day National Grid (the system operator of the electricity network in Great Britain) must ensure there is a balance between the demand for electricity and the amount generated. Historically this has involved forecasting the level of demand based on meteorological conditions and human activity and adjusting the generation from conventional power stations accordingly. However, the dramatic growth of wind and solar power capacity in recent years makes things more complicated, as there is now a need to accurately forecast the renewable generation as well.

National Grid has a licence obligation to keep the system frequency between 49.5 and 50.5 Hz. Any imbalance between supply and demand leads to a change in the frequency of the network. The rate at which the frequency changes following an imbalance is determined by the system inertia – higher inertia means it takes longer to reach a new steady state. System inertia is the stored rotational energy of all the machines directly connected to the network; it is therefore a measure of the network’s resistance to changes in frequency. The growth of renewable generators such as solar panels and wind turbines reduces the amount of system inertia. This presents a challenge to National Grid, particularly on days when renewables provide a large proportion of demand. It is therefore important to have a clear understanding of the proportion of electricity provided by renewables throughout the year.
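The link between inertia and the speed of frequency change can be sketched with the textbook swing-equation approximation. The numbers below are illustrative only, not National Grid figures.

```python
def rocof(power_imbalance_mw, stored_energy_mws, f0=50.0):
    """Initial rate of change of frequency (Hz/s) after a sudden imbalance,
    from the textbook swing-equation approximation:
        df/dt = -dP * f0 / (2 * E_kinetic)
    where E_kinetic is the total stored rotational energy (MW s) of the
    machines connected to the network."""
    return -power_imbalance_mw * f0 / (2.0 * stored_energy_mws)

# Losing a 1000 MW generator with 200 GW s of stored rotational energy:
high_inertia = rocof(1000.0, 200_000.0)   # -0.125 Hz/s
# The same loss with half the inertia: frequency falls twice as fast
low_inertia = rocof(1000.0, 100_000.0)    # -0.25 Hz/s
```

Halving the stored energy doubles the initial rate of frequency fall, which is why displacing synchronous plant with inverter-connected wind and solar shortens the time available to respond to a fault.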

During the calendar year of 2016, wind and solar power contributed approximately 15% of UK electricity. However, for individual 30 minute periods this proportion can be a lot higher. Figure 1 shows that the contribution of renewables to electricity demand exceeded 25% for approximately 5% of the year. In general, the highest penetrations are observed on sunny, windy and warm days, when the electricity demand is relatively low and the generation levels of wind and solar are both relatively high. If these meteorological conditions happen to fall on a Sunday the proportion of renewables is amplified as electricity demand is highly suppressed.
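The penetration statistic behind Figure 1 is straightforward to compute from half-hourly demand and generation series. The sketch below uses synthetic data purely to show the calculation; the shapes and magnitudes are illustrative, not real metered values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for a year of half-hourly GB data (illustrative only)
n = 365 * 48
demand_gw = rng.uniform(25.0, 45.0, n)
wind_gw = rng.gamma(2.0, 2.0, n)
solar_gw = np.clip(rng.normal(1.5, 2.0, n), 0.0, None)

# Proportion of demand met by renewables in each 30-minute period,
# and the fraction of the year for which that proportion exceeds 25%
penetration = (wind_gw + solar_gw) / demand_gw
share_above_25pct = np.mean(penetration > 0.25)
```

Sorting `penetration` and plotting it against the cumulative fraction of periods reproduces a curve of the kind shown in Figure 1.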

Figure 1. The cumulative distribution of the proportion of 30-minute electricity demand provided by wind and solar power in 2016.

Given the short period for which the turbines and solar panels have been installed, the distributions shown in Figure 1 sample only a limited range of meteorological conditions. It is therefore unclear what proportion of demand could be provided by renewables, at the current capacity of wind farms and solar panels, under conditions not yet observed. We are currently working with National Grid to extend the dataset to take into account the full range of meteorological conditions which could occur in the UK.


Posted in Renewable energy

The Role of Synoptic Meteorology on UK Air Pollution

By Chris Webber

In the past year the issue of air pollution within the UK has risen up the public agenda, driven by the loss of life that it causes (in 2013, more than 500,000 years of life were lost in the UK due to air pollution 1). Air pollution concentrations within the UK are a function of both pollutant emissions and meteorology. This study set out to determine how synoptic meteorology affects UK concentrations of particulate matter (PM) with an aerodynamic diameter ≤ 10 µm ([PM10]).

The influence of synoptic meteorology on air pollution concentrations is well studied, with anticyclonic conditions over a region often found to be associated with the greatest pollutant concentrations 2,3. Webber et al. (2017) evaluated the impact of synoptic meteorology on UK Midlands [PM10] and identified Omega block events as the synoptic condition associated with the most frequent exceedances of the UK daily mean [PM10] threshold (episodes). A UK [PM10] episode is defined as a daily mean [PM10] more than 10 µg m⁻³ above the mean UK Midlands concentration.
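The episode definition can be expressed in a few lines. This is an illustrative sketch with made-up numbers; the actual study uses observed Midlands concentrations.

```python
import numpy as np

def pm10_episodes(daily_mean, threshold_offset=10.0):
    """Return indices of days whose daily-mean [PM10] exceeds the dataset
    mean by more than threshold_offset (ug/m3)."""
    baseline = np.mean(daily_mean)
    return np.flatnonzero(daily_mean > baseline + threshold_offset)

# Illustrative week of daily means (ug/m3); the dataset mean here is 24,
# so only days above 34 count as episodes
pm10 = np.array([18.0, 22.0, 21.0, 35.0, 19.0, 20.0, 33.0])
episodes = pm10_episodes(pm10)
```

Note that the threshold is relative to the dataset mean, so a day at 33 µg m⁻³ here falls just short even though it is well above typical values.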

This study uses the Met-Office HADGEM3-GA4 atmosphere-only climate model to gather information on the flow regimes influencing the UK throughout Omega block events. For this study, temperature and wind velocity are constrained using ERA-Interim reanalysis data, in a process termed nudging. This study uses four inert tracers, emitted throughout Europe (Figure 1), to identify flow regimes from the highest PM10 emission regions throughout Europe. To enable their transport across Europe, the tracers are designed to replicate the lifetime of sulphate aerosol.


Figure 1. This study’s four tracer emission regions throughout Europe.

This study has identified 28 Omega blocks within the winter months (DJF) between December 1999 and February 2008. The anomalous mean sea level pressure composite for the 28 Omega blocks is shown for the onset day in Figure 2 and bears resemblance to a classical Omega block pattern (Figure 3). The Omega block onset is defined as in Webber et al. (2017), where the western flank of an upper level anticyclone has been detected within the northeast Atlantic/ European region. 

Figure 2. Mean Sea Level Pressure Anomaly for 28 Omega block events on the day of onset, relative to a DJF 1999-2008 dataset mean.

Figure 3. An idealised schematic of an Omega block pattern. The High and Low refer to mean sea level pressure anomalies, while the solid black line represents flow streamlines (Met Office, 2017).

Figure 4 shows this study’s key result: the UK Midlands daily mean concentration of each tracer throughout the evolution of an Omega block. The observed UK Midlands [PM10] and modelled [PM10], the latter generated from the modelled tracers using multiple linear regression, are also shown. Within Figure 4 the solid line is the mean concentration throughout the Omega block minus 1.65 times the standard deviation of that concentration (for a one-tailed test this corresponds to the 95% confidence level). The horizontal dashed lines represent the dataset means (excluding the Omega block events) for each quantity. If the solid black line is greater than the horizontal dashed line in any of the panels, this represents a significant increase (p < 0.05) in the concentration above the dataset mean.
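The significance test described above amounts to the following check (the concentrations below are illustrative numbers, not the study's data):

```python
import numpy as np

def significant_increase(composite_values, dataset_mean, z=1.65):
    """One-tailed check at roughly the 95% level: the composite mean minus
    z standard deviations must still exceed the dataset mean."""
    lower = np.mean(composite_values) - z * np.std(composite_values)
    return lower > dataset_mean

# Illustrative tracer concentrations composited over Omega block events
tracer_composite = np.array([5.2, 6.1, 5.8, 6.4, 5.9])
is_significant = significant_increase(tracer_composite, dataset_mean=4.0)
```

Plotting `lower` against the dataset mean for each day of the composite reproduces the solid-versus-dashed comparison made in Figure 4.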

Figure 4. Observed PM10, modelled PM10 and tracer concentrations throughout the 9 days of an Omega block event, minus 1.65 times the standard deviation of each concentration (solid lines). The horizontal dashed line represents the DJF 1999-2008 dataset mean for each tracer or PM10 concentration.

Figure 4 shows that Omega block events result in significant increases in both observed and modelled [PM10] on day +1 relative to the onset of an Omega block event. This is the maximum that Webber et al. (2017) found to be associated with an elevated probability of UK Midlands PM10 episodes.

The key message from Figure 4 is that the peak in UK [PM10] during Omega block events is driven by an increase in both locally sourced pollution and advected European pollution. Omega blocks have previously been thought to elevate [PM10] through the accumulation of locally sourced pollution alone; however, this study is one of the first to show that this is not the whole story. We see a significant influence from the European tracers, coinciding with the modelled UK [PM10] peaks.


1 EEA, 2016. Exceedance of air quality limit values in urban areas (Indicator CSI 004), European Environment Agency.

2 Barmpadimos, I., Keller, J., Oderbolz, D., Hueglin, C., Prevot, A. S. H., 2012. One decade of parallel fine (PM2.5) and coarse (PM10-PM2.5) particulate matter measurements in Europe: trends and variability. Atmos. Chem. Phys. 12, 3189-3203.

3 McGregor, G. R., Bamzelis, D., 1995. Synoptic Typing and Its Application to the Investigation of Weather Air-Pollution Relationships, Birmingham, United-Kingdom. Theor. Appl. Climatol. 51, 223-236.

4 Altenhoff, A. M., Martius, O., Croci-Maspoli, M. I., Schwierz, C., Davies, H. C., 2008. Linkage of atmospheric blocks and synoptic-scale Rossby waves: a climatological analysis. Tellus A 60(5), 1053-1063.

5 Webber, C. P., Dacre, H. F., Collins, W. J., Masato, G., 2017. The dynamical impact of Rossby wave breaking upon UK PM10 concentration. Atmos. Chem. Phys. 17, 867-881.

6 Met Office, 2017. Blocking Patterns. Available online at: [Accessed July 2017].


Posted in Aerosols, Atmospheric chemistry, Boundary layer, Environmental hazards, Urban meteorology

BoBBLE: Air-sea interactions and intraseasonal oscillations in the Bay of Bengal

By Simon Peatman

The Indian Summer Monsoon (ISM) is one of the most significant features of the tropical climate. The heavy rain it brings during boreal summer provides around 80% of the annual precipitation over much of India with over 1 billion people feeling the impact, especially through the effect on agriculture. The typical impression of the monsoon which many members of the public have is of constant, torrential rain during the summer months as though a tap is turned on sometime in June, eventually petering out at the end of the season. The reality, however, is an active/break cycle on intraseasonal timescales.

The chief source of intraseasonal variability in the ISM region during boreal summer is the BSISO (Boreal Summer IntraSeasonal Oscillation). Similar to the famous real-time multivariate MJO (RMM) indices of Wheeler and Hendon (2004), which divide the Madden-Julian Oscillation into eight phases, the BSISO indices were developed by Lee et al. (2013) based on multivariate EOF analysis of outgoing longwave radiation (OLR) and 850 hPa zonal wind. Two cycles were identified (BSISO1 and BSISO2); here we consider the former. BSISO1 is found to have a period of 30–60 days; a composite life cycle of OLR and 850 hPa wind anomalies is shown in Figure 1b in which a nominal period of 48 days has been chosen, with each frame of the animation corresponding to one day. The boreal summer climatology is shown for reference in Figure 1a. Alternate active and suppressed regions of convection begin over the equatorial region of the Indian Ocean, propagating northwards over the Arabian Sea (west of India), India, the Bay of Bengal (BoB; east of India) and south-east Asia. The MJO chiefly consists of slow eastward propagation of large-scale organized convective envelopes from the Indian Ocean, through the Maritime Continent, to the tropical Pacific. The BSISO may be thought of as the MJO’s northward branch, unique to the boreal summer months.


Figure 1: Boreal summer (May to October) OLR from AVHRR and 850 hPa wind from ERA-Interim (1981-2010): (a) climatology; (b) anomaly for each day of a nominal 48-day BSISO1 life cycle, computed by linearly interpolating between composites of the eight phases. (After Adrian Matthews’ MJO diagrams.)
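The interpolation used to build the nominal 48-day cycle in Figure 1b can be sketched as a generic cyclic linear interpolation between the eight phase composites (assuming, for illustration, that each composite is stored as a flattened spatial map):

```python
import numpy as np

def interpolate_cycle(phase_composites, n_days=48):
    """Linearly interpolate composites for n phases onto a nominal
    n_days-long cyclic life cycle; the last phase wraps back to the first."""
    comps = np.asarray(phase_composites, dtype=float)
    n_phases = comps.shape[0]
    pos = np.arange(n_days) * n_phases / n_days   # fractional phase position
    i0 = pos.astype(int) % n_phases               # phase before each day
    i1 = (i0 + 1) % n_phases                      # phase after (cyclic wrap)
    frac = (pos - pos.astype(int))[:, None]
    return (1.0 - frac) * comps[i0] + frac * comps[i1]

# Toy example: 8 phase composites, each a "map" of 3 grid points
phase_maps = np.arange(24.0).reshape(8, 3)
cycle = interpolate_cycle(phase_maps)   # one frame per day of the 48-day cycle
```

Each day of the animation is then a weighted blend of the two neighbouring phase composites, with phase 8 blending back into phase 1 to close the cycle.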

The monsoon onset, withdrawal and active/break events are generally poorly forecast. This is partly due to a poor understanding of the physical mechanisms behind them. One major gap in our knowledge is the effect of air-sea interactions in the BoB. The Bay of Bengal Boundary Layer Experiment (BoBBLE) seeks to improve our understanding of these interactions and their effect on ISM variability. The BoBBLE field campaign took place in June–July 2016 and performed intensive observations in the BoB of atmospheric conditions, air-sea fluxes, and ocean currents and stratification.

In particular, the campaign saw the deployment of Argo floats (which remained in place after the campaign as part of the global Argo network) and seagliders, operated from the research vessel Sindhu Sidhana (Figure 2).


As part of the BoBBLE project, we at Reading are using data from the field campaign in a range of hindcast modelling experiments.

Figure 2: Research Vessel Sindhu Sidhana.


Lee, J.-Y., B. Wang, M. C. Wheeler, X. Fu, D. E. Waliser, and I.-S. Kang, 2013. Real-Time Multivariate Indices for the Boreal Summer Intraseasonal Oscillation over the Asian Summer Monsoon Region. Clim. Dyn. 40, 493–509.

Wheeler, M. C., and H. H. Hendon, 2004. An All-Season Real-Time Multivariate MJO Index: Development of an Index for Monitoring and Prediction. Mon. Weather Rev. 132, 1917–1932.

Posted in Climate, Climate modelling, Monsoons, Numerical modelling, Oceans

Responding to the threat of hazardous pollutant releases in cities

By Denise Hertwig

High population density and restricted evacuation options make cities particularly vulnerable to threats posed by air-borne contaminants released into the atmosphere through industrial accidents or terrorist attacks. In order to issue evacuation or sheltering advice to the public and avoid or mitigate impacts on human health, emergency responders need timely information about expected pollutant pathways, concentration levels and associated human exposure risks. Such information can be derived from atmospheric dispersion models, which are key components of emergency response management systems. Concentration predictions from such models need to be as robust and reliable as possible, while at the same time being available in near real-time.

Due to the effect of buildings on local wind fields, dispersion behaviour in cities is distinctly different from that in rural areas. Pollutant transport is determined to a large degree by the arrangement of buildings and streets. Effects like pollutant channelling along street canyons, plume branching at intersections, and pollutant trapping and mixing in low-speed recirculation zones behind buildings make urban dispersion scenarios particularly complex (Figure 1). High-resolution (approx. 1 m), building-resolving simulation methods can reproduce these effects with a high level of accuracy, but the turnaround time for such model output is currently much too long to be usable in emergency events. Instead, simpler fast-running modelling tools are needed.

The EPSRC-funded DIPLOS project (‘Dispersion of Localised Releases in a Street Network’) set out to identify driving urban dispersion mechanisms and to improve their representation in fast emergency-response dispersion models. Based on high-resolution simulations and wind-tunnel experiments, representatives of the street-network dispersion modelling class were evaluated in detail. An example of this relatively new approach is the model SIRANE, which is the only street-network dispersion model currently in operational use.

Figure 1: Flow streamlines in idealised urban settings studied in DIPLOS. (a) Helical motion through elongated streets and low-speed recirculation zones in sheltered street canyons in a uniform geometry. (b) Flow disturbances created by a tall building: downdraft on the windward side, low-speed recirculating updraft regions on the leeward side. Thick arrows show the ambient wind direction.

Models like SIRANE treat urban areas as a network of streets connected at intersections and compute pollutant fluxes through the interfaces between street and intersection volumes (Figure 2). While not resolving buildings explicitly, network models are directly aware of the street layout and thus of possible pollutant pathways. This model feature is crucial, as even in simplified urban settings like the ones considered in DIPLOS, the main direction of pollutant transport at pedestrian level can be vastly different from the ambient wind direction because pathways are restricted by the street topology (Figure 3a). Comparisons with high-resolution large-eddy simulations showed that the simple street-network methodology is able to capture this mechanism and also accounts for pollutant exchange with the flow above roof level in a realistic way (Figure 3b). Overall, the comparatively simple street-network models were found to perform as well as more sophisticated stochastic dispersion models while needing only a fraction of the time to run (typically a few minutes) and minimal wind input information. They outperform analytical Gaussian solutions, which are widely applied for regulatory purposes but lack any explicit building awareness. Model performance, however, is crucially dependent on suitable modelling of the transport velocities along streets. Further improvements in network-model performance can be achieved by taking into account the delayed dispersion of pollutants trapped in building wakes, the effects of isolated tall buildings and turbulent concentration fluctuations.
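As a much-simplified illustration of the network idea (a toy box model, emphatically not SIRANE): each street is treated as a well-mixed box, pollutant is passed downwind through intersections at a fixed rate standing in for an along-street transport velocity, and a fraction vents to the flow above roof level at each step. All rates are invented for illustration.

```python
import numpy as np

# Toy street-network box model: a single line of connected street boxes
n_streets = 10
c = np.zeros(n_streets)        # mean concentration in each street box
source = np.zeros(n_streets)
source[0] = 1.0                # continuous release in the first street

advect = 0.3   # fraction advected to the next street per step
vent = 0.1     # fraction exchanged with the flow above roof level per step

for _ in range(200):
    outflow = advect * c
    c = c - outflow - vent * c + source
    c[1:] += outflow[:-1]      # downwind transfer through intersections
```

At steady state the concentration decays street by street downwind of the source, because part of the pollutant is lost to the flow above roof level in every box — a crude analogue of the along-network transport and vertical exchange sketched in Figure 2b.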

The work presented here is the result of a collaboration between the Universities of Reading, Southampton and Surrey and the École Centrale de Lyon in France. DIPLOS is coming to an end in August 2017. An overview of the project, researchers and institutions involved and of science output is available on the DIPLOS website.

Figure 2: (a) Representation of the topology of streets and intersections in street-network dispersion models. (b) Horizontal transport and vertical exchange mechanisms in network models. Background images from Google Earth.

Figure 3: Case of continuous pollutant release at ground-level within an intersection: (a) Plume envelope with colours indicating the plume height. The height of the buildings is H = 10 m (dark red plume areas are at approx. 4.5 H). (b) Street-network model prediction of the mean pollutant concentrations in streets and intersections (right) in comparison to reference results from high-resolution large-eddy simulation (LES; left). Stars mark the location of the source.

Posted in Environmental hazards, Numerical modelling, Urban meteorology

Time scales of atmospheric circulation response to CO2 forcing

By Paulo Ceppi

An important question in current climate change research is, how will atmospheric circulation change as the climate warms? When simulating future climate scenarios, models commonly predict a shift of the midlatitude circulation to higher latitudes in both hemispheres – generally referred to as a “poleward circulation shift”. As an example, under a “business as usual” future emissions scenario the North Atlantic jet stream is predicted to shift northward during the summer months (Figure 1). If true, this would likely affect the average amount of precipitation, wind, and sunshine experienced in the UK and more generally across Western Europe.


Figure 1: Change in eastward wind speed at 850 hPa in the RCP8.5 (“business as usual”) experiment during the 21st century for June-August. The response is calculated as the mean of 2070-2100 minus 1960-1990. Grey contours indicate the wind climatology, while colour shading denotes the change (in m/s). Results are averages over 35 coupled climate models.

Since such circulation shifts are driven by global warming, it is natural to assume that the more the planet warms, the larger the circulation shift. But is that assumption generally true? More specifically: as the planet warms in response to CO2 forcing, do circulation shifts scale with the change in global-mean temperature? Here we are interested in the time evolution of the transient response to CO2 forcing, i.e. the period during which the climate adjusts to the change in CO2. This evolution is best represented in climate model experiments in which CO2 concentrations are increased abruptly and then held constant; since the forcing happens all at once, the various time scales of climate response are cleanly separated. Below I will present results from the so-called “abrupt4xCO2” experiment, in which a set of climate models were subjected to a sudden quadrupling of CO2 concentrations and then run for 150 years.

It turns out that as climate changes following a sudden quadrupling of CO2, circulation shifts do not generally scale with global warming. Instead, two distinct phases of circulation change occur: during the first 5 years or so, the planet warms quickly and the jet streams shift poleward; but thereafter the jets tend to stay at a constant latitude, despite the fact that the planet continues to warm substantially. This is summarised in Figure 2 below, where the curves indicate changes in the latitude of the jet stream (averaged over a set of 28 climate models). In the North Pacific region, the change in jet stream latitude even changes sign over the course of the experiment (Figure 2b).
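The jet-latitude metric itself is simple to compute from a latitude profile of zonal wind at 850 hPa. Here is a sketch using an idealised wind profile; the quadratic refinement around the grid maximum is a common choice for estimating the peak below the grid spacing, though not necessarily the one used in the paper.

```python
import numpy as np

def jet_latitude(lats, u850):
    """Latitude of peak eastward wind at 850 hPa. A quadratic fit through
    the three points around the grid maximum refines the estimate below
    the grid spacing."""
    i = int(np.argmax(u850))
    if i == 0 or i == len(lats) - 1:
        return lats[i]
    y0, y1, y2 = u850[i - 1], u850[i], u850[i + 1]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabola vertex
    return lats[i] + offset * (lats[i + 1] - lats[i])

# Idealised Gaussian jet centred at 47N, sampled on a 2.5-degree grid
lats = np.arange(20.0, 71.0, 2.5)
u = 15.0 * np.exp(-(((lats - 47.0) / 10.0) ** 2))
```

Applying this to annual-mean winds year by year, and differencing against the pre-forcing value, gives jet-shift curves of the kind plotted in Figure 2.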


Figure 2: Change in annual-mean jet stream latitude (measured as the latitude of peak eastward wind at 850 hPa) in climate models during the first 140 years following a quadrupling of atmospheric CO2, as a function of global-mean surface temperature. The curves indicate means across 28 climate models. Shading denotes the 75% range of responses. The jet shifts are in degrees latitude and positive anomalies are defined as poleward. Circles denote individual years until year 10; diamonds denote decadal means between year 11 and year 140. The black crosses indicate the means of years 5-10 and 121-140, respectively.

How can we explain this peculiar time evolution of circulation shifts? The evolution of changes in atmospheric temperature and circulation is mainly controlled by how the ocean surface warms in response to greenhouse gas forcing. Like the atmosphere, the ocean has its own circulation and in some regions deep, cold water rises to the surface – a process known as “upwelling”. Due to this and other processes, the ocean surface does not warm at the same rate everywhere; in particular, upwelling regions like the Southern Ocean experience delayed warming.

We find that the time scales of ocean surface warming determine the time scales of change in atmospheric circulation, via the changes in atmospheric temperature. In particular, the patterns of ocean surface warming before and after year 5 of the experiment are strikingly different; when imposed in an atmospheric climate model, we obtain circulation changes consistent with the results shown in Figure 2, confirming the role of the ocean surface in controlling the atmospheric response.

While the scenario of abrupt CO2 increase described in Figure 2 is unlikely to happen in the real world, further analysis shows that the two time scales of circulation shift are also present in more realistic scenarios of gradual greenhouse gas increase. This indicates that care must be taken when extrapolating transient circulation shifts to estimate changes in future warmer climates.


Ceppi et al., 2017. Fast and slow components of the extratropical atmospheric circulation response to CO2 forcing. Submitted to Journal of Climate.

Zappa et al., 2015. Improving climate change detection through optimal seasonal averaging: the case of the North Atlantic jet and European precipitation. Journal of Climate, doi:10.1175/JCLI-D-14-00823.1

Posted in Atmospheric chemistry, Climate, Climate change, Climate modelling, Numerical modelling

What’s in a number?

By Nancy Nichols

Should you care about the numerical accuracy of your computer? After all, most machines now retain about 16 digits of accuracy, yet usually only about 3-4 significant figures are needed for most applications; so what’s the worry? To demonstrate what can go wrong, there have been a number of spectacular disasters due to numerical rounding error. One of the best known is the failure of a Patriot missile to track and intercept an Iraqi Scud missile in Dharan, Saudi Arabia, on 25 February 1991, resulting in the deaths of 28 American soldiers.

The failure was ultimately attributable to poor handling of rounding errors. The computer doing the tracking calculations had an internal clock whose values were truncated when converted to floating-point arithmetic, with a relative error of about 2⁻²⁰. The clock had been running for 100 hours, so the calculated elapsed time was too long by 2⁻²⁰ × 100 hours ≈ 0.3433 seconds, during which time a Scud would be expected to travel more than half a kilometre.
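The arithmetic above can be checked in a couple of lines (the Scud speed used below, roughly 1.7 km/s, is an assumed figure, not one from the text):

```python
# Drift of the Patriot clock: the truncation quoted above amounts to a
# relative error of about 2^-20 in the elapsed time.
per_second_error = 2.0 ** -20        # relative truncation error
elapsed = 100 * 3600.0               # 100 hours, in seconds
drift = per_second_error * elapsed   # accumulated clock error, ~0.3433 s

scud_speed = 1700.0                  # m/s, an assumed rough figure
print(drift, drift * scud_speed)     # ~0.34 s and well over 500 m
```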

The same problem arises in other algorithms that accumulate and magnify small round-off errors arising from the finite (inexact) representation of numbers in the computer. Algorithms of this kind are referred to as 'unstable' methods. Many numerical schemes for solving differential equations have been shown to magnify small numerical errors. It is known, for example, that L.F. Richardson's original attempts at numerical weather forecasting were essentially scuppered by the unstable methods used to compute the atmospheric flow. Much time and effort have since been invested in developing and carefully coding methods for solving algebraic and differential equations so as to guarantee stability. Excellent software is publicly available. Academics and operational weather forecasting centres in the UK have been at the forefront of this research.
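A classic illustration of instability (a standard textbook example, not from the text): the integrals I_n = ∫₀¹ xⁿ/(x+5) dx satisfy the recurrence I_n = 1/n − 5·I_{n−1}. Run forwards, every rounding error is multiplied by −5 at each step; run backwards, it is divided by 5:

```python
import math

N = 30

# Forward recurrence (unstable): I_n = 1/n - 5*I_{n-1}, I_0 = ln(6/5).
# The tiny rounding error in I_0 is multiplied by -5 at every step.
I_fwd = math.log(6 / 5)
for n in range(1, N + 1):
    I_fwd = 1 / n - 5 * I_fwd

# Backward recurrence (stable): start from a crude guess I_50 = 0;
# the guess error is divided by 5 at every step and melts away.
I_bwd = 0.0
for n in range(50, N, -1):
    I_bwd = (1 / n - I_bwd) / 5

print(I_fwd, I_bwd)   # forward: wildly wrong; backward: ~0.0054
```

A crude starting guess at n = 50 is perfectly adequate for the backward sweep: by the time n = 30 is reached its error has been divided by 5 twenty times over.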

Even with stable algorithms, however, it may not be possible to compute an accurate solution to a given problem. The reason is that the solution may be sensitive to small errors  –  that is, a small error in the data describing the problem causes large changes in the solution. Such problems are called ‘ill-conditioned’. Even entering the data of a problem into a computer  –  for example, the initial conditions for a differential equation or the matrix elements of an eigenvalue problem  –   must introduce small numerical errors in the data. If the problem is ill-conditioned, these then lead to large changes in the computed solution, which no method can prevent.   

So how do you know if your problem is sensitive to small perturbations in the data? Careful analysis can reveal the issue, but for some classes of problems there are measures of the sensitivity, or the 'conditioning', of the problem that can be used. For example, it can be shown that small perturbations in a matrix can lead to large relative changes in the inverse of the matrix if the 'condition number' of the matrix is large. The condition number is the product of the norm of the matrix and the norm of its inverse. Similarly, small changes in the elements of a matrix will cause its eigenvalues to have large errors if the condition number of the matrix of eigenvectors is large. Of course, computing a condition number exactly involves the very inverse whose accuracy is in question, but accurate computational methods for estimating condition numbers are available.
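The definition can be checked directly in NumPy against its built-in estimator (a generic 2×2 matrix, purely for illustration):

```python
import numpy as np

# Condition number = ||A|| * ||A^{-1}||, here in the 2-norm.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(kappa)   # agrees with np.linalg.cond(A, 2)
```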

An example of an ill-conditioned matrix is the covariance matrix associated with a Gaussian distribution. Figure 2 below shows the condition number of a covariance matrix obtained by taking samples from a Gaussian correlation function at 500 points, using a step size of 0.1, for varying length-scales [1]. The condition number increases rapidly to 10⁷ for length-scales of only L = 0.2 and, for length-scales larger than 0.28, the condition number exceeds the limits of the computer precision and cannot even be calculated accurately.

Figure 2
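The experiment behind Figure 2 can be sketched in a few lines, assuming the Gaussian correlation function exp(−r²/2L²) (the precise numbers depend on that choice of form):

```python
import numpy as np

x = np.arange(500) * 0.1        # 500 grid points, step size 0.1
r = x[:, None] - x[None, :]     # matrix of pairwise separations

def cond_gaussian(L):
    """Condition number of the Gaussian correlation matrix for length-scale L."""
    C = np.exp(-r**2 / (2 * L**2))
    return np.linalg.cond(C)

for L in (0.1, 0.15, 0.2):
    print(L, cond_gaussian(L))  # grows rapidly with the length-scale
```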

This result is surprising and very significant for numerical weather prediction (NWP), as the inverses of covariance matrices are used to weight the uncertainty in the model forecast and in the observations used in the analysis phase of weather prediction. The analysis is achieved by the process of data assimilation, which combines a forecast from a computational model of the atmosphere with physical observations obtained from in situ and remote sensing instruments. If the weighting matrices are ill-conditioned, then the assimilation problem also becomes ill-conditioned, making it difficult to obtain an accurate analysis and subsequently a good forecast [2]. Furthermore, the worse the conditioning of the assimilation problem, the longer the analysis takes. This matters because the forecast needs to be produced in 'real' time, so the analysis must be done as quickly as possible.

One way to deal with an ill-conditioned system is to rearrange the problem so as to reduce the conditioning whilst retaining the same solution. A technique for achieving this is to 'precondition' the problem using a transformation of the variables. This is used routinely in NWP operational centres, with the aim of ensuring that the uncertainties in the transformed variables all have a variance of one [1][2]. In Table 1 we can see the effect of the length-scale of the error correlations in a data assimilation system on the number of iterations needed to solve the problem, with and without preconditioning [1]. The conditioning of the problem is improved and the work needed to solve it is significantly reduced. So checking and controlling the conditioning of a computational problem is always important!

Table 1
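The effect of such a variable transform can be seen on a toy covariance matrix; the correlation structure and standard deviations below are invented for illustration, not taken from [1] or [2]:

```python
import numpy as np

n = 50
# Toy background-error covariance: AR(1)-type correlations combined with
# widely varying standard deviations from one variable to the next.
C = np.array([[0.9 ** abs(i - j) for j in range(n)] for i in range(n)])
sigma = np.logspace(0, 3, n)                  # std devs from 1 to 1000
B = sigma[:, None] * C * sigma[None, :]       # covariance matrix

# Precondition: transform so every variable has variance one.  This
# strips out the sigma scaling and leaves only the correlations.
T = np.diag(1.0 / sigma)
B_prec = T @ B @ T                            # recovers C (up to rounding)

print(np.linalg.cond(B), np.linalg.cond(B_prec))
```

The transformed matrix has a far smaller condition number, which is exactly why unit-variance control variable transforms speed up the iterative solve.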


[1] S.A. Haben, 2011. Conditioning and Preconditioning of the Minimisation Problem in Variational Data Assimilation. PhD Thesis, Department of Mathematics and Statistics, University of Reading.

[2]  S.A. Haben, A.S. Lawless and N.K. Nichols,  2011. Conditioning of incremental variational data assimilation, with application to the Met Office system, Tellus, 63A, 782–792. (doi:10.1111/j.1600-0870.2011.00527.x)

Posted in Climate modelling, data assimilation, Numerical modelling, Weather forecasting

A Presidential address …

By Ellie Highwood

I have been President of the Royal Meteorological Society (RMetS) for almost a year now (I will serve two years in total) and people keep asking me “how’s it going?” or “are you enjoying it?” Before I answer those questions let me describe the role.

The role of President of a small society like RMetS is a bit of everything to be honest. Obviously there are the formal things – I chair Council meetings three times a year as well as the Awards Committee and then present the awards at the AGM. Since we are developing our next three-year strategy there are also meetings and workshops with a group of members and Council to do that. Ahead of each Council there are about 3 hours of work to do on the papers to make sure I understand what is in them – these can be about new initiatives the RMetS wants to do, reports from the other committees or the annual accounts. Some people who have experienced me chairing a Termly Staff Meeting at Reading will perhaps be surprised to learn that all the Council meetings have tended to over-run! It is definitely a challenge to make sure every voice is heard on some quite complex issues without getting bogged down in the detail. In truth, it's not my favourite part of the job, but it comes in chunks as between times the working groups, committees and Society staff are busy getting on with things. In fact, having served 4 years as Vice President prior to becoming President (not usual, but a variety of exceptional conditions led to me "filling in"), I was in more regular committee meetings in that role. Not only did I attend the same meetings as the President, but I also chaired the Strategic Programme Board and was a member of the Membership Development Committee.

On top of this, there are the less predictable items – dealing with requests/complaints from members and supporting the Chief Executive, Liz Bentley, in the day-to-day running of the Society and in strategic planning. Liz runs a small office with 8 or so employees and sometimes it's good to have someone outside the line management structure to chat things through with. We meet once per month – it's certainly very handy that RMetS Headquarters is in Reading! Unusually, this year there are significant anniversaries of the formation of the Canadian Meteorological and Oceanographic Society and the Australian Meteorological and Oceanographic Society from the previous "local" branches of the RMetS. To mark this, I will be travelling to Melbourne in August along with Liz and Brian Golding (outgoing Chair of Meetings Committee) to represent the Society (fitting in a seminar at Monash University on the way). I sent a video birthday message to Canada because the timing didn't fit with the rest of my life to do both. I will also be playing a reasonably big role at the RMetS national conferences and at some point will have to give a Presidential address both at a national meeting and in Scotland.

I expect I spend on average half a day per week, if that, on RMetS business unless there is a crisis – which rarely happens due to the excellent work of staff and volunteers. I am also lucky that my predecessor Jennie Campbell had to do the negotiation with our publishers Wiley and that’s not due again for another couple of years (I hope). So, back to the original questions:

How is it going? Well, I think (apart from the length of those Council meetings). I wouldn't expect an RMetS President to come in and suddenly change everything – it just isn't that kind of Society, and two years is too short a term to do that. Instead I see my role as nudging and encouraging movement in certain directions that may already have started happening, e.g. reviewing what it means to be a Fellow of the Royal Meteorological Society, tightening up our nominations and awards processes and making them more transparent, and getting discussions about diversity and inclusion happening (well, you would expect nothing less given my day job, right?).

Am I enjoying it? Hmm. Interesting. It is certainly a great honour to be President, but somewhat intimidating every time I walk up the staircase at Headquarters and see my picture there alongside centuries of great meteorologists (imposter syndrome klaxon). I am proud to be involved with shaping our learned society and, dare I say, moving it along a little to be fit for the next generation of meteorologists. The volunteers on Council and the various committees are, every one of them, fascinating. I loved handing out the awards at the AGM, and I love working with the RMetS staff on conferences and such like. It is great fun. But it is weird. It doesn't feel like a thing most of the time. Which is probably as it should be. We wouldn't want Presidents to let power go to their head now would we?

If you’d like to get involved with the Royal Meteorological Society there are many ways to do so. I started being involved as a postdoc and got a lot of my formal meeting experience and contacts through the RMetS. Visit the website to see what they are up to and whether you can help, attend a meeting or a conference, or nominate someone for Council or an award.

Posted in Royal Meteorological Society, Women in Science

Belmont Forum: joined-up thinking from science funders

By Vicky Lucas

The Belmont Forum supports ‘international transdisciplinary research providing knowledge for understanding, mitigating and adapting to global environmental change’.

The Belmont Forum funds research on themes including sustainability, climate predictability, ecosystem services and Arctic observing.  The group considers research to be part of a value chain which is socially responsible, inclusive and provides innovative solutions.  Furthermore, open data policies and principles are considered essential to making informed decisions in the face of rapid changes affecting the Earth's environment.

Belmont Forum Funded Projects
Andy Turner, of NCAS and the University of Reading Meteorology Department, leads a Belmont Forum funded project, BITMAP, also jointly funded by JPI Climate.  BITMAP stands for 'Better understanding of Interregional Teleconnections for prediction in the Monsoon and Poles'.  The research is an Indo-UK-German collaboration between the Indian National Centre for Medium Range Weather Forecasting and the universities of Reading and Hamburg.

As is regularly the case with Belmont funding calls, a multi-national consortium was required and each of the participating countries contributed support, with NERC the relevant funder in the UK.  Andy says that his project is 'encouraging international collaboration and bridging the gap between academic climate science and the more applied needs of weather forecasting'.  The project is going well and, only six months after starting, a paper on an algorithm for tracking storms is already in preparation by Kieran Hunt.  Andy observes that, in addition to regular virtual meetings between the three countries, 'as papers from the individual countries begin to be published, the collaboration on the project will increase and more ideas will be shared'.

Scott Osprey of the University of Oxford leads GOTHAM, the 'Globally Observed Teleconnections and their role and representation in Hierarchies of Atmospheric Models', also funded by the Belmont Forum.  When asked about the role of the Belmont Forum, Scott pointed to the ability of this international group to encourage 'new international research communities for tackling large and complex environmental issues beyond the purview of most national research centres'.

Data Intensive Environmental Research
The e-Infrastructure and Data Management sub-group of the Belmont Forum was set up to concentrate on overcoming barriers to data sharing, use and management for environmental and global change research.  Improving data sharing will accelerate data-intensive research.  The University of Reading has been involved for several years, as Robert Gurney co-chairs the e-I&DM group.

The e-I&DM promotes the FAIR data principles, which in detail include the use of rich metadata, standards for vocabularies and data formats, along with persistent identifiers, clear licensing and provenance, to ensure that data are:

  • Findable
  • Accessible
  • Interoperable
  • Reusable

These principles have been embraced by many, including the European Commission and Horizon 2020 funded projects.

The number of countries participating in the e-I&DM group is smaller than in the parent Belmont Forum, with active roles played by France (ANR), Taiwan (MOST), Japan (JST), the US (NSF) and the UK (NERC).

Back-of-the-envelope calculation for a data management plan

The Belmont Forum e-I&DM group is currently developing a template for data plans, intended to be light touch, highlighting issues such as cost, documentation, anticipated restrictions on accessing data, and data management after the lifetime of the project.  Organisations such as NERC already have guidelines, but the Belmont Forum can help to standardise the actions of a number of countries.  Andy Turner noted that flexibility from funders is key when asking for a data management plan at the proposal stage: 'only very rough estimates of data sizes might be possible and it is difficult to say at the outset how much data produced will have long-term value'.  Scientists are asked to make projections in the knowledge that the research may change direction during a project, and its data management needs with it.  Nevertheless, considering data management issues from the outset can only help to raise awareness of the value of the data projects produce and to highlight the potential value in reuse.

The Belmont Forum provides the opportunity to produce joined-up thinking from science funders and councils.  The group uses its global reach to influence and fund collaborative research and to work on specific issues for data intensive environmental research.  The data behind this research, which is channelled into discussions, analyses and papers, is also being more widely acknowledged as a valuable resource in itself.  The Belmont Forum is providing leadership and agreement to develop and disseminate best practice for the data themselves.

Vicky Lucas is the Human Dimensions Champion, Belmont Forum e-I&DM & Training Manager, IEA

Posted in Climate

Soil Moisture retrieval from satellite SAR imagery

By Keith Morrison

Soil moisture retrieval from satellite synthetic aperture radar (SAR) imagery uses the knowledge that the signal reflected from a soil is related to its dielectric properties. For a given soil type, variations in the dielectric constant are controlled solely by changes in moisture content. Thus, a backscatter value at a pixel can be inverted via scattering models to obtain the surface moisture. However, this retrieval is complicated by the additional sensitivity of the backscatter to surface roughness and overlying vegetation biomass.

For the simplest cases of bare or lightly vegetated soils, extraction of accurate soil moisture information relies on an accurate model representation of the relative contributions of soil moisture and surface roughness. Models to invert backscatter into soil moisture can be broadly categorised as physical, empirical, or semi-empirical. Empirical models use experimental results to derive explicit relationships between the radar backscatter and moisture. However, these models tend to be site-specific, only being applicable to situations where radar parameters and soil conditions are close to those used in the initial model derivation. Semi-empirical models start with a theoretical description of the scene, and then use simulated or experimental data to direct the implementation of the model. Such models are useful as they provide relatively simple relationships between surface properties and radar observables that capture much of the physics of the radar-soil interaction. Their key advantages are that they are much less site-dependent than empirical models, and can be applied when little or no information about the surface roughness is available. Theoretical, or physical, models are based on a rigorous mathematical description of the radar-soil interaction, and retrieve moisture through a formal inversion of the modelled backscatter. Their generality means they are applicable to a wide range of site conditions and sensor characteristics. However, in practice, because the models require the input of a large number of variables, their parameterisation is complex and their implementation consequently difficult. As such, semi-empirical models have generally been the most favoured.
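As a minimal sketch of the empirical approach, assuming a purely hypothetical linear fit between backscatter in dB and volumetric moisture (the coefficients below are invented, not from any published model):

```python
# Toy empirical model: assume backscatter (dB) varies linearly with
# volumetric soil moisture mv, i.e. sigma0 = a + b * mv, with a and b
# fitted for one particular site.  Inverting it is then trivial.
a = -20.0   # hypothetical dry-soil backscatter, dB
b = 25.0    # hypothetical sensitivity, dB per unit volumetric moisture

def soil_moisture(sigma0_db):
    """Invert the linear fit for volumetric soil moisture (m^3/m^3)."""
    return (sigma0_db - a) / b

print(soil_moisture(-14.0))   # 0.24, a moist soil under this toy fit
```

The site-specificity criticised in the text is visible here: a and b are only valid for the surface roughness and sensor configuration they were fitted under.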

The approaches outlined above only use the incoherent component – backscatter intensity – to characterise the soil moisture, discarding potentially useful information contained in the phase. Recently, however, a causal link between soil moisture and interferometric phase has been demonstrated, and the development of phase-derived soil products will see increasing attention. The figure below shows the first demonstration of phase-retrieved soil moisture, applied across agricultural fields (De Zan et al, 2014). Here, the differential phase (in degrees) between two SAR images clearly shows delineation along field boundaries, associated with differing moisture states.


De Zan, F., et al., 2014. IEEE Transactions on Geoscience and Remote Sensing, 52, 418–425.

Posted in Climate, earth observation, Hydrology, land use, Measurements and instrumentation, Numerical modelling, Remote sensing