How climate modelling can help us better understand the historical temperature evolution

By: Andrea Dittus

Figure 1: Annual global mean surface temperatures from NASA GISTemp, NOAA GlobalTemp, Hadley/UEA HadCRUT4, Berkeley Earth, Cowtan and Way, Copernicus/ECMWF and Carbon Brief’s raw temperature record. Anomalies plotted with respect to a 1981-2010 baseline. Figure and caption from Carbon Brief.

Earth’s climate has warmed by approximately 0.85 °C over the period from 1880 to 2012 [IPCC, 2013] due to anthropogenic emissions of greenhouse gases. However, the rate of warming throughout the twentieth and early twenty-first centuries has not been uniform, with periods of accelerated warming and cooling (Figure 1). Besides greenhouse gases, anthropogenic aerosols are a key player in determining the historical evolution of global temperatures. Aerosols are airborne particles that scatter or absorb incoming solar radiation and affect cloud properties, thereby altering the surface energy budget. Different aerosol species have different properties and climate impacts, but perhaps the most important aerosols in the context of global climate variability are sulphate aerosols, which account for a large proportion of anthropogenic aerosol. As a scattering aerosol, sulphate has a cooling effect on global climate and has offset some of the warming induced by emissions of greenhouse gases. Although we know that aerosols play an important role in global climate, the magnitude of historical aerosol forcing remains uncertain [e.g. Stevens, 2015; Kretzschmar et al., 2017; Booth et al., 2018].

In climate models, the representation of aerosol processes is very diverse, resulting in a wide spread in the magnitude of aerosol forcing across different climate models [Wilcox et al., 2015]. Consequently, the climate effects of aerosols are also very different from model to model. Studies have suggested that aerosol forcing can influence the phasing of key modes of multi-decadal variability such as the Atlantic Multidecadal Variability [Booth et al., 2012] and Pacific Decadal Oscillation [Smith et al., 2016], although the degree of influence is still unclear [e.g. Zhang et al., 2013; Oudar et al., 2018]. Key open questions are whether these findings are model dependent, influenced by the magnitude of simulated aerosol forcing, ensemble size, or a combination of these.

Figure 2: Simulated temperatures for each ensemble member across the different aerosol scalings for the period 1941 to 1970. The numbers 0.2 to 1.5 indicate the scaling factor that was applied to the anthropogenic aerosol emissions. Blue indicates that temperatures are cooler than the reference temperature defined as the 1.0 scaling ensemble mean 1850-2014 climatology, red indicates warmer temperatures.

The SMURPHS Project (Securing Multidisciplinary Understanding of Hiatus and Surge Events) is a multi-disciplinary project whose aim is to improve our understanding of the causes of variations in the observed rate of warming. As part of this project, we have designed an ensemble of historical climate simulations with the HadGEM3-GC3.1 climate model, in which anthropogenic aerosol emissions were scaled up or down to sample a wide range of historical aerosol forcing. The emergence of large ensembles in the climate modelling community has highlighted the importance of sampling a large number of realisations, to better estimate the forced response (common to all members run with the same forcings) and the magnitude of internal variability (individual to each member). As a compromise between the need to sample a wide range of aerosol forcing and multiple initial-condition members, we have opted to run four different initial-condition members for each of five different aerosol scalings. Figure 2 illustrates the effect of aerosol forcing on temperature in the SMURPHS ensemble for the period from 1941 to 1970, a period particularly sensitive to aerosol forcing (not shown). Along the x-axis, different magnitudes of aerosol forcing represent the sensitivity of the simulations to aerosol forcing. On the y-axis, each line represents a single realisation, to highlight the role of internal variability. The simulations with higher aerosol emissions are systematically colder than those with lower aerosol emissions, consistent with the expected response to increasing aerosol forcing across the ensemble.
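To make the forced-response/internal-variability distinction concrete, here is a minimal Python sketch with entirely synthetic numbers (not SMURPHS output): the ensemble mean across members estimates the forced response, and each member's deviation from that mean estimates its internal variability.

```python
import random

# Illustrative sketch with synthetic data: members share one forcing,
# so averaging across members estimates the forced response.
random.seed(0)

n_members, n_years = 4, 50
forced = [0.02 * year for year in range(n_years)]  # idealised warming trend

# Each member = forced response + its own internal variability ("noise")
ensemble = [[f + random.gauss(0.0, 0.1) for f in forced]
            for _ in range(n_members)]

# Forced response estimate: average across members, year by year
ens_mean = [sum(member[t] for member in ensemble) / n_members
            for t in range(n_years)]

# Internal variability estimate: each member minus the ensemble mean
internal = [[member[t] - ens_mean[t] for t in range(n_years)]
            for member in ensemble]

# By construction the deviations average to zero across members
for t in range(n_years):
    assert abs(sum(mem[t] for mem in internal)) < 1e-9
```

With only four members the ensemble mean is still a noisy estimate of the forced response, which is exactly why large ensembles matter.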

Going forward, these simulations will allow us to investigate how variations in historical aerosol forcing have shaped climate variability in the twentieth and early twenty-first century, from global mean surface temperatures to multi-decadal modes of variability and beyond.


Booth, B. B. B., N. J. Dunstone, P. R. Halloran, T. Andrews, and N. Bellouin (2012), Aerosols implicated as a prime driver of twentieth-century North Atlantic climate variability, Nature, 484, 228-232, doi:10.1038/nature10946

Booth, B. B. B., G. R. Harris, A. Jones, L. Wilcox, M. Hawcroft, and K. S. Carslaw (2018), Comments on “Rethinking the Lower Bound on Aerosol Radiative Forcing,” J. Climate, 31, 9407–9412, doi:10.1175/JCLI-D-17-0369.1.

IPCC, 2013: Summary for Policymakers. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Kretzschmar, J., M. Salzmann, J. Mülmenstädt, O. Boucher, and J. Quaas (2017), Comment on “Rethinking the Lower Bound on Aerosol Radiative Forcing,” J. Climate, 30, 6579–6584, doi:10.1175/JCLI-D-16-0668.1.

Oudar, T., P. J. Kushner, J. C. Fyfe, and M. Sigmond (2018), No Impact of Anthropogenic Aerosols on Early 21st Century Global Temperature Trends in a Large Initial-Condition Ensemble, Geophysical Research Letters, 45, 9245-9252, doi:10.1029/2018GL078841.

Smith, D. M., B. B. B. Booth, N. J. Dunstone, R. Eade, L. Hermanson, G. S. Jones, A. A. Scaife, K. L. Sheen, and V. Thompson (2016), Role of volcanic and anthropogenic aerosols in the recent global surface warming slowdown, Nature Clim. Change, 6, 936–940, doi:10.1038/nclimate3058.

Stevens, B. (2015), Rethinking the Lower Bound on Aerosol Radiative Forcing, J. Climate, 28, 4794–4819, doi:10.1175/JCLI-D-14-00656.1.

Wilcox, L. J., E. J. Highwood, B. B. B. Booth, and K. S. Carslaw (2015), Quantifying sources of inter-model diversity in the cloud albedo effect, Geophysical Research Letters, 42, 1568–1575, doi:10.1002/2015GL063301.

Zhang, R. et al. (2013), Have Aerosols Caused the Observed Atlantic Multidecadal Variability? J. Atmos. Sci., 70, 1135–1144, doi:10.1175/JAS-D-12-0331.1.



The OpenIFS User Workshop

By Bob Plant

I’ve been asked to write a blog post to go live on 17 June, the opening day of the 2019 OpenIFS user workshop. As I’m involved in the organisation, it would almost seem strange not to talk a little about that.

The IFS (Integrated Forecasting System) is the modelling system developed and used at ECMWF, and it underlies all of their forecasting, data assimilation and reanalysis activity. Brief outlines can be found here for the dynamics and here for the physics. The OpenIFS version is designed to be used outside of the centre. This allows universities to collaborate more easily with ECMWF on research projects and supports more teaching-focussed activities.

Students hear a great deal about weather and climate modelling during their studies but have traditionally had little or no opportunity to work directly with the models. Even those whose main interests do not lie in numerical modelling will inevitably rely on modelling results, or will want to analyse model data. So some hands-on modelling experience is valuable, just as those of us who take a more theoretical or model-based perspective nonetheless benefit from being exposed to real experimental data. It’s important that the models should not be looked upon as black boxes that magically generate data, but that students get the opportunity to take out a torch and at least have a bit of a look around in the murky interior.

At the same time, there are obvious practical issues with using full-scale operational-type models in a classroom context. We often look for substantial high-performance computing for model-based research projects and expect to submit jobs that return results after some hours, or perhaps days. Also, while a model might be very nicely designed for the operational or expert research context, it may not be easy for a non-expert to pick up and get started with quickly. The OpenIFS provides a pretty good balance: it is relatively easy to use, but not so easy as to encourage a black-box syndrome.

I was keen to try out OpenIFS for teaching applications in the department, starting with an MSc dissertation project in summer 2015. While not totally plain sailing, it was sufficiently encouraging to offer something for the MSc team project week the following year, with Sue Gray and me each supervising a team so that we could help each other out with any teething issues. That worked well, and further team projects and dissertation projects have followed. There is more about those experiences in a short article in the ECMWF newsletter.

Getting back to this week’s workshop, it is a biennial event to introduce researchers from across Europe (and occasionally further afield) to the OpenIFS. We also have a scientific theme concerning the impact of moist processes on storm evolution, and there will be various talks and posters on this, alongside others on techniques and examples of using the model for research projects.

The key link between the modelling and the theme is our choice of case study. Storm Karl occurred in September 2016. It started out as a tropical system before undergoing an extratropical transition and ultimately produced much rain over Norway. It was observed as part of the NAWDEX (North Atlantic Waveguide and Downstream Impact Experiment) field campaign and there is an overview in this BAMS article. Apparently, it is the first system to undergo an extratropical transition to have been observed with research aircraft at each stage of its evolution, and so I would imagine that it will continue to be a focus of research over the next few years. The article highlights the importance of mid-level moisture, especially for the behaviour of the “warm conveyor belt” in the extratropical regime. Below are example plots from a preliminary OpenIFS simulation. There are also some very nice loops of the satellite imagery, and Met Office global model forecasts at this page, courtesy of Ben Harvey. We plan to perform a variety of modelling experiments and to interpret and understand our results by drawing on ideas from the talks and posters, and of course, plenty of discussions amongst the participants.

Example plots from a preliminary model run, for which thanks to Marcus Koehler. Left: 10m winds at T+42, 18UTC on 26 September. Karl is to the south-east of Greenland. Right: precipitation at the same time.

Numbers are limited for the hands-on computing part of the workshop, but if you are around in Reading and would like to come along to some interesting talks then feel free to join us in GU01 any morning from Tuesday to Friday. Or if you would like to talk about storms or modelling with 50-odd researchers also interested in such things, then again feel free – we’ll be in 1L61 for Tuesday to Friday morning coffee and over the lunch break. Our programme can be seen here.

I mustn’t forget to give credit where it is due. Under the small assumption that all is going to go wonderfully well, that will have been due to Glenn Carver, Gabi Szepszo and Marcus Koehler from ECMWF, and from the Reading side to Sue Gray and myself, Kathryn Boyd, Maria Broadbridge, Ben Harvey and Jake Bland. And finally thanks to our sponsors: we are funded by bringing together contributions from EGU, ESiWACE, the university environment theme, the department visitor fund and ECMWF.


Climate Action by Reducing Digital Waste

By: John Methven

Climate action has never been higher on the global agenda.

There is a pressing need to change our activities and habits, both at work and home, to steer towards a more sustainable future. National governments, public sector organizations and businesses are setting targets to achieve net zero carbon emissions by 2030. Immediately the term “carbon emissions” focuses attention on activities burning fossil fuel: driving a car, taking a train or catching a flight. However, when our Department first attempted its own carbon budget analysis in 2007, including the contribution of our activities to power consumption and carbon emissions far from Reading, we found that about 63% was attributable to computer usage compared with 24% business travel, 8% gas (heating) and 5% building electricity. Commuting to work was not included, although we are fortunate in that the majority of staff and students walk or cycle to work. The computing carbon cost was not even dominated by local power consumption by our computers (18%), or air-con in server rooms on site (7%), although the indirect carbon costs of the manufacture and ultimate waste disposal of those computers were not accounted for. The overwhelming contribution was from the extensive use of remote computing facilities (38%) – namely the supercomputers used to calculate weather forecasts, climate projections and to extend human knowledge in atmospheric and oceanic science.

What a conundrum! While improved weather forecasts save lives worldwide, through disaster risk mitigation, and also improve business efficiency, the daily creation of the forecasts is contributing to climate change, which is increasing environmental risk. Back in 2008, many of the top 100 most powerful supercomputers were used for science, among them those of the leading global weather forecasting centres and of international facilities enabling global climate modelling. Only 10 years on, the global cloud computing industry dwarfs the scientific supercomputing activity; even so, the global climate community takes supercomputing energy demands seriously. For example, scientists plan (Balaji, Geos. Model Devel., 2017) to measure the energy consumed during the next generation of simulations of future climate (CMIP6) that will contribute to the United Nations IPCC Sixth Assessment Report. As part of that effort they have developed new tools to share experiment design and simulations so that future computer usage can be minimized (Pascoe, Geos. Model Devel. Disc., 2019). Many supercomputing facilities now have a renewable energy supply and there is even a Green500 list ranking supercomputers by energy efficiency.

However, the revolutionary surge in digital storage has been outside the science sector: Gigabit magazine lists the top 10 cloud server centres in 2018 by capacity. The electricity consumption of the largest data centres worldwide is cited in the range 150-650 MW. To put this in context, a single data centre can consume electricity equivalent to nearly 2% of the entire UK electricity demand (34,000 MW)! Although some cloud server centres source electricity from renewables, such as dedicated hydro-electric plants, many do not, and the total carbon footprint of cloud servers is huge. For example, Jones (Nature, 2018) states, “data centres use an estimated 200 terawatt hours (TWh) each year. That is more than the national energy consumption of some countries, including Iran, but half of the electricity used for transport worldwide, and just 1% of global electricity demand.” Some estimate that the carbon footprint from ICT (including computing, mobile phones and network infrastructure) already exceeds that of aviation. Although both sectors are expanding rapidly, cloud storage is expanding much faster, with projections that over 20% of global electricity consumption will be attributable to computing by 2030. Much of the electricity is used to cool the computers as well as power the hardware, and waste heat and water consumption are significant environmental issues. Although renewable power generation reduces the environmental impact, it is worth pausing for thought – what is all this data that is being stored?
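The arithmetic behind the "nearly 2%" comparison quoted above is simple, but worth making explicit:

```python
# Sanity check on the figures above: a single large data centre at the
# top of the cited 150-650 MW range, relative to a total UK electricity
# demand of roughly 34,000 MW.
data_centre_mw = 650
uk_demand_mw = 34_000

share = data_centre_mw / uk_demand_mw
print(f"{share:.1%}")  # prints "1.9%" - i.e. nearly 2% of UK demand
```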

Personal cloud storage is dominated by digital photos. Imagine you have been out with friends, your phone has uploaded the images as soon as it can sync to the cloud. No action required from you, but should you think twice? You have contributed to carbon emissions, and worse still the contribution will keep growing as long as you keep the data. How many of those photos will you look at again? Perhaps at least choose the best photos to keep and delete the rest?

In a work context, the storage for most businesses is dominated by email folders. Globally, 85% of email data volume is spam, and 85% of that makes it into the inbox. Few people have time to go through their folders to delete unwanted messages, and the volume mounts up. Emails arrive continuously, many with attachments, containing unsolicited images and hidden data on fonts (the content could have been relayed in plain text). They pile up relentlessly into a teetering heap of digital waste – requiring power to keep it alive – like a Doctor Who monster waiting just in case its master wants to visit tomorrow (artist’s impression?). Is the neglected monster sad? Perhaps a topic for AI fans.

What can we do? What can you do? An effective contribution to climate action now would be to clear out your waste (somewhere out there on spinning disk), junk those emails and rubbish photos and feel good about it. Sorting tens of thousands of items into “keeps” and junk is a daunting task. Moving forward, wouldn’t it be good if all senders put a “use by date” on their emails and the recipient’s mail tool automatically deleted the message when expiry date was reached? Then we would know that the messages we send, even if unloved, at least do not contribute long-term to global digital waste.
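As a purely hypothetical illustration – there is no standard "use by" email header, and the `X-Use-By` name below is invented for the sketch – such an expiry scheme might look something like this:

```python
from datetime import datetime, timezone
from email.message import EmailMessage

# Hypothetical sketch: if senders added an invented "X-Use-By" header
# holding an ISO 8601 date, a mail tool could auto-delete expired
# messages instead of keeping them forever.

def is_expired(msg: EmailMessage, now: datetime) -> bool:
    """Return True if the message carries an X-Use-By date in the past."""
    use_by = msg["X-Use-By"]
    if use_by is None:
        return False  # no expiry set: keep the message
    return datetime.fromisoformat(use_by) < now

msg = EmailMessage()
msg["Subject"] = "Lunchtime seminar today"
msg["X-Use-By"] = "2019-06-17T14:00:00+00:00"

now = datetime(2019, 6, 18, tzinfo=timezone.utc)
print(is_expired(msg, now))  # prints "True": the seminar is over
```

The hard part, of course, is not the code but persuading every sender to declare when their message stops mattering.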

All images have been spared in the creation of this article.



Teaching in China and some Good and Bad Teaching Practices

By: Hilary Weller

In April 2019 I visited the Nanjing University of Information Science and Technology (NUIST), where students are studying for a degree in Meteorology run jointly between Reading and NUIST. Staff from Reading visit a couple of times a year to observe the lectures taught by the NUIST staff, teach the students and make new research links. Students study years 1 and 2 in Nanjing and then come to Reading for their third year. This is an interesting teaching challenge because Reading staff each teach an essentially arbitrary two weeks from three undergraduate modules.

I was sent PowerPoint files of seminar-style slides – some bullet points, nice pictures and topics for discussion. I can imagine that these could make a terrific lecture if delivered by a charismatic visionary in the field whom people would flock to hear, using the slides as illustrative prompts. But this is not me and these were not my slides. So, I needed to plan my teaching more carefully. I was asked to teach about measurements and instrumentation and tropical meteorology – a particular challenge, as my area of expertise is numerical modelling of the atmosphere. I spent plenty of time learning about these subjects and planning my teaching.

I enjoyed learning about measurements of atmospheric radiation from Giles Harrison’s book Meteorological Measurements and Instrumentation – so much so that I have started making some YouTube teaching videos and some online quizzes.

I observed loads of lectures while visiting NUIST, I have observed lectures in Reading, I have attended lectures as a student and I have delivered good and bad lectures myself. Based on this I will describe some difficult teaching situations and how they can be turned around, with or without preparation.

An Example: A Derivation

A lecturer (you?) plans to go through a derivation with students. In Meteorology it might be, for example, deriving thermal wind balance. You would like them to be able to provide a clear, thorough derivation, explaining each step in full sentences. You have prepared some slides which outline the derivation but do not include complete sentences, because you do not want to clutter your slides with words. You will say the linking sentences instead. This is a problem. You cannot expect the students to be able to write a good derivation if you haven’t given them a complete example. So, you might write it out in full for them and give them a copy of the lecture notes before class. But then they have nothing to do other than try to listen during your class. This breaks my first rule:

Make sure that the students have something to do during your class.

To give the students something to do, you go through the derivation on the board and ask them to volunteer what they think the next step might be. This is a natural way of explaining a derivation. However, if you do this with a class, one or two students may give you the answers you want, while the rest get lost without asking questions. After the same person has answered a few questions, you direct your next question to a student who has so far remained quiet – who cannot answer and is now humiliated. My next rule:

Do not single students out to answer questions.

These two rules seem to be mutually exclusive. I do not know the best way to teach while following these two rules, but I have some suggestions which will also work for large classes.

  1. Notes with gaps.

You could supply the students with printed notes with gaps and the students fill in the gaps during the lecture. They may copy the text for the gaps from the board or work it out for themselves. This way, the students take away a well written derivation, with all of the linking sentences between equations, and they have something to do and think about during the class. After you have gone through the derivation you could give them a couple of similar derivations to work through in pairs, asking for help if needed.

  2. Flipped classroom.

This teaching style can work very well but can also take a lot of preparation – you need to prepare material for the students to work through before and during class. The pre-class activity might be to read a section of a book or watch some SHORT videos. But you need to be careful not to overload the students. The activity before class should be straightforward and should not take longer than the homework would have taken (which you must cancel). During class you can help them with more challenging material (assuming they have had time to go through the material before class). The tasks during class might be similar derivations, or using the derived equation to explain some observed phenomena.

  3. Multiple choice questions.

I am keen on these as a quick way of engaging the whole class, and they do not need to take a lot of preparation. When you come to a point where you would like to ask the class a question, you can instead write 3 or 4 possible answers on the board. They don’t all need to be plausible; you are just trying to encourage engagement. Then ask the students to show 1, 2, 3 or 4 fingers in front of their chest. That way you can see all the answers, the students cannot easily see each other’s answers (to avoid copying or humiliation), and every student is required to try to think of an answer. You may need to encourage them to guess and reassure them that their answer doesn’t count for anything.

Another trap that people sometimes fall into:

If a student answers a question wrong, do not ask them to justify their answer. Ask someone else or explain it yourself.

  4. More challenging questions.

If you want to ask more challenging questions you will need to give the students more time to think about their answer, perhaps reread their notes or discuss with their neighbour. You should find out about “think-pair-share” or peer instruction. You can also use online quizzes which are popular with students but more time consuming to set up. Another rule:

Do not ask difficult or open-ended questions without giving the students time to think about, research or discuss an answer.

Also, make sure that your questions make sense and have well-defined answers. Check with a colleague to make sure that they are clear.

  5. Old fashioned teaching.

It may seem old fashioned, but when I was teaching in China I asked the students to read a sentence in turn from the slides, fill in some simple gaps and copy text from the board. In the feedback, some of the students said they liked this approach, as it gave them an opportunity to practise speaking English and answer simple questions.

I would welcome more ideas for engaging all students while not humiliating anyone. Please leave a comment.


What sets the pattern of dynamic sea level change in the Southern Ocean?

By: Matthew Couldrey

Figure 1a: Multi-model mean projection of dynamic and steric (i.e. due to thermal and/or haline expansion/contraction) sea level rise averaged over 2081-2100 relative to 1986-2005, forced with a moderate emissions scenario (RCP4.5), including 0.18 m +/- 0.05 m of global mean steric sea level change. b: Root-mean-square spread (deviation) of projections from the 21-model ensemble. (From Church et al. 2013, their Figure 13.16)

Greenhouse-gas-forced climate change is expected to cause the global mean sea level to rise over the coming century, which will affect millions of people (Brown et al. 2018) and cost trillions of US dollars (Jevrejeva et al. 2018). However, local factors are important in determining how much sea level change any particular place will experience, and these regional effects can double or entirely counteract the global mean change (Figure 1a). Furthermore, regional patterns of sea level change are challenging to predict, and climate models differ in their projections of this spatial pattern (Figure 1b). My research as part of the FAFMIP project (Flux Anomaly Forced Model Intercomparison Project) aims to better understand why models disagree on the distribution of future sea level change.

Dynamic sea level (ζ) is the local sea surface height (above a geopotential surface) deviation from its global mean. Dynamic sea level is zero when averaged over the whole ocean surface, and its change over time (Δζ) shows the local change relative to the global mean. Therefore, positive values of Δζ indicate locations where sea level rise is larger than the global mean. Note that negative values of Δζ can correspond to locations of sea level rise (where the local change is smaller than the global mean, but still a rise) as well as sea level fall.
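As a minimal illustration with invented numbers, the calculation of Δζ on a coarse latitude "grid" might be sketched as follows; the only essential ingredients are an area weighting (proportional to the cosine of latitude) and the removal of the global mean:

```python
import math

# Illustrative sketch (invented numbers, four-cell "grid"): dynamic sea
# level change is the local sea surface height change minus the
# area-weighted global mean, so it averages to zero over the ocean.

lats = [-60.0, -20.0, 20.0, 60.0]      # cell-centre latitudes (degrees)
ssh_change = [0.05, 0.25, 0.30, 0.10]  # local SSH change (m), illustrative

# On a latitude grid, cell area is proportional to cos(latitude)
weights = [math.cos(math.radians(lat)) for lat in lats]
global_mean = sum(w * h for w, h in zip(weights, ssh_change)) / sum(weights)

# Delta-zeta: local change relative to the global mean
dzeta = [h - global_mean for h in ssh_change]

# Positive where rise exceeds the global mean, negative where it falls
# short of it, and zero in the area-weighted mean
assert abs(sum(w * d for w, d in zip(weights, dzeta))) < 1e-12
```

Note that every cell in this toy example has rising sea level, yet two of the Δζ values come out negative: exactly the point made above about negative Δζ.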

The hotspots in Figure 1b show locations where models from the previous generation of coupled climate (CMIP5) models disagree on the spatial pattern of sea level rise. The Southern Ocean is one of the regions where the pattern is uncertain, owing to a mixture of inter-model spread in 1) the ocean response to wind forcing, 2) changes in circulation, and 3) the redistribution of heat and freshwater. In an attempt to disentangle these causal processes, my research makes use of simulations where the oceans of several different models are forced with exactly the same changes in air-sea fluxes of heat, momentum (wind) and freshwater.

Figure 2: Thermal and haline contributions to dynamic sea level change across five Atmosphere-Ocean models, rows correspond to different models (named in left hand legends). Left panels: Zonally integrated change in ocean heat content per degree of latitude. Right panels: Zonal mean dynamic sea level change (Δζ, solid lines), and contributions from thermal expansion alone (dotted lines) and thermal plus haline effects (dashed lines).

The Southern Ocean dynamic sea level response is characterised by a strong north-south gradient, with relatively little change near the Antarctic continent and a northward-increasing rise (Figure 2, solid lines of right panels). This change arises partly because more heat is added to the lower latitudes of the Southern Ocean, peaking around 40 ˚S: note the ‘hump’ in the zonal ocean heat content change (left panels of Figure 2). However, the zonal dynamic sea level change (Δζ) shows a gradient and then a plateau (Figure 2, solid lines of right panels), rather than a ‘hump’ like the zonal heat content change. There are two reasons for this. First, the tendency of seawater to expand or contract changes markedly as you move from 70 ˚S to 45 ˚S, so the same heat input causes more dynamic sea level change at lower latitudes (where seawater is warmer) than at higher latitudes (where the temperature is lower). This ‘thermosteric’, or thermal expansion, effect alone (Figure 2, dotted lines of right panels) would act to emphasise the ‘hump’ in sea level change suggested by the heat content change. Second, the ‘haline contraction’ effect works against the thermal effect and flattens the hump into the gradient-plateau feature that we observe (Figure 2, dashed lines of right panels closely match the solid lines).
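The thermosteric part of this argument can be shown with a back-of-envelope sketch. The numbers below are invented but representative (the thermal expansion coefficient of seawater is several times larger for warm mid-latitude water than for cold polar water); the point is only that the same warming of the same column raises sea level more where the water is already warm:

```python
# Linearised steric estimate: d_eta ~ alpha * dT * H, where alpha is the
# thermal expansion coefficient, dT the warming and H the layer depth.
# Alpha values below are representative order-of-magnitude choices, not
# taken from the simulations discussed in the text.

def steric_rise(alpha_per_k: float, warming_k: float, depth_m: float) -> float:
    """Steric sea level rise (m) from uniform warming of a layer."""
    return alpha_per_k * warming_k * depth_m

warming_k = 0.5      # same heat-driven warming applied to both columns
depth_m = 1000.0     # depth of the warmed layer

cold_alpha = 0.6e-4  # cold high-latitude water (representative value)
warm_alpha = 2.0e-4  # warmer mid-latitude water (representative value)

rise_high_lat = steric_rise(cold_alpha, warming_k, depth_m)  # ~0.03 m
rise_low_lat = steric_rise(warm_alpha, warming_k, depth_m)   # ~0.10 m
print(rise_low_lat / rise_high_lat)  # warm column rises roughly 3x more
```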

This work highlights that while ocean heat uptake sets the broad patterns of sea level change in the Southern Ocean, it’s the salinity changes that set the details. Furthermore, all the different models shown in Figure 2 were forced with the same pattern and magnitude of air-sea heat flux change. This means that the diversity in patterns of dynamic sea level change across different models largely arises due to differing ocean responses to climate change, rather than each model’s climate sensitivity (i.e. how much a particular model warms per unit of greenhouse gas emitted).


Brown, S., R. J. Nicholls, P. Goodwin, I. D. Haigh, D. Lincke, A. T. Vafeidis, & J. Hinkel, 2018, Quantifying Land and People Exposed to Sea-Level Rise with no Mitigation and 1.5 °C and 2.0 °C Rise in Global Temperatures to Year 2300, Earth’s Future, 6, 583-600, DOI 10.1002/2017EF000738

Church, J. A. , Clark, P. U., Cazenave, A., Gregory, J. M., Jevrejeva, S., Levermann, A., Merrifield, M. A., Milne,  G. A., Nerem, R. S., Nunn, P. D., Payne, A. J., Pfeffer, W. T., Stammer, D., and Unnikrishnan, A. S.: Sea Level Change, in: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, 2013. DOI 10.1017/CBO9781107415324.026

Jevrejeva, S., L. P. Jackson, A. Grinsted, D. Lincke, and B. Marzeion, 2018: Flood damage costs under the sea level rise with warming of 1.5 °C and 2 °C, Environ. Res. Lett., 13, DOI 10.1088/1748-9326/aacc76



What do we do with weather forecasts?

By: Peter Clark

As I sat in the Kia Oval in Kennington having taken a day off to watch the first One Day International between England and Pakistan, I had plenty of time to appreciate the accuracy and utility of weather forecasts. The afternoon proved to be a microcosm of both the successes of modern weather forecasting and issues surrounding the use of forecasts in more serious applications (though I may well join in with the cries of “there’s nothing more serious than Cricket!”).

First question: to go to the match or not? When we bought the tickets 6 months ahead, we just had climatology to go on. Early May is a risk, but not very different from later in the season. By the time forecasts become available the question is then “is it worth turning up?” By the Friday five days before, there was a very strong consensus amongst computer forecasts that a cyclone would be tracking across England on the day, most likely during the first half of the day. In fact, the Met Office’s ‘deterministic’ forecast proved very accurate, with the continuous heavy rain passing through London by midday. However, behind the surface front close to the cyclone centre, cold air aloft was overrunning warmer air at the surface, which was given an additional boost as it came from the Atlantic and passed over land.  Warm (and moist) air beneath colder air leads to the likelihood of dreaded convective showers in the afternoon!

There have been real ‘revolutions’ in forecasting over the last few decades. At the centre lies the combination of vast improvements to computer power, more accurate computer models, vast increases in observations to ‘correct’ the data in the models, and development of much more powerful methods to use (or ‘assimilate’) those observations. An extratropical cyclone, or ‘low-pressure system’, is relatively large and long-lived. In this case, the system was at the small end of the scale and quite intense, roughly the size of England – say 500 km across with a life cycle of at least a day. Thirty years ago, our computer models had to represent these systems with a grid of points not much better than 100 km apart (see the Met Office’s history of NWP, for example). Today our forecast models have little problem actually representing a cyclone. In practice, they are often predicted in forecast models even before there’s any clear sign of them in observations. While there will still be uncertainty in track and intensity, on the whole they are astonishingly well forecast several days ahead.

Here lies the problem. Showers are much smaller, say 10 km across with the core less than 1 km, and have a lifetime of an hour or so. These cannot even be directly represented in our global models. The most recent ‘revolution’ in forecasting has been the development of so-called ‘convection-permitting’ models (Clark et al. 2016). Regional models (with a grid spacing around 2 km) at last can represent showers, but not well. Something resembling showers can form and give us some very useful guidance on the probability that we’ll be affected by a shower. Such models are now helping produce more accurate flood forecasts, especially for smaller, faster reacting catchments (Dance et al. 2019). Within the ParaCon project we are working hard to find ways to improve the models.

Figure: Radar estimates of the surface rainfall rate at 17:00, 18:00 and 19:00 BST with inset showing the hail storm that hit the Kia Oval at 17:00 BST. (Courtesy of the Met Office). Showers are triggered along a ‘peninsular convergence’ line extending from Cornwall all the way to London that is present for several hours. Clearly, much depended on whether one was beneath or to one side of this.

The message was the same in the morning before the game. As the rain from the cyclone cleared, there was a high probability of seeing one or two showers or even thunderstorms during the afternoon – which is precisely what happened. We had a couple of flurries of not very intense rain, which did little to interrupt play, plus two hail storms with pea-sized hail fairly typical of a British summer shower. Each lasted about 5 minutes. The inset in the figure shows the hail storm that hit the Oval around 17:00 BST. A mere speck on the scale of England, but locally extremely intense. A perfect forecast! However, a computer model run even a couple of hours before could not predict the precise shower hitting our precise location.

What more could we do? I spent the afternoon trying to look at the Met Office’s weather radar composites on my phone. A new rainfall picture is produced every 5 minutes. On the intermittent occasions when I could access data, the showers were very clearly tracked; interestingly they were forming along a broad ‘peninsular convergence’ line that could be tracked back to Land’s End. Along this line, air coming from either side of the south west peninsula meets and so is forced upwards, triggering showers (Golding et al, 2005). This is shown in the three radar images in the figure. Each is an hour apart, but this convergence line is very persistent. These lines were the topic of the COPE field campaign in 2013 (Leon, et al. 2016). This organisation by topography radically changed the overall predictability of the showers. The sharp-eyed reader might also notice an arc of showers moving east from central England into East Anglia, and it is probably no coincidence that the heaviest storm happened where this met the convergence line. Nevertheless, as we sat on the edge of this line, the best we could hope for several hours ahead was a realistic assessment of the probability of having a shower.

This example illustrates very well that the weather forecast is not the only piece in the jigsaw. First, and foremost, there is the investment in resilience; the Oval ground is very well prepared and drained, but there is a limit to what it can cope with. Similarly, investment in flood defences is often controversial, and the Environment Agency have recently announced that climate change is forcing a ‘new approach to flood and coastal resilience’ that may mean not investing in flood defences in some regions.

Second, there is preparedness. The available forecasts had prepared us well for the likelihood of showers. We equipped ourselves as well as we could. I kept a ‘weather eye’ on the radar, at least as far as technology allowed me. I could see the hail storms coming. In this case, the covers were deployed fast enough to protect the pitch and run-ups. Use of forecasts could enable the deployment of defences that take longer to set up but ultimately save playing time. Currently, forecasts are used by the authorities to help emergency services prepare for likely (but rarely certain) flooding. How best to educate and prepare users including the public to respond to forecasts is one of the leading questions driving research, for example the World Meteorological Organisation’s ‘HIWeather Project’, which recognises the key importance of “better understanding by social scientists of the challenges to achieving effective use of forecasts and warnings” (HIWeather Impact plan). A key part of this is understanding the inevitability of false alarms. We have to be prepared to see play stopped because a forecast (in this case with a very short lead time) says there is a probability of a heavy shower. The price for not being pre-emptive may be the abandonment of the match – which is exactly what happened, two and a half hours after the rain and hail stopped.
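The decision of whether to act on an uncertain forecast is the classic cost-loss problem from forecast-value theory (not described in the post itself; the function and numbers below are purely illustrative):

```python
def should_protect(p_event, cost, loss):
    """Standard cost-loss decision rule: take protective action when the
    forecast probability of the damaging event exceeds the cost/loss ratio.

    p_event : forecast probability of the event (0-1)
    cost    : cost of acting, paid whether or not the event occurs
    loss    : loss incurred if the event occurs and no action was taken
    """
    return p_event > cost / loss

# Illustrative numbers: if deploying the covers costs 1 unit of playing
# time but an unprotected pitch costs 20 units, covering is worthwhile
# whenever the shower probability exceeds 1/20 = 5%.
print(should_protect(0.10, cost=1, loss=20))  # True
print(should_protect(0.02, cost=1, loss=20))  # False
```

The same arithmetic explains why false alarms are inevitable: for a high-loss event, the rational threshold for acting sits far below a 50% probability.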

The modern challenge of forecasting is not just to improve the forecast (which may be an exercise in diminishing returns) but also to find ways to make sure that systems are in place to make full use of them and users are well-prepared to take action and understand the actions of others.


Golding, B.W., Clark, P.A. and May, B., 2005: The Boscastle Flood: Meteorological Analysis of the Conditions Leading to Flooding on 16 August 2004. Weather, 60, 230-235.

Clark, P., Roberts, N., Lean, H., Ballard, S. P. and Charlton-Perez, C., 2016: Convection-permitting models: a step-change in rainfall forecasting. Meteorological Applications, 23 (2). 165-181. ISSN 1469-8080 doi:

Dance, S. L., Ballard, S. P., Bannister, R. N., Clark, P., Cloke, H. L., Darlington, T., Flack, D. L. A., Gray, S. L., Hawkness-Smith, L., Husnoo, N., Illingworth, A. J., Kelly, G. A., Lean, H. W., Li, D., Nichols, N. K., Nicol, J. C., Oxley, A., Plant, R. S., Roberts, N. M., Roulstone, I., Simonin, D., Thompson, R. J. and Waller, J. A., 2019: Improvements in forecasting intense rainfall: results from the FRANC (forecasting rainfall exploiting new data assimilation techniques and novel observations of convection) project. Atmosphere, 10 (3). 125. ISSN 2073-4433 doi:

Leon, D. C., French, J. R., Lasher-Trapp, S., Blyth, A. M., Abel, S. J., Ballard, S., Barrett, A., Bennett, L. J., Bower, K., Brooks, B., Brown, P., Charlton-Perez, C., Choularton, T., Clark, P., Collier, C., Crosier, J., Cui, Z., Dey, S., Dufton, D., Eagle, C., Flynn, M. J., Gallagher, M., Halliwell, C., Hanley, K., Hawkness-Smith, L., Huang, Y., Kelly, G., Kitchen, M., Korolev, A., Lean, H., Liu, Z., Marsham, J., Moser, D., Nicol, J., Norton, E. G., Plummer, D., Price, J., Ricketts, H., Roberts, N., Rosenberg, P. D., Simonin, D., Taylor, J. W., Warren, R., Williams, P. I. and Young, G., 2016: The COnvective Precipitation Experiment (COPE): investigating the origins of heavy precipitation in the southwestern UK. Bulletin of the American Meteorological Society, 97 (6). 1003-1020. ISSN 1520-0477 doi:

Posted in Climate, Predictability, Weather, Weather forecasting

Rescuing the Weather

By: Ed Hawkins

Over the past 12 months, thousands of volunteer ‘citizen scientists’ have been helping climate scientists rescue millions of lost weather observations. Why?

Figure 1: Data from Leighton Park School in Reading from February 1903.

If we are to inform decisions about adapting to a changing climate we need to better understand the risk from extreme weather events, and whether this risk is changing. This requires long and detailed records of the weather. In the UK we are fortunate that meteorologists have recorded the weather across the country for over 150 years. However, most of their observations are still only available as the original paper copies, stored in large archives (Figure 1).

Currently, the only way to transform these observations into useful data is to manually transcribe them from paper to computer. This is an enormous task and would be much easier if it were performed by thousands of people, rather than just a single PhD student.

The website has been set up to enable anyone to help. The first phase of the project recovered 1.5 million observations that were taken on the summit of Ben Nevis and in the nearby town of Fort William between 1883 and 1904. The volunteers then transcribed 1.8 million observations from more than 50 locations across Europe taken between 1900 and 1910. They are now digitising observations taken in the 1860s and 1870s.

So, what can we do with all this data?

Figure 2: Map of pressure observations in the ISPD database for 27th February 1903, including from ships (yellow), with newly rescued data (black) and locations where we have images of the observation logbooks, but the data has not yet been digitised (red).

As a case study, consider the very intense storm of 26th-27th February 1903, which hit Ireland and northern England, uprooting thousands of trees and causing significant structural damage and several fatalities. Hundreds of pressure observations taken across the UK during this storm are not in our digital climate databases. Figure 2 shows the existing data (yellow), newly rescued data (black) and potential data still waiting to be rescued (red) for the period of the intense storm.

Figure 3: The 26th-27th February 1903 storm in the 20th Century Reanalysis (left) and an estimate of how it would look with the new observations (right). The black contours are isobars, and the green shading shows confidence in their position.

The new data allows us to better reconstruct the path and intensity of the storm. Figure 3 shows how the storm appears in the new 20th Century Reanalysis (left) – it is too weak to cause the damage that we know occurred, and the image appears fuzzy because there is much uncertainty about the storm’s location. The right-hand panel shows how the storm should appear with the newly rescued observations (black dots in figure above) – more intense and more certain, with strong winds over eastern Ireland and northern England where the damage occurred. The minimum central pressure is now simulated to be around 955 mb.

Severe windstorms are relatively rare but cause significant damage. We need to learn as much about them as possible which means delving back into the past. Thousands of volunteers are helping us determine how the weather changed hour-by-hour over a century ago and to learn about such extreme events. Anyone can help at


Posted in Climate, data assimilation, Data processing, Historical climatology, Outreach, Weather

Mapping bio-UV products from space

By: Michael Taylor

Solar radiation arriving at the Earth’s surface in the UV part of the spectrum modulates photosynthetically-sensitive life on the land and in the oceans. UV radiation also drives important chemical reaction pathways in the atmosphere that impact air quality. It can cause DNA-damage in the epithelial cells of our skin and is a key factor for tuning the rate of Vitamin D production in our metabolism.

Solar UV radiation may be measured in radiometric units or spectrally weighted to account for biologically effective UV radiation doses. The Commission Internationale de l’Éclairage (CIE) defines the reference action spectrum for the ability of UV radiation, as a function of wavelength, to produce just perceptible erythema (reddening, from the Greek word “ερυθρός” for red) in human skin. The standard erythemal dose (SED) is equivalent to an erythemal radiant exposure of 100 J m-2 (ISO 17166:1999). According to the Bunsen-Roscoe law of reciprocity (Bunsen & Roscoe, 1859), a given biological effect due to UV radiant exposure is directly proportional to the total energy dose, given by the product of irradiance (W m-2) and exposure time (s).
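The reciprocity law and the SED conversion amount to two one-line calculations (a minimal sketch; the function names are mine, not from any standard library):

```python
def radiant_exposure_J_m2(irradiance_W_m2, exposure_s):
    """Bunsen-Roscoe reciprocity: the biologically effective dose is the
    product of (erythemally weighted) irradiance and exposure time."""
    return irradiance_W_m2 * exposure_s

def to_standard_erythemal_doses(dose_J_m2):
    """1 SED is defined as 100 J m^-2 of erythemal radiant exposure."""
    return dose_J_m2 / 100.0

# e.g. an erythemal irradiance of 0.1 W m^-2 sustained for 30 minutes:
dose = radiant_exposure_J_m2(0.1, 30 * 60)   # 180 J m^-2
print(to_standard_erythemal_doses(dose))     # ≈ 1.8 SED
```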

Figure 1: TEMIS erythemal UV dose products (kJ m-2) from KNMI/ESA (Van Geffen et al, 2017): daily “Clear sky” UV from SCIAMACHY/GOME-2, daily “cloud-modified” UV from SCIAMACHY/GOME-2 revealing the impact of a weather system over Sicily, and the global climatological “clear sky” June mean from GOME showing the impact of desert dust as revealed (lower right) by the global climatology of aerosol mixtures (Taylor et al, 2015).

Satellites like GOME, GOME-2 and SCIAMACHY have operational processing algorithms that retrieve erythemal UV dose (kJ m-2) once daily from top of the atmosphere irradiance measurements which are strongly affected by both cloud and atmospheric aerosol (Fig. 1).

Recent studies performed in the context of solar energy (Kosmopoulos et al., 2017; 2018) have revealed that atmospheric aerosol, and desert dust in particular, strongly attenuates solar radiation and the UV component arriving at the ground. This matters for efforts to increase global renewable-energy capacity, in which solar power is a major component. Since atmospheric aerosols reduce solar radiation by absorbing and scattering light, and weaken the direct beam from which solar power generation is most efficient, they also introduce forecast uncertainty. As a result, grid operators must balance supply and demand while coping with these unexpected fluctuations in solar generation.

Figure 2: (a) Window functions used by KNMI/ESA to derive Bio-UV dose products from UV spectral irradiances and (b) the back-propagation neural network used to perform ground-based validation. See Zempila et al (2017) for details.

Nevertheless, it is straightforward to obtain biological ultraviolet (“Bio-UV”) products from the UV spectra retrieved at the ground. By applying weighting functions to the ultraviolet part of the irradiance spectra over the range 285-400 nm, important Bio-UV products like Vitamin D dose, DNA-damage dose and photosynthetically active radiation can be calculated. Satellite Bio-UV products from TEMIS (KNMI/ESA) have been successfully validated with high temporal resolution (1-minute) ground-based measurements by Zempila et al. (2017). Fig. 2 shows how the weighting functions vary with wavelength together with the neural network model developed to convert combinations of UV irradiances (I) and solar zenith angle (sza) to Bio-UV products for the ground-based validation.
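The weighting step itself is simple to sketch. The snippet below uses the published CIE erythemal action spectrum as the weighting function; the TEMIS products use the specific window functions of Fig. 2, so treat this as an illustrative stand-in rather than the operational algorithm:

```python
import numpy as np

def cie_erythemal_weight(wl_nm):
    """CIE reference erythemal action spectrum (ISO 17166): weight 1
    below 298 nm, then two log-linear roll-offs out to 400 nm."""
    wl = np.asarray(wl_nm, dtype=float)
    return np.where(wl <= 298.0, 1.0,
           np.where(wl <= 328.0, 10.0 ** (0.094 * (298.0 - wl)),
                                 10.0 ** (0.015 * (140.0 - wl))))

def erythemal_irradiance(wl_nm, spectral_irr):
    """Trapezoidal integration of the weighted spectrum over wavelength:
    returns erythemally effective irradiance in W m^-2 when the input
    spectrum is in W m^-2 nm^-1 on the 285-400 nm grid."""
    wl = np.asarray(wl_nm, dtype=float)
    f = cie_erythemal_weight(wl) * np.asarray(spectral_irr, dtype=float)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wl)))
```

Swapping a Vitamin D or DNA-damage action spectrum in place of `cie_erythemal_weight` yields the other Bio-UV products in exactly the same way.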

Figure 3: Zoom sequence showing how the surface solar radiance spectra (280-2500 nm) retrieved under cloudy conditions from space with fast neural network radiative transfer solvers (Taylor et al., 2016) can be used to extract erythemal UV spectral irradiances for calculation of Bio-UV products as per Zempila et al., (2017).

While polar orbiting satellites like GOME, GOME-2, SCIAMACHY and OMI allow global maps of Bio-UV products to be generated, geostationary satellites like Meteosat Second Generation (MSG) provide high spatial resolution images of the Earth disc (3 km x 3 km) every few minutes and allow us to dramatically increase the frequency of the data. In support of this, an operational algorithm capable of retrieving the UV part of the solar spectrum at the surface was recently developed (Taylor et al., 2016). This was achieved with a synergistic model that uses both machine learning with neural networks and a look-up table of radiative transfer simulations to help unravel the complexity of the atmosphere. The model includes the effects of clouds, aerosols, ozone, elevation and surface albedo (Taylor et al; 2016; Kosmopoulos et al., 2017) and provides the surface global horizontal irradiance (GHI), direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) spectrum over the broad wavelength range 285-2700 nm. Application of the weighting functions of Fig. 2 to the UV part of the solar radiation spectrum can then provide global maps of Bio-UV products at high frequency. Fig. 3 illustrates how various UV products can be obtained from surface solar radiation spectra retrieved from space.
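The three irradiance components mentioned above are linked by a standard geometric closure relation, which also serves as a sanity check on any retrieval (a generic textbook identity, not specific to the Taylor et al. model):

```python
import math

def global_horizontal_irradiance(dni, dhi, sza_deg):
    """Closure relation: GHI = DHI + DNI * cos(solar zenith angle).
    Irradiances in W m^-2, solar zenith angle in degrees."""
    return dhi + dni * math.cos(math.radians(sza_deg))

# e.g. DNI = 800 and DHI = 100 W m^-2 at a 60-degree solar zenith angle:
print(global_horizontal_irradiance(800.0, 100.0, 60.0))  # ≈ 500 W m^-2
```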

One of the most exciting applications of being able to map UV spectral information from space is the potential for creating mobile applications that pull surface UV spectral data from cloud computing resources and combine them with users’ GPS information to produce real-time UV alerts for the general public with unprecedented precision. By improving our capacity to map UV impacts on the quality of life in the global ecosystem from space at high frequency, we will be better placed to monitor progress towards the UN’s sustainable development goals as we move to a more climate-resilient society.


Bunsen, R., Roscoe, H. E., 1859: Photochemische untersuchungen. Annalen der Physik  184(10), 193-273, DOI: 10.1002/andp.18591841002

Kosmopoulos, P., S. Kazadzis, H. El-Askary, M. Taylor, A. Gkikas, E. Proestakis, C. Kontoes, M. M. El-Khayat, 2018: Earth-Observation-Based Estimation and Forecasting of Particulate Matter Impact on Solar Energy in Egypt. Remote Sens. 10(12), 1870, DOI:

Kosmopoulos, P. G., S. Kazadzis, M. Taylor, E. Athanasopoulou, O. Speyer, P. I. Raptis, E. Marinou, E. Proestakis, S. Solomos, E. Gerasopoulos, V. Amiridis, 2017: Dust impact on surface solar irradiance assessed with model simulations, satellite observations and ground-based measurements. Atmos. Meas. Tech., 10(7), 2435-2453, DOI: 10.5194/amt-10-2435-2017

Taylor, M., P. G. Kosmopoulos, S. Kazadzis, I. Keramitsoglou, C. T. Kiranoudis, 2016: Neural network radiative transfer solvers for the generation of high resolution solar irradiance spectra parameterized by cloud and aerosol parameters. J. Quant. Spectrosc. Radiat. Transfer, 168, 176–192, DOI: 10.1016/j.jqsrt.2015.08.018

Taylor, M., S. Kazadzis, V. Amiridis, R. A. Kahn, 2015: Global aerosol mixtures and their multiyear and seasonal characteristics. Atmos. Environ., 116, 112–129, DOI: 10.1016/j.atmosenv.2015.06.029

Van Geffen, J., Van Weele, M., Allaart, M. and Van der A, R., 2017: TEMIS UV index and UV dose operational data products, version 2, KNMI Dataset, DOI:

Zempila, M. M., J. H. van Geffen, M. Taylor, I. Fountoulakis, M. E. Koukouli, M. van Weele, R. J. van der A, A. Bais, C. Meleti, D. Balis, 2017: TEMIS UV product validation using NILU-UV ground-based measurements in Thessaloniki, Greece. Atmos. Chem. Phys., 17(11), 7157–7174, DOI: 10.5194/acp-17-7157-2017


I am very grateful to colleagues from the Tropospheric Emission Monitoring Internet Service (TEMIS) at KNMI and ESA for kindly making available plots of UV radiation monitoring products generated from the v2 processing algorithm, and to colleagues from the Greek national network for the measurement of ultraviolet solar radiation for permission to present results from Zempila et al. (2017) using their ground-based NILU-UV multi-filter radiometer measurement data and associated UV dose data obtained from a Brewer MKIII spectrophotometer. I would also like to acknowledge the colleagues with whom I collaborated to develop the solar radiation neural network modelling aspects presented.

Posted in Climate, earth observation, Remote sensing, Solar radiation

The sky is the limit – How tall buildings affect wind and air quality

By: Denise Hertwig

Based on current UN estimates, by 2050 over 6.6 billion people (68% of the total population) will be living in cities. Across the world, tall (> 50 m in height) and super-tall (> 300 m) buildings already define the skylines of many large cities and will become increasingly common outside of city centres to accommodate growing urban populations, especially where horizontal urban sprawl is geographically limited. For London, 2019 was declared the “year of the tall building” (NLA London Tall Buildings Survey 2019). At the moment, 541 buildings over 20 storeys (approx. 60 m in height) are planned or already under construction in the UK capital. Tall buildings are currently being built in 22 out of the 33 London boroughs, and 76 of them are expected to be completed this year.

Tall buildings, in isolation or as clusters, affect the urban micro-climate of the local surroundings and the neighbouring region. The impact on aerodynamics (e.g. local flow distortions, long-range wake effects), radiation budget (e.g. shadowing, radiative trapping) and components of the surface energy balance (e.g. storage of heat in building materials, anthropogenic heat emissions) can be large compared to low-rise buildings. Such modifications challenge current modelling frameworks for urban areas. Urban land-surface models used in numerical weather prediction, for example, typically do not account for building-height variations. They also rely on the concept that the flow within the urban canopy is sufficiently decoupled from the flow aloft, which is not the case if tall buildings protrude deep into the urban boundary layer.

Figure 1: Normalised pollutant concentrations (a,b) in an idealised building array. Pollutants are released from point sources located (a) in the street canyon behind a tall building, (b) in an intersection upwind of the tall building. (c) Mean-flow streamlines near the tall building with colours showing the mean vertical velocity. The black arrow indicates the upwind flow direction. Data are results from large-eddy simulations by Fuka et al. (2018) for the DIPLOS project.

Similarly, operational urban air quality and dispersion models do not usually account for tall-building effects (Hertwig et al. 2018). Tall buildings strongly change pedestrian-level winds in the surrounding streets and the flow field above the roofs of the low-lying buildings. This affects pollutant pathways and the overall ventilation potential of cities. Pollutants released near the ground in a street canyon on the leeward side of a tall building (Fig. 1a) can be rapidly lifted out of the building canopy by updrafts (Fig. 1c). Although the pollutants are emitted at the ground, the tall building causes a large proportion of the released mass to be transported above the roofs of the low-rise neighbourhoods, thereby reducing street-level pollution. A pollutant source located in an upwind intersection leads to drastically different results (Fig. 1b). The downdrafts on the windward side of the tall building result in strong horizontal flow out of the upwind street canyon (Fig. 1c). This outflow shifts the pollutants away from their release point in the intersection, creating a virtual source location in the adjacent street canyon and deteriorating air quality in the streets downwind.

Figure 2: (a) Building heights and (b) wind-tunnel model buildings of the neighbourhood between Waterloo station and Elephant & Castle in London (MAGIC project study area). Wind-tunnel measurements of the wake behind the central tall building (81 m height) in isolation and together with the low-rise building canopy shown in terms of (c) height profiles of flow speeds at several sites downwind of the tall building, (d) velocity differences to the ambient (undisturbed) flow with downwind distance at several heights. Details in Hertwig et al. (2019).

Flow interactions between tall and low-rise buildings also change the structure of the momentum deficit region (wake) that forms behind tall buildings. Wake models used for local air-quality predictions currently do not account for such interactions as they were derived for isolated buildings. Wind-tunnel experiments in a realistic scale model of the area between the Waterloo and Elephant & Castle stations in London (Fig. 2a,b) documented the strong impact of the canopy on tall-building wakes (Hertwig et al. 2019). Compared to tall buildings in isolation, the presence of a low-rise canopy displaces the wake vertically (Fig. 2c), so that flow speeds are reduced over longer distances downwind well above the canopy (Fig. 2d). In the case shown, the wake extends over distances larger than 5 times the height of the tall building (i.e. > 400 m). The increasing spatial resolution (of the order of 100 m) of mesoscale and microscale atmospheric models means that tall-building wakes are no longer subgrid-scale phenomena, but have an impact at the grid scale. Understanding and quantifying tall-building impacts on the boundary layer over cities is essential to identify needs for model refinements.
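A back-of-envelope check makes the grid-scale argument concrete (the >5H wake extent comes from the measurements above; the arithmetic below is my own illustration):

```python
def wake_span_in_cells(building_height_m, grid_spacing_m, wake_factor=5.0):
    """Number of model grid cells spanned by a tall-building wake,
    taking the wake length as wake_factor times the building height."""
    return wake_factor * building_height_m / grid_spacing_m

# The 81 m building from the wind-tunnel study on a ~100 m model grid:
print(wake_span_in_cells(81.0, 100.0))  # 4.05 -> the wake covers ~4 cells
```

On a 100 m grid the wake of a single 81 m building spans several cells, so it must be resolved, not parameterised away.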


Fuka, V., Z.-T. Xie, I.P. Castro, P. Hayden, M. Carpentieri, A.G. Robins, 2018: Scalar fluxes near a tall building in an aligned array of rectangular buildings. Boundary-Layer Meteorology 167, 53–76, DOI: 10.1007/s10546-017-0308-4

Hertwig, D., L. Soulhac, V. Fuka, T. Auerswald, M. Carpentieri, P. Hayden, A. Robins, Z.-T. Xie and O. Coceal, 2018: Evaluation of fast atmospheric dispersion models in a regular street network. Environmental Fluid Mechanics 18, 1007–1044, DOI: 10.1007/s10652-018-9587-7

Hertwig, D., H.L. Gough, S. Grimmond, J.F. Barlow, C.W. Kent, W.E. Lin, A.G. Robins and P. Hayden, 2019: Wake characteristics of tall buildings in a realistic urban canopy. Boundary-Layer Meteorology, DOI: 10.1007/s10546-019-00450-7 (in press)

Posted in Boundary layer, Climate, Urban meteorology

Balloon measurements at Stromboli suggest radioactivity contributes charge in volcanic plumes

By: Martin Airey

Volcanic lightning is an awe-inspiring and humbling display of nature’s power. It results from the breakdown of large electric fields that are generated within the volcanic plume. The processes that result in the accumulation of charge are varied and complex, and by no means fully understood. Current knowledge of the key mechanisms contributing to plume charging centres on the role played by ash. These mechanisms fall broadly into the categories of fractoemission and triboelectrification (Mather and Harrison, 2006). Fractoemission is the release of neutral and charged (electrons, positive ions, and photons) particles from fracture surfaces as magma fragments upon eruption (James et al., 2000); these particles may then interact with ash and aerosols to impart a net charge. Triboelectrification is a mechanism by which charge is transferred between ash particles as they collide.

When charge has been produced, it must then be separated in order for an electric field to develop and a discharge to occur. The plume is a dynamic and chaotic environment, where primitive constituents of the magma, such as solid particles, gases, and metal species are mixed with atmospheric material as it is entrained by the plume. Above the initial jet region, thermal buoyancy-driven dynamics enable the plume to grow to an altitude at which neutral buoyancy is attained. Within this setting, charged aerosols and charged ash grains settle differently resulting in the separation of positively and negatively charged regions in the plume (Mather and Harrison, 2006), which can ultimately cause a discharge to occur.
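The differential settling of ash grains and aerosol droplets can be illustrated with Stokes’ law for small spheres (a simplification I am adding here, not a calculation from the literature cited; larger plume particles fall outside the Stokes regime, and the sizes and densities below are only representative):

```python
def stokes_settling_velocity(radius_m, particle_density_kg_m3,
                             air_viscosity=1.8e-5, g=9.81):
    """Terminal fall speed (m s^-1) of a small sphere in air from
    Stokes' law: v = 2 r^2 g (rho_p - rho_air) / (9 mu).
    Air density is neglected relative to the particle density."""
    return 2.0 * radius_m**2 * g * particle_density_kg_m3 / (9.0 * air_viscosity)

# Representative values: a 10 micron ash grain vs a 1 micron aerosol droplet
v_ash = stokes_settling_velocity(10e-6, 2500.0)   # ~3 cm/s
v_aerosol = stokes_settling_velocity(1e-6, 1000.0)  # ~0.1 mm/s
print(v_ash / v_aerosol)  # ≈ 250: the ash settles far faster
```

Even with these rough numbers, oppositely charged populations carried on ash and on aerosol would drift apart at very different rates, consistent with the gravitational separation mechanism described above.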

But what if there are other additional mechanisms that contribute to either the charging or separation processes? As it is a complex, rapidly evolving, multiphase environment, there is the potential for many other chemical and physical interactions occurring within the plume that may currently be overlooked by this simplistic view. To test this, sensors and instrumentation developed at Reading over many years for deployment on weather balloons were combined, through a NERC-funded project, into a disposable modular payload called VOLCLAB (VOLCano LABoratory). The range of sensors that can be incorporated into the VOLCLAB package includes an optical backscatter droplet detector, a charge sensor, a sulphur dioxide sensor, an oscillating microbalance particle collector, and a turbulence sensor.

View from Stromboli’s summit into the vent complex showing the gas-rich plumes

In September 2017, a team of scientists from the University of Reading, Ludwig Maximillians Universität (Munich), and the University of Bath set off to Stromboli on fieldwork funded by National Geographic, equipped with VOLCLAB sensors, radiosondes, balloons, a thunderstorm detector, and lots of helium. Stromboli was an ideal choice for this expedition as it erupts frequently (several times an hour) and produces a wide range of plume types ranging from ash-rich to predominately gaseous. By launching these instruments directly into the plumes, in situ measurements may be acquired from all these plume types. The two-week long campaign required a daily hike to the summit at 900 m, often with very heavy kit. Many sensor-equipped balloons were launched from the summit with a range of success in encountering a plume, and VOLCLAB packages were deployed in fixed locations around the summit to continually record passing plumes.

   Martin Airey (holding VOLCLAB package) and Corrado Cimarelli

                 Keri Nicoll, Kuang Koh, and Martin Airey

Most interesting was the discovery of significant electric charge in plumes that contained negligible or no ash. This led to the investigation of what might be causing this unexpected charging. It is widely known that volcanoes emit a broad range of chemical products (Allard et al., 2000), one of which is radon, a radioactive gas produced in high concentrations by all volcanoes. Radon is routinely monitored at many volcanoes, including Stromboli, which is known to constantly emit very large quantities through the soil near the vents, and even more during eruptions (Cigolini et al., 2009). As radon radioactively decays, it increases the charge present by ionising the air. This additional source of charge, inferred for the first time with these new direct measurements inside gaseous plumes, will inevitably contribute to the overall charge structure and may affect the likelihood of lightning strikes.
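The scale of the radon effect can be estimated from the energy budget of a single decay (my own order-of-magnitude sketch, not a calculation from the paper; it uses the standard ~34 eV per ion pair for air and neglects the extra ionisation from radon’s decay products):

```python
def ion_production_rate(radon_activity_Bq_m3,
                        alpha_energy_eV=5.49e6, w_air_eV=34.0):
    """Ion pairs produced per m^3 per second by radon alpha decay:
    each 222Rn decay releases a ~5.49 MeV alpha particle, and roughly
    34 eV of deposited energy creates one ion pair in air."""
    return radon_activity_Bq_m3 * alpha_energy_eV / w_air_eV

# e.g. a radon activity concentration of 100 Bq m^-3:
rate = ion_production_rate(100.0)
print(f"{rate:.1e} ion pairs per m^3 per s")  # 1.6e+07
```

With the very large radon emissions reported near Stromboli’s vents, the implied ionisation rates are easily large enough to matter for plume charge.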

The original open access article, published in Geophysical Research Letters, may be found at:

And additional press can be found in the following links:
New York Times:
Atlas Obscura:
The VOLCLAB package covered in Meteorological Technology magazine:
Some footage of the fieldwork was also included in the Arte documentary “Living with Volcanoes” from around 7 minutes: 


Allard, P., Aiuppa, A., Loyer, H., Carrot, F., Gaudry, A., Pinte, G., et al. (2000). Acid gas and metal emission rates during long‐lived basalt degassing at Stromboli volcano. Geophysical Research Letters, 27(8), 1207–1210.

Cigolini, C., Poggi, P., Ripepe, M., Laiolo, M., Ciamberlini, C., Delle Donne, D., et al. (2009). Radon surveys and real‐time monitoring at Stromboli volcano: Influence of soil temperature, atmospheric pressure and tidal forces on 222Rn degassing. Journal of Volcanology and Geothermal Research, 184(3–4), 381–388.

James, M. R., Lane, S. J., & Gilbert, J. S. (2000). Volcanic plume electrification—Experimental investigation of fracture charging mechanism. Journal of Geophysical Research, 105(B7), 16,641–16,649.

Mather, T. A., & Harrison, R. G. (2006). Electrification of volcanic plumes. Surveys in Geophysics, 27(4), 387–432. DOI: 10.1007/s10712-006-9007-2


Posted in Climate, Convection, Measurements and instrumentation, Volcanoes