The Future of Arctic Sea Ice

By: Rebecca Frew

It is well documented in scientific studies and the news that the summer extent of Arctic sea ice has been declining rapidly in response to global warming. As the summer sea ice shrinks and retreats northward, the summer marginal ice zone (MIZ) has been widening, making up a larger proportion of the summer sea ice cover (Rolph et al. 2020).

The MIZ is typically defined as the area in which sea ice is influenced by waves. A more convenient definition, often used in studies, is the area where the sea ice concentration is between 15% and 80%. The area above 80% is defined as the ice pack, where the sea ice floes are more densely packed together, blocking direct atmosphere-ocean interaction. The MIZ is typically small in the winter and grows to its maximum extent in the summer, as the ice pack fragments and melts, creating smaller and less densely packed floes.
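This concentration-based definition amounts to a simple thresholding rule. As a rough sketch (the function name and example values are hypothetical; only the 15% and 80% thresholds come from the definition above):

```python
def classify_ice(concentration):
    """Classify a grid cell by sea ice concentration (fraction, 0-1).

    Uses the conventional thresholds: below 15% is open ocean,
    15-80% is the marginal ice zone (MIZ), above 80% is ice pack.
    """
    if concentration < 0.15:
        return "open ocean"
    elif concentration <= 0.80:
        return "MIZ"
    else:
        return "ice pack"

# Example: classify a few grid-cell concentrations
for c in [0.05, 0.40, 0.95]:
    print(c, "->", classify_ice(c))
```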

Figure 1: Sea ice floes. Image credit: Kevin Woods, NOAA Pacific Marine Environmental Laboratory.

This trend of an increasingly MIZ-dominated ice cover is projected to continue (Strong & Rigor 2013, Aksenov et al. 2017) as the Arctic transitions to sea-ice-free summers. The relative rates and importance of sea ice processes in the MIZ differ from those in the ice pack. This has consequences for the exchange of heat and salt between the atmosphere and ocean, and ultimately for the date at which the Arctic becomes ice free in summer.

Three processes that differ between the MIZ and the ice pack are the lateral melt rate (melting on the sides of the floes), basal (bottom) melting of the floes, and breakup of floes by waves. The average floe size in the MIZ is smaller than in the ice pack, which increases the perimeter-to-area ratio and promotes faster lateral melting. The ice also tends to be thinner, which increases the rate of basal melting in the summer. Smaller, less densely packed floes in the MIZ are more susceptible to wave breakup, which creates still smaller floes that tend to melt more quickly.
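The perimeter-to-area argument can be made concrete by idealising floes as circles (an assumption for illustration only): the ratio scales inversely with floe size, so a floe ten times smaller exposes ten times more edge per unit area to lateral melting.

```python
import math

def perimeter_to_area(diameter_m):
    """Perimeter-to-area ratio (per metre) of an idealised circular floe."""
    radius = diameter_m / 2.0
    perimeter = 2.0 * math.pi * radius
    area = math.pi * radius ** 2
    return perimeter / area  # simplifies to 2 / radius

# A 100 m MIZ floe versus a 1 km pack floe: the smaller floe has ten
# times the edge per unit area, so lateral melt acts much faster on it.
small, large = perimeter_to_area(100.0), perimeter_to_area(1000.0)
print(round(small / large, 6))
```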

Figure 2: Arctic sea ice and MIZ extent in the 1980s and the 2010s, from a sea ice model simulation.

In my research, I am investigating the relative importance of growth and melt processes in the MIZ and whether they might change in the future. As part of this, I am considering how these processes are currently represented in climate models, whether this representation is accurate, and how sensitive the processes are to parameters that are difficult to constrain from observations. For example, a relatively recent development in sea ice models is the inclusion of a floe size distribution (Roach et al. 2018). Previously, sea ice floes were either all one size or ignored in models; now a distribution of floe sizes is calculated within each grid cell, better representing the observed variation from centimetres to hundreds of kilometres. This matters when modelling the MIZ because floe sizes there are smaller, and floe size influences the lateral melt rate.

How lateral melt rate differs in the MIZ from the ice pack, and how it might change in the future are a couple of the questions I am trying to answer. Answering these questions about processes in the MIZ helps to improve projections of Arctic sea ice, and better represent the response of Arctic sea ice to different future scenarios of warming.


Aksenov, Y., Popova, E. E., Yool, A., Nurser, A. J. G., Williams, T. D., Bertino, L., and Bergh, J., 2017: On the future navigability of Arctic sea routes: High-resolution projections of the Arctic Ocean and sea ice, Mar. Pol., 75, 300–317.

Roach, L. A., Horvat, C., Dean, S. M., and Bitz, C. M., 2018: An Emergent Sea Ice Floe Size Distribution in a Global Coupled Ocean-Sea Ice Model, J. Geophys. Res.-Ocean, 123, 4322–4337.

Rolph, R. J., Feltham, D. L., and Schröder, D., 2020: Changes of the Arctic marginal ice zone during the satellite era, The Cryosphere, 14, 1971–1984.

Strong, C., and Rigor, I. G., 2013: Arctic marginal ice zone trending wider in summer and narrower in winter, Geophys. Res. Lett., 40, 4864–4868.

Posted in Arctic, Climate, Climate change, Cryosphere, Polar

Three Flavours of Pykrete

By: David Livings

A few years ago, Giles Foden published a novel called Turbulence. Most of the book is about a young meteorologist in the second world war, but there’s a framing story set in the 1980s, in which the same man is sailing from Antarctica to Saudi Arabia in a ship made from a mixture of ice and frozen wood pulp called Pykerete. Pykerete was named after Geoffrey Pyke, who proposed building giant aircraft carriers from such a material. Some of the characters in the book are real people, some are fictionalised versions of real people, and some are completely made up. Pyke and Pykerete were obviously made up …

Or so I thought. I subsequently learnt that Geoffrey Nathaniel Joseph Pyke (1893–1948) really did exist, or else is a very elaborate hoax of which the Oxford Dictionary of National Biography is either a victim or a perpetrator. Not only did Pyke propose building aircraft carriers from ice, but he got taken seriously (at least for a while). Pykrete (sometimes spelt Pykerete or Pykecrete) was named after him, but was not actually his invention. The initial idea of adding wood pulp to ice to increase its strength came from two researchers at the Brooklyn Polytechnic, and its properties were investigated at Pyke's request by the chemist Max Perutz, who would go on to win the Nobel Prize in Chemistry for his work on the structure of haemoglobin. Perutz published a paper on pykrete in the Journal of Glaciology in 1948.

Last year, in a change of career direction, I moved from meteorological research to software engineering on a sea ice model. As part of my familiarisation with the new field, I thought it would be a good idea to carry out some experiments on the substances being modelled. The first experiment was to investigate the difference between fresh water ice and salt water ice. I made samples of both in plastic pots that originally contained desserts from a supermarket (dimensions: 45 mm diameter at bottom, 70 mm at top, height 88 mm, but only filled to 66 mm for the experiment). The salt water ice contained enough table salt to cover the bottom of the pot to a depth of 1–2 mm before adding the water. Both samples were frozen in a domestic freezer for over 24 h, and then taken out and attacked from the top with a blunt-ended table knife. The knife didn’t penetrate the fresh ice, but just sent up some ice chips. It did penetrate the salt ice, which had a mushier texture.

It was at this point that I remembered Pyke and pykrete, and decided to make some for myself. A good place to start an investigation of pykrete is the web page of Peter Goodeve, which takes a critical look at some of the myths that have grown up about the substance. It also contains links to other sources (some of which perpetuate the myths).

Sources differ over whether the magic ingredient in pykrete is wood pulp, wood powder, sawdust, or wood chips. I had none of these available, but I did have a bag of what described itself as Oatbran & Wheatbran Porridge Oats, so I improvised with that. In one of the pots I mixed dry porridge with just enough water to cover it. I filled the other pot with plain water to the same depth, which was about 30 mm. After freezing both samples, I turned them out of their pots and hit them with a hammer. The plain ice shattered after one blow. The porridge ice survived three blows, only denting. This substance was definitely tougher than plain ice.

This experiment with frozen porridge left a couple of things to be desired. Firstly, the additive wasn’t one of the classic pykrete additives. Secondly, the way in which the amount of additive was determined was rather crude. Perutz reports good results with 4–14% wood pulp.

Recently I was able to obtain some fine sawdust, and decided to repeat the experiment using this and other additives. As well as sawdust and porridge, I followed Goodeve’s suggestion of reverse engineering wood pulp by using torn up newspaper. Rather than tearing up the newspaper (actually three pages from the LRB) I cut it into tiny pieces a few millimetres across. If doing this yourself, allow at least two hours.

I used 20 g of each additive to 200 ml of water. One quarter of the mixture was used to make small samples as in the previous experiment, and the rest was used to make larger samples in another type of dessert pot (sample dimensions: 60 mm diameter at bottom, 77 mm at top, height 40 mm). On making the mixtures, it became clear that the additive settling to the bottom was going to be a problem and also that the experiment last year had used much more than 10% porridge. To guard against settling, I took the mixtures out of the freezer and stirred them every half hour for the first three and a half hours. The following figures show the large samples before and after being hit with a hammer.
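As a side note, the 20 g per 200 ml recipe can be compared with Perutz's 4–14% range by converting it to a mass fraction. A small illustrative calculation (assuming water at 1 g per ml; the function is just arithmetic, not part of the experiment):

```python
def additive_mass_fraction(additive_g, water_ml):
    """Mass fraction of additive, assuming water density of 1 g/ml."""
    water_g = water_ml * 1.0
    return additive_g / (additive_g + water_g)

# 20 g of additive in 200 ml of water:
percent = 100.0 * additive_mass_fraction(20.0, 200.0)
print(round(percent, 1))  # -> 9.1, within Perutz's 4-14% range
```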

Figure 1. Samples of plain ice and the three flavours of pykrete beside their additives. Top left: plain ice. Top right: sawdust. Bottom left: porridge. Bottom right: newspaper.

Figure 2. The results of hitting the samples with a hammer. Top left: the plain ice split after two blows. Top right: the sawdust pykrete survived six blows with little damage. Bottom left: the porridge pykrete split after five blows. Bottom right: the newspaper pykrete survived six blows.

Results from the small samples were similar. The plain ice shattered after one blow, sending fragments flying across the room. The porridge pykrete split after two blows. The sawdust and newspaper pykretes survived three blows.

Conclusion: Sawdust pykrete and newspaper pykrete are tougher than plain ice. Porridge pykrete at the same concentration is intermediate in strength, but at higher concentrations it is impressive.


The author thanks Debbie Turner and Ian Shankland for providing the sawdust.


Perutz, M. F., 1948: A description of the iceberg aircraft carrier and the bearing of the mechanical properties of frozen wood pulp upon some problems of glacier flow. J. Glaciol., 1, 95–104.

Posted in Climate, Cryosphere, History of Science

Can You Guess The Ingredients Of A Cake?

By: Amos Lawless

“Mmm this cake is lovely, what’s in it?” “Try to guess!” How often have we had that response from a friend or colleague who is proud of the cake they have just baked? And we usually try to guess the main ingredients – “I think there must be ginger or cinnamon. And can I taste lemon?”. But what if that friend persisted and asked you to try to guess all the ingredients – how many eggs they have used, how many grams of sugar are in the cake and how much butter it contains? Maybe you’d think they’d gone a bit crazy! Surely it is impossible to work out all the ingredients just by tasting it? It may sound unreasonable, but this is effectively what we try to do each day to interpret satellite measurements for our weather forecasts.

Weather satellites, besides giving us the nice pictures that we see on television, provide a wealth of other information about the atmosphere. Satellites actually measure the radiation emitted from the atmosphere at different frequencies, and these measurements depend on the properties of the part of the atmosphere that the satellite is looking at, such as its temperature, humidity and winds. It is as if these “ingredients” of the atmosphere are brought together into a “cake” that the satellite can taste. But what we really want to know is the ingredients themselves. So how can we split the satellite measurement back into its atmospheric ingredients?

Thankfully we have a mathematical technique for doing this, which we call data assimilation. Each satellite instrument can measure at many different frequencies (as if they have many “taste buds” sensitive to different ingredients), so by combining measurements from different satellites in an intelligent way, as well as other more conventional measurements made on the ground, data assimilation helps us to build up a complete picture of the atmosphere all around the globe. This is done every day as part of modern weather forecasting, since knowing what the atmosphere is like now is essential if we are to make accurate forecasts. Most data assimilation techniques are based on finding an optimal combination of what we think is the current state of the atmosphere and our measurements, taking into account the precision of the different pieces of information we have. Writing down the theory of how to do this is fairly easy, but putting it into practice is usually much harder.
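The “optimal combination taking precision into account” can be illustrated with the simplest possible case: one background (forecast) value and one observation, weighted by their error variances. This scalar sketch with made-up numbers is only a cartoon of what operational systems do across millions of variables:

```python
def analysis(background, observation, var_background, var_observation):
    """Inverse-variance weighted combination of a background estimate
    and an observation -- the scalar core of many assimilation methods."""
    weight = var_background / (var_background + var_observation)
    return background + weight * (observation - background)

# Hypothetical numbers: a forecast of 10.0 C (error variance 4) and an
# observation of 12.0 C (error variance 1). The more precise observation
# pulls the analysis most of the way towards it.
print(analysis(10.0, 12.0, 4.0, 1.0))  # -> 11.6

# With a very noisy observation, the analysis stays near the forecast.
print(analysis(10.0, 12.0, 4.0, 100.0))
```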

Scientists of the Data Assimilation Research Centre (DARC) at the University of Reading work on a variety of problems related to data assimilation, from developing new approaches to applying it in practice. Each year, jointly with the National Centre for Earth Observation (NCEO), we organise a training course for scientists around the world to learn about the theory of data assimilation and how to apply it in practice. Lectures from DARC scientists are combined with computer practical exercises, so that participants can learn the theory of data assimilation and get a feel for how different methods perform in practice. Normally the course is held in person, but this year there was the challenge of whether it was possible to hold it online. So it was that at the start of May our first ever training course on data assimilation using Microsoft Teams took place. Joining were 29 scientists from the UK, Belgium, Bulgaria, Denmark, Germany, Greece, Italy, Spain and the USA, working in universities, research institutes and meteorological forecasting centres.

Figure 1: Lecture by DARC scientist Dr Javier Amezcua

So how did we do it? By now we are already used to giving and listening to talks online, so the lecture part of the course was fairly straightforward. However, an important aspect of a course such as this is that it is interactive, with the possibility to ask questions. Thankfully the chat function worked well here, with participants putting questions in the chat continually and other DARC scientists responding if it wasn’t necessary to interrupt the lecture. Then computer practical exercises took place in breakout rooms, with groups of three participants working together. And during the breaks informal discussions took place using Gather.Town (a very impressive tool that I have only just discovered), including use of a virtual whiteboard to discuss further the mathematics. So what did the participants say about the online delivery? Comments included “I think the format worked really well”, “the arrangements for the remote delivery of the course were excellent”, “I think the practicals were organised well with lecturers rotating and coming to different rooms. That made me feel like I was in a classroom with having constant access to help”. Running this course certainly taught us a lot about how to teach data assimilation online, with lots of lessons learnt for the future. But everybody also realised that there are limitations to such a format. Hopefully next year we will be able to run the course in person again, with the opportunity for more informal discussions over coffee … and plenty of cake!

Figure 2: Online group photo of some of the lecturers and participants.




Posted in data assimilation, earth observation, Teaching & Learning

Data Assimilation Improves Space Weather Forecasting Skill

By: Matthew Lang

Over the past few years, I have been working on using data assimilation methodologies that are prevalent in meteorology to improve forecasts of space weather events (Lang et al. 2017; Lang and Owens 2019). Data assimilation does this by incorporating observations from spacecraft orbiting the Sun into numerical solar wind models, allowing estimates of the solar wind to be updated. These updated solar wind conditions are then used to drive a solar wind model that produces forecasts of the solar wind at Earth. I have shown that over the lifetime of the STEREO-B spacecraft (2007–2014), data assimilation is able to reduce errors in solar wind forecasts by about 31% compared to forecasts performed without it (Lang et al. 2020). Furthermore, these data-assimilated forecasts can compensate for systematic errors in forecasts produced from in-situ observations alone.

Space weather is the study of the changing environmental conditions in near-Earth space and its impacts on humans and our technologies, both in space and on Earth. One of the major drivers of space weather events is the solar wind, the constant outflow of plasma (the fourth state of matter that can be thought of as a hot, highly magnetised gas) from the Sun’s surface. The solar wind fills the solar system with particles and magnetic field and is constantly bombarding the Earth’s magnetic field.

Coronal Mass Ejections (CMEs) are huge eruptions of plasma from the Sun’s atmosphere that can travel from the Sun to Earth, through the solar wind, in as little as 18 hours and can drive the most severe space weather events. These include depletion of a part of the ionosphere that is responsible for bouncing radio signals around the planet, hence hampering long-distance communication systems.

Figure 1: Transformer damage from a CME that caused a blackout in Quebec in 1989.

Another major impact on human technologies is that the solar wind and CMEs drive changes to the Earth’s magnetic field, inducing electrical currents in the Earth’s atmosphere that have the potential to overload power systems, causing transformer fires and widespread blackouts (this occurred in Quebec in 1989 (see Figure 1) and in Sweden in 2003). Most of the impacts of a severe space weather event can be mitigated if accurate forecasts are available. And that’s where data assimilation comes in.

Data assimilation is the combination of information from forecasts and observations of a system to produce an optimal estimate of the true state of that system. It is an invaluable tool in many aspects of modern life, with applications ranging from course correction during the Apollo Moon landing missions to satellite navigation in areas of poor GPS coverage and oil reservoir modelling. The most notable application for this blog, however, is its use in numerical weather prediction, where it is a necessary step for producing more accurate starting points for weather forecasts. This reduces the impact of the “butterfly effect”, where a small change can lead to vastly different outcomes in the future (the famous hypothetical example being that the titular butterfly flaps its wings in Japan, leading to a tornado forming in the USA). By ensuring that weather forecasts are started as close to the truth as possible, the resulting forecasts will be more accurate over longer periods.

For consecutive 27-day periods (the time taken for the Sun to rotate once at its equator, relative to the Earth) between 2007 and the end of 2014, the MAS (Magnetohydrodynamics Around a Sphere) solar wind model (Linker et al. 1999) was used to generate a prediction of the solar wind conditions, which I call the prior solar wind. Data assimilation is then performed using data from the STEREO-A, STEREO-B and ACE spacecraft to generate a new set of solar wind conditions, which I shall refer to as the posterior solar wind. The STEREO spacecraft orbit the Sun at approximately the same radial distance as Earth, with STEREO-A drifting ahead of Earth by about 22° per year and STEREO-B falling behind at the same rate. The ACE spacecraft is in near-Earth space, between the Earth and the Sun. The prior and posterior solar winds are then input into the simplified solar wind model HUXt (Owens et al. 2020), developed at the University of Reading, to produce forecasts for the subsequent 27 days. Finally, the prior and posterior forecasts were compared with a forecast from the STEREO-B spacecraft (the closest in-situ observation of the solar wind behind the Earth during this time), generated by assuming that the solar wind speed observed at STEREO-B will occur at Earth after a time lag determined by the angular separation of the spacecraft behind Earth and the rotational speed of the Sun.
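The corotation time lag is simple to compute: it is the angular separation divided by the Sun’s rotation rate. A minimal sketch, using the 27-day equatorial rotation period from the text and a purely illustrative 22° separation:

```python
def corotation_lag_days(separation_deg, rotation_period_days=27.0):
    """Days until the solar wind seen at a spacecraft trailing Earth by
    separation_deg of longitude rotates around to arrive at Earth."""
    return rotation_period_days * separation_deg / 360.0

# A spacecraft 22 degrees behind Earth samples solar wind that should
# reach Earth's longitude roughly 1.65 days later.
print(round(corotation_lag_days(22.0), 2))  # -> 1.65
```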

Figure 2: Plot showing the Root Mean Squared Errors (± one standard error) of the prior (blue), posterior (red) and STEREO-B corotation (orange) mean 27-day forecasts of the solar wind speed over the lifetime of STEREO-B.

The results of these forecasts are summarised in Figure 2, where the mean 27-day solar wind speed forecasts from the prior, posterior and STEREO-B corotation are shown over the lifetime of STEREO-B. The posterior and corotation forecasts have lower Root Mean Squared Errors (RMSEs) than the prior forecast, showing that both are substantial improvements over the prior at all lead times. It is understandable that the STEREO-B corotation and posterior forecasts are similar, as both use the observations from the STEREO-B spacecraft.

Figure 3: Solar wind speed forecasts using the HUXt model, where the Sun is in the centre and Earth is at the same location as the ACE spacecraft (black circle). The left panel is initialised from the MAS model without data assimilation, and the right panel from a data assimilation analysis in which STEREO (black triangles) and ACE observations have been assimilated. A coronal mass ejection (CME) initialised with the same characteristics and released from the Sun at the same time is propagated through the two ambient solar winds, yielding very different evolutions of the CME.

A major difference between the STEREO-B corotation and the posterior forecast, however, is that the corotation produces a forecast at a single point, whereas the posterior produces a forecast at every point in the model domain (at all radii and longitudes of interest). This is especially useful because accurate specification of the solar wind can influence how CMEs evolve on their way to Earth. Figure 3 shows two CMEs initialised with the same properties (obtained from Barnard et al. 2020); the left one is propagated through a ‘prior’ solar wind without data assimilation, and the right one through the ‘posterior’ solar wind, which includes it. The CME evolution is changed greatly by the different ambient solar winds: the CME arrives 19 minutes earlier than observed at Earth in the data-assimilated solar wind, compared to 41 hours late in the solar wind without data assimilation. By comparison, for the same CME, the operational solar wind model used by the Met Office predicted arrival at Earth 10 hours before it was observed (Barnard et al. 2020). This shows the great potential of data assimilation to improve forecasts not only of the solar wind, but also of the more hazardous coronal mass ejections.


Barnard, L., M. J. Owens, C. J. Scott, and C. A. de Koning, 2020: Ensemble CME Modeling Constrained by Heliospheric Imager Observations. AGU Adv., 1.

Lang, M., and M. J. Owens, 2019: A Variational Approach to Data Assimilation in the Solar Wind. Sp. Weather, 17, 59–83.

——, P. Browne, P. J. van Leeuwen, and M. Owens, 2017: Data Assimilation in the Solar Wind: Challenges and First Results. Sp. Weather, 15, 1490–1510.

——, J. Witherington, H. Turner, M. Owens, and P. Riley, 2020: Improving solar wind forecasting using Data Assimilation.

Linker, J. A., and Coauthors, 1999: Magnetohydrodynamic modeling of the solar corona during Whole Sun Month. J. Geophys. Res. Sp. Phys., 104, 9809–9830.

Owens, M., and Coauthors, 2020: A Computationally Efficient, Time-Dependent Model of the Solar Wind for Use as a Surrogate to Three-Dimensional Numerical Magnetohydrodynamic Simulations. Sol. Phys., 295, 43.

Posted in Climate, data assimilation, space weather, Weather forecasting

Cold Winter Weather: Despite or Because of Global Warming?

By: Marlene Kretschmer

This year’s winter was cold. There was heavy snowfall across the UK, Europe and parts of the United States including Texas. This severe weather came with significant societal and economic impacts.

Every time cold extremes like this occur, one can almost predict the media headlines. On the one hand, dubious media outlets will use a regional cold snap to sow doubt about human-made global warming by deliberately confusing weather with climate. In a similarly absurd manner, other newspapers will state that climate change was responsible for the cold snap. In between, there are debates among scientists about the role of climate change in causing cold extremes. This is where it gets complicated and, hence, interesting.

Climate change manifests itself in different ways. While the increase of CO2 in the atmosphere leads to warmer temperatures globally, there may be indirect mechanisms causing opposite effects regionally. In recent years, researchers have hypothesised that the melting of Arctic sea ice, a direct result of global warming, favours winter cold extremes in the Northern Hemisphere mid-latitudes. In particular, it has been suggested that the decline in Barents and Kara sea ice weakens the stratospheric polar vortex, a band of fast-blowing westerly winds circling the Arctic during winter at approximately 15–50 km altitude. Weak phases of the vortex are linked to cold winter weather in Eurasia and North America. In other words, it was proposed that climate change indirectly leads to colder weather. The polar vortex this year was extremely weak, and was therefore likely the culprit behind the cold weather. But are Arctic changes also making these weak vortex phases more likely?

Figure 1: Schematic overview of the different plausible causal mechanisms making it difficult to quantify the influence of autumn Barents and Kara sea ice concentrations (BK-SIC) on the winter stratospheric polar vortex (SPV); sea level pressure over the Ural Mountains (Ural-SLP) and over the North Pacific (NP-SLP), lower-stratospheric poleward eddy heat flux (vT), North Pacific sea ice concentrations (NP-SIC) and El Niño–Southern Oscillation/Madden–Julian Oscillation (ENSO/MJO). The arrows represent assumed causal relationships. (Taken from Kretschmer et al, 2020)

The scientific debate regarding a causal role of Arctic sea ice loss is controversial (see e.g. Cohen et al. 2020, Screen et al. 2018). Scientists face a dilemma. In observational data, a statistically significant signal has been detected. Given the large natural variations in climate data and the different possible mechanisms, which are difficult to disentangle, it is hard to tell whether this signal reflects a causal influence (see also Fig. 1). This is further compounded by partly opposing results from climate model simulations. So far, all that can be said conclusively is that the question of whether the decline of Arctic sea ice is causing a weakening of the polar vortex cannot be answered conclusively.

But should we ignore the potential risk the decline of the Arctic holds for our future weather and climate, just because the current data do not allow a clear statement? The short answer is: No!

We explore this aspect in our latest study (see Kretschmer et al. 2020). In contrast to previous studies, which examined whether the decrease in sea ice causes a weakening of the polar vortex (and thus severe winter weather), we pose a different question. We ask: Assuming there is a causal influence of sea ice loss, what does this imply?

To address this question we use different climate model simulations of the next 100 years. All climate projections agree that sea ice will continue to melt as climate change progresses. This is a sad but unsurprising fact, highlighting the need to evaluate possible consequences of a changing Arctic. Based on the model simulation data and using methods from causal inference, we further conclude that the causal effect of Arctic sea ice on the polar vortex is, if it exists, plausibly only very small. However, given that the decrease of sea ice will be huge, this small effect can have large implications. In fact, the climate models project a weakening of the polar vortex for as long as the autumn sea ice in the Barents and Kara Seas continues to melt. Whilst this is no definitive proof of a causal influence of sea ice loss, it is consistent with the initial hypothesis. Moreover, we find that once all the sea ice is gone, the vortex strengthens again, suggesting there are other, poorly understood mechanisms by which global warming affects the polar vortex and thereby our weather in the mid-latitudes.

More generally, our study calls for more focus on understanding plausible climate-change related risks. Absolute statements about the regional effects of global warming are often not possible, given the complexity of the climate system and often contradictory climate predictions. This forces decision makers to act under large uncertainties. It is therefore necessary for climate scientists to evaluate different causal possibilities (such as an influence of the sea ice loss on the polar vortex) to gain a better understanding of regional climate risks. This also requires the use of different statistical tools and techniques – some of which we apply and discuss in our study.

The next time a cold snap hits Europe the same oversimplistic media headlines can be expected. Hopefully, however, the scientific debate will then have shifted towards a more conditional risk-based understanding of the plausible impacts of the changing Arctic.


Cohen, J., Zhang, X., Francis, J. et al. Divergent consensuses on Arctic amplification influence on midlatitude severe winter weather. Nat. Clim. Chang. 10, 20–29 (2020).

Screen, J.A., Deser, C., Smith, D.M. et al. Consistency and discrepancy in the atmospheric response to Arctic sea-ice loss across climate models. Nature Geosci 11, 155–163 (2018).

Kretschmer, M., Zappa, G. and Shepherd, T.G. The role of Barents–Kara sea ice loss in projected polar vortex changes. Weather Clim. Dynam. 1, 715–730 (2020).


Posted in Arctic, Climate, Climate change, Cryosphere, Polar

What Did You Get For Number 9?

By: Todd Jones

A common way to check your work in school is to turn to your neighbour and ask, “What did you get for this one?”  With a little extra effort, though, students end up having productive discussions, learning to solve problems they didn’t fully understand or discovering new, clearer routes to the solution.  Even for broadly defined questions, comparing answers can lead to consensus or a narrowing of the possibilities.

Particularly effective teachers encourage and schedule these comparison sessions. Scientists, ever the continual students, bring this technique to their research, aspiring to uncover solutions to challenging problems they have not previously considered by comparing their research with that of others. 

While the methods of solving 23÷1.4 with pen and paper vary little, questions about the motions of the atmosphere are not always so well constrained.  The many sensitive equations that describe these motions can often only be solved approximately, and scientists may reasonably choose from a number of approximations (with varying levels of accuracy) based on practical issues, such as how powerful their computers are.  For example, these calculations are often very large problems that require the atmosphere to be divided into a number of points where calculations of temperature, wind, and rain can be performed.  Between the points, these parameters must be approximated with something like a “best guess.”  Each of these justifiable choices will lead to differing solutions that can generate years of classroom-style “compare and discuss” activity.
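One of the simplest possible “best guesses” between grid points is linear interpolation between neighbouring values. The toy sketch below is illustrative only: the grid spacing, temperature values and function name are all invented, and real models use far more sophisticated schemes.

```python
def interpolate(grid_values, grid_spacing_km, x_km):
    """Estimate a field (e.g. temperature) between grid points by
    linear interpolation -- one simple 'best guess' between points."""
    i = int(x_km // grid_spacing_km)              # grid point to the left
    frac = (x_km - i * grid_spacing_km) / grid_spacing_km
    return (1 - frac) * grid_values[i] + frac * grid_values[i + 1]

# Temperatures (C) at grid points 100 km apart; estimate at 250 km,
# halfway between the 200 km and 300 km points.
temps = [15.0, 14.0, 12.0, 9.0]
print(interpolate(temps, 100.0, 250.0))  # -> 10.5
```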

For example, we could compare solutions to simple models of the atmosphere.  One can remove complications of the real world and create close approximations that allow an easier solution.  For instance, picture a non-orbiting, non-rotating world that is entirely warm ocean, where the oscillations of night and day are replaced by constant moderate sunshine.  The “world” doesn’t even have to be a sphere!  Modelling the atmosphere of this world, we would see the atmosphere cool gradually, radiating energy to space.  As the lower atmosphere warms from the ocean’s heat, moist convective bubbles begin to rise and then cool, forming clouds and rain.  Over time, the heat released by condensation of water vapour into clouds and rain balances the radiative cooling of the air.  We call this energetic balancing “radiative-convective equilibrium,” or RCE.  This model is a close approximation to Earth’s climate, and it can be used as a “toy Earth” to learn how the climate might change in response to parameter changes.
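The energetic balance in RCE can be checked with a back-of-envelope calculation. The sketch below uses round illustrative numbers (a radiative cooling of about 1 K per day and a standard atmospheric column mass) and shows that balancing radiative cooling against condensation heating implies roughly 4 mm of rain per day:

```python
# Back-of-envelope RCE: in equilibrium, latent heating from rainfall
# balances the radiative cooling of the atmospheric column.
CP = 1004.0          # specific heat of air, J kg-1 K-1
LV = 2.5e6           # latent heat of vaporisation, J kg-1
COLUMN_MASS = 1.0e4  # approximate mass of an atmospheric column, kg m-2
DAY = 86400.0        # seconds per day

cooling_rate = 1.0 / DAY                          # radiative cooling, K s-1 (~1 K/day)
radiative_loss = COLUMN_MASS * CP * cooling_rate  # W m-2 lost to space
rain_rate = radiative_loss / LV * DAY             # kg m-2 per day, i.e. mm/day

print(f"{radiative_loss:.0f} W m-2 balanced by ~{rain_rate:.1f} mm/day of rain")
```

The ~4 mm/day result is pleasingly close to observed tropical-mean rainfall, which is part of why RCE is such a useful toy Earth.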

Figure 1.  A scattered deep convective cloud scene from a simulation of the climate of a simplified world in radiative-convective equilibrium with an ocean constantly at 22°C in the UK Met Office model.  The back left wall shows a slice of relative humidity (hur). The back right wall shows a slice of specific humidity (water vapour concentration, hus).  The bottom surface shows the total amount of water vapour in the column of air above each point (prw).  Cloud surfaces are coloured for various levels of frozen cloud particles (cli), liquid cloud droplets (clw), and rain (plw).  Orange arrows show the velocity of the wind near the model surface.

Playing with these models over the past few decades, scientists have noticed some intriguing behaviour.  Choosing different global temperatures, we can investigate how clouds respond to global warming: will more reflective clouds spread and counter the warming?  Much of the time, the deep convective clouds that are generated in these models appear as one might guess: randomly scattered, sputtering across the little world (Figure 1).  However, when oceans are warm enough or the modelled worlds are sufficiently large, the deep convective clouds can spontaneously cluster into isolated locations, with very dry regions in between (Figure 2).  News of this phenomenon spurred dozens of independent studies for comparison [1], and scientists began to uncover that phenomena like interactions between radiation and clouds can lead to this convective clustering.

Figure 2.  A clustered deep convective cloud scene from a simulation of the climate of a simplified world in radiative-convective equilibrium with an ocean constantly at 32°C in the UK Met Office model.  Features are as described in Figure 1.

For a fair test, the parameters used by each model should be the same, so a group of scientists gathered to define a specific set of parameters for testing warming in RCE climates in models of many geometries and scales.  These “rules” were codified and shared [2], and volunteers reported their solutions for comparison [3].  Formally, this is an intercomparison of models simulating RCE, known as RCEMIP.  The 30+ participating configurations ranged from domains around 100 km across to the full globe, and from resolutions of 200 m to 50 km.

Though there were many small differences between the model results, there was broad agreement over the formation of aggregating clusters.  Over small areas, only one model (that in Figure 2) developed convective clusters, whereas over large areas, all but a few models developed convective clusters.  The deep clouds in most models showed that, as the world is warmed, anvil tops become warmer, sit higher in the atmosphere, and cover smaller areas.  This means that the effect of high cloud tops on climate would vary little under global warming.  Instead, it is changes in low cloud properties and in the degree of convective clustering that can alter the climate response [4].

Compared to high-resolution models, lower resolution global models show a change in clustering with global warming that indicates a smaller amount of warming for a given greenhouse gas forcing.  Because the higher resolution models tend to be more accurate, it’s possible that coarser climate models have painted too rosy a picture of future warming. 

Though there is disagreement, there is much to be said for comparing solutions.  Many investigations comparing model patterns are underway, ultimately steering toward a better-understood solution of the climate system problem.


[1] Wing, A. A., K. Emanuel, C. E. Holloway, and C. Muller, 2017: Convective self-aggregation in numerical simulations: A review. Surveys in Geophysics, 38 (6), 1173–1197, doi:10.1007/s10712-017-9408-4.

[2] Wing, A. A., K. A. Reed, M. Satoh, B. Stevens, S. Bony, and T. Ohno, 2018: Radiative–convective equilibrium model intercomparison project. Geoscientific Model Development, 11 (2), 793–813, doi:10.5194/gmd-11-793-2018.

[3] Wing, A. A., and Coauthors, 2020: Clouds and convective self-aggregation in a multimodel ensemble of radiative-convective equilibrium simulations. Journal of Advances in Modeling Earth Systems, 12 (9), e2020MS002138.

[4] Becker, T., and A. A. Wing, 2020: Understanding the extreme spread in climate sensitivity within the radiative-convective equilibrium model intercomparison project. Journal of Advances in Modeling Earth Systems, 12 (10), e2020MS002165.

Posted in Climate, Convection

April Flowers – A story of bluebells and frosts

By: Pete Inness

Figure 1: Bluebells in a wood near Reading on the 16th of April 2020 (left) and the same date in 2021 (right). In 2021 the flowers are yet to emerge and there are no leaves on the trees.

Bluebells regularly come out top in surveys of Britain’s favourite wildflower. From mid-April to mid-May they form carpets of lilac-coloured and strongly scented flowers in woodlands from Southern England to Scotland. Reading is particularly well placed for seeing these flowers with the beech woods of the Chilterns to the north of Reading being a favoured location for them. In fact, you don’t even need to leave the University campus as there are several good locations for them within a couple of minutes’ walk from the Meteorology Department.

Since I first came to Reading as a student in the late 1980s I’ve tried to get out into the countryside most years to see the bluebells at their best. This involves careful timing. Back in those early days I would have said that the May Day Bank Holiday weekend was the best time to catch them, but in the intervening 30 years that date has crept earlier, and I’d now say that going out a week or so before the Bank Holiday gives you a better chance of seeing them in their prime.

The main cause of year-to-year variations in the flowering date of bluebells is temperature variability. Whilst, like most woodland flowers, they are primed to get through much of their above-ground life cycle before the leaf canopy gets too thick and cuts down the sunlight reaching the forest floor, flowering can be accelerated or delayed by warmer or colder temperatures through the Spring months.

A few years ago, I decided to turn my interest in bluebells, and the annual cycle of nature in general, into something more productive by running undergraduate projects looking at relationships between weather patterns and the occurrence of events in the natural world.  This has been made possible by an excellent citizen science project called Nature’s Calendar which is run jointly by the Centre for Ecology and Hydrology and the Woodland Trust. This project encourages members of the public to report their sightings of a wide range of natural events such as first flowering of flowers and shrubs, first nest building of common birds, or first appearance of certain species of butterfly and other insects. Using the data recorded by this project, together with weather data such as the Met Office’s Central England Temperature record, students can explore relationships between weather and the annual cycle of the natural world and then relate them to specific weather events such as “the Beast from the East” in 2018 or longer-term changes in climate.

These studies by our students have shown that the flowering date of bluebells is sensitive to the average temperature through February and March – the months when the leaves emerge from the ground and the flower stalks and buds form. Every 1 degree Celsius rise in mean temperature across these months leads to bluebells flowering about 5 days earlier. The average temperature in April seems to have very little impact on the flowering date, and this makes sense: because bluebells produce their first flowers in mid-April (earlier in sheltered spots and in the south of the country), the temperature over the remainder of the month has little bearing on the flowering date.
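The 5-days-per-degree sensitivity lends itself to a one-line predictive sketch. The baseline temperature and flowering date below are round numbers assumed for illustration, not values taken from the student projects:

```python
# Illustrative sketch of the relationship described above: flowering
# shifts ~5 days earlier per 1 deg C rise in Feb-Mar mean temperature.
# The baseline values are assumptions for illustration, not measurements.
BASELINE_TEMP = 5.0      # assumed Feb-Mar mean temperature, deg C
BASELINE_DOY = 105       # assumed baseline flowering day of year (~15 April)
DAYS_PER_DEGREE = -5.0   # sensitivity reported by the student projects

def flowering_day(feb_mar_mean_temp):
    """Predicted flowering day of year for a given Feb-Mar mean temperature."""
    return BASELINE_DOY + DAYS_PER_DEGREE * (feb_mar_mean_temp - BASELINE_TEMP)

print(flowering_day(7.0))  # 2 deg C warmer than baseline -> day 95 (~5 April)
```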

2021 seems to be an exception to that rule. Whilst February 2021 was quite a bit colder than 2020, March 2021 was actually warmer than 2020. The differences in temperature between these 2 years in February and March are nowhere near large enough to explain the difference in the state of the bluebells in the pictures above, both taken on the 16th of April, in 2020 and 2021 respectively. April 2021 has been one of the coolest Aprils in recent years and in Reading has been the frostiest April since 1917. There have been 11 air frosts recorded at our Atmospheric Observatory through the month and only 5 nights in the month when there wasn’t a ground frost. To put these numbers in context, in a typical April in Reading we would expect 2 air frosts.

These frosts effectively slammed the brakes on the flowering process. Bluebells have evolved to avoid exposing the delicate reproductive parts of their flowers to frost and so during the first half of April the flower buds remained closed.  Even now, at the start of May, the bluebells in our local area are still some way behind the dense carpets of flowers that we saw in mid-April last year.

So, this year’s project students will be studying the effect of these exceptional frosts on UK wildlife and looking for their impacts on other plants, trees, birds and insects.

Posted in Climate, Phenology

Some thoughts on future energy supply, such as an “Instantaneous Energy Market”

By: Peter Cook

We all know that it’s time to stop using fossil fuels, due to the greenhouse gases emitted and the finite supply of these fuels.  Many renewable sources of energy are now being adopted, but a lot of work and ingenuity will be needed for these to become the only sources of energy, and most people will need to be involved to make this happen.

A very different energy grid will be needed with multiple supplies (see figure), instead of the few large power companies at present, plus a lot of storage rather than just the National Grid balancing the load.  However, this should be seen as an opportunity, not a problem.

There will be many opportunities for small companies and individuals to get involved, by generating their own electricity to sell, or by storing energy for other people, or by using energy in more efficient ways.  This could encourage a new entrepreneurial society, speeding up the adoption of new technology and the transition from fossil fuels to renewable energy.

A possible way to create the new energy grid would be to set up an “Instantaneous Energy Market”.

Sources of renewable energy are often criticised for being intermittent, and their widespread adoption is dismissed as impractical because of the problems in matching energy supply to demand. These critics claim we need large-scale energy storage or backup sources of energy.  But is this way of thinking correct?  What about matching the demand to the supply instead?

Like other products, electricity can be priced according to supply and demand, and in many places, electricity is already cheaper at night than during the day.  Many of us make use of this, charging our storage heaters and running our washing machines and dishwashers at night, but this has the potential to be taken much further.  Prices could be adjusted second by second according to the instantaneous supply and demand.  Many uses such as heating, water heating and charging do not need to be on continuously and could be stopped for short periods, if demand (and price) became particularly high, without causing much inconvenience.

To do this the electricity supply would need to include a signal to show the price.  In the UK, the mains alternating current frequency is nominally 50 Hz, but it falls slightly when demand exceeds supply, so small changes to the frequency could be used to signal the price.  There could also be information on how the supply, demand and price are changing in the short term, which would be used to predict the price in the very near future (minutes) to help people manage the changing price.  On longer timescales (days) there could be electricity price forecasts, which would depend on the weather (sun and wind for supply, extra demand in cold weather), problems with supply, and large demands (during popular TV shows), which people could use to plan their electricity use and so reduce costs.
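As a sketch of how such a frequency-encoded price signal might be read, the toy function below maps a measured mains frequency to an instantaneous price. The 50 Hz nominal is the UK value; the base price and slope are invented purely for illustration:

```python
# Hypothetical price signal: the further grid frequency falls below its
# 50 Hz nominal (demand exceeding supply), the higher the price.
# The base price and slope are illustrative assumptions.
NOMINAL_HZ = 50.0
BASE_PRICE = 0.15   # pounds per kWh when supply and demand balance
PRICE_SLOPE = 2.0   # pounds per kWh per Hz of frequency shortfall

def instantaneous_price(frequency_hz):
    """Map measured mains frequency to a price per kWh (floored at zero)."""
    shortfall = NOMINAL_HZ - frequency_hz
    return max(0.0, BASE_PRICE + PRICE_SLOPE * shortfall)

print(instantaneous_price(49.75))  # slight shortfall: price rises
print(instantaneous_price(50.25))  # slight surplus: price falls
```

A smart appliance could poll this price every second and simply switch off whenever it exceeds the owner’s chosen threshold.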

People who generate their own electricity (e.g. from solar panels) could sell their excess power, using large batteries to store electricity when it’s cheap and then sell it when the price increases.  Others could just have a large battery to buy electricity cheap and sell dear.  With this control of electricity demand and supply, adding new sources of energy would be easier, and energy suppliers would have less need for backup sources.
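The buy-cheap, sell-dear battery strategy can be sketched as a simple threshold rule. All prices, thresholds and battery sizes below are illustrative assumptions:

```python
# Toy sketch of the "buy cheap, sell dear" battery strategy described above.
def run_battery(prices, buy_below=0.10, sell_above=0.25, capacity=10.0, step=1.0):
    """Charge when the price is low, discharge when high; return (charge, profit)."""
    charge, profit = 0.0, 0.0
    for price in prices:
        if price < buy_below and charge + step <= capacity:
            charge += step
            profit -= price * step   # pay to charge
        elif price > sell_above and charge >= step:
            charge -= step
            profit += price * step   # earn by discharging
    return charge, profit

# One day of hourly prices: cheap overnight, expensive at the evening peak
prices = [0.05] * 6 + [0.15] * 12 + [0.30] * 6
charge, profit = run_battery(prices)
print(charge, round(profit, 2))   # battery empties at the peak, at a profit
```

With many such batteries responding to the same signal, the aggregate effect is exactly the demand smoothing described below.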

With many people adjusting their demand according to price, changes would be smoothed and variations in the price kept to a minimum.  When electricity is cheap, the resulting increase in energy use would lead to a price rise, whilst when electricity is expensive, the resulting drop in demand and increased electricity supply from people selling their own electricity would lead to a price fall.  People would also set their own thresholds of when to use electricity or not so that abrupt jumps in the overall demand would be avoided.  Attempts at profiteering (storing energy to raise the price) would be difficult because of the large amount of storage that would be needed.

The use of instantaneous energy pricing might work better at a local rather than at a national level, and modelling studies are required to see how it would work in practice, identify potential problems and to investigate the extent to which such a system could be scaled up.


The attached figure (but not any of the above text) is from the paper “Smart management system for improving the reliability and availability of substations in smart grid with distributed generation”, by Shady S. Refaat and Amira Mohamed, January 2019, The Journal of Engineering (17), DOI:10.1049/joe.2018.8215


Posted in Climate, Energy meteorology

TerraMaris: Plans, Progress And Setbacks Of Atmospheric Research In Indonesia

By: Emma Howard 

To some of us weather enthusiasts, there’s nothing more exciting than a good tropical thunderstorm. For the best storms, you need a good source of humid air from a warm ocean and a hot land surface. If you can find some mountains to push air upwards and initiate convection (the intense vertical motion of air in updrafts and downdrafts which drive storms) all the better.

Figure 1: Development of convection offshore of West Papua. Photo credit: Megan Howard 

As a volcanic archipelago centred right on the equator, Indonesia has all of this and more. So it’s no surprise that Indonesia is the largest of the three major tropical convective hotspots on Earth. Local lore says that rain comes like clockwork during the wet season, occurring every day at the same time for weeks on end. This is borne out in quantitative rainfall observations, which show that after forming over mountains and land during the mid-afternoon and evening, storms tend to move offshore, with regular night-time and early-morning showers over the oceans and seas adjacent to islands. At present, most atmospheric forecast models (which parameterise atmospheric convection rather than resolving it) don’t represent these diurnally propagating systems very well. This makes it challenging to use these models to predict the timing and intensity of convection in Indonesia.

Unfortunately, some of the more intense thunderstorms can have severe impacts on local communities, particularly when associated with large-scale forcing such as Tropical Cyclone Seroja, which struck Timor-Leste and the Indonesian Nusa Tenggara provinces just two weeks ago. Beyond their immediate impact, these storms have subtle impacts further afield. By condensing water vapour into ice and liquid miles above the earth’s surface, intense storms cause latent heat to be released in the upper atmosphere. This heat source drives the Hadley and Walker cells, global scale atmospheric circulation systems which influence weather and climate across the world, including the UK. For these reasons, scientific research into the convection that occurs in thunderstorms in Indonesia is critical for our understanding of the Earth system and improving climate models.

TerraMaris is a large, collaborative research project that is furthering scientific understanding of atmospheric convection in the Indonesian region. The project involves researchers from three UK universities (East Anglia, Reading and Leeds), the UK Met Office and Indonesia’s weather and space agencies (BMKG and LAPAN). TerraMaris aims to transform our understanding of convective processes in Indonesia and their interactions with the large-scale flow through an intensive observational and modelling campaign focussed on the circulation systems associated with the daily development and offshore propagation of convection.

Thankfully, the modelling component of our project hasn’t been so affected by the pandemic and is chugging away as normal. We’re generating a set of very high-resolution model simulations over the whole of Indonesia that are able to (at least partially) resolve the convective updrafts and downdrafts in the daily-repeating storms. Unlike many lower resolution models, these simulations are capable of accurately simulating offshore propagating convection. We intend to run 10 simulations, each covering the entire December–February rainy season, with one coinciding with the long-awaited field campaign. A wide range of weather conditions will be represented in this sample, and we’ll be able to study the simulated thunderstorms during all of them.

We are able to compare the role these storms play in heating the upper atmosphere to that in more conventional, lower resolution models, which aren’t able to resolve the updrafts and downdrafts and instead have to parameterise them. These models generally don’t represent Indonesian convection very well. It’s early days, but we’re finding that there’s a lot more variability in the height above the ground where heating occurs in the high-resolution models than the low resolution models. Our high-resolution models also simulate the daily formation of storms in the afternoon/evening and their overnight propagation into the oceans really well (see video).

Figure 2: Mean diurnal cycle of precipitation in early TerraMaris simulations.

Because interactions between the atmosphere and the warm tropical oceans are really important in this part of the world, we’re using a carefully designed coupled atmosphere-ocean model to run all these simulations. Full ocean models are very computationally expensive to run, so we’re using a multi-column KPP ocean model in order to simulate turbulent vertical mixing in the near-surface mixed layer. This is the oceanic process that interacts most strongly with the atmosphere, as it transports heat and freshwater fluxes from the atmosphere at the sea surface further down through the upper ocean. The role of ocean currents and other processes are represented by imposing “corrective” sources and sinks of heat and salt which ensure that in the long run, our simulated ocean matches up with observations of the real ocean.
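The idea of a corrective source keeping a simplified ocean close to observations can be sketched in a few lines. This toy single-layer column is far simpler than the multi-column KPP scheme itself, and the depths, fluxes and timescales below are illustrative assumptions:

```python
# Minimal sketch of a flux-corrected ocean column: the mixed-layer
# temperature responds to the surface heat flux, while a corrective term
# relaxes it toward observations so the simulated ocean cannot drift.
RHO = 1025.0   # seawater density, kg m-3
CP = 3990.0    # seawater specific heat, J kg-1 K-1
DEPTH = 30.0   # assumed mixed-layer depth, m
DAY = 86400.0  # seconds per day

def step_sst(sst, surface_flux, sst_obs, relax_days=30.0, dt=DAY):
    """One daily update: surface heating plus relaxation toward observed SST."""
    heating = surface_flux / (RHO * CP * DEPTH)        # K s-1 from the atmosphere
    correction = (sst_obs - sst) / (relax_days * DAY)  # K s-1 corrective nudge
    return sst + (heating + correction) * dt

sst = 28.0
for _ in range(60):   # two months of a steady 100 W m-2 surface heating
    sst = step_sst(sst, 100.0, sst_obs=28.5)
print(round(sst, 2))  # warms, but the correction keeps it near observations
```

Without the corrective term the column would warm by over 4 degrees; with it, the long-run state stays anchored to the observed ocean while still responding to the atmosphere day by day.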

We’re hoping that these simulations will be able to answer some really fundamental questions about how large-scale weather conditions modulate the vertical distribution of convective heating and how important the daily propagating systems are for providing the heat that drives global circulation. This will be useful for improving the representation of Indonesian convection in lower resolution models. If we can improve that, we hope that weather forecasts will improve both locally in Indonesia and globally through interactions with the Hadley and Walker cells. With any luck, by the time we finally step onto that plane, we’ll know a lot more about the storms that we’re trying to observe than we do now!


Posted in Atmospheric circulation, Climate, Convection, Rainfall, Thunder Storms, Tropical convection

Pacific and Atlantic Conversations

By: Daniel Hodson

The Earth is a world of water – oceans spread out across much of the planet and they exert a profound influence over the climate. Ascending from the Earth, the churning waves and surf shrink away and the oceans relax into seemingly silent, passive bodies of water. But this seeming passivity belies a complex network of currents and flows hidden beneath the surface, driven by heat at the equator flowing to the colder poles, but being frustrated in doing so by the spin of the Earth.

Figure 1: The Atlantic Meridional Overturning Circulation

In the Atlantic, an immense flow of water drives northwards towards Greenland and Iceland in the top kilometre of ocean, before plunging down kilometres and returning southwards at depth, towards Antarctica (Figure 1). This is the deep Atlantic Meridional Overturning Circulation (AMOC). This circulation involves such large flows of water that oceanographers had to invent a new unit of measurement to think about the volumes involved: the Sverdrup (Sv) is a million cubic metres per second – that’s a cube of water 100 m on a side flowing past every second. This northward flowing water carries heat with it, sometimes speeding up, sometimes slowing down – bringing more or less heat as it does so, and leading to a warming or cooling of the surface of the ocean. This heat can then be carried away by the atmosphere, leading to warmer air temperatures, or perhaps driving changes in surface wind patterns.
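The Sverdrup arithmetic, and the scale of heat such a flow can carry, can be checked quickly. The 17 Sv overturning strength and 15 K temperature contrast between the warm northward and cold southward branches are illustrative round numbers, not figures quoted in the text:

```python
# Checking the Sverdrup arithmetic above, and the heat an AMOC-like flow
# can carry. The 17 Sv strength and 15 K temperature contrast are
# illustrative round numbers.
SV = 1.0e6    # one Sverdrup, m3 s-1
RHO = 1025.0  # seawater density, kg m-3
CP = 3990.0   # seawater specific heat, J kg-1 K-1

assert 100.0 ** 3 == SV  # a 100 m cube of water per second is indeed 1 Sv

heat_transport = 17 * SV * RHO * CP * 15.0  # watts carried northward
print(f"~{heat_transport / 1e15:.1f} PW")   # of order a petawatt
```

A petawatt is roughly fifty times humanity’s total power consumption, which gives a sense of why variations in this circulation matter for climate.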

If, whilst orbiting over the Pacific, you tuned your eyes away from the blue of the Pacific and into the infrared, you would see what the satellites see: a vast pattern of warm and cold spread out across the expanse of the Pacific Ocean. Over the years, you would see this pattern pulse warm and then cold, in the semi-regular cycle of El Nino: the heartbeat of the climate system which dominates the tropics.

Figure 2: The Pacific Decadal Oscillation pattern

El Nino is driven by complex interactions between the winds blowing over the Pacific Ocean, and the waters sloshing between Asia and the Americas. It leads to a 3-6-year cycle of warming and cooling in the equatorial Pacific Ocean. In the warm phase, large pulses of heat are released from the ocean into the atmosphere, shifting climate patterns leading to droughts and deluges across the globe. Over many decades of watching, a more widespread pattern of warming and cooling emerges across the Pacific – a pattern known as the Pacific Decadal Oscillation (PDO) (Figure 2). The connection between the PDO and El Nino remains to be fully understood.

Both the AMOC and the PDO play a key role in storing and moving heat around; their variations over time, in turn, modulate our climate system, potentially in profound ways. The way these climate features respond to external factors like changing levels of greenhouse gases or industrial pollution may affect the medium-term trajectory of anthropogenic climate change.

Figure 3: The Pacific and Atlantic Oceans

Figure 4: The Tropical Walker Circulation

For a long time, it was thought that these two siblings (AMOC and PDO) continued their existence in ignorance of the other; bounded by Africa and Eurasia but divided by the Americas (Figure 3). They may hear distant echoes of each other, mediated by the turbulent Southern Ocean around Antarctica, or the icy Arctic Ocean – but signals in the ocean are ponderous, slow and noisy. New simulations with modern complex climate models suggest that they hear and feel each other’s presence over, rather than around, the wall of the Americas; mediated by the atmosphere. The Walker circulation is the large-scale pattern of ascending and descending air one encounters when travelling around the equator (Figure 4). Air heated and pushed upwards by a warm ocean in one place must be replaced by descending air elsewhere in the tropics. This circulation seems to allow the two oceans to talk to and influence each other. Climate model simulations [1][2] seem to show that, over many decades, a warmer Atlantic can nudge a cooler Pacific Ocean, whilst a warmer Pacific Ocean can lead to a warmer Atlantic.

Whilst we are seeing a clearer picture of how these two oceans coordinate their climate modulations, challenges remain. Many decades of observations are needed to understand the slow influences of these twin oceans – but whilst the 21st-century ocean is well observed, ocean observations before 1950 are much scarcer. Remarkable efforts are underway, however, to utilize the vast datasets buried in old ships’ logs. We also rely on climate models to tease apart the complex interactions in the climate system. Are the models we use accurate enough? Are we doing the right experiments with these models to understand how these features of climate interact? If we can begin to understand the conversation between these two oceans better, we may be better able to predict their future influences on climate and, in turn, on us.


[1] Meehl, G. A., and Coauthors, 2021: Atlantic and Pacific tropics connected by mutually interactive decadal-timescale processes. Nat. Geosci., 14, 36–42.

[2] Ruprich-Robert, Y., Msadek, R., Castruccio, F., Yeager, S., Delworth, T., & Danabasoglu, G., 2017: Assessing the Climate Impacts of the Observed Atlantic Multidecadal Variability Using the GFDL CM2.1 and NCAR CESM1 Global Coupled Models, Journal of Climate, 30(8), 2785-2810.

Posted in Climate