Cold Winter Weather: Despite or Because of Global Warming?

By: Marlene Kretschmer

This year’s winter was cold. There was heavy snowfall across the UK, Europe and parts of the United States including Texas. This severe weather came with significant societal and economic impacts.

Every time cold extremes like this occur, one can almost predict the media headlines. On the one hand, dubious outlets will use a regional cold snap to sow doubt about human-made global warming by deliberately conflating weather with climate. In a similarly absurd manner, other newspapers will declare that climate change caused the cold snap. In between, there are genuine debates among scientists about the role of climate change in cold extremes. This is where it gets complicated and, hence, interesting.

Climate change manifests itself in different ways. While the increase of CO2 in the atmosphere leads to warmer temperatures globally, there may be indirect mechanisms causing opposite effects regionally. In recent years, researchers have hypothesised that the melting of Arctic sea ice – a direct result of global warming – favours winter cold extremes in the Northern Hemisphere mid-latitudes. In particular, it has been suggested that the decline in Barents and Kara sea ice weakens the stratospheric polar vortex, a band of fast-blowing westerly winds circling the Arctic during winter at approximately 15–50 km altitude. Weak phases of the vortex are linked to cold winter weather in Eurasia and North America. In other words, it was proposed that climate change indirectly leads to colder weather. The polar vortex this year was extremely weak, making it the likely culprit behind the cold weather. But are Arctic changes also making these weak vortex phases more likely?

Figure 1: Schematic overview of the different plausible causal mechanisms making it difficult to quantify the influence of autumn Barents and Kara sea ice concentrations (BK-SIC) on the winter stratospheric polar vortex (SPV); sea level pressure over the Ural Mountains (Ural-SLP) and over the North Pacific (NP-SLP), lower-stratospheric poleward eddy heat flux (vT), North Pacific sea ice concentrations (NP-SIC) and El Niño–Southern Oscillation/Madden–Julian Oscillation (ENSO/MJO). The arrows represent assumed causal relationships. (Taken from Kretschmer et al, 2020)

The scientific debate regarding a causal role of Arctic sea ice loss is controversial (see e.g. Cohen et al. 2020, Screen et al. 2018). Scientists face a dilemma. In observational data, a statistically significant signal has been detected. Given the large natural variations in climate data and the different possible mechanisms, which are difficult to disentangle, it is hard to tell whether this signal reflects a causal influence (see also Fig. 1). This is further compounded by partly opposing results from climate model simulations. So far, all that can be said with confidence is that the question of whether the decline of Arctic sea ice is weakening the polar vortex cannot yet be answered conclusively.

But should we ignore the potential risk the decline of the Arctic holds for our future weather and climate, just because the current data do not allow a clear statement? The short answer is: No!

We explore this aspect in our latest study (see Kretschmer et al. 2020). In contrast to previous studies, which examined whether the decrease in sea ice causes a weakening of the polar vortex (and thus severe winter weather), we pose a different question. We ask: Assuming there is a causal influence of sea ice loss, what does this imply?

To address this question we use different climate model simulations of the next 100 years. All climate projections agree that sea ice will continue to melt as climate change progresses. This is a sad but unsurprising fact highlighting the need to evaluate possible consequences of a changing Arctic. Based on the model simulation data and using methods from causal inference, we further conclude that the causal effect of Arctic sea ice on the polar vortex is, if it exists, plausibly only very small. However, given that the decrease of sea ice will be huge, this small effect can have large implications. In fact, the climate models project a weakening of the polar vortex as long as the autumn sea ice in the Barents and Kara Sea melts. Whilst this is no definitive proof for a causal influence of sea ice loss, it is consistent with the initial hypothesis. Moreover, we find that once all sea ice is gone, the vortex strengthens again, suggesting there are other, poorly understood mechanisms by which global warming affects the polar vortex and thereby our weather in the mid-latitudes.
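The core of the argument is simple arithmetic: even a small per-unit causal effect matters when the driver changes by a lot. The numbers below are invented purely for illustration; the study estimates the effect from model simulations, not from these values.

```python
# Illustrative arithmetic only (all numbers invented): a small causal effect
# per unit of sea-ice loss, multiplied by a near-total projected decline,
# still yields a noticeable change in the polar vortex.
effect_per_percent_loss = -0.02   # assumed vortex wind change (m/s) per % of
                                  # autumn Barents-Kara sea ice lost
projected_loss_percent = 100      # models project near-total autumn ice loss

vortex_change = effect_per_percent_loss * projected_loss_percent
print(round(vortex_change, 1))    # → -2.0 (m/s): small effect, large total
```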

More generally, our study calls for more focus on understanding plausible climate-change related risks. Absolute statements about the regional effects of global warming are often not possible, given the complexity of the climate system and often contradictory climate predictions. This forces decision makers to act under large uncertainties. It is therefore necessary for climate scientists to evaluate different causal possibilities (such as an influence of the sea ice loss on the polar vortex) to gain a better understanding of regional climate risks. This also requires the use of different statistical tools and techniques – some of which we apply and discuss in our study.

The next time a cold snap hits Europe the same oversimplistic media headlines can be expected. Hopefully, however, the scientific debate will then have shifted towards a more conditional risk-based understanding of the plausible impacts of the changing Arctic.


Cohen, J., Zhang, X., Francis, J. et al. Divergent consensuses on Arctic amplification influence on midlatitude severe winter weather. Nat. Clim. Chang. 10, 20–29 (2020).

Screen, J.A., Deser, C., Smith, D.M. et al. Consistency and discrepancy in the atmospheric response to Arctic sea-ice loss across climate models. Nature Geosci 11, 155–163 (2018).

Kretschmer, M., Zappa, G., and Shepherd, T. G.: The role of Barents–Kara sea ice loss in projected polar vortex changes, Weather Clim. Dynam., 1, 715–730 (2020).


Posted in Arctic, Climate, Climate change, Cryosphere, Polar

What Did You Get For Number 9?

By: Todd Jones

A common way to check your work in school is to turn to your neighbour and ask, “What did you get for this one?”  With a little extra effort, though, students end up having productive discussions, learning to solve problems they didn’t fully understand or discovering new, clearer routes to the solution.  Even for broadly defined questions, comparing answers within a group can lead to consensus or at least a narrowing of the possibilities.

Particularly effective teachers encourage and schedule these comparison sessions. Scientists, ever the continual students, bring this technique to their research, aspiring to uncover solutions to challenging problems they have not previously considered by comparing their research with that of others. 

While the methods of solving 23÷1.4 with pen and paper vary little, questions about the motions of the atmosphere are not always so well constrained.  The many sensitive equations that describe these motions can often only be solved approximately, and scientists may reasonably choose from a number of approximations (with varying levels of accuracy) based on practical issues, such as how powerful their computers are.  For example, these calculations are often very large problems that require the atmosphere to be divided into a number of points where calculations about temperature, wind, and rain can be performed.  Between the points, these parameters must be approximated with something like a “best guess.”  Each of these justifiable choices will lead to differing solutions that can generate years of classroom-style “compare and discuss” activity.
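In the simplest case, the “best guess” between grid points is just linear interpolation. A minimal sketch, with invented grid values:

```python
import numpy as np

# Hypothetical coarse model grid: temperature sampled every 100 km
x_grid = np.array([0.0, 100.0, 200.0, 300.0])   # distance along a line, km
t_grid = np.array([15.0, 14.0, 12.5, 13.0])     # temperature at each point, degC

def temperature_at(x_km):
    """'Best guess' between grid points via linear interpolation."""
    return np.interp(x_km, x_grid, t_grid)

print(round(temperature_at(150.0), 2))   # → 13.25, halfway between 14.0 and 12.5
```

Real models use more sophisticated schemes, but the principle – filling the gaps between resolved points with a justifiable approximation – is the same.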

For example, we could compare solutions to simple models of the atmosphere.  One can remove complications of the real world and create close approximations that allow an easier solution.  For instance, picture a non-orbiting, non-rotating world that is entirely warm ocean, where the oscillations of night and day are replaced by constant moderate sunshine.  The “world” doesn’t even have to be a sphere!  Modelling the atmosphere of this world, we would see that the atmosphere cools off gradually, radiating energy to space.  As the lower atmosphere warms from the ocean’s heat, moist convective bubbles begin to rise and then cool, forming clouds and rain.  Over time, the heat from condensation of water vapour into clouds and rain balances out the radiative cooling of the air.  We call this energetic balancing “radiative-convective equilibrium,” or RCE.  This model is a close approximation to Earth’s climate, and it can be used as a “toy Earth” to learn how the climate might change in response to parameter changes.
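The equilibrium itself is just an energy budget: latent heat released by rainfall must match the radiative cooling of the column. A back-of-the-envelope sketch, with round numbers chosen purely for illustration (real columns and cooling rates vary):

```python
# Toy radiative-convective equilibrium budget (illustrative numbers only).
L_V = 2.5e6           # latent heat of vaporisation, J/kg
CP = 1004.0           # specific heat of air at constant pressure, J/(kg K)
COLUMN_MASS = 1.0e4   # approximate mass of an atmospheric column, kg/m^2

def convective_heating_k_per_day(precip_mm_per_day):
    """Column-mean heating from condensation; 1 mm/day of rain = 1 kg/m^2/day."""
    return L_V * precip_mm_per_day / (CP * COLUMN_MASS)

# With these numbers, ~4 mm/day of rain balances ~1 K/day of radiative cooling:
print(round(convective_heating_k_per_day(4.0), 2))   # → 1.0 (K/day)
```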

Figure 1.  A scattered deep convective cloud scene from a simulation of the climate of a simplified world in radiative-convective equilibrium with an ocean constantly at 22°C in the UK Met Office model.  The back left wall shows a slice of relative humidity (hur). The back right wall shows a slice of specific humidity (water vapour concentration, hus).  The bottom surface shows the total amount of water vapour in the column of air above each point (prw).  Cloud surfaces are coloured for various levels of frozen cloud particles (cli), liquid cloud droplets (clw), and rain (plw).  Orange arrows show the velocity of the wind near the model surface.

Playing with these models over the past few decades, scientists have noticed some intriguing behaviour.  Choosing different global temperatures, we can investigate how clouds respond to global warming: will more reflective clouds spread and counter the warming?  Much of the time, the deep convective clouds that are generated in these models appear as one might guess: randomly scattered, sputtering across the little world (Figure 1).  However, when oceans are warm enough or the modelled worlds are sufficiently large, the deep convective clouds can spontaneously cluster into isolated locations, with very dry regions in between (Figure 2).  News of this phenomenon spurred dozens of independent studies for comparison [1], and scientists began to uncover that phenomena like interactions between radiation and clouds can lead to this convective clustering.

Figure 2.  A clustered deep convective cloud scene from a simulation of the climate of a simplified world in radiative-convective equilibrium with an ocean constantly at 32°C in the UK Met Office model.  Features are as described in Figure 1.

For a fair test, the parameters used by each model should be the same, so a group of scientists gathered to define a specific set of parameters for testing warming in RCE climates across models of many geometries and scales.  These “rules” were codified and shared [2], and volunteers reported their solutions for comparison [3].  Formally, this is an intercomparison of models simulating RCE, known as RCEMIP.  The 30+ model configurations varied in domain size, from regions roughly 100 km across up to the full globe, and in level of detail (resolution), from 200 m to 50 km.

Though there were many small differences between the model results, there was broad agreement on the formation of aggregating clusters.  Over small areas, only one model (that in Figure 2) developed convective clusters, whereas over large areas, all but a few models developed them.  The deep clouds in most models showed that, as the world warms, anvil tops become warmer, sit higher in the atmosphere, and cover smaller areas.  This means that the effect of high cloud tops on climate would vary little under global warming.  Instead, it is changes in low-cloud properties and in the degree of convective clustering that can influence the climate response [4].

Compared to high-resolution models, lower-resolution global models show a change in clustering with global warming that implies a smaller amount of warming for a given greenhouse gas forcing.  Because the higher-resolution models tend to be more accurate, it’s possible that coarser climate models have painted too rosy a picture of future warming.

Though there is disagreement, there is much to be said for comparing solutions.  Many investigations comparing model patterns are underway, ultimately steering toward a better-understood solution of the climate system problem.


[1] Wing, A. A., K. Emanuel, C. E. Holloway, and C. Muller, 2017: Convective self-aggregation in numerical simulations: A review. Surveys in Geophysics, 38 (6), 1173–1197, doi:10.1007/s10712-017-9408-4.

[2] Wing, A. A., K. A. Reed, M. Satoh, B. Stevens, S. Bony, and T. Ohno, 2018: Radiative–convective equilibrium model intercomparison project. Geoscientific Model Development, 11 (2), 793–813, doi:10.5194/gmd-11-793-2018.

[3] Wing, A. A., and Coauthors, 2020: Clouds and convective self-aggregation in a multimodel ensemble of radiative-convective equilibrium simulations. Journal of Advances in Modeling Earth Systems, 12 (9), e2020MS002138.

[4] Becker, T., and A. A. Wing, 2020: Understanding the extreme spread in climate sensitivity within the radiative-convective equilibrium model intercomparison project. Journal of Advances in Modeling Earth Systems, 12 (10), e2020MS002165.

Posted in Climate, Convection

April Flowers – A story of bluebells and frosts

By: Pete Inness

Figure 1: Bluebells in a wood near Reading on the 16th of April 2020 (left) and the same date in 2021 (right). In 2021 the flowers are yet to emerge and there are no leaves on the trees.

Bluebells regularly come out top in surveys of Britain’s favourite wildflower. From mid-April to mid-May they form carpets of lilac-coloured and strongly scented flowers in woodlands from Southern England to Scotland. Reading is particularly well placed for seeing these flowers with the beech woods of the Chilterns to the north of Reading being a favoured location for them. In fact, you don’t even need to leave the University campus as there are several good locations for them within a couple of minutes’ walk from the Meteorology Department.

Since I first came to Reading as a student in the late 1980s, I’ve tried to get out into the countryside most years to see the bluebells at their best. This involves careful timing. Back in those early days I would have said that the May Day Bank Holiday weekend was the best time to catch them, but in the intervening 30 years that date has crept earlier, and I’d now say that going out a week or so before the Bank Holiday gives you a better chance of seeing them in their prime.

The main cause of year-to-year variation in the flowering date of bluebells is temperature variability. Like most woodland flowers, they are primed to get through much of their above-ground life cycle before the leaf canopy gets too thick and cuts down the sunlight reaching the forest floor, but flowering can still be accelerated or delayed by warmer or colder temperatures through the spring months.

A few years ago, I decided to turn my interest in bluebells, and the annual cycle of nature in general, into something more productive by running undergraduate projects looking at relationships between weather patterns and the occurrence of events in the natural world.  This has been made possible by an excellent citizen science project called Nature’s Calendar which is run jointly by the Centre for Ecology and Hydrology and the Woodland Trust. This project encourages members of the public to report their sightings of a wide range of natural events such as first flowering of flowers and shrubs, first nest building of common birds, or first appearance of certain species of butterfly and other insects. Using the data recorded by this project, together with weather data such as the Met Office’s Central England Temperature record, students can explore relationships between weather and the annual cycle of the natural world and then relate them to specific weather events such as “the Beast from the East” in 2018 or longer-term changes in climate.

These studies by our students have shown that the flowering date of bluebells is sensitive to the average temperature through February and March – the months when the leaves emerge from the ground and the flower stalks and buds form. Every 1 degree Celsius rise in mean temperature across these months leads to bluebells flowering about 5 days earlier. The average temperature in April seems to have very little impact on the flowering date, and this makes sense: because bluebells produce their first flowers in mid-April (earlier in sheltered spots and in the south of the country), the temperature over the remainder of the month comes too late to matter.
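That relationship is simple enough to write down directly. The baseline numbers below are invented for illustration (a real baseline would come from the Nature’s Calendar records); only the 5-days-per-degree sensitivity comes from the student analyses described above:

```python
# Sketch of the reported sensitivity: ~5 days earlier flowering per 1 degC
# rise in mean February-March temperature. Baseline values are hypothetical.
BASELINE_FEB_MAR_TEMP_C = 6.0    # assumed long-term Feb-Mar mean, degC
BASELINE_FLOWERING_DOY = 105     # assumed mid-April flowering (day of year)
DAYS_EARLIER_PER_DEGC = 5

def predicted_flowering_doy(feb_mar_mean_c):
    """Shift the flowering date by ~5 days per degree of Feb-Mar warming."""
    shift = DAYS_EARLIER_PER_DEGC * (feb_mar_mean_c - BASELINE_FEB_MAR_TEMP_C)
    return BASELINE_FLOWERING_DOY - shift

print(predicted_flowering_doy(7.0))   # → 100.0: a spring 1 degC warmer than
                                      #   baseline flowers 5 days earlier
```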

2021 seems to be an exception to that rule. Whilst February 2021 was quite a bit colder than 2020, March 2021 was actually warmer than 2020. The differences in temperature between these 2 years in February and March are nowhere near large enough to explain the difference in the state of the bluebells in the pictures above, both taken on the 16th of April, in 2020 and 2021 respectively. April 2021 has been one of the coolest Aprils in recent years and in Reading has been the frostiest April since 1917. There have been 11 air frosts recorded at our Atmospheric Observatory through the month and only 5 nights in the month when there wasn’t a ground frost. To put these numbers in context, in a typical April in Reading we would expect 2 air frosts.

These frosts effectively slammed the brakes on the flowering process. Bluebells have evolved to avoid exposing the delicate reproductive parts of their flowers to frost and so during the first half of April the flower buds remained closed.  Even now, at the start of May, the bluebells in our local area are still some way behind the dense carpets of flowers that we saw in mid-April last year.

So, this year’s project students will be studying the effect of these exceptional frosts on UK wildlife and looking for their impacts on other plants, trees, birds and insects.

Posted in Climate, Phenology

Some thoughts on future energy supply: an “Instantaneous Energy Market”

By: Peter Cook

We all know that it’s time to stop using fossil fuels, both because of the greenhouse gases emitted and because these fuels are finite.  Many renewable sources of energy are now being adopted, but a lot of work and ingenuity will be needed for these to become the only sources of energy, and most people will need to be involved to make this happen.

A very different energy grid will be needed with multiple supplies (see figure), instead of the few large power companies at present, plus a lot of storage rather than just the National Grid balancing the load.  However, this should be seen as an opportunity not a problem.

There will be many opportunities for small companies and individuals to get involved, by generating their own electricity to sell, or by storing energy for other people, or by using energy in more efficient ways.  This could encourage a new entrepreneurial society, speeding up the adoption of new technology and the transition from fossil fuels to renewable energy.

A possible way to create the new energy grid would be to set up an “Instantaneous Energy Market”.

Sources of renewable energy are often criticised for being intermittent, and their widespread adoption is dismissed as impractical because of the problems in matching energy supply to demand. These critics claim we need large-scale energy storage or backup sources of energy.  But is this way of thinking correct?  What about matching the demand to the supply instead?

Like other products, electricity can be priced according to supply and demand, and in many places, electricity is already cheaper at night than during the day.  Many of us make use of this, charging our storage heaters and running our washing machines and dishwashers at night, but this has the potential to be taken much further.  Prices could be adjusted second by second according to the instantaneous supply and demand.  Many uses such as heating, water heating and charging do not need to be on continuously and could be stopped for short periods, if demand (and price) became particularly high, without causing much inconvenience.

To do this the electricity supply would need to include a signal to show the price.  At present, the UK mains alternating current has a nominal frequency of 50 Hz, but the frequency falls slightly when supply is tight, so small changes in the frequency could be used to signal the price.  There could also be information on how the supply, demand and price are changing in the short term, which would be used to predict the price in the very near future (minutes) to help people manage the changing price.  On longer timescales (days) there could be electricity price forecasts – depending on the weather (sun and wind for supply, extra demand in cold weather), problems with supply, and large demands (during popular TV shows) – which people could use to plan their electricity use and so reduce costs.
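As a rough sketch of the idea, a smart appliance could map the measured frequency to a price and compare it against its owner’s threshold. The 50 Hz nominal frequency is real; the linear price mapping and all the numbers are invented for illustration:

```python
# Hypothetical frequency-to-price signal (all numbers invented).
NOMINAL_HZ = 50.0        # nominal UK mains frequency
BASE_PRICE = 0.20        # assumed price in pounds/kWh at exactly 50 Hz
PRICE_PER_HZ_LOW = 2.0   # assumed price rise per Hz below nominal

def price_from_frequency(freq_hz):
    """Lower frequency (supply tight) -> higher price, and vice versa."""
    return BASE_PRICE + PRICE_PER_HZ_LOW * (NOMINAL_HZ - freq_hz)

def should_run_appliance(freq_hz, owner_threshold):
    """Defer a flexible load (heating, charging) when the price is too high."""
    return price_from_frequency(freq_hz) <= owner_threshold

print(round(price_from_frequency(49.9), 2))   # → 0.4 (supply tight, price up)
print(should_run_appliance(50.05, 0.25))      # → True (cheap period, run now)
```

With many appliances each holding slightly different thresholds, overall demand would step down gradually as the price rises, rather than switching off all at once.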

People who generate their own electricity (e.g. from solar panels) could sell their excess power, using large batteries to store electricity when it’s cheap and then sell it when the price increases.  Others could just have a large battery to buy electricity cheap and sell dear.  With this control of electricity demand and supply, adding new sources of energy would be easier, and energy suppliers would have less need for backup sources.

With many people adjusting their demand according to price, changes would be smoothed and variations in the price kept to a minimum.  When electricity is cheap, the resulting increase in energy use would lead to a price rise, whilst when electricity is expensive, the resulting drop in demand and increased electricity supply from people selling their own electricity would lead to a price fall.  People would also set their own thresholds of when to use electricity or not so that abrupt jumps in the overall demand would be avoided.  Attempts at profiteering (storing energy to raise the price) would be difficult because of the large amount of storage that would be needed.

The use of instantaneous energy pricing might work better at a local rather than at a national level, and modelling studies are required to see how it would work in practice, identify potential problems and to investigate the extent to which such a system could be scaled up.


The attached figure (but not any of the above text) is from the paper “Smart management system for improving the reliability and availability of substations in smart grid with distributed generation”, by Shady S. Refaat and Amira Mohamed, January 2019, The Journal of Engineering (17), DOI:10.1049/joe.2018.8215


Posted in Climate, Energy meteorology

TerraMaris: Plans, Progress And Setbacks Of Atmospheric Research In Indonesia

By: Emma Howard 

To some of us weather enthusiasts, there’s nothing more exciting than a good tropical thunderstorm. For the best storms, you need a good source of humid air from a warm ocean and a hot land surface. If you can find some mountains to push air upwards and initiate convection (the intense vertical motion of air in updrafts and downdrafts which drive storms) all the better.

Figure 1: Development of convection offshore of West Papua. Photo credit: Megan Howard 

As a volcanic archipelago centred right on the equator, Indonesia has all of this and more. So it’s no surprise that Indonesia is the largest of the three major tropical convective hotspots on Earth. Local lore says that rain comes like clockwork during the wet season, occurring every day at the same time for weeks on end. This is borne out in quantitative rainfall observations, which show that after forming over mountains and land during the mid-afternoon and evening, storms tend to move offshore, with regular night-time and early-morning showers over the oceans and seas adjacent to islands. At present, most atmospheric forecast models (which parameterise atmospheric convection rather than resolving it) don’t represent these diurnally propagating systems very well. This makes it challenging to use these models to predict the timing and intensity of convection in Indonesia.

Unfortunately, some of the more intense thunderstorms can have severe impacts on local communities, particularly when associated with large-scale forcing such as Tropical Cyclone Seroja, which struck Timor-Leste and the Indonesian Nusa Tenggara provinces just two weeks ago. Beyond their immediate impact, these storms have subtle impacts further afield. By condensing water vapour into ice and liquid miles above the earth’s surface, intense storms cause latent heat to be released in the upper atmosphere. This heat source drives the Hadley and Walker cells, global scale atmospheric circulation systems which influence weather and climate across the world, including the UK. For these reasons, scientific research into the convection that occurs in thunderstorms in Indonesia is critical for our understanding of the Earth system and improving climate models.

TerraMaris is a large, collaborative research project that is furthering scientific understanding of atmospheric convection in the Indonesian region. The project involves researchers from three UK universities (East Anglia, Reading and Leeds), the UK Met Office and Indonesia’s weather and space agencies (BMKG and LAPAN). TerraMaris aims to transform our understanding of convective processes in Indonesia and their interactions with the large-scale flow through an intensive observational and modelling campaign focussed on the circulation systems associated with the daily development and offshore propagation of convection.

Thankfully, the modelling component of our project hasn’t been so affected by the pandemic and is chugging away as normal. We’re generating a set of very high-resolution model simulations over the whole of Indonesia that are able to (at least partially) resolve the convective updrafts and downdrafts in the daily-repeating storms. Unlike many lower-resolution models, these simulations are capable of accurately simulating offshore-propagating convection. We intend to run 10 simulations, each covering an entire December–February rainy season, with one coinciding with the long-awaited field campaign. A wide range of weather conditions will be represented in this sample, and we’ll be able to study the simulated thunderstorms during all of them.

We are able to compare the role these storms play in heating the upper atmosphere to that in more conventional, lower resolution models, which aren’t able to resolve the updrafts and downdrafts and instead have to parameterise them. These models generally don’t represent Indonesian convection very well. It’s early days, but we’re finding that there’s a lot more variability in the height above the ground where heating occurs in the high-resolution models than the low resolution models. Our high-resolution models also simulate the daily formation of storms in the afternoon/evening and their overnight propagation into the oceans really well (see video).

Figure 2: Mean diurnal cycle of precipitation in early TerraMaris simulations.

Because interactions between the atmosphere and the warm tropical oceans are really important in this part of the world, we’re using a carefully designed coupled atmosphere-ocean model to run all these simulations. Full ocean models are very computationally expensive to run, so we’re using a multi-column KPP ocean model in order to simulate turbulent vertical mixing in the near-surface mixed layer. This is the oceanic process that interacts most strongly with the atmosphere, as it transports heat and freshwater fluxes from the atmosphere at the sea surface further down through the upper ocean. The role of ocean currents and other processes are represented by imposing “corrective” sources and sinks of heat and salt which ensure that in the long run, our simulated ocean matches up with observations of the real ocean.
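The “corrective” sources and sinks amount to nudging each simulated ocean column back toward observations on a slow timescale. A minimal single-column sketch, with all numbers invented for illustration (the real model applies corrections to heat and salt through the mixed layer, not just surface temperature):

```python
# Toy single-column "flux correction": relax simulated SST toward an
# observed climatology so that slow drifts from unrepresented processes
# (e.g. ocean currents) cannot accumulate. All numbers are illustrative.
DT_DAYS = 1.0
RELAX_TIMESCALE_DAYS = 15.0   # assumed relaxation timescale
OBS_SST_C = 28.0              # assumed observed climatological SST, degC

def step_sst(sst_c, atmos_heating_degc_per_day):
    """One daily step: atmospheric forcing plus the corrective relaxation."""
    correction = (OBS_SST_C - sst_c) / RELAX_TIMESCALE_DAYS
    return sst_c + DT_DAYS * (atmos_heating_degc_per_day + correction)

sst = 26.0                     # start the column 2 degC too cold
for _ in range(200):           # with zero net atmospheric heating...
    sst = step_sst(sst, 0.0)
print(round(sst, 2))           # → 28.0: the column relaxes to observations
```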

We’re hoping that these simulations will be able to answer some really fundamental questions about how large-scale weather conditions modulate the vertical distribution of convective heating and how important the daily propagating systems are for providing the heat that drives global circulation. This will be useful for improving the representation of Indonesian convection in lower resolution models. If we can improve that, we hope that weather forecasts will improve both locally in Indonesia and globally through interactions with the Hadley and Walker cells. With any luck, by the time we finally step onto that plane, we’ll know a lot more about the storms that we’re trying to observe than we do now!


Posted in Atmospheric circulation, Climate, Convection, Rainfall, Thunder Storms, Tropical convection

Pacific and Atlantic Conversations

By: Daniel Hodson

The Earth is a world of water – oceans spread out across much of the planet and they exert a profound influence over the climate. Ascending from the Earth, the churning waves and surf shrink away and the oceans relax into seemingly silent, passive bodies of water. But this seeming passivity belies a complex network of currents and flows hidden beneath the surface, driven by heat at the equator flowing to the colder poles, but being frustrated in doing so by the spin of the Earth.

Figure 1: The Atlantic Meridional Overturning Circulation

In the Atlantic, an immense flow of water drives northwards towards Greenland and Iceland in the top kilometre of ocean, before plunging down kilometres and returning southwards at depth, towards Antarctica (Figure 1). This is the deep Atlantic Meridional Overturning Circulation (AMOC). This circulation involves such large flows of water that oceanographers had to invent a new unit of measurement to think about the volumes involved: the Sverdrup (Sv) is a million metres cubed per second – that’s a cube of water 100m on a side, flowing past every second. This northward flowing water carries heat with it, sometimes speeding up, sometimes slowing down – bringing more or less heat as it does so, leading to warming or a cooling of the surface of the ocean. This heat can then be carried away by the atmosphere leading to warmer air temperature, or perhaps driving changes in surface wind patterns.
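The unit is easy to sanity-check. The ~17 Sv figure below is a commonly quoted observational estimate of the AMOC’s northward transport, included only for scale; it is an assumption, not a number from this post:

```python
# Sanity check of the Sverdrup: a cube of water 100 m on each side flowing
# past per second is exactly one million cubic metres per second.
cube_side_m = 100
one_sverdrup_m3_s = 1_000_000          # definition: 1 Sv = 1e6 m^3/s

assert cube_side_m ** 3 == one_sverdrup_m3_s

# For scale (assumed figure): the AMOC carries roughly 17 Sv northwards.
amoc_sv = 17
print(amoc_sv * one_sverdrup_m3_s)     # → 17000000 cubic metres per second
```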

If, whilst orbiting over the Pacific, you tuned your eyes away from the blue of the Pacific and into the infrared, you would see what the satellites see: a vast pattern of warm and cold spread out across the expanse of the Pacific Ocean. Over the years, you would see this pattern pulse warm and then cold, in the semi-regular cycle of El Niño: the heartbeat of the climate system which dominates the tropics.

Figure 2: The Pacific Decadal Oscillation pattern

El Niño is driven by complex interactions between the winds blowing over the Pacific Ocean and the waters sloshing between Asia and the Americas. It leads to a 3–6-year cycle of warming and cooling in the equatorial Pacific Ocean. In the warm phase, large pulses of heat are released from the ocean into the atmosphere, shifting climate patterns and leading to droughts and deluges across the globe. Over many decades of watching, a more widespread pattern of warming and cooling emerges across the Pacific – a pattern known as the Pacific Decadal Oscillation (PDO) (Figure 2). The connection between the PDO and El Niño remains to be fully understood.

Both the AMOC and the PDO play a key role in storing and moving heat around; their variations over time, in turn, modulate our climate system, potentially in profound ways. The way these climate features respond to external factors like changing levels of greenhouse gases or industrial pollution may affect the medium-term trajectory of anthropogenic climate change.

Figure 3: The Pacific and Atlantic Oceans

Figure 4: The Tropical Walker Circulation

For a long time, it was thought that these two siblings (AMOC and PDO) continued their existence in ignorance of each other; bounded by Africa and Eurasia but divided by the Americas (Figure 3). They may hear distant echoes of each other, mediated by the turbulent Southern Ocean around Antarctica, or the icy Arctic Ocean – but signals in the ocean are ponderous, slow and noisy. New simulations with modern complex climate models suggest that they hear and feel each other’s presence over, rather than around, the wall of the Americas; mediated by the atmosphere. The Walker circulation is the large-scale pattern of ascending and descending air one encounters when travelling around the equator (Figure 4). Air heated and pushed upwards by a warm ocean in one place must be replaced by descending air elsewhere in the tropics. This circulation seems to allow the two oceans to talk to and influence each other. Climate model simulations [1][2] seem to show that, over many decades, a warmer Atlantic can nudge a cooler Pacific Ocean, whilst a warmer Pacific Ocean can lead to a warmer Atlantic.

Whilst we are seeing a clearer picture of how these two oceans coordinate their climate modulations, challenges remain. Many decades of observations are needed to understand the slow influences of these twin oceans – but whilst the 21st-century ocean is well observed, ocean observations before 1950 are much scarcer. Remarkable efforts are underway, however, to utilise the vast datasets buried in old ships’ logs. We also rely on climate models to tease apart the complex interactions in the climate system. Are the models we use accurate enough? Are we doing the right experiments with these models to understand how these features of climate interact? If we can begin to understand the conversation between these two oceans better, we may be better able to predict their future influences on climate and, in turn, on us.


[1] Meehl, G.A., and Coauthors, 2021: Atlantic and Pacific tropics connected by mutually interactive decadal-timescale processes. Nat. Geosci. 14, 36–42 .

[2] Ruprich-Robert, Y., Msadek, R., Castruccio, F., Yeager, S., Delworth, T., & Danabasoglu, G., 2017: Assessing the Climate Impacts of the Observed Atlantic Multidecadal Variability Using the GFDL CM2.1 and NCAR CESM1 Global Coupled Models, Journal of Climate, 30(8), 2785-2810.

Posted in Climate

Satellite data used to provide life-saving weather forecasts in tropical Africa

By: Peter Hill

Much of the population of tropical Africa are vulnerable to severe weather, often caused by intense storms that can generate heavy rainfall, strong winds and flooding. For instance, thousands of fishermen drown each year in Lake Victoria as a result of accidents caused by storms. As a result, improved weather forecasting systems in tropical Africa could save lives and protect livelihoods.

The Global Challenges Research Fund (GCRF) African Science for Weather Information and Forecasting Techniques (SWIFT) project aims to enable African weather forecasting services to develop such improved weather forecasting systems. A partnership between meteorologists from Senegal, Ghana, Nigeria, Kenya and the UK, including several scientists at the University of Reading, SWIFT is striving to improve forecasts from timescales of a few hours to a few weeks ahead.

Much of my work in the SWIFT project involves very short-range predictions – from 0 to 12 hours ahead – based directly on observations, something meteorologists term “nowcasting”. The simplest nowcasts take weather observations and extrapolate them forwards in time, using the assumption that the weather will continue to develop along the same trajectory as the recent past. Nowcasts can be crucial for severe weather events, providing timely information to enable authorities and the public to respond appropriately to safeguard lives and livelihoods.
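The simplest flavour of extrapolation nowcast can be made concrete in a few lines. This is a toy sketch only – operational systems track whole satellite or radar fields rather than a single storm centroid, and the function name and numbers below are invented for illustration:

```python
import numpy as np


def extrapolation_nowcast(obs_prev, obs_now, dt_obs, lead_time):
    """Simplest extrapolation nowcast: assume the storm keeps moving along
    the displacement seen between the last two observations.

    obs_prev, obs_now: (x, y) centroid (km) of a storm cell at times
    t - dt_obs and t (hours). Returns the predicted centroid at t + lead_time.
    """
    velocity = (np.asarray(obs_now, float) - np.asarray(obs_prev, float)) / dt_obs
    return np.asarray(obs_now, float) + velocity * lead_time


# A cell that moved 10 km east in the last 30 minutes is forecast to sit
# a further 20 km east one hour from now.
print(extrapolation_nowcast((0, 0), (10, 0), dt_obs=0.5, lead_time=1.0))
# -> [30.  0.]
```

The core assumption – that the recent trajectory persists – is exactly why nowcast skill decays over a few hours, as storms develop, decay, or change direction.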

One of the major obstacles to nowcasting in tropical Africa is the lack of rainfall radar observations, which are used for nowcasting in other parts of the world, including the UK. Passive satellite observations, which measure the naturally occurring energy at the top of the atmosphere, provide a less direct measure of weather systems. Yet in the absence of other observations, this satellite data can provide vital information for nowcasting purposes.

To this end, the SWIFT project has made satellite-based nowcasts for tropical Africa freely available from a new website. Figure 1 provides examples of two such products. These nowcasts are based on software provided by the European Nowcasting Satellite Applications Facility (NWCSAF). However, these products have been calibrated and validated for mid-latitude European weather systems and it is therefore necessary to evaluate how well they perform for tropical Africa.

Figure 1: Examples of two NWCSAF products over tropical Africa. (a) shows the convective rainfall rate in different regions (b) shows the rapidly developing thunderstorms convection-warning product over the Guinea Coast region.

To understand the suitability of this NWCSAF software for tropical Africa, I compared the two products shown in Figure 1 to higher quality satellite rainfall estimates that incorporate data from multiple sources including direct rainfall estimates from rain gauges at the surface. This higher quality data cannot be used for nowcasting because it is not available sufficiently quickly.

The comparison demonstrates that both NWCSAF products provide useful information, despite some limitations. For instance, the convective rain rate product has valuable skill for predictions at least 90 minutes ahead (Figure 2). The rapidly developing thunderstorms product can also identify the occurrence of heavy precipitation, correctly identifying around 60% of strong (5 mm of rain per hour) events at least one hour before they occur. These products could be used to inform flood warnings, disaster response, or provide warnings to fishermen.

Figure 2: Skill metrics for the convective rainfall rate product, compared to predictions based on the historical occurrence of rainfall events. Hit rate means the proportion of true rainfall events that are successfully identified, and false alarm ratio means the proportion of the predicted rainfall events that do not occur in reality. “Retrieval” here is the skill of the satellite products, versus higher quality data. This higher quality data is regarded as “truth” but is not available sufficiently quickly to be useful for nowcasts. The “extrapolation” is the forecast made by projecting the observed storms forward in time. The “climatology” skill is skill from assuming today’s storms can be predicted using previous years storms on the same time of day and time of year.
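The two metrics in Figure 2 follow directly from a contingency table of forecast versus observed events. The counts below are hypothetical, chosen only to illustrate the calculation, and are not taken from the evaluation itself:

```python
def hit_rate_and_far(hits, misses, false_alarms):
    """Hit rate: the proportion of observed events that were predicted.
    False alarm ratio (FAR): the proportion of predicted events that
    did not occur in reality."""
    hit_rate = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return hit_rate, far


# Hypothetical counts: 60 events predicted and observed, 40 events missed,
# 30 predictions that never verified.
hr, far = hit_rate_and_far(hits=60, misses=40, false_alarms=30)
print(f"hit rate = {hr:.2f}, false alarm ratio = {far:.2f}")
# -> hit rate = 0.60, false alarm ratio = 0.33
```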

This analysis is crucial in providing forecasters with confidence in the products which GCRF African SWIFT has made available to them to issue warnings. It has also highlighted some aspects of both the convective rain rate and rapidly developing thunderstorm – convection warning products that could be improved upon. Future work will aim to further develop these products to provide better nowcasts for tropical Africa.

Ongoing work within the African SWIFT project is also enabling African groups to generate these products locally, as well as supporting forecasters to understand and use these products effectively to minimise adverse impacts of severe weather on lives and livelihoods in Africa.


Roberts, A.J., Fletcher, J.K., Groves, J., Marsham, J.H., Parker, D.J., Blyth, A.M., Adefisan, E.A., Ajayi, V.O., Barrette, R., de Coning, E., Dione, C., Diop, A.-L., Foamouhoue, A.K., Gijben, M., Hill, P.G., Lawal, K.A., Mutemi, J., Padi, M., Popoola, T.I., Rípodas, P., Stein, T.H.M., Woodhams, B.J. 2021. Nowcasting for Africa: advances, potential and value. Weather (In press).

Hill, P. G., Stein, T. H. M., Roberts, A. J., Fletcher, J. K., Marsham, J. H. & Groves, J. 2020. How skilful are Nowcasting Satellite Applications Facility products for tropical Africa? Meteorological Applications, 27(6). DOI:

Posted in Africa, Climate, Rainfall, Remote sensing

Flood forecasting for the Negro River in the Amazon Basin

By: Amulya Chevuturi

Figure 1: Photograph of the Negro River and the Amazon rainforest.

The Amazon is the largest river basin in the world, with large free-flowing rivers, draining about one-sixth of global freshwater to the ocean. The Amazonian floodplains have long been settled and used by indigenous populations, providing essential ecosystem services and natural resources for human needs (Junk et al., 2014). The increasing frequency and magnitude of floods over the last two decades have caused considerable environmental and socio-economic losses in many regions of the Amazon basin (Marengo and Espinoza, 2016). Although some studies have estimated flood risk for the Amazon basin (de Andrade et al., 2017), most towns and cities in this region still lack operational flood forecasts and integrated flood risk management plans.

The main aim of the PEACFLOW (Predicting the Evolution of the Amazon Catchment to Forecast the Level Of Water) project was to develop skilful forecasting systems for high water levels of Amazonian rivers, at sufficiently long lead time, for effective implementation of disaster risk management actions. In this project, we focused on developing forecast models for the annual maximum water level for the Negro River at Manaus, Brazil (Figure 1), as a pilot case study, using a multiple linear regression approach. We used various potential predictors from preceding months: rainfall, water level, Pacific and Atlantic Ocean conditions and a linear trend, all of which strongly influence the water levels in the Amazon basin. Flood levels in the Negro River occur between May and July and are strongly influenced by the rainfall during November to February, as its large floodplains delay the flood wave by months (Schöngart and Junk, 2007). This delay, and the regularity of the relationship between rainfall and peak water level, allows for the development of skilful statistical forecast models that can issue forecasts by March or earlier.

Figure 2: The Negro, Solimões and Madeira Rivers (blue lines) and their catchment basins (regions bounded by black lines) contributing to the river water level at Manaus (yellow circle; 3.14°S, 60.03°W).

In collaboration with Brazilian scientists from various partner institutes, our team developed forecast models of the annual maximum water level (flood level) for the Negro River at Manaus by finding the best model fit over the training period of 1903 to 2004. For our models, rainfall over the catchment of the Negro River as well as from the catchments of the nearby Solimões and Madeira Rivers (Figure 2) is the predominant predictor. We developed three models in this project, which use observations as input and can be implemented operationally to provide flood forecasts for Manaus. We compared the models developed in this project against current operational forecasts, provided by Brazilian agencies (CPRM and INPA), for the period of 2005 to 2019. The three PEACFLOW models issue forecasts of flood levels in the middle of January, February and March each year, but the skill of the models increases with decreasing lead time (Figure 3a). Our results show that the models developed in this study can provide forecasts with the same skill as existing operational models one month in advance.
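The regression approach itself is simple to sketch. The code below is not the PEACFLOW code – the predictors, coefficients and data are entirely synthetic – but it shows how an annual-maximum water level can be fitted to a handful of predictors by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 100  # a synthetic "training period"

# Illustrative predictors: a rainfall anomaly, a preceding water level,
# an ocean index, a linear trend, and an intercept column.
X = np.column_stack([
    rng.normal(size=n_years),    # Nov-Feb catchment rainfall anomaly
    rng.normal(size=n_years),    # preceding water level anomaly
    rng.normal(size=n_years),    # ocean-condition index
    np.linspace(0, 1, n_years),  # linear trend
    np.ones(n_years),            # intercept
])

# Synthetic "truth": annual maximum water level (m) plus observation noise.
true_coefs = np.array([1.5, 0.8, -0.4, 0.6, 27.0])
y = X @ true_coefs + rng.normal(scale=0.2, size=n_years)

# Ordinary least squares fit over the training period.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast for the most recent year from its predictors.
forecast = X[-1] @ coefs
print(np.round(coefs, 1))
```

The real models differ in which predictors are available at each issue date, which is why skill grows as the lead time shrinks.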

We also gained an additional month of lead time when we replaced the observed input data with the ECMWF seasonal ensemble forecast. We developed two operational models using this data, which provide probabilistic forecasts at the beginning of January and February (Figure 3b). The probabilistic forecasts for the maximum water level, using ECMWF input, show good skill for extreme flood likelihood.
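Turning an ensemble of inputs into a flood probability then amounts to counting members. The sketch below uses invented numbers purely to illustrate the idea of reading an exceedance probability off an ensemble of water-level forecasts:

```python
import numpy as np


def flood_probability(member_levels, threshold=29.0):
    """Fraction of ensemble members whose forecast maximum water level
    exceeds the given threshold (29 m is the Manaus emergency level)."""
    member_levels = np.asarray(member_levels, float)
    return float((member_levels > threshold).mean())


# 51 hypothetical member forecasts (m) -- illustrative values only.
members = 28.5 + np.random.default_rng(1).normal(scale=0.8, size=51)
print(f"P(level > 29 m) = {flood_probability(members):.2f}")
```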

Figure 3: Comparison of models developed in PEACFLOW project and existing models (CPRM and INPA) and observed values for annual maximum water level at Manaus for models using (a) observations and (b) seasonal forecasts as input.

The methods developed in this project can also be used to develop forecast models for flood and drought levels over other regions of the Amazon basin. We provide the fully automated PEACFLOW models in a GitHub repository. We retrospectively forecasted the annual maximum water levels at Manaus for 2020, and we are actively forecasting for 2021 (Table 1). Our forecasts this year show the maximum water level crossing 29 m, which is the extreme flood threshold for Manaus, at which the government declares emergency conditions.

Table 1: Forecasts for 2020 and 2021 using PEACFLOW models at different lead-times. Observed annual maximum water level at Manaus for 2020 was 28.52m.



de Andrade MMN et al. (2017) Flood risk mapping in the Amazon. Flood Risk Management, 41. DOI:

Junk WJ et al. (2014) Brazilian wetlands: their definition, delineation, and classification for research, sustainable management, and protection. Aquatic Conservation: marine and freshwater ecosystems, 24, 5–22. DOI:

Marengo JA and Espinoza JC (2016) Extreme seasonal droughts and floods in Amazonia: causes, trends and impacts. International Journal of Climatology, 36, 1033–1050. DOI:

Schöngart J and Junk WJ (2007) Forecasting the flood-pulse in Central Amazonia by ENSO-indices. Journal of Hydrology, 335(1),124–132. DOI:

Posted in Amazon, Climate, Flooding

Can We Use Artificial Intelligence To Improve Numerical Models Of The Climate?

By: Alberto Carrassi

Numerical models of the climate are made of many mathematical equations that describe our knowledge of the physical laws governing the atmosphere, the ocean, the sea-ice etc. These equations are solved using computers that “see” the Earth system at discrete points only, for instance at the vertices of a grid where the physical quantities are defined. The density of the grid defines the model resolution: the denser the grid the higher the resolution and, in principle, the better the match between the simulated and the real climate.

Resolution is inevitably finite and to a large extent constrained by computer power. As a consequence, our numerical climate models do not see what occurs in between grid points and offer only a partial description of reality. This source of model error is called “subgrid” or “unresolved scale” model error. Reducing or correcting for this error is a major endeavour of our scientific community, and a lot has been achieved in the past decades thanks to increased computational power and the improvement of our understanding of the subgrid processes and of their effects on the resolved scales.

Inspired by the astonishing success of artificial intelligence in so many different areas of science and social life, in our recent study (Brajard et al., 2021) we investigated whether artificial intelligence could also be used to improve current numerical climate models by estimating and correcting for the unresolved-scale error. Artificial intelligence, and machine learning in particular, extracts and emulates behavioural patterns from observed data. Being driven by data alone, machine-learning forecasts can only predict behaviour resembling what has previously been observed. The quality and completeness of the data used in the training are therefore extremely important.

To overcome this limitation, our approach relies on data assimilation, another key component of today’s operational weather and ocean prediction routines. Data assimilation is the process by which data are incorporated into models to get a more accurate description of reality. After many years of research and development, data assimilation now provides a range of methods that handle noisy and sparse data with great efficiency.
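The essence of data assimilation can be sketched for a single variable: blend a model forecast with a noisy observation, weighting each by its uncertainty. This is a textbook-style scalar update, not the full operational machinery, and the numbers are invented:

```python
def assimilate(forecast, f_var, obs, o_var):
    """Return the analysis: a variance-weighted blend of a model forecast
    and an observation (a scalar Kalman-style update)."""
    gain = f_var / (f_var + o_var)      # trust the obs more when the model is uncertain
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1 - gain) * f_var   # the analysis is more certain than either input
    return analysis, analysis_var


# Model says 10.0 (variance 4); a noisy observation says 12.0 (variance 1).
a, v = assimilate(10.0, 4.0, 12.0, 1.0)
print(round(a, 2), round(v, 2))  # -> 11.6 0.8
```

Real systems perform this kind of update simultaneously for millions of interdependent variables, which is where the sophisticated methods mentioned above come in.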

In our approach, we combine data assimilation and machine learning in the following way. First, we assimilate the raw (sparse and noisy) data into the physical model. This step outputs a sequence of pictures, like a “movie”, showing the climate over the observed period, whose accuracy depends on the unresolved-scale error in the model. The difference between this movie and the model’s own simulation contains information about the unresolved-scale error that we wish to correct. In the machine-learning step, these differences are used to train a neural network to estimate the model error. At the end of the training, we have a neural network that has been optimised to produce an estimate of the model error given the model state as input. The final step consists of constructing a new, possibly more accurate, hybrid numerical model of the climate, made of the original physical model plus the data-driven model obtained using this method.
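The whole pipeline can be caricatured in a few lines, with a toy one-variable “climate” and a simple polynomial regression standing in for the neural network. Everything below is invented for illustration, not taken from Brajard et al. (2021):

```python
import numpy as np

# Toy setup: the "truth" evolves as x -> 0.9*x + sin(x), but our physical
# model only knows the resolved part, x -> 0.9*x.
rng = np.random.default_rng(0)
states = rng.uniform(-2, 2, size=500)  # states sampled along a "movie" of analyses


def physical_step(x):
    return 0.9 * x  # resolved dynamics only


# The analyses expose the model error (here, exactly the unresolved sin term).
model_error = (0.9 * states + np.sin(states)) - physical_step(states)

# "Machine learning" step: regress the error on the model state
# (a degree-5 polynomial fit stands in for the neural network).
coeffs = np.polyfit(states, model_error, deg=5)


def hybrid_step(x):
    """Hybrid model: physics plus the learned data-driven correction."""
    return physical_step(x) + np.polyval(coeffs, x)


x = 1.3
print("physical error:", abs(physical_step(x) - (0.9 * x + np.sin(x))))
print("hybrid error:  ", abs(hybrid_step(x) - (0.9 * x + np.sin(x))))
```

The hybrid model's one-step error is orders of magnitude smaller than the physical model's, mirroring the behaviour of the hybrid curves in Figure 1.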

Figure 1: Model prediction error as a function of time: the longer the time horizon (time length of the forecast), the larger the error. The dashed black line shows the original physical model. The solid lines refer to hybrid (physical plus data-driven) models based on a complete and perfect dataset (black) or on a different amount (p) of noisy observations. The hybrid models perform much better than the original model. *MTU – Model Time Unit

The data assimilation-machine learning approach has been tested in idealised models and observational scenarios with very encouraging results. A key advantage of the method is that it relies on data assimilation methods that are already routinely applied in weather and ocean prediction centres: we expect this type of approach to be widely implemented operationally in the future.


Brajard, J., A. Carrassi, M. Bocquet, and L. Bertino, 2021. Combining data assimilation and machine learning to infer unresolved scale parametrization. Philosophical Transactions of the Royal Society A379(2194), 20200086. doi: 

Posted in data assimilation, Machine Learning

Putting a 120-Year-Old Barograph To The Test

By: Kieran Hunt

Cast your mind back to 1900. The World’s Fair. Great Britain has just won 48 medals at the Summer Olympics including a clean sweep in the steeplechase. Queen Victoria’s reign continues through an unprecedented 63rd year. British heroes, the Queen Mother and Douglas Jardine, are being born. At 43 Market Street, Manchester (now the home of a massive Urban Outfitters, apparently), an aging Italian immigrant, Joseph Casartelli, owns a workshop specialising in the construction of measuring instruments. Now, forward 120 years (I’ll spare you the scene-setting this time), and I was delighted to receive one such instrument, a barograph, as a Christmas gift from my convivial father-in-law.

Barographs of this era comprise two basic components. On one hand, there is an aneroid barometer – typically a stack of partially-evacuated alloy cells that expand and contract as the pressure decreases or increases. On the other, a clockwork drum is set to rotate about once per week. The two are connected by a scribing arm holding an ink nib. When operating, the nib rests against a paper chart wrapped around the drum, marking pressure changes with time. (Figures 1-3)

Figure 1: Close-up photos of the Casartelli & Son barograph. Top left: inside the clockwork drum. Top right: The spindle on which the drum sits. Bottom: the drum in place, with the scribing arm and ink bottle visible. The aneroid cells are conveniently sealed inside the oak casing and thus not shown here.

Figure 2: The barograph operational setup, showing the drum with paper affixed, scribing arm, and connection to the aneroid in the base.

Figure 3: A page from Percy Jameson’s “Weather and Weather Instruments”, published by Taylor in 1908, showing an engraving of a similar barograph. He’s also not happy about the “concealed works”.

The Storm

As luck would have it, the arrival of Storm Bella (Figure 4) on Christmas night meant that I could test the barograph immediately. With a coffee to steady the post-Christmas hangover (note: it did not steady my hands), I carefully filled the nib with ink, attached the paper to the drum, and woke the clockwork from its multi-decade slumber. It wasn’t that easy, of course: it actually took me two hours to figure out that the clockwork wasn’t working, but increasingly firm shaking (the instructions called for “rotation about the horizontal plane”, make of that what you will) soon set it in motion.

Figure 4: Photo of Storm Bella irritating British residents, in this case the owner of a Rolls Royce. Credit: PavementsForThePeople via BBC.

The Results

Figure 5 shows the barograph trace from just after the initial fall in pressure associated with Storm Bella through its development, and eventual recovery by New Year’s Day. Now, a confession in two parts: Boxing Day was a Saturday and the log papers start on Mondays – not wanting to reset the equipment two days into the experiment, I took the liberty of adjusting the calendar. I also confused 12pm with 12am during the initial setup. Bearing these in mind, I overlaid pressure data from the atmospheric observatory at the University (shown in red in Figure 5).

So, how did it do? Well, there are two major differences compared with the observatory record – the first is an initial offset of about 5 hPa, the second is an overestimate of the minimum pressure: 979 hPa on the barograph compared with 963 hPa at the observatory (a difference of 16 hPa, or 11 hPa once the initial offset is accounted for). I had hoped the initial offset was due to elevation differences, but the observatory is only 20 m higher than my house, accounting for just 2 hPa. The rest was almost certainly due to clumsy alignment, a regrettable by-product of my unsteady hands and a remarkably sensitive scribing arm lever. I suspect a similar alignment problem caused the overestimated pressure minimum – in setting the scribing arm position, too much force between the nib and drum results in friction that prevents the scribing arm from moving freely. If we adjust the observatory data to account for these issues (Figure 6), by shifting it up and squashing it a bit, the barograph does a clearly exceptional job of capturing the hour-to-hour pressure changes, keeping within 1 hPa of the observatory values for the whole week.
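For the curious, the “shift and squash” correction amounts to fitting a gain and an offset by least squares. The sketch below uses synthetic numbers, not my actual trace, but shows the idea:

```python
import numpy as np


def fit_calibration(barograph_hpa, observatory_hpa):
    """Return (gain, offset) such that gain*barograph + offset best matches
    the observatory record in a least-squares sense."""
    A = np.column_stack([barograph_hpa, np.ones_like(barograph_hpa)])
    (gain, offset), *_ = np.linalg.lstsq(A, observatory_hpa, rcond=None)
    return gain, offset


# Synthetic example: a trace that reads 5 hPa high and under-ranges by 30%
# (i.e. its swings are "squashed" about 985 hPa).
true_p = np.array([1000.0, 990.0, 979.0, 985.0, 995.0])
trace = (true_p - 985.0) * 0.7 + 985.0 + 5.0

gain, offset = fit_calibration(trace, true_p)
print(round(gain, 2), round(offset, 2))  # -> 1.43 -429.29
```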

Figure 5: The barograph trace from Storm Bella (dark blue). Overlaid is the pressure reading from the automatic sensor at the University Observatory (red). If you look carefully at the beginning of the trace, you’ll see my various attempts to get the clockwork moving.

Figure 6: As Figure 5, but with the observatory data shifted and compressed to take into account various barograph calibration errors.


Calibration issues could probably be overcome with a bit of practice, though I wouldn’t recommend using it to land a plane. In the right hands, however, it could still be used operationally. Amazing.


Science Museum Group 

Posted in Climate, History of Science, Measurements and instrumentation