How to improve a climate model: a 24-year journey from observing melt ponds to their inclusion in climate simulations

By: David Schroeder

Melt ponds are puddles of water that form on top of sea ice when the snow and ice melt (see Figure). Not all the water drains immediately into the ocean; it can stay and accumulate on top of the sea ice for several weeks or months.

Figure: Melt ponds on sea ice (Credit: Don Perovich)

A momentous field campaign was carried out on the Arctic sea ice in 1998: the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment, a role model for the latest and largest Arctic expedition, MOSAiC, in 2019/2020. One aim was to understand and quantify the sea ice-albedo feedback mechanism on scales ranging from meters to thousands of kilometers. The differences in albedo (the fraction of shortwave radiation reflected at the surface and, thus, not used to heat the surface) between snow-covered sea ice (~85%), bare sea ice (~60-70%), ponded sea ice (~30%) and open water (<10%) are huge and cause the most important feedback for sea ice melt: the more and the earlier snow and ice melt, the larger the pond and open-water fraction, and the more shortwave radiation is absorbed, further increasing the melting. Melt ponds play an important part in the observed reduction and thinning of Arctic sea ice during the last decades.
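To see why these albedo differences matter, consider the albedo of a grid cell as an area-weighted average over its surface types. A minimal sketch, using the approximate albedo values quoted above (the surface fractions are invented for illustration and are not from SHEBA):

```python
# Approximate albedos from the text (fractions of reflected shortwave).
ALBEDO = {"snow": 0.85, "bare_ice": 0.65, "pond": 0.30, "open_water": 0.08}

def cell_albedo(fractions):
    """Area-weighted albedo of a grid cell; area fractions must sum to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(ALBEDO[surface] * f for surface, f in fractions.items())

# Early summer: mostly snow-covered ice -> highly reflective cell.
early = cell_albedo({"snow": 0.8, "bare_ice": 0.2, "pond": 0.0, "open_water": 0.0})
# Late summer: ponds and open water have formed -> far more absorption,
# which accelerates further melting (the feedback described above).
late = cell_albedo({"snow": 0.0, "bare_ice": 0.5, "pond": 0.3, "open_water": 0.2})
```

With these illustrative fractions the cell albedo drops from about 0.81 to about 0.43, roughly tripling the absorbed shortwave fraction.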

Continuous SHEBA measurements over the whole melt season in 1998 allowed the development of models representing the melting cycle: from the onset of melt pond formation, through spreading, evolution and drainage over late spring and summer, towards freeze-up in the late summer and autumn. Starting with a one-dimensional heat balance model (Taylor and Feltham, 2004), it took about 10 years to develop a pond model suitable for a Global Climate Model (GCM) (Flocco et al., 2010; 2012). Melt pond formation is controlled by small-scale sea ice topography, which is not available in a GCM with its coarser resolution. However, we could use the sub-gridscale ice thickness distribution (five different ice thickness categories in each grid cell) as a proxy for topography and simulate the evolution of the pond fraction, assuming melt water runs from the thicker ice to the thinner ice. With further adjustments to the albedo scheme (Ridley et al., 2018), the pond model could finally be used in the UK climate model HadGEM3. The HadGEM3 simulations for the latest IPCC report include our pond model.
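The runoff idea can be sketched very simply (this is an illustration of the concept only, not the Flocco et al. scheme, which evolves pond volume and drainage physically): treat the thickness categories as the topography proxy and let meltwater collect on the thinnest ice first. The category thicknesses, fractions and fixed pond depth below are made up for the example.

```python
def pond_fractions(categories, melt_volume, pond_depth=0.2):
    """Distribute meltwater over ice thickness categories, thinnest first.

    categories: list of (ice_thickness_m, area_fraction) for the grid cell.
    melt_volume: meltwater volume per unit grid-cell area (m^3 per m^2).
    pond_depth: assumed fixed pond depth (m).
    Returns (list of (thickness, ponded_area_fraction), undistributed volume).
    """
    ponded, remaining = [], melt_volume
    for thickness, area in sorted(categories):  # thinnest category first
        fill = min(area, remaining / pond_depth)  # can't pond more than the category
        ponded.append((thickness, fill))
        remaining -= fill * pond_depth
    return ponded, remaining

# Five categories, as in the model described above (values invented).
cats = [(0.5, 0.2), (1.0, 0.3), (2.0, 0.3), (3.0, 0.15), (4.0, 0.05)]
ponds, leftover = pond_fractions(cats, melt_volume=0.06)
```

Here the thinnest category ponds completely before any water reaches the next one, mimicking runoff from thick to thin ice.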

What is the impact of the melt pond model on the performance of the HadGEM3 simulations? It is noteworthy that HadGEM3 has a stronger climate sensitivity (global warming with respect to CO2 increase) than its predecessor HadGEM2 or most other climate models (Meehl et al., 2020). But is this due to the melt ponds? Many model components were changed at the same time, so it is impossible to attribute the individual impact. To address this, Diamond et al. (2023) carried out HadGEM3 simulations with three configurations that differ only in their treatment of melt ponds (our pond scheme, a simple albedo tuning to account for the impact of melt ponds, and no melt ponds). Historical or future projections would require an ensemble of simulations to distinguish internal variability from the impact of the pond scheme; thus, 100-year-long constant-forcing simulations were chosen.

While the Arctic sea ice results from the simple albedo tuning and our full pond scheme do not differ significantly under pre-industrial conditions, the impact under near-future conditions is remarkable: the simple tuning never yields an ice-free summer Arctic, whilst our pond scheme yields an ice-free Arctic in 35% of years and raises autumn Arctic air temperatures by 5 to 8 °C. Thus, the pond treatment has a large impact on projections of when the Arctic will become ice-free. This is a striking example of the impact a single model component can have on climate projections.


Diamond, R., Schroeder, D., Sime, L.C., Ridley, J., and Feltham, D.L.: Do melt ponds matter? The importance of sea-ice parametrisation during three different climate periods. Journal of Climate, under review.

Flocco, D., D. L. Feltham, and A. K. Turner, 2010: Incorporation of a physically based melt pond scheme into the sea ice component of a climate model. Journal of Geophysical Research: Oceans, 115 (C8).

Flocco, D., D. Schroeder, D. L. Feltham, and E. C. Hunke, 2012: Impact of melt ponds on Arctic sea ice simulations from 1990 to 2007. Journal of Geophysical Research: Oceans, 117 (C9).

Meehl, G. A., C. A. Senior, V. Eyring, G. Flato, J.-F. Lamarque, R. J. Stouffer, K. E. Taylor, and M. Schlund, 2020: Context for interpreting equilibrium climate sensitivity and transient climate response from the CMIP6 Earth system models. Science Advances, 6 (26).

Ridley, J. K., E. W. Blockley, A. B. Keen, J. G. Rae, A. E. West, and D. Schroeder, 2018: The sea ice model component of HadGEM3-GC3.1. Geoscientific Model Development, 11 (2), 713–723.

Taylor, P., and D. Feltham, 2004: A model of melt pond evolution on sea ice. Journal of Geophysical Research: Oceans, 109 (C12).

Posted in Arctic, Climate modelling, Cryosphere, IPCC, Numerical modelling, Polar

Cycling In All Weathers

By: David Brayshaw

In a few weeks’ time, I’ll be taking some time off for an adventure: spending three weeks cycling the entire 3,400 km of this year’s Tour de France (TdF) route.  I’ll be with a team riding just a few days ahead of the professional race, aiming to raise £1M for charity.  Although this is a purely personal challenge – unrelated to my day job here in the department – being asked to write this blog set me thinking about the connections between cycling and my own research in weather and climate science.

Weather is obviously important to anyone cycling outdoors, be it extremes of rain, wind or temperature.  Cycling in the rain can be miserable but, more than that, it can lead to accidents on slippery roads and poor visibility for riders.  Cold temperatures and wind chill pose challenges, particularly when descending at speeds of up to 50 mph in the high mountains (in years gone by, professional cyclists would often take a newspaper from a friendly spectator at the top of a climb and shove it down the front of their cycling jersey to protect themselves from the worst of the wind chill).  Air resistance and wind play a major role more generally: the bunching up of the peloton occurs as riders save energy by staying out of the wind and riding close behind the cyclist in front.  And while headwinds sap riders’ energy and lower their speed, it’s crosswinds that blow races apart.  In that situation, the wind-shielding effect runs diagonally across the road, shredding the peloton into diagonal lines as riders fight for position and cover.

Photo: Grim conditions on a training ride in the Yorkshire Wolds, April 2023.

Last year’s TdF race, however, took place in a heat wave.  The athletes did their work in air temperatures approaching 40 °C, stretching the limits of human performance in extreme temperatures.  On some days the roads were sprayed with water to stop the tarmac melting (road temperatures were often closer to 60 °C), and extreme weather protocols were called upon (potential adjustments include changes to the start time or route, making more food and water available, even cancelling whole stages).  All this comes with risks and costs (human, environmental, financial) for a range of people and organisations (the riders and spectators; the organisers and sponsors; and the towns and communities the ride goes through).  Moreover, heatwaves can only be expected to become more common in the years to come.

From a meteorological perspective, the “good news” is that tools are available to help quantify, understand and manage weather risks.  High-quality short-range (hours to days) forecasting is obviously essential during the event itself but subseasonal to seasonal (S2S) forecasts or longer-term climate change projections may also help to manage risk over a longer horizon (e.g., hire of water trucks, anticipating the need for route modification, use of financial products to mitigate losses if stages are cancelled or adjusted, even reconsidering the timing of the event itself if July temperatures become intolerable in the decades to come).

The specifics of the decisions and consequences described here for this particular race are simply speculation on my part (I have not done any in-depth research on climate services for cycling!).  However, the nature of the “climate impact problem” should be familiar to anyone working in the field.  As an example, some recent work I was involved in produced a proof-of-concept demonstration of how weeks-ahead forecasts could be used to improve fault management and maintenance scheduling in telecommunications (see figure below and full discussion here), but many more examples can be found (see here for a recent review).  In such work, there are usually two core challenges.  The first is to link quantitative climate data (say, skillful probabilistic predictions of air temperature weeks ahead) with the impact of concern (say, the need to cancel part of a stage and the financial losses incurred by the host town that is then not visited).  The second is to identify the mitigating actions that can take place (say, the purchase of insurance or a financial hedge) and a strategy for their uptake (say, a decision criterion for when to act and at what cost).  The broad process is discussed in two online courses offered here in the department (“Climate Services and Climate Impact Modelling” and “Climate Intelligence: Using Climate Data to Improve Business Decision-Making”).
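The decision-criterion step in the second challenge can be illustrated with the classic cost-loss model (my own toy illustration, not taken from the cited paper or courses): take protective action at cost C whenever the forecast probability p of the adverse event exceeds C/L, where L is the loss incurred if the event strikes unprotected.

```python
def expected_cost(p, act, cost, loss):
    """Expected outlay: fixed cost if we protect, probabilistic loss if we don't."""
    return cost if act else p * loss

def should_act(p, cost, loss):
    """Optimal static cost-loss rule: protect when p exceeds the cost/loss ratio."""
    return p > cost / loss
```

For example, with insurance at C = 10,000 against a potential loss of L = 100,000, the rule says to buy cover whenever the forecast probability exceeds 0.1.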

Figure: Use of week-ahead sub-seasonal forecasts to anticipate and manage line faults.  The left panel demonstrates that predictions of weekly fault rates made using a version of ECMWF’s subseasonal forecast system (solid and dashed lines represent two different forecast methods) outperform predictions made using purely historic “climatological” knowledge (dotted line).  The right panel illustrates the improved outcomes possible with improving forecast information (from red to purple to blue curves): i.e., by using a “better” forecast it is possible to achieve either higher performance for the same resources, or the same performance with fewer resources (shown here as an illustrative schematic; an application to “real” data is available in the cited paper).  Figures adapted from or based upon Brayshaw et al. (2020, Meteorological Applications); please refer to the open-access journal article for detailed discussion.

For this summer, however, I’m just hoping for good weather for my ride.  Thankfully I won’t be trying to “race” the distance (merely survive it!), so a mix of not too hot, not too wet and not too windy would be just perfect.  With a bit of luck, I’ll make it all the way from the start line in Bilbao to the finish in Paris!

If you’d like to find out more about my ride or the cause I’m supporting then please visit my personal JustGiving page.


  • Brayshaw, D. J., Halford, A., Smith, S. and Kjeld, J. (2020) Quantifying the potential for improved management of weather risk using subseasonal forecasting: the case of UK telecommunications infrastructure. Meteorological Applications, 27 (1), e1849. ISSN 1469-8080

  • White, C. J., Domeisen, D. I. V., Acharya, N., Adefisan, E. A., Anderson, M. L., Aura, S., Balogun, A. A., Bertram, D., Bluhm, S., Brayshaw, D. J., Browell, J., Büeler, D., Charlton-Perez, A., Chourio, X., Christel, I., Coelho, C. A. S., DeFlorio, M. J., Monache, L. D., García-Solórzano, A. M., Giuseppe, F. D., Goddard, L., Gibson, P. B., González, C. R., Graham, R. J., Graham, R. M., Grams, C. M., Halford, A., Huang, W. T. K., Jensen, K., Kilavi, M., Lawal, K. A., Lee, R. W., MacLeod, D., Manrique-Suñén, A., Martins, E. S. P. R., Maxwell, C. J., Merryfield, W. J., Muñoz, Á. G., Olaniyan, E., Otieno, G., Oyedepo, J. A., Palma, L., Pechlivanidis, I. G., Pons, D., Ralph, F. M., Reis, D. S., Remenyi, T. A., Risbey, J. S., Robertson, D. J. C., Robertson, A. W., Smith, S., Soret, A., Sun, T., Todd, M. C., Tozer, C. R., Vasconcelos, F. C., Vigo, I., Waliser, D. E., Wetterhall, F. and Wilson, R. G. (2022) Advances in the application and utility of subseasonal-to-seasonal predictions. Bulletin of the American Meteorological Society, 103 (6), pp. 1448-1472. ISSN 1520-0477

Posted in Climate Services, Environmental hazards, Seasonal forecasting, subseasonal forecasting

Flying Through Storms To Understand Their Interaction with Sea Ice: The Arctic Summer-time Cyclones Project and Field Campaign

By: Ambrogio Volonté

Arctic cyclones are the leading type of severe weather system affecting the Arctic Ocean and surrounding land in the summer. They can have serious impacts on sea-ice movement, sometimes resulting in ‘Very Rapid Ice Loss Events’, which present a substantial challenge to forecasts of the Arctic environment from days out to a season ahead. Summer sea ice is becoming thinner and more fractured across widespread regions of the Arctic Ocean, due to global warming. As a result, winds can move the ice around more easily. In turn, the uneven surface can exert substantial friction on the atmosphere right above it, impacting the development of weather systems. Thus, a detailed understanding of the two-way relationship between sea ice and Arctic cyclones is crucial to allow weather centres to provide reliable forecasts for the area, an increasingly important issue as the Arctic sees growing human activity.

This is the main goal of the Arctic Summer-time Cyclones project, led by Prof John Methven and funded by the UK Natural Environment Research Council (NERC). To this end, we designed a field campaign aiming to fly into Arctic cyclones developing over the marginal ice zone (that is, the transitional area between pack ice and open ocean, where the ice is thinner and fractured, and where leads and melt ponds can be present). The campaign was based in Svalbard (Norwegian Arctic) and took place in July and August 2022, one year later than originally planned due to the Covid pandemic. The field campaign team included scientists from the University of Reading (John Methven, Suzanne Gray, Ben Harvey, Oscar Martínez-Alvarado, Ambrogio Volonté and Hannah Croad), the University of East Anglia (UEA), and the British Antarctic Survey (BAS). We were joined by researchers from the US and France, funded by the Office of Naval Research (USA).

Figure 1: Some components of the Arctic Summer-time Cyclones field campaign team in front of the Twin Otter aircraft. Photo by Dan Beeden (BAS).

Using the BAS MASIN Twin Otter aircraft, we performed 15 research flights during the campaign, targeting four Arctic cyclones and several other weather features associated with high winds near the surface. Flying at very low levels (even below 100 ft when allowed by visibility conditions and safety standards), we were able to detect the turbulent fluxes of heat and momentum that characterise the interaction between the surface and the atmosphere. Vertical profiles and stacks of horizontal legs at different heights were used to sample, for the first time, the 3D structure of the wind jets present in the first kilometre above the surface in Arctic summer cyclones. Our partners from France and the US also completed a similar number of flights using their SAFIRE ATR42 aircraft. Although their activities were mainly focused on cloud structure and mixed-phase (ice-water) processes higher up, some coordinated flights were carried out, with both aircraft flying in the same area to maximise data collection. For more details on our campaign activities (plus photos and videos from the Twin Otter!) see the ArcticCyclones Twitter account and the blogs on our project website.
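Turbulent fluxes like these are typically estimated from low-level flight legs by eddy covariance (a standard technique; the actual MASIN processing chain will differ in detail): the sensible heat flux follows from the covariance of vertical-wind and temperature fluctuations about the leg means, H = rho * cp * mean(w'T'). A minimal sketch, with an assumed near-surface air density:

```python
def sensible_heat_flux(w, temp, rho=1.3, cp=1004.0):
    """Eddy-covariance sensible heat flux (W m^-2) from a flight leg.

    w: vertical wind samples (m/s); temp: temperature samples (K);
    rho: assumed air density (kg m^-3); cp: specific heat of air (J kg^-1 K^-1).
    Primes are deviations from the leg mean.
    """
    n = len(w)
    w_mean = sum(w) / n
    t_mean = sum(temp) / n
    # Covariance <w'T'> over the leg.
    cov = sum((wi - w_mean) * (ti - t_mean) for wi, ti in zip(w, temp)) / n
    return rho * cp * cov

# Toy leg: warm updrafts and cold downdrafts give an upward (positive) flux.
flux = sensible_heat_flux([1.0, -1.0, 1.0, -1.0], [272.0, 270.0, 272.0, 270.0])
```

A positive covariance (warm air moving up, cold air moving down) means the surface is heating the atmosphere.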

Figure 2: An example of sea ice as seen from the cockpit of the Twin Otter during the flight on 30 July 2022. Photo by Ian Renfrew (UEA).

Now that the field campaign has concluded, data analysis is proceeding apace. Flight observations are being compared against model data from operational weather forecasts and dedicated high-resolution simulations. While our colleagues at the University of East Anglia are analysing the observed turbulent fluxes over sea ice to improve their representation in forecast models, here at Reading we are looking at the detailed 3D structure of Arctic cyclones and at the processes driving their lifecycle. Preliminary results highlight the sharpness of the low-level wind jet present in their cold sector, with observations suggesting that jet cores are stronger and shallower than shown by current models. However, more detailed analysis is still needed to confirm these results. At the same time, novel analysis methods are being implemented on experimental model data, taking advantage of the properties of conservation and inversion of atmospheric variables such as potential vorticity and potential temperature. The aim is to isolate the contributions of individual processes, such as friction and heating, to the dynamics of the cyclone and thus highlight the effects of atmospheric-surface interaction on cyclone development.

Figure 3: Example of flight planner map (software developed by Ben Harvey, Reading) used to set up the flight route of one of the campaign flights. Background data from UK Met Office (Crown copyright).

While we surely miss the sense of adventure of our Arctic field campaign, the excitement of the scientific challenge still accompanies us as we analyse the data here in Reading and collaborate with our UK and international partners. Stay tuned if you are interested in how Arctic cyclones work, how they interact with the changing sea ice, and how Arctic weather forecasts can be improved. Results might soon be coming your way!


Posted in Arctic, Climate, Climate change, Data collection, extratropical cyclones

Two Flavours of Ocean Temperature Change and the Implication for Reconstructing the History of Ocean Warming

Introducing Excess and Redistributed Temperatures. 

By: Quran Wu

Monitoring and understanding ocean heat content change is an essential task of climate science because the ocean stores over 90% of the extra heat trapped in the Earth system. Ocean warming results in sea-level rise, which is one of the most severe consequences of anthropogenic climate change.

Ocean warming under greenhouse gas forcing is often thought of as extra heat being added at the ocean surface by greenhouse warming and then carried to depth by the ocean circulation. This one-way heat-transport picture assumes that all subsurface temperature changes are due to the propagation of surface temperature changes, and it is widely used to construct conceptual models of ocean heat uptake (for example, the two-layer model in Gregory 2000).

Recent studies, however, have found that ocean temperature change under greenhouse warming is also affected by a redistribution of the original temperature field (Gregory et al. 2016). The ocean temperature change due to the redistribution is referred to as redistributed temperature change, while that due to propagation of surface warming is referred to as excess temperature change.

A Dye Analogy

To help explain the separation of excess and redistributed temperature, let us consider a dye analogy. Heating the ocean from the surface is like adding a drop of dye into a glass of water that already has a non-uniform distribution of the same dye. After the dye injection, two things happen simultaneously. First, the newly-added dye gradually spreads into the water in the glass (excess temperature). Second, the dye injection disturbs the water and causes water motion that rearranges the original dye (redistributed temperature). Both processes contribute to changes in dye concentrations.

Climate Model Simulation

Figure 1: Time evolution of global-mean ocean temperature change (in Kelvin) under increasing greenhouse gas emissions in a climate model simulation (a). The change in (a) is decomposed into excess temperature change (b) and redistributed temperature change (c).

Excess and redistributed temperatures are both derived from thought experiments; neither can be directly observed in the real world. Here, we demonstrate their behaviour using a climate model simulation under increasing greenhouse gas emissions. The simulation shows that ocean warming starts at the surface and propagates downward gradually, reaching 500 m after 50 years (Figure 1a). The ocean warming is mostly driven by excess temperature change (compare Figure 1a with Figure 1b) but is strongly disrupted by a downward heat redistribution near the surface (cooling at the surface and warming underneath) (Figure 1c). The downward heat redistribution is caused by a reduction of ocean convection (which pumps heat upward), because surface warming stabilises the water column.


Distinguishing excess from redistributed temperature change is important because they behave in different ways. While one can reconstruct excess temperature at depths by propagating its surface change using ocean transports, the same cannot be done with redistributed temperature. This is because temperature redistribution can potentially happen anywhere in the ocean, unlike extra heat, which can only enter the ocean from the surface (under greenhouse warming). Such a distinction has important implications for estimating the history of ocean warming from surface observations.

Ocean warming is traditionally estimated by interpolating in-situ temperature measurements, which were gathered at discrete locations and times, to the global ocean. This in-situ method suffers from large uncertainty because the ocean was poorly sampled until the global deployment of Argo floats (a fleet of robotic instruments) in 2005.

A new approach to estimating ocean warming is to propagate its surface signature, that is, sea surface temperature change, downward using information about ocean transports (Zanna et al. 2019). This transport method is useful because it relies on surface observations, which have longer historical coverage than subsurface observations. However, the method ignores the fact that part of the surface temperature change is due to temperature redistribution, which does not correspond to subsurface temperature change. In a computer simulation of the historical ocean, we found that propagating sea surface temperature change results in an underestimate of the simulated ocean warming, due to redistributive cooling at the surface (as shown in Figure 1c) (Wu and Gregory 2022). This result highlights the need to isolate excess temperature change from surface observations when applying the transport method to reconstruct ocean warming.
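In discrete form, the transport method amounts to a convolution of the surface temperature history with a transport Green's function. The sketch below is an assumed one-dimensional toy version (not the implementation of Zanna et al. 2019, and the Green's function values are invented); by construction it can only reconstruct the excess component, which is exactly the limitation discussed above.

```python
def propagate_excess(surface_series, green):
    """Reconstruct subsurface excess temperature from its surface history.

    surface_series: surface excess temperature at times 0..T-1.
    green[z][lag]: fraction of a surface anomaly that has reached depth
    level z after `lag` time steps (the transport Green's function).
    Returns excess temperature at each depth level and time.
    """
    nz, T = len(green), len(surface_series)
    out = [[0.0] * T for _ in range(nz)]
    for z in range(nz):
        for t in range(T):
            # Convolve the surface history with the Green's function.
            out[z][t] = sum(green[z][lag] * surface_series[t - lag]
                            for lag in range(min(t + 1, len(green[z]))))
    return out

# Toy example: the surface level tracks the surface series instantly;
# a deeper level receives half of the surface anomaly one step later.
surface = [1.0, 1.0, 1.0]
green = [[1.0], [0.0, 0.5]]
reconstructed = propagate_excess(surface, green)
```

Feeding this function a surface series contaminated by redistributed change (e.g. the surface cooling in Figure 1c) would bias the reconstruction low, which is the underestimate reported in Wu and Gregory (2022).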


Thanks to Jonathan Gregory for reading an early version of this article and providing useful comments and suggestions.


Gregory, J. M., 2000: Vertical heat transports in the ocean and their effect on time-dependent climate change. Climate Dynamics, 16, 501–515.

Gregory, J. M., and Coauthors, 2016: The Flux-Anomaly-Forced Model Intercomparison Project (FAFMIP) contribution to CMIP6: investigation of sea-level and ocean climate change in response to CO2 forcing. Geoscientific Model Development, 9, 3993–4017.

Wu, Q., and J. M. Gregory, 2022: Estimating ocean heat uptake using boundary Green’s functions: A perfect-model test of the method. Journal of Advances in Modeling Earth Systems, 14.

Zanna, L., S. Khatiwala, J. M. Gregory, J. Ison, and P. Heimbach, 2019: Global reconstruction of historical ocean heat storage and transport. Proceedings of the National Academy of Sciences, 116, 1126–1131.


Posted in Climate, Climate change, Climate modelling, Oceans

Using Old Ships To Do New Science

By: Praveen Teleti

Weather Rescue at Sea: its goals and progress update.

Observing the environment around us is fundamental to learning about and understanding the natural world. Before the Renaissance, everyday weather was thought to be the work of divine or supernatural forces, and hence beyond human comprehension. Trying to understand the weather was considered so futile that an indecisive or fickle-minded person was called a weathercock, able to turn any way without reason. In some quarters, efforts to hypothesise rules of the atmosphere, let alone forecast the weather, were considered heretical and blasphemous.

However, weather played a significant role in day-to-day life: the timing of sowing and harvesting, the well-being of cattle and other domesticated animals, trade and commerce, even the outcomes of conflicts. The treatise on weather written by the Greek philosopher Aristotle in 340 BC was forgotten, and no gains were made in the understanding of the subject until the 17th-18th century. Weather phenomena were too abstract to comprehend without the systematic accumulation of weather observations, which became possible only after the invention of weather instruments.

Figure 1: The average number of observations recorded per month for each year in the ICOADS (International Comprehensive Ocean-Atmosphere Data Set) dataset; the sizes of the data points are proportional to the percent of the oceans covered by observations that year.

Due to the precarious nature of life at sea, mariners started observing and recording the weather several times a day, as recognising potential tempests in the vicinity and moving away could save their ship and their lives. Taking precautionary action also made commercial sense, reducing loss of or damage to goods in transit. Ship owners and insurance providers encouraged, and later mandated, that weather observations be taken and recorded in an orderly fashion so as to derive long-term benefit from them.

Sharing weather information was beneficial to all ships, irrespective of nationality or the nature of the companies operating them. However, no common method or units for measuring the weather existed, which made observations from different ships incompatible. To solve this problem of incompatibility, a maritime conference between the major European powers took place in Brussels in 1854.

At the 1854 maritime conference it was proposed to standardise the methods of taking observations and keeping logbooks, which led to an increase in the number of usable observations from 1854 onwards. Around the same time, the sinking of the Royal Charter in a storm off the north coast of Anglesey in October 1859 inspired Vice-Admiral Robert FitzRoy to develop weather charts, which he described as “forecasts”; thus the Met Office was born. He used the telegraphic network of weather stations around the British Isles to synthesise the current state of the weather.
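To illustrate the kind of standardisation the conference made possible (a simplified example of my own; real data-rescue conversions are more involved), observations logged in different units must be reduced to a common standard, e.g. temperatures to Celsius and Beaufort wind force to metres per second via the common empirical relation v = 0.836 * B^1.5:

```python
def fahrenheit_to_celsius(f):
    """Convert a logged Fahrenheit temperature to Celsius."""
    return (f - 32.0) * 5.0 / 9.0

def beaufort_to_mps(b):
    """Approximate wind speed (m/s) from Beaufort force, using the
    widely used empirical relation v = 0.836 * B**1.5."""
    return 0.836 * b ** 1.5
```

With every logbook reduced to the same units, observations from different ships and nations become directly comparable.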

There is a scientific interest in understanding the climate of the early industrial era, against which our present climate can be measured. Invaluable data from many hundreds of thousands of such ship journeys can be used to inform and estimate the changes that have occurred over many decades. Data rescue (transcribing hand-written observations into a computer-readable digital format) of historical logbooks has been taking place for decades, but for individual researchers to manually transcribe an almost inexhaustible number of logbooks would take thousands of human lifetimes.

As a result, large gaps have remained in our knowledge of the climate, both in space and time. The 19th century has fewer observations available than the 20th century in the world’s largest meteorological observation dataset, ICOADS version 3 (International Comprehensive Ocean-Atmosphere Data Set, Freeman et al. 2017). On closer inspection, the average number of monthly observations and the percentage of global coverage in the 1860s and 1870s are poor compared to other decades after 1850 (Figure 1).

With this context, the Weather Rescue At Sea project was launched, using the citizen-science Zooniverse platform to recover some of these observations and make them usable, with a focus on ships travelling through the Atlantic, Indian and Pacific Ocean basins in the 1860s and 1870s. Filling these gaps in our knowledge will remove ambiguity about how the climate varied historically in many regions where observations are currently poor or non-existent.

The data generated through this project will help fill many crucial gaps in the large climate datasets (e.g., ICOADS), which will be used to generate new estimates of the industrial and pre-industrial era baseline climate. More generally, these data and data from other historical sources are used to improve the models and reanalysis systems used for climate and weather research. We need your help to data-rescue these weather observations so that scientists can analyse them to better understand changes in the climate since then and to forecast changes in the future.

Figure 2: Ship tracks of some of the ships recovered through WRS data-rescue project 

Progress so far: of the 248 ship logbooks used for this project, 213 are more than 80% finished, while 35 are complete, meaning that all positional and meteorological observations (e.g., sea-level pressure, air temperature, sea-water temperature, wind speed and direction) in those 35 logbooks have been transcribed (Figure 2). To date, more than two million dates, positions and weather observations have been transcribed.

We need your help to get this project across the finish line; let us give a final push to complete all the logbooks. Check the poster below to volunteer.


Freeman, E., S.D. Woodruff, S.J. Worley, S.J. Lubker, E.C. Kent, W.E. Angel, D.I. Berry, P. Brohan, R. Eastman, L. Gates, W. Gloeden, Z. Ji, J. Lawrimore, N.A. Rayner, G. Rosenhagen, and S.R. Smith, 2017: ICOADS Release 3.0: A major update to the historical marine climate record. Int. J. Climatol. (CLIMAR-IV Special Issue), 37, 2211-2237 (doi:10.1002/joc.4775).

Posted in Climate, Data collection, Data rescue, Historical climatology, Reanalyses

Including Human Behaviour in Models to Understand the Impact of Climate Change on People

By: Megan McGrory

In 2020, 56% of the global population lived in cities and towns, which accounted for two-thirds of global energy consumption and over 70% of CO2 emissions. The share of the global population living in urban areas is expected to rise to almost 70% by 2050 (World Energy Outlook 2021). This rapid urbanization is happening at the same time as climate change is becoming an increasingly pressing issue. Urbanization and climate change directly impact each other and amplify the already-large impact of climate change on our lives. Urbanization dramatically changes the landscape, with an increased volume of buildings and paved/sealed surfaces, and therefore the surface energy balance of a region. The introduction of more buildings, roads and vehicles, and a large population density all have dramatic effects on the urban climate; to fully understand how these impacts intertwine with those of climate change, it is key to model the urban climate correctly.

Modelling an urban climate involves a number of unique challenges and considerations. The anthropogenic heat flux (QF) is an aspect of the surface energy balance unique to urban areas. Modelling it requires input data on the heat released from activities linked to three components of QF: buildings (QF,B), transport (QF,T) and human/animal metabolism (QF,M). All of these are shaped by human behaviour, which is a challenge to predict, as it varies with many factors, and typical behaviour can change in response to unexpected events such as transport strikes or extreme weather conditions, both of which are increasingly relevant concerns in the UK.

DAVE (Dynamic Anthropogenic actiVities and feedback to Emissions) is an agent-based model (ABM) being developed as part of the ERC urbisphere and NERC APEx projects to model QF and the impacts of other emissions (e.g. on air quality) in various cities across the world (London, Berlin, Paris, Nairobi, Beijing, and more). Here, we treat city spatial units (500 m x 500 m, Figure 1) as the agents of the model. Each spatial unit holds properties related to the buildings and citizen presence (at different times) in the grid. QF can be calculated for each spatial unit by combining the energy emissions from QF,B, QF,T and QF,M within a grid. As human behaviour modifies these fluxes, the calculation needs to capture the spatial and temporal variability of people’s activities as they change in response to both ‘normal’ routines and unexpected events.
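The per-grid-cell bookkeeping just described can be sketched minimally (the field names below are my own, not DAVE's): the total anthropogenic heat flux of a spatial unit is simply the sum of its building, transport and metabolic components.

```python
def total_qf(grid):
    """Total anthropogenic heat flux per spatial unit.

    grid: dict mapping a cell id to its component fluxes (W m^-2),
    e.g. {"cell_0": {"QF_B": 20.0, "QF_T": 5.0, "QF_M": 1.5}}.
    Returns a dict of total QF per cell.
    """
    return {cell: f["QF_B"] + f["QF_T"] + f["QF_M"] for cell, f in grid.items()}

# One illustrative 500 m x 500 m cell (values invented).
totals = total_qf({"cell_0": {"QF_B": 20.0, "QF_T": 5.0, "QF_M": 1.5}})
```

In the real model each component is itself time-varying, driven by the simulated activities of the citizens present in the cell.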

To run DAVE for London (as a first test case, with other cities to follow), extensive data mining has been carried out to model typical human activities and their variable behaviour as accurately as possible. The variation in building morphology (or form) and function, the many different transport systems, meteorology, and data on typical human activities, are all needed to allow human behaviour to drive the calculation of QF, incorporating dynamic responses to environmental conditions.

DAVE is a second-generation ABM; like its predecessor, it uses time use surveys to generate the statistical probabilities which govern the behaviour of modelled citizens (Capel-Timms et al. 2020). The time use survey diarists document their daily activities every 10 minutes. Travel and building energy models are incorporated to calculate QF,T and QF,B. The building energy model, STEBBS (Simplified Thermal Energy Balance for Building Scheme) (Capel-Timms et al. 2020), takes into account the thermal characteristics and morphology of the building stock in each 500 m x 500 m spatial unit in London. The energy demand linked to the different activities people carry out (informed by the time use surveys) determines building energy use, from which the anthropogenic heat flux from buildings is calculated (Liu et al. 2022).
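As a rough sketch of how 10-minute time use diaries can be turned into probabilities that drive modelled citizens, one could build a transition table and sample an activity sequence from it. The activities and probabilities below are invented for illustration; they are not the actual values or structure used in DASH or DAVE:

```python
import random

# Hypothetical transition probabilities, of the kind one might derive from
# time use survey diaries: P(next activity | current activity) per 10-min step.
TRANSITIONS = {
    "sleep":  {"sleep": 0.95, "home": 0.05},
    "home":   {"home": 0.80, "travel": 0.15, "sleep": 0.05},
    "travel": {"travel": 0.60, "work": 0.40},
    "work":   {"work": 0.90, "travel": 0.10},
}

def simulate_day(start="sleep", steps=144, seed=1):
    """Sample one citizen's activity sequence (144 ten-minute steps = 24 h)."""
    rng = random.Random(seed)
    activity, sequence = start, [start]
    for _ in range(steps - 1):
        options = TRANSITIONS[activity]
        activity = rng.choices(list(options), weights=list(options.values()))[0]
        sequence.append(activity)
    return sequence

day = simulate_day()
print(len(day), day[:6])
```

Each simulated activity would then feed the building and transport energy models to produce time-varying fluxes.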

The transport model uses information about access to public transport (e.g. Fig. 1). As expected, grid cells closer to stations have a higher percentage of people using that travel mode. Other data used include road densities, travel costs, and information on vehicle ownership and travel preferences, which are used to assign transport options to the modelled citizens when they travel.

Figure 1: Location of tube, train and bus stations/stops (dots) in London (500 m x 500 m grid resolution) with the relative percentage of people living in each grid cell who use that mode of transport (colour; lighter indicates a higher percentage). Original data sources: ONS (2014), TfL (2022)

An extensive amount of analysis and pre-processing of data is needed to run the model, but this provides a rich resource for multiple MSc and undergraduate student projects (past and current) analysing different aspects of the building and transport data. For example, a current project is modelling people’s exposure to pollution, informed by data such as that shown in Fig. 2, linked with moving to and between different modes of transport between home and work/school. This can indicate which areas should be used or avoided to reduce the risk of health problems from exposure to air pollution.

Figure 2:  London (500 m x 500 m resolution) annual mean NO2 emissions (colour) with Congestion Charge Zone (CCZ, blue) and Ultra Low Emission Zone (ULEZ, pink).  Data source: London Datastore, 2022

Future development and use of the model DAVE will allow for the consideration of many more unique aspects of urban environments and their impacts on the climate and people.

Acknowledgements: Thank you to Matthew Paskin and Denise Hertwig for providing the Figures included.


Capel-Timms, I., S. T. Smith, T. Sun, and S. Grimmond, 2020: Dynamic Anthropogenic activitieS impacting Heat emissions (DASH v1.0): Development and evaluation. Geoscientific Model Development, 13, 4891–4924

London Datastore, 2022: Greater London Authority, London Atmospheric Emissions Inventory 2019.

International Energy Agency, 2021: World Energy Outlook 2021. (Accessed January 2023)

Liu, Y., Z. Luo, and S. Grimmond, 2022: Revising the definition of anthropogenic heat flux from buildings: role of human activities and building storage heat flux. Atmospheric Chemistry and Physics, 22, 4721–4735

ONS, 2014: Office for National Statistics, WU03UK – Location of usual residence and place of work by method of travel to work (Accessed August, 2022).

TfL, 2022: Transport for London timetables, (Accessed July 2022)

Posted in Climate, Climate change, Climate modelling, Urban meteorology

Making Flights Smoother, Safer, and Greener

By: Paul Williams

Atmospheric turbulence is the leading cause of weather-related injuries to air passengers and flight attendants. Bumpy air is estimated to cost the global aviation sector up to $1bn annually, and evidence suggests that climate change is causing turbulence to strengthen. For all these reasons, improving turbulence forecasts is essential for the continued comfort and safety of air travellers.

Clear-air turbulence is particularly hazardous to aviation because it is undetectable by on-board radar. A previously unrecognised mechanism that we proposed is now thought to be a significant source of clear-air turbulence. That mechanism is localised instabilities initiated by gravity waves that are spontaneously emitted by the atmosphere. Several years ago, we set out to use this knowledge to develop a practical turbulence-forecasting algorithm. Our method works by analysing the atmosphere and using a set of equations to identify the regions where the winds are becoming unbalanced, leading to the production of gravity waves and ultimately turbulence.
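The imbalance-based method itself involves model-specific equations, but the general idea of diagnosing where localised instabilities can occur is illustrated by a much simpler, classic indicator – the gradient Richardson number, where values below about 0.25 flag layers prone to shear instability. To be clear, this is a standard textbook diagnostic, not the algorithm described in this post, and the profile values are invented:

```python
import numpy as np

def richardson_number(theta, u, v, z, g=9.81):
    """Gradient Richardson number Ri = N^2 / S^2 on layer midpoints.

    Low Ri (< ~0.25) flags layers where shear instability, and hence
    clear-air turbulence, is possible.
    """
    dz = np.diff(z)
    theta_mid = 0.5 * (theta[:-1] + theta[1:])
    n2 = g / theta_mid * np.diff(theta) / dz               # static stability N^2
    s2 = (np.diff(u) / dz) ** 2 + (np.diff(v) / dz) ** 2   # vertical wind shear S^2
    return n2 / s2

z = np.array([9000.0, 9500.0, 10000.0])   # height (m), illustrative profile
theta = np.array([330.0, 331.0, 332.0])   # potential temperature (K)
u = np.array([30.0, 45.0, 60.0])          # strong vertical wind shear (m/s)
v = np.zeros(3)
print(richardson_number(theta, u, v, z))  # well below 0.25: turbulence possible
```

Operational diagnostics combine many such indicators; the imbalance approach adds information about where gravity waves are being generated in the first place.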

We conducted some initial tests on the accuracy of our forecasting algorithm, with promising results. At that time, the US Federal Government’s goals for aviation turbulence forecasting were not being achieved, either by automated systems or by experienced human forecasters, but our algorithm came tantalisingly close. We published our results, concluding that major improvements in clear-air turbulence forecasting could result if our method were to become operational.

Rough air has long plagued the global aviation sector. Tens of thousands of aircraft annually encounter turbulence strong enough to throw unsecured objects and people around inside the cabin. On scheduled commercial flights involving large airliners, official statistics indicate that several hundred passengers and flight attendants are injured every year, but because of under-reporting we know that the real injury rate is probably in the thousands.

Turbulence also has consequences for the environment, by causing excessive fuel consumption and CO2 emissions. Up to two-thirds of flights deviate from the most fuel-efficient altitude because of turbulence. This wastes fuel and it contributes to climate change through unnecessary CO2 emissions. At a time when we are all concerned about aviation’s carbon footprint, reducing turbulence encounters represents an attractive opportunity to help make flying greener.

Furthermore, climate change is expected to make turbulence much worse in future. In particular, our published projections indicate that there will be hundreds of per cent more turbulence globally by 2050–2080. These findings underline the increasingly urgent need to develop better aviation turbulence-forecasting techniques.

It is therefore excellent news for air travellers that our improved turbulence-forecasting algorithm is now being used operationally by the Aviation Weather Center (AWC) in the National Weather Service (NWS), which is the US equivalent of the Met Office. The turbulence forecasts are freely available via an official US government website. They forecast turbulence up to 18 hours ahead, updated hourly. Our algorithm is the latest in a basket of diagnostics that are optimally combined to produce the final published forecast.

Every day since 20 October 2015, turbulence forecasts made with our algorithm have been used in flight planning by commercial and private pilots, flight dispatchers, and air-traffic controllers. They are benefiting from advance knowledge of the locations of turbulence, with greater accuracy than ever before, allowing flight routes through smooth air to be planned. Pilots and air-traffic controllers are benefiting from a reduced workload, because unexpected turbulence results in burdensome re-routing requests. Airlines are benefiting from fewer unplanned diversions around turbulence and reduced fuel costs and emissions associated with those diversions.

To date, our algorithm has helped improve the comfort and safety of air travel on billions of passenger journeys. Our algorithm has won several awards recently, but the real prize is the knowledge that it is making a difference to people’s lives every day. In the time it has taken you to read this article, thousands of passengers have taken to the skies and are benefiting from smoother, safer, and greener flights.


Williams, P. D. and Storer, L. N. (2022) Can a climate model successfully diagnose clear-air turbulence and its response to climate change? Quarterly Journal of the Royal Meteorological Society, 148(744), pp 1424-1438. doi:10.1002/qj.4270

REF (2021) Improved turbulence forecasts for the aviation sector, Research Excellence Framework (REF) Impact Case Study, on-line at

Lee, S. H., Williams, P. D. and Frame, T. H. A. (2019) Increased shear in the North Atlantic upper-level jet stream over the past four decades. Nature, 572(7771), pp 639-642. doi:10.1038/s41586-019-1465-z

Williams, P. D. (2017) Increased light, moderate, and severe clear-air turbulence in response to climate change. Advances in Atmospheric Sciences, 34(5), pp 576-586. doi:10.1007/s00376-017-6268-2

McCann, D. W., Knox, J. A. and Williams, P. D. (2012) An improvement in clear-air turbulence forecasting based on spontaneous imbalance theory: the ULTURB algorithm. Meteorological Applications, 19(1), pp 71-78. doi:10.1002/met.260

Posted in aviation, Climate, Environmental hazards, Turbulence

From Ürümqi to Minneapolis: Clustering City Climates with Self-Organising Maps

By: Niall McCarroll

As a Research Software Engineer, my job involves developing, testing and maintaining software that scientists can use to analyse earth observation and climate data.  Recently I’ve been developing some software that can be used to visualise climate data.  A Self-Organising Map is an artificial neural network algorithm invented in the 1980s by Finnish scientist Teuvo Kohonen.   Artificial neural networks are computer programs which attempt to replicate the interconnection of neurons in the brain in order to learn to recognise patterns in input data.  The Self-Organising Map algorithm helps us compare items that are described by a list of many data values, by plotting them on a two-dimensional map such that items that have similar lists of data values appear closer together on the map.  By doing so, we are clustering similar items together.

To help me test the software I chose a simple example task to solve, in a domain that I can easily understand. Suppose that we would like to compare the climates of many different cities.  City location data was obtained from  We can obtain climate data from the global meteorological dataset ERA5 released by the European Centre for Medium Range Weather Forecasts (ECMWF).  ERA5 includes mean monthly estimates of air temperatures over land (Muñoz Sabater, J., 2019).  From this we can calculate the monthly mean temperatures from a 20km square area containing each city we’d like to compare, for the years from 2000 to 2021.  I prepared a dataset of 120 large cities with the series of 12 monthly mean temperatures at their locations from the ERA5 data.

We could easily base our climate comparison on single data values, for example the mean annual temperature around each city, but that would miss some important differences.  For example, Belo Horizonte (Brazil) and Houston (USA) have very similar annual mean temperatures according to this dataset, but widely different seasonal variations in their temperatures – we could not say that they enjoyed a similar climate.
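A small synthetic example (invented numbers, not the ERA5 values for those cities) makes the point: two temperature series can share an identical annual mean while their monthly vectors remain far apart:

```python
import numpy as np

months = np.arange(12)
# Strong seasonal cycle (continental-style) versus almost none (tropical-style),
# constructed so both have the same 15 degC annual mean.
seasonal = 15 + 12 * np.cos(2 * np.pi * (months - 6) / 12)
flat = np.full(12, 15.0)

print(seasonal.mean(), flat.mean())       # identical annual means
print(np.linalg.norm(seasonal - flat))    # but the monthly vectors are far apart
```

Comparing the full 12-value vectors, as the Self-Organising Map does, captures this difference; comparing annual means alone does not.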

Instead, we can use the Self-Organising Map algorithm on this data to plot each city onto a “climate map” (Figure 1), where cities that have similar monthly mean temperature patterns should be clustered closer together.  The original location of cities on a conventional world map is ignored.  You’ll see that the climate map is divided into hexagonal cells, to which cities are allocated by the algorithm.  I have coloured each cell according to the mean annual temperature of the cities placed by the algorithm into that cell.  Blank cells simply have no cities from the test dataset allocated to them – they cannot be interpreted as representing areas like oceans or ice caps on a conventional map, where cities cannot exist.

To test the software, we need to consider whether the algorithm has made a reasonable attempt to place the cities from our dataset into clusters in our climate map.  For those cities with which I am familiar, the map does appear to have clustered cities with similar temperature patterns together.  The map colours indicate larger regions, made up of multiple cells, containing generally warmer or cooler climates.  In most but not all cases, cities from the same original region appear nearby in the new map – intuitively we would expect this.
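To make the algorithm concrete, here is a minimal numpy sketch of a Self-Organising Map on a rectangular (rather than hexagonal) grid, trained on toy 12-value “monthly temperature” vectors. This is a simplified illustration of the algorithm, not the software described in this post:

```python
import numpy as np

def train_som(data, rows=4, cols=4, steps=500, seed=0):
    """Minimal Self-Organising Map: repeatedly pick an item, find its
    best-matching node, and pull that node (and its neighbours) towards it."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.normal(size=(rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(steps):
        lr = 0.5 * (1 - t / steps)                     # decaying learning rate
        sigma = max(1.0, rows / 2 * (1 - t / steps))   # shrinking neighbourhood
        x = data[rng.integers(n)]
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                               (rows, cols))           # best-matching unit
        d2 = ((grid - np.array(bmu)) ** 2).sum(-1)     # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2))             # Gaussian neighbourhood
        weights += lr * h[..., None] * (x - weights)
    return weights

def map_items(data, weights):
    """Assign each item to the grid cell of its best-matching node."""
    flat = weights.reshape(-1, weights.shape[-1])
    idx = ((flat[None] - data[:, None]) ** 2).sum(-1).argmin(1)
    return np.stack(np.unravel_index(idx, weights.shape[:2]), axis=1)

# Toy data: two well-separated clusters of 12-monthly-temperature-like vectors.
rng = np.random.default_rng(1)
warm = 25 + rng.normal(0, 1, (10, 12))
cold = 0 + rng.normal(0, 1, (10, 12))
data = np.vstack([warm, cold])
w = train_som(data)
cells = map_items(data, w)   # warm and cold items land in different map regions
```

Items with similar vectors end up in the same or neighbouring cells, which is exactly the clustering behaviour being tested with the city dataset.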

We can plot the temperature patterns for cities that are clustered close together in the new map and check that the patterns are similar.  This gives us some confidence that the software may be working as expected.  Figure 2 shows plots for the two cities, Minneapolis (USA) and Ürümqi (China), located in the same cell (highlighted in Figure 1) in our self-organising map.  You can see that the variations in monthly mean temperatures are similar.

This simple dataset has been useful for testing my implementation of the Self-Organising Map algorithm.  For a more realistic comparison of climates as we experience them, we would need to expand our dataset to consider other variables such as rainfall, snowfall, wind, humidity and consider how temperatures vary between day and night.   I hope this post has helped to explain what Self-Organising Maps can be useful for, in the context of understanding climate data.


The Muñoz Sabater (2019) data were downloaded from the Copernicus Climate Change Service (C3S) Climate Data Store.

The results contain modified Copernicus Climate Change Service information 2023. Neither the European Commission nor ECMWF is responsible for any use that may be made of the Copernicus information or data it contains.


Muñoz Sabater, J., 2019: ERA5-Land monthly averaged data from 1981 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS), accessed 06 January 2023,



Posted in Climate, Data Visualisation, Machine Learning

How On Earth Do We Measure Photosynthesis?

By: Natalie Douglas

Photosynthesis is a biological process that removes carbon (in the form of carbon dioxide) from the atmosphere and is therefore a key process in determining the pace of climate change. So, how do we measure it so that we can use it in climate modelling? The answer is, in short: we don’t.

Photosynthesis is the process by which green plants absorb carbon dioxide (CO2) and water and use sunlight to synthesise the nutrients required to sustain themselves. Since plants absorb CO2, and generate oxygen as a by-product, the rate at which they do so is a fundamental atmospheric quantity and plays a critical role in climate change. In climate science, we refer to this rate as Gross Primary Productivity or GPP. It is typically measured in kg m-2 s-1, that is, kilograms of carbon per square metre per second. But why do we need to know this? Climate models, also known as General Circulation Models (GCMs), divide the Earth’s surface into three-dimensional grid cells that typically have a horizontal spatial resolution of 100 km by 150 km at mid-latitudes. Using supercomputers, a set of mathematical equations that govern ocean, atmosphere and land processes is solved, and the results are passed between neighbouring cells to model the exchange of matter (such as carbon) and energy over time [1]. Fundamental to their solution are what we call initial conditions (the state of the climate variables at the start of the model run) and boundary conditions (the state of the required variables at the land surface). Due to the sheer complexity of the processes involved, we require another type of model to provide the latter – land surface models.

It isn’t possible to simply measure photosynthesis; an instrument that quantifies the amount of carbon a plant absorbs from the atmosphere doesn’t actually exist. There are, however, eddy covariance towers that are capable of measuring carbon fluxes at a given location. These towers are sparsely distributed, but they do provide good flux estimates where they stand. If it were possible to provide eddy covariance fluxes at all grid locations, say at their centres, this would suffice for a GCM, but since this is completely infeasible, we need land surface models. The Joint UK Land Environment Simulator, or JULES, is the UK’s land surface component of the Met Office’s Unified Model, used for both weather and climate applications [2], [3]. Before JULES can model carbon fluxes it requires an ensemble of information including surface type, weather and soil characteristics, model parameter values, and its own initial conditions.  A module within JULES is then able to calculate the carbon uptake at the surface boundary of the grid cell based on the number of leaves within the grid cell, the difference in CO2 concentration between the leaf surface and the atmosphere, and several limiting factors such as light availability and soil moisture [4]. Figure 1 shows the monthly average of GPP for June 2017 as modelled by JULES.

Figure 1: Monthly average GPP for June 2017 as modelled by JULES.
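A heavily simplified sketch of the limiting-factor idea: in Collatz-type photosynthesis schemes the leaf-level rate is taken as the minimum of several limiting rates, which is then scaled up to the canopy and reduced by stress factors such as soil moisture. The function names and numbers below are illustrative only; JULES itself uses smoothed minima and many more inputs:

```python
def leaf_photosynthesis(light_limited, rubisco_limited, export_limited):
    """Leaf-level gross photosynthesis as the minimum of three limiting rates
    (a hard minimum here; real schemes use smoothed minima)."""
    return min(light_limited, rubisco_limited, export_limited)

def canopy_gpp(leaf_rate, lai, soil_moisture_factor):
    """Scale the leaf rate to the canopy (illustrative, not the JULES code).

    leaf_rate: carbon uptake per unit leaf area
    lai: leaf area index (m2 leaf per m2 ground)
    soil_moisture_factor: 0-1 stress factor
    """
    return leaf_rate * lai * soil_moisture_factor

leaf = leaf_photosynthesis(8e-6, 12e-6, 10e-6)  # light-limited in this example
print(canopy_gpp(leaf, lai=3.0, soil_moisture_factor=0.8))
```

Whichever resource is scarcest – light, enzyme capacity, or water – sets the rate, which is why GPP responds so strongly to drought and cloud cover.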

Earth Observation (EO) plays a crucial role in current climate research. There are numerous satellites in space capturing various characteristics of the Earth’s surface at regular intervals and at different spatial resolutions. Scientists transform this data, using mathematics, into the required variables. For example, NASA’s MODIS (MODerate resolution Imaging Spectroradiometer) satellites measure light in various wavelengths, and a team of scientists converts this data into an 8-day GPP product [5]. Neither models nor EO data are 100% accurate when it comes to determining the variables required for land surface and climate models, and so much of today’s research focuses on combining both sets of information in a method called Data Assimilation (DA). Using mathematics again, DA methods take both model estimates and observations, as well as information regarding their uncertainty, to find an optimal guess of the ‘true’ state of the variables. These methods allow us to get a better picture of the current and future states of our planet.
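The core of the DA idea can be shown with the textbook scalar case: the analysis is a weighted average of the model background and the observation, with weights set by their error variances. This is a minimal sketch of the general principle, not the code of any particular DA system:

```python
def blue_update(xb, y, sigma_b, sigma_o):
    """Optimal (minimum-variance) combination of a model background xb and an
    observation y of the same scalar quantity.

    sigma_b, sigma_o: background and observation error standard deviations.
    Returns the analysis and its error variance.
    """
    k = sigma_b**2 / (sigma_b**2 + sigma_o**2)  # gain: trust ratio
    xa = xb + k * (y - xb)                      # analysis value
    sigma_a2 = (1 - k) * sigma_b**2             # analysis error variance
    return xa, sigma_a2

xa, var_a = blue_update(xb=10.0, y=12.0, sigma_b=2.0, sigma_o=1.0)
print(xa, var_a)  # the analysis lies closer to the more accurate observation
```

The analysis error variance is always smaller than either input variance, which is precisely why combining model and EO data gives a better picture than either alone.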





[4] M. J. Best et al, ‘The Joint UK Land Environment Simulator (JULES), model description – Part 1: Energy and water fluxes’, Geoscientific Model Development, Vol. 4, 2011, (677-699).


Posted in Climate, Climate modelling, earth observation

Using ChatGPT in Atmospheric Science

By: Mark Muetzelfeldt

ChatGPT is amazing. Seriously. Go try it: So what is it? It is an artificial intelligence language model that has been trained on vast amounts of data, turning this into an internal representation of the structure of the language used and a knowledge base that it can use to answer questions. From this, it can hold human-like conversations through a text interface. But that doesn’t do it justice. It feels like a revolution has happened, and that ChatGPT surpasses the abilities of previous generations of language AIs to the point where it represents a leap forwards in terms of natural interactions with computers (compare it with pretty much any chatbot that answers your questions on a website). It seems to be able to understand not just precise commands, but vaguer requests and queries, as well as having an idea about what you mean when you ask it to discuss or change specific parts of its previous responses. It can produce convincing stories and essays on a huge variety of topics. It can write poems, CVs and cover letters, and tactful emails, as well as producing imagined conversations. With proper prompting, it can even help generate a fictitious language.

It has one more trick up its sleeve: it can generate functional computer code in a variety of languages from simple text descriptions of the problem. For example, if you prompt it with “Can you write a python program that prints the numbers one to ten?”, it will produce functional code (side-stepping some pitfalls like getting the start/end numbers right in range), and can modify its code if you ask it not to use a loop and use numpy.
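For reference, loop-free solutions to that prompt might look like the following (one possible answer, not ChatGPT's verbatim output):

```python
# "Print the numbers one to ten" without an explicit loop:
print(*range(1, 11), sep="\n")   # argument unpacking, no loop

import numpy as np
print(np.arange(1, 11))          # numpy version: [ 1  2  3  4  5  6  7  8  9 10]
```

Note that both avoid the classic off-by-one pitfall: `range(1, 11)` and `np.arange(1, 11)` stop *before* 11, so the last number printed is 10.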

But this really just scratches the surface of its coding abilities: it can produce Python astrophoto processing code (including debugging an error message), Python file download code, and an RStats shiny app.

All of this has implications for academia in general, particularly for the teaching and assessment of students. Its ability to generate short essays on demand on a variety of topics could clearly be used to answer assignment questions. As the answer is not directly copied from one source, it will not be flagged as plagiarism by tools such as Turnitin. Its ability to generate short code snippets from simple prompts could be used on coding assignments. If used blindly by a student, both of these would detrimentally shortcut the student’s learning process. However, it also has the potential to be used as a useful tool in the writing and coding processes. Let’s dive in and see how ChatGPT can be used and misused in academia.

ChatGPT as a scientific writing assistant

To get a feel for ChatGPT’s ability to write short answers on questions related to atmospheric science, let’s ask it a question on a topic close to my own interests – mesoscale convective systems:

ChatGPT does a decent job of writing a suitable first paragraph for an introduction to MCSs. You could take issue with the “either linear or circular in shape” phrase, as they come in all shapes and sizes and this wording implies one or the other. Also, “short-lived”, followed by “a couple of days”, does not really make sense.

Let’s probe its knowledge of MCSs, by asking what it can tell us about the stratiform region:

I am not sure where it got the idea of “low-topped” clouds from – this is outright wrong. The repetition of “convective” is not ideal as it adds no extra information. However, in broad strokes, this gives a reasonable description of the stratiform region of MCSs. Finally, here is a condensed version of both responses together, which could reasonably serve as the introduction to a student report on MCSs (after it had been carefully checked for correctness).

There are no citations – this is a limitation of ChatGPT. A similar language model, Galactica, has been developed to address this and have a better grasp of scientific material, but it is currently offline. Furthermore, ChatGPT has no knowledge of the underlying physics, other than that the words it used are statistically likely to describe an MCS. Therefore, its output cannot be trusted or relied upon to be correct. However, it can produce flowing prose, and could be used as a way of generating an initial draft of some topic area.

Following this idea, one more way that ChatGPT can be used is by feeding it text, and asking it to modify or transform it in some way. When I write paper drafts, I normally start by writing a Latex bullet-point paper – with the main points in ordered bullet points. Could I use ChatGPT to turn this into sensible prose?

Here, it does a great job. I can be pretty sure of its scientific accuracy (at least – any mistakes will be mine!). It correctly keeps the Latex syntax where appropriate, and turns these into fluent prose.

ChatGPT as a coding assistant

One other capability of ChatGPT is its ability to write computer code. Given sparse information about roughly the kind of code the user wants, ChatGPT will write code that can perform specific tasks. For example, I can ask it to perform some basic analysis on meteorological data:

It gets a lot right here: reading the correct data, performing the unit conversion, and labelling the clouds. But there is one subtle bug – if you run this code it will not produce labelled clouds (setting the threshold should be done using precipitation.where(precipitation > threshold, 0)). This illustrates its abilities as well as its shortcomings – it will confidently produce subtly incorrect code. When it works, it is magical. But when it doesn’t, debugging could take far longer than writing the code yourself.
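A toy numpy/scipy analogue of the corrected step makes the fix concrete (the post's actual code used xarray's `.where` method on real data; the field below is synthetic):

```python
import numpy as np
from scipy import ndimage

# Illustrative precipitation field (mm/hr) -- not the data from the post.
precip = np.array([
    [0.0, 2.5, 3.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [4.0, 0.0, 0.0, 1.5],
])
threshold = 1.0

# The fix described above: keep values above the threshold, set the rest to 0
# (ChatGPT's version silently left sub-threshold values in place).
masked = np.where(precip > threshold, precip, 0.0)

# Label connected regions of above-threshold precipitation as "clouds".
labels, n_clouds = ndimage.label(masked)
print(n_clouds)  # 3 separate clouds in this toy field
```

With the buggy thresholding, every non-zero pixel survives and the cloud count comes out wrong; with the corrected mask, only genuinely precipitating regions are labelled.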

The final task I tried was seeing if ChatGPT could manage a programming assignment from an “Introduction to Python” course that I demonstrated on. I used the instructions directly from the course handbook, with the only editing being that I stripped out any questions to do with interpretation of the results:Here, ChatGPT’s performance was almost perfect. This was not an assessed assignment, but ChatGPT would have received close to full marks if it were. This is a simple, well-defined task, but it demonstrates that students may be able to use it to complete assignments. There is always the chance that the code it produces will contain bugs, as above, but when it works it is very impressive.


ChatGPT already shows promise at performing mundane tasks and generating useful drafts of text and code. However, its output cannot yet be trusted, and must be checked carefully for errors by someone who understands the material. As such, if students use it to generate text or code, they may deceive themselves that what they have is suitable, only for it to fail the test when read by an examiner or a compiler. For examiners, there may well be tell-tale signs that text or code has been produced by ChatGPT. In its base incarnation, it produces text that seems (to me) slightly generic and may contain give-away factual errors. When producing code, it may well produce (incredibly clean and well commented!) code that contains structures or uses libraries that have not been specifically taught in the course. Neither of these is definitive proof that ChatGPT has been used. Even if ChatGPT has been used, it may not be a problem. Provided its output has been carefully checked, it is a tool that has the ability to write fluent English, and might be useful to, for example, foreign language students.

Here, I’ve only scratched the surface of ChatGPT’s capabilities and shortcomings. It has an extraordinary grasp of language, but does not fully understand the meaning behind its words or code, far less the physical explanations of processes that form MCSs. This can lead it to confidently assert the wrong thing. It also has a poor understanding of numbers, presumably built up from statistical inference from its training database, and will fail at standard logical problems. It can however perform remarkable transformations of inputs, and generate new lists and starting points for further refinement. It can answer simple questions, and some seemingly complex ones – but can its answer be trusted? For this to be the case, it seems to me that it will need to be coupled to some underlying artificial intelligence models of: logic, physics, arithmetic, physical understanding, common sense, critical thinking, and many more. It is clear to me that ChatGPT and other language models are the start of something incredible, and that they will be used for both good and bad purposes. I am excited, and nervous, to see how it will develop in the coming months and years.


Posted in Academia, Artificial Intelligence, Climate, Students, Teaching & Learning