Using ChatGPT in Atmospheric Science

By: Mark Muetzelfeldt

ChatGPT is amazing. Seriously. Go and try it. So what is it? It is an artificial intelligence language model that has been trained on vast amounts of data, turning this into an internal representation of the structure of language and a knowledge base that it can use to answer questions. From this, it can hold human-like conversations through a text interface. But that doesn’t do it justice. It feels like a revolution has happened: ChatGPT surpasses the abilities of previous generations of language AIs to the point where it represents a leap forwards in natural interaction with computers (compare it with pretty much any chatbot that answers your questions on a website). It seems to understand not just precise commands but also vaguer requests and queries, and it has an idea of what you mean when you ask it to discuss or change specific parts of its previous responses. It can produce convincing stories and essays on a huge variety of topics. It can write poems, CVs and cover letters, and tactful emails, as well as producing imagined conversations. With proper prompting, it can even help generate a fictitious language.

It has one more trick up its sleeve: it can generate functional computer code in a variety of languages from simple text descriptions of the problem. For example, if you prompt it with “Can you write a python program that prints the numbers one to ten?”, it will produce functional code (side-stepping pitfalls like getting the start/end numbers right in range), and it can modify its code if you ask it to use numpy instead of a loop.
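As a hedged illustration (my own sketch, not ChatGPT’s actual output), the loop-free numpy version it tends to produce looks something like this, with the inclusive end point handled correctly:

```python
import numpy as np

# Print the numbers one to ten without an explicit loop.
# Note np.arange's half-open interval: the stop value 11 is excluded,
# which is exactly the start/end pitfall mentioned above.
numbers = np.arange(1, 11)
print(numbers)  # [ 1  2  3  4  5  6  7  8  9 10]
```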

But this really just scratches the surface of its coding abilities: it can produce Python astrophoto processing code (including debugging an error message), Python file download code, and an RStats shiny app.

All of this has implications for academia in general, particularly for the teaching and assessment of students. Its ability to generate short essays on demand on a variety of topics could clearly be used to answer assignment questions. As the answer is not directly copied from one source, it will not be flagged as plagiarism by tools such as Turnitin. Its ability to generate short code snippets from simple prompts could be used on coding assignments. If used blindly by a student, both of these would detrimentally shortcut the student’s learning process. However, it also has the potential to be used as a useful tool in the writing and coding processes. Let’s dive in and see how ChatGPT can be used and misused in academia.

ChatGPT as a scientific writing assistant

To get a feel for ChatGPT’s ability to write short answers on questions related to atmospheric science, let’s ask it a question on a topic close to my own interests – mesoscale convective systems:

ChatGPT does a decent job of writing a suitable first paragraph for an introduction to MCSs. You could take issue with the “either linear or circular in shape” phrase, as they come in all shapes and sizes and this wording implies one or the other. Also, “short-lived”, followed by “a couple of days”, does not really make sense.

Let’s probe its knowledge of MCSs by asking what it can tell us about the stratiform region:

I am not sure where it got the idea of “low-topped” clouds from – this is outright wrong. The repetition of “convective” is not ideal, as it adds no extra information. However, in broad strokes, this gives a reasonable description of the stratiform region of MCSs. Finally, here is a condensed version of both responses together, which could reasonably serve as the introduction to a student report on MCSs (after it had been carefully checked for correctness).

There are no citations – this is a limitation of ChatGPT. A similar language model, Galactica, has been developed to address this and to have a better grasp of scientific material, but it is currently offline. Furthermore, ChatGPT has no knowledge of the underlying physics, other than that the words it used are statistically likely to describe an MCS. Therefore, its output cannot be trusted or relied upon to be correct. However, it can produce flowing prose, and could be used as a way of generating an initial draft of some topic area.

Following this idea, one more way that ChatGPT can be used is by feeding it text and asking it to modify or transform it in some way. When I write paper drafts, I normally start by writing a LaTeX bullet-point paper, with the main points as ordered bullet points. Could I use ChatGPT to turn this into sensible prose?

Here, it does a great job. I can be pretty sure of its scientific accuracy (at least – any mistakes will be mine!). It correctly keeps the LaTeX syntax where appropriate, and turns the bullet points into fluent prose.

ChatGPT as a coding assistant

One other capability of ChatGPT is its ability to write computer code. Given sparse information about roughly the kind of code the user wants, ChatGPT will write code that can perform specific tasks. For example, I can ask it to perform some basic analysis on meteorological data:

It gets a lot right here: reading the correct data, performing the unit conversion, and labelling the clouds. But there is one subtle bug – if you run this code it will not produce labelled clouds (setting the threshold should be done using precipitation.where(precipitation > threshold, 0)). This illustrates its abilities as well as its shortcomings – it will confidently produce subtly incorrect code. When it works, it is magical. But when it doesn’t, debugging could take far longer than writing the code yourself.
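For concreteness, here is a minimal sketch of the corrected thresholding-and-labelling step, using a small synthetic precipitation array and swapping the xarray call for its numpy/scipy equivalent (the original fix was precipitation.where(precipitation > threshold, 0) on an xarray DataArray):

```python
import numpy as np
from scipy import ndimage

# Hypothetical 2D precipitation field (mm/hr), standing in for the real data.
precipitation = np.array([
    [0.0, 2.5, 3.0, 0.0],
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 4.0],
    [0.0, 0.0, 0.0, 3.5],
])
threshold = 1.0

# Zero out sub-threshold values (the numpy analogue of .where(..., 0)),
# then label the remaining connected regions as individual "clouds".
masked = np.where(precipitation > threshold, precipitation, 0)
labels, n_clouds = ndimage.label(masked > 0)
print(n_clouds)  # 2: two separate contiguous rainy regions
```

With the buggy thresholding, the mask is wrong and the labelling silently produces nonsense, which is exactly the kind of subtle failure described above.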

The final task I tried was seeing if ChatGPT could manage a programming assignment from an “Introduction to Python” course that I demonstrated on. I used the instructions directly from the course handbook, with the only editing being that I stripped out any questions to do with interpretation of the results:

Here, ChatGPT’s performance was almost perfect. This was not an assessed assignment, but ChatGPT would have received close to full marks if it were. This is a simple, well-defined task, but it demonstrates that students may be able to use it to complete assignments. There is always the chance that the code it produces will contain bugs, as above, but when it works it is very impressive.


ChatGPT already shows promise at performing mundane tasks and generating useful drafts of text and code. However, its output cannot yet be trusted, and must be checked carefully for errors by someone who understands the material. As such, if students use it to generate text or code, they are likely to deceive themselves that what they have is suitable, when it may well fail the test of being read by an examiner or run through a compiler. For examiners, there may well be tell-tale signs that text or code has been produced by ChatGPT. In its base incarnation, it produces text that seems (to me) slightly generic and may contain give-away factual errors. When producing code, it may well produce (incredibly clean and well-commented!) code that contains structures or uses libraries that have not been specifically taught in the course. Neither of these is definitive proof that ChatGPT has been used. And even if ChatGPT has been used, it may not be a problem: provided its output has been carefully checked, it is a tool that can write fluent English, and might be useful to, for example, students writing in a second language.

Here, I’ve only scratched the surface of ChatGPT’s capabilities and shortcomings. It has an extraordinary grasp of language, but does not fully understand the meaning behind its words or code, far less the physical explanations of processes that form MCSs. This can lead it to confidently assert the wrong thing. It also has a poor understanding of numbers, presumably built up from statistical inference from its training database, and will fail at standard logical problems. It can however perform remarkable transformations of inputs, and generate new lists and starting points for further refinement. It can answer simple questions, and some seemingly complex ones – but can its answer be trusted? For this to be the case, it seems to me that it will need to be coupled to some underlying artificial intelligence models of: logic, physics, arithmetic, physical understanding, common sense, critical thinking, and many more. It is clear to me that ChatGPT and other language models are the start of something incredible, and that they will be used for both good and bad purposes. I am excited, and nervous, to see how it will develop in the coming months and years.


Posted in Academia, Artificial Intelligence, Climate, Students, Teaching & Learning

Tiny Particles, Big Impact?

By Laura Wilcox

Aerosols are tiny particles or liquid droplets suspended in the atmosphere. They can be created by human activities, such as burning fossil fuels or clearing land, or have natural sources, such as volcanoes. Depending on their composition, aerosols can either absorb or scatter radiation. Overall, increases in aerosol concentrations in the atmosphere act to cool the Earth’s surface. This can be the result of the aerosols themselves reflecting radiation back to space (aerosol-radiation interactions), or due to aerosols modifying the properties of clouds so that they reflect more solar radiation (aerosol-cloud interactions).

The cooling effect of aerosols means they have played an important role in climate change over the last 200 years, masking some of the warming caused by increases in greenhouse gases. However, the climate impact of aerosols is much more interesting than a simple offsetting of the effects of greenhouse gases. While greenhouse gases can remain in the atmosphere for hundreds of years, most anthropogenic aerosols are lucky to last two weeks before being deposited at the surface. This gives them a unique spatial distribution, with most aerosols being found close to the regions where they were emitted. This is a marked contrast to greenhouse gases, which are evenly distributed in the atmosphere, and makes aerosols very efficient at changing circulation patterns such as the monsoons and the Atlantic Meridional Overturning Circulation. Although aerosols tend to stay close to their source, their influence on atmospheric circulation means that a change in aerosol emissions in one region can result in impacts around the world. Asian aerosols, for example, can influence Sahel precipitation by changing the Walker Circulation, or influence European temperature by inducing anomalous stationary wave patterns.

Figure 1: A snapshot of aerosol in the Goddard Earth Observing System Model. Dust is shown in orange, and sea salt is shown in light blue. Carbonaceous aerosol from fires is shown in green, and sulphate from industry and volcanic eruptions is shown in white. The short atmospheric lifetime of aerosols means they typically stay close to their source, so that aerosol concentrations and composition vary dramatically with location. Image from NASA/Goddard Space Flight Center.

The short atmospheric lifetime of anthropogenic aerosols means that changes in emissions are quickly translated into changes in atmospheric concentrations, and changes in impacts on air quality and climate. Increases in European aerosols through the 1970s were one of the main drivers of drought in the Sahel in the 1970s and 80s. As European emissions decreased following the introduction of the clean air acts in 1979, precipitation in the Sahel recovered, and the trend became more strongly influenced by greenhouse gas increases. Meanwhile, the rate of increase of European temperatures accelerated as the cooling influence of anthropogenic aerosol was lost.

Poor air quality has been linked to many health issues, including respiratory and neurological problems, and is a leading cause of premature mortality in countries such as India, where many of the world’s most polluted cities are currently found. In recent decades, China has dramatically reduced its aerosol emissions in an attempt to improve air quality, and other countries are expected to follow suit. However, the timing and rate of reductions of aerosol emissions depend on a complex combination of political motivation and technological ability. As a result, our projections of aerosol emissions over the next few decades are highly uncertain. Some scenarios see global aerosol returning to pre-industrial levels by 2050, while different priorities mean that emissions continue to increase in other scenarios. Although some scenarios are more likely than others, this means that in near-future climate projections aerosol may change very little in the early twenty-first century, or may be reduced so quickly that the emission increases of the last 200 years are reversed in just 20-30 years. While this would be a great outcome for the health of those living in regions with poor air quality, it may come with rapid climate changes, which need to be considered in adaptation and mitigation efforts.

Figure 2: Global emissions of black carbon and sulphur dioxide (a precursor of sulphate aerosol) from 1850 to 2100, as used in the sixth Coupled Model Intercomparison Project (CMIP6). The rate and sign of future emission changes are still uncertain.

Unfortunately, large differences in emission scenarios aren’t the only uncertainty associated with the role of aerosol in near-future climate change. A lack of observations of pre-industrial aerosol, uncertainties in observations of present-day aerosol, and differences in the way that aerosol and aerosol-cloud interactions are represented in climate models make aerosol forcing the largest uncertainty in the anthropogenic forcing of climate. For regional climate impacts, these are compounded by uncertainties in the dynamical response to aerosol changes. In anthropogenic aerosol, then, we have something that may be very important for near-future climate, especially at regional scales, yet is highly uncertain. For climate change mitigation and adaptation to be effective, we need to improve our understanding of these uncertainties, or, even better, reduce them.

Regional assessments of climate risk often rely on regional climate models or statistical algorithms. However, this often results in the influence of aerosol being lost. Most regional climate models do not include aerosol processes, and statistical approaches typically assume that historical relationships will persist into the future, so that the impacts of changing aerosol types and emission locations are not accounted for. Broader approaches use projections from Earth System Models to tune simple climate models or statistical emulators, which are often only able to account for the global impact of aerosol changes, neglecting their larger impacts on regional climate.

We have designed a set of experiments that we hope will improve our understanding of the climate response to regional aerosol changes, provide a stronger link between emission policies and climate impacts, and support the development of more ‘aerosol-aware’ assessments of regional climate risk. The Regional Aerosol Model Intercomparison Project (RAMIP) includes experiments designed to quantify the effects of realistic, regional, transient aerosol perturbations on policy-relevant timescales, and to explore the sensitivity of these effects to aerosol composition. Simulations are just getting underway now. Will we find that these tiny particles are having a big impact on regional climate in the near future? Watch this space!

For more details of the RAMIP experiment design, take a look at our preprint in GMD

For more thoughts on aerosol and climate risk assessments, see our recent comment

Posted in Aerosols, Air quality, Climate, Climate change

Uncrewed Aircraft for Cloud and Atmospheric Electricity Research

By: Keri Nicoll

The popularity and availability of Unmanned Aerial Vehicles (UAVs) has led to a surge in their use in many areas, including aerial photography, surveying, search and rescue, and traffic monitoring.  This is also the case in atmospheric science, where they are used for boundary layer profiling, aerosol and cloud sampling, and even tornado research.  A human pilot is often still required for safety reasons (even though many systems are mostly flown under autopilot), but the reliability of satellite navigation and autopilot software means that fully autonomous flights are now possible, and are even being used in operational weather forecasting.

In the Department of Meteorology, we have been developing small science sensors to fly on UAVs for cloud and atmospheric electricity research.  Atmospheric electricity is all around us (even in fine weather), and charge plays an important role in aerosol and cloud interactions, but is rarely measured.  Over the past few years, our charge sensors have been flown on several different aircraft as part of two separate research projects to investigate charged aerosol and cloud interactions, briefly discussed in this blog.

The first flight campaign took place in Lindenberg, Germany, with colleagues from the Environmental Physics Group at the University of Tübingen.  This flight campaign was to investigate the vertical charge structure in the atmospheric boundary layer (the lowest few km of the atmosphere), and how it varied with meteorological parameters and aerosol.  Four small charge sensors which we developed (see Figure 1(a): 1 and 2) were flown in special measurement pods attached to each wing of a 4 m wingspan fixed-wing UAV (known as MASC-3).  MASC-3 also measured temperature, relative humidity, the 3D wind speed vector (using a small probe mounted in the nose of the aircraft) and aerosol particle concentration.  Data were logged and saved on board the aircraft at a sampling rate of 100 Hz, and MASC-3 was controlled by an autopilot in order to repeat measurement patterns reliably.  Since charge measurements from aircraft are notoriously difficult to make, it was important to minimise the effect of the aircraft movement on the charge measurement.  This was done by flying carefully planned, straight flight legs, and by developing a technique to remove the effect of the aircraft roll on the charge measurements.  Multiple flights were performed on fair-weather days, at different intervals from sunrise to sunset, to observe how the vertical charge structure changed as the boundary layer evolved.  Full results from the campaign are reported in our paper.

Figure 1: (a) Charge sensor pod for MASC-3. Charge sensor (1, 2), painted with conductive graphite paint, and copper foil to reduce the influence of static charge build up on the aircraft. (b) MASC-3 aircraft with charge sensor pods mounted on each wing (8).  The meteorological sensor payload is in the front for measuring the wind vector, temperature, and humidity (9). Figure from Schön et al, 2022.

The second UAV flight campaign took place as part of our project “Electrical Aspects of Rain Generation”, funded by the UAE Research Program for Rain Enhancement Science. Watch our video on this project here.  This involved instrumenting UAVs with specially-developed charge emitters which could release positive or negative ions on demand.  The UAVs were flown in fog to investigate whether the charge released affected the size and/or concentration of the fog droplets.  This is an important first step in determining whether charging cloud droplets might help to aid rainfall in water-stressed parts of the world.  To perform these experiments, we worked with engineers from the Department of Mechanical Engineering at the University of Bath.  Skywalker X8 aircraft with a 1.2 m wingspan were instrumented with our small charge sensors and cloud droplet sensors, along with temperature and relative humidity sensors (as shown in Figure 2, and discussed in Harrison et al, 2021).  Our specially developed charge emitters were mounted under each wing of the UAV, and were switched on and off in a known pattern by the flight scientist, via the pilot, whenever required.  The UAV flights took place at a private farm in Somerset, in light fog conditions (making sure that we could see the UAVs at all times, for safety reasons), flying in small circles around a ground-based electric field mill, which was used to detect the charge emitted by the aircraft.  Our results (reported recently in Harrison et al, 2022) demonstrated that the radiative properties of the fog differed between periods when the charge emitters were on and off.  This demonstrates that the fog droplet size distribution can be altered by charging, which ultimately means that it may be possible to use charge to influence cloud drops and thus rainfall.

Figure 2: (a) Skywalker X8 aircraft on the ground. (b) X8 aircraft in flight, with instrumentation labelled. (c) Detail of the individual science instruments: (c1) optical cloud sensor, (c2) charge sensors, (c3a) thermodynamic (temperature and RH) sensors, (c3b) removable protective housing for thermodynamic sensors, and (c4) charge emitter electrode. Figure from Harrison et al, 2021.


Harrison, R. G., & Nicoll, K. A., 2014: Note: Active optical detection of cloud from a balloon platform. Review of Scientific Instruments, 85(6), 066104.

Harrison, R. G., Nicoll, K. A., Tilley, D. J., Marlton, G. J., Chindea, S., Dingley, G. P., … & Brus, D., 2021: Demonstration of a remotely piloted atmospheric measurement and charge release platform for geoengineering. Journal of Atmospheric and Oceanic Technology, 38(1), 63-75.

Harrison, R. G., Nicoll, K. A., Marlton, G. J., Tilley, D. J., & Iravani, P., 2022: Ionic charge emission into fog from a remotely piloted aircraft. Geophysical Research Letters, e2022GL099827.

Nicoll, K. A., & Harrison, R. G., 2009: A lightweight balloon-carried cloud charge sensor. Review of Scientific Instruments, 80(1), 014501.

Reuder, J., Brisset, P., Jonassen, M., Müller, M., & Mayer, S., 2009: The Small Unmanned Meteorological Observer SUMO: A new tool for atmospheric boundary layer research. Meteorologische Zeitschrift, 18(2), 141.

Roberts, G. C., Ramana, M. V., Corrigan, C., Kim, D., & Ramanathan, V., 2008: Simultaneous observations of aerosol–cloud–albedo interactions with three stacked unmanned aerial vehicles. Proceedings of the National Academy of Sciences, 105(21), 7370-7375.

Schön, M., Nicoll, K. A., Büchau, Y. G., Chindea, S., Platis, A., & Bange, J., 2022: Fair Weather Atmospheric Charge Measurements with a Small UAS. Journal of Atmospheric and Oceanic Technology.

Wildmann, N., Hofsäß, M., Weimer, F., Joos, A., & Bange, J., 2014: MASC – a small remotely piloted aircraft (RPA) for wind energy research. Advances in Science and Research, 11(1), 55-61.

Posted in Aerosols, Boundary layer, Climate, Clouds, Fieldwork

Investigating the Dark Caverns of Antarctica

By: Ryan Patmore

I am an oceanographer and I occasionally spend my time trying to find the best ways of understanding the point where ice meets the ocean. This naturally draws me to Antarctica – covered in penguins, yes, but also ice. Antarctica is a mountainous land mass overlain by continuous ice, several kilometres thick in places. The ice covering Antarctica is estimated to hold the equivalent of 58 m of sea-level rise (Morlighem et al. 2020).  Without it, many parts of the world would be engulfed in water and once-inland towns would transform into coastal communities. In a world without Antarctic ice you might, for example, find seaside resorts such as Milton-Keynes-On-Sea. Thankfully, this is an extreme example and an unlikely scenario. Whilst we can be fairly comfortable in the knowledge that an ice-melt-induced wave of 58 m isn’t going to appear on the horizon any time soon, the threat of melting ice around Antarctica is a concern, and understanding the risks is an important endeavour.

Figure 1: Schematic representation of an ice shelf cavity depicting some examples of the available observational tools.

So how do we know whether or not Antarctic ice is here to stay, or more specifically, how much of it is going to stick around? The ice that lies upon Antarctica behaves a bit like gloopy honey. It is very dynamic and often channels off the continent and into the sea. This location, where ice meets ocean, is an important place for understanding potential sea-level rise. Since ice is less dense than water, when glacial ice contacts the ocean, it tends to float, creating a shelf-like layer of ice called an ice shelf with caverns of ocean below – as shown in Figure 1. In certain locations around Antarctica these caverns are filled with water at a balmy temperature of 1 °C (brrrrrr). Although this may sound cold, this water is considered very warm and can lead to significant melting. The process of the ocean melting the ice from beneath an ice shelf is a ‘hot’ topic when it comes to understanding the loss of ice around Antarctica and is considered one of the main drivers for ice loss in recent times (Rignot et al. 2013).

Understanding the problem of ice shelf melt requires data. The cavities formed by ice draining into the sea are immensely interesting and an important part of the climate system, but at the same time they are notoriously difficult to access. To observe this environment is no small feat. A product of these difficulties has been innovation, and we now have a variety of tools at our disposal, one of which is the famous Boaty McBoatface! This robotic submarine can be deployed from the UK’s shiny new polar ship, the SDA, and travel to the depths of the ocean, entering territory under ice shelves which until now has been entirely unexplored. Another method of observation is to drill from above. For several decades now, scientists have been coring through ice to mammoth depths in order to access the cavities from the surface, with the capability of drilling up to 2300 m. This means that if Yr Wyddfa (Snowdon) was made of ice, it could be drilled through twice over. Observations are continuously pushing the boundaries, but there are additional tools that gather insight without setting foot on either a boat or an ice shelf. This option is numerical modelling, which is often my tool of choice. Models can take you where instruments cannot, and a theory can be tested at the touch of a button. This may sound like a silver bullet, but caution is needed and observations remain paramount for modelling to be successful. After all, without observations, who knows which reality is being modelled. All in all, some exciting things are happening in ice-ocean research, and the ever-expanding tool-kit is continuously opening doors for understanding this challenging environment.


Morlighem, M., and Coauthors, 2020: Deep glacial troughs and stabilizing ridges unveiled beneath the margins of the Antarctic ice sheet. Nat. Geosci., 13 (2), 132–137,

Rignot, E., S. Jacobs, J. Mouginot, and B. Scheuchl, 2013: Ice-Shelf Melting Around Antarctica. Science, 341 (6143), 266–270,

Posted in Climate, Oceanography, Polar

Oceanic Influences On Arctic And Antarctic Sea Ice

By: Jake Aylmer

The futures of Arctic and Antarctic sea ice are difficult to pin down in part due to climate model uncertainty. Recent work reveals different ocean behaviours that have a critical impact on sea ice, highlighting a potential means to constrain projections.

 Since the late 1970s, satellites have monitored the frozen surface of the Arctic Ocean. The decline in Arctic sea ice cover—about 12% area lost per decade—is a striking and well-known signal of climate change. As well as long-term retreat of the sea ice edge, the ice is becoming thinner and more fragmented, making it more vulnerable to extreme weather and an increasingly precarious environment for human activities and polar wildlife. At the opposite pole, sea ice surrounding Antarctica has not, on the whole, changed significantly despite global warming—a conundrum yet to be fully resolved.

There is high confidence that Arctic sea ice will continue to retreat throughout the twenty-first century, but uncertainties remain in the specifics. For instance, when will the first ice-free summer occur? Such questions are inherently uncertain due to the chaotic nature of the climate system (internal variability). However, different climate models give vastly different answers ranging from the 2030s to 2100 or beyond, indicating a contribution of model biases in the projected rates of sea ice loss.

My co-authors and I are particularly interested in the role the ocean might play in setting such model biases. Studies show that the ocean circulation has a strong influence on sea ice extent in models and observations, associated with its transport of heat into the polar regions (e.g., Docquier and Koenigk, 2021). If there is variation in this ocean heat transport across climate models, this could have a knock-on effect on the sea ice and thus help explain uncertainties in future projections. To explore this, we must first understand how the relationship between the ocean heat transport and sea ice occurs.

We looked at simulations of the pre-industrial era, which exclude global warming and thus act as control experiments isolating natural, internal variability. In all models examined, when there is a spontaneous increase in net ocean heat transport towards the pole, there is a corresponding decrease in sea ice area. This is intuitive—more heat, less ice. It occurs independently at both poles, but how the ocean heat reaches sea ice is different between the two.
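As a rough, purely illustrative sketch of the diagnostic behind this result (synthetic series standing in for the pre-industrial control output; the coefficients are invented), one can correlate poleward ocean heat transport anomalies with sea ice area:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual-mean anomalies from a pre-industrial control run:
# poleward ocean heat transport (OHT), and a sea ice area (SIA) response
# of opposite sign plus internal variability.
oht = rng.standard_normal(500)
sia = -0.8 * oht + 0.5 * rng.standard_normal(500)

# "More heat, less ice": the two series are strongly anti-correlated.
r = np.corrcoef(oht, sia)[0, 1]
print(r < -0.5)  # True
```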

In the Arctic, the heat is released around the sea ice edge. It does not extend far under the bulk of the ice pack because there are limited deep-ocean routes into the Arctic Ocean, which is itself shielded from rising heat by fresh surface water. Nevertheless, the ocean heat transport contributes to sea ice melt nearer the north pole, assisted by atmospheric transport acting as a ‘bridge’ to higher latitudes. For Antarctic sea ice, the process is more straightforward with the heat being simply released under the whole sea ice pack—the Southern Ocean does not have the same oceanographic obstacles as the Arctic, and there is no atmospheric role (Fig. 1). These different pathways result in different sensitivities of the sea ice to changes in ocean heat transport, and are remarkably consistent across different models (Aylmer, 2021; Aylmer et al. 2022).

Figure 1: Different pathways by which extra ocean heat transport (OHT) reaches sea ice in the Arctic (red), where it is ‘bridged’ by the atmosphere to reach closer to the north pole, compared to the Antarctic (dark blue), where it is simply released under the ice. Schematic adapted from Aylmer et al. (2022).

We can also explain how much sea ice retreat occurs per change in ocean heat transport using a simplified ‘toy model’ of the polar climate system, building on our earlier work developing theory underlying why sea ice is more sensitive to oceanic than atmospheric heat transport (Aylmer et al., 2020; Aylmer, 2021). This work, which is ongoing, accounts for the different pathways shown in Fig. 1, and we have shown it to quantitatively capture the climate model behaviour (Aylmer, 2021).

There is mounting evidence that the ocean plays a key role in the future evolution of Arctic and Antarctic sea ice, but questions remain open. For instance, what role does the ocean play in the sea ice sensitivity to global warming—something that is consistently underestimated by models (Rosenblum and Eisenman, 2017)? Our toy-model theory is currently unable to explore this because it is designed to understand the differences among models, not their offset from observations. As part of a new project due to start in 2023, we will adapt it for this purpose and include more detailed sea ice processes that we hypothesise could explain this bias. As more ocean observations become available, it is possible that our work could help to constrain future projections of the Arctic and Antarctic sea ice.


Aylmer, J. R., D. G. Ferreira, and D. L. Feltham, 2020: Impacts of oceanic and atmospheric heat transports on sea ice extent, J. Clim., 33, 7197–7215, doi:10.1175/JCLI-D-19-0761.1

Aylmer, J. R., 2021: Ocean heat transport and the latitude of the sea ice edge. Ph.D. thesis, University of Reading, UK

Aylmer, J. R., D. G. Ferreira, and D. L. Feltham, 2022: Different mechanisms of Arctic and Antarctic sea ice response to ocean heat transport, Clim. Dyn., 59, 315–329, doi:10.1007/s00382-021-06131-x

Docquier, D. and T. Koenigk, 2021: A review of interactions between ocean heat transport and Arctic sea ice, Environ. Res. Lett., 16, 123002, doi:10.1088/1748-9326/ac30be

Rosenblum, E. and Eisenman, I., 2017: Sea ice trends in climate models only accurate in runs with biased global warming, J. Clim., 30, 6265–6278, doi:10.1175/JCLI-D-16-0455.1

Posted in Antarctic, Arctic, Climate, Climate change, Climate modelling, Cryosphere, Oceans, Polar

The Devil Is In The Details, Even Below Zero

By: Ivo Pasmans 

An anniversary is coming up in the family, and I had decided to create a digital photo collage. While scanning a youth photo, I noticed that the scan looked a lot less refined than the original. The resolution of my scanner, the number of pixels per inch, is limited, and since each pixel can record only one colour, details smaller than a pixel get lost in the digitization process. Now, I doubt the couple celebrating will really care that their collage isn’t of the highest quality possible; after all, it is the thought that counts. The story is probably different for a program manager of a million-dollar earth-observation satellite project.


Figure 1: (A) original satellite photo of sea ice. (B) Same photo but after 99.7% reduction in resolution (source: NOAA/NASA). 

Just like my analogue youth photo, an image of sea-ice cover taken from space (Figure 1A) contains a lot of detail. Clearly visible are cracks, also known as leads, dividing the ice into major ice floes. At higher zoom levels, smaller leads can be seen to emanate from the major leads, which in turn give rise to even smaller leads separating smaller floes, and so on. This so-called fractal structure is partially lost on the current generation of sea-ice computer models. These models use a grid of cells and, like the pixels in my digitized youth photo, sea-ice quantities such as ice thickness, velocity or the water/ice-coverage ratio are assumed to be constant over each cell (Figure 1B). In particular, this means that if we want to use satellite observations to correct errors in the model output, in a process called data assimilation (DA), we must average out all the subcell details in the observations that the model cannot resolve. Many features in the observations are therefore lost.
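To see how much that averaging destroys, here is a minimal numpy sketch (with a made-up one-dimensional field, not real satellite data): a narrow ‘lead’ in an otherwise uniform ice field all but vanishes once the observations are averaged onto a coarse model grid.

```python
import numpy as np

# Synthetic high-resolution "observation": open water (0) in a narrow lead,
# full ice cover (1) everywhere else. Purely illustrative numbers.
x = np.linspace(0, 1, 64)
field = np.where(np.abs(x - 0.5) < 0.02, 0.0, 1.0)

# Average onto a coarse model grid of 8 cells (8 fine pixels per cell),
# mimicking the averaging applied to observations before assimilation.
coarse = field.reshape(8, 8).mean(axis=1)

print(field.min())   # the lead is fully open water in the raw observation
print(coarse.min())  # after averaging it survives only as a slightly dimmed cell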

Figure 2: Schematic example of how model output is constructed in discontinuous Galerkin (DG) models. In each of the two grid cells shown (separated by the black vertical lines), the model output is the sum of a 0th-order polynomial (red), 1st-order polynomial (green) and 2nd-order polynomial (blue).

The aim of my research is to find a way to utilise these observations without losing details in the DA process for sea-ice models. Currently, a new sea-ice model is being developed as part of the Scale-Aware Sea Ice Project (SASIP). In this model, sea-ice quantities in each grid cell are represented by a combination of polynomials (Figure 2) instead of as constant values. The higher the polynomial order, the more ‘wiggly’ the polynomials become and the better small-scale details can be reproduced by the model. Moreover, the contribution of each polynomial to the model solution does not have to be the same across the model domain, a property that makes it possible to represent physical fields that vary strongly across the domain. We want to make use of the new model’s ability to represent subcell details in the DA process and see if we can reduce the post-DA error in these new models by reducing the amount of averaging applied to the satellite observations.
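The polynomial idea can be sketched in a few lines of Python. This is not SASIP code, just an illustration using numpy's Legendre utilities: a narrow feature inside a single grid cell is captured increasingly well as the polynomial order grows.

```python
import numpy as np
from numpy.polynomial import legendre

# One grid cell, mapped to xi in [-1, 1], containing a narrow feature.
xi = np.linspace(-1, 1, 201)
truth = np.exp(-10 * xi**2)

# Project the field onto Legendre polynomials P_0..P_n for increasing n;
# a higher order means a more 'wiggly' basis and a smaller representation error.
errs = []
for order in (0, 2, 4, 8):
    coefs = legendre.legfit(xi, truth, order)   # least-squares fit of coefficients
    approx = legendre.legval(xi, coefs)         # evaluate the truncated expansion
    errs.append(np.sqrt(np.mean((approx - truth) ** 2)))

print([round(e, 4) for e in errs])  # root-mean-square error shrinks with order
```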

As an initial test, we have set up a model without equations. There are no sea-ice dynamics in this model, but it has the advantage that we can create an artificial field mimicking, for example, ice velocity, with details at the scales we want and the order of polynomials we desire. For this experiment, we set aside one of the artificial fields as our DA ‘target’, create artificial observations from it and see if DA can reconstruct the ‘target’ from these observations. The outcome has confirmed our assumptions: when using higher-order polynomials, the DA becomes better at reconstructing the ‘target’ as we reduce the width over which we average the observations. And it is not just the DA estimate of the ‘target’ that improves, but also the estimate of its slope. This is very promising: forces in the ice scale with the slope of the velocity. We cannot directly observe these forces, but we can observe velocities. So, with the aid of higher-order polynomials we might be able to use the velocities to detect errors in the sea-ice forces.
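The flavour of this twin experiment can be captured in a toy sketch (block-averaging plus interpolation, not our actual DA scheme): reconstruct a known ‘target’ from observations averaged over different widths, and watch the error fall as the averaging width shrinks.

```python
import numpy as np

n = 240
x = np.linspace(0, 2 * np.pi, n)
# Artificial 'target' with both a large-scale component and small-scale detail.
target = np.sin(x) + 0.3 * np.sin(7 * x)

def reconstruct(width):
    """Average the 'observations' over blocks of `width` points, then
    interpolate back to the fine grid -- a stand-in for assimilating
    observations averaged over different widths."""
    nblk = n // width
    centres = x[:nblk * width].reshape(nblk, width).mean(axis=1)
    obs = target[:nblk * width].reshape(nblk, width).mean(axis=1)
    return np.interp(x, centres, obs)

errs = {w: np.sqrt(np.mean((reconstruct(w) - target) ** 2)) for w in (40, 10, 2)}
print(errs)  # reconstruction error shrinks as the averaging width shrinks
```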

High-resolution sea-ice observations definitely look better than their low-res counterparts, but to use all the details present in the observations, DA has to be tweaked. Our preliminary results suggest that it is possible to incorporate scale dependency in the design of the DA scheme, thus making it scale-aware. We found that this allows us to take advantage of spatially dense observations and helps the DA scheme deal with the wide range of scales present in the model errors.

Posted in Arctic, Cryosphere, Numerical modelling

Outlook For The Upcoming UK Winter

By: Christopher O’Reilly

In this post I discuss the outlook for the 2022/23 winter from a UK perspective: what do the forecasts predict and what physical drivers might influence the upcoming winter?

An important winter

The price of utilities has risen dramatically over the last year for people, businesses and organisations in the UK. As we move towards winter, there is great concern about the effect of these price rises on people’s lives. In the UK, winter temperatures have a strong impact on the demand for gas and electricity. For example, a 1 degree winter temperature anomaly results in a daily average gas demand anomaly of roughly 100 GWh across the winter season. In monetary terms, based on the UK October gas price cap (i.e. 10.3p/kWh), this equates to about £1 billion for each 1 degree UK temperature anomaly (and likely much more, given the higher unit costs for businesses and organisations, not to mention the government’s cost of underwriting the price cap). The numbers are pretty big, and the stakes are pretty high.
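As a sanity check on that figure, the arithmetic can be reproduced in a few lines (the ~90-day season length is my assumption; the other numbers are as quoted above):

```python
# Back-of-envelope check of the '£1 billion per degree' figure.
gwh_per_day = 100        # daily average gas demand anomaly per 1 degC (quoted above)
days = 90                # assumed core winter season, Dec-Feb
price_per_kwh = 0.103    # October gas price cap, pounds per kWh (10.3p)

kwh = gwh_per_day * 1e6 * days            # GWh -> kWh, summed over the season
cost = kwh * price_per_kwh                # pounds
print(f"~£{cost / 1e9:.1f} billion per 1 degC winter anomaly")
```

which lands close to the £1 billion quoted.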

What do the forecast models predict?

So can we predict what is in store for the UK this winter? Seasonal forecasts out to six months in the future are performed operationally by weather centres across the world. The European Commission’s Copernicus Climate Change Service (or, more snappily, “C3S”) coordinates these long-range forecasts from 7 international centres (including the UK Met Office). When forecasting many months ahead we cannot predict the weather on a particular day; however, forecasts demonstrate some skill in determining average conditions on monthly timescales.

Ideally, we would examine the 2m temperature from the forecasts but these do not demonstrate clear skill over the UK. However, there is skill in the sea-level pressure over the North Atlantic and this can be utilised to provide predictions of the UK temperature (as demonstrated in several previous studies).
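The statistical step here, using a skilful large-scale field to predict a less skilful local one, is essentially a regression. Below is a minimal illustration with synthetic numbers (the 0.8 slope, the noise level and the forecast index value are all invented for the example, not taken from the studies mentioned):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'training' record: 60 past winters of an NAO-like sea-level
# pressure index, and UK temperature anomalies statistically tied to it.
nao = rng.normal(size=60)
uk_temp = 0.8 * nao + rng.normal(scale=0.5, size=60)

# Fit the historical relationship, then apply it to a model-forecast index.
slope, intercept = np.polyfit(nao, uk_temp, 1)
forecast_index = 1.2                       # hypothetical predicted SLP index
predicted_temp = slope * forecast_index + intercept
print(round(float(predicted_temp), 2))     # predicted UK temperature anomaly (degC)
```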

Figure 1: (Top) Forecasts of sea-level pressure (SLP) anomaly for the early winter (ND) and late winter (JF) from the C3S multi-model forecasts, initialised at the start of September. (Bottom) Observational SLP anomalies for the early winter (ND) and late winter (JF) during La Nina winters with respect to other years (1954-2022).

For the upcoming winter it is useful to first consider predictions of the large-scale atmospheric circulation because the winter temperatures in the UK are largely determined by wind anomalies (and the associated advection) in the surrounding Euro-Atlantic sector. The multi-model forecasts of the sea-level pressure anomalies for the 2022/23 winter are plotted in Figure 1.

The sea-level pressure anomalies over the North Atlantic exhibit notable changes in characteristics between early winter and late winter. In the early winter period (November-December or “ND”) there are high pressure anomalies across most of the midlatitude North Atlantic, extending into Europe. In the late winter period (January-February or “JF”) there are low pressure anomalies over Iceland and high pressure anomalies further south, more closely resembling the positive phase of the North Atlantic Oscillation or “NAO”, which typically causes warmer winters in the UK. But what is driving these signals?

La Nina conditions in the Tropical Pacific

As we approach this winter, forecasts are confident that we will have La Nina conditions, associated with cooler sea surface temperatures in the eastern/central Tropical Pacific. The observed impact of La Nina on the large-scale atmospheric circulation in the Euro-Atlantic sector shows a clear difference in the early winter compared with late winter. Composite anomalies during observed La Nina years are shown in Figure 1.

The resemblance of this observational composite plot to the predictions from the seasonal forecasts is clear. In the early winter the ridging over the North Atlantic is followed by the emergence of positive NAO conditions in late winter. The La Nina conditions are clearly, and perhaps inevitably, driving the circulation anomalies in the seasonal forecasts and the comparison with observations suggests that this is a sensible forecast.

A possible role for the Quasi-Biennial Oscillation (QBO)? 

Another driver that can confidently be predicted (mostly) several months in advance and can influence the extratropical large-scale circulation is the Quasi-Biennial Oscillation (QBO). The QBO refers to the equatorial winds in the stratosphere that oscillate between eastward and westward phases, which have been shown to influence the large-scale tropospheric circulation in the Euro-Atlantic region. The QBO is currently in a “deep” westerly phase, with strong westerly winds that span the depth of the equatorial stratosphere. Winters with westerly QBO conditions in observations demonstrate a clear signal in early and late winter, both of which project onto the positive phase of the NAO.

A number of studies have shown that seasonal forecasting models capture the correct sign of the relationship between the QBO and the NAO, but that it is substantially weaker than in observations. We might therefore reasonably expect that this effect is not adequately represented in the forecasts for this winter.

What does this mean for UK temperatures?

The La Nina and deep QBO-W conditions tend to favour milder winters for the UK; however, significant variability remains. For example, the record cold period during early winter in 2010/11 occurred during La Nina and deep QBO-W conditions (and was possibly linked to North Atlantic SST anomalies). Nonetheless, the drivers analysed here tend to produce circulation anomalies in both the early and late winter that favour milder UK conditions, and they support the signals seen in the seasonal forecast models.

So we can be cautiously optimistic…?

Milder conditions would certainly be welcome this winter in the UK so it’s positive that the forecasts and drivers seem to point in this direction. However, there is of course still a clear possibility for cold conditions. One possible cause would be a sudden stratospheric warming event, in which the stratospheric polar vortex breaks down, favouring the development of negative NAO conditions and associated cold conditions in the UK. An example of this was the “Beast from the East” event in February 2018. Weather geeks get very excited about sudden warmings – and understandably so – but we might hope to forego such excitement this winter. The C3S seasonal models show no clear signal on the probability of a sudden stratospheric warming event occurring at present.

So milder conditions might be on the cards for the UK this winter, which would be good news. But warmer winters also tend to be wetter here in the UK, so at least we’ll still have that to moan about.

A version of this blog is also available with additional figures, references and footnotes here.

Posted in Atmospheric circulation, Climate, Climate modelling, ENSO, North Atlantic, Oceans, Seasonal forecasting, Stratosphere, Teleconnections

Weather vs. Climate Prediction

By: Annika Reintges

Imagine you are planning a birthday party in 2 weeks. You might check the weather forecast for that date to decide whether you can gather outside for a barbecue, or whether you should reserve a table in a restaurant in case it rains. How much would you trust the rain forecast for that day in 2 weeks? Probably not much. If that birthday was tomorrow instead, you would probably have much more faith in the forecast. We have all experienced that weather prediction for the near future is more precise than prediction for a later point in time.

A forecast period of 2 weeks is often stated to be the limit for weather predictions. But how, then, are we able to make useful climate predictions for the next 100 years?

For that, it is important to keep in mind the difference between the terms ‘weather’ and ‘climate’. Weather changes take place on a much shorter timescale, and also on a smaller scale in space. For example, it matters whether it will rain in the morning or the afternoon, and whether a thunderstorm will hit a certain town or pass slightly west of it. Climate, however, is the statistics of weather averaged over a long time, usually at least 30 years. Talking about the climate in 80 years, for example, we are interested in whether UK summers will be drier. We will not be able to say whether July of the year 2102 will be rainy or dry compared to today.

Because of this difference between weather and climate, the models differ in their specifications. Weather models have a finer resolution in time and space than climate models and are run over a much shorter period (e.g., weeks), whereas climate models can be run for hundreds or even thousands of years.

Figure 1: ‘Weather’ refers to short-term changes, and ‘climate’ to weather conditions averaged over at least 30 years (image source: ESA).

But there is more to it than just the differences in temporal and spatial resolution:

The predictability comes from two different sources: mathematically, (1) weather is an ‘initial value problem’, while (2) climate is a ‘boundary value problem’. This relates to the question: how do we have to ‘feed’ the model to make a prediction? In other words, which type of input matters for (1) weather and (2) climate prediction models? A weather or climate model is just a set of code full of equations. Before we can run the model to get a prediction, we have to feed it with information.

Here we come back to the two sources of predictability:

(1) Weather prediction is an ‘initial value problem’: It is essential to start the model with initial values of one recent weather state. This means several variables (e.g., temperature and atmospheric pressure) given for 3-dimensional space (latitudes, longitudes and altitudes). This way, the model is informed, for example, about the position and strength of cyclones that might approach us soon and cause rain in a few days.

(2) Climate prediction is a ‘boundary value problem’: For the question of whether UK summers will become drier by the end of the 21st century, the most important input is the atmospheric concentration of greenhouse gases. These concentrations are increasing and affecting our climate. Thus, to make a climate prediction, the model needs these concentrations not only for today but also for the coming years: we have changing boundary conditions. For this, future concentrations are estimated (usually following different socio-economic scenarios).

Figure 2: Whether a prediction is an ‘initial value’ or ‘boundary value’ problem, depends on the time scale we want to predict (image source: MiKlip project).

And the other way around: For the weather prediction (like for the question of ‘will it rain next week?’), boundary conditions are not important: the CO2 concentration and its development throughout the week do not matter. And for the climate prediction (‘will we have drier summers by the end of the century?’), initial values are not important: it does not matter whether there was a cyclone over Iceland at the time we started the model run.
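The initial value sensitivity behind point (1) is famously illustrated by the Lorenz (1963) system, the toy model that started chaos theory. In the sketch below (a crude forward-Euler integration, purely illustrative), two starting states differing by one millionth end up producing completely different ‘weather’:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) equations."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])  # a tiny 'observation error' in the initial state

for _ in range(2000):               # integrate both states for 20 time units
    a, b = lorenz_step(a), lorenz_step(b)

print(np.abs(a - b).max())          # the millionth-sized difference is now order one
```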

Hybrid versions of weather/climate prediction do exist, though: say we want to predict the climate in the ‘near’ future (‘near’ on climate timescales, for example in 10-20 years). For that, we can make use of both sources of predictability. The term used in this case is ‘decadal climate prediction’. With this, we will of course not be able to predict the exact days on which it will rain, but we might be able to say whether the UK summers in 2035-2045 will on average be drier or wetter than the preceding 10 years. However, when trying to predict climate beyond this decadal timescale, the added value of initialisation is very limited.

Posted in Climate, Climate modelling, Predictability, Weather forecasting

Monitoring Climate Change From Space

By: Richard Allan

It’s never been more crucial to undertake a full medical check-up for planet Earth, and satellite instruments provide an essential technological tool for monitoring the pace of climate change, the driving forces and the impacts on societies and the ecosystems upon which we all depend. This is why hundreds of scientists will be milling about the National Space Centre, Leicester at the UK National Earth Observation Conference, talking about the latest innovations, new missions and the latest scientific discoveries about the atmosphere, oceans and land surface. For my part, I will be taking a relatively small sheet of paper showing three current examples of how Earth Observation data is being used to understand ongoing climate change based on research I’m involved in.

The first example involves using satellite data measuring heat emanating from the planet to evaluate how sensitive Earth’s climate is to increases in heat trapping greenhouse gases. It’s important to know the amount of warming resulting from rising atmospheric concentrations of greenhouse gases, particularly carbon dioxide, since this will affect the magnitude of climate change. This determines the severity of impacts we will need to adapt to or that can be avoided with the required rapid, sustained and widespread cuts in greenhouse gas emissions. However, different computer simulations give different answers, and part of this relates to changes in clouds that can amplify or dampen temperature responses through complex feedback loops. New collaborative research led by the Met Office shows that the pattern of global warming across the world causes the size of these climate feedbacks to change over time, and we have contributed satellite data that has helped to confirm current changes.

The second example uses a variety of satellite measurements of microwave and infrared electromagnetic emission to space along with ground-based data and simulations to assess how gaseous water vapour is increasing in the atmosphere and therefore amplifying climate change. Although there are some interesting differences between datasets, we find that the large amounts of invisible moisture near to the Earth’s surface are increasing by 1% every 10 years in line with what is expected from basic physics. This helps to confirm the realism of the computer simulations used to make future climate change projections. These projections show that increases in water vapour are intensifying heavy rainfall events and associated flooding.

In the third example, we exploit satellite-based estimates of precipitation to identify whether the projected intensification of tropical dry seasons is already emerging in the observations. My colleague, Caroline Wainwright, recently led research showing how the wet and dry seasons are expected to change, and in many cases intensify, with global warming. But we wanted to know more: are these changes already emerging? So we exploited datasets using satellite measurements in the microwave and infrared to observe daily rainfall across the globe. Using this information and combining it with additional simulations of the present day, we were able to show (and crucially understand why) the projected intensification of the dry season in parts of South America, southern Africa and Australia is already emerging in the satellite record (Figure 1). This is particularly important since the severity of the dry season can be damaging for perennial crops and forests. It underscores the urgency of mitigating climate change by rapidly cutting greenhouse gas emissions, but also of gauging the level of adaptation needed to the impacts of climate change. This research has just been published in Geophysical Research Letters.

There is a huge amount of time, effort and ultimately cash that is needed to design, develop, launch and operate satellite missions. The examples I am presenting at the conference highlight the value in these missions for society through advancing scientific understanding of climate change and monitoring its increasing severity across the globe.

Figure 1 – present day trends in the dry season (lower 3 panels showing observations and present day simulations of trends in dry season dry spell length) are consistent with future projections (top panel, changes in dry season dry spell length 2070-2099 minus 1985-2014) over Brazil, southern Africa, Australia (longer dry spells, brown colours) and west Africa (shorter dry spells, green colours), increasing confidence in the projected changes in climate over these regions (Wainwright et al., 2022 GRL).


Allan RP, KM Willett, VO John & T Trent (2022) Global changes in water vapor 1979-2020, J. Geophys. Res., 127, e2022JD036728, doi:10.1029/2022JD036728

Andrews T et al. (2022) On the effect of historical SST patterns on radiative feedback, J Geophys. Res., 127, e2022JD036675. doi:10.1029/2022JD036675

Fowler H et al. (2021) Anthropogenic intensification of short-duration rainfall extremes, Nature Reviews Earth and Environment, 2, 107-122, doi:10.1038/s43017-020-00128-6.

Liu C et al. (2020) Variability in the global energy budget and transports 1985-2017, Clim. Dyn., 55, 3381-3396, doi: 10.1007/s00382-020-05451-8.

Wainwright CM, RP Allan & E Black (2022), Consistent trends in dry spell length in recent observations and future projections, Geophys. Res. Lett., 49, e2021GL097231, doi:10.1029/2021GL097231

Wainwright CM, E Black & RP Allan (2021), Future Changes in Wet and Dry Season Characteristics in CMIP5 and CMIP6 simulations, J. Hydrometeorology, 11, 2339-2357, doi:10.1175/JHM-D-21-0017.1

Posted in Climate, Climate change, Climate modelling, Clouds, earth observation, Energy budget, Water cycle

The Turbulent Life Of Clouds

By: Thorwald Stein

It’s been a tough summer for rain enthusiasts in Southern England, with the region having just recorded its driest July on record. But there was no shortage of cloud: perhaps there was a slight probability of a shower in the forecast, a hint of rain on the weather radar app, or you spotted a particularly juicy cumulus cloud in the sky getting tantalisingly close to you before it disappeared into thin air. You wonder why there was a promise of rain a few hours or even moments ago, and you brought in your washing or put on your poncho for no reason. What happened to that cloud?

The first thing to consider is that clouds have edges, which, while not always easily defined, for a cumulus cloud can be imagined where the white of the cloud ends and the blue of the sky begins. In this sense, the cloud is defined by the presence of lots of liquid droplets due to the air being saturated, i.e. very humid conditions, and the blue sky – which we refer to as the “environment” – by the absence of droplets, due to the air being subsaturated. The second realisation is that clouds are always changing and not just static objects plodding along across the sky. Consider this timelapse of cumulus clouds off the coast near Miami and try to focus on a single cloud – see how it grows and then dissipates!

Notice how each cloud is made up of several consecutive pulses, each with its own (smaller-scale) billows. If one such pulse is vigorous enough, it may lead to deeper growth and, ultimately, rainfall. But the cloud edge is not solid: through turbulent mixing from those pulses and billows, environmental air is trapped inside the clouds, encouraging evaporation of droplets and inhibiting cloud growth. Cumulus convection over the UK usually does not behave in such a photogenic fashion, as it often results from synoptic-scale rather than local weather systems, but we observe similar processes.

Why, then, are there often showers predicted that do not materialise? (1) Consider that the individual cumulus cloud is about a kilometre across and a kilometre deep. The individual pulses are smaller than that and the billows are smaller still, “… and little whirls have lesser whirls and so on to viscosity” (L. F. Richardson, 1922): we are studying complex turbulent processes over a wide range of scales, from more than a kilometre to less than a centimetre. Operational forecast models are run at grid lengths of around 1 km, which would turn the individual cumulus cloud into a single Minecraft-style cuboid. The turbulent processes that are so important for cloud development and dissipation are parameterised: a combination of variables on the grid scale, including temperature, humidity, and winds, will inform how much mixing of environmental air occurs. Unfortunately, our models are highly sensitive to the choice of parameters, affecting the duration, intensity, and even the 3-dimensional shapes of showers and thunderstorms predicted (Stein et al. 2015). Moreover, it is difficult to observe the relevant processes using routinely available measurements.

At the University of Reading, we are exploring ways to capture the turbulent and dynamical processes in clouds using steerable Doppler radars. Steerable Doppler radars can be pointed directly at the cloud of interest, allowing us to probe it over and over and study its development (see for instance this animation, created by Robin Hogan from scans using the Chilbolton Advanced Meteorological Radar). The Doppler measurements provide us with line-of-sight winds where small variations are indicative of turbulent circulations, and tracking these variations from scan to scan enables us to estimate the updraft inside the cloud (Hogan et al. 2008). Meanwhile, the distribution of Doppler measurements at a single location informs us of the intensity of turbulence in terms of eddy dissipation rate, which we can use to evaluate the forecast models (Feist et al. 2019). Combined, we obtain a unique view of rapidly evolving clouds, like the thunderstorm in the figure below.

Figure: Updraft pulses detected using Doppler radar retrievals for a cumulonimbus cloud. Each panel shows part of a scan with time indicated at the top, horizontal distance on the x-axis and height on the y-axis. Colours show eddy dissipation rate, a measure of turbulence intensity, with red indicative of the most intense turbulence, using the method from Feist et al. (2019). Contours show vertical velocity and arrows indicate the wind field, using a method adapted from Hogan et al. (2008). The dotted line across the panels indicates a vertical motion of 10 metres per second. Adapted from Liam Till’s thesis.
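As a flavour of the turbulence-intensity step described above, here is a crude sketch of the inertial-range scaling that underlies eddy dissipation rate estimates, sigma^2 ~ C (eps L)^(2/3), inverted for eps. The constant, the averaging scale and the synthetic Doppler velocities below are all illustrative assumptions; the comprehensive retrieval of Feist et al. (2019) involves much more than this.

```python
import numpy as np

def edr_from_variance(sigma2, length_scale, c=1.0):
    """Invert the inertial-range relation sigma^2 = c * (eps * L)**(2/3)
    for the eddy dissipation rate eps (units m^2 s^-3)."""
    return (sigma2 / c) ** 1.5 / length_scale

# Synthetic line-of-sight Doppler velocities at one radar gate (m/s).
rng = np.random.default_rng(1)
doppler_velocities = rng.normal(scale=1.5, size=500)

sigma2 = doppler_velocities.var()
eps = edr_from_variance(sigma2, length_scale=500.0)  # assumed 500 m averaging scale
print(f"eddy dissipation rate ~ {eps:.1e} m^2 s^-3")
```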

There are numerous reasons why clouds appear where they do, but it is evident that turbulence plays an important role in the cloud life cycle. By probing individual clouds and targeting the turbulent processes within, we may be able to better grasp where and when turbulence matters. Our radar analysis continues to inform model development (Stein et al. 2015) ultimately enabling better decision making, whether it’s to bring in the washing or to postpone a trip due to torrential downpours.

(1) Apart from the physical processes considered in this blog, there are also limitations to predictability, neatly explained here.


Feist, M.M., Westbrook, C.D., Clark, P.A., Stein, T.H.M., Lean, H.W., and Stirling, A.J., 2019: Statistics of convective cloud turbulence from a comprehensive turbulence retrieval method for radar observations. Q.J.R. Meteorol. Soc., 145, 727– 744.

Hogan, R.J., Illingworth, A.J. and Halladay, K., 2008: Estimating mass and momentum fluxes in a line of cumulonimbus using a single high-resolution Doppler radar. Q.J.R. Meteorol. Soc., 134, 1127-1141.

Richardson, L.F., 1922: Weather prediction by numerical process. Cambridge, University Press.

Stein, T. H. M., Hogan, R. J., Clark, P. A., Halliwell, C. E., Hanley, K. E., Lean, H. W., Nicol, J. C., & Plant, R. S., 2015: The DYMECS Project: A Statistical Approach for the Evaluation of Convective Storms in High-Resolution NWP Models, Bulletin of the American Meteorological Society, 96(6), 939-951.

Posted in Climate, Clouds, Turbulence, Weather forecasting