How TAMSAT have been supporting African people for over 35 years

By Ross Maidment

The University of Reading’s TAMSAT group have helped pioneer the use of satellite imagery in rainfall estimation across Africa since the early 1980s, when the group was first established. Thanks to some bright and innovative minds back in the day, it was quickly realised that the availability of frequent thermal infrared satellite images providing full coverage of the African continent (e.g. Figure 1) could readily be used to observe the cold tops of rain-bearing convective cloud systems and, in turn, to produce much-needed information on where rainfall has likely fallen and how much. Given the high dependence on rainfall for many socially and economically important activities across sub-Saharan Africa (namely those in agriculture) and the severe lack of raingauges across much of the continent, this satellite-based alternative proved an extremely useful resource for monitoring rainfall conditions and forewarning of impending water shocks, such as drought or flooding.


Figure 1. Meteosat thermal infrared image from 29 November 2016, 1800 UTC. White scenes denote cold cloud tops, while black/dark grey scenes denote the warmer land or sea surface.

More than three and a half decades on, the TAMSAT group still provide operational rainfall estimates (e.g. Figure 2) to all of Africa using a simple, yet effective rainfall estimation algorithm which is based on the use of cold cloud duration (CCD) fields derived from the satellite imagery. Over the years, many African meteorological services and other agencies or organisations have depended on this data for a range of applications such as flood and drought monitoring, famine early warning and weather index-based insurance schemes (e.g. Figure 3), helping millions of people to manage the highly variable rainfall climate that characterises much of the African continent.
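At its heart, the CCD approach assumes that rainfall scales with the time a pixel spends colder than a temperature threshold in the thermal infrared imagery. The sketch below illustrates that idea for a single pixel; the threshold and calibration coefficients are invented for illustration and are not TAMSAT's locally calibrated values.

```python
def rainfall_from_ccd(brightness_temps_k, threshold_k=233.0,
                      hours_per_image=1.0, a=3.0, b=0.0):
    """Estimate rainfall (mm) for one pixel from a time series of thermal-IR
    brightness temperatures, using the cold cloud duration (CCD) idea:
    hours colder than a threshold, scaled by a locally calibrated rate.

    threshold_k, a and b are illustrative values, not TAMSAT's calibration.
    """
    ccd_hours = sum(hours_per_image for t in brightness_temps_k
                    if t < threshold_k)
    return max(a * ccd_hours + b, 0.0)

# One pixel observed hourly for a day: cold convective cloud tops for 5 hours
temps = [290.0] * 10 + [220.0] * 5 + [290.0] * 9
print(rainfall_from_ccd(temps))  # 15.0 mm with these illustrative coefficients
```

In practice the threshold and the regression coefficients are calibrated region by region and season by season against gauge data, which is where the real work of the algorithm lies.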


Figure 2. The TAMSAT seasonal rainfall anomaly for September-November 2015 with respect to the 1983-2012 climatology. The seasonal anomaly, derived from 10-day total rainfall estimates, shows the impact of the 2015 El Niño event on African rainfall (namely, a wetter East Africa and drier Southern Africa).



Figure 3. A weather index-based insurance workshop in Zambia where the use of TAMSAT data in insurance products is being discussed (Photo credit: Agrotosh Mookerjee).

In an era when there is an increasing number of satellite-derived rainfall products, many of which are based on new and highly sophisticated sensors, it is reasonable to assume that the TAMSAT data, given it is based on a relatively simple method, may no longer be as useful. However, the strength of the TAMSAT data lies in its longevity and consistent estimation algorithm. For many applications, a short rainfall time-series makes it very difficult to assess the severity of unexpected changes in rainfall because the average or climatological conditions are not well known. The long time-series of the TAMSAT archive (since 1983) and its operational system, which ensures that new estimates are created on a regular basis, make the TAMSAT rainfall dataset a highly valuable resource for both climate-based operational activities (e.g. Black et al. 2016) and climate research (e.g. Maidment et al. 2015).

So what next for TAMSAT? During the last 12 months, the TAMSAT group have invested huge efforts in overhauling their estimation algorithm, and in doing so, minimising several of the characteristic problems associated with the previous algorithm. In addition, with the help of collaborators at the International Research Institute at Columbia University, the group have developed novel techniques to provide rainfall uncertainty estimates (which, surprisingly, many rainfall datasets do not issue) and to merge auxiliary information (such as raingauge measurements) with the satellite data to improve the estimation of rainfall intensities over short time periods. It is planned that these products will run operationally alongside TAMSAT’s primary rainfall product during 2017, and also in a rainfall monitoring platform currently deployed in several countries across West and East Africa.

The activities described here, amongst others, ensure that TAMSAT are delivering both skilful and reliable rainfall products that are much needed in a region of the world that is deprived of adequate rainfall information and, at the same time, is challenged by marked climate variability and human-induced climate change, which is expected to exacerbate current conditions in the near future.


Black, E., E. Tarnavsky, R. Maidment, H. Greatrex, A. Mookerjee, T. Quaife, and M. Brown, 2016: The Use of Remotely Sensed Rainfall for Managing Drought Risk: A Case Study of Weather Index Insurance in Zambia. Remote Sens., 8, 342.

Maidment, R. I., R. P. Allan, and E. Black, 2015: Recent observed and simulated changes in precipitation over Africa. Geophys. Res. Lett., 42, 2015GL065765.

Posted in Africa, Climate, drought, earth observation, Remote sensing

Flying through the Indian monsoon

By Andy Turner

Forecasting the monsoon in India continues to be a challenge for scientists, both for the season ahead and long into the future. The monsoon is vital, providing 80% of the country’s annual rainfall and securing the food supply for more than a billion people.

For years scientists have studied climate models from all over the world to understand why they don’t represent the monsoon well. But new observations are needed to help understand the processes involved, including how the land surface affects the timing of the onset and cessation of the rains, and how the deep convective clouds of the tropics work together with the monsoon winds that cover a much larger region.

Figure 1. The summer months feature much stronger rains than winter over India (coloured blue over the land) as winds blow onshore from the Indian Ocean. Also shown are sea-surface temperatures. Notice the heavy rainfall on the west coast, caused by the influence of the Western Ghats mountains.

So this year, with the support of funding from the Natural Environment Research Council in the UK and India’s Ministry of Earth Sciences, that is what we set out to do with our INCOMPASS project.

The University of Reading and the Indian Institute of Science in Bengaluru (Bangalore) led a team of more than 10 universities, research institutes and operational forecasting centres to observe the monsoon in India from the air and on the ground.

After years of planning, this May we flew the NERC-owned Atmospheric Research Aircraft, a BAe-146 four-engine jet operated by the NCAS Facility for Airborne Atmospheric Measurement, to India, as part of one of the largest observational campaigns that NERC has ever funded.

We based ourselves in two locations: the northern city of Lucknow, central to some of India’s most fertile land in the basin of the river Ganges, and Bengaluru in the southern peninsula, midway between the heavy rains of the Western Ghats mountains and the drier land on the east coast. The aircraft returned to the UK in mid-July.

In the north we captured the monsoon onset: taking measurements as the monsoon progresses from east to west over the plains. How do the transitions between wet and dry soils affect the development of monsoon storms? How do the temperature and humidity change as we approach the Thar Desert in north-western India? By analysing our results over the next few years, we’ll find out.

In the south we gave more attention to the heavy rainfall over the Western Ghats mountains. How does this rainfall change during the day? What does the atmospheric boundary layer look like to the west of the mountains, over the Arabian Sea from where the moisture originates? Just as in the north, and as explained in our Planet Earth article, several of our flights took place only a few hundred feet above the ground, giving a spectacular, if bumpy, view of the landscape.

A research aircraft has the advantage of being able to cover large distances and measure large weather systems from different angles. But fuel and aircraft support are expensive and require a huge commitment from engineers, flight operations staff and instrument scientists at FAAM and the Met Office. To complement these measurements, the INCOMPASS project also put in place a series of flux towers (installed by colleagues at CEH) at locations across India (Figure 2).


Figure 2. Towers like this one installed at IIT Kanpur by INCOMPASS colleagues at CEH will offer long-term measurements of fluxes of heat and moisture passed from the surface to the atmosphere. Photo (c) Andy Turner 2016.

Flux towers are vital for measuring turbulent fluxes of heat and moisture as they migrate from the surface to the atmosphere. The information collected in 2016, and hopefully for years to come, will help develop and improve the JULES land model, a vital component of the weather and climate models used by the Met Office. INCOMPASS also gave us the opportunity to use the Department of Meteorology’s radiosonde equipment. We based this at IIT Kanpur in northern India, close to our Lucknow airport base, and launched more than 100 balloons during the monsoon rains of July.


Figure 3. Scientists in India prepare to launch a radiosonde (weather balloon). Photo (c) Andy Turner 2016.

But what happens next? The team of scientists and students will spend the next few years analysing the vast datasets collected. With the Met Office and the NCAS Computational Modelling Services, we will be running model experiments at a variety of high resolutions to compare with our data for the 2016 monsoon. During the field campaign, we ran forecasts for India at 4 km resolution, but in the work to come we hope to perform tests below the kilometre scale to see how well we can re-forecast some of the storms in the monsoon.



Posted in Climate, Climate modelling, Monsoons, Numerical modelling

From kilobytes to petabytes for global change research: take the skills survey!

By Vicky Lucas
Institute for Environmental Analytics

If you deal with megabytes of environmental sample data, or gigabytes of sensor data, or terabytes of model data, or petabytes of remote sensing data, then I’d like you to take a survey. If you create, look after, analyse, publish on or manage datasets for global change, then I’d like to find out the necessary and emerging skills you need.

Global change research and development are pursuits that monitor, analyse and simulate every aspect of environmental developments, from climate to biodiversity to geochemistry to human attitudes and actions on the world. For global change research to flourish, a range of skills is necessary, increasingly so in the broad area of data-intensive digital skills and interdisciplinary work.


Through this survey I would like to find out about existing training programmes and opportunities that you use and value, as well as the skills that are essential in your everyday work or that would make you more efficient or more effective.  Go to survey (closes 22 November).

I work for the Belmont Forum, which is a group of the world’s major and emerging funders, from the National Science Foundation in the USA, to FAPESP in Brazil to the Japan Science and Technology Agency, and everywhere in between.  The Belmont Forum aims to accelerate the delivery of research.  As part of the e-Infrastructure and Data Management project, I am focussed on capacity building, improving the workforce skills and knowledge to enable global change research to thrive.

If you, from anywhere in the world, have wrestled a spreadsheet, frowned at R or Python, filled your hard disk or delighted as you kicked off a month-long model run, then I’d really appreciate 10 minutes of your time to generate a few kilobytes of survey data of my own.

Vicky Lucas
Human Dimensions Champion, Belmont Forum e-Infrastructures and Data Management Project

Background on the Belmont Forum


The Belmont Forum is a group of the world’s major and emerging funders of global environmental change research. It aims to accelerate delivery of the environmental research needed to remove critical barriers to sustainability by aligning and mobilizing international resources. It pursues the goals set in the Belmont Challenge by adding value to existing national investments and supporting international partnerships in interdisciplinary and trans-disciplinary scientific endeavours. You can also read about the full Belmont Challenge.

Belmont Forum Data Skills and Training Survey:

Posted in Academia, Climate, Climate change, Climate modelling, Numerical modelling, Remote sensing, Teaching & Learning

When meteorology altered the course of history (or maybe not)

By Bob Plant

The Battle of Milvian Bridge was fought on 28 October in the year 312 CE. The atmospheric conditions there on that day may have had a critical influence on the course of human history ever since. It’s a defensible opinion. Or they may not have been all that important: that’s a defensible opinion too. On the other hand, perhaps nothing in the least interesting happened, at least nothing of a meteorological nature. Again, that’s entirely plausible. This is a very longstanding and very much ongoing controversy. I’ll try to explain it …


Figure 1. A painting of the Battle of Milvian Bridge by Giulio Romano.

The Roman empire by the third century had become difficult to control, with civil war becoming increasingly common and internal conflicts becoming increasingly destructive. The emperor Diocletian had tried to stabilize matters by formally dividing the running of the Eastern and Western halves of the empire, each run by its own emperor and junior emperor. This worked fairly well; the frontiers were strengthened and the tax system better organized. However, Diocletian’s abdication due to illness in 305 precipitated yet another succession struggle and civil war.

The ruthless chancer who eventually emerged victorious from this particular mess was the emperor Constantine, who began his bid in 306 in York and completed it by 324. Along the way, the battle of the Milvian Bridge in 312 was fought on the outskirts of Rome against the army of Maxentius, himself a recent usurper but supposedly recognized by Constantine as his superior and the emperor for the Western half of the empire. We’ll come back to the battle in a moment but first we should emphasize why Constantine’s victory matters. He went on to found the city of Constantinople (modern Istanbul) and shifted the imperial capital there. This was important in cementing the division into the eastern and western empires which did so much to shape the last two millennia of European and east-Asian history. But even more far-reaching was that he instigated Christianity as the state religion. This proceeded piecemeal, starting by relaxing and removing the persecutions and proscriptions of the latter part of Diocletian’s time (e.g. the Edict of Milan in 313) but ultimately establishing distinct legal and political advantages for Christians. The consequences of those changes have been enormous. 

It’s not clear whether Constantine’s actions on religion and the state were motivated by his own political calculations, by his genuine religious convictions or by some scrambled mixture of the two. If you were to take a guess anywhere along that spectrum it would not take long to find reputable historians making strongly-expressed arguments in support of it. Nonetheless, he was baptized on his death bed, and consistently professed to be a believer well before that, so it is safe to assume that he was at least a partial convert. An important but deeply controversial question for historians is when and how that conversion happened. And that brings us back to Milvian Bridge.

On the night of the battle Constantine apparently experienced a miraculous dream and around noon on the day itself a miraculous vision. A dream is somewhat difficult to verify or falsify of course, and not really our interest here. The vision was of “a cross of light in the heavens, above the sun, and bearing the inscription, CONQUER BY THIS”. The vision has been considered by many as being pivotal in his conversion, and in helping him to inspire the troops to victory on the day. What are we to make of this? 

We should note where this account actually comes from. It appears in the writings of Eusebius around two decades later, and apparently his source is that he was told so “long afterwards” by none other than the emperor himself. Now, Eusebius is not exactly considered the most reliable of writers on various matters for various reasons, and it is not difficult to find reasons to be cautious. The event is conspicuously absent in an account by Lactantius, for example, despite the fact that there would have been potentially thousands of witnesses to it associated with a full-scale battle on the edge of a major city. On the other hand, why invent something if there are potentially thousands of witnesses who might contradict it? Indeed many historians since, however credulous about miracles, have accepted that there may just be something in it: i.e., supposing that there may have been some natural atmospheric phenomenon, just possibly viewed with, shall we say … a little licence. This line of argument goes back several hundred years itself and is far from settled. To give a flavour of these sober and dispassionate scholarly debates, here is a quote from a very lengthy footnote in Potter’s (2013) biography of the emperor, “For further support of Weiss’s view [claiming a solar halo] see Barnes (2011), though I should note that refusal to accept Weiss’s view does not necessarily indicate an attachment to the Nazi party as is implied in his discussion.”

The meteorological explanation usually put forward in modern historical articles is that it may have been a “sun dog”. Figure 2 is a very typical picture put forward to support the notion, taken from the Wikipedia page about the battle:


Figure 2. Two ‘sun dogs’ (parhelia). Source: Wikipedia.

Perhaps it does look like a plausible explanation, allowing for, shall we say … a little licence. But let’s be a little more careful. If we try searching online for pictures of sun dogs we can easily find many beautiful photographs, but we see relatively little that might resemble the description.

What is a sun dog anyway? A highly-recommended guide to atmospheric optical phenomena is provided by the Atmospheric Optics website. A sun dog, or parhelion, is rather common and occurs when light is scattered by ice crystals in the shape of hexagonal plates that are suitably oriented, with the hexagonal faces aligned close to the horizontal. For a more vertically-extended display, or a “tall sundog”, it is helpful if the plates are not quite horizontal but are wobbling slightly as they fall. However, with too much wobble, more than a degree or so, the halo is lost. Less common, but a little more like the description, is a sun pillar, which requires the ice crystals to be consistently somewhat tilted and a low angle of the sun. A rare event would be to combine both upper and lower pillars with a substantial horizontally-extended halo such as a parhelic circle, which arises from reflections from the vertical faces of the crystal – creating something which can indeed look like a cross. This is not easily achieved, however, and it may be worth adding that Rome around noon in late October is a most unlikely time and place to be able to catch it. To get a sense of just how delicate the conditions would need to be, there is a fun halo simulator available from atoptics that you can play around with to see if you can manage to generate something like the image.
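Where does the characteristic angle of these displays come from? Light passing through alternate faces of a hexagonal ice crystal traverses, in effect, a 60-degree prism with refractive index about 1.31, and the minimum deviation of that prism sets the radius of the common halo and the position of sun dogs. A short sketch of that calculation (standard prism optics, nothing specific to any one display):

```python
import math

def deviation(i_deg, n=1.31, apex_deg=60.0):
    """Total deviation (degrees) of a ray through a prism of the given apex angle."""
    i = math.radians(i_deg)
    a = math.radians(apex_deg)
    r1 = math.asin(math.sin(i) / n)   # refraction on entering the first face
    r2 = a - r1                       # angle met at the second face
    if n * math.sin(r2) > 1.0:        # total internal reflection: no transmitted ray
        return float("inf")
    e = math.asin(n * math.sin(r2))   # refraction on leaving the second face
    return math.degrees(i + e) - apex_deg

# Sweep incidence angles; the minimum deviation sets the halo / sun dog radius
min_dev = min(deviation(x / 10.0) for x in range(250, 900))
print(round(min_dev, 1))  # about 21.8 degrees: the familiar "22-degree" radius
```

Because the deviation changes slowly near its minimum, light piles up at roughly 22 degrees from the sun, which is why the halo and the sun dogs sit there.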

There are entire books and countless articles on this event from historians. And for that matter there are entire books on atmospheric optics. If you really want to develop an informed opinion, you have a lot of reading to do! I’ve simply tried to give a short introduction as a non-expert for non-experts. But I thought it may be interesting for the meteorologically-minded to know something of how and why the possible appearance of an atmospheric optical phenomenon has been such a hotly-debated question.

Posted in Atmospheric optics

The value of future observations

By Alison Fowler

The atmosphere and oceans are being routinely observed by a myriad of instruments. These instruments are positioned on board orbiting satellites, aircraft and ships, surface weather stations, and even balloons.  The information collected by these instruments can be used to ensure that modelled weather forecasts adhere to reality using a process known as data assimilation.


Figure 1: Data coverage of the AMSU-A instrument, on board six different satellites, within a 6-hour time window (copyright ECMWF).

For the observations to be useful it is necessary that:

  • The observations can be compared to the forecast variables (e.g. temperature, humidity and wind)
  • We know the uncertainty in those observations
  • We know the uncertainty in the weather forecast model itself (so we know how much to trust the forecast vs how much to trust the observations)
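These ingredients combine in the simplest possible assimilation step: a single-variable analysis that weights the forecast and the observation by their error variances. The sketch below is the textbook scalar form, not any operational centre's scheme, and the numbers are invented for illustration.

```python
def analysis(x_forecast, y_obs, var_forecast, var_obs):
    """Combine a forecast and an observation of the same quantity,
    weighting each by the inverse of its error variance
    (the scalar best linear unbiased estimate)."""
    gain = var_forecast / (var_forecast + var_obs)  # trust placed in the observation
    x_analysis = x_forecast + gain * (y_obs - x_forecast)
    var_analysis = (1.0 - gain) * var_forecast      # uncertainty always shrinks
    return x_analysis, var_analysis

# Forecast says 15 C (error variance 4); a thermometer reads 16 C (variance 1)
xa, va = analysis(15.0, 16.0, 4.0, 1.0)
print(xa, va)  # 15.8 0.8: the analysis sits closer to the more trusted observation
```

The same weighting, generalised to millions of variables and observations at once, is what operational data assimilation systems solve every few hours.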

These fundamentals of data assimilation are continually evolving, as the weather models become more sophisticated and are addressing new societal needs, new instruments are developed and computational resources and mathematical techniques advance.

These different aspects of data assimilation were addressed at the fifth annual international symposium on data assimilation held at the University of Reading during a very hot week in July 2016. This symposium brought together 200 scientists from 15 countries spread across four different continents and received sponsorship from NCEO, the Met Office and ECMWF.


Figure 2: Participants of the Fifth annual international symposium on data assimilation (photograph copyright (C) Stephen Burt).

This symposium comprised 10 different sessions, one of which focused on the particular problem of assessing the value of observations. This is important not only for evaluating which (of the very many) observations are most important for providing an accurate weather forecast, but also for designing instruments that are able to further reduce the uncertainty in the forecast. This latter problem is particularly difficult due to the fast pace at which data assimilation systems are changing, which means that by the time an instrument is operational (possibly in a few decades’ time) its value may be very different than if the data could be assimilated today.

There are many possible metrics for assessing the value of observations. Some are based on how sensitive the forecast skill is to the value of the observations, others try to quantify the amount of information in the observations for reducing the uncertainty in our knowledge of the current state of the atmosphere. Computing these metrics before the instrument is built and the data is available relies on accurate estimates of the error characteristics of the instrument and its relationship to the model variables and, hence, is very challenging.
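One common currency for such metrics is entropy: the reduction in forecast uncertainty achieved by assimilating an observation. For a single variable with Gaussian errors this can be written in a few lines; the variances below are illustrative, not those of any real instrument.

```python
import math

def information_gain_bits(var_forecast, var_obs):
    """Entropy reduction (bits) from assimilating one scalar observation:
    0.5 * log2(prior variance / posterior variance), where the posterior
    variance comes from optimally combining forecast and observation."""
    var_analysis = var_forecast * var_obs / (var_forecast + var_obs)
    return 0.5 * math.log2(var_forecast / var_analysis)

# A precise instrument (small error variance) carries more information
print(round(information_gain_bits(4.0, 1.0), 2))   # accurate observation
print(round(information_gain_bits(4.0, 16.0), 2))  # noisy observation
```

The catch, as the paragraph above notes, is that both variances must be estimated before the instrument exists, so the metric is only as good as those error characterisations.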

It is clearly difficult to describe the value of a future observation unequivocally by a single figure. Instead we need to provide insight, through on-going research, into how the value of observations is sensitive to changes in the ever-evolving data assimilation system. There will be much to discuss at the next symposium!

Posted in Climate, earth observation, Measurements and instrumentation, Numerical modelling, Remote sensing

How can a hurricane near the USA affect the weather in Europe?

By John Methven

It may seem bizarre that processes occurring within clouds near the USA, involving tiny ice crystals and water droplets, can have an influence on high-impact weather events thousands of kilometres away in Europe, and our ability to predict them days in advance. However, this is the fundamental nature of the atmosphere as a chaotic dynamical system. Information is transferred from one region to another in the atmosphere through wave propagation and transport of properties within the air, such as water vapour. Weather systems developing over the North Atlantic and hitting Europe are intimately related to large-amplitude meanders of the jet stream, known as Rossby waves. Characteristic weather patterns grow in concert with the waves, and the jet stream acts as a wave guide, determining the focus of the wave activity at tropopause-level (about 10 km altitude). Rossby wave energy transfers downstream rapidly, amplifying the meanders and the weather events associated with them.
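The rapid downstream transfer of wave energy can be made concrete with the simplest barotropic Rossby wave: crests and troughs move at the phase speed U − β/k², while the energy travels at the group speed U + β/k², which is always faster and always more eastward. A rough illustration with typical mid-latitude numbers (the values of U and the wavelength are assumed, not measured):

```python
import math

U = 20.0        # background jet-stream wind (m/s), an assumed typical value
beta = 1.6e-11  # meridional gradient of planetary vorticity at mid-latitudes (1/m/s)

wavelength = 4.0e6             # a 4000 km meander of the jet stream
k = 2.0 * math.pi / wavelength # zonal wavenumber
phase_speed = U - beta / k**2  # speed of the troughs and ridges themselves
group_speed = U + beta / k**2  # speed at which wave energy moves downstream

print(round(phase_speed, 1), round(group_speed, 1))  # energy outruns the pattern
```

It is this excess of group speed over phase speed that lets disturbances near the USA amplify meanders, and weather, over Europe only a few days later.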

Perhaps an even greater stretch for the imagination is the idea that you could direct research aircraft and weather balloons into the jet stream across the Atlantic to understand how a tropical cyclone near the USA can influence a wind storm in Scotland and then flooding in Norway. Nevertheless, this is what was attempted last month in the North Atlantic Waveguide and Downstream Impacts Experiment (NAWDEX). The experiment involved five research aircraft equipped with lidar, radar and dropsondes (measurement devices that fall on small parachutes from the aircraft) for measuring high-resolution cross-sections of winds, temperature and humidity. The aircraft furthest upstream was the NASA Global Hawk UAV (flying from the USA as part of the NOAA SHOUT programme). The German HALO (cloud radar, water vapour lidar and dropsondes) and DLR Falcon aircraft (two wind lidars) were based in Iceland for the whole campaign month (16 September to 16 October this year), as was the French SAFIRE Falcon (cloud radar and lidar). The UK FAAM BAe 146 aircraft (cloud microphysics and dropsondes) joined from East Midlands airport. In addition, more than 300 weather balloons were launched from ground sites spanning from Canada in the west to Norway and Italy in the east, Svalbard in the north and the Azores in the south. Even commercial ships crossing the mid-Atlantic launched balloons for NAWDEX. A scientific experiment on this scale cannot be conducted by one nation. Contributing countries included Germany, France, the UK, Switzerland, the USA, Canada, Iceland and Norway, as well as the met services from countries that launched weather balloons as part of the EU-funded mechanism EUMETNET (UK, Denmark, Norway, France, Portugal and Italy). International cooperation is achieved through a common purpose and determination, and also with coordination through a working group of the WMO World Weather Research Programme.


On board one of the NAWDEX research flights (courtesy BBC News).

A golden opportunity emerged during the second week of the NAWDEX campaign, as tropical storm Karl moved northwards from the Bahamas and was forecast to interact with the jet stream with highly uncertain outcomes in terms of high impact weather for Europe 5-6 days later. The Global Hawk was first to the scene with a comprehensive coverage of dropsondes on the night of 22/23 September.

Tropical storms move slowly and Karl was sampled again off the east coast of the USA on 24/25 September. Then there was a dramatic change as Karl interacted with the jet stream on the 26th undergoing a process called “extratropical transition” when the cyclone also intensified. The German HALO aircraft was able to reach the centre of the storm during this critical stage from its base in Iceland.

Following transition, the jet stream on the southern flank of cyclone Karl became much stronger and the whole system was stretched out and advanced very rapidly towards the north of Scotland where it was intercepted by both the UK FAAM aircraft and German DLR Falcon aircraft coming together above Torshavn (Faroe Islands) from their bases in East Midlands and Iceland.

Above the Scottish north coast the jet maximum was observed to be 89 m/s (200 mph) which is unusually strong for the time of year and was associated with severe winds at the surface across northern Scotland. I was lucky enough to be on the flight. So was BBC science correspondent, David Shukman, who reported on his experience on BBC TV and website. The jet streak (a locally intense section of the jet stream) moved into Norway and was followed by two days of persistent heavy rainfall and flooding as a moist air stream from the mid-Atlantic was drawn northwards to meet the jet stream on the Norwegian coast.

What do we hope to learn from the sequence of research flights?
We will focus on detailed measurements of cloud physical properties and their relation to the structure of the winds and temperature in the vicinity of the jet stream and its evolution over days to weeks.

Recent research has shown that forecast ‘busts’ (where skill is much lower than usual) for Europe share a common precursor 5-6 days beforehand; there is a distinct Rossby wave pattern with a more prominent ridge (northwards displacement of the jet stream) across the eastern USA. The reasons for these forecast busts are not known, but it is hypothesised that the representation of diabatic (cloud and radiative heating) processes, over the USA and Atlantic, lowers the predictability in this situation. Diabatic processes create shallow temperature structures either side of the tropopause, tending to enhance tropopause sharpness and the jet stream wind maximum. Recent theory indicates that tropopause sharpness can have a far-reaching influence on Rossby wave propagation and thereby downstream forecast error. However, the sharpness is not well represented in both models and satellite data due to poor vertical resolution in the tropopause region. We already know from a first look at the flight data that the tropopause was observed to be sharper than represented in forecasts, but much more scientific investigation is needed to understand why.

The same physical processes that are poorly represented in weather forecast models also constitute a major source of uncertainty in climate model projections and make the prediction of changes in regional precipitation and wind patterns in response to global warming very uncertain. Among them are cloud microphysics, cloud radiative feedbacks, and turbulent boundary layer dynamics, which are parameterized in both weather and climate models. NAWDEX will help us to improve our representation of these processes by furthering our understanding of how the physical processes influence synoptic-scale dynamics, thereby affecting not only mesoscale sensible weather but also large-scale weather regimes. Only once the physical processes are represented well in models, can the excitation and maintenance of large-scale patterns on seasonal timescales by global teleconnections, or downscaling of climate information for the Atlantic/European sector, be tackled with confidence through numerical simulation.

If the idea of taking a research aircraft into the jet stream to discover more about the atmosphere excites you, then you should consider a career in atmospheric physics. There is an ever-expanding range of employment opportunities requiring specific environmental physics skills, as well as more general openings for graduates with the problem-solving and analytical capability that training in physics and mathematics brings. Our Department offers undergraduate degrees in Environmental Physics and Meteorology. If you have already done a first degree (perhaps in Physics, Mathematics or another physical science) you could consider the MSc in Atmosphere, Oceans and Climate, or entering directly onto the PhD programme.


Posted in Environmental physics, Measurements and instrumentation, Numerical modelling, University of Reading, Weather forecasting

Where is the high probability?

By Peter Jan van Leeuwen

To determine the uncertainty in weather and climate forecasts, an ensemble of model runs is used (see Figure 1). Each model run represents a plausible scenario of what the atmosphere or the climate system will do, and each of these scenarios is considered equally likely. The underlying idea is that these model runs describe, in an approximate way, the probability of each possible weather or climate event. So if several model runs in the ensemble cluster together, meaning they forecast a similar weather event, the probability of that weather event is larger than that of an event with only one model run, or none at all.

2016 10 21 Peter Jan van Leeuwen Fig 1

Figure 1. An example of an ensemble of weather forecasts, from the Met Office webpage 

Mathematically we say that each model run represents a probability mass, and the total probability mass of all model runs is equal to 1. The probability mass is the probability density times the volume, where the volume is measured in the space of variables (temperature, velocity, etc.) spanned by the set of events we are interested in. The probability density tells us how the probability is distributed over the events. For instance, the probability that the temperature in Reading is between T1 = 14.9 and T2 = 15.1 degrees is approximately

Prob(T1 < T < T2)  ≈  p(T=15) (T2 − T1)

… where (T2 − T1) is the volume and p(T=15) is the probability density.
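For a narrow interval this ‘density times volume’ rule is essentially exact. A minimal sketch, assuming for illustration a Gaussian temperature distribution with mean 15 °C and standard deviation 2 °C (round illustrative numbers, not a Reading climatology), compares the approximation with the exact probability:

```python
import math

# Hypothetical Gaussian temperature distribution (degC): illustrative only.
mu, sigma = 15.0, 2.0

def pdf(x):
    """Probability density of the normal distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def cdf(x):
    """Cumulative probability of the normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

T1, T2 = 14.9, 15.1
exact = cdf(T2) - cdf(T1)        # exact probability mass in the interval
approx = pdf(15.0) * (T2 - T1)   # density times volume, as in the text
print(exact, approx)             # the two agree to about four decimal places
```

The approximation only breaks down when the interval becomes wide enough for the density to vary appreciably across it.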

There are an infinite number of possible weather events, each with a well-defined probability density, but we can only afford a small number of model runs, typically 10 to 100. So the question becomes how should we distribute the model runs to best describe the probability of all these events? Or, in other words, where are the areas of high probability mass?

You might think that we should put at least one model run at the most likely weather event. But, interestingly, that is incorrect. Although the probability density is highest there, the probability mass is small because the volume is small, much smaller than in other parts of the domain with different weather events. How does that work? It all has to do with the high dimension of the system. By that I mean that the weather consists of the temperature in Reading, in London, in Paris, in, well, in all places of the world that are in the model. And not only temperature matters in all these places, but also humidity, rain rate, wind strength and direction, clouds, etc., and all of these in all places that are in the model. So the number of variables is enormous, typically 1,000 million! And that does strange things to probability.

So where should we look for the high probability mass? The further you move away from the most likely state, the smaller the probability density becomes. But, on the other hand, the volume grows, and it grows quite rapidly. It turns out that an optimum is reached, and the area of maximal probability mass is found at a distance from the most likely state that scales as the square root of the total number of model variables, i.e. the number of places the model contains (Reading, London, etc.) times the number of variables (temperature T, velocity u, etc.) at each of these places. This means that the optimal positions for the model runs lie in that volume, far away from the most likely state, as illustrated in Figure 2.
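This concentration of probability mass away from the mode is easy to demonstrate numerically. The sketch below assumes, purely for illustration, that the forecast uncertainty is a standard Gaussian in d dimensions (real ensemble statistics are far more complicated): random ‘model runs’ are sampled, and their distances from the most likely state cluster tightly around the square root of d.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000   # total number of model variables (the dimension)
n = 1_000    # number of sampled "model runs"

# Each row is one model run drawn from a standard d-dimensional Gaussian,
# whose probability density peaks at the origin (the most likely state).
runs = rng.standard_normal((n, d))

# Distance of each run from the most likely state.
distances = np.linalg.norm(runs, axis=1)

# The distances cluster tightly around sqrt(d) = 100: every run sits far
# from the mode, and the relative spread shrinks as d grows.
print(distances.mean(), np.sqrt(d))
print(distances.std() / distances.mean())
```

Not a single one of the thousand samples lands near the most likely state, which is exactly the point made above.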

2016 10 21 Peter Jan van Leeuwen Fig 2

Figure 2. The blue curve shows the probability density of each weather event, with the most likely event having the largest value. Note that this curve decreases very rapidly with distance from the peak. The green curve denotes the volume of events that have the same distance from the most likely weather event; it grows very rapidly with that distance. The red curve is the product of the two, and shows that the probability mass is not at the most likely weather event, but some distance away from it. For real weather models that distance can be huge …

So we find that the model runs should lie very far away from the most likely weather event, and, it turns out, within a rather narrow range of distances. But what does that mean, and how do we get the models there? Very far away means that the total distance from the most likely weather event, measured over all variables and all places in the model, is large. Looking at each variable individually, for example the temperature in Reading, the distance is rather small. So, as shown in Figure 1, the differences are not that large if we concentrate on a single variable. And how do we get the model runs there? Well, if the initial spread in the model runs was chosen to represent the uncertainty in our present estimate of the weather well, the model, if any good, should do the rest.

So you might ask what all this fuss is about. The answer is that getting the model runs started in the right places, so that they produce the correct uncertainty, is not easy. For that we need data assimilation, the science of combining models and observations. And for data assimilation these distances are crucial. But that will be another blog …

Posted in Climate, Numerical modelling, Weather forecasting

Producing quantitative estimates of radiative forcing

By Will Davies

Last year the Paris climate conference agreed an action plan to limit global warming to below 2 degC – preferably 1.5 degC. Various initiatives measure performance against this target, such as the global warming index, which tracks human-induced global warming relative to pre-industrial times, and the Copernicus Atmosphere Monitoring Service (CAMS), which will deliver operational services including near-real-time analyses and forecasts of atmospheric composition and estimates of instantaneous radiative forcing (RF).

The difference between the amount of radiation the Earth receives from the sun and the amount it radiates back into space is referred to as the Earth’s energy budget. RF measures the imbalance in this budget when the climate system is perturbed by components such as greenhouse gases, aerosols and clouds.

Here in Reading I am part of a CAMS team producing quantitative estimates of RF with respect to pre-industrial (PI) times, using PI concentrations for the year 1750 provided in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). The production chain being developed will use a CAMS global reanalysis dataset that includes data assimilated from recent satellite launches, such as Sentinel-3, in order to produce improved RF estimates. CAMS consolidates previous research projects such as MACC, and so the CAMS production chain has been prototyped using MACC reanalysis data.
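The CAMS production chain uses a full radiative transfer code, but for CO2 alone a widely used simplified expression from the IPCC reports, RF = 5.35 ln(C/C0) W m-2, gives a feel for the numbers. A minimal sketch, using round illustrative concentrations rather than CAMS output:

```python
import math

def co2_rf(c_ppm, c0_ppm=278.0):
    """Instantaneous CO2 radiative forcing (W m-2) relative to a
    pre-industrial concentration c0_ppm, using the simplified
    logarithmic expression RF = 5.35 ln(C/C0) from the IPCC reports."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Round illustrative concentrations (ppm), not CAMS output.
print(round(co2_rf(400.0), 2))  # roughly 1.9 W m-2 for a ~2015 concentration
```

The logarithm captures the saturation of the CO2 absorption bands: each doubling of concentration adds roughly the same forcing.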

The CAMS 74 production chain uses a radiative transfer (RT) code based on the standalone version of the Rapid Radiative Transfer Model for General Circulation Models (RRTMG), as used in the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS). The development of this RT code will include stratospheric temperature adjustment using the seasonally-evolving fixed dynamical heating approximation.

Early versions of the code have been run on MACC reanalysis data – see Figures 1, 2 and 3.

2016 10 13 Will Davies - Fig1

Figure 1. The 2007 annually-averaged RF for the aerosol-radiation interaction (ari) short wave (SW) RF, the aerosol-cloud interaction (aci) SW RF, the all sky methane (CH4) long wave (LW) RF and the all sky carbon dioxide (CO2) SW+LW RF

2016 10 13 Will Davies - Fig2

Figure 2. The mean global distribution of the methane RF at the top of atmosphere (TOA) for a clear sky on 21 June 2009, showing the effect that meteorological conditions have on the methane RF.

2016 10 13 Will Davies - Fig3

Figure 3. The CO2 TOA LW clear sky yearly instantaneous RF from 2003 to 2009, which shows the steady increase in RF, and hence global warming, caused by CO2 emissions.

Many scientists see the Paris 2015 target as ambitious. The radiative forcing products provided by the CAMS monitoring service will help quantify progress towards it, and will highlight the scale of the challenge that we face.



Posted in Atmospheric chemistry, Climate, Climate change, Climate modelling, Numerical modelling, Solar radiation

El Niño in West and Central Africa

By Chimene Daleu


What is El Niño, how often does it occur, and why is everyone so concerned this year?
El Niño is the warming phase of the El Niño Southern Oscillation (ENSO). During an El Niño event, the central to eastern tropical Pacific warms by 0.5-2 degC or more for anything from a few months to two years. El Niño occurs, on average, every three to seven years. It affects weather systems around the globe, so that some places receive more rainfall while others receive none at all, often in a reversal of their usual weather pattern.

There were super El Niño events in 1972-73, 1982-83 and in 1997-98, the latter bringing record global temperatures alongside droughts, floods and forest fires. The current El Niño has already affected millions of people and comes on top of already volatile and erratic weather patterns linked to climate change. Globally, 2014 and 2015 were the hottest years on record, with the Pacific Ocean already warming up to an unprecedented degree.
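For a flavour of how such events are identified in practice, NOAA’s Oceanic Niño Index applies a 3-month running mean to Niño 3.4 sea surface temperature anomalies and declares El Niño when the index stays at or above +0.5 °C for five consecutive overlapping seasons. A minimal sketch of that rule (the function name and the synthetic anomaly series below are illustrative, not observed data):

```python
import numpy as np

def classify_el_nino(sst_anom, threshold=0.5, min_seasons=5):
    """Flag El Nino seasons in a series of monthly Nino 3.4 SST
    anomalies (degC), following the ONI convention: a 3-month running
    mean that stays at or above +0.5 degC for at least five
    consecutive overlapping seasons."""
    # Three-month running mean (one value per overlapping "season").
    oni = np.convolve(sst_anom, np.ones(3) / 3.0, mode="valid")
    warm = oni >= threshold
    flags = np.zeros_like(warm)
    run_start = 0
    for i in range(len(warm) + 1):
        # A run of warm seasons ends at a cold season or at the end.
        if i == len(warm) or not warm[i]:
            if i - run_start >= min_seasons:
                flags[run_start:i] = True  # long enough to count as El Nino
            run_start = i + 1
    return oni, flags

# Synthetic anomalies: eight warm months embedded in neutral conditions.
anomalies = np.array([0.0] * 6 + [1.0] * 8 + [0.0] * 6)
oni, el_nino = classify_el_nino(anomalies)
print(el_nino.sum())  # 8 overlapping seasons flagged as El Nino
```

A shorter warm spell of only a few months would fail the five-season persistence test and would not be classified as an event.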

El Niño in West and Central Africa
Experts have confirmed that unusually high surface temperatures at the end of 2015 and in January 2016, above the typical annual increase, are an indication that El Niño may have impacted West and Central Africa (particularly Chad, Cameroon and the Democratic Republic of the Congo) from September 2015 to March 2016. However, since January 2016 a decrease in ENSO intensity has been noted. Most model outputs and expert assessments have suggested a persistence of this decreasing trend, leading to ENSO-neutral conditions from May 2016.

2016 10 06 Chimene Daleu Fig 2.png

Figure 1. Source:

2016 10 06 Chimene Daleu Fig 3

Figure 2. Source:

Predictions – June to September 2016 for West and Central Africa
Figures 1 and 2 show the seasonal temperature and precipitation forecasts for July to September 2016, while Figure 3 shows significant weather and climate events expected from June to September 2016.

  • Below average precipitation was very likely from July to September 2016 over western Guinea, Sierra Leone, Liberia, southern Côte d’Ivoire, Ghana, Togo, Benin and Nigeria, southeastern CAR, Sudan, northern DRC, Uganda and most of South Sudan and Ethiopia.
  • Near to below average precipitation was very likely over most of the coastal parts of Mauritania, Senegal, The Gambia and Guinea-Bissau, and the Gulf of Guinea coast from Sierra Leone to Nigeria, during the June to August period.
  • Over northern Guinea and the central Sahel region, near to above average precipitation was very likely from June to September 2016.
  • Near to above average temperature was very likely over most of northern Africa, the Sahel region and southern Africa from June to September 2016.
  • Above average temperature was very likely during June to September 2016 over Morocco, Algeria, Tunisia, northern Libya, northern and eastern Egypt, and northernmost Mauritania and Mali.

2016 10 06 Chimene Daleu Fig 4

Figure 3. Source:


Posted in Climate, ENSO, Seasonal forecasting

Helping National Grid manage the sudden growth in solar power

By Daniel Drew

“In Britain it had been a year without summer. Wet spring had merged imperceptibly into bleak autumn. For months the sky had remained a depthless grey. Sometimes it rained, but mostly it was just dull, a land without shadows. It was like living inside Tupperware.”

Bill Bryson, The Lost Continent: Travels in Small-Town America

Given the reputation of the Great British weather, it is perhaps surprising that the UK now has more solar panels than France, Spain or Australia. Some of these installations generate hot water (solar thermal), but the vast majority harness the photovoltaic effect to generate electricity; these are known as solar photovoltaic cells, or solar PV (potentially confusing initials for a meteorologist). Since January 2014 the installed capacity of solar PV has increased dramatically, from 2.8 GW to 10.7 GW (as of July 2016), and nearly all of these installations benefit from feed-in tariffs, which guarantee a set income for each kWh of solar generation.

While the increased proportion of renewable generation helps reduce the carbon intensity of UK electricity, it does present a challenge to National Grid, the system operator responsible for ensuring supply equals demand throughout the day. Solar PV generation is highly variable over a range of temporal scales. There is a well-understood seasonal and diurnal pattern, but also higher-frequency variability due to clouds. National Grid is, however, used to dealing with variable generation: over the last 15 years the capacity of wind power in the UK has steadily increased to 14.5 GW (as of January 2016). During this time, National Grid has been working on research projects (including several with the University of Reading) to develop a detailed understanding of the variability and predictability of UK wind power.

Solar generation presents a new challenge. Whereas wind capacity is concentrated in a relatively small number of very large wind farms, solar PV capacity is distributed across a large number of small installations, typically on the rooftops of buildings. The installations are generally so small that the owners are under no obligation to provide National Grid with electricity generation data, or even to inform them of the existence of the panels. Solar generation is therefore ‘seen’ by National Grid simply as a reduction in electricity demand. Proportionally this reduction can be quite large, particularly on a clear weekend day in summer when the electricity demand is typically only 25 to 30 GW.
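The effect can be sketched with a toy calculation: subtract an unmetered solar generation profile from the underlying demand to obtain the demand National Grid actually observes. All the numbers below are illustrative, not National Grid data:

```python
import numpy as np

# Hypothetical hourly profiles for a clear summer day (GW); the shapes
# and magnitudes are illustrative only.
hours = np.arange(24)
underlying = 28 + 4 * np.sin((hours - 6) * np.pi / 12)          # gross demand
solar = np.clip(8 * np.sin((hours - 5) * np.pi / 14), 0, None)  # embedded PV

# Rooftop PV is not metered centrally, so National Grid simply sees
# demand reduced by whatever the panels happen to generate.
observed = underlying - solar

print(f"Midday: grid sees {observed[12]:.1f} GW, "
      f"underlying demand {underlying[12]:.1f} GW")
# prints "Midday: grid sees 24.0 GW, underlying demand 32.0 GW"
```

In this toy example a quarter of the midday demand is hidden from the system operator, which is why cloud-driven swings in solar output feed straight into demand forecast errors.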

To maintain the previously accurate predictions of electricity demand, an understanding of the variability and predictability of solar generation is required. Our project uses state-of-the-art meteorological datasets to address questions such as: how changeable is UK solar power? How extreme can swings in solar power be? Are they correlated with swings in wind power? How well are extreme events forecast?

Find out more about our project at

2016 09 22 Daniel Drew - Fig 1 demand_schematic

Illustration of the reduction in GB electricity demand as observed by National Grid due to solar PV generation on a typical summer weekday.

Posted in Climate, Renewable energy, Solar radiation