Howling Space Gales and why we should photograph them.

By: Luke Barnard

Most people are familiar with the fact that the Sun emits a range of electromagnetic radiation (e.g. sunlight), and that this radiation is necessary to sustain life on Earth as we know it. What is less well known is that, alongside its electromagnetic radiation, the Sun also generates a wind of plasma that continuously blows out through the Solar System at speeds of 250 km/s to 750 km/s.

This solar wind impacts our everyday lives through its effects on the technology we increasingly depend on, particularly spacecraft in orbit around Earth. We rely on satellites for critical services such as communications, GPS, and weather forecasting. When services like these are disrupted, it can have both expensive and dangerous consequences [1].

During periods of intense activity, the solar wind squeezes and shakes Earth’s magnetic field. This produces energetic charged particles which are harmful to satellite electronics and can also make it difficult to maintain radio communications with satellites. Depending on how intense the solar wind is, satellites can be temporarily or permanently damaged, with knock-on impacts on the services they provide.

Space weather forecasting grew out of the need to understand and predict when situations like this will occur. A key challenge is forecasting the solar wind flow throughout the Solar System. This is difficult because there are only a handful of spacecraft able to measure the solar wind, and these only measure it at single points which are vastly separated. By way of analogy, it is like trying to forecast the weather at Reading using only weather observations from a few far-away cities, like Exeter, Manchester and Brighton. The limited information is still useful, but there is a lot that can happen in between and a lot of uncertainty.

Our research [2] aims to help solve this problem by using images of the solar wind plasma to characterise the solar wind flow near the Sun. This would be an extra source of information on the solar wind flow, which we could use to help improve computer models that forecast the solar wind.

Figure 1: This shows the relative locations of Earth, STEREO-A and STEREO-B. The purple shaded regions show the field-of-view of the inner Heliospheric Imager camera on STEREO-A.

NASA’s STEREO mission consists of two spacecraft in Earth-like orbits that drift relative to Earth [Figure 1], so that they can observe the space between the Sun and Earth. Each spacecraft carries a pair of cameras called the Heliospheric Imagers, which produce images of the solar wind plasma [Figure 2]. The cameras record visible sunlight that has scattered off electrons in the solar wind. Interpreting the images is tricky because there are electrons and visible light everywhere in space, so we don’t actually produce an image of a specific feature or object. But because we understand the physics of sunlight well, and of how it scatters off electrons, we can use these images to identify regions where there are relatively more electrons, and hence a denser solar wind.

Figure 2: A movie of heliospheric imager images from July 2008. Movie obtained from the UK Solar System Data Centre

Our aim was to show that variability in the images could be statistically related to the direct single point measurements of solar wind flow observed by other spacecraft. This would be the first step in creating and calibrating a technique to estimate the solar wind flow directly from the images.

We compared the solar wind point measurements and images directly, computing the correlation between variability in the images recorded by STEREO-A and the solar wind speed measured directly at Earth, STEREO-A, and STEREO-B. We found a strong correlation between variability in the images and the solar wind speed observations at all three spacecraft, but the correlation was largest when a delay was applied between the image and solar wind observations. This delay was different for each pair of spacecraft, and changed in time in a way that can only be explained by the orbits of the spacecraft. Based on this statistical analysis we have concluded that we probably can trace the flow of the solar wind in the Heliospheric Imager data. Our next step is to investigate how best to compute a reliable estimate of the solar wind speed directly from the images.
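To make the lag analysis concrete, here is a minimal sketch of the idea in Python, using synthetic series rather than the actual Heliospheric Imager or in-situ data: slide one time series against the other and pick the lag that maximises the Pearson correlation.

```python
import numpy as np

def best_lag(a, b, max_lag):
    """Return (lag, r): the lag (in samples) by which b trails a that
    maximises the Pearson correlation, searched over -max_lag..+max_lag."""
    best = (0, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]   # b delayed relative to a
        else:
            x, y = a[-lag:], b[:len(b) + lag]  # b ahead of a
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

# Synthetic example: b is a noisy copy of a, delayed by 5 samples.
rng = np.random.default_rng(42)
a = rng.standard_normal(500)
b = np.roll(a, 5) + 0.1 * rng.standard_normal(500)
lag, r = best_lag(a, b, 20)
```

In the study the analogous delay changes with the spacecraft orbits; in this toy example the recovered lag should simply be the 5-sample delay built into the synthetic data.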

References:

[1] Cannon, P., et al. (2013), Extreme space weather: impacts on engineered systems and infrastructure, Tech. Rep., Royal Academy of Engineering, London. ISBN: 1903496950

[2] Barnard, L.A., Owens, M.J., Scott, C.J., Jones, S.R.: 2019, Extracting inner-heliosphere solar wind speed information from heliospheric imager observations. Space Weather 17. https://doi.org/10.1029/2019SW002226

Posted in Climate, space weather

Trading Evil lasers for MAGIC Doppler lidars

By: Janet Barlow 

Lasers may have an evil reputation in Hollywood, but they are very good for observing urban meteorology. We recently took part in the MAGIC project field campaign in London, deploying a Doppler lidar to measure wind-speed around tall buildings.

Just like a duck moving through water, a tall building causes a wake behind it. The wake can extend hundreds of metres downstream, causing reduced wind speeds and increased turbulence. Wakes can thus affect air quality, so it is important to represent them in pollutant dispersion models.

Recently we reported on wind tunnel experiments where we measured flow around a model tall building at the MAGIC project experimental site. One question was whether the wake affected natural ventilation of the test building at the centre of the site. Measuring flow around actual tall buildings is impossible using traditional meteorological instruments like cup anemometers: they measure at single points and simply cannot capture the whole wake. Instead, we used a Doppler lidar, which can measure wind-speed remotely over a wide area (Drew et al., 2013).

Figure 1: Principle of infra-red Doppler lidar operation. Image taken from https://www.hko.gov.hk/publica/wxonwings/wow018/wow18e.htm

A Doppler lidar works on the same principle as the radar observations of rainfall used for a weather forecast: a pulse of electromagnetic radiation of a certain wavelength is beamed out into the atmosphere (Figure 1). A lidar uses infra-red light, which interacts with particles of a similar size to the light’s wavelength. Some light is scattered back to the instrument and measured. The backscattered waves are shifted in frequency by an amount proportional to the speed of the wind blowing the particles around. This is the same “Doppler effect” we hear when an ambulance goes by and its siren seems to change pitch: the sound waves are shifted in frequency in proportion to the ambulance’s speed. One advantage of using infra-red frequencies is that lidars are eyesafe. Not evil at all!
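The measurement rests on one simple relation: the backscattered light is shifted in frequency by Δf = 2v/λ, where v is the line-of-sight wind speed and λ the laser wavelength (the factor of two arises because the Doppler shift happens once on the way out and once on the way back). A minimal sketch, using an illustrative eyesafe infra-red wavelength rather than the specifications of any particular instrument:

```python
def radial_velocity(wavelength_m, freq_shift_hz):
    """Line-of-sight wind speed from a Doppler frequency shift.
    v = lambda * delta_f / 2 (the factor 2 accounts for the
    round trip of the backscattered light)."""
    return wavelength_m * freq_shift_hz / 2.0

# Illustrative numbers: a ~1.5 micrometre infra-red wavelength and
# a ~6.7 MHz frequency shift.
wavelength = 1.5e-6  # m
shift = 6.67e6       # Hz
v = radial_velocity(wavelength, shift)  # ~5 m/s along the beam
```

Note that the lidar only senses the component of the wind along the beam, which is why scanning the beam in a circle (as described below) is needed to build up a picture of the flow.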

Figure 2: Photo showing MAGIC experimental site. The tall building (height: 81 m) and the lidar (white box) are highlighted with a red circle. The London Eye is on the far left and the Shard is on the far right.

We placed our Doppler lidar on the roof of a building at the MAGIC experimental site in London (Figure 2). At a height of 27 m we had a good view above most rooftops. We scanned the laser beam horizontally in a circle, meaning that laser light was reflected from tall buildings, allowing us to locate them.  

Figure 3: Lidar horizontal scan of local wind-speeds minus the average wind-speed across the whole scan. The wind direction was north-westerly. The building is shown as a red square and its wake is the yellow area to the south-east of it.

Figure 3 shows a horizontal scan of wind-speed measured by the lidar. The average velocity across the whole scan has been subtracted from the velocity measurement at each pixel (NB: as velocity towards the lidar is negative, a wake appears as a positive difference). The wake is approximately 150 m long, which means the test building – just 85 m away from the tall building – is definitely affected by it. Flow around the test building is weaker and more turbulent, affecting pollutant levels and the ability to ventilate rooms through open windows.
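The anomaly field shown in Figure 3 amounts to a one-line calculation: subtract the scan-wide mean velocity from every pixel. A sketch with a tiny synthetic scan (not the actual lidar data):

```python
import numpy as np

# Synthetic scan of line-of-sight velocities (m/s); negative = towards lidar.
scan = np.array([[-8.0, -8.2, -7.9],
                 [-8.1, -5.0, -5.2],   # weaker inflow: a wake region
                 [-8.0, -5.1, -8.1]])

anomaly = scan - scan.mean()  # the wake shows up as a positive difference
```

Because flow towards the lidar is recorded as negative velocity, the slowed-down air in the wake stands out as a positive anomaly, exactly as in the figure.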

So, does a wake measured around a real building resemble wind tunnel measurements? The building wake in the wind tunnel was also long enough to affect the test building – but by how much was the wind-speed reduced, compared with the tall building not being there? The wind tunnel experiments suggested a reduction of around 40% at the location of the MAGIC test building (Hertwig et al., 2019); the lidar measurements for our case study suggest around 25%. With six months of data, we have many more cases to analyse to quantify wake behaviour under different weather conditions.

This amazing instrument allows us to “see” urban winds and provides invaluable data to improve forecasting and building design. But we definitely don’t need to attach our lidar to a shark’s head. That would just be evil.

Thanks to Eric Mathieu, Elsa Aristodemou, Jess Brown, Ian Read and Selena Zito for technical assistance.

References:

Drew, D.R., Barlow, J.F. and Lane, S.E. (2013) Observations of wind speed profiles over Greater London, UK using a Doppler lidar, Journal of Wind Engineering and Industrial Aerodynamics, 121, 98-105, DOI: 10.1016/j.jweia.2013.07.019

Hertwig, D, Gough, H., Grimmond, C.S.B., Barlow, J.F., Kent, C.W., Lin, W., Robins, A.R. and Hayden, P. (2019) Wake characteristics of tall buildings in a realistic urban canopy, Boundary-Layer Meteorology, 172, 239-270, doi: 10.1007/s10546-019-00450-7

Posted in Boundary layer, Climate, Urban meteorology

Don’t (always) blame the weather forecaster

By: Ross Bannister

There are (I am sure) numerous metaphors suggesting that a small, almost immeasurable event can have a catastrophic outcome – that adding the proverbial straw to the load of the camel will break its back. In 1972, the mathematical meteorologist Ed Lorenz famously gave the presentation, “Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?” Unlike for folks who do keep a domestic camel, this title was not intended to be interpreted literally, but instead to ask how a system like the Earth’s atmosphere is affected by vanishingly small perturbations. But is it possible for a butterfly’s flap to really have consequences? Without the ability to experiment on two or more otherwise identical Earths, demonstrating this directly is impossible.

Learning from computer simulations

Atmospheric scientists are acutely aware that computer-derived forecasts are sensitive to the ‘initial conditions’ provided to them. Modern weather forecasting is done by representing the atmosphere at an initial time with vast sets of numbers stored inside a computer (this set is called the initial conditions of the model). The computer marches this state forward in steps into the computer’s version of the future. The rules that the computer uses to do this task boil down to Newton’s laws of motion (i.e. how forces acting on air masses change their motion), and other processes that affect the behaviour of the atmosphere, like heating and cooling by radiation and by condensation/evaporation of water. Unlike in the real world, it is possible in the computer to create two identical sets of initial conditions apart from small differences, and then to let the computer calculate the two possible future states.

Sensitivity to initial conditions

So, what do scientists find from these experiments? At first the forecasts are virtually indistinguishable, but at some point they start to show noticeable differences. These appear typically on small scales and then start to affect larger scales (a process known as the inverse energy cascade). Lorenz discovered this serendipitously in the 1950s when he ran simplified weather simulations on a research computer (a valve/diode-based Royal McBee LGP-30 with the equivalent of 16 kilobytes of memory). He found that if he stopped the simulation, and restarted it with similar, but rounded, sets of numbers representing the weather, the computer simulated weather patterns that became very different from those that would have been forecast had he not stopped and restarted the simulation. Lorenz had discovered sensitive dependence on initial conditions (or colloquially, the “butterfly effect”, in connection with the title of his presentation). Faced with two such different outcomes, which one, if any, is the better forecast? Hmm …

Figure 1:

Numerical solutions of two x, y, z trajectories obeying the (non-linear) Lorenz-63 equations to demonstrate sensitive dependence on initial conditions (red and yellow lines/points). At t = 0 the initial conditions are indistinguishably close and at t = 3 the two trajectories virtually overlap. At t = 6 small differences appear, which become more obvious at t = 9. By t = 12 and t = 15 the two trajectories are so different that they occupy separate branches. The beauty of the structure that emerges by solving the Lorenz-63 equations is quite amazing. For the record, the Lorenz-63 equations are: dx/dt = σ(y − x), dy/dt = −xz + rx − y, and dz/dt = xy − bz, with σ = 10, r = 28, and b = 8/3. The multiplication of one of x, y, z with another such variable gives these equations their non-linear property.

Try this at home

This effect is also seen in simple non-linear equations. In 1963, Lorenz published a seminal work, “Deterministic nonperiodic flow”, in which he introduced a set of equations that describe how three variables, x, y, and z, change in time. These equations may be regarded as representing a highly simplified version of the atmosphere. It is only possible to solve them approximately, with the help of a computer (note to reader – try this, it’s fun!). One can visualise the solution by taking the x, y, and z values at a given time as the co-ordinates of a point in space. Joining the points up in time shows the forecasts as trajectories, and one may think of different positions as representing different kinds of weather. Figure 1 shows two such trajectories (red and yellow), whose initial conditions are nearly identical at time t = 0. As time progresses, they diverge, slowly at first, until by t = 12 they represent completely different states (note the resemblance to a butterfly).
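For readers who want to take up the invitation, here is a minimal sketch in Python that integrates the Lorenz-63 equations with a standard fourth-order Runge-Kutta scheme and follows two almost-identical initial conditions (the 10⁻⁸ perturbation and the starting point are arbitrary illustrative choices):

```python
import numpy as np

def lorenz63(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """The Lorenz-63 right-hand side: dx/dt, dy/dt, dz/dt."""
    x, y, z = state
    return np.array([sigma * (y - x), -x * z + r * x - y, x * y - b * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 3000  # integrate to t = 30
traj_a = np.array([1.0, 1.0, 1.0])
traj_b = traj_a + np.array([1e-8, 0.0, 0.0])  # indistinguishably close start

separations = []
for _ in range(steps):
    traj_a = rk4_step(lorenz63, traj_a, dt)
    traj_b = rk4_step(lorenz63, traj_b, dt)
    separations.append(np.linalg.norm(traj_a - traj_b))
# Early on the two trajectories stay extremely close; later they occupy
# different branches of the attractor and their separation grows to the
# size of the attractor itself.
```

Plotting the two trajectories in (x, y, z) space reproduces the butterfly-shaped structure of Figure 1, and printing the separations shows the slow-then-explosive divergence described above.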

Ensemble weather prediction

Scientists routinely run large models from many initial conditions, each subject to a slight variation – a technique called ensemble forecasting. The initial conditions differ by amounts comparable to the uncertainty with which the current weather is known from observations and previous forecasts. These are combined in a physically consistent way using data assimilation (which is my area of research). As a rule of thumb, differences emerge first in the small-scale weather patterns. Indeed, if the forecast grid is small enough to resolve cloud systems, then the ensemble members will likely first disagree in the forecast of convective events, like showers and thunderstorms. This is why patterns of convective precipitation are so hard to predict beyond a few hours. One forecast may predict heavy rain at a particular location between 4.00 and 4.10pm, another between 4.30 and 4.35pm, and another may predict no heavy rain. Ensemble forecasting allows forecasters to understand the range of likely outcomes (usually all ensemble members will predict heavy showers, but with slightly different locations), and to give probabilistic forecasts for individual locations. While small-scale features will differ, large-scale weather patterns (such as high and low pressure systems) are usually predicted accurately at these early stages. As forecast time progresses, the uncertainty spreads to larger scales and eventually the forecast of the large-scale systems becomes unpredictable.

Fundamental limits

As a rule of thumb, km-scale motion is predictable to no more than about half an hour, 10 km-scales to about one hour, 100 km-scales to about 10 hours, 1000 km-scales to about one day, 10,000 km-scales to about four or five days, and the largest scales to no more than about a week or two. In extra-tropical regions, for example, a particular kind of atmospheric instability (baroclinic instability) acting between scales of around one to three thousand km can lower predictability on those scales, although observing the weather at these scales is given special attention so that the uncertainty there is reduced in the initial conditions.

[We should note that climate models make projections many years, decades, or centuries into the future and use the same building blocks as weather models. Climate models though predict different things: long-time averaged conditions rather than the weather at particular times, which is thought to be very useful as long as realistic forcings (e.g. the radiative forcing associated with changes in greenhouse gas concentrations in the atmosphere) are known.]

Room for improvement?

So what hope is there of improving weather prediction given these fundamental limits? There are other factors that can be improved. The spread in the ensemble’s initial conditions can be reduced with more observations and better assimilation. Model error can also be reduced. No model is perfect, but there is room for improvement by decreasing the grid size and time step (severely restricted by cost and available computer power), and by improving the representation of physical processes (restricted by both computing power and research activity). While scientific and technological barriers can be broken, the fundamental limits of nature cannot. As the air motion of the butterfly’s flap mixes with all the other fluctuations, it is impossible to say exactly how it will change the course of the atmosphere, just that it will.

References:

Lorenz E.N., The Essence of Chaos, UCL Press Ltd., London (1993), ISBN-13: 978-0295975146. A readable and thought-provoking popular account of chaos theory.

Lorenz E.N., Deterministic nonperiodic flow, Journal of the Atmospheric Sciences 20 (1963), 130–141, DOI:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2. An exploration of the derivation and interpretation of the Lorenz-63 equations.

Lorenz E.N., The predictability of a flow which possesses many scales of motion, Tellus 21 (1969), 289–307, DOI:10.1111/j.2153-3490.1969.tb00444.x. This paper explores different kinds of predictability and how predictability depends on scale.

Tribbia J.J. and Baumhefner D.P., Scale interactions and atmospheric predictability: An updated perspective, Monthly Weather Review 132 (2004), 703–713, DOI:10.1175/1520-0493(2004)132<0703:SIAAPA>2.0.CO;2. An update on earlier work of Lorenz with more modern weather prediction models.

Palmer T.N., Döring A., and Seregin G., The real butterfly effect, Nonlinearity 27 (2014), R123–R141, DOI:10.1088/0951-7715/27/9/R123. A discussion of how the term “butterfly effect” properly refers to a finite-time limit to predictability in fluids with many scales of motion.

Data Assimilation Research Centre, What is data assimilation?, research.reading.ac.uk/met-darc/aboutus/what-is-data-assimilation. A brief introduction to data assimilation.

Met Office, The Met Office ensemble system, www.metoffice.gov.uk/research/weather/ensemble-forecasting/mogreps. An introduction to the Met Office’s ensemble prediction system.

University of Hamburg, Forecast diagrams for Europe, visibility.cen.uni-hamburg.de/meteograme.html. Choose a European city for ensemble forecasts of temperature and precipitation. A graphic illustration of the growth of uncertainty with forecast time from weather forecast models.

Posted in Climate, Climate modelling, data assimilation, Numerical modelling, Predictability

High-resolution insights into future European winters

By: Alexander Baker

Figure 1: Observed UK rainfall anomaly as a percentage of 1981-2010 monthly average for (a) December 2013, (b) January 2014, and (c) February 2014. Figure from Huntingford et al. (2014).

Most – roughly 70% – of Europe’s winter rainfall is brought by extratropical storms, which are steered our way by the westerly North Atlantic jet stream. The wettest winters, such as 2013/14 (Figure 1), are often when Europe is at the receiving end of a veritable convoy of storm systems, the human and economic impacts of which are felt far and wide.

An analogy: the sharpness of a digital photograph depends on how many pixels comprise it – in other words, on the camera’s resolution. The more pixels, the higher the resolution, the sharper the image. Climate models break down Earth’s atmosphere into three-dimensional pixels called grid cells. With a relatively low number of large grid cells (each typically 100-200 km wide), simulated weather systems not only appear pixelated when visualised, but are actually not all that realistic; their flows of air, heat and moisture don’t properly resemble those in the real world. This limits our confidence in using these models to make predictions. However, developments in high-performance computing have enabled the size of a climate model’s grid cells to be shrunk (to roughly 25 km in our case), thereby increasing their number and enabling the simulation of air flows over mountainous terrain, weather processes, and other aspects of atmospheric variability in more detail.

In a recent paper published in Journal of Climate, we address two questions. Does increasing a model’s atmospheric resolution improve the fidelity of simulated European winter hydroclimate? How do high-resolution future projections differ, if at all, from those at low resolution? We compared simulations with low- and high-resolution versions of the same climate model – the Met Office’s Hadley Centre Global Environmental Model (version 3) Global Atmosphere 3.0 (hereafter HadGEM3-GA3; Walters et al. 2011) – to establish exactly what the impact of resolution is on the North Atlantic jet and on downstream storm activity and precipitation. At the lowest resolution (‘N96’), the latitude-longitude grid is made up of grid cells each 135 km wide. At mid- (‘N216’) and high resolution (‘N512’), grid cells are 60 and 25 km wide, respectively. We’ll focus here on how the North Atlantic jet behaves at different model resolutions.

Figure 2: Frequency of North Atlantic eddy-driven jet latitude in reanalyses and HadGEM3-GA3 under historical climate (upper panel) and the projected future change (lower panel). We use ‘N’ notation to describe resolutions: ‘N96’ (135 km), ‘N216’ (60 km) and ‘N512’ (25 km). Figure adapted from Baker et al. (2019).

What about future projections? Under climate change (RCP 8.5), at all model resolutions, southern jet occurrences decrease but northern jet occurrences increase (Figure 2, lower panel). The upshot is fewer storms making landfall over southern Europe and more across northern Europe towards the end of the twenty-first century. These climate change consequences are significantly enhanced by increased resolution. Crucially, this reveals the extent to which lower-resolution models may have previously underestimated aspects of the jet’s response to climate change, and thereby changes in winter storms and precipitation and their associated hazards. There is much more work to do: further studies investigating other models and climate change scenarios are needed, but our study offers insight into how high resolution might bring the picture of Europe’s future winters into sharper focus.

References

Baker, A. J. et al., 2019: Enhanced climate change response of wintertime North Atlantic circulation, cyclonic activity and precipitation in a 25 km-resolution global atmospheric model. Journal of Climate. https://doi.org/10.1175/JCLI-D-19-0054.1

Huntingford, C. et al., 2014: Potential influences on the United Kingdom’s floods of winter 2013/14. Nature Climate Change 4, 769. 10.1038/nclimate2314

Walters, D. N. et al., 2011: The Met Office Unified Model Global Atmosphere 3.0/3.1 and JULES Global Land 3.0/3.1 configurations. Geoscientific Model Development 4, 919-941. 10.5194/gmd-4-919-2011 

Posted in Climate change, Climate modelling, extratropical cyclones, Numerical modelling

Turbulence Matters

By: Torsten Auerswald

Most people are only consciously aware of the existence of turbulence when the pilot announces it. But apart from the discomfort of a bumpy flight, turbulence affects many other important aspects of daily life. The fact that turbulent mixing is much more efficient than molecular diffusion not only comes in handy when stirring the tea after adding sugar and milk (whether it is better to put the milk or the tea into the cup first can be discussed in another blog entry), but also matters for important problems in medicine, engineering, biology and physics, to name just a few. Turbulence is responsible for the noise created at the blades of wind turbines, and knowledge about it can help engineers design quieter blades. It also affects the delivery of drugs to the lung, and therapeutic aerosols can be designed to optimise their effect in the body by modifying their aerodynamic properties. Turbulent effects are important for designing efficient filters for power plants and for improving the efficiency of fuel engines.

For our weather, turbulence plays a critical role. For example, it controls the exchange of heat and moisture between the soil and the atmosphere, and is one of the factors influencing the development and characteristics of clouds. This is why it is so important for weather forecast models to describe turbulent processes accurately.

Unfortunately, the importance of turbulence is directly proportional to the difficulty of studying its properties. The underlying set of equations describing all fluid flows is the Navier-Stokes equations. This set of equations is extremely difficult (and most of the time impossible) to solve analytically. This is why, for most real-world applications, computer models are used which can find numerical solutions of the Navier-Stokes equations with good precision. For turbulent flows especially, these computer models are computationally very expensive, and the direct numerical simulation of turbulent flows remains restricted to relatively simple cases. In most applications, a certain level of approximation for the smaller scales, or even for the whole turbulent part of the flow, is necessary to be able to simulate turbulent flows – despite the fact that computer performance has increased rapidly over recent decades.

When the first numerical weather forecast was computed by hand by Lewis F. Richardson in 1922, it took him 6 weeks to calculate a 6-hour forecast for Europe. This forecast, apart from coming approximately 5.96 weeks too late, was also wrong. Nevertheless, his work was revolutionary and kicked off the era of numerical weather forecasts. In the publication of his results he estimated that it would take 64000 human computers (people who solve the numerical equations by hand) to simulate the global weather in real-time (meaning it would take one hour to compute a one hour forecast). He envisioned a weather centre in which hundreds of human computers would work together solving the equations for their respective parts of the forecast domain, and coordinators would make sure that everybody stays in sync, collect the results for each time step from the human computers and unify them to one big data set. He basically described the parallelisation of numerical flow models, including the communication between the “nodes”, which decades later would be used to compute global weather forecasts in only a few hours on modern supercomputers.

One of the first supercomputers was the CDC 6600. In 1964 it was considered the most powerful computer in the world and could compute an astonishing three million floating point operations per second (3 megaflops). Today’s fastest supercomputer is the IBM Summit, which is able to perform 122×10¹⁵ floating point operations per second (122 petaflops), 40 billion times more than the CDC 6600. Despite this impressive increase in computational power, current supercomputers are still not fast enough to allow the direct simulation of turbulent flows for most real-life applications, leaving plenty of interesting research topics for current and future scientists to investigate.

Reference:
Richardson, L. F., 1922: Weather prediction by numerical process. Cambridge university press, 236 pp.

 

Posted in Numerical modelling, Turbulence

The consequences of climate change: how bad could it get?

By: Nigel Arnell

The United Nations Climate Action Summit held in New York on 23rd September was meant to be the occasion where countries and industry organisations made stronger commitments to reduce the emissions of the greenhouse gases that are causing global warming. Whilst 65 countries pledged to achieve net-zero emissions by 2050, the overall commitment – according to some commentators – fell “woefully short” of expectations. Most news headlines after the event focused on Greta Thunberg’s speech where she passionately challenged world leaders to do more: “the eyes of all future generations are upon you”.  Meanwhile, campaign groups such as Extinction Rebellion warn of ‘unprecedented global emergency’, ‘climate breakdown’ and ‘mass extinction’. The best-selling author David Wallace-Wells writes of ‘The Uninhabitable Earth’.

So what are the consequences of climate change, and how bad could it get? The impacts of climate change in the future depend not only on how climate – temperature, precipitation and so on – changes, but also on how societies and economies change. Our estimates of changes in climate depend on two things. First is a projection of how emissions will change. This depends on how economies and energy use change, and we cannot predict this: it will depend on policy choices, such as the actions to reduce emissions announced in New York last week. We, therefore, use ‘scenarios’ to describe plausible changes in emissions, but these should not be seen as predictions. The second part is where climate science fits in. Projections of the effect of an emissions scenario on changes in local weather are made using climate models. Whilst all models produce broadly similar changes in climate (temperatures increase in high latitudes more than in tropical regions, wet areas tend to get wetter and dry areas tend to get drier), the detailed projections can vary considerably between climate models. We can estimate the consequences of these changes in local weather for local climate hazards and resources – such as floods and droughts – using separate ‘impacts’ models.

Figure 1: Change in global heatwave, drought and flood hazard through the 21st century under three plausible emissions scenarios, and numbers of people affected in 2100. The five estimates of people affected for each emissions scenario are from five different socio-economic scenarios. See Arnell et al. (2019) for specific definitions of the indicators.

In practice, policymakers and others want to know the human impacts of climate change. We can estimate the direct impacts – such as the number of people affected by flooding, droughts or heatwaves – by combining our estimates of the physical changes in climate with socio-economic scenarios describing plausible changes in population and the economy. Figure 1 shows the global direct impacts of climate change on exposure to heatwaves, river flooding and drought (from Arnell et al., 2019). The left panels show changes in the physical hazard through the 21st century, under different emissions. The range in estimates for each emissions scenario shows the uncertainty due to different model projections of local temperature and precipitation. By 2100, the chance of experiencing a major heatwave rises from 8% to 100%, the average proportion of time spent in drought goes up from 6% to 30%, and the chance of a flood increases from 2% to 7% (in some regions the changes are greater). The right-hand panels show the human impacts in 2100, under three of the emissions scenarios and for five socio-economic scenarios. The biggest impacts – in terms of numbers of people affected – obviously occur with the highest emissions, but the differences between the socio-economic scenarios can be very large. When presenting such results we often focus on the central estimates (the solid lines in the figures), but we could instead look at the ‘worst-case’ impacts at the top end of the distribution. These can be a lot higher than the central estimates.

These direct impacts have knock-on consequences: changes in the frequency of droughts, for example, could plausibly lead to loss of livelihoods, food insecurity, political instability and displacement of people. It is these potential ‘systemic’ risks that are of greatest significance for policymakers and indeed are behind many of the most extreme warnings about climate emergencies and crises. However, these knock-on consequences depend not only on the physical changes in climate and future socio-economic scenarios that we can model but also – and largely – on how societies and governments react and behave. In order to estimate the likelihood of the real ‘worst-case scenarios’ that are ringing the loudest alarm bells we, therefore, need to link the work of climate scientists, impact modellers and experts in institutions, governance and human behaviour.

References:

Arnell, N.W. et al. (2019) The global and regional impacts of climate change under Representative Concentration Pathway forcings and Shared Socioeconomic Pathway socioeconomic scenarios. Environmental Research Letters 14 084046.  doi:10.1088/1748-9326/ab35a6

Wallace-Wells, D. (2019) The Uninhabitable Earth: a story of the future. Penguin: London

Posted in Climate, Climate change, Climate modelling

Probing the atmosphere with sound waves

By: Javier Amezcua

Summer is a quiet time for both the University of Reading and the town itself. The buzzing that fills campus during term time is gone, the population decreases and activities are reduced. Some people find it relaxing – I find it boring and lethargic. There is an exception to this quietness during the August Bank Holiday weekend. Any music aficionado knows this is when the annual Reading and Leeds Festival takes place. Thousands of people from all around the UK descend on our town to inhabit a patch of land next to the Thames for three days and enjoy some of their favourite bands, as well as some other excesses…

During my seven years in Reading, I have indulged myself by attending the festival four times. On each of these occasions, I have had a similar conversation with my colleagues when we were back at work the day after the Bank Holiday Monday. First, they hypothesize that I was the oldest person in the whole festival (not true, but it is accurate to say that I am too old to camp, so instead I go home each day). Second, they state that at night they could hear the music all the way to their houses. I found the latter comment interesting given what I had experienced. Let me explain: there have been days on which I was not interested in the headliner act and hence left the festival while the music was still playing. On each occasion, I remember walking away from the festival and noticing the music fading progressively until it was gone. So how could other people hear it at their homes when these were further away from the stage? The answer, not obvious to me at first, is that there were some sound waves that were ‘jumping’ over me.

Figure 1: Simple representation of the reflection of a vertically propagating wave in the atmosphere. The sound wave (yellow line) departs from a source (S) at the surface, reaches a maximum height (Zmax) and it is reflected back towards the surface where it can be detected in a receiver (R). The cross-winds that the wave-front encounters make it seem to come from a false source in a slightly different direction than the real source.

To understand my explanation, we need to think about the way sound travels from the loudspeakers around and above the stage. Without going into the specifics of the type of loudspeaker (there are lots), the sound waves can be transmitted in both the horizontal and vertical directions (you can see a classical illustration of how spherical sound waves work in the YouTube link in the references). At the end of the night, as I walked away from the source, the sound waves coming horizontally in my direction were attenuated by the medium (air) and obstacles, and hence I stopped hearing the music. What about those waves with a vertical component? Figure 1 answers this: the yellow line represents the path of these waves. They travel up to some maximum height until they are reflected back towards the surface. The attenuation over that path is different from the attenuation of the horizontally propagating wave (in the case I am discussing it is less). So, the people in town were receiving a sound wave that had been reflected at an upper level and still had enough intensity. It also helped that it was night-time and not day-time (at night the conditions are more conducive to reflection/refraction, but that is another issue).

Figure 2: Vertical sensitivities for the infrasound waves generated by the detonation of old ammunition in Finland and detected in Norway. The horizontal axis is the time; we indicate the year but not the exact time of detonation. The vertical axis is height. Notice that most infrasound waves reach about 40 km in height.

Why am I telling this story? Because lately I have been using the behaviour I just described as a tool to probe the winds in the stratosphere (roughly between 12 and 50 km in the mid-latitudes). Finland has a lot of old ammunition, mainly from the Cold War, that it is trying to get rid of. Therefore, every summer the Finnish army performs a series of controlled detonations at a remote location over the course of several days. These explosions produce infrasound waves (waves with frequencies below 20 Hz, which cannot be detected by the human ear). Some of them follow a path with a vertical component: they reach a maximum level and are reflected back to the surface, where they are detected, about 10 minutes after the explosion, at a station in Norway. This station is about 178 km due north of the explosion site and has quite powerful micro-barometers which are able to measure precisely the pressure variations caused by the infrasound waves. I have some enthusiastic colleagues at the Norwegian Seismic Array (NORSAR) who have shared these observations with me. Figure 2 shows a model-based reconstruction of the maximum height the waves reach for explosion events from 2001 to 2018. There are different numbers of detonations per year, which is why the horizontal axis looks irregular. Notice that most of the waves reach about 40 km in height, and some up to 60 km.
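As a rough sanity check of that ten-minute travel time, one can approximate the reflected ray as two straight legs up to the reflection height. The effective sound speed used here is an assumed round number for illustration, not a value from the study:

```python
import math

# Back-of-the-envelope check of the ~10-minute travel time: treat the
# ray as two straight legs, surface to reflection height and back down.
# The effective sound speed is an assumed round number.
distance = 178e3       # source-receiver distance, m
height = 40e3          # typical reflection height (from Figure 2), m
sound_speed = 300.0    # m/s, rough effective value along the path

leg = math.hypot(distance / 2, height)   # one straight leg of the path
travel_time = 2 * leg / sound_speed
print(f"travel time ~ {travel_time/60:.1f} minutes")   # roughly 10-11 minutes
```

The straight-leg geometry is cruder than a real ray-traced path, but it lands comfortably in the right ballpark.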

So how do I probe the atmosphere with these waves? As the waves travel through the atmosphere they are affected by several atmospheric conditions: winds, humidity, etc. In particular, the presence of cross-winds (i.e. winds perpendicular to the direction of the wavefront) can shift the detection angle of the waves when they reach the ground. Hence the waves appear to have come from a false source in the direction of the blue line in Figure 1. Since I know the exact location of the source, the time it took for the waves to be detected, and the shift angle towards the apparent source, I can deduce some values for the cross-winds each infrasound wave encountered, including those in upper levels of the atmosphere. In order to solve this estimation problem, I use techniques from inverse problems and data assimilation which I will not discuss in this post; I only mention that I use an implementation of the ensemble Kalman filter.
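To give a flavour of that estimation step, here is a toy stochastic ensemble Kalman filter update for a single unknown cross-wind, with a deliberately simplified geometric forward model (lateral drift over the propagation path shifts the apparent bearing). The forward model and all numbers are illustrative assumptions, not the configuration used in the actual study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical geometry loosely echoing the Finland-Norway setup in the
# text: range ~178 km, travel time ~600 s. Illustrative only.
L = 178e3      # source-receiver distance (m)
t = 600.0      # travel time (s)

def bearing_shift(crosswind):
    """Apparent bearing deviation (radians) from lateral advection."""
    return np.arctan2(crosswind * t, L)

true_wind = 20.0                       # m/s, the unknown "truth"
obs_err = 0.002                        # radians, observation error std
y_obs = bearing_shift(true_wind) + rng.normal(0.0, obs_err)

# Prior ensemble of cross-wind values
N = 500
ens = rng.normal(0.0, 15.0, N)         # prior belief: 0 +/- 15 m/s
hx = bearing_shift(ens)                # map ensemble into observation space

# Kalman gain from ensemble statistics
cov_xy = np.cov(ens, hx)[0, 1]
var_y = np.var(hx, ddof=1) + obs_err**2
gain = cov_xy / var_y

# Stochastic (perturbed-observation) EnKF update
perturbed = y_obs + rng.normal(0.0, obs_err, N)
ens_post = ens + gain * (perturbed - hx)

print(f"prior mean     {ens.mean():6.1f} m/s")
print(f"posterior mean {ens_post.mean():6.1f} m/s (truth {true_wind})")
```

The posterior ensemble collapses towards the true cross-wind with a much reduced spread; the real problem does the analogous update for a whole profile of winds, with a far more sophisticated forward model.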

It is quite difficult to measure winds in the stratosphere, hence any source of information is valuable, including the strategy discussed here. There are other sources of infrasound waves that we can exploit around the world, and some of them are natural! For instance, the ocean swell in a spot near Iceland is a natural producer of infrasound waves. So, we could use this hot-spot to probe the winds between that location and the same receiver in Norway, or many other receiving stations. At the moment I am participating in a collaborative project with people from different institutions in Europe to solve this problem.

Reference:

Amezcua J., S. P. Naeholm, E. M. Blixt, and A. J. Charlton-Perez, 2019: Assimilation of atmospheric infrasound data to constrain tropospheric and stratospheric winds. QJRMS, submitted.

Extract of Sound Waves and Their Sources (1933): an animation giving a classical illustration of how spherical sound waves work.

Posted in Climate, data assimilation, Stratosphere, Wind

Coffee and atmospheric physics

by: Prof Maarten Ambaum

Every morning I trundle down to the office kitchen and I make myself a whole thermos flask of coffee which keeps me going for the rest of the day. In fact, most people in our Department have a similar daily ritual. During coffee breaks, science is discussed as well as more mundane things (a lot of politics, these days). Coffee is the fuel of science!

There are deeper links between science and coffee as well: recently our hot water boiler in the office kitchen was replaced by a fancy new hot water boiler. This new boiler has a so-called “eco-mode” which claims to save energy, essentially by using the boiler at half capacity. This claim could not go untested; we are a science department after all! Some basic thermodynamics (the science of heat and energy) and some experiments showed that the eco-mode is nothing of the sort: it does not save energy, and we haven’t used the eco-mode since. A blog post with the fun details can be found in the references (Ambaum and Prosser, 2019).

In fact, this autumn I will again be teaching our new cohort of master’s students the ins and outs of atmospheric thermodynamics. It is a profoundly interesting part of physics and it lies at the foundation of our understanding of the climate and weather. And of our understanding of hot water boilers, of course.

A good understanding of fundamental physics is crucial in our field of science. For example, most climate sceptics use arguments that fall over at the level of fundamental physical understanding.

Many people still cannot accept the idea that adding carbon dioxide to the atmosphere could ever heat up the atmosphere in any substantial way. This kind of argument can be debunked comprehensively by basic thermodynamics. The key is that adding carbon dioxide to the atmosphere is similar to putting a thicker duvet on your bed: a thicker duvet will make you feel warmer, not because you produce more heat or because the duvet somehow warms you. Rather, the heat energy you produce has a harder job escaping to the environment through a thicker duvet, and it can only do so by raising the temperature in your bed until the same amount of heat escapes through the thicker duvet.

The same is true for the earth’s climate: the atmosphere acts as a blanket on the earth’s surface. The earth’s surface is heated directly by the sun (which remains broadly constant in its energy output), so if the atmospheric blanket gets thicker (by adding carbon-dioxide), the earth’s surface needs to get warmer for the heat to escape at the same rate.
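The blanket argument can be made quantitative with a textbook single-layer greenhouse model, in which an atmospheric layer absorbs a fraction `eps` of the surface's infrared emission and re-emits half of it back down; a "thicker blanket" corresponds to a larger `eps`. The emissivity values below are illustrative, not calibrated:

```python
# A minimal single-layer "blanket" model of the greenhouse effect,
# for illustration only. eps is the infrared emissivity of the
# atmospheric layer; the specific eps values below are illustrative.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo

def surface_temperature(eps):
    """Equilibrium surface temperature of a one-layer greenhouse model."""
    absorbed = S0 * (1 - ALBEDO) / 4          # globally averaged absorbed sunlight
    # Energy balance: the surface emits SIGMA*Ts^4, the layer returns
    # eps/2 of it, so SIGMA * Ts^4 * (1 - eps/2) = absorbed
    return (absorbed / (SIGMA * (1 - eps / 2))) ** 0.25

for eps in (0.0, 0.77, 0.80):
    print(f"eps = {eps:.2f} -> Ts = {surface_temperature(eps):.1f} K")
```

With no blanket at all the surface sits at about 255 K; thickening the blanket slightly (0.77 to 0.80) warms the surface by a couple of degrees, exactly the duvet logic above.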

There are many fascinating additional details to this picture, way too many to address here. Many of those I will be teaching to our new group of students (for example, how and why does carbon dioxide change the effective thickness of the atmospheric blanket), and many are also still actively researched in our Department (for example how changing cloud properties might change the effective thickness of the atmospheric blanket, but also how they might change the amount of energy from the sun reaching the earth’s surface). But the underlying foundations are rock-solid physics.

Here’s a brainteaser to keep you busy: for my coffee to stay hot for longer, should I pour it into a bigger or a smaller mug?

References:

Ambaum, M. H. P., 2010: Thermal Physics of the Atmosphere, J. Wiley & Sons, Chichester, 256pp.

Ambaum, M. H. P., and M. Prosser, 2019: Is our “ECO mode” hot water boiler eco-friendly?

Posted in Climate

Do we have an appropriate description of energetic particles in the Earth’s outer radiation belt?

By: Oliver Allanson

Figure 1: A particle undergoes Brownian motion.

The short answer: probably not, at least not all of the time.

In our state-of-the-art and physics-based numerical experiments, we analyse the motion of 100 million individual high-energy electrons that evolve within conditions like those found within the Earth’s hazardous ‘radiation belt’ environment. We observe that electrons do not always behave in the manner most typically assumed by scientists to describe their evolution. The standard mathematical description is based upon diffusion proceeding in a manner analogous to ‘Brownian motion’, e.g. the familiar high-school experiment showing the random motion of particles suspended within a fluid. The random motion of an individual particle undergoing Brownian motion is illustrated in Figure 1 [1]. In contrast, we observe that the electrons sometimes spread apart at rates that either ‘accelerate’ or ‘decelerate’ in time. This could have implications for the modelling of high-energy electrons in our magnetosphere, and hence for satellite safety.
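The Brownian benchmark is easy to reproduce: for an ensemble of random walkers, the mean-squared displacement (MSD) grows linearly with time, i.e. with exponent 1 on a log-log plot, whereas super- or sub-diffusion would give an exponent above or below 1. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Brownian motion in one dimension: many independent random walkers.
# The hallmark is that the mean-squared displacement (MSD) grows
# linearly with time; super-/sub-diffusion would bend this line.
n_walkers, n_steps = 5000, 200
steps = rng.normal(0.0, 1.0, (n_walkers, n_steps))
paths = np.cumsum(steps, axis=1)         # each row is one walker's trajectory

msd = np.mean(paths**2, axis=0)          # ensemble MSD at each time step
times = np.arange(1, n_steps + 1)

# Fit MSD ~ t^alpha; Brownian motion gives alpha close to 1
alpha = np.polyfit(np.log(times), np.log(msd), 1)[0]
print(f"fitted exponent alpha = {alpha:.2f}")   # ~1.0 for normal diffusion
```

The experiments described here effectively perform the same fit on simulated radiation-belt electrons, and find phases where the exponent departs from 1.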

Figure 2: The Earth’s Radiation Belts.

Figure 3: Not all diffusion is Brownian! The ‘mean-squared-displacement’ can evolve at rates that either increase (‘super-diffusion’) or decrease (‘sub-diffusion’) with time.

The Earth’s outer radiation belt

The Earth’s outer radiation belt is a dynamic and spatially extended radiation environment within the Earth’s inner magnetosphere, composed of energetic plasma that is trapped by the geomagnetic field (see Figure 2 [2]). The size and location of the outer radiation belt vary dramatically in response to solar wind variability. The lifetime of some individual energetic particles can be long (~years). However, orders-of-magnitude changes in the particle flux can occur on much shorter timescales (~hours). Whilst we know that the radiation belt environment is ultimately driven by the solar wind and the pre-existing state of the magnetosphere, it is very challenging to accurately predict, or model, fluxes within the radiation belt. This difficulty arises from the fact that the magnetosphere can store and transport energy in many different ways, and over a range of different time and length scales. This difficulty in prediction is a pressing concern given the hundreds of satellites that orbit within this hazardous environment. The highly variable and energetic electron environment poses critical space weather hazards for Low, Medium, and Geosynchronous Earth Orbiting (LEO, MEO, and GEO) spacecraft; thus, the ability to predict its variability is a key goal of the magnetospheric space weather community.

Most physics-based computer models of particle dynamics in the radiation belts rely upon a specific version of ‘quasilinear theory’. This approach is founded upon a number of physical assumptions that are now known not to always hold in the radiation belt. Furthermore, the mathematics used to describe this quasilinear theory is based upon ‘normal diffusion’ equations, i.e. equations that (in a given space) describe ‘stochastic’ Brownian motion. This stochastic assumption is also considered uncertain in some circumstances. Our work tries to test these assumptions by processing data from state-of-the-art and fully self-consistent numerical experiments.
Electron diffusion characteristics are directly extracted from particle data. The ‘nature’ of the diffusive response is not always constant in time, i.e. we observe a time-dependent ‘rate of diffusion’ that is inconsistent with Brownian motion (see Figure 3 [3]). However, after an initial transient phase, the rate of diffusion does tend to a constant, in a manner that is consistent with the assumptions of quasilinear diffusion theory. This work establishes a framework for future investigations of the nature of diffusion in the Earth’s outer radiation belts, using physics-based numerical experiments.

How much, and when, does this matter?

All of the work described here pertains to a ‘benchmarking’ scenario in which we prove the concept of our experimental technique, and under which conditions one is least likely to observe particularly exotic behaviour [4]. In future experiments we will: (i) make more quantitative assessments; (ii) subject the plasma to more extreme conditions (we therefore expect to find a more sustained ‘non-Brownian’ response); (iii) assess the implications on current models.

[1] A particle undergoes Brownian motion.

Reproduced from https://commons.wikimedia.org/wiki/File:Csm_Brownian-Motion_f99de6516a.png.

[2] The Earth’s Radiation Belts.

Reproduced from https://www.nasa.gov/mission_pages/sunearth/news/gallery/20130228-radiationbelts.html.

[3] Not all diffusion is Brownian! The ‘mean-squared-displacement’ can evolve at rates that either increase (‘super-diffusion’) or decrease (‘sub-diffusion’) with time.

Reproduced from https://commons.wikimedia.org/wiki/File:Msd_anomalous_diffusion.svg.

[4] O. Allanson, C. E. J. Watt, H. Ratcliffe, N. P. Meredith, H. J. Allison, S. N. Bentley, T. Bloch and S. A. Glauert, Particle-in-cell experiments examine electron diffusion by whistler-mode waves: 1. Benchmarking with a cold plasma, Journal of Geophysical Research: Space Physics (in press).

Posted in Space, space weather

Climate change is spinning up the global energy and water cycles.

By: Richard Allan

I was unfortunate enough to mildly injure my middle finger by typing too frenetically on a train journey from Toulouse, returning from an Intergovernmental Panel on Climate Change meeting. I soon forgot about this by luckily stepping on a rusty nail the next day while demolishing a shed and, following a tetanus booster, I am back to assessing research and preparing text outlining our knowledge of how the water cycle is expected to evolve as the planet continues to heat up from the emissions of greenhouse gases.

Climate change will impact people and the ecosystems upon which we all depend through aspects of the water cycle. The physics of the atmosphere, oceans and land surface tells us that climate change will alter, and in many cases intensify, events that cause there to be too little usable water to meet our needs or too much water at once as deluges overwhelm drainage capacity. Thousands of person-years of work cram state-of-the-art scientific knowledge into the millions of lines of computer code required to make realistic simulations of our climate. These are combined with observations of the real world and physical interpretation to assess the range of future possibilities, so that policy makers can plan effectively.

No one is killed by global average temperature, yet understanding and monitoring how the Earth’s energy and water cycles are currently evolving is a challenge for our observing systems and a test of our basic understanding of the climate system. At the risk of further injuring my finger, I’ll get straight to a simple depiction of how our global climate is evolving in the diagram below. This shows departures from the usual monthly values in global average surface temperature, atmospheric moisture, precipitation and the energy accumulation driving climate change. These are based on surface measurements and satellite observations, where gaps in coverage are filled with a meld of observations and simulations called “reanalyses”. The grey shading shows results from “CMIP6”, the latest generation of climate simulations, here run in atmosphere-only “AMIP” mode (fed with the observed sea surface temperature and sea ice as well as realistic changes in radiative forcing agents that are perturbing our climate) so that they are directly comparable to the observations.

Figure 1: Simulations and observations of global average temperature, moisture, precipitation and heating balance between absorbed sunlight and emission of infrared radiative energy to space (extended from Allan et al. 2014a,b).

The ocean temperature has been increasing by around 0.2°C every decade, primarily due to rising atmospheric carbon dioxide concentrations. This trend is punctuated by natural climate fluctuations. For example, the 1991 eruption of Mt Pinatubo in the Philippines cooled the global climate for a few years as ejected particles reflected sunlight back to space (seen in the dip in Earth’s heating rate), while slow, random sloshing about of the ocean briefly warms the climate in El Niño events (as marked on the diagram in 1998 and 2016). The temporary warmth is eventually lost to space, as seen in the dip in Earth’s heating rate as El Niño takes hold.

As the planet has warmed, both satellite estimates and surface observations show that moisture in the atmospheric column becomes more plentiful (a 6-7% increase for each °C of global warming). This is expected from basic physics and simulations of the atmosphere reliably recreate the real world. This increases our confidence in the most powerful amplifying effect on climate change, the water vapour feedback in which warmer air with more moisture traps more radiative heat. A greater abundance of moisture also drives an intensification of the water cycle with greater flows of moisture from regions of strong evaporation into storms. This is intensifying rainfall events and the severity of flooding where heavy rainfall occurs. This is also seen in warm El Niño events with a peak in precipitation globally, although the impacts are felt more by the redistribution of rainfall and unusual weather patterns.
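The 6-7% per degree figure follows from the Clausius-Clapeyron relation, under which saturation vapour pressure rises roughly exponentially with temperature. A quick check with standard constants:

```python
import math

# Clausius-Clapeyron: saturation vapour pressure rises roughly
# exponentially with temperature, which is where the ~6-7% per
# degree increase in atmospheric moisture comes from.
L_V = 2.5e6     # latent heat of vaporisation, J/kg
R_V = 461.5     # specific gas constant for water vapour, J/(kg K)

def esat(T):
    """Saturation vapour pressure (Pa), from the integrated
    Clausius-Clapeyron equation with reference 611 Pa at 273.15 K."""
    return 611.0 * math.exp(L_V / R_V * (1.0 / 273.15 - 1.0 / T))

T = 288.0  # roughly the global-mean surface temperature, K
increase = (esat(T + 1.0) / esat(T) - 1.0) * 100.0
print(f"moisture-holding increase per degree: {increase:.1f}%")   # ~6-7%
```

At typical surface temperatures this gives close to 7% per degree, matching the observed and simulated moistening quoted above.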

The global precipitation rate is a slave to Earth’s energy balance rather than moisture, which is why only small changes in global precipitation (a 1 or 2% increase for each °C of warming) are expected in the short term, as seen in the simulations and satellite data. Satellites and ocean measurements monitor Earth’s energy balance, and although this fluctuates from year to year there is a continual accumulation that is heating the planet, equivalent to every person currently alive on Earth each using twenty-two 2-kilowatt electric kettles to boil the ocean (babies would probably need supervision).
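The kettle analogy is easy to check with back-of-the-envelope numbers. The energy imbalance and population figures below are assumed round values for illustration, not necessarily those used for the original estimate:

```python
# A rough check of the kettle analogy. The imbalance and population
# values are assumed round figures, for illustration only.
imbalance = 0.7          # Earth's energy imbalance, W per m^2 (assumed)
earth_area = 5.1e14      # Earth's surface area, m^2
population = 7.7e9       # world population around 2019
kettle_power = 2000.0    # W, a 2-kilowatt kettle

total_heating = imbalance * earth_area          # W accumulating in the climate system
kettles_per_person = total_heating / population / kettle_power
print(f"kettles per person: {kettles_per_person:.0f}")   # ~20-25
```

With these round inputs the answer comes out close to the twenty-two kettles quoted in the text.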

Current indicators of climate change are vital in strengthening understanding of how the climate is changing currently and will change in the future and what is needed to avoid and adapt to associated damaging effects. Earth observation from satellites and other observations are vital in verifying, questioning and improving this understanding. And with that I’m off to the UK’s National Centre for Earth Observation annual conference to learn more!

References:

Allan, R. P., C. Liu, N. G. Loeb, M. D. Palmer, M. Roberts, D. Smith and P.-L. Vidale (2014) Changes in global net radiative imbalance 1985-2012, Geophysical Research Letters, 41, 5588-5597, doi:10.1002/2014GL060962  

Allan, R. P., C. Liu, M. Zahn, D. A. Lavers, E. Koukouvagias and A. Bodas-Salcedo (2014) Physically consistent responses of the global atmospheric hydrological cycle in models and observations, Surveys in Geophysics, 35, 533-552, doi:10.1007/s10712-012-9213-z

Posted in Climate, earth observation, Water cycle