Turbulence Matters

By: Torsten Auerswald

Most people are only consciously aware of the existence of turbulence when the pilot announces it. But apart from the discomfort of a bumpy flight, turbulence affects us in many other important aspects of daily life. The fact that turbulent mixing is much more efficient than molecular diffusion not only comes in handy when stirring the tea after adding sugar and milk (whether it is better to put the milk or the tea into the cup first can be discussed in another blog entry), but also has impacts on important problems in medicine, engineering, biology and physics, to name just a few. Turbulence is responsible for the noise created at the blades of wind turbines, and knowledge about it can help engineers to design quieter blades. It also affects the delivery of drugs to the lung, and therapeutic aerosols can be designed to optimise their effect in the body by modifying their aerodynamic properties. Turbulent effects are important for designing efficient filters for power plants or improving the efficiency of fuel engines. Turbulence also plays a critical role in our weather: for example, it controls the exchange of heat and moisture between the soil and the atmosphere and is one of the factors which influence the development and characteristics of clouds. This is why it is so important for weather forecast models to describe turbulent processes accurately.

Unfortunately, the importance of turbulence is directly proportional to the difficulty of studying its properties. The underlying set of equations which describes all fluid flows is the Navier-Stokes equations. This set of equations is extremely difficult (and most of the time impossible) to solve analytically. This is why, for most real-world applications, computer models are used which find numerical solutions of the Navier-Stokes equations with good precision. Especially for turbulent flows, these computer models are numerically very expensive, and the direct numerical simulation of turbulent flows remains restricted to relatively simple cases. In most applications, a certain level of approximation of the smaller scales, or even of the whole turbulent part of the flow, is necessary to be able to simulate turbulent flows at all. This remains true despite the fact that computer performance has increased rapidly over the last decades.
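
For reference, the incompressible form of these equations (written here in standard textbook notation rather than as they appear in any particular model) relates the velocity field u, pressure p, constant density ρ, kinematic viscosity ν and a body force f:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0
```

It is the nonlinear advection term (u·∇)u that couples motions across scales and makes turbulent solutions so difficult to obtain, whether analytically or numerically.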

When the first numerical weather forecast was computed by hand by Lewis F. Richardson in 1922, it took him 6 weeks to calculate a 6-hour forecast for Europe. This forecast, apart from coming approximately 5.96 weeks too late, was also wrong. Nevertheless, his work was revolutionary and kicked off the era of numerical weather forecasts. In the publication of his results he estimated that it would take 64,000 human computers (people who solve the numerical equations by hand) to simulate the global weather in real time (meaning it would take one hour to compute a one-hour forecast). He envisioned a weather centre in which hundreds of human computers would work together solving the equations for their respective parts of the forecast domain, while coordinators would make sure that everybody stayed in sync, collect the results for each time step from the human computers and combine them into one big data set. He basically described the parallelisation of numerical flow models, including the communication between the “nodes”, which decades later would be used to compute global weather forecasts in only a few hours on modern supercomputers.

One of the first supercomputers was the CDC 6600. In 1964 it was considered to be the most powerful computer in the world and could perform an astonishing three million floating point operations per second (3 megaflops). Today’s fastest supercomputer is the IBM Summit, which is able to perform 122×10¹⁵ floating point operations per second, 40 billion times more than the CDC 6600. Despite this impressive increase in computational power, current supercomputers are still not fast enough to allow the direct simulation of turbulent flows for most real-life applications, leaving plenty of interesting research topics for current and future scientists to investigate.
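
As a quick back-of-the-envelope check of that factor, using the round numbers quoted above:

```python
cdc_6600_flops = 3e6     # CDC 6600: ~3 megaflops (1964)
summit_flops = 122e15    # IBM Summit: ~122 petaflops

ratio = summit_flops / cdc_6600_flops
print(f"Summit is roughly {ratio:.1e} times faster")   # ~4.1e+10, i.e. about 40 billion
```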

Reference:
Richardson, L. F., 1922: Weather prediction by numerical process. Cambridge University Press, 236 pp.

Posted in Numerical modelling, Turbulence

The consequences of climate change: how bad could it get?

By: Nigel Arnell

The United Nations Climate Action Summit held in New York on 23rd September was meant to be the occasion where countries and industry organisations made stronger commitments to reduce the emissions of the greenhouse gases that are causing global warming. Whilst 65 countries pledged to achieve net-zero emissions by 2050, the overall commitment – according to some commentators – fell “woefully short” of expectations. Most news headlines after the event focused on Greta Thunberg’s speech where she passionately challenged world leaders to do more: “the eyes of all future generations are upon you”.  Meanwhile, campaign groups such as Extinction Rebellion warn of ‘unprecedented global emergency’, ‘climate breakdown’ and ‘mass extinction’. The best-selling author David Wallace-Wells writes of ‘The Uninhabitable Earth’.

So what are the consequences of climate change, and how bad could it get? The impacts of climate change in the future depend not only on how climate – temperature, precipitation and so on – changes, but also on how societies and economies change. Our estimates of changes in climate depend on two things. First is a projection of how emissions will change. This depends on how economies and energy use change, and we cannot predict this: it will depend on policy choices, such as the actions to reduce emissions announced in New York last week. We, therefore, use ‘scenarios’ to describe plausible changes in emissions, but these should not be seen as predictions. The second part is where climate science fits in. Projections of the effect of an emissions scenario on changes in local weather are made using climate models. Whilst all models produce broadly similar changes in climate (temperatures increase in high latitudes more than in tropical regions, wet areas tend to get wetter and dry areas tend to get drier), the detailed projections can vary considerably between climate models. We can estimate the consequences of these changes in local weather for local climate hazards and resources – such as floods and droughts – using separate ‘impacts’ models.

Figure 1: Change in global heatwave, drought and flood hazard through the 21st century under three plausible emissions scenarios, and numbers of people affected in 2100. The five estimates of people affected for each emissions scenario are from five different socio-economic scenarios. See Arnell et al. (2019) for specific definitions of the indicators.

In practice, policymakers and others want to know the human impacts of climate change. We can estimate the direct impacts – such as the number of people affected by flooding, droughts or heatwaves – by combining our estimates of the physical changes in climate with socio-economic scenarios describing plausible changes in population and the economy. Figure 1 shows the global direct impacts of climate change on exposure to heatwaves, river flooding and drought (from Arnell et al., 2019). The left panels show changes in the physical hazard through the 21st century, under different emissions. The range in estimates for each emissions scenario shows the uncertainty due to different model projections of local temperature and precipitation. By 2100, the chance of experiencing a major heatwave rises from 8% to 100%, the average proportion of time in drought goes up from 6% to 30%, and the chance of a flood increases from 2% to 7% (in some regions the changes are greater). The right-hand panels show the human impacts in 2100, under three of the emissions scenarios and for five socio-economic scenarios. The biggest impacts – in terms of numbers of people affected – are obviously with the highest emissions, but the differences between the socio-economic scenarios can be very large. When presenting such results we often focus on the central estimates (the solid lines in the figures), but we could instead look at the ‘worst-case’ impacts at the top end of the distribution. These can be a lot higher than the central estimates.

These direct impacts have knock-on consequences: changes in the frequency of droughts, for example, could plausibly lead to loss of livelihoods, food insecurity, political instability and displacement of people. It is these potential ‘systemic’ risks that are of greatest significance for policymakers and indeed are behind many of the most extreme warnings about climate emergencies and crises. However, these knock-on consequences depend not only on the physical changes in climate and future socio-economic scenarios that we can model but also – and largely – on how societies and governments react and behave. In order to estimate the likelihood of the real ‘worst-case scenarios’ that are ringing the loudest alarm bells we, therefore, need to link the work of climate scientists, impact modellers and experts in institutions, governance and human behaviour.

References:

Arnell, N.W. et al. (2019) The global and regional impacts of climate change under Representative Concentration Pathway forcings and Shared Socioeconomic Pathway socioeconomic scenarios. Environmental Research Letters 14 084046.  doi:10.1088/1748-9326/ab35a6

Wallace-Wells, D. (2019) The Uninhabitable Earth: a story of the future. Penguin: London

Posted in Climate, Climate change, Climate modelling

Probing the atmosphere with sound waves

By: Javier Amezcua

Summer is a quiet time for both the University of Reading and the town itself. The buzzing that fills campus during term time is gone, the population decreases and activities are reduced. Some people find it relaxing – I find it boring and lethargic. There is an exception to this quietness, which occurs during the August Bank Holiday weekend. Any music aficionado knows this is the time when the annual Reading and Leeds Festival takes place. Thousands of people from all around the UK descend on our town to inhabit a patch of land next to the Thames for three days and enjoy some of their favourite bands, as well as some other excesses…

During my seven years in Reading, I have indulged myself by attending the festival four times. On each of these occasions, I have had a similar conversation with my colleagues when we were back at work the day after the Bank Holiday Monday. First, they hypothesize that I was the oldest person at the whole festival (not true, but it is accurate to say that I am too old to camp, so instead I go home each day). Second, they state that at night they could hear the music all the way to their houses. I found the latter comment interesting given what I had experienced. Let me explain: there have been days on which I was not interested in the headliner act and hence left the festival while the music was still playing. On each occasion, I remember walking away from the festival and noticing the music fading progressively until it was gone. So how could other people hear it at their homes when these were further away from the stage? The answer, not obvious to me at first, is that some sound waves were ‘jumping’ over me.

Figure 1: Simple representation of the reflection of a vertically propagating wave in the atmosphere. The sound wave (yellow line) departs from a source (S) at the surface, reaches a maximum height (Zmax) and it is reflected back towards the surface where it can be detected in a receiver (R). The cross-winds that the wave-front encounters make it seem to come from a false source in a slightly different direction than the real source.

To understand my explanation, we need to think about the way sound travels from the loudspeakers around and above the stage. Without going into the specifics of the type of loudspeaker (there are lots), the sound waves can be transmitted in both the horizontal and vertical directions (you can see a classical illustration of how spherical sound waves work in the YouTube link in the references). At the end of the night, as I walked away from the source, the sound waves coming horizontally in my direction were attenuated by the medium (air) and by obstacles, and hence I stopped hearing the music. What about those waves with a vertical component? Figure 1 answers this: the yellow line represents the path of these waves. They travel up to some maximum height until they are reflected back towards the surface. The attenuation over that path is different from the attenuation of the horizontally propagating wave (in the case I am discussing it is less). So, the people in town were receiving a sound wave that had been reflected at an upper level and still had enough intensity. It also helped that it was night-time and not day-time (at night the conditions are more favourable for reflection/refraction, but that is another issue).

Figure 2: Vertical sensitivities for the infra-sound waves generated by the detonation of old ammunition in Finland and detected in Norway. The horizontal axis is the time; we indicate the year but not the exact time of detonation. The vertical axis is height. Notice that most infra-sound waves reach about 40km in height.

Why am I telling this story? Because lately I have been using the behaviour I just described as a tool to probe the winds in the stratosphere (roughly between 12 and 50 km in the mid-latitudes). Finland has a lot of old ammunition, mainly from the Cold War, that it is trying to get rid of. Therefore, every summer the Finnish army performs a series of controlled detonations at a remote location over the course of several days. These explosions produce infrasound waves (waves below 20 Hz, which cannot be detected by the human ear). Some of them follow a path with a vertical component: they reach a maximum level and are reflected back to the surface, where they are detected, about 10 minutes after the explosion, at a station in Norway. This station is about 178 km due north of the explosion site and has quite powerful micro-barometers which are able to measure precisely the pressure variations caused by the infrasound waves. I have some enthusiastic colleagues at the Norwegian Seismic Array (NORSAR) who have shared these observations with me. Figure 2 shows a model-based reconstruction of the maximum height the waves reach for explosion events from 2001 to 2018. There are different numbers of detonations per year, which is why the horizontal axis looks irregular. Notice that most of the waves reach about 40 km in height, and some up to 60 km.

So how do I probe the atmosphere with these waves? As the waves travel through the atmosphere they are affected by several atmospheric conditions: winds, humidity, etc. In particular, the presence of cross-winds (i.e. winds perpendicular to the direction of the wavefront) can shift the detection angle of the waves when they reach the ground. Hence the waves appear to have come from a false source in the direction of the blue line in Figure 1. Since I know the exact location of the source, the time it took for the waves to be detected, and the shift angle towards the apparent source, I can deduce some values for the cross-winds each infrasound wave encountered, including those in upper levels of the atmosphere. In order to solve this estimation problem, I use techniques from inverse problems and data assimilation which I will not discuss in this post; I only mention that I use an implementation of the ensemble Kalman filter.
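
For readers curious about that last step, the sketch below shows the analysis update of a generic stochastic ensemble Kalman filter. It is only a textbook-style illustration, not the implementation used in this work: the state vector, observation operator and error covariances are placeholders (the state might hold a cross-wind profile, and the observations an arrival-time delay and a back-azimuth shift, say).

```python
import numpy as np

def enkf_analysis(X_f, y, H, R, rng):
    """Stochastic ensemble Kalman filter analysis step (generic sketch).

    X_f : (n_state, n_ens) forecast ensemble, e.g. candidate cross-wind profiles
    y   : (n_obs,) observations, e.g. travel-time delay and back-azimuth shift
    H   : (n_obs, n_state) linearised observation operator (placeholder)
    R   : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X_f.shape[1]
    A = X_f - X_f.mean(axis=1, keepdims=True)                 # ensemble anomalies
    HA = H @ A                                                # anomalies in observation space
    P_HT = A @ HA.T / (n_ens - 1)                             # P H^T estimated from the ensemble
    K = P_HT @ np.linalg.inv(HA @ HA.T / (n_ens - 1) + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
    return X_f + K @ (Y - H @ X_f)                            # analysis ensemble
```

Each detonation supplies a fresh set of observations, and repeating an update of this kind gradually constrains the cross-winds along the propagation path.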

It is quite difficult to measure winds in the stratosphere, hence any source of information is valuable, and this includes the strategy discussed here. There are other sources of infrasound waves around the world that we can exploit, and some of them are natural! For instance, the ocean swell in a spot near Iceland is a natural producer of infrasound waves. So we could use this hot-spot to probe the winds between that location and the same receiver in Norway, or many other receiving stations. At the moment I am participating in a collaborative project with colleagues from several European institutions to tackle this problem.

Reference:

Amezcua J., S. P. Näsholm, E. M. Blixt, and A. J. Charlton-Perez, 2019: Assimilation of atmospheric infrasound data to constrain tropospheric and stratospheric winds. QJRMS, submitted.

Extract from Sound Waves And Their Sources (1933): a classical illustration of how spherical sound waves work can be seen in this animation.

Posted in Climate, data assimilation, Stratosphere, Wind

Coffee and atmospheric physics

By: Prof Maarten Ambaum

Every morning I trundle down to the office kitchen and I make myself a whole thermos flask of coffee which keeps me going for the rest of the day. In fact, most people in our Department have a similar daily ritual. During coffee breaks, science is discussed as well as more mundane things (a lot of politics, these days). Coffee is the fuel of science!

There are deeper links between science and coffee as well: recently our hot water boiler in the office kitchen was replaced by a fancy new one. This new boiler has a so-called “eco-mode” which claims to save energy, essentially by running the boiler at half capacity. This claim could not go untested; we are a science department after all! Some basic thermodynamics (the science of heat and energy) and some experiments showed that the eco-mode is nothing of the sort: it does not save energy, and we haven’t used the eco-mode since. A blog with the fun details can be found here.

In fact, this autumn I will again be teaching our new cohort of master’s students the ins and outs of atmospheric thermodynamics. It is a profoundly interesting part of physics and lies at the foundation of our understanding of climate and weather. And of our understanding of hot water boilers, of course.

A good understanding of fundamental physics is crucial in our field of science. For example, most climate sceptics use arguments that fall over at the level of fundamental physical understanding.

Many people still cannot accept the idea that adding carbon dioxide to the atmosphere could ever heat it up in any substantial way. This kind of argument can be debunked comprehensively by basic thermodynamics. Adding carbon dioxide to the atmosphere is similar to putting a thicker duvet on your bed: a thicker duvet will make you feel warmer, not because you produce more heat or because the duvet somehow warms you. The key is that the heat energy you produce has a harder job of escaping to the environment through a thicker duvet, and it can only do so if the temperature in your bed rises, allowing the same amount of heat to escape through the thicker duvet.

The same is true for the earth’s climate: the atmosphere acts as a blanket on the earth’s surface. The earth’s surface is heated directly by the sun (whose energy output remains broadly constant), so if the atmospheric blanket gets thicker (by adding carbon dioxide), the earth’s surface needs to get warmer for the heat to escape at the same rate.
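
The blanket argument can be made quantitative with the standard single-slab greenhouse model found in thermodynamics textbooks; the numbers below are a generic illustration rather than a calculation from this post. The atmosphere absorbs a fraction ε of the surface’s infrared emission and re-emits half of it back downwards, so energy balance forces the surface to warm as ε (the “thickness” of the blanket) increases:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m-2 K-4)
S0 = 1361.0       # solar constant (W m-2)
ALBEDO = 0.3      # planetary albedo

def surface_temperature(epsilon):
    """Single-slab 'blanket' model: T_s = [S0(1 - albedo) / (4 sigma (1 - epsilon/2))]**0.25."""
    absorbed_sunlight = S0 * (1.0 - ALBEDO) / 4.0   # ~238 W m-2 averaged over the globe
    return (absorbed_sunlight / (SIGMA * (1.0 - epsilon / 2.0))) ** 0.25

for eps in (0.75, 0.80):   # a slightly 'thicker' blanket
    print(f"epsilon = {eps:.2f} -> surface temperature ~ {surface_temperature(eps):.1f} K")
# The thicker blanket (larger epsilon) gives a warmer surface for the same sunshine.
```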

There are many fascinating additional details to this picture, way too many to address here. Many of those I will be teaching to our new group of students (for example, how and why carbon dioxide changes the effective thickness of the atmospheric blanket), and many are also still actively researched in our Department (for example, how changing cloud properties might change the effective thickness of the atmospheric blanket, but also how they might change the amount of energy from the sun reaching the earth’s surface). But the underlying foundations are rock-solid physics.

Here’s a brainteaser to keep you busy: for my coffee to stay hot for longer, should I pour it into a bigger or a smaller mug?

References:

Ambaum, M. H. P., 2010: Thermal Physics of the Atmosphere, J. Wiley & Sons, Chichester, 256pp.

Ambaum, M. H. P., and M. Prosser, 2019: Is our “ECO mode” hot water boiler eco-friendly?

Posted in Climate

Do we have an appropriate description of energetic particles in the Earth’s outer radiation belt?

By: Oliver Allanson

Figure 1: A particle undergoes Brownian motion.

The short answer: probably not, at least not all of the time.

In our state-of-the-art, physics-based numerical experiments, we analyse the motion of 100 million individual high-energy electrons that evolve within conditions like those found in the Earth’s hazardous ‘radiation belt’ environment. We observe that the electrons do not always behave in the manner most typically used by scientists to describe their evolution. The standard mathematical description is based upon diffusion proceeding in a manner analogous to ‘Brownian motion’, e.g. the familiar high-school experiment showing the random motion of particles suspended in a fluid. The random motion of an individual particle undergoing Brownian motion is illustrated in Figure 1 [1]. In contrast, we observe that the electrons sometimes spread apart at rates that either ‘accelerate’ or ‘decelerate’ in time. This could have implications for the modelling of high-energy electrons in our magnetosphere, and hence for satellite safety.

Figure 2: The Earth’s Radiation Belts.

Figure 3: Not all diffusion is Brownian! The ‘mean-squared-displacement’ can evolve at rates that either increase (‘super-diffusion’) or decrease (‘sub-diffusion’) with time.

The Earth’s outer radiation belt

The Earth’s outer radiation belt is a dynamic and spatially extended radiation environment within the Earth’s inner magnetosphere, composed of energetic plasma that is trapped by the geomagnetic field (see Figure 2 [2]). The size and location of the outer radiation belt vary dramatically in response to solar wind variability. The lifetime of some individual energetic particles can be long (~years). However, order-of-magnitude changes in the particle flux can occur on much shorter timescales (~hours). Whilst we know that the radiation belt environment is ultimately driven by the solar wind and the pre-existing state of the magnetosphere, it is very challenging to accurately predict, or model, fluxes within the radiation belt. This difficulty arises from the fact that the magnetosphere can store and transport energy in many different ways, and over a range of different time and length scales. This difficulty in prediction is a pressing concern given the hundreds of satellites that orbit within this hazardous environment. The highly variable and energetic electron environment poses critical space weather hazards for Low, Medium, and Geosynchronous Earth Orbiting (LEO, MEO, and GEO) spacecraft; thus, the ability to predict its variability is a key goal of the magnetospheric space weather community.

Most physics-based computer models of particle dynamics in the radiation belts rely upon a specific version of ‘quasilinear theory’. This approach is founded upon a number of physical assumptions that are now known not to always hold in the radiation belt. Furthermore, the mathematics used to describe this quasilinear theory is based upon ‘normal diffusion’ equations, i.e. equations that (in a given space) describe ‘stochastic’ Brownian motion. This stochastic assumption is also considered to be uncertain in some circumstances.

Our work tries to test these assumptions by processing data from state-of-the-art and fully self-consistent numerical experiments. Electron diffusion characteristics are directly extracted from particle data. The ‘nature’ of the diffusive response is not always constant in time, i.e. we observe a time-dependent ‘rate of diffusion’ that is inconsistent with Brownian motion (see Figure 3 [3]). However, after an initial transient phase, the rate of diffusion does tend to a constant, in a manner that is consistent with the assumptions of quasilinear diffusion theory. This work establishes a framework for future investigations of the nature of diffusion in the Earth’s outer radiation belts, using physics-based numerical experiments.
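
The distinction can be illustrated with a toy random walk; this is purely illustrative and has nothing to do with the particle-in-cell data analysed in the study. For Brownian motion the mean-squared displacement grows linearly with time, so the fitted exponent is close to one, whereas super- and sub-diffusion give exponents above or below one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 10_000, 1_000

steps = rng.normal(size=(n_particles, n_steps))   # independent random increments
x = np.cumsum(steps, axis=1)                      # Brownian-like trajectories
msd = (x ** 2).mean(axis=0)                       # mean-squared displacement at each time

t = np.arange(1, n_steps + 1)
alpha = np.polyfit(np.log(t), np.log(msd), 1)[0]  # fit MSD ~ t**alpha
print(f"diffusion exponent alpha = {alpha:.2f}")  # ~1.0: normal (Brownian) diffusion
# alpha > 1 would indicate super-diffusion, alpha < 1 sub-diffusion.
```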

How much, and when, does this matter?

All of the work described here pertains to a ‘benchmarking’ scenario in which we prove the concept of our experimental technique, and in which one is least likely to observe particularly exotic behaviour [4]. In future experiments we will: (i) make more quantitative assessments; (ii) subject the plasma to more extreme conditions (we therefore expect to find a more sustained ‘non-Brownian’ response); and (iii) assess the implications for current models.

[1] A particle undergoes Brownian motion.

Reproduced from https://commons.wikimedia.org/wiki/File:Csm_Brownian-Motion_f99de6516a.png.

[2] The Earth’s Radiation Belts.

Reproduced from https://www.nasa.gov/mission_pages/sunearth/news/gallery/20130228-radiationbelts.html.

[3] Not all diffusion is Brownian! The ‘mean-squared-displacement’ can evolve at rates that either increase (‘super-diffusion’) or decrease (‘sub-diffusion’) with time.

Reproduced from https://commons.wikimedia.org/wiki/File:Msd_anomalous_diffusion.svg.

[4] O. Allanson, C. E. J. Watt, H. Ratcliffe, N. P. Meredith, H. J. Allison, S. N. Bentley, T. Bloch and S. A. Glauert, Particle-in-cell experiments examine electron diffusion by whistler-mode waves: 1. Benchmarking with a cold plasma, Journal of Geophysical Research: Space Physics (in press).

Posted in Space, space weather

Climate change is spinning up the global energy and water cycles.

By: Richard Allan

I was unfortunate enough to mildly injure my middle finger by typing too frenetically on a train journey from Toulouse, returning from an Intergovernmental Panel on Climate Change meeting. I soon forgot about this by luckily stepping on a rusty nail the next day while demolishing a shed; following a tetanus booster, I am back to assessing research and preparing text outlining our knowledge of how the water cycle is expected to evolve as the planet continues to heat up from the emissions of greenhouse gases.

Climate change will impact people and the ecosystems upon which we all depend through aspects of the water cycle. The physics of the atmosphere, oceans and land surface tells us that climate change will alter, and in many cases intensify, events that leave too little usable water to meet our needs or that produce too much water at once, as deluges overwhelm drainage capacity. Thousands of person-years of work cram state-of-the-art scientific knowledge into the millions of lines of computer code required to make realistic simulations of our climate. These are combined with observations of the real world and physical interpretation to assess the range of future possibilities, so that policy makers can plan effectively.

No one is killed by global average temperature, yet understanding and monitoring how the Earth’s energy and water cycles are currently evolving is a challenge for our observing systems and a test of our basic understanding of the climate system. At the risk of further injuring my finger, I’ll get straight to a simple depiction of how our global climate is evolving in the diagram below. This shows departures from the usual monthly values in global average surface temperature, atmospheric moisture, precipitation and the energy accumulation driving climate change. These are based on surface measurements and satellite observations, with gaps in coverage filled by a meld of observations and simulations called “reanalyses”. The grey shading shows results from “CMIP6”, the latest generation of climate simulations, here run in atmosphere-only “AMIP” mode (fed with the observed sea surface temperature and sea ice, as well as realistic changes in the radiative forcing agents that are perturbing our climate) so that they are directly comparable to the observations.

Figure 1: Simulations and observations of global average temperature, moisture, precipitation and heating balance between absorbed sunlight and emission of infrared radiative energy to space (extended from Allan et al. 2014a,b).

The ocean temperature has been increasing by around 0.2 °C every decade, primarily due to rising atmospheric carbon dioxide concentrations. This trend is punctuated by natural climate fluctuations. For example, the 1991 eruption of Mt Pinatubo in the Philippines cooled the global climate for a few years as ejected particles reflected sunlight back to space (seen in the dip in Earth’s heating rate), while the slow, random sloshing about of the ocean briefly warms the climate in El Niño events (as marked on the diagram in 1998 and 2016). The temporary warmth is eventually lost to space, as seen in the dip in Earth’s heating rate as El Niño takes hold.

As the planet has warmed, both satellite estimates and surface observations show that moisture in the atmospheric column has become more plentiful (a 6-7% increase for each °C of global warming). This is expected from basic physics, and simulations of the atmosphere reliably recreate what is observed. This increases our confidence in the most powerful amplifying effect on climate change, the water vapour feedback, in which warmer air holding more moisture traps more radiative heat. A greater abundance of moisture also drives an intensification of the water cycle, with greater flows of moisture from regions of strong evaporation into storms. This is intensifying rainfall events and the severity of flooding where heavy rainfall occurs. This is also seen in warm El Niño events, with a peak in global precipitation, although the impacts are felt more through the redistribution of rainfall and unusual weather patterns.
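
The 6-7% figure follows from the Clausius-Clapeyron relation, under the common assumption that relative humidity stays roughly constant: the fractional increase of saturation vapour pressure per degree of warming is approximately L/(R_v T²). A quick order-of-magnitude check with typical near-surface values (an illustrative calculation, not taken from the papers below):

```python
L_V = 2.5e6     # latent heat of vaporisation of water (J kg-1)
R_V = 461.5     # specific gas constant for water vapour (J kg-1 K-1)
T = 288.0       # typical near-surface temperature (K)

cc_rate = L_V / (R_V * T ** 2)   # d(ln e_s)/dT from Clausius-Clapeyron
print(f"~{100 * cc_rate:.1f} % more saturation vapour pressure per K")   # ~6.5 % per K
```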

The global precipitation rate is a slave to Earth’s energy balance rather than to moisture, which is why only small changes in global precipitation (a 1 or 2% increase for each °C of warming) are expected in the short term, as seen in the simulations and satellite data. Satellites and ocean measurements monitor Earth’s energy balance, and although this fluctuates from year to year there is a continual accumulation that is heating the planet, equivalent to every person currently alive on Earth each using twenty-two 2-kilowatt electric kettles to boil the ocean (babies would probably need supervision).
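
The kettle comparison is easy to check with round numbers; the energy-imbalance value used below is an assumed, illustrative figure of the right order of magnitude rather than a number quoted in this post:

```python
import math

population = 7.7e9          # approximate world population in 2019
kettles_per_person = 22
kettle_power_w = 2e3        # a 2-kilowatt kettle

kettle_total_w = population * kettles_per_person * kettle_power_w    # ~3.4e14 W

earth_radius_m = 6.371e6
earth_area_m2 = 4 * math.pi * earth_radius_m ** 2                    # ~5.1e14 m2
imbalance_w_m2 = 0.7        # assumed global-mean energy imbalance (W m-2)

print(f"kettles: {kettle_total_w:.1e} W, imbalance: {imbalance_w_m2 * earth_area_m2:.1e} W")
# Both come out at a few times 10**14 W, so the comparison holds up.
```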

Current indicators of climate change are vital in strengthening understanding of how the climate is changing now, how it will change in the future, and what is needed to avoid and adapt to the associated damaging effects. Earth observation from satellites, together with other measurements, is essential for verifying, questioning and improving this understanding. And with that I’m off to the UK’s National Centre for Earth Observation annual conference to learn more!

References:

Allan, R. P., C. Liu, N. G. Loeb, M. D. Palmer, M. Roberts, D. Smith and P.-L. Vidale (2014) Changes in global net radiative imbalance 1985-2012, Geophysical Research Letters, 41, 5588-5597, doi:10.1002/2014GL060962  

Allan, R. P., C. Liu, M. Zahn, D. A. Lavers, E. Koukouvagias and A. Bodas-Salcedo (2014) Physically consistent responses of the global atmospheric hydrological cycle in models and observations, Surveys in Geophysics, 35, 533-552, doi:10.1007/s10712-012-9213-z

Posted in Climate, earth observation, Water cycle

Effect of the North Atlantic Ocean on the Northeast Asian climate: variability and predictability

By: Paul-Arthur Monerie

North East Asia has warmed substantially since the mid-1990s, leading to an increase in temperature extremes and to societal impacts (Dong et al., 2016). Predicting the variability of the North East Asian climate is therefore of great interest, since it would help the population to anticipate strong climatic events.

Figure 1: Anomaly correlation coefficient (ACC) skill score for SAT in DePreSys3 hindcasts (using NCEP as observations) in extended summer (JJAS) for year 2–5 lead times. The ACC is calculated after a linear trend is removed at each grid point. Stippling indicates that the ACC is different from zero at the 95% confidence level according to a Monte-Carlo procedure. Figure from Monerie et al. (2017).

Climate models allow us to simulate the climate and to project its short-term to long-term evolution. We used the decadal prediction system DePreSys3 (Dunstone et al., 2011) and assessed how well the model is able to predict, retrospectively, the observed temperature up to 5 years ahead (Monerie et al., 2017). The correlation between the observed and the simulated temperature (i.e. the anomaly correlation coefficient) shows that the climate model satisfactorily reproduces the observed temperature over many places, including North East Asia and the North Atlantic Ocean (Fig. 1).
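
For readers unfamiliar with the skill score in Figure 1, the anomaly correlation coefficient is simply the correlation between observed and predicted anomalies; a bare-bones version for a single grid point (ignoring the detrending and significance testing described in the figure caption) might look like this:

```python
import numpy as np

def anomaly_correlation(obs, fcst):
    """Correlation between observed and forecast anomalies at one grid point."""
    obs_a = obs - obs.mean()
    fcst_a = fcst - fcst.mean()
    return float((obs_a * fcst_a).sum() /
                 np.sqrt((obs_a ** 2).sum() * (fcst_a ** 2).sum()))
```

Values close to 1 indicate that the hindcasts track the observed year-to-year variability well.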

Further analyses have highlighted a statistical co-variability between the temperature over East Asia and the variability of the temperature over the North Atlantic Ocean, with a positive phase of the North Atlantic Multidecadal Variability (i.e. the low-frequency variability of the North Atlantic sea surface temperature) associated with a warming over North East Asia, in agreement with Lin et al. (2016)  and Sun et al. (2019). Prediction systems have good skill in retrospectively predicting the temperature over the North Atlantic Ocean up to 5 years ahead (García-Serrano et al., 2015) and we thus propose that such climate models and experimental protocols could be useful to predict the low-frequency variability of the temperature over East Asia.  

Figure 2: Impact of AMV on (top panels) surface temperature (°C), in (left) JJA and (right) SON. Stippling indicates that changes are significantly different to zero according to a Student’s t-test at the 95% confidence level.

The mechanisms linking the North Atlantic Ocean to North East Asia have then been assessed by performing a set of sensitivity experiments, following Boer et al. (2016), and using the MetUM-GOML2 climate model (Hirons et al., 2015). We confirm that, in a climate model, a warming of the North Atlantic Ocean is associated with an increase in temperature over North East Asia (Fig. 2). We identify two mechanisms which link the North Atlantic Ocean to East Asia. First, the warming of the Atlantic Ocean is associated with a perturbation of the circumglobal teleconnection pattern (i.e. the atmospheric circulation over the Northern Hemisphere) (Ding and Wang, 2005; Beverley et al., 2019). Second, the Atlantic Ocean is able to force part of the variability of the Pacific Ocean, leading to an excess of precipitation over the Philippines and to a Rossby wave which propagates over the western Pacific Ocean. Both mechanisms are able to impact East Asia, by increasing heat advection and incoming surface shortwave radiation locally.

Our ongoing results show that we might be able to increase our ability to predict the climate over East Asia by improving our knowledge of the impacts and variability of the North Atlantic Ocean.

References

Boer, G. J., and Coauthors, 2016: The Decadal Climate Prediction Project (DCPP) contribution to CMIP6. Geoscientific Model Development, 9(10), 3751–3777. https://doi.org/10.5194/gmd-9-3751-2016

Beverley, J.D., S.J. Woolnough, L.H. Baker, S.J. Johnson, and A. Weisheimer, 2019: The northern hemisphere circumglobal teleconnection in a seasonal forecast model and its relationship to European summer forecast skill. Climate Dynamics, 52, 3759, https://doi.org/10.1007/s00382-018-4371-4

Ding, Q., and B. Wang, 2005: Circumglobal teleconnection in the Northern Hemisphere summer. Journal of Climate, 18, 3483–3505. https://doi.org/10.1175/JCLI3473.1

Dong, B., R.T. Sutton, W. Chen, X. Liu, R. Lu, and Y. Sun, 2016: Abrupt summer warming and changes in temperature extremes over Northeast Asia since the mid-1990s: Drivers and physical processes. Advances in Atmospheric Sciences, 33(9), 1005–1023. https://doi.org/10.1007/s00376-016-5247-3

Dunstone, N. J., D.M. Smith, and R. Eade, 2011: Multi-year predictability of the tropical Atlantic atmosphere driven by the high latitude North Atlantic Ocean. Geophysical Research Letters, 38(14). https://doi.org/10.1029/2011GL047949

García-Serrano, J., V. Guemas, and F.J. Doblas-Reyes, 2015: Added-value from initialization in predictions of Atlantic multi-decadal variability. Climate Dynamics, 44(9–10), 2539–2555. https://doi.org/10.1007/s00382-014-2370-7

Hirons, L. C., N.P. Klingaman, and S.J. Woolnough, 2015: MetUM-GOML: a near-globally coupled atmosphere–ocean-mixed-layer model. Geoscientific Model Development, 8, 363–379. https://doi.org/10.5194/gmd-8-363-2015

Lin, J.-S., B. Wu, and T.-J. Zhou, 2016: Is the interdecadal circumglobal teleconnection pattern excited by the Atlantic multidecadal Oscillation? Atmospheric and Oceanic Science Letters, 9(6), 451–457. https://doi.org/10.1080/16742834.2016.1233800

Monerie, P.-A., J. Robson, B. Dong, and N. Dunstone, 2017: A role of the Atlantic Ocean in predicting summer surface air temperature over North East Asia? Climate Dynamics. https://doi.org/10.1007/s00382-017-3935-z

Sun, X., S. Li, X. Hong, and R. Lu, 2019: Simulated Influence of the Atlantic Multidecadal Oscillation on Summer Eurasian Nonuniform Warming since the Mid-1990s. Advances in Atmospheric Sciences, 36(8), 811–822. https://doi.org/10.1007/s00376-019-8169-z

Posted in Climate, Climate modelling, Predictability

It’s Hotter Than A Ginger Mill In Hades

By: Giles Harrison and Stephen Burt

Or so they sometimes say in the south of the United States. But without a reference ginger mill or ready access to Hades, how do we know how hot it really is, and how much can we trust the measurements of the record temperatures we had in July? The basics of air temperature measurement are simple enough – put a thermometer in the shade and keep air moving past it – but the details of doing this matter a lot. And perhaps in all the flurry about records, this detail isn’t so widely appreciated. For example, how many times have you heard a radio phone-in programme asking listeners for car or garden temperature readings to compare, or a tennis commentator mentioning the temperature on centre court at Wimbledon? For a thermometer anywhere in direct sunlight, sheltered from the wind, its temperature is just that of a hot thing in the sun. It’s highly unlikely to be a reliable air temperature.

Meteorologists have worked on this problem for a long time. The first liquid-in-glass thermometers appeared in Renaissance Italy in the 1640s, gradually becoming more reliable and consistent during the eighteenth century. Temperature measurements slowly became more widespread in Europe as thermometers improved, and became particularly well organised internationally in the eighteenth and nineteenth centuries. Some of the earliest reliable air temperature measurements began in national observatories making astronomical or geophysical measurements for which the temperature was merely needed as a correction factor, and many of these early “temperature series” still continue. The needs of modern climate science have made understanding these early meteorological technologies, and the exposure of the instruments, much more important.

Figure 1: Thermometer screens. (Left) Stevenson-type screen at the Reading University Atmospheric Observatory. (Right) Beehive screen at the meteorological site of the Universitat de les Illes Balears, Palma. Both sites also have nearby wind measurements.

To provide protection from direct sunlight, long-wave (terrestrial) radiation and other demanding environmental factors such as rain, while retaining airflow, thermometers are usually placed within a semi-porous shelter or shield, often referred to as a thermometer screen. Screens are almost always made from white material (externally at least) to reflect sunlight:  many different designs are in use internationally. At a meteorological site they should be positioned for good airflow and arranged so that the hinged door to read the thermometer opens on the shady side. In later versions of the widely adopted thermometer screen originally designed by the lighthouse engineer Thomas Stevenson (1818-1887, and father of Robert Louis Stevenson), double-louvred slats are used to form the sides of the screen, to maximise thermal contact with the air passing through. Smaller cylindrical “beehive” screens based on the same principle containing smaller electronic sensors are now also widely used (figure 1).

The accuracy of the air temperature recorded by a screen depends on three main factors: how closely the in-screen temperature follows the air temperature, how quickly the sensor responds to changes in temperature, and of course the accuracy of the sensor used. A meteorological thermometer is typically a liquid-in-glass device (e.g. a mercury thermometer), or an electronic sensor, such as a platinum resistance thermometer. With their lower mass, the latter can respond more quickly than the former, so the World Meteorological Organisation (WMO) sets out observing guidelines on sensor response time, mandating that temperature measurements be averaged over 60 seconds. This helps ensure comparability of records between different instrument types (and thus historical records) and avoid spurious very short-duration maximum and minimum temperatures. Thermometers (whether liquid-in-glass or electronic) are calibrated by comparison against reference devices in laboratory experiments, and the corrections needed derived.  With regular calibration checks to eliminate effects of drift, and many other precautions, measurements accurate to 0.1 °C become possible.

Figure 2: Temperature difference (Tdiff) between a thermometer in open air and screen temperature (Tscrn) at the Reading University Atmospheric Observatory, plotted against (left) screen temperature and (right) wind speed at 2m (u2), which is approximately at the screen height. (Modified from [2]).

The question of how closely the screen temperature represents the air temperature is much more difficult, as to assess it perfectly the true air temperature itself would be needed. Comparison against a reference temperature better than that of the screen is all that can be done, and the precision experiments necessary are difficult to maintain for anything other than short periods. Comparisons (or “trials”) between one design of screen and another are more common, and tend to be undertaken by national meteorological services. These of course only show how to account for changes in screen design, but not the fundamental question of how well air temperature itself is determined. Nevertheless, from the few investigations available, WMO states[1] that worst-case temperature differences between naturally ventilated thermometer screens and artificially-ventilated (aspirated) sensors and air temperature lie between 2.5 °C and -0.5 °C. With temperatures commonly reported to 0.1 °C, this seems astonishingly large! However, in a year-long study[2] at Reading University Atmospheric Observatory using a naturally ventilated screen with a careful procedure to overcome inevitable sensor breakages, differences as large as this were indeed occasionally observed, skewed to the same warm bias of the screen indicated by WMO (Figure 2). However, these large differences were exceptional, as 90% of the temperature differences were well within ± 0.5 °C. Figure 2 shows that the key aspect in reducing the uncertainties is the wind flow around and through the screen, because the largest temperature differences occur in calm conditions, both by day and by night. This was originally recognised by the Scottish physicist John Aitken (1839-1919, and more famous perhaps for his pioneering work on aerosols), who argued for forced ventilation through a thermometer screen[3]. Aspirated temperature measurements were hardly ever implemented until recent years, but improved technologies mean they are increasingly regarded as reference climate measurements, in the United States[4] and other countries, although, as yet, very few UK Met Office observing sites are equipped with aspirated sensors.

Ventilation is essential for rapid thermal exchange between the air, the thermometer screen and the enclosed temperature sensor itself, to try to ensure and maintain thermal equilibrium even as the air temperature fluctuates continuously. At low wind speeds, this is much less effective and the time taken for the thermometer screen to “catch up” with external air temperature changes can be quite long, as much as half an hour[5]. Further work[6] at Reading Observatory showed that this was improved to a couple of minutes for near-screen wind speeds of 2 m s⁻¹ or greater, but that for wind speeds less than this, lag times increased considerably. Because winds are often light or even calm at night, this effect is more likely to affect a night-time minimum temperature than a day-time maximum. Some maxima or minima may therefore still be under-recorded in a poorly ventilated screen, in a sheltered observing site or in light wind conditions. For temperature measurements made in screens, the response time of the screen is greater than that of the sensor – sometimes many times so in light winds: for aspirated temperature measurements, in contrast, the sensor response time alone is the determining factor.
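
The lag behaviour described above is essentially that of a first-order system: the screen temperature relaxes towards the air temperature with a time constant that lengthens as the ventilation weakens. The sketch below uses illustrative time constants, not values fitted in the studies cited:

```python
import numpy as np

def screen_temperature(t_air, tau, dt=1.0):
    """First-order lag: dT_screen/dt = (T_air - T_screen) / tau (tau in seconds)."""
    t_screen = np.empty_like(t_air, dtype=float)
    t_screen[0] = t_air[0]
    for i in range(1, t_air.size):
        t_screen[i] = t_screen[i - 1] + dt * (t_air[i - 1] - t_screen[i - 1]) / tau
    return t_screen

t = np.arange(0, 1800)                            # 30 minutes of 1-second steps
t_air = 30.0 + 2.0 * (np.abs(t - 900) < 150)      # a 5-minute, 2 degC warm fluctuation
well_ventilated = screen_temperature(t_air, tau=120.0)    # ~2-minute time constant
calm_conditions = screen_temperature(t_air, tau=1200.0)   # ~20-minute time constant
print(well_ventilated.max() - 30.0, calm_conditions.max() - 30.0)   # spike captured vs damped
```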

Figure 3: (left) Screen temperature (Tscreen) measured at Reading Observatory on 25th July 2019, and (right) screen temperature plotted against wind speed at 2 m (u2), using 5-min average values. The dashed red line marks Tscreen = 35 °C, and the dotted blue line Tscreen = 20 °C.

Looking at the measurements made at the well-instrumented Reading Observatory for Thursday 25 July 2019 (Figure 3), the wind speed at 2 m (u2) is well correlated with the screen temperature. For the times when Tscreen was greater than 35 °C, the median u2 was 2.3 m s⁻¹; in contrast, when Tscreen was less than 20 °C, the median u2 was 0.3 m s⁻¹. This shows that, although the daytime maximum was well ventilated, this is not true of the nocturnal temperature minimum, which will have been less reliably determined.

The actual moment of temperature maximum is a very local phenomenon, amongst other things depending on airflow over the site, positions of heat sources and soil characteristics, urban heat island effects and, most commonly, the presence of cloud. For example, on 10 August 2003, when Reading recorded its hottest day to date at 36.4 °C, cloud materialised at Reading just before the time of the maximum in air temperature, and probably prevented a greater temperature being reached[7]. Even for the Reading Observatory thermometer screen on 25 July 2019, which was moderately well ventilated, temperature fluctuations lasting a few minutes, as might well have been generated beneath the broken clouds which were present, would be damped out.

The variations in maximum temperatures across nearby sites probably experiencing similar conditions on 25 July are interesting to compare (Table 1). Differences in radiative environment between extensive tarmac (Heathrow) and bleached grass surfaces (Kew Gardens) are perhaps not as great as might appear, as both had identical maximum temperatures. On the other hand, the more open instrument enclosure at Teddington (NPL) probably contributed to a slightly lower maximum temperature there than at other London sites.

Table 1. Maximum temperatures reported on 25 July 2019.
Reading 36.3 °C (from automatic system: maximum thermometer in screen 36.0 °C)
Heathrow 37.9 °C
Northolt 37.6 °C
Kew Gardens 37.9 °C
St James’s Park 37.0 °C
Teddington 36.7 °C

The median of these is 37.3 °C, with an inter-quartile range of 1.05 °C, so there is no doubt that temperatures were consistently those of an extremely hot UK summer day. Local factors, however, are evidently hugely important in determining which site “wins” the maximum temperature record. We now know that the new record UK screen temperature of 38.7 °C occurred at the long-running climatological site at the Botanical Gardens in Cambridge. From the arguments above, whether the air temperature there was indeed greater than that at Faversham in August 2003 (where the screen then recorded 38.5 °C, and was in many respects seriously anomalous anyway[8]) is rather difficult to say – neither site provided simultaneous wind data at screen height, for example.
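
For the curious, the quoted median and spread can be reproduced directly from Table 1; the 1.05 °C inter-quartile range corresponds to linearly interpolated quartiles, as in numpy’s default percentile method:

```python
import numpy as np

t_max = np.array([36.3, 37.9, 37.6, 37.9, 37.0, 36.7])   # Table 1 maxima (degC)
q1, q3 = np.percentile(t_max, [25, 75])                   # linearly interpolated quartiles
print(np.median(t_max), round(q3 - q1, 2))                # 37.3 and 1.05
```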

An extreme “record” screen temperature value at any one site may consequently be of only limited quantitative usefulness, given local variability and inherent limitations in the measurement, although of course nothing here regarding the details of local measurements changes the robust result that, globally, temperatures are rising. The maximum temperature continues to be of remarkably widespread interest, even if it isn’t well appreciated how it arises, how reliably it can be measured, and that – if only the newspaper headline writers knew it – it could well be platinum rather than mercury which yields it.

References:

[1]  World Meteorological Organization (WMO), 2014: WMO No.8 – Guide to Meteorological Instruments and Methods of Observation (CIMO guide) (Updated version, May 2017), 1139 pp.

[2] R.G. Harrison, 2010. Natural ventilation effects on temperatures within Stevenson screens. Q. J. Royal Meteorol. Soc. 136: 253–259. DOI:10.1002/qj.537

[3] J. Aitken, 1884. Thermometer screens. Proc R. Soc. Edinburgh 12:667.

[4] H.J. Diamond, and Coauthors, 2013: U.S. Climate Reference Network after One Decade of Operations: Status and Assessment. Bull. Amer. Meteorol. Soc., 94: 485-498. https://doi.org/10.1175/BAMS-D-12-00170.1

[5] D. Bryant, 1968. An investigation into the response of thermometer screens – The effect of wind speed on the lag time. Meteorol. Mag. 97:183–186

[6] R.G. Harrison, 2011. Lag-time effects on a naturally ventilated large thermometer screen. Q. J. Royal Meteorol. Soc. 137: 402–408. DOI:10.1002/qj.745

[7] E. Black, M. Blackburn, G. Harrison, B. Hoskins and J. Methven, 2004. Factors contributing to the summer 2003 European heatwave. Weather 59, 8:217-223

[8] S.D. Burt and P. Eden, 2004. The August 2003 heatwave in the United Kingdom: Part 2 – The hottest sites. Weather 59, 9:239-246

Posted in Climate, Measurements and instrumentation

Why was there a decadal increase in summer heat waves over China across the mid-1990s?

By: Buwen Dong

Heat waves (HWs), commonly defined as prolonged periods of excessively hot weather, are a distinctive type of high-temperature extreme (Perkins 2015). These high-temperature extremes can lead to severe damage to human society and ecosystems. In our studies, we focus on decadal changes in HWs over China and consider three independent types (a simple detection sketch follows the definitions):

Compound HW—at least three consecutive days with simultaneous hot days and hot nights (Tmax ≥ 90th percentile and Tmin ≥ 90th percentile).

Daytime HW—at least three consecutive hot days (only Tmax ≥ 90th percentile), without consecutive hot nights.

Nighttime HW—at least three consecutive hot nights (only Tmin ≥ 90th percentile), without consecutive hot days.
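
The three definitions can be turned into a simple day-level detection routine. The sketch below is one simplified reading of them, not necessarily the exact event bookkeeping used by Su and Dong (2019a):

```python
import numpy as np

def run_mask(flags, min_len=3):
    """Mark days belonging to a run of at least `min_len` consecutive True values."""
    out = np.zeros(flags.size, dtype=bool)
    start = None
    for i, f in enumerate(np.append(flags, False)):   # trailing False closes a final run
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_len:
                out[start:i] = True
            start = None
    return out

def classify_heatwave_days(tmax, tmin, tmax_p90, tmin_p90):
    """Boolean day masks for compound, daytime and nighttime heat waves."""
    hot_day, hot_night = tmax >= tmax_p90, tmin >= tmin_p90
    compound = run_mask(hot_day & hot_night)     # hot days and hot nights together
    daytime = run_mask(hot_day) & ~compound      # hot-day runs outside compound events
    nighttime = run_mask(hot_night) & ~compound  # hot-night runs outside compound events
    return compound, daytime, nighttime
```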

Illustrated in Figure 1 are the distribution of the 753 stations in the China station dataset and the time evolution of the area-averaged frequency and intensity of the compound, daytime, and nighttime HWs over different regions of China. One of the most important features is the abrupt decadal change across the mid-1990s, from the early period (EP) of 1964-1981 to the present day (PD) of 1994-2011, characterized by increases in frequency and intensity (Su and Dong 2019a).

Figure 1: (Top) Distributions of 753 stations in China station dataset. The dots in green, orange and purple represent the sub-regions of North-eastern China (NEC), South-eastern China (SEC) and Western China (WC), respectively. Time series of area-averaged (left) frequency (events per year) and (right) intensity (°C) of (a)–(b) compound, (c)–(d) daytime, and (e)–(f) nighttime HWs in extended summer over the whole mainland of China (black solid lines), North-eastern China (blue dashed lines), South-eastern China (orange dashed lines), and Western China (green dashed lines). Black dashed lines denote the time means of area-averaged indicators. Red solid lines represent the decadal variations of area-averaged indicators, obtained by a 9-yr running average. The black solid and dashed, as well as the red solid lines are for the left Y axis, while the dashed blue, orange, and green lines are for the right Y axis.

What has caused these rapid decadal changes in HW properties over China across the mid-1990s? A set of numerical experiments using an atmosphere–ocean–mixed-layer coupled model (MetUM-GOML1; Hirons et al. 2015) has been performed in a study by Su and Dong (2019a) to understand the relative importance of changes in greenhouse gas (GHG) concentrations and anthropogenic aerosol (AA) precursor emissions.

The area-averaged changes in frequency and intensity of the three types of HWs over all of China and the three sub-regions, for both observations and model experiments, are shown in Figure 2. Quantitatively, the changes of the three types of HWs in response to ALL forcing changes simulated by the model show some agreement with observations, not only over China as a whole but also over the individual sub-regions.

Figure 2: Area-averaged changes in (left) frequency (events per year), (centre) intensity (°C), and (right) spatial extent (km2) of (a)–(c) compound, (d)–(f) daytime, and (g)–(i) nighttime HWs over all of China, NEC, SEC, and WC in observations and simulations forced by ALL forcing, GHG forcing, and AA forcing. The error bars indicate the 90% confidence intervals based on a two-tailed Student’s t test.

The results above indicate that the observed decadal changes in the frequency and intensity of compound, daytime, and nighttime HWs over China across the mid-1990s are primarily forced by changes in anthropogenic forcings. The impacts of GHG changes and those of AA changes differ in many respects. GHG changes contribute dominantly to the increases in all aspects of the three types of HWs over most regions of China, while AA changes significantly increase the frequency and intensity of daytime HWs over NEC but decrease them over SEC.

Looking forward over the next few decades, greenhouse gas concentrations will continue to rise and anthropogenic aerosol precursor emissions over China will decline. Projected future changes of the three types of HWs over China in the mid-21st century relative to the present day are stronger than their decadal changes across the mid-1990s (Su and Dong 2019b). Notably, projected future changes relative to PD in the frequency of compound HWs and in all three aspects of daytime HWs are 2–4 times the corresponding observed decadal changes across the mid-1990s. The future increases in the duration of compound HWs and in the frequency and duration of nighttime HWs are 20–80% larger than their decadal changes across the mid-1990s. These results suggest that people will encounter much stronger changes in HWs over China in the future than they have experienced across the mid-1990s, and that China will face a challenge in taking adaptation measures to cope with the projected increases in HW frequency, intensity and duration.

References:

Hirons, L., N. Klingaman, and S. Woolnough, 2015: MetUM-GOML: A near-globally coupled atmosphere–ocean-mixed-layer model. Geosci. Model Dev., 8, 363–379, https://doi.org/10.5194/gmd-8-363-2015

Perkins, S. E., 2015: A review on the scientific understanding of heatwaves—Their measurement, driving mechanisms, and changes at the global scale. Atmos. Res., 164–165, 242–267, https://doi.org/10.1016/j.atmosres.2015.05.014.

Su, Q. and B. Dong, 2019a: Recent decadal changes in heat waves over China: drivers and mechanisms. J. Clim., 32, 4215-4234. doi: https://doi.org/10.1175/JCLI-D-18-0479.1

Su, Q. and B. Dong, 2019b: Projected near-term changes in three types of heat waves over China under RCP4.5. Clim. Dyn., 53, doi: https://doi.org/10.1007/s00382-019-04743-y

Posted in Aerosols, China, Climate, Climate change, Climate modelling

Making the best use of HPC

By: Grenville Lister

High performance computing (HPC) is changing – there will be a new UK national service in early 2020 (and a period of time with no national service while the new platform is installed) – and the medium to longer-term future is more uncertain than at any time in the last few decades. Much of the community is planning for exascale computing, with associated challenges in both the utilisation of storage and programmability. However, for all the changes ahead, a key issue is managing the resources we have, and will have. Here I take the opportunity to discuss this issue, drawing on my experiences with NERC HPC, but with a take-home message that should apply to other busy resource pools (e.g. departmental or institutional computing).

We usually think of compute resource in terms of node-hours – you’d generally pay for use of whole nodes, even if whole nodes aren’t actually being used (the bit of your node left unused isn’t accessible to others, hence you foot the cost). On day one of a new machine, it will be capable of delivering a fixed number of these given its projected lifetime; for ARCHER (the UK National HPC service), that number was approximately 212 million node-hours (4920 nodes for 24 hours per day, for 360 days per year, for 5 years). On day 2 and each subsequent day, that number went down by 118,080 – as of July 11th 2019, ARCHER had only 25 million left. Unfortunately, node-hours disappear whether or not they are used for computation (the energy bill is lower if they’re not computing). The same goes for resource allocations – we effectively have a NERC-ARCHER for a year at a time, since resources are allocated yearly with the reset switch thrown on March 31st; a block of ARCHER node-hours allocated to a project starts to evaporate on April 1st. Obvious really, but sometimes overlooked by those of us running numerical simulations under the typical yearly resource allocation cycle. This argument is a little oversimplified; nevertheless, expecting to use large parts of an allocation at the last minute may be unrealistic and/or not possible at all – ultimately, the node-hours just won’t be there.
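
Those numbers are straightforward to reproduce:

```python
nodes = 4920
hours_per_day = 24
days_per_year = 360    # the accounting convention used above
years = 5

daily_burn = nodes * hours_per_day                     # 118,080 node-hours gone per day
lifetime_total = daily_burn * days_per_year * years    # 212,544,000, i.e. ~212 million
print(f"{daily_burn:,} node-hours per day; {lifetime_total:,} over the service lifetime")
```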

Different HPC systems try to ensure an equal spread of usage over time to avoid a mad rush at the end of an allocation period, either by imposing a use-it-or-lose-it policy in conjunction with periodic (quarterly or semi-annual) node-hour sub-distributions or by use of a clever job scheduler. None of us like having restrictions placed upon us by HPC service providers or administrators, especially when circumstances beyond our control cause delays or otherwise prevent HPC usage as intended, but managing an even burn rate of nodes ensures that users are able to consume their full resource quota.

Efficient use of storage space raises, in some sense, orthogonal concerns. Space doesn’t disappear over time. It fills up of course, but the user generally has the option to recover it, and whereas node-hours are available to all until used, storage space is reserved at the moment of allocation and can (and does) sit empty for significant lengths of time. This is less of a problem on a system such as ARCHER, where there is an understanding that data held on disc is only ever ephemeral and managing space is easy. On JASMIN (a super-data-cluster based at the Rutherford Appleton Laboratory), for example, where group workspaces are relatively long lived, the challenge is to request and manage an appropriate volume, bearing in mind that several storage media may be available to support data storage on different time scales, with particular emphasis on the use of Elastic Tape for the medium term.

We in NERC do a pretty good job of consuming HPC resources, both node-hours and petabytes. I am confident that, with a community cognizant of resourcing challenges and their efficient use, we shall continue to do so as new technologies emerge. Speaking of new technologies: the major event at ARCHER in February 2020 will be its withdrawal from service, and in May 2020 ARCHER’s successor will commence operation. We shall have a whole lot more node-hours to play with to generate a whole lot more data – a scenario under which we anticipate that management of resources will be increasingly important.

Posted in High performance computing, Numerical modelling