Effect of the North Atlantic Ocean on the Northeast Asian climate: variability and predictability

By: Paul-Arthur Monerie

North East Asia has warmed substantially since the mid-1990s, leading to an increase in temperature extremes and to societal impacts (Dong et al., 2016). Predicting the variability of the North East Asian climate is therefore of great interest, since it would help populations to anticipate strong climatic events.

Figure 1: Anomaly correlation coefficient skill score (ACC) for SAT in DePreSys3 hindcasts (using NCEP as observations) in extended summer (JJAS) for year 2–5 lead-times. The ACC is calculated after a linear trend is removed at each grid point. Stippling indicates that the ACC is different from zero at the 95% confidence level according to a Monte-Carlo procedure. Figure from Monerie et al. (2017).

Climate models allow us to simulate the climate and to project its short-term to long-term evolution. We used the decadal prediction system DePreSys3 (Dunstone et al., 2011) and assessed how well the model is able to predict, retrospectively, the observed temperature up to 5 years ahead (Monerie et al., 2017). The correlation between the observed and the simulated temperature (i.e. the anomaly correlation coefficient) shows that the climate model satisfactorily reproduces the observed temperature in many regions, including North East Asia and the North Atlantic Ocean (Fig. 1).
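For a single grid point, the skill measure shown in Fig. 1 can be sketched in a few lines (a minimal illustration, not the actual DePreSys3 analysis code; the detrending and the Monte-Carlo significance test mirror the procedure described in the caption, and the function names are my own):

```python
import numpy as np

def detrend(x):
    """Remove a least-squares linear trend from a 1-D time series."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def acc(obs, fcst):
    """Anomaly correlation coefficient between detrended series."""
    return np.corrcoef(detrend(obs), detrend(fcst))[0, 1]

def acc_significant(obs, fcst, n_iter=1000, alpha=0.05, seed=0):
    """Monte-Carlo test of whether the ACC differs from zero:
    shuffling the forecast years builds a null distribution."""
    rng = np.random.default_rng(seed)
    observed = acc(obs, fcst)
    null = [acc(obs, rng.permutation(fcst)) for _ in range(n_iter)]
    p = np.mean(np.abs(null) >= abs(observed))
    return observed, p < alpha
```

Removing the linear trend first ensures that the score rewards prediction of year-to-year and decadal variability rather than the shared warming trend.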

Further analyses have highlighted a statistical co-variability between the temperature over East Asia and the temperature over the North Atlantic Ocean, with a positive phase of North Atlantic Multidecadal Variability (i.e. the low-frequency variability of North Atlantic sea surface temperature) associated with a warming over North East Asia, in agreement with Lin et al. (2016) and Sun et al. (2019). Prediction systems have good skill in retrospectively predicting the temperature over the North Atlantic Ocean up to 5 years ahead (García-Serrano et al., 2015), and we thus propose that such climate models and experimental protocols could be useful for predicting the low-frequency variability of the temperature over East Asia.

Figure 2: Impact of AMV on (top panels) surface temperature (°C), in (left) JJA and (right) SON. Stippling indicates that changes are significantly different from zero according to a Student’s t-test at the 95% confidence level.

The mechanisms linking the North Atlantic Ocean to North East Asia were then assessed by performing a set of sensitivity experiments, following Boer et al. (2016), and using the MetUM-GOML2 climate model (Hirons et al., 2015). We confirm that, in a climate model, a warming of the North Atlantic Ocean is associated with an increase in temperature over North East Asia (Fig. 2). We identify two mechanisms that link the North Atlantic Ocean to East Asia. First, the warming of the Atlantic Ocean is associated with a perturbation of the circumglobal teleconnection pattern (i.e. the atmospheric circulation over the Northern Hemisphere) (Ding and Wang, 2005; Beverley et al., 2019). Second, the Atlantic Ocean is able to force part of the variability of the Pacific Ocean, leading to an excess of precipitation over the Philippines and to a Rossby wave that propagates over the western Pacific Ocean. Both mechanisms are able to impact East Asia, by increasing heat advection and incoming surface shortwave radiation locally.

Our ongoing results show that we might be able to improve our ability to predict the climate over East Asia by improving our knowledge of the impacts and variability of the North Atlantic Ocean.

References

Beverley, J.D., S.J. Woolnough, L.H. Baker, S.J. Johnson, and A. Weisheimer, 2019: The northern hemisphere circumglobal teleconnection in a seasonal forecast model and its relationship to European summer forecast skill. Climate Dynamics, 52, 3759. https://doi.org/10.1007/s00382-018-4371-4

Boer, G. J., and Coauthors, 2016: The Decadal Climate Prediction Project (DCPP) contribution to CMIP6. Geoscientific Model Development, 9(10), 3751–3777. https://doi.org/10.5194/gmd-9-3751-2016

Ding, Q., and B. Wang, 2005: Circumglobal teleconnection in the Northern Hemisphere summer. Journal of Climate, 18, 3483–3505. https://doi.org/10.1175/JCLI3473.1

Dong, B., R.T. Sutton, W. Chen, X. Liu, R. Lu, and Y. Sun, 2016: Abrupt summer warming and changes in temperature extremes over Northeast Asia since the mid-1990s: Drivers and physical processes. Advances in Atmospheric Sciences, 33(9), 1005–1023. https://doi.org/10.1007/s00376-016-5247-3

Dunstone, N.J., D.M. Smith, and R. Eade, 2011: Multi-year predictability of the tropical Atlantic atmosphere driven by the high latitude North Atlantic Ocean. Geophysical Research Letters, 38(14). https://doi.org/10.1029/2011GL047949

García-Serrano, J., V. Guemas, and F.J. Doblas-Reyes, 2015: Added-value from initialization in predictions of Atlantic multi-decadal variability. Climate Dynamics, 44(9–10), 2539–2555. https://doi.org/10.1007/s00382-014-2370-7

Hirons, L.C., N.P. Klingaman, and S.J. Woolnough, 2015: MetUM-GOML: a near-globally coupled atmosphere–ocean-mixed-layer model. Geoscientific Model Development, 8, 363–379. https://doi.org/10.5194/gmd-8-363-2015

Lin, J.-S., B. Wu, and T.-J. Zhou, 2016: Is the interdecadal circumglobal teleconnection pattern excited by the Atlantic multidecadal Oscillation? Atmospheric and Oceanic Science Letters, 9(6), 451–457. https://doi.org/10.1080/16742834.2016.1233800

Monerie, P.-A., J. Robson, B. Dong, and N. Dunstone, 2017: A role of the Atlantic Ocean in predicting summer surface air temperature over North East Asia? Climate Dynamics. https://doi.org/10.1007/s00382-017-3935-z

Sun, X., S. Li, X. Hong, and R. Lu, 2019: Simulated Influence of the Atlantic Multidecadal Oscillation on Summer Eurasian Nonuniform Warming since the Mid-1990s. Advances in Atmospheric Sciences, 36(8), 811–822. https://doi.org/10.1007/s00376-019-8169-z

Posted in Climate, Climate modelling, Predictability

It’s Hotter Than A Ginger Mill In Hades

By: Giles Harrison and Stephen Burt

Or so they sometimes say in the south of the United States. But without a reference ginger mill or ready access to Hades, how do we know how hot it really is, and how much can we trust the measurements of the record temperatures we had in July? The basics of air temperature measurement are simple enough – put a thermometer in the shade and keep air moving past it – but the details of doing this matter a lot. And perhaps in all the flurry about records, this detail isn’t so widely appreciated. For example, how many times have you heard a radio phone-in programme asking listeners for car or garden temperature readings to compare, or a tennis commentator mentioning the temperature on centre court at Wimbledon? For a thermometer anywhere in direct sunlight, sheltered from the wind, its temperature is just that of a hot thing in the sun. It’s highly unlikely to be a reliable air temperature.

Meteorologists have worked on this problem for a long time. The first liquid-in-glass thermometers appeared in Renaissance Italy in the 1640s, gradually becoming more reliable and consistent during the eighteenth century. Temperature measurements slowly became more widespread in Europe as thermometers improved, and became particularly well organised internationally in the eighteenth and nineteenth centuries. Some of the earliest reliable air temperature measurements began in national observatories making astronomical or geophysical measurements for which the temperature was merely needed as a correction factor, and many of these early “temperature series” still continue. The needs of modern climate science have made understanding these early meteorological technologies, and the exposure of the instruments, much more important.

Figure 1: Thermometer screens. (Left) Stevenson-type screen at the Reading University Atmospheric Observatory. (Right) Beehive screen at the meteorological site of the Universitat de les Illes Balears, Palma. Both sites also have nearby wind measurements.

To provide protection from direct sunlight, long-wave (terrestrial) radiation and other demanding environmental factors such as rain, while retaining airflow, thermometers are usually placed within a semi-porous shelter or shield, often referred to as a thermometer screen. Screens are almost always made from white material (externally at least) to reflect sunlight:  many different designs are in use internationally. At a meteorological site they should be positioned for good airflow and arranged so that the hinged door to read the thermometer opens on the shady side. In later versions of the widely adopted thermometer screen originally designed by the lighthouse engineer Thomas Stevenson (1818-1887, and father of Robert Louis Stevenson), double-louvred slats are used to form the sides of the screen, to maximise thermal contact with the air passing through. Smaller cylindrical “beehive” screens based on the same principle containing smaller electronic sensors are now also widely used (figure 1).

The accuracy of the air temperature recorded by a screen depends on three main factors: how closely the in-screen temperature follows the air temperature, how quickly the sensor responds to changes in temperature, and of course the accuracy of the sensor used. A meteorological thermometer is typically a liquid-in-glass device (e.g. a mercury thermometer), or an electronic sensor, such as a platinum resistance thermometer. With their lower mass, the latter can respond more quickly than the former, so the World Meteorological Organisation (WMO) sets out observing guidelines on sensor response time, mandating that temperature measurements be averaged over 60 seconds. This helps ensure comparability of records between different instrument types (and thus historical records) and avoid spurious very short-duration maximum and minimum temperatures. Thermometers (whether liquid-in-glass or electronic) are calibrated by comparison against reference devices in laboratory experiments, and the necessary corrections derived. With regular calibration checks to eliminate effects of drift, and many other precautions, measurements accurate to 0.1 °C become possible.
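The effect of the mandated 60-second average is easy to see in a sketch (the numbers are illustrative: 1 Hz samples and an artificial 2-second warm spike, not real observations):

```python
import numpy as np

def wmo_running_mean(temps_1hz, window_s=60):
    """60-second running mean of 1 Hz temperature samples, as WMO
    guidelines require for reported air temperatures."""
    kernel = np.ones(window_s) / window_s
    return np.convolve(temps_1hz, kernel, mode="valid")

# Five minutes of steady 30 degC air with a 2-second spike of +1.5 degC
t = np.full(300, 30.0)
t[150:152] += 1.5
smoothed = wmo_running_mean(t)
print(t.max(), round(smoothed.max(), 2))   # 31.5 30.05
```

A fast electronic sensor reporting instantaneous values would record a maximum of 31.5 °C, whereas the one-minute average trims the spurious spike to 30.05 °C: this is why averaging time matters when comparing modern platinum resistance records with older liquid-in-glass ones.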

Figure 2: Temperature difference (Tdiff) between a thermometer in open air and screen temperature (Tscrn) at the Reading University Atmospheric Observatory, plotted against (left) screen temperature and (right) wind speed at 2m (u2), which is approximately at the screen height. (Modified from [2]).

The question of how closely the screen temperature represents the air temperature is much more difficult, as assessing it perfectly would require the true air temperature itself. Comparison against a reference temperature better than that of the screen is all that can be done, and the precision experiments necessary are difficult to maintain for anything other than short periods. Comparisons (or “trials”) between one design of screen and another are more common, and tend to be undertaken by national meteorological services. These of course only show how to account for changes in screen design, and do not address the fundamental question of how well air temperature itself is determined. Nevertheless, from the few investigations available, WMO states[1] that worst-case differences between naturally ventilated thermometer screens, artificially-ventilated (aspirated) sensors and air temperature lie between 2.5 °C and -0.5 °C. With temperatures commonly reported to 0.1 °C, this seems astonishingly large! However, in a year-long study[2] at Reading University Atmospheric Observatory using a naturally ventilated screen, with a careful procedure to overcome inevitable sensor breakages, differences as large as this were indeed occasionally observed, skewed towards the same warm screen bias indicated by WMO (Figure 2). These large differences were nevertheless exceptional, as 90% of the temperature differences were well within ± 0.5 °C. Figure 2 shows that the key to reducing the uncertainties is the wind flow around and through the screen, because the largest temperature differences occur in calm conditions, both by day and by night. This was originally recognised by the Scottish physicist John Aitken (1839-1919, perhaps more famous for his pioneering work on aerosols), who argued for forced ventilation through a thermometer screen[3].
Aspirated temperature measurements were hardly ever implemented until recent years, but improved technologies mean they are increasingly regarded as reference climate measurements, in the United States[4] and other countries, although, as yet, very few UK Met Office observing sites are equipped with aspirated sensors.

Ventilation is essential for rapid thermal exchange between the air, the thermometer screen and the enclosed temperature sensor itself, to maintain thermal equilibrium even as the air temperature fluctuates continuously. At low wind speeds, this exchange is much less effective and the time taken for the thermometer screen to “catch up” with external air temperature changes can be quite long, as much as half an hour[5]. Further work[6] at Reading Observatory showed that this improved to a couple of minutes for near-screen wind speeds of 2 ms-1 or greater, but that for wind speeds less than this, lag times increased considerably. Because winds are often light or even calm at night, this effect is more likely to affect a night-time minimum temperature than a day-time maximum. Some maxima or minima may therefore still be under-recorded in a poorly ventilated screen, at a sheltered observing site or in light wind conditions. For temperature measurements made in screens, the response time of the screen is greater than that of the sensor – sometimes many times greater in light winds; for aspirated temperature measurements, in contrast, the sensor response time alone is the determining factor.
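The screen lag can be sketched as a first-order response with a wind-dependent time constant. The time constants below are illustrative round numbers taken from the lags quoted above (a couple of minutes when ventilated, about half an hour in near-calm), not fitted values:

```python
import numpy as np

def screen_response(t_air, tau_s, dt=1.0):
    """Integrate the first-order lag dT_screen/dt = (T_air - T_screen)/tau."""
    t_scr = np.empty_like(t_air)
    t_scr[0] = t_air[0]
    for i in range(1, len(t_air)):
        t_scr[i] = t_scr[i - 1] + dt * (t_air[i - 1] - t_scr[i - 1]) / tau_s
    return t_scr

# A sudden 1 degC step in air temperature, sampled at 1 s for 30 minutes
air = np.ones(1800)
air[0] = 0.0
windy = screen_response(air, tau_s=120.0)   # well-ventilated: ~2 min lag
calm = screen_response(air, tau_s=1800.0)   # near-calm: ~30 min lag

# Five minutes after the step, the ventilated screen has nearly caught up,
# while the poorly ventilated one has barely responded.
print(round(windy[300], 2), round(calm[300], 2))
```

In the calm case the screen records only a small fraction of the true change after five minutes, which is exactly why short-lived nocturnal minima can be under-recorded.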

Figure 3: (left) Screen temperature (Tscreen) measured at Reading Observatory on 25th July 2019, and (right) screen temperature plotted against wind speed at 2 m (u2), using 5 min average values. The dashed red line marks Tscreen = 35 °C, and the dotted blue line Tscreen = 20 °C.

Looking at the measurements made at the well-instrumented Reading Observatory for Thursday 25 July 2019 (Figure 3), the wind speed at 2 m (u2) is well correlated with the screen temperature. For the times when Tscreen was greater than 35 °C, the median u2 was 2.3 ms-1: in contrast, when Tscreen was less than 20 °C, the median u2 was 0.3 ms-1. This shows that, although the daytime maximum was well ventilated, this is not true of the nocturnal temperature minimum, which will have been less reliably determined.

The actual moment of temperature maximum is a very local phenomenon, amongst other things depending on airflow over the site, positions of heat sources and soil characteristics, urban heat island effects and, most commonly, the presence of cloud. For example, on 10 August 2003, when Reading recorded its hottest day to date at 36.4 °C, cloud materialised at Reading just before the time of the maximum in air temperature, and probably prevented a greater temperature being reached[7]. Even for the Reading Observatory thermometer screen on 25 July 2019, which was moderately well ventilated, temperature fluctuations lasting a few minutes, as might well have been generated beneath the broken clouds which were present, would be damped out.

The variations in maximum temperatures across nearby sites probably experiencing similar conditions on 25 July are interesting to compare (Table 1). Differences in radiative environment between extensive tarmac (Heathrow) and bleached grass surfaces (Kew Gardens) are perhaps not as great as might appear, as both had identical maximum temperatures. On the other hand, the more open instrument enclosure at Teddington (NPL) probably contributed to a slightly lower maximum temperature there than at other London sites.

Table 1. Maximum temperatures reported on 25 July 2019.
Reading 36.3 °C (from automatic system: maximum thermometer in screen 36.0 °C)
Heathrow 37.9 °C
Northolt 37.6 °C
Kew Gardens 37.9 °C
St James’s Park 37.0 °C
Teddington 36.7 °C

The median of these is 37.3 °C, with an inter-quartile range of 1.05 °C, so there is no doubt that temperatures were consistently that of an extremely hot UK summer day. Local factors, however, are evidently hugely important in determining which site “wins” the maximum temperature record. We now know that the new record UK screen temperature of 38.7 °C occurred at the long-running climatological site at the Botanical Gardens in Cambridge. From the arguments above, whether the air temperature there was indeed greater than that at Faversham in August 2003 (where the screen then recorded 38.5 °C, and was in many respects seriously anomalous anyway[8]) is rather difficult to say – neither site provided simultaneous wind data at screen height, for example.
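The summary statistics quoted above can be reproduced directly from Table 1 (using linear-interpolation percentiles, which is NumPy's default behaviour):

```python
import numpy as np

# Maximum temperatures reported on 25 July 2019 (Table 1),
# using the automatic-system value for Reading
tmax = {"Reading": 36.3, "Heathrow": 37.9, "Northolt": 37.6,
        "Kew Gardens": 37.9, "St James's Park": 37.0, "Teddington": 36.7}

vals = np.array(sorted(tmax.values()))
median = np.median(vals)
iqr = np.percentile(vals, 75) - np.percentile(vals, 25)
print(median, round(iqr, 2))   # 37.3 1.05
```

The small inter-quartile spread relative to the site-to-site range is what supports the statement that the day was consistently extremely hot, even though no single site value is definitive.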

An extreme “record” screen temperature at any one site may consequently be of only limited quantitative usefulness, given local variability and the inherent limitations of the measurement, although of course nothing here regarding the details of local measurements changes the robust result that, globally, temperatures are rising. The maximum temperature continues to be of remarkably widespread interest, even if it isn’t well appreciated how it arises, how reliably it can be measured, and whether – if only the newspaper headline writers knew it – it could well be platinum rather than mercury which yields it.

References:

[1]  World Meteorological Organization (WMO), 2014: WMO No.8 – Guide to Meteorological Instruments and Methods of Observation (CIMO guide) (Updated version, May 2017), 1139 pp.

[2] R.G. Harrison, 2010. Natural ventilation effects on temperatures within Stevenson screens. Q. J. Royal Meteorol. Soc. 136: 253–259. DOI:10.1002/qj.537

[3] J. Aitken, 1884. Thermometer screens. Proc R. Soc. Edinburgh 12:667.

[4] H.J. Diamond, and Coauthors, 2013: U.S. Climate Reference Network after One Decade of Operations: Status and Assessment. Bull. Amer. Meteorol. Soc., 94: 485-498. https://doi.org/10.1175/BAMS-D-12-00170.1

[5] D. Bryant, 1968. An investigation into the response of thermometer screens – The effect of wind speed on the lag time. Meteorol. Mag. 97:183–186

[6] R.G. Harrison, 2011. Lag-time effects on a naturally ventilated large thermometer screen. Q. J. Royal Meteorol. Soc. 137: 402–408. DOI:10.1002/qj.745

[7] E. Black, M. Blackburn, G. Harrison, B. Hoskins and J. Methven, 2004: Factors contributing to the summer 2003 European heatwave. Weather, 59(8), 217–223

[8] S.D. Burt and P. Eden, 2004: The August 2003 heatwave in the United Kingdom: Part 2 – The hottest sites. Weather, 59(9), 239–246

Posted in Climate, Measurements and instrumentation

Why was there a decadal increase in summer heat waves over China across the mid-1990s?

By: Buwen Dong

Heat waves (HWs), commonly defined as prolonged periods of excessive hot weather, are a distinctive type of high-temperature extreme (Perkins 2015). These high-temperature extremes can lead to severe damage to human society and ecosystems. In our studies, we focus on decadal changes in the HWs over China and consider three independent types:

Compound HW—at least three consecutive days with simultaneous hot days and hot nights (Tmax ≥ 90th percentile and Tmin ≥ 90th percentile).

Daytime HW—at least three consecutive hot days (only Tmax ≥ 90th percentile), without consecutive hot nights.

Nighttime HW—at least three consecutive hot nights (only Tmin ≥ 90th percentile), without consecutive hot days.
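The three definitions above can be sketched as a simple day-classification routine. This is a simplification: operationally the 90th-percentile thresholds are computed per calendar day over a base period, whereas here a single percentile over the whole series is used for brevity:

```python
import numpy as np

def heatwave_days(tmax, tmin, min_run=3):
    """Flag days belonging to compound, daytime-only and nighttime-only
    heat waves, following the three definitions above (simplified to a
    single 90th-percentile threshold over the whole record)."""
    hot_day = tmax >= np.percentile(tmax, 90)
    hot_night = tmin >= np.percentile(tmin, 90)

    def runs(mask, n):
        # Keep only days inside runs of at least n consecutive True values
        out = np.zeros_like(mask)
        i = 0
        while i < len(mask):
            if mask[i]:
                j = i
                while j < len(mask) and mask[j]:
                    j += 1
                if j - i >= n:
                    out[i:j] = True
                i = j
            else:
                i += 1
        return out

    return (runs(hot_day & hot_night, min_run),    # compound HW days
            runs(hot_day & ~hot_night, min_run),   # daytime HW days
            runs(hot_night & ~hot_day, min_run))   # nighttime HW days
```

Frequency is then the number of flagged events per year, and intensity the mean exceedance of the threshold on flagged days.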

Figure 1 shows the distribution of the 753 stations in the China station dataset and the time evolution of the area-averaged frequency and intensity of compound, daytime, and nighttime HWs over different regions of China. One of the most important features is the abrupt decadal change across the mid-1990s, from the early period (EP) of 1964-1981 to the present day (PD) of 1994-2011, characterized by increases in frequency and intensity (Su and Dong 2019a).

Figure 1: (Top) Distribution of the 753 stations in the China station dataset. The dots in green, orange and purple represent the sub-regions of North-eastern China (NEC), South-eastern China (SEC) and Western China (WC), respectively. Time series of area-averaged (left) frequency (events per year) and (right) intensity (°C) of (a)–(b) compound, (c)–(d) daytime, and (e)–(f) nighttime HWs in extended summer over the whole mainland of China (black solid lines), North-eastern China (blue dashed lines), South-eastern China (orange dashed lines), and Western China (green dashed lines). Black dashed lines denote the time means of area-averaged indicators. Red solid lines represent the decadal variations of area-averaged indicators, obtained by a 9-yr running average. The black solid and dashed, as well as the red solid lines are for the left Y axis, while the dashed blue, orange, and green lines are for the right Y axis.

What has caused these rapid decadal changes in HW properties across the mid-1990s over China? A set of numerical experiments using an atmosphere–ocean–mixed layer coupled model (MetUM-GOML1; Hirons et al. 2015) have been performed in a study by Su and Dong (2019a) to understand the relative importance of changes in greenhouse gas (GHG) concentrations and anthropogenic aerosol (AA) precursor emissions.

The area-averaged changes in frequency and intensity of the three types of HWs over all of China and the three sub-regions, for both observations and model experiments, are shown in Figure 2. Quantitatively, the changes of the three types of HWs in response to ALL forcing changes simulated by the model are in reasonable agreement with observations, not only over China as a whole but also over the individual sub-regions.

Figure 2: Area-averaged changes in (left) frequency (events per year), (centre) intensity (°C), and (right) spatial extent (km2) of (a)–(c) compound, (d)–(f) daytime, and (g)–(i) nighttime HWs over all of China, NEC, SEC, and WC in observations and simulations forced by ALL forcing, GHG forcing, and AA forcing. The error bars indicate the 90% confidence intervals based on a two-tailed Student’s t test.

The results above indicate that the observed decadal changes in the frequency and intensity of compound, daytime, and nighttime HWs over China across the mid-1990s are primarily forced by changes in anthropogenic forcings. The impacts of GHG changes and those of AA changes differ in many respects. GHG changes are the dominant contributor to the increases in all aspects of the three types of HWs over most regions of China, while AA changes significantly increase the frequency and intensity of daytime HWs over NEC but decrease them over SEC.

Looking forward to the next few decades, greenhouse gas concentrations will continue to rise while anthropogenic aerosol precursor emissions over China decline. Projected changes of the three types of HWs over China in the mid-21st century relative to the present day are stronger than their decadal changes across the mid-1990s (Su and Dong 2019b). Notably, projected future changes relative to PD in the frequency of compound HWs and in all three aspects of daytime HWs are 2–4 times the corresponding decadal changes across the mid-1990s in observations. The future increases in the duration of compound HWs and in the frequency and duration of nighttime HWs are 20–80% larger than their decadal changes across the mid-1990s. These results suggest that people will experience much more severe changes in HWs over China in the future than they did across the mid-1990s, and that China will face the challenge of taking adaptation measures to cope with the projected increases in HW frequency, intensity and duration.

References:

Hirons, L., N. Klingaman, and S. Woolnough, 2015: MetUM-GOML: A near-globally coupled atmosphere–ocean-mixed-layer model. Geosci. Model Dev., 8, 363–379, https://doi.org/10.5194/gmd-8-363-2015

Perkins, S. E., 2015: A review on the scientific understanding of heatwaves—Their measurement, driving mechanisms, and changes at the global scale. Atmos. Res., 164–165, 242–267, https://doi.org/10.1016/j.atmosres.2015.05.014.

Su, Q. and B. Dong, 2019a: Recent decadal changes in heat waves over China: drivers and mechanisms. J. Clim., 32, 4215–4234. https://doi.org/10.1175/JCLI-D-18-0479.1

Su, Q. and B. Dong, 2019b: Projected near-term changes in three types of heat waves over China under RCP4.5. Clim. Dyn., 53. https://doi.org/10.1007/s00382-019-04743-y

Posted in Aerosols, China, Climate, Climate change, Climate modelling

Making the best use of HPC

By: Grenville Lister

High performance computing (HPC) is changing – there will be a new UK national service in early 2020 (and a period of time with no national service while the new platform is installed) – and the medium to longer-term future is more uncertain than at any time in the last few decades. Much of the community is planning for exascale computing, with associated challenges in both the utilisation of storage and programmability. However, for all the changes ahead, a key issue is managing the resources we have, and will have. Here I take the opportunity to discuss this issue, drawing on my experiences with NERC HPC, but with a take-home message that should apply to other busy resource pools (e.g. departmental or institutional computing).

We usually think of compute resource in terms of node-hours – you’d generally pay for use of whole nodes, even if whole nodes aren’t actually being used (the bit of your node left unused isn’t accessible to others, hence you foot the cost). On day one of a new machine, it will be capable of delivering a fixed number of these given its projected lifetime; for ARCHER (the UK National HPC service), that number was approximately 212 million node-hours (4920 nodes for 24 hours per day, 360 days per year, for 5 years). On day two and each subsequent day, that number went down by 118,080 – as of July 11th 2019, ARCHER had only 25 million left. Unfortunately, node-hours disappear whether or not they are used for computation (the energy bill is lower if they’re not computing). The same goes for resource allocations – we effectively have a NERC-ARCHER for a year at a time, since resources are allocated yearly with the reset switch thrown on March 31st; a block of ARCHER node-hours allocated to a project starts to evaporate on April 1st. Obvious really, but sometimes overlooked by those of us running numerical simulations under the typical yearly resource allocation cycle. This argument is a little oversimplified; nevertheless, expecting to use large parts of an allocation at the last minute may be unrealistic or simply impossible – ultimately, the node-hours just won’t be there.
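The arithmetic is easy to check (service figures as quoted above; the 360-day service year is the assumption made in the text):

```python
# ARCHER lifetime node-hour budget, using the figures quoted above
nodes, hours_per_day, days_per_year, years = 4920, 24, 360, 5

daily_burn = nodes * hours_per_day             # node-hours gone per day
lifetime = daily_burn * days_per_year * years  # total lifetime capacity

print(f"{daily_burn:,} node-hours evaporate per day")       # 118,080
print(f"{lifetime / 1e6:.1f} million node-hours lifetime")  # 212.5
```

At that burn rate, the 25 million node-hours remaining on 11 July 2019 would cover roughly 210 further days of full machine use, consistent with a service lifetime running into early 2020.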

Different HPC systems try to ensure an even spread of usage over time to avoid a mad rush at the end of an allocation period, either by imposing a use-it-or-lose-it policy in conjunction with periodic (quarterly or semi-annual) node-hour sub-distributions, or by use of a clever job scheduler. None of us like having restrictions placed upon us by HPC service providers or administrators, especially when circumstances beyond our control cause delays or otherwise prevent HPC usage as intended, but managing an even burn rate of node-hours ensures that users are able to consume their full resource quota.

Efficient use of storage space raises, in some sense, orthogonal concerns. Space doesn’t disappear over time. It fills up of course, but the user generally has the option to recover it; and whereas node-hours are available to all until used, storage space is reserved at the moment of allocation and can (and does) sit empty for significant lengths of time. This is less of a problem on a system such as ARCHER, where there is an understanding that data held on disc is only ever ephemeral and managing space is easy. On JASMIN (a super-data-cluster based at the Rutherford Appleton Laboratory), for example, where group workspaces are relatively long-lived, the challenge is to request and manage an appropriate volume, bearing in mind that several storage media may be available to support data storage on different time scales, with particular emphasis on the use of Elastic Tape for the medium term.

We in NERC do a pretty good job of consuming HPC resources, both node-hours and petabytes. I am confident that with a community cognizant of resourcing challenges and their efficient use, we shall continue to do so as new technologies emerge. Speaking of new technologies: the major event at ARCHER in February 2020 will be its withdrawal from service, and in May 2020 ARCHER’s successor will commence operation. We shall have a whole lot more node-hours to play with to generate a whole lot more data – a scenario under which we anticipate that management of resources will be increasingly important.

Posted in High performance computing, Numerical modelling

Improving model representation of cloud ice using cloud radar and aircraft observations

By: Peggy Achtert

Understanding the evolution of the ice phase in clouds is of great importance for understanding the development of thunderstorms and the formation of heavy rain. However, cloud ice poses an enormous challenge for both measurements and modelling. While we can probe ice particles in the atmosphere with remote-sensing instruments that make use of electromagnetic radiation over a range of wavelengths that spans from many centimeters down to hundreds of microns, we cannot get a direct measure of the parameters we need to know to understand the role of cloud ice in atmospheric processes. For this, we need to have a model that describes the scattering of electromagnetic radiation by ice particles of different sizes, a model that describes the size distribution of the ice particles, and a relationship between particle mass and particle size.

We have designed an atmospheric experiment to evaluate the three assumptions implicit in remote-sensing techniques by performing collocated measurements with multiple ground-based radars and the Facility for Airborne Atmospheric Measurements (FAAM, https://www.faam.ac.uk/) research aircraft. With this new knowledge, we want to test and develop microphysical parameterizations used in atmospheric models.

Within the Parameterizing Ice Clouds using Airborne Observations and Triple-frequency Doppler Radar Data (PICASSO) project, we want to look at how the scattering behaviour of snowflakes changes with wavelength. For this, we are operating three scanning radars at Chilbolton Observatory (https://www.chilbolton.stfc.ac.uk/Pages/home.aspx) that probe the same cloud volumes: the 3 GHz Chilbolton Advanced Meteorological Radar (CAMRa) radar (25 m antenna), the 94 GHz Galileo radar installed on the side of the CAMRa antenna, and the 35 GHz Kepler cloud radar, with its antenna slaved to the CAMRa radar. The combination of the different radars allows ice particles to be detected over a wide range of sizes. More importantly, the difference in radar reflectivity between two wavelengths – say 10 cm and 9 mm [3 and 35 GHz] – is related to both the size and the shape of the ice particles. For any given assumption about the particle shape (or scattering model) we can directly predict, from the measurements, what we should get for a different wavelength pair. This removes one degree of freedom from the problem and enables a further consistency check. A match between observations and scattering model is evidence that the latter is an appropriate choice; a mismatch tells us that something is wrong with the scattering model. While a few studies of that type, i.e. using multi-wavelength radar observations, have been conducted, almost none have so far had collocated in-situ sampling to support the interpretation of the remote-sensing data. Within PICASSO, the ground-based remote sensing is therefore complemented by airborne measurements of cloud droplets and ice particles up to 2 cm in size, together with the concentration of ice and liquid water. These independent measurements provide the “truth” that we want to be able to retrieve from the remote-sensing measurements.

Figure 1: Picture of the CAMRa antenna dish at Chilbolton Observatory with the FAAM research aircraft in the background (red circle).

Normally in a campaign like this we might scan our radars up and down a prearranged flight radial and attempt to match up the radar and in-situ data after the fact. This approach generally leads to substantial scatter in the comparison between radar and aircraft. We therefore used a real-time position feed from the aircraft to drive the antenna automatically – a technique that builds on other work at Chilbolton to track satellites and other objects in space. In a nutshell, the radar antennas track the aircraft’s movement as it flies towards Chilbolton. The aircraft can be identified in the display of radar reflectivity in Figure 2 as a thin line of strong signal at 1 km height that reaches as far as 60 km from Chilbolton. In the closest few km the antenna runs ahead to reach vertical as the aircraft performs an overpass.

Figure 2: Distance-height display of radar reflectivity measured with CAMRa at Chilbolton Observatory during PICASSO. Warm colours refer to strong signal while cold colours refer to weak signals. The thin red line at 1 km height marks the flight path of the FAAM research aircraft.

In the PICASSO data set, we can now select the radar measurements as close as possible to the aircraft echo, and plot a time series of reflectivity along the aircraft track. We can then calculate the same thing from the in-situ particle size distributions. In a preliminary analysis of an ice cloud sampled during PICASSO, we find that the reflectivity calculated from the in-situ measurements is highly correlated with the radar observations. But if we select a specific mass-size relationship commonly used in the analysis of radar observations, we find a significant (factor of 4) difference in magnitude. This suggests that the ice particles in this particular cloud were around twice as dense as predicted by today’s parameterizations. In the next steps of this work, we will use the full set of data collected during PICASSO to evaluate the available parameterizations and models used in the analysis of radar observations and, if necessary, propose new relationships for an improved retrieval of cloud ice from radar data.
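The scaling behind the density inference can be sketched as follows. The size distribution and the mass-size coefficients are invented, not the parameterization under test in PICASSO; the point is that, because Rayleigh backscatter from ice goes as mass squared, a factor-of-4 gap in reflectivity corresponds to a factor of 2 in density.

```python
import numpy as np

# Sketch: radar reflectivity of ice from an in-situ particle size
# distribution via a mass-size relationship m(D) = a * D**b.
# PSD and coefficients are illustrative placeholders.
D = np.linspace(1e-4, 2e-2, 400)        # maximum dimension (m), up to 2 cm
dD = D[1] - D[0]
N = 1e5 * np.exp(-400.0 * D)            # hypothetical PSD (m^-4)

def z_ice(a, b=2.0):
    # For ice in the Rayleigh regime, backscatter is proportional to the
    # square of particle mass, so Z ~ sum of N(D) * m(D)**2 over sizes.
    m = a * D**b
    return np.sum(N * m**2) * dD

# Doubling the prefactor a (particles twice as dense at every size)
# quadruples the reflectivity, because Z goes as mass squared.
ratio = z_ice(2 * 0.02) / z_ice(0.02)
print(ratio)  # -> 4.0
```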

Posted in Clouds, Microphysics, radar

Challenges in the closure of the surface energy budget at the continental scale

By: Bo Dong

Since satellite observations began in the late 1970s, our knowledge of the energy flows in and out of the Earth’s climate system has greatly advanced. Taking advantage of state-of-the-art Earth Observation (EO) programmes such as the Clouds and the Earth’s Radiant Energy System (CERES), energy exchanges at the top of the atmosphere (TOA) can be estimated with satisfactory accuracy. EO-based energy and water budgets at the surface, however, have not yet come to a consensus, largely because they cannot be directly measured from space but have to be inferred using additional physical or empirical models. With such large uncertainties, various combinations of surface energy and turbulent flux datasets can yield an imbalance of more than 20 Wm-2 on a global annual mean basis, and even more at regional scales where transports of energy and water further complicate the surface state.

Bringing necessary expertise from different disciplines together, variational “Earth system inverse” modelling is one of the best methodologies for achieving closure of the surface energy budget by optimising each budget flux component. Using balance constraints at continental and global scales, the NASA Energy and Water Cycle Study (NEWS) of L’Ecuyer et al. (2015) and Rodell et al. (2015) was among the first to use an inverse modelling approach to adjust multiple satellite data products for air-sea-land vertical fluxes of heat and freshwater within their uncertainty ranges, yielding balanced budgets. Although this approach has the advantage of reintroducing energy and water cycle closure information lost in the development of independent flux datasets, one caveat we note is that results are sensitive to the choices of input datasets and the associated uncertainty estimates (Thomas et al. 2019).

One example is the mean seasonal cycle of the surface energy budget over North America (Figs. 1 and 2), which we optimised using a collocation of newer EO radiative energy flux products and machine-learning-mapped in-situ land turbulent heat fluxes. At the land surface, energy budget closure requires

DLR + DSR – ULW – USW – SH – LE = NSF (1),

where the terms from left to right are downward longwave radiation, downward solar radiation, upward longwave radiation, upward shortwave radiation, sensible heat flux, latent heat flux and net surface flux, respectively.
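A minimal sketch of the variational adjustment, with invented flux values and uncertainties (not the NEWS or UoR inputs): each term is nudged within its error bar, via a Lagrange multiplier, until equation (1) balances exactly, with larger-uncertainty terms absorbing more of the residual.

```python
import numpy as np

# Illustrative annual-mean land fluxes (W m-2) and 1-sigma uncertainties,
# ordered as in equation (1): DLR, DSR, ULW, USW, SH, LE. Values invented.
obs   = np.array([345.0, 185.0, 395.0, 50.0, 30.0, 60.0])
sigma = np.array([  5.0,   6.0,   5.0,  3.0,  5.0,  8.0])
signs = np.array([  1.0,   1.0,  -1.0, -1.0, -1.0, -1.0])  # budget signs

# Minimise sum(((x - obs)/sigma)**2) subject to signs @ x = 0 (NSF = 0).
# The Lagrange-multiplier solution shifts each flux in proportion to its
# error variance, so poorly-known terms take most of the adjustment.
residual = signs @ obs                    # imbalance of the raw estimates
lam = residual / np.sum(sigma**2)         # multiplier (signs**2 == 1 here)
adjusted = obs - lam * sigma**2 * signs

print(signs @ obs)        # -5.0 with these numbers: raw imbalance
print(signs @ adjusted)   # ~0: budget closed
```

This also illustrates the caveat noted above: if one term’s uncertainty is badly underestimated, the adjustment is forced into the other terms.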

 

Figure 1: 2001-2010 mean seasonal cycle of NSF over North America based on original flux datasets (dashed lines) and optimised solutions from the inverse model output (solid lines). 

While both the NEWS and our (UoR) optimised energy budgets satisfy the zero annual-mean NSF constraint (solid blue and red lines in Fig. 1), the resolved seasonal cycles contrast notably with one another, in both timing and amplitude. Also, neither result compares closely with the DEEP-C NSF (Liu et al. 2015), which is derived using satellite-measured TOA radiative fluxes and atmospheric reanalysis convergences. Because we do not have good prior knowledge of constraints on monthly time scales, the seasonal cycle of NSF is determined largely by the input budget components and their uncertainties, whereas the annual constraints mostly adjust the seasonal time series up or down as a whole.

  

Figure 2: Optimised surface energy budget for NEWS (dashed lines) and UoR (solid lines) datasets. Positive (negative) values denote downward (upward) fluxes.

To investigate which balance component accounts for the NSF discrepancy between NEWS and UoR, in Fig. 2 we dissect the six budget terms on the left-hand side of the closure equation. Discrepancies between NEWS (dashed lines) and UoR (solid lines) exist in all budget components, and no single budget term can explain the NSF difference. Furthermore, we note that the difference in spring-season USW between NEWS and UoR is ~35 Wm-2, one order of magnitude larger than the uncertainty supplied with the data. This suggests that the uncertainty in USW might be considerably underestimated, which restricts the inverse model from tuning the budget towards a more realistic state.

Ongoing challenges remain for closing the surface energy budget at the continental scale, even though estimates of the global mean energy budget are starting to converge with the increasing availability of observations. Unlike the global annual mean budget, there are fewer hard prior constraints at regional and seasonal scales, such that closure relies heavily on the accuracy of the observations of not one but all budget terms. As most field measurements – which tend to be the data we “trust” the most – have failed to show closure of the surface energy budget, improving the quality of regional energy and water flux data is truly a long-term community effort. Equally important is the adequate representation of uncertainties in the observations, and there is still plenty of room for improvement. For instance, structural biases in existing EO data products are likely underestimated and lack a realistic representation of seasonal variation. Nonetheless, with the data we currently have, improvements in the variational modelling approach have been shown to be useful for producing more realistic regional budget solutions (Thomas et al. 2019), such as explicitly permitting spatially correlated errors in the original EO flux datasets, and incorporating inter-flux error covariances given that some retrievals share the same space-borne instrument.

References:

 L’Ecuyer et al., 2015: The Observed State of the Energy Budget in the Early Twenty-First Century. J. Clim. 28(21), 8319–8346, 10.1175/JCLI-D-14-00556.1

 Liu et al., 2015: Combining satellite observations and reanalysis energy transports to estimate global net surface energy fluxes 1985-2012. J. Geophys. Res. Atmos., 120 (18), 9374–9389, https://doi.org/10.1002/2015JD023264

 Rodell et al., 2015: The Observed State of the Water Cycle in the Early Twenty-First Century. J. Clim. 28(21), 8289–8318,  10.1175/JCLI-D-14-00555.1

Thomas, C., B. Dong, and K. Haines, 2019: Global and regional energy and water cycle fluxes from Earth observation data. J. Clim., under review.

 

Posted in Boundary layer, Climate, earth observation, Energy budget

30 °C days in Reading

By: Roger Brugge

The temperature in the Reading University Atmospheric Observatory peaked at 32.3°C on Saturday 29 June 2019. Press stories were full of pictures of people sunning themselves across parts of the United Kingdom in glorious sunshine – yet not far across the English Channel even higher temperatures were causing problems of all kinds as temperatures rose 10°C (or more) higher in places than they did in Reading. As Table 1 shows, this was one of the highest June temperatures in the Reading record.

Table 1: The highest June temperatures on record at the University of Reading since 1908.

We seem to expect 30°C to be reached in any good summer these days, but just how common is such a temperature in the Reading record?

Daily observations have been made on Whiteknights campus since 1968 – prior to that, measurements were made on the (slightly warmer, due to its location in a built-up area) London Road campus. Much of this blog, therefore, restricts the analysis to the past 52 years.

30°C has been reached, sometime in this period, in each of the three summer months. (In the time when records were kept at London Road, 30°C was reached on seven dates in May – peaking at 31.9°C on 29 May 1944.) Peak temperatures at Whiteknights each month are as follows:

  • June: 34.0°C on 26 June 1976
  • July: 35.3°C on 19 July 2006
  • August: 36.4°C on 10 August 2003.

Since 1968, temperatures have reached 30°C on 17 days in June, 59 days in July and 34 days in August. The earliest occurrence in the year of the ‘magical number’ was on 18 June (in 2017, when 30.3°C was recorded), while the latest was on 24 August (in 2016, when 30.1°C was reached). So, this year’s 32.3°C is nothing out of the ordinary in some respects.

Figure 1: The number of days each summer when the temperature reached 30°C in Reading. Data for 2019 are valid to 30 June.

Figure 1 shows the annual incidence of such 30°C temperatures. Unsurprisingly, there is a lot of variation from year to year. The summer of 1976 stands out, however: 30°C was reached every day during the fortnight of 25 June to 8 July – while the summer of 1995, with nine 30°C days, has since come the closest to surpassing that year in the record. There is a slight upward trend in the number of 30°C days each year, from 1-2 in 1968 to 2-3 days nowadays; if we remove the 30°C data from 1976, the expected value in 1968 would be under 1 day per year. Note also that 2019 and the four previous years have all attained 30°C – the first time that five consecutive years have reached this mark.
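The threshold counting behind a figure like this is straightforward; a sketch on synthetic data (the real analysis uses the Reading Observatory daily maximum series) might look like:

```python
import numpy as np
import pandas as pd

# Synthetic summer daily-maximum temperatures for a few years,
# standing in for the Reading Observatory record.
rng = np.random.default_rng(0)
dates = pd.date_range("1968-06-01", "1970-08-31", freq="D")
dates = dates[dates.month.isin([6, 7, 8])]            # summer months only
tmax = pd.Series(22.0 + 5.0 * rng.standard_normal(len(dates)), index=dates)

hot = tmax[tmax >= 30.0]                              # the 30 degC days
per_year = hot.groupby(hot.index.year).size()         # count per summer
print(per_year)

# 24-hour changes in the daily maximum, as in the heatwave analysis below
# (note: diff also spans the gap between summers in this toy series).
change = tmax.diff()
big_swings = change[change.abs() >= 8.0]
print(len(big_swings))
```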

Despite the warming trend observed in other aspects of Reading (and UK) temperatures, no such trend can be obviously seen in Figure 2, which shows the peak value achieved by the 30°C days.

Figure 2: The highest summer temperature observed in years when 30°C has been reached in Reading. Data for 2019 are valid to 30 June.

Figure 3: The range of dates each summer in Reading with temperatures reaching 30°C. Note that the periods shown may actually contain several spells of 30°C+ temperatures, with cooler days in between.

Figure 3 shows the date ranges each summer during which 30°C has been reached – here there is a suggestion that the 30°C ‘season’ is now starting earlier (e.g. 2017) and ending (e.g. 2016) later than it used to, especially if the unusual summer of 1976 is removed as an outlier.

Finally, perhaps the most remarkable feature of the recent heatwave was the change in temperature leading into, and out of, Saturday 29th. Maximum temperatures were 24.2°C on the 28th, 32.3°C on the 29th and 22.7°C on the 30th, corresponding to changes in the maximum temperature of +8.1 degC and -9.6 degC in successive 24-hour periods. The first of these changes was caused by a change in wind direction and, consequently, air source into the 29th – France and the near continent had been suffering from unusually high temperatures for several days before this hot air reached Reading. The second change was the result of a cold front (albeit a dry affair in Reading) that crossed from the west overnight 29th/30th.

Such 24-hour changes in maximum temperature are relatively rare in summer – in the 111-year period 1908-2018 there have been 174 changes of 8°C or more over two days in the summer months (June-August), with just 29 of these involving the onset or cessation of 30°C temperatures. The largest 24-hour changes involving one day over 30 °C were as follows:

  • 13.1 degC change, 22-23 August 1918 (30.3°C to 17.2°C)
  • 10.6 degC change, 7-8 July 1970 (30.6°C to 20.0°C)
  • 10.5 degC change, 6-7 June 1942 (30.2°C to 19.7°C)
  • 10.0 degC change, 21-22 June 2017 (32.5°C to 22.5°C)

All these large changes involved a sudden cooling that marked the ending of a 30°C spell; the largest change involving the onset of a 30°C spell (before 2019) was one of 7.6°C (from 22.7°C to 30.3°C) on 11-12 July 1912.

Many hot spells in Reading tend to build up over several days as hot air from a southerly source gradually becomes established over southern England; a sudden end to a hot spell is often marked by a thundery breakdown and noticeable temperature drop.

None of these 29 summer temperature changes occurred in pairs – that is, in two successive 24-hour periods sandwiching a single day in excess of 30°C. So, the June 2019 heatwave really did come and go within little more than 24 hours in Reading – as the spike in the bold red line in Figure 4 confirms.

Figure 4: Daily maximum and minimum air temperatures, and grass minimum temperatures, in June 2019 in Reading compared to the 1981-2010 daily averages. The spike in the maximum temperature followed some unusually cool days around mid-month.

Posted in Climate, Climate change, University of Reading, Weather

Science outreach in coastal Arctic communities

By: Lucia Hosekova

Figure 1: NASA image by Robert Simmon based on Goddard Institute for Space Studies (GISS) surface temperature analysis data including ship and buoy data from the Hadley Centre. Caption by Adam Voiland.

Few people are more aware of the rapidity of the changes in our oceans and climate than polar scientists. Due to an effect known as polar amplification, temperatures in the Arctic have been observed to rise approximately twice as fast as the global average temperature (Fig. 1). This amplification results from a complex system of feedbacks, including the effects of declining sea ice and changes to the vertical temperature profile [1]. The Arctic is now considered the canary in the coal mine of climate change – the place where warnings are quickly turning into a worrying reality.

In May 2019, I had the opportunity to visit communities on the north coast of Alaska as part of a small team of scientists hosting outreach events at schools and community centres, hoping to engage in a dialogue with indigenous Inupiat communities that could be beneficial to both sides.

Our first stop was Kaktovik, a town of 300 sitting on an island surrounded by a lagoon on one side and a beach exposed to the Beaufort Sea on the other. This beach, together with many others along the Arctic coastline, is now eroding at unprecedented rates, leaving many communities exposed to flooding.

From the moment we stepped off the small twin-prop plane, capable of landing on a lonely runway that emerged from the surrounding whiteness, I immediately gained respect for the people who, by choice or birth, have made their life here in the tundra. With only two flights a day carrying supplies and people along the coast when the weather permits, these communities rely on indigenous ways of hunting and beachcombing for supplies and food. Here, the snow machine helps you reach places once the road inevitably ends, a bear gun is as common a tool as an umbrella back home, and every first-grader learns what temperature and wind speed is safe for playing outside.

 The children continued to impress: we spent two days visiting the local school and talking to students of all ages about the climate and ocean, engaging them in interactive demonstrations. We were rewarded by endless curiosity and questions that showed us that they know all too well how vulnerable their island is to permafrost thaw and waves hitting the beach previously protected by sea ice. At the end of our visit, we held a community meeting that served as a showcase of our science and the ways it touches the local life. As we quickly found out, no Inupiat social gathering is complete without a raffle (with prizes ranging from water purifiers to drones) and a generous dinner, and it was up to us to be cooks, hosts and scientists at the same time! It was a lot of fun seeing the children we met during our school visits in the company of their older family members. Here’s a little secret: if you want to make an Inupiat friend, bring Tang.

 After Kaktovik, we headed to Utqiagvik (previously known as Barrow), the largest settlement in the North Slope Borough and the closest the coast has to a town – you can find hotels, restaurants, even a Subway. With access to a large runway and other infrastructure, Utqiagvik is home to a sizeable transient scientific community, occupying a section of the city referred to as NARL (United States Naval Arctic Research Laboratory). In the communal accommodation, we found a vibrant international atmosphere of scientists representing a wide range of fields – from environmental and biological sciences all the way to a NASA team who came to test their new robots in extreme conditions.

The communal meeting we organised here, called ‘Sandwich ’n Science’, reflected this varied demographic. Scientists were joined by locals and interested parties, who were aware of and outspoken about the challenges their communities face in the near future. They want to know how long before the road they take every day is flooded on a regular basis, whether they need to move out of their houses and, most importantly, who is going to pay for it. These are all very good questions, and scientists can play a key role in answering them. The U.S.-funded project CODA (Coastal Ocean Dynamics in the Arctic), which sponsored my outreach trip and further collaboration, aims to study the link between coastal erosion and increasing wave activity in the Arctic, caused largely by sea ice retreat and the diminishing natural protection it used to provide to coastlines. Waves in the Arctic are a ‘hot’ topic in polar science right now, as their presence alters the sea ice state, increases energy in the upper ocean and may cause complex thermodynamic feedbacks. Along with other researchers at the Centre for Polar Observation and Modelling at the University of Reading, I am involved in an effort to understand the dominant processes in wave-ice interactions and to study their impacts on present and future climate in state-of-the-art sea ice models [2].

 It is one thing to listen to academic seminars and discussions, and it is quite another to come face-to-face with people for whom the sea ice I mostly know from satellite images is the view from their bedroom window, and the effects of polar amplification represent a real threat to their way of life. Not everyone gets to witness the wider consequences of their actions, be it as a scientist or simply an inhabitant of this planet.

The members of the science party in the company of a local guide on a walk around Kaktovik.

Children in Kaktovik launching AEROKATS kites to take aerial photographs of the village.

References: 

  1. Stuecker, M. F., C. M. Bitz, K. C. Armour, C. Proistosescu, S. M. Kang, S.-P. Xie, D. Kim, S. McGregor, P. Zhang, S. Zhao, W. Cai, Y. Dong, and F.-F. Jin (2018), Polar amplification dominated by local forcing and feedbacks, Nature Climate Change, 8, doi:10.1038/s41558-018-0339-y.
  2. Bateson, A.W., D.L. Feltham, D. Schröder, L. Hosekova, J.K. Ridley, and Y. Aksenov, Impact of floe size distribution on seasonal fragmentation and melt of Arctic sea ice, The Cryosphere Discuss., https://doi.org/10.5194/tc-2019-44 , in review, 2019.
Posted in Arctic, Climate, Climate change, Cryosphere, Outreach

How climate modelling can help us better understand the historical temperature evolution

By: Andrea Dittus

Figure 1: Annual global mean surface temperatures from NASA GISTemp, NOAA GlobalTemp, Hadley/UEA HadCRUT4, Berkeley Earth, Cowtan and Way, Copernicus/ECMWF and Carbon Brief’s raw temperature record. Anomalies plotted with respect to a 1981-2010 baseline. Figure and caption from Carbon Brief (https://www.carbonbrief.org/state-of-the-climate-how-world-warmed-2018).

Earth’s climate has warmed by approximately 0.85 degrees over the period from 1880 to 2012 [IPCC, 2013] due to anthropogenic emissions of greenhouse gases. However, the rate of warming throughout the twentieth and early twenty-first centuries has not been uniform, with periods of accelerated warming and cooling (Figure 1). Besides greenhouse gases, a key player in determining the historical evolution of global temperatures is anthropogenic aerosol. Aerosols are airborne particles that scatter or absorb incoming solar radiation and affect cloud properties, thereby altering the surface energy budget. Different aerosol species have different properties and climate impacts, but perhaps the most important in the context of global climate variability is sulphate, which accounts for a large proportion of anthropogenic aerosol. As a scattering aerosol, sulphate has a cooling effect on global climate and has offset some of the warming induced by emissions of greenhouse gases. Although we know that aerosols play an important role for global climate, the magnitude of historical aerosol forcing remains uncertain [e.g. Stevens, 2015; Kretzschmar et al., 2017; Booth et al., 2018].

In climate models, the representation of aerosol processes is very diverse, resulting in a wide spread in the magnitude of aerosol forcing across different climate models [Wilcox et al., 2015]. Consequently, the climate effects of aerosols are also very different from model to model. Studies have suggested that aerosol forcing can influence the phasing of key modes of multi-decadal variability such as the Atlantic Multidecadal Variability [Booth et al., 2012] and Pacific Decadal Oscillation [Smith et al., 2016], although the degree of influence is still unclear [e.g. Zhang et al., 2013; Oudar et al., 2018]. Key open questions are whether these findings are model dependent, influenced by the magnitude of simulated aerosol forcing, ensemble size, or a combination of these.

Figure 2: Simulated temperatures for each ensemble member across the different aerosol scalings for the period 1941 to 1970. The numbers 0.2 to 1.5 indicate the scaling factor that was applied to the anthropogenic aerosol emissions. Blue indicates that temperatures are cooler than the reference temperature defined as the 1.0 scaling ensemble mean 1850-2014 climatology, red indicates warmer temperatures.

The SMURPHS Project (Securing Multidisciplinary Understanding of Hiatus and Surge Events, https://smurphs.leeds.ac.uk/) is a multi-disciplinary project that aims to improve our understanding of the causes of variations in the observed rate of warming. As part of this project, we have designed an ensemble of historical climate simulations with the HadGEM3-GC3.1 climate model, in which anthropogenic aerosol emissions were scaled up or down to sample a wide range of historical aerosol forcing. The emergence of large ensembles in the climate modelling community has highlighted the importance of sampling a large number of realisations, to better estimate the forced response (common to all members run with the same forcings) and the magnitude of internal variability (individual to each member). As a compromise between the need to sample a wide range of aerosol forcing and multiple initial-condition members, we have opted to run four initial-condition members for each of five aerosol scalings. Figure 2 illustrates the effect of aerosol forcing on temperature in the SMURPHS ensemble for the period from 1941 to 1970, a period particularly sensitive to aerosol forcing (not shown). Along the x-axis, the different magnitudes of aerosol forcing represent the sensitivity of the simulations to aerosol forcing; on the y-axis, each line represents a single realisation, to highlight the role of internal variability. The simulations with higher aerosol emissions are systematically colder than those with lower aerosol emissions, consistent with the expected response to increasing aerosol forcing across the ensemble.
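The role of multiple initial-condition members can be illustrated with a toy calculation (synthetic numbers, not SMURPHS output): the ensemble mean estimates the forced response, while the member-to-member deviations estimate internal variability.

```python
import numpy as np

# Toy ensemble: a prescribed forced warming trend plus independent
# "internal variability" noise in each of four members, mirroring the
# four-members-per-scaling SMURPHS design. All numbers are synthetic.
rng = np.random.default_rng(42)
years = np.arange(1850, 2015)
forced = 0.004 * (years - 1850)               # prescribed forced signal (degC)
n_members = 4
ensemble = forced + 0.15 * rng.standard_normal((n_members, years.size))

forced_estimate = ensemble.mean(axis=0)       # noise shrinks like 1/sqrt(n)
internal = ensemble - forced_estimate         # deviations = internal variability
err = np.abs(forced_estimate - forced).mean() # error of the 4-member estimate
print(err)                                    # well below the 0.15 single-member noise
```

Averaging four members roughly halves the noise in the estimated forced response, which is why a few members per scaling were judged a worthwhile trade against sampling more scalings.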

Going forward, these simulations will allow us to investigate how variations in historical aerosol forcing have shaped climate variability in the twentieth and early twenty-first century, from global mean surface temperatures to multi-decadal modes of variability and beyond.

References: 

Booth, B. B. B., N. J. Dunstone, P. R. Halloran, T. Andrews, and N. Bellouin (2012), Aerosols implicated as a prime driver of twentieth-century North Atlantic climate variability, Nature, 484, 228-232, doi:10.1038/nature10946

Booth, B. B. B., G. R. Harris, A. Jones, L. Wilcox, M. Hawcroft, and K. S. Carslaw (2018), Comments on “Rethinking the Lower Bound on Aerosol Radiative Forcing,” J. Climate, 31, 9407–9412, doi:10.1175/JCLI-D-17-0369.1.

IPCC, 2013: Summary for Policymakers. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Kretzschmar, J., M. Salzmann, J. Mülmenstädt, O. Boucher, and J. Quaas (2017), Comment on “Rethinking the Lower Bound on Aerosol Radiative Forcing,” J. Climate, 30, 6579–6584, doi:10.1175/JCLI-D-16-0668.1.

Oudar, T., P. J. Kushner, J. C. Fyfe, and M. Sigmond (2018), No Impact of Anthropogenic Aerosols on Early 21st Century Global Temperature Trends in a Large Initial-Condition Ensemble, Geophysical Research Letters, 45, 9245-9252, doi:10.1029/2018GL078841.

Smith, D. M., B. B. B. Booth, N. J. Dunstone, R. Eade, L. Hermanson, G. S. Jones, A. A. Scaife, K. L. Sheen, and V. Thompson (2016), Role of volcanic and anthropogenic aerosols in the recent global surface warming slowdown, Nature Clim. Change, 6, 936–940, doi:10.1038/nclimate3058.

Stevens, B. (2015), Rethinking the Lower Bound on Aerosol Radiative Forcing, J. Climate, 28, 4794–4819, doi:10.1175/JCLI-D-14-00656.1.

Wilcox, L. J., E. J. Highwood, B. B. B. Booth, and K. S. Carslaw (2015), Quantifying sources of inter-model diversity in the cloud albedo effect, Geophysical Research Letters, 42, 1568–1575, doi:10.1002/2015GL063301.

Zhang, R. et al. (2013), Have Aerosols Caused the Observed Atlantic Multidecadal Variability? J. Atmos. Sci., 70, 1135–1144, doi:10.1175/JAS-D-12-0331.1.

 

Posted in Aerosols, Climate, Climate change, Climate modelling

The OpenIFS User Workshop

By Bob Plant

I’ve been asked to write a blog post to go live on 17 June, the opening day of the 2019 OpenIFS user workshop. As I’m involved in the organisation, it would almost seem strange not to talk a little about that.

The IFS (Integrated Forecasting System) is the modelling system developed and used at the ECMWF, and it underlies all of their forecasting, data assimilation and reanalysis activity. Brief outlines can be found here for the dynamics and here for the physics. The OpenIFS version is designed to be used outside of the centre. This allows universities to collaborate more easily with the ECMWF on research projects and supports more teaching-focussed activities.

Students hear a great deal about weather and climate modelling during their studies but have traditionally had little or no opportunity to work directly with the models. Even those whose main interests do not lie in numerical modelling will inevitably rely on modelling results, or will want to analyse model data. So some hands-on modelling experience is valuable, just as those of us who take a more theoretical or model-based perspective nonetheless benefit from being exposed to real experimental data. It’s important that the models should not be looked upon as black boxes that magically generate data, but that students get the opportunity to take out a torch and at least have a bit of a look around in the murky interior.

At the same time, there are obvious practical issues with using full-scale operational-type models in a classroom context. We often look for substantial high-performance computing for model-based research projects and expect to submit jobs that return results after some hours, or perhaps days. Also, while a model might be very nicely designed for the operational or expert research context, it may not be easy for a non-expert to pick up and get started with quickly. The OpenIFS provides a pretty good balance: it is relatively easy to use, but not so easy as to encourage a black-box syndrome.

I was keen to try out OpenIFS for teaching applications in the department, starting with an MSc dissertation project in summer 2015. While not totally plain sailing, it was sufficiently encouraging to offer something for the MSc team project week in the following year, with Sue Gray and me each supervising a team so that we could help each other out with any teething issues. That worked well, and further team projects and dissertation projects have followed. There is more about those experiences in a short article in the ECMWF newsletter.

Getting back to this week’s workshop, it is a biennial event to introduce researchers from across Europe (and occasionally further afield) to the OpenIFS. We also have a scientific theme concerning the impact of moist processes on storm evolution, and there will be various talks and posters on this, alongside others relating to techniques and examples in using the model for research projects.

The key link between the modelling and the theme is our choice of case study. Storm Karl occurred in September 2016. It started out as a tropical system before undergoing an extratropical transition and ultimately produced much rain over Norway. It was observed as part of the NAWDEX (North Atlantic Waveguide and Downstream Impact Experiment) field campaign and there is an overview in this BAMS article. Apparently, it is the first system to undergo an extratropical transition to have been observed with research aircraft at each stage of its evolution, and so I would imagine that it will continue to be a focus of research over the next few years. The article highlights the importance of mid-level moisture, especially for the behaviour of the “warm conveyor belt” in the extratropical regime. Below are example plots from a preliminary OpenIFS simulation. There are also some very nice loops of the satellite imagery, and Met Office global model forecasts at this page, courtesy of Ben Harvey. We plan to perform a variety of modelling experiments and to interpret and understand our results by drawing on ideas from the talks and posters, and of course, plenty of discussions amongst the participants.

Example plots from a preliminary model run, for which thanks to Marcus Koehler. Left: 10m winds at T+42, 18UTC on 26 September. Karl is to the south-east of Greenland. Right: precipitation at the same time.

Numbers are limited for the hands-on computing part of the workshop, but if you are around in Reading and would like to come along to some interesting talks then feel free to join us in GU01 any morning from Tuesday to Friday. Or if you would like to talk about storms or modelling with 50-odd researchers also interested in such things, then again feel free – we’ll be in 1L61 for Tuesday to Friday morning coffee and over the lunch break. Our programme can be seen here.

I mustn’t forget to give credit where it is due. Under the small assumption that all is going to go wonderfully well, that will have been due to Glenn Carver, Gabi Szepszo and Marcus Koehler from ECMWF, and from the Reading side to Sue Gray and myself, Kathryn Boyd, Maria Broadbridge, Ben Harvey and Jake Bland. And finally thanks to our sponsors: we are funded by bringing together contributions from EGU, ESiWACE, the university environment theme, the department visitor fund and ECMWF.

Posted in Academia, Climate, extratropical cyclones, Numerical modelling, Teaching & Learning