Challenges in the closure of the surface energy budget at the continental scale

By: Bo Dong

Since satellite observations began in the late 1970s, our knowledge of the energy flowing into and out of the Earth’s climate system has advanced greatly. Taking advantage of state-of-the-art Earth Observation (EO) programmes such as the Clouds and the Earth’s Radiant Energy System (CERES), energy exchanges at the top of the atmosphere (TOA) can be estimated with satisfactory accuracy. EO-based energy and water budgets at the surface, however, have not yet reached a consensus, largely because surface fluxes cannot be measured directly from space but have to be inferred using additional physical or empirical models. With such large uncertainties, various combinations of surface radiative and turbulent flux datasets can yield an imbalance of more than 20 W m-2 on a global annual mean basis, and even more at regional scales, where transports of energy and water further complicate the surface state.

Bringing together the necessary expertise from different disciplines, variational “Earth system inverse” modelling is one of the most promising methodologies for closing the surface energy budget by optimising each budget component. Using balance constraints at continental and global scales, the NASA Energy and Water Cycle Study (NEWS) of L’Ecuyer et al. (2015) and Rodell et al. (2015) was among the first to use an inverse modelling approach to adjust multiple satellite data products for air-sea-land vertical fluxes of heat and freshwater within their uncertainty ranges, yielding balanced budgets. Although this approach has the advantage of reintroducing the energy and water cycle closure information lost in the development of independent flux datasets, one caveat we note is that the results are sensitive to the choice of input datasets and the associated uncertainty estimates (Thomas et al. 2019).

One example is the mean seasonal cycle of the surface energy budget over North America (Figs. 1 and 2), which we optimised using a collocation of newer EO radiative energy flux products and machine-learning-mapped in-situ land turbulent heat fluxes. At the land surface, energy budget closure requires

DLR + DSR − ULW − USW − SH − LE = NSF,     (1)

where the terms from left to right are downward longwave radiation, downward solar radiation, upward longwave radiation, upward shortwave radiation, sensible heat flux, latent heat flux and net surface flux, respectively.
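To make the optimisation idea concrete, below is a minimal sketch in which each annual-mean flux in Eq. (1) is nudged within its stated uncertainty, by the smallest amount in the uncertainty-weighted least-squares sense, until the residual NSF vanishes. The flux and uncertainty values are illustrative placeholders, not the NEWS or UoR inputs, and the real systems solve many regions and constraints jointly with full error covariances.

```python
# Minimal sketch of a variational budget closure: minimise
# sum((F_adj - F)^2 / sigma^2) subject to the Eq. (1) constraint NSF = 0.
import numpy as np

fluxes = np.array([310.0, 180.0, 360.0, 40.0, 25.0, 55.0])  # DLR, DSR, ULW, USW, SH, LE (W m-2), illustrative
sigma = np.array([8.0, 6.0, 7.0, 5.0, 6.0, 8.0])            # assumed 1-sigma uncertainties (W m-2)
signs = np.array([+1, +1, -1, -1, -1, -1])                  # sign of each term in Eq. (1)

residual = signs @ fluxes                     # prior NSF; zero for a closed annual-mean budget
lam = residual / np.sum(sigma ** 2)           # Lagrange multiplier (signs squared are all 1)
adjusted = fluxes - lam * signs * sigma ** 2  # more uncertain fluxes absorb more of the adjustment

print(f"prior NSF = {residual:+.1f} W m-2 -> adjusted NSF = {signs @ adjusted:+.1f} W m-2")
```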

 

Figure 1: 2001-2010 mean seasonal cycle of NSF over North America based on original flux datasets (dashed lines) and optimised solutions from the inverse model output (solid lines). 

While both the NEWS and our (UoR) optimised energy budgets satisfy the zero annual mean NSF constraint (solid blue and red lines in Fig. 1), the resolved seasonal cycles contrast notably with one another in both timing and amplitude. Neither result compares closely with the DEEP-C NSF (Liu et al. 2015), which is derived from satellite-measured TOA radiative fluxes and atmospheric reanalysis convergences. Because we do not have good prior knowledge of constraints on monthly time scales, the seasonal cycle of NSF is determined largely by the input budget components and their uncertainties, whereas the annual constraints mostly shift the seasonal time series up or down as a whole.

  

Figure 2: Optimised surface energy budget for NEWS (dashed lines) and UoR (solid lines) datasets. Positive (negative) values denote downward (upward) fluxes.

To investigate which budget component accounts for the NSF discrepancy between NEWS and UoR, in Fig. 2 we dissect the six budget terms on the left-hand side of the closure equation. Discrepancies between NEWS (dashed lines) and UoR (solid lines) exist in all budget components, and no single term can explain the NSF difference. Furthermore, the difference in spring-season USW between NEWS and UoR is ~35 W m-2, an order of magnitude larger than the uncertainty supplied with the data. This suggests that the uncertainty in USW may be considerably underestimated, which restricts the inverse model from tuning the budget towards a more realistic state.

Ongoing challenges remain for closing the surface energy budget at the continental scale, even though our estimates of the global mean energy budget are starting to converge with the increasing availability of observations. Unlike the global annual mean budget, there are few hard prior constraints at regional and seasonal scales, so closure relies heavily on the accuracy of the observations of not one but all budget terms. As most field measurements – which tend to be the data we “trust” the most – have failed to show closure of the surface energy budget, improving the quality of regional energy and water flux data is truly a long-term community effort. Equally important is an adequate representation of uncertainties in the observations, where there is still plenty of room for improvement. For instance, structural biases in existing EO data products are likely underestimated and lack a realistic representation of seasonal variation. Nonetheless, with the data we currently have, improvements to the variational modelling approach have proved useful for producing more realistic regional budget solutions (Thomas et al. 2019), such as explicitly permitting spatially correlated errors in the original EO flux datasets and incorporating inter-flux error covariances, given that some retrievals share the same space-borne instrument.

References:

L’Ecuyer et al., 2015: The Observed State of the Energy Budget in the Early Twenty-First Century. J. Clim., 28(21), 8319–8346, https://doi.org/10.1175/JCLI-D-14-00556.1

Liu et al., 2015: Combining satellite observations and reanalysis energy transports to estimate global net surface energy fluxes 1985–2012. J. Geophys. Res. Atmos., 120(18), 9374–9389, https://doi.org/10.1002/2015JD023264

Rodell et al., 2015: The Observed State of the Water Cycle in the Early Twenty-First Century. J. Clim., 28(21), 8289–8318, https://doi.org/10.1175/JCLI-D-14-00555.1

Thomas, C., B. Dong, and K. Haines, 2019: Global and regional energy and water cycle fluxes from Earth observation data. J. Clim., under review.

 

Posted in Boundary layer, Climate, earth observation, Energy budget

30 °C days in Reading

By: Roger Brugge

The temperature in the Reading University Atmospheric Observatory peaked at 32.3°C on Saturday 29 June 2019. Press stories were full of pictures of people sunning themselves in glorious sunshine across parts of the United Kingdom – yet not far away, across the English Channel, temperatures 10°C or more higher than Reading’s were causing problems of all kinds. As Table 1 shows, this was one of the highest June temperatures in the Reading record.

Table 1: The highest June temperatures on record at the University of Reading since 1908.

We seem to expect 30°C to be reached in any good summer these days, but just how common is such a temperature in the Reading record?

Daily observations have been made on Whiteknights campus since 1968 – before that, measurements were made at the London Road campus (slightly warmer, owing to its location in a built-up area). Much of this blog, therefore, restricts the analysis to the past 52 years.

30°C has been reached, sometime in this period, in each of the three summer months. (In the time when records were kept at London Road, 30°C was reached on seven dates in May – peaking at 31.9°C on 29 May 1944.) Peak temperatures at Whiteknights each month are as follows:

  • June: 34.0°C on 26 June 1976
  • July: 35.3°C on 19 July 2006
  • August: 36.4°C on 10 August 2003.

Since 1968, temperatures have reached 30°C on 17 days in June, 59 days in July and 34 days in August. The earliest occurrence in the year of the ‘magical number’ was on 18 June (in 2017, when 30.3°C was recorded), while the latest was on 24 August (in 2016, when 30.1°C was reached). So, this year’s 32.3°C is nothing out of the ordinary in some respects.

Figure 1: The number of days each summer when the temperature reached 30°C in Reading. Data for 2019 are valid to 30 June.

Figure 1 shows the annual incidence of such 30°C temperatures. Unsurprisingly, there is a lot of variation from year to year. The summer of 1976 stands out, however: 30°C was reached every day for the fortnight of 25 June to 8 July – while the summer of 1995, with nine 30°C days, has since come the closest to surpassing that year in the record. There is a slight trend towards an increase in the number of 30°C days each year, from 1-2 in 1968 to 2-3 days each year nowadays; if we remove all the 30°C data in 1976, then the expected value in 1968 would be under 1 day per year. Note also that 2019 and the four previous years have all attained 30°C – the first time that five consecutive years have reached this mark.

Despite the warming trend observed in other aspects of Reading (and UK) temperatures, no such trend can be obviously seen in Figure 2, which shows the peak value achieved by the 30°C days.

Figure 2: The highest summer temperature observed in years when 30°C has been reached in Reading. Data for 2019 are valid to 30 June.

Figure 3: The range of dates each summer in Reading with temperatures reaching 30°C. Note that the periods shown may actually contain several spells of 30°C+ temperatures, with cooler days in between.

Figure 3 shows the date ranges each summer during which 30°C has been reached – here there is a suggestion that the 30°C ‘season’ is now starting earlier (e.g. 2017) and ending later (e.g. 2016) than it used to, especially if the unusual summer of 1976 is removed as an outlier.

Finally, perhaps the most remarkable feature of the recent heatwave was the change in temperature leading into, and out of, Saturday 29th. Maximum temperatures were 24.2°C on the 28th, 32.3°C on the 29th and 22.7°C on the 30th, corresponding to changes in the maximum temperature of +8.1 degC and -9.6 degC in successive 24-hour periods. The first of these changes was caused by a change in wind direction and, consequently, in the air source into the 29th – France and the near continent had been suffering from unusually high temperatures for several days before this hot air reached Reading. The second change was the result of a cold front (albeit a dry affair in Reading) that crossed from the west overnight on the 29th/30th.

Such 24-hour changes in maximum temperature are relatively rare in summer – in the 111-year period 1908-2018 there have been 174 changes of 8°C or more over two days in the summer months (June-August), with just 29 of these involving the onset or cessation of 30°C temperatures. The largest 24-hour changes involving one day over 30°C were as follows:

  • 13.1 degC change, 22-23 August 1918 (30.3°C to 17.2°C)
  • 10.6 degC change, 7-8 July 1970 (30.6°C to 20.0°C)
  • 10.5 degC change, 6-7 June 1942 (30.2°C to 19.7°C)
  • 10.0 degC change, 21-22 June 2017 (32.5°C to 22.5°C)

All these large changes involved a sudden cooling that marked the ending of a 30°C spell; the largest change involving the onset of a 30°C spell (before 2019) was one of 7.6°C (from 22.7°C to 30.3°C) on 11-12 July 1912.
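For readers wanting to repeat this kind of bookkeeping on their own station data, a minimal sketch follows. It assumes a CSV file with ‘date’ and ‘tmax’ columns; the file name and layout are placeholders, not the Observatory’s actual format.

```python
# Sketch: count 30 degC days and find large 24-hour swings in the daily maximum.
import pandas as pd

df = (pd.read_csv("reading_daily_tmax.csv", parse_dates=["date"])  # hypothetical file
        .set_index("date")
        .sort_index())

summer = df.index.month.isin([6, 7, 8])

# 30 degC days counted per year
hot = df[(df["tmax"] >= 30.0) & summer]
print(hot.groupby(hot.index.year).size())

# 24-hour changes of 8 degC or more where either day reached 30 degC
change = df["tmax"].diff()  # note: gaps in the record would need masking first
swings = (change.abs() >= 8.0) & \
         ((df["tmax"] >= 30.0) | (df["tmax"].shift(1) >= 30.0)) & summer
print(df[swings].assign(change=change[swings]))
```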

Many hot spells in Reading tend to build up over several days as hot air from a southerly source gradually becomes established over southern England; a sudden end to a hot spell is often marked by a thundery breakdown and noticeable temperature drop.

No pair of these 29 summer temperature changes occurred in two successive 24-hour periods with one day reaching in excess of 30°C. So, the June 2019 heatwave really did come and go within little more than 24 hours in Reading – as the spike in the bold red line in Figure 4 confirms.

Figure 4: Daily maximum and minimum air temperatures, and grass minimum temperatures, in June 2019 in Reading compared to the 1981-2010 daily averages. The spike in the maximum temperature followed some unusually cool days around mid-month.

Posted in Climate, Climate change, University of Reading, Weather

Science outreach in coastal Arctic communities

By: Lucia Hosekova

Figure 1: NASA image by Robert Simmon based on Goddard Institute for Space Studies (GISS) surface temperature analysis data including ship and buoy data from the Hadley Centre. Caption by Adam Voiland.

Few people are more aware of the rapidity of the changes in our oceans and climate than Polar Scientists. Due to an effect known as polar amplification, temperatures in the Arctic regions have been observed to rise approximately twice as fast as the global average temperature (Fig. 1). It is a result of a complex system of feedbacks, including the effects of declining sea ice and changes to the vertical temperature profile [1]. The Arctic is now considered the canary in the coal mine that is climate change – the place where warnings are quickly turning into a worrying reality.

In May 2019, I had an opportunity to visit communities on the north coast of Alaska as part of a small team of scientists hosting outreach events at schools and community centres, hoping to engage in a dialogue with indigenous Inupiat communities that can be beneficial to both sides.

Our first stop was Kaktovik, a town of 300 sitting on an island surrounded by a lagoon on one side and a beach exposed to the Beaufort Sea on the other. This beach, together with many others along the Arctic coastline, is now eroding at unprecedented rates, leaving many communities exposed to flooding.

From the moment we stepped off the small twin-prop plane capable of landing on a lonely runway that emerged from the surrounding whiteness, I immediately gained respect for the people who, by choice or birth, have made their life here in the tundra. With only two flights a day carrying supplies and people along the coast when the weather permits, the communities rely on indigenous ways of hunting and beachcombing to provide supplies and food. Here, the snow machine helps you reach places once the road inevitably ends, a bear gun is as common a tool as an umbrella back home, and every first-grader learns what temperature and wind speed is safe for playing outside.

 The children continued to impress: we spent two days visiting the local school and talking to students of all ages about the climate and ocean, engaging them in interactive demonstrations. We were rewarded by endless curiosity and questions that showed us that they know all too well how vulnerable their island is to permafrost thaw and waves hitting the beach previously protected by sea ice. At the end of our visit, we held a community meeting that served as a showcase of our science and the ways it touches the local life. As we quickly found out, no Inupiat social gathering is complete without a raffle (with prizes ranging from water purifiers to drones) and a generous dinner, and it was up to us to be cooks, hosts and scientists at the same time! It was a lot of fun seeing the children we met during our school visits in the company of their older family members. Here’s a little secret: if you want to make an Inupiat friend, bring Tang.

 After Kaktovik, we headed to Utqiagvik (previously known as Barrow), the largest settlement in the North Slope Borough and the closest the coast has to a town – you can find hotels, restaurants, even a Subway. With access to a large runway and other infrastructure, Utqiagvik is home to a sizeable transient scientific community, occupying a section of the city referred to as NARL (United States Naval Arctic Research Laboratory). In the communal accommodation, we found a vibrant international atmosphere of scientists representing a wide range of fields – from environmental and biological sciences all the way to a NASA team who came to test their new robots in extreme conditions.

The community meeting we organised here, called ‘Sandwich ’n Science’, reflected this varied demographic. Scientists were joined by locals and interested parties, who were aware and outspoken about the challenges their communities are facing in the near future. They want to know how long before the road they take every day will be flooded on a regular basis, whether they need to move out of their house and, most importantly, who is going to pay for it. These are all very good questions, and scientists can play a key role in answering them. The U.S.-funded project CODA (Coastal Ocean Dynamics in the Arctic), which sponsored my outreach trip and further collaboration, aims to study the link between coastal erosion and increasing wave activity in the Arctic, caused largely by sea ice retreat and the diminishing natural protection it used to provide to the coastlines. Waves in the Arctic are a ‘hot’ topic in polar science right now, as their presence alters the sea ice state, increases energy in the upper ocean and may cause complex thermodynamic feedbacks. Along with other researchers at the Centre for Polar Observation and Modelling at the University of Reading, I am involved in an effort to understand the dominant processes in wave-ice interactions and to study their impacts on present and future climate in state-of-the-art sea ice models [2].

 It is one thing to listen to academic seminars and discussions, and it is quite another to come face-to-face with people for whom the sea ice I mostly know from satellite images is the view from their bedroom window, and the effects of polar amplification represent a real threat to their way of life. Not everyone gets to witness the wider consequences of their actions, be it as a scientist or simply an inhabitant of this planet.

The members of the science party in the company of a local guide on a walk around Kaktovik.

Children in Kaktovik launching AEROKATS kites to take aerial photographs of the village.

References: 

  1. Stuecker, Malte & Bitz, Cecilia & C. Armour, Kyle & Proistosescu, Cristian & Kang, Sarah & Xie, Shang-Ping & Kim, Doyeon & Mcgregor, Shayne & Zhang, Peiqun & Zhao, Sen & Cai, Wenju & Dong, Yue & Jin, Fei-Fei. (2018). Polar amplification dominated by local forcing and feedbacks. Nature Climate Change. 8. 10.1038/s41558-018-0339-y.
  2. Bateson, A.W., D.L. Feltham, D. Schröder, L. Hosekova, J.K. Ridley, and Y. Aksenov, Impact of floe size distribution on seasonal fragmentation and melt of Arctic sea ice, The Cryosphere Discuss., https://doi.org/10.5194/tc-2019-44 , in review, 2019.
Posted in Arctic, Climate, Climate change, Cryosphere, Outreach

How climate modelling can help us better understand the historical temperature evolution

By: Andrea Dittus

Figure 1: Annual global mean surface temperatures from NASA GISTemp, NOAA GlobalTemp, Hadley/UEA HadCRUT4, Berkeley Earth, Cowtan and Way, Copernicus/ECMWF and Carbon Brief’s raw temperature record. Anomalies plotted with respect to a 1981-2010 baseline. Figure and caption from Carbon Brief (https://www.carbonbrief.org/state-of-the-climate-how-world-warmed-2018).

Earth’s climate warmed by approximately 0.85°C over the period from 1880 to 2012 [IPCC, 2013] due to anthropogenic emissions of greenhouse gases. However, the rate of warming throughout the twentieth and early twenty-first centuries has not been uniform, with periods of accelerated warming and cooling (Figure 1). Besides greenhouse gases, a key player in determining the historical evolution of global temperatures is anthropogenic aerosol. Aerosols are airborne particles that scatter or absorb incoming solar radiation and affect cloud properties, thereby altering the surface energy budget. Different aerosol species have different properties and climate impacts, but perhaps the most important in the context of global climate variability are sulphate aerosols, which account for a large proportion of anthropogenic aerosol. As a scattering aerosol, sulphate has a cooling effect on global climate and has offset some of the warming induced by emissions of greenhouse gases. Although we know that aerosols play an important role for global climate, the magnitude of historical aerosol forcing remains uncertain [e.g. Stevens, 2015; Kretzschmar et al., 2017; Booth et al., 2018].

In climate models, the representation of aerosol processes is very diverse, resulting in a wide spread in the magnitude of aerosol forcing across different climate models [Wilcox et al., 2015]. Consequently, the climate effects of aerosols also differ greatly from model to model. Studies have suggested that aerosol forcing can influence the phasing of key modes of multi-decadal variability such as the Atlantic Multidecadal Variability [Booth et al., 2012] and the Pacific Decadal Oscillation [Smith et al., 2016], although the degree of influence is still unclear [e.g. Zhang et al., 2013; Oudar et al., 2018]. Key open questions are whether these findings are model dependent, influenced by the magnitude of simulated aerosol forcing, by ensemble size, or by a combination of these.

Figure 2: Simulated temperatures for each ensemble member across the different aerosol scalings for the period 1941 to 1970. The numbers 0.2 to 1.5 indicate the scaling factor that was applied to the anthropogenic aerosol emissions. Blue indicates that temperatures are cooler than the reference temperature defined as the 1.0 scaling ensemble mean 1850-2014 climatology, red indicates warmer temperatures.

The SMURPHS project (Securing Multidisciplinary Understanding of Hiatus and Surge Events, https://smurphs.leeds.ac.uk/) is a multi-disciplinary project that aims to improve our understanding of the causes of variations in the observed rate of warming. As part of this project, we have designed an ensemble of historical climate simulations with the HadGEM3-GC3.1 climate model, in which anthropogenic aerosol emissions are scaled up or down to sample a wide range of historical aerosol forcing. The emergence of large ensembles in the climate modelling community has highlighted the importance of sampling a large number of realisations, to better estimate the forced response (common to all members run with the same forcings) and the magnitude of internal variability (individual to each member). As a compromise between the need to sample a wide range of aerosol forcing and multiple initial conditions, we opted to run four initial-condition members for each of five aerosol scalings. Figure 2 illustrates the effect of aerosol forcing on temperature in the SMURPHS ensemble for the period 1941 to 1970, a period particularly sensitive to aerosol forcing (not shown). Along the x-axis, the different magnitudes of aerosol forcing show the sensitivity of the simulations to aerosol forcing; along the y-axis, each line represents a single realisation, highlighting the role of internal variability. The simulations with higher aerosol emissions are systematically colder than those with lower aerosol emissions, consistent with the expected response to increasing aerosol forcing across the ensemble.
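Schematically, the ensemble is a small grid of simulations, and the split into forced response and internal variability falls out of averaging over the initial-condition members. The sketch below uses placeholder data; only the 0.2 and 1.5 scaling end points are stated above, so the intermediate values here are assumptions.

```python
# Sketch of the SMURPHS-style design: 5 aerosol scalings x 4 members.
import numpy as np

scalings = [0.2, 0.4, 0.7, 1.0, 1.5]  # end points from the text; middle values assumed
n_members, n_years = 4, 165           # historical period 1850-2014

rng = np.random.default_rng(42)
# gmst[s, m, t]: global-mean temperature anomaly for scaling s, member m, year t
gmst = rng.normal(size=(len(scalings), n_members, n_years))  # placeholder data

forced = gmst.mean(axis=1)            # forced response: average over members
internal = gmst - forced[:, None, :]  # internal variability: member deviations

# e.g. spread of the 1941-1970 mean across members, for each scaling
window = slice(1941 - 1850, 1970 - 1850 + 1)
print(gmst[:, :, window].mean(axis=2).std(axis=1))
```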

Going forward, these simulations will allow us to investigate how variations in historical aerosol forcing have shaped climate variability in the twentieth and early twenty-first century, from global mean surface temperatures to multi-decadal modes of variability and beyond.

References: 

Booth, B. B. B., N. J. Dunstone, P. R. Halloran, T. Andrews, and N. Bellouin (2012), Aerosols implicated as a prime driver of twentieth-century North Atlantic climate variability, Nature, 484, 228-232, doi:10.1038/nature10946

Booth, B. B. B., G. R. Harris, A. Jones, L. Wilcox, M. Hawcroft, and K. S. Carslaw (2018), Comments on “Rethinking the Lower Bound on Aerosol Radiative Forcing,” J. Climate, 31, 9407–9412, doi:10.1175/JCLI-D-17-0369.1.

IPCC, 2013: Summary for Policymakers. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Kretzschmar, J., M. Salzmann, J. Mülmenstädt, O. Boucher, and J. Quaas (2017), Comment on “Rethinking the Lower Bound on Aerosol Radiative Forcing,” J. Climate, 30, 6579–6584, doi:10.1175/JCLI-D-16-0668.1.

Oudar, T., P. J. Kushner, J. C. Fyfe, and M. Sigmond (2018), No Impact of Anthropogenic Aerosols on Early 21st Century Global Temperature Trends in a Large Initial-Condition Ensemble, Geophysical Research Letters, 45, 9245-9252, doi:10.1029/2018GL078841.

Smith, D. M., B. B. B. Booth, N. J. Dunstone, R. Eade, L. Hermanson, G. S. Jones, A. A. Scaife, K. L. Sheen, and V. Thompson (2016), Role of volcanic and anthropogenic aerosols in the recent global surface warming slowdown, Nature Clim. Change, 6, 936–940, doi:10.1038/nclimate3058.

Stevens, B. (2015), Rethinking the Lower Bound on Aerosol Radiative Forcing, J. Climate, 28, 4794–4819, doi:10.1175/JCLI-D-14-00656.1.

Wilcox, L. J., E. J. Highwood, B. B. B. Booth, and K. S. Carslaw (2015), Quantifying sources of inter-model diversity in the cloud albedo effect, Geophysical Research Letters, 42, 1568–1575, doi:10.1002/2015GL063301.

Zhang, R. et al. (2013), Have Aerosols Caused the Observed Atlantic Multidecadal Variability? J. Atmos. Sci., 70, 1135–1144, doi:10.1175/JAS-D-12-0331.1.

 

Posted in Aerosols, Climate, Climate change, Climate modelling

The OpenIFS User Workshop

By Bob Plant

I’ve been asked to write a blog post to go live on 17 June, the opening day of the 2019 OpenIFS user workshop. As I’m involved in the organisation, it would almost seem strange not to talk a little about that.

The IFS (Integrated Forecasting System) is the modelling system developed and used at ECMWF, and it underlies all of their forecasting, data assimilation and reanalysis activity. Brief outlines can be found here for the dynamics and here for the physics. The OpenIFS version is designed to be used outside of the centre. This allows universities to collaborate more easily with ECMWF on research projects and supports more teaching-focussed activities.

Students hear a great deal about weather and climate modelling during their studies but have traditionally had little or no opportunity to work directly with the models. Even those whose main interests do not lie in numerical modelling will inevitably rely on modelling results, or will want to analyse model data. So some hands-on modelling experience is valuable, just as those of us who take a more theoretical or model-based perspective nonetheless benefit from being exposed to real experimental data. It’s important that the models should not be looked upon as black boxes that magically generate data, but that students get the opportunity to take out a torch and at least have a bit of a look around in the murky interior.

At the same time, there are obvious practical issues with using full-scale operational-type models in a classroom context. We often look for substantial high-performance computing for model-based research projects and expect to submit jobs that return results after some hours, or perhaps days. Also, while a model might be very nicely designed for the operational or expert research context, it may not be easy for a non-expert to pick up and get started with quickly. The OpenIFS provides a pretty good balance: it is relatively easy to use, but not so easy as to encourage a black-box syndrome.

I was keen to try out OpenIFS for teaching applications in the department, starting with an MSc dissertation project in summer 2015. While not totally plain sailing, it was sufficiently encouraging to offer something for the MSc team project week in the following year, with Sue Gray and me each supervising a team so that we could help each other out with any teething issues. That worked well, and further team projects and dissertation projects have followed. There is more about those experiences in a short article in the ECMWF newsletter.

Getting back to this week’s workshop, it is a biennial event to introduce researchers from across Europe (and occasionally further afield) to OpenIFS. We also have a scientific theme concerning the impact of moist processes on storm evolution, and there will be various talks and posters on this, alongside others on techniques and examples in using the model for research projects.

The key link between the modelling and the theme is our choice of case study. Storm Karl occurred in September 2016. It started out as a tropical system before undergoing an extratropical transition and ultimately produced much rain over Norway. It was observed as part of the NAWDEX (North Atlantic Waveguide and Downstream Impact Experiment) field campaign and there is an overview in this BAMS article. Apparently, it is the first system to undergo an extratropical transition to have been observed with research aircraft at each stage of its evolution, and so I would imagine that it will continue to be a focus of research over the next few years. The article highlights the importance of mid-level moisture, especially for the behaviour of the “warm conveyor belt” in the extratropical regime. Below are example plots from a preliminary OpenIFS simulation. There are also some very nice loops of the satellite imagery, and Met Office global model forecasts at this page, courtesy of Ben Harvey. We plan to perform a variety of modelling experiments and to interpret and understand our results by drawing on ideas from the talks and posters, and of course, plenty of discussions amongst the participants.

Example plots from a preliminary model run, for which thanks to Marcus Koehler. Left: 10m winds at T+42, 18UTC on 26 September. Karl is to the south-east of Greenland. Right: precipitation at the same time.

Numbers are limited for the hands-on computing part of the workshop, but if you are around in Reading and would like to come along to some interesting talks then feel free to join us in GU01 any morning from Tuesday to Friday. Or if you would like to talk about storms or modelling with 50-odd researchers also interested in such things, then again feel free – we’ll be in 1L61 for Tuesday to Friday morning coffee and over the lunch break. Our programme can be seen here.

I mustn’t forget to give credit where it is due. Under the small assumption that all is going to go wonderfully well, that will have been due to Glenn Carver, Gabi Szepszo and Marcus Koehler from ECMWF, and from the Reading side to Sue Gray and myself, Kathryn Boyd, Maria Broadbridge, Ben Harvey and Jake Bland. And finally thanks to our sponsors: we are funded by bringing together contributions from EGU, ESiWACE, the university environment theme, the department visitor fund and ECMWF.

Posted in Academia, Climate, extratropical cyclones, Numerical modelling, Teaching & Learning

Climate Action by Reducing Digital Waste

By: John Methven

Climate action has never been higher on the global agenda.

There is a pressing need to change our activities and habits, both at work and at home, to steer towards a more sustainable future. National governments, public sector organizations and businesses are setting targets to achieve net zero carbon emissions by 2030. Immediately, the term “carbon emissions” focuses attention on activities burning fossil fuel: driving a car, taking a train or catching a flight. However, when our Department first attempted its own carbon budget analysis in 2007, including the contribution of our activities to power consumption and carbon emissions far from Reading, we found that about 63% was attributable to computer usage, compared with 24% for business travel, 8% for gas (heating) and 5% for building electricity. Commuting to work was not included, although we are fortunate in that the majority of staff and students walk or cycle to work. The computing carbon cost was not even dominated by local power consumption by our computers (18%) or air-con in server rooms on site (7%), although the indirect carbon costs of the manufacture and ultimate disposal of those computers were not accounted for. The overwhelming contribution was from the extensive use of remote computing facilities (38%) – namely the supercomputers used to calculate weather forecasts and climate projections and to extend human knowledge in atmospheric and oceanic science.

What a conundrum! While improved weather forecasts save lives worldwide through disaster risk mitigation, and also improve business efficiency, the daily creation of the forecasts is contributing to climate change, which is increasing environmental risk. Back in 2008, many of the top 100 most powerful supercomputers were used for science, among them those at the leading global weather forecasting centres and international facilities enabling global climate modelling. Only 10 years on, the global cloud computing industry dwarfs the scientific supercomputing activity; even so, the global climate community takes supercomputing energy demands seriously. For example, scientists plan (Balaji et al., Geosci. Model Dev., 2017) to measure the energy consumed during the next generation of simulations of future climate (CMIP6) that will contribute to the United Nations IPCC Sixth Assessment Report. As part of that effort they have developed new tools to share experiment designs and simulations so that future computer usage can be minimized (Pascoe et al., Geosci. Model Dev. Discuss., 2019). Many supercomputing facilities now have a renewable energy supply, and there is even a Green500 list ranking supercomputers by energy efficiency.

However, the revolutionary surge in digital storage has been outside the science sector: Gigabit magazine lists the top 10 cloud server centres in 2018 by capacity. The electricity consumption of the largest data centres worldwide is cited in the range 150-650 MW. To put that into context, a single data centre can consume electricity equivalent to 2% of the entire UK electricity demand (34,000 MW)! Although some cloud server centres source electricity from renewables, such as dedicated hydro-electric plants, many do not, and the total carbon footprint of cloud servers is huge. For example, Jones (Nature, 2018) states, “data centres use an estimated 200 terawatt hours (TWh) each year. That is more than the national energy consumption of some countries, including Iran, but half of the electricity used for transport worldwide, and just 1% of global electricity demand.” Some estimate that the carbon footprint from ICT (including computing, mobile phones and network infrastructure) already exceeds that of aviation. Although both sectors are expanding rapidly, cloud storage is expanding much faster, with projections that over 20% of global electricity consumption will be attributable to computing by 2030. Much of the electricity is used to cool the computers as well as to power the hardware, and waste heat and water consumption are significant environmental issues. Although renewable power generation reduces the environmental impact, it is worth pausing for thought – what is all this data that is being stored?

Personal cloud storage is dominated by digital photos. Imagine you have been out with friends: your phone uploads the images as soon as it can sync to the cloud. No action is required from you, but should you think twice? You have contributed to carbon emissions, and worse still, the contribution will keep growing for as long as you keep the data. How many of those photos will you look at again? Perhaps at least choose the best photos to keep and delete the rest?

In a work context, the storage for most businesses is dominated by email folders. Globally, 85% of email data volume is spam, and 85% of that makes it into the inbox. Few people have time to go through their folders to delete unwanted messages, and the volume mounts up. Emails arrive continuously, many with attachments, containing unsolicited images and hidden data on fonts (the content could have been relayed in plain-text messages). Relentlessly piling up into a teetering heap of digital waste – requiring power to keep it alive – like a Doctor Who monster waiting just in case its master wants to visit tomorrow (artist’s impression?). Is the neglected monster sad? Perhaps a topic for AI fans.

What can we do? What can you do? An effective contribution to climate action now would be to clear out your waste (somewhere out there on spinning disk), junk those emails and rubbish photos, and feel good about it. Sorting tens of thousands of items into “keeps” and junk is a daunting task. Moving forward, wouldn’t it be good if all senders put a “use by date” on their emails and the recipient’s mail tool automatically deleted each message when its expiry date was reached? Then we would know that the messages we send, even if unloved, at least do not contribute long-term to global digital waste.
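As a thought experiment, the “use by date” idea needs nothing exotic: if senders stamped messages with an expiry header, a recipient-side filter could sweep them out automatically. The sketch below is hypothetical throughout – the server, the credentials and the assumption that senders set an “Expires” header are all placeholders.

```python
# Hypothetical sketch: delete mail whose (assumed) 'Expires' header has passed.
import email
import imaplib
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

with imaplib.IMAP4_SSL("imap.example.org") as imap:    # placeholder server
    imap.login("user", "app-password")                 # placeholder credentials
    imap.select("INBOX")
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, parts = imap.fetch(num, "(BODY.PEEK[HEADER])")
        headers = email.message_from_bytes(parts[0][1])
        expires = headers.get("Expires")               # sender-declared use-by date
        if not expires:
            continue                                   # no expiry: keep the message
        when = parsedate_to_datetime(expires)
        if when.tzinfo is None:
            when = when.replace(tzinfo=timezone.utc)   # assume UTC if unstated
        if when < datetime.now(timezone.utc):
            imap.store(num, "+FLAGS", "\\Deleted")     # flag expired digital waste
    imap.expunge()                                     # actually free the storage
```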

All images have been spared in the creation of this article.

 

Posted in Climate, Climate change

Teaching in China and some Good and Bad Teaching Practices

By: Hilary Weller

In April 2019 I visited the Nanjing University of Information Science and Technology (NUIST), where students are studying for a degree in Meteorology run jointly between Reading and NUIST. Staff from Reading visit a couple of times a year to observe the lectures taught by the NUIST staff, teach the students and make new research links. Students study years 1 and 2 in Nanjing and then come to Reading for their 3rd year. This is an interesting teaching challenge, because Reading staff teach a random two weeks from three undergraduate modules.

I was sent PowerPoint files of seminar-style slides — some bullet points, nice pictures and topics for discussion. I can imagine that these could make a terrific lecture if delivered by a charismatic visionary in the field whom people would flock to hear, using the slides as illustrative prompts. But this is not me and these were not my slides. So, I needed to plan my teaching more carefully. I was asked to teach about measurements and instrumentation and tropical meteorology – a particular challenge, as my area of expertise is numerical modelling of the atmosphere. I spent plenty of time learning about these subjects and planning my teaching.

I enjoyed learning about measurements of atmospheric radiation from Giles Harrison’s book Meteorological Measurements and Instrumentation – so much so that I have started making some YouTube teaching videos and some online quizzes.

I observed loads of lectures while visiting NUIST, I have observed lectures in Reading, I have attended lectures as a student, and I have delivered good and bad lectures myself. Based on this, I will describe some difficult teaching situations and how they can be turned around, with or without preparation.

An Example: A Derivation

A lecturer (you?) plans to go through a derivation with students. In Meteorology it might be, for example, deriving thermal wind balance. You would like them to be able to provide a clear, thorough derivation, explaining each step in full sentences. You have prepared some slides which outline the derivation but do not include complete sentences, because you do not want to clutter your slides with words. You will say the linking sentences instead. This is a problem: you cannot expect the students to be able to write a good derivation if you haven’t given them a complete example. So, you might write it out in full for them and give them a copy of the lecture notes before class. But then they have nothing to do other than try to listen during your class. This breaks my first rule:

Make sure that the students have something to do during your class.

To give the students something to do, you go through the derivation on the board and ask them to volunteer what they think the next step might be. This is a natural way of explaining a derivation. However, if you do this with a class, one or two students may give you the answers you want, and the rest might be getting lost without asking questions. After the same person has answered a few questions, you direct your next question to a student who has so far remained quiet. Who cannot answer and is now humiliated. My next rule:

Do not single students out to answer questions.

These two rules seem to be mutually exclusive. I do not know the best way to teach while following these two rules, but I have some suggestions which will also work for large classes.

  1. Notes with gaps.

You could supply the students with printed notes with gaps, which the students fill in during the lecture. They may copy the text for the gaps from the board or work it out for themselves. This way, the students take away a well-written derivation, with all of the linking sentences between equations, and they have something to do and think about during the class. After you have gone through the derivation, you could give them a couple of similar derivations to work through in pairs, asking for help if needed.

  2. Flipped classroom.

This teaching style can work very well but can also take a lot of preparation – you need to prepare material for the students to work through before and during class. The pre-class activity might be to read a section of a book or watch some SHORT videos. But you need to be careful not to overload the students. The activity before class should be straightforward and not take longer than the homework would have taken (which you must cancel). During class you can help them with more challenging material (assuming they have had time to go through the material before class). The tasks during class might be similar derivations or using the equation derived to explain some observed phenomena.

  3. Multiple choice questions.

I am keen on these as a quick way of engaging the whole class, and they do not need much preparation. When you come to a point where you would like to ask the class a question, you can instead write 3 or 4 possible answers on the board. They don’t all need to be plausible; you are just trying to encourage engagement. Then ask the students to show 1, 2, 3 or 4 fingers in front of their chest. That way you can see all the answers, the students cannot easily see each other’s answers (to avoid copying or humiliation), and every student is required to try to think of an answer. You may need to encourage them to guess, reassuring them that their answer doesn’t count for anything.

Another trap that people sometimes fall into:

If a student answers a question wrong, do not ask them to justify their answer. Ask someone else or explain it yourself.

  4. More challenging questions.

If you want to ask more challenging questions, you will need to give the students more time to think about their answer, perhaps rereading their notes or discussing with their neighbour. You should find out about “think-pair-share” or peer instruction. You can also use online quizzes, which are popular with students but more time-consuming to set up. Another rule:

Do not ask difficult or open-ended questions without giving the students time to think about, research or discuss an answer.

Also, make sure that your questions make sense and have well-defined answers. Check with a colleague to make sure that they are clear.

  5. Old fashioned teaching.

It may seem old fashioned, but when I was teaching in China I asked the students to read a sentence in turn from the slides, fill in some simple gaps and copy text from the board. In the feedback, some of the students said they liked this approach, having an opportunity to practise speaking English and answer simple questions.

I would welcome more ideas for engaging all students while not humiliating anyone. Please leave a comment.

Posted in Academia, Climate, Teaching & Learning

What sets the pattern of dynamic sea level change in the Southern Ocean?

By: Matthew Couldrey

Figure 1a: Multi-model mean projection of dynamic and steric (i.e. due to thermal and/or haline expansion/contraction) sea level rise averaged over 2081-2100 relative to 1986-2005, forced with a moderate emissions scenario (RCP4.5), including 0.18 ± 0.05 m of global mean steric sea level change. b: Root-mean-square spread (deviation) of projections from the 21-model ensemble. (From Church et al. 2013, their Figure 13.16.)

Greenhouse gas forced climate change is expected to cause the global mean sea level to rise over the coming century, which will affect millions of people (Brown et al 2018) and cost trillions of US dollars (Jevrejeva et al. 2018). However, local factors are important in determining how much sea level change any particular place will experience, and these regional effects can double or entirely counteract the global mean change (Figure 1a). Furthermore, regional patterns of sea level change are challenging to predict, and climate models differ in their projections of this spatial pattern (Figure 1b). My research as part of the FAFMIP project (Flux Anomaly Forced Model Intercomparison Project, http://fafmip.org) aims to better understand why models disagree on the distribution of future sea level change.

Dynamic sea level (ζ) is the deviation of local sea surface height (above a geopotential surface) from its global mean. Dynamic sea level is zero when averaged over the whole ocean surface, and its change over time (Δζ) shows the local change relative to the global mean. Therefore, positive values of Δζ indicate locations where sea level rise is larger than the global mean. Note that negative values of Δζ can correspond to locations of sea level rise (where the local change is smaller than the global mean, but still a rise) as well as of sea level fall.
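In practice, computing Δζ from model output amounts to removing the area-weighted global mean from the sea surface height field at each time step and differencing two periods. A minimal sketch, assuming CMIP-style variable names (‘zos’ for sea surface height, ‘areacello’ for cell areas); the file names and the (‘j’, ‘i’) grid dimensions are assumptions that depend on the model:

```python
# Sketch: dynamic sea level change as local SSH minus the area-weighted
# global-mean SSH, differenced between two periods.
import xarray as xr

ssh = xr.open_dataset("zos.nc")["zos"]                 # sea surface height (m)
area = xr.open_dataset("areacello.nc")["areacello"]    # ocean cell area (m2)

gmean = (ssh * area).sum(dim=("j", "i")) / area.sum()  # global mean at each time
zeta = ssh - gmean                                     # averages to zero over the ocean

dzeta = (zeta.sel(time=slice("2081", "2100")).mean("time")
         - zeta.sel(time=slice("1986", "2005")).mean("time"))  # positive: rise above the global mean
```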

The hotspots in Figure 1b show locations where models from the previous generation of coupled climate models (CMIP5) disagree on the spatial pattern of sea level rise. The Southern Ocean is one of the regions where the pattern is uncertain, owing to a mixture of inter-model spread in 1) the ocean response to wind forcing, 2) changes in circulation, and 3) the redistribution of heat and freshwater. In an attempt to disentangle these causal processes, my research makes use of simulations in which the oceans of several different models are forced with exactly the same changes in air-sea fluxes of heat, momentum (wind) and freshwater.

Figure 2: Thermal and haline contributions to dynamic sea level change across five Atmosphere-Ocean models, rows correspond to different models (named in left hand legends). Left panels: Zonally integrated change in ocean heat content per degree of latitude. Right panels: Zonal mean dynamic sea level change (Δζ, solid lines), and contributions from thermal expansion alone (dotted lines) and thermal plus haline effects (dashed lines).

The Southern Ocean dynamic sea level response is characterised by a strong north-south gradient, with relatively little change near the Antarctic continent and a northward-increasing rise (Figure 2, solid lines of right panels). This change arises partly because more heat is added to the lower latitudes of the Southern Ocean, peaking around 40°S: note the ‘hump’ in the zonal ocean heat content change (left panels of Figure 2). However, the zonal dynamic sea level change (Δζ) shows a gradient and then a plateau (Figure 2, solid lines of right panels) rather than a ‘hump’, unlike the zonal heat content change. This happens for two reasons. First, the tendency of seawater to expand or contract when heated changes markedly as you move from 70°S to 45°S, so the same heat input causes more dynamic sea level change at lower latitudes (where seawater is warmer) than at higher latitudes (where the temperature is lower). This ‘thermosteric’, or thermal expansion, effect alone (Figure 2, dotted lines of right panels) would act to emphasise the ‘hump’ in sea level change suggested by the heat content change. Second, the ‘haline contraction’ effect works against the thermal effects and flattens the hump into the gradient-plateau feature that we observe (Figure 2, dashed lines of right panels closely match the solid lines).
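The temperature dependence of the thermosteric effect is easy to demonstrate with the TEOS-10 equation of state. A minimal sketch using the gsw library follows; the salinity and temperature values are illustrative, not taken from the models.

```python
# Sketch: seawater's thermal expansion coefficient grows strongly with
# temperature, so equal heat input expands the water column more at the
# warmer, lower latitudes of the Southern Ocean.
import gsw

SA = 34.7   # Absolute Salinity (g/kg), a typical open-ocean value
p = 0.0     # sea pressure (dbar), i.e. at the surface

for CT in (0.0, 5.0, 10.0, 15.0):   # Conservative Temperature (deg C)
    alpha = gsw.alpha(SA, CT, p)    # thermal expansion coefficient (1/K)
    print(f"CT = {CT:4.1f} degC -> alpha = {alpha:.2e} 1/K")
# alpha roughly triples between 0 and 10 degC, so the same warming raises
# dynamic sea level several times more near 45 S than near 70 S.
```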

This work highlights that while ocean heat uptake sets the broad patterns of sea level change in the Southern Ocean, it’s the salinity changes that set the details. Furthermore, all the different models shown in Figure 2 were forced with the same pattern and magnitude of air-sea heat flux change. This means that the diversity in patterns of dynamic sea level change across different models largely arises due to differing ocean responses to climate change, rather than each model’s climate sensitivity (i.e. how much a particular model warms per unit of greenhouse gas emitted).

References

Brown, S., R. J. Nicholls, P. Goodwin, I. D. Haigh, D. Lincke, A. T. Vafeidis, and J. Hinkel, 2018: Quantifying Land and People Exposed to Sea-Level Rise with No Mitigation and 1.5°C and 2.0°C Rise in Global Temperatures to Year 2300. Earth’s Future, 6, 583-600, https://doi.org/10.1002/2017EF000738

Church, J. A. , Clark, P. U., Cazenave, A., Gregory, J. M., Jevrejeva, S., Levermann, A., Merrifield, M. A., Milne,  G. A., Nerem, R. S., Nunn, P. D., Payne, A. J., Pfeffer, W. T., Stammer, D., and Unnikrishnan, A. S.: Sea Level Change, in: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, 2013. DOI 10.1017/CBO9781107415324.026

Jevrejeva, S., L. P. Jackson, A. Grinsted, D. Lincke, and B. Marzeion, 2018: Flood damage costs under the sea level rise with warming of 1.5°C and 2°C. Environ. Res. Lett., 13, https://doi.org/10.1088/1748-9326/aacc76

 

Posted in antarctica, Climate, Climate change, Climate modelling, Oceans

What do we do with weather forecasts?

By: Peter Clark

As I sat in the Kia Oval in Kennington having taken a day off to watch the first One Day International between England and Pakistan, I had plenty of time to appreciate the accuracy and utility of weather forecasts. The afternoon proved to be a microcosm of both the successes of modern weather forecasting and issues surrounding the use of forecasts in more serious applications (though I may well join in with the cries of “there’s nothing more serious than Cricket!”).

First question: to go to the match or not? When we bought the tickets six months ahead, we had just climatology to go on. Early May is a risk, but not very different from later in the season. By the time forecasts become available, the question is then “is it worth turning up?” By the Friday five days before, there was a very strong consensus amongst computer forecasts that a cyclone would be tracking across England on the day, most likely during the first half of the day. In fact, the Met Office’s ‘deterministic’ forecast proved very accurate, with the continuous heavy rain passing through London by midday. However, behind the surface front close to the cyclone centre, cold air aloft was overrunning warmer air at the surface – warmth given an additional boost as the surface air came from the Atlantic and passed over land. Warm (and moist) air beneath colder air leads to the likelihood of dreaded convective showers in the afternoon!

There have been real ‘revolutions’ in forecasting over the last few decades. At the centre lies the combination of vast improvements to computer power, more accurate computer models, vast increases in observations to ‘correct’ the data in the models, and development of much more powerful methods to use (or ‘assimilate’) those observations. An extratropical cyclone, or ‘low-pressure system’, is relatively large and long-lived. In this case, the system was at the small end of the scale and quite intense, roughly the size of England – say 500 km across with a life cycle of at least a day. 30 years ago, our computer models had to represent these systems with a grid of points not much better than 100 km apart (see the Met Office’s history of NWP, for example). Today our forecast models have little problem actually representing a cyclone. In practice, they are often predicted in forecast models even before there’s any clear sign of them in observations. While there will still be uncertainty in track and intensity, on the whole they are astonishingly well forecast several days ahead.

Here lies the problem. Showers are much smaller, say 10 km across with the core less than 1 km, and have a lifetime of an hour or so. These cannot even be directly represented in our global models. The most recent ‘revolution’ in forecasting has been the development of so-called ‘convection-permitting’ models (Clark et al. 2016). Regional models (with a grid spacing around 2 km) at last can represent showers, but not well. Something resembling showers can form and give us some very useful guidance on the probability that we’ll be affected by a shower. Such models are now helping produce more accurate flood forecasts, especially for smaller, faster reacting catchments (Dance et al. 2019). Within the ParaCon project we are working hard to find ways to improve the models.

Figure: Radar estimates of the surface rainfall rate at 17:00, 18:00 and 19:00 BST with inset showing the hail storm that hit the Kia Oval at 17:00 BST. (Courtesy of the Met Office). Showers are triggered along a ‘peninsular convergence’ line extending from Cornwall all the way to London that is present for several hours. Clearly, much depended on whether one was beneath or to one side of this.

The message was the same in the morning before the game. As the rain from the cyclone cleared, a high probability of seeing one or two showers or even thunderstorms during the afternoon – which is precisely what happened. We had a couple of flurries of not very intense rain, which did little to interrupt play, plus two hail storms; pea-sized hail fairly typical of a British summer shower. Each lasted about 5 minutes. The inset in the figure shows the hail storm that hit the oval around 17:00 BST. A mere speck on the scale of England, but locally extremely intense. A perfect forecast! However, a computer model run even a couple of hours before could not predict the precise shower hitting our precise location.

What more could we do? I spent the afternoon trying to look at the Met Office’s weather radar composites on my phone. A new rainfall picture is produced every 5 minutes. On the intermittent occasions when I could access data, the showers were very clearly tracked; interestingly they were forming along a broad ‘peninsular convergence’ line that could be tracked back to Land’s End. Along this line, air coming from either side of the south west peninsula meets and so is forced upwards, triggering showers (Golding et al, 2005). This is shown in the three radar images in the figure. Each is an hour apart, but this convergence line is very persistent. These lines were the topic of the COPE field campaign in 2013 (Leon, et al. 2016). This organisation by topography radically changed the overall predictability of the showers. The sharp-eyed reader might also notice an arc of showers moving east from central England into East Anglia, and it is probably no coincidence that the heaviest storm happened where this met the convergence line. Nevertheless, as we sat on the edge of this line, the best we could hope for several hours ahead was a realistic assessment of the probability of having a shower.

This example illustrates very well that the weather forecast is not the only piece in the jigsaw. First, and foremost, there is the investment in resilience; the Oval ground is very well prepared and drained, but there is a limit to what it can cope with. Similarly, investment in flood defences is often controversial, and the Environment Agency have recently announced that climate change is forcing a ‘new approach to flood and coastal resilience’ that may mean not investing in flood defences in some regions.

Second, there is preparedness. The available forecasts had prepared us well for the likelihood of showers. We equipped ourselves as well as we could. I kept a ‘weather eye’ on the radar, at least as far as technology allowed me. I could see the hail storms coming. In this case, the covers were deployed fast enough to protect the pitch and run-ups. Use of forecasts could enable the deployment of defences that take longer to deploy but ultimately save playing time. Currently, forecasts are used by the authorities to help emergency services prepare for likely (but rarely certain) flooding. How best to educate and prepare users including the public to respond to forecasts is one of the leading questions driving research, for example the World Meteorological Organisation’s ‘HIWeather Project’, which recognises the key importance of “better understanding by social scientists of the challenges to achieving effective use of forecasts and warnings” (HIWeather Impact plan). A key part of this is understanding the inevitability of false alarms. We have to be prepared to see play stopped because a forecast (in this case with a very short lead time) says there is a probability of a heavy shower. The price for not being pre-emptive may be the abandonment of the match. Which happened two and a half hours after the rain and hail stopped.

The modern challenge of forecasting is not just to improve the forecast (which may be an exercise in diminishing returns) but also to find ways to make sure that systems are in place to make full use of them and users are well-prepared to take action and understand the actions of others.

References:

Golding, B.W., Clark, P.A. and May, B., 2005: The Boscastle Flood: Meteorological Analysis of the Conditions Leading to Flooding on 16 August 2004. Weather, 60, 230-235.

Clark, P., Roberts, N., Lean, H., Ballard, S. P. and Charlton-Perez, C., 2016: Convection-permitting models: a step-change in rainfall forecasting. Meteorological Applications, 23 (2). 165-181. ISSN 1469-8080 doi: https://doi.org/10.1002/met.1538

Dance, S. L., Ballard, S. P., Bannister, R. N., Clark, P., Cloke, H. L., Darlington, T., Flack, D. L. A., Gray, S. L., Hawkness-Smith, L., Husnoo, N., Illingworth, A. J., Kelly, G. A., Lean, H. W., Li, D., Nichols, N. K., Nicol, J. C., Oxley, A., Plant, R. S., Roberts, N. M., Roulstone, I., Simonin, D., Thompson, R. J. and Waller, J. A., 2019: Improvements in forecasting intense rainfall: results from the FRANC (forecasting rainfall exploiting new data assimilation techniques and novel observations of convection) project. Atmosphere, 10 (3), 125. ISSN 2073-4433 doi: https://doi.org/10.3390/atmos10030125

Leon, D. C., French, J. R., Lasher-Trapp, S., Blyth, A. M., Abel, S. J., Ballard, S., Barrett, A., Bennett, L. J., Bower, K., Brooks, B., Brown, P., Charlton-Perez, C., Choularton, T., Clark, P., Collier, C., Crosier, J., Cui, Z., Dey, S., Dufton, D., Eagle, C., Flynn, M. J., Gallagher, M., Halliwell, C., Hanley, K., Hawkness-Smith, L., Huang, Y., Kelly, G., Kitchen, M., Korolev, A., Lean, H., Liu, Z., Marsham, J., Moser, D., Nicol, J., Norton, E. G., Plummer, D., Price, J., Ricketts, H., Roberts, N., Rosenberg, P. D., Simonin, D., Taylor, J. W., Warren, R., Williams, P. I. and Young, G., 2016: The COnvective Precipitation Experiment (COPE): investigating the origins of heavy precipitation in the southwestern UK. Bulletin of the American Meteorological Society, 97 (6). 1003-1020. ISSN 1520-0477 doi: https://doi.org/10.1175/BAMS-D-14-00157.1

Posted in Climate, Predictability, Weather, Weather forecasting

Rescuing the Weather

By: Ed Hawkins

Over the past 12 months, thousands of volunteer ‘citizen scientists’ have been helping climate scientists rescue millions of lost weather observations. Why?

Figure 1: Data from Leighton Park School in Reading from February 1903.

If we are to inform decisions about adapting to a changing climate we need to better understand the risk from extreme weather events, and whether this risk is changing. This requires long and detailed records of the weather. In the UK we are fortunate that meteorologists have recorded the weather across the country for over 150 years. However, most of their observations are still only available as the original paper copies, stored in large archives (Figure 1).

Currently, the only way to transform these observations into useful data is to manually transcribe them from paper to computer. This is an enormous task, and it would be much easier if it were performed by thousands of people rather than by a single PhD student.

The WeatherRescue.org website has been set up to enable anyone to help. The first phase of the project recovered 1.5 million observations that were taken on the summit of Ben Nevis and in the nearby town of Fort William between 1883 and 1904. The volunteers then transcribed 1.8 million observations from more than 50 locations across Europe taken between 1900 and 1910. They are now digitising observations taken in the 1860s and 1870s.

So, what can we do with all this data?

Figure 2: Map of pressure observations in the ISPD database for 27th February 1903, including from ships (yellow), with newly rescued data (black) and locations where we have images of the observation logbooks, but the data has not yet been digitised (red).

As a case study, there was a very intense storm on February 26th-27th 1903 which hit Ireland and northern England, uprooting thousands of trees, causing significant structural damage and several fatalities. Hundreds of pressure observations taken across the UK during this storm are not in our digital climate databases. Figure 2 shows the existing data (yellow), newly rescued data (black) and potential data still waiting to be rescued (red) for the period of the intense storm.

Figure 3: The 26th-27th February 1903 storm in the 20th Century Reanalysis (left) and an estimate of how it would look with the new observations (right). The black contours are isobars, and the green shading shows confidence in their position.

The new data allow us to better reconstruct the path and intensity of the storm. Figure 3 shows how the storm appears in the new 20th Century Reanalysis (left) – it is too weak to cause the damage that we know occurred, and the image appears fuzzy because there is much uncertainty about the storm’s location. The right-hand panel shows how the storm should appear with the newly rescued observations (black dots in the figure above) – more intense and more certain, with strong winds over eastern Ireland and northern England where the damage occurred. The minimum central pressure is now simulated to be around 955 mb.

Severe windstorms are relatively rare but cause significant damage. We need to learn as much about them as possible which means delving back into the past. Thousands of volunteers are helping us determine how the weather changed hour-by-hour over a century ago and to learn about such extreme events. Anyone can help at WeatherRescue.org.

 

Posted in Climate, data assimilation, Data processing, Historical climatology, Outreach, Weather