The Core-cloak Convection Model

By: Jian-Feng Gu

Moist convection plays a fundamental role in large-scale circulations and climate, spanning scales from cumulus clouds smaller than 100 m to organized weather systems several thousand kilometres across. Limited by their grid spacing, numerical models cannot fully resolve moist convection across this broad range of scales; we therefore represent it in numerical models using a simplified process, called parameterization.

Figure 1: A schematic diagram of the bulk mass flux approximation. The left panel shows many different individual clouds. The right panel shows how the clouds are represented using the top-hat assumption.

In current climate models, convection is mostly parameterized using bulk plume models, which estimate the vertical transport of heat, moisture and momentum by a large group of clouds. To understand the general idea, imagine a grid box of around 100 × 100 km² – that’s about half the size of Wales. Now picture a large number of different clouds scattered randomly across the domain (see the left panel of Figure 1). Representing each cloud individually is computationally very expensive, so simplification is necessary. The simplest way to describe the overall behaviour of these clouds is to consider them as a single entity, that is, a bulk cloud (see the right panel of Figure 1).

A further simplification is to assume that each property of the bulk cloud is the average over all the individual clouds, and that these values are distributed evenly within the bulk cloud. This is called the top-hat assumption (Randall et al. 1992). The overall vertical transport by these clouds can then be described as the transport of mean properties by the bulk cloud, which is called the bulk mass flux approximation. It approximates the sub-grid vertical flux of a quantity as the product of the convective mass flux and the departure of the in-cloud value of the transported quantity from its grid-box average. However, this approximation can underestimate the vertical fluxes by 30–50% (Yano et al. 2004), depending on the variables considered and the resolution of the model. Therefore, a parameterization of the neglected contributions to the vertical flux is necessary. How might this be achieved without sacrificing computational efficiency?
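In symbols, and using a common textbook notation rather than the exact formulation of any particular scheme, the approximation for the sub-grid flux of a quantity $\phi$ reads

$$\rho\,\overline{w'\phi'} \;\approx\; M_u\,(\phi_u - \overline{\phi}), \qquad M_u \approx \rho\, a_u\, w_u,$$

where $\overline{\phi}$ is the grid-box average, $\phi_u$ and $w_u$ are the mean in-cloud value and vertical velocity, $a_u$ is the fractional area covered by cloudy updrafts, and $M_u$ is the convective mass flux.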

The vertical flux underestimation arises from the physical assumptions we make in the bulk plume model. In Gu et al. (2020), we show that both the mean properties of the clouds and the departures from these mean properties contribute to the total vertical flux. By representing many clouds as a single bulk cloud, we remove the differences between individual clouds. By assuming a top-hat distribution, we neglect the inhomogeneity within each cloud. These two neglected variabilities are called inter-object and intra-object variability, respectively. As a result, the bulk mass flux approximation underestimates the total vertical transport by moist convection. It can, however, be improved by relaxing these assumptions. For example, a spectral model that deals with clouds of different sizes reduces the inter-object variability because it takes into account the differences in mean properties between different types of clouds. But it still underestimates the vertical heat fluxes because it neglects the intra-object variability.
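A standard two-fluid (cloud/environment) decomposition — written here in generic notation, not necessarily that of Gu et al. (2020) — makes the neglected terms explicit:

$$\overline{w'\phi'} \;=\; \underbrace{a\,(1-a)\,(w_c - w_e)(\phi_c - \phi_e)}_{\text{top-hat (bulk) term}} \;+\; \underbrace{a\,\overline{w'\phi'}^{\,c} \;+\; (1-a)\,\overline{w'\phi'}^{\,e}}_{\text{sub-plume terms}},$$

where subscripts $c$ and $e$ denote averages over the cloudy and environmental air, $a$ is the cloudy area fraction, and the sub-plume covariances are computed relative to each component's own mean. The bulk mass flux approximation keeps only the first term; merging many clouds into one bulk plume additionally collapses their differing means into a single $\phi_c$, discarding the inter-object variability.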

Figure 2: A schematic of the core-cloak representation of convection. Both updrafts and downdrafts are represented as the combination of a strong core surrounded by a weak cloak.

To improve the representation of both the vertical heat and water fluxes, we proposed the “core-cloak” conceptual model (Figure 2; Gu et al. 2020). In this model, we decompose the flow into different types of drafts according to the strength of the vertical motion. More specifically, we group the strong updrafts together as the updraft “core” and the weak updrafts together as the updraft “cloak”. The same core-cloak structure can also be applied to downdrafts. This flow decomposition captures part of the inter-object and intra-object variability and therefore better represents the vertical heat and water transport.
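To make the idea concrete, here is a minimal sketch of how such a decomposition might be diagnosed from high-resolution simulation output. The variable names, the fixed velocity thresholds and the single-level geometry are all illustrative assumptions, not the diagnostics of Gu et al. (2020):

```python
import numpy as np

def core_cloak_flux(w, phi, w_core=1.0, w_cloak=0.1):
    """Partition the vertical flux of phi on one model level into
    mass-flux-style contributions from an updraft 'core' (w >= w_core)
    and an updraft 'cloak' (w_cloak <= w < w_core).
    w, phi: 1-D arrays of grid-point values; thresholds in m/s."""
    w_bar, phi_bar = w.mean(), phi.mean()
    total_flux = np.mean((w - w_bar) * (phi - phi_bar))  # full resolved flux

    contributions = {}
    for name, mask in (("core", w >= w_core),
                       ("cloak", (w >= w_cloak) & (w < w_core))):
        a = mask.mean()  # fractional area occupied by this draft type
        if a == 0:
            contributions[name] = 0.0
            continue
        # mass-flux estimate: area fraction x velocity excess x property excess
        contributions[name] = (a * (w[mask].mean() - w_bar)
                                 * (phi[mask].mean() - phi_bar))
    return total_flux, contributions
```

Comparing the sum of the core and cloak terms against the full flux, level by level, is one simple way to test how much of the transport such a two-draft decomposition recovers.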

To evaluate our conceptual model, we performed large eddy simulations of shallow convection (dx = 25 m, 50 m, 100 m) and deep convection (dx = 100 m, 200 m, 400 m), and cloud-resolving simulations of organized deep convection (dx = 1 km). Our results show that the “core-cloak” conceptual model significantly improves the representation of vertical heat and moisture fluxes compared with the bulk mass flux approximation. The improvement can be seen in both shallow and deep convection, and even in organized convection.

We also found that the clouds which have a “core-cloak” structure contribute most of the vertical fluxes. The core-cloak conceptual model is, of course, just one possible decomposition of the flow, but it gives a reasonable and efficient description of turbulent fluxes within a mass flux approximation. Parameterizing it would need careful treatment of the exchanges between the different types of drafts. We intend to pursue the practical implications of this conceptual model in the future development of a convection parameterization.

References:

Gu, J.-F., Plant, R. S., Holloway, C. E., Jones, T. R., Stirling, A., Clark, P. A., Woolnough, S. J. and Webb, T. L., 2020: Evaluation of the bulk mass flux formulation using large eddy simulations. J. Atmos. Sci., 76, 2297–2324, doi: https://doi.org/10.1175/JAS-D-19-0224.1

Randall, D. A., Q. Shao, and C.-H. Moeng, 1992: A second-order bulk boundary-layer model. J. Atmos. Sci., 49, 1903-1923, doi: https://doi.org/10.1175/1520-0469(1992)049<1903:ASOBBL>2.0.CO;2

Yano, J.-I., F. Guichard, J.-P. Lafore, J.-L. Redelsperger, and P. Bechtold, 2004: Estimations of mass fluxes for cumulus parameterizations from high-resolution spatial data. J. Atmos. Sci., 61, 829–842, doi: https://doi.org/10.1175/1520-0469(2004)061<0829:EOMFFC>2.0.CO;2.


People are in the cities – how can we provide the weather and climate information they need?

By: Sue Grimmond

Climate services provide climate information to help individuals and organizations make climate-smart decisions. They integrate high-quality meteorological data (temperature, rainfall, wind, soil moisture and ocean conditions), together with maps, risk and vulnerability analyses, assessments, and long-term projections and scenarios, with socio-economic variables and non-meteorological data (such as agricultural production, health trends, human settlement in high-risk areas, and road and infrastructure maps for the delivery of goods). The data and information collected are transformed into customized products such as projections, trends, economic analyses and services. The aim is to equip decision makers in climate-sensitive sectors with better information to help society adapt to climate variability and change. At the core are users and their needs.

The demands for such services are wide-ranging – for public health; for disaster risk reduction and response; for the prediction of energy and water demands or food production; or for the operation of climate-sensitive infrastructure. These services, though, need to go further: they need to be developed for a wider range of time scales and conditions (for weather and climate) and to integrate other elements of the environment, as well as human behaviours and responses.

Nowhere is this more important than in cities. Increasingly dense, complex and interdependent urban systems leave cities particularly vulnerable to extreme weather and to changes in climate. Through domino effects, a single extreme event can lead to a wide-scale breakdown of a city’s infrastructure (Figure 1).

Figure 1: The ‘domino effect’ partially shown for a typhoon or hurricane event, which produces multiple hydro-meteorological hazards (blue) that have immediate effects (green) and follow-on impacts (purple) that can be both short- and long-term. Source: Grimmond et al. (2020).

Integrated urban services are needed (Baklanov et al. 2018, Grimmond et al. 2020), yet globally there are few (if any) fully operational systems (e.g. Baklanov et al. 2020). Those systems that do exist were often developed because of a major event (e.g. the Olympics, the Pan-Am Games, Expos) or a significant weather-related disaster (Hurricane Sandy in New York City; the heatwaves in Paris and London in 2003). Recognising this need, the World Meteorological Organization (WMO) is advocating for the development of Integrated Urban Weather, Environment and Climate Services (IUS) for safe, healthy and resilient cities. The concept and methodology for developing these systems was adopted by the 70th WMO Executive Council in 2018 (WMO 2019). The need for Demonstration Cities was adopted by the 71st WMO Executive Council in 2019.

The research community, working collaboratively with others, has an important role to play across the many activities required. This includes contributing to and identifying critical research challenges, developing impact forecasts and warnings, promoting and delivering IUS internationally, and supporting national and local communities in their implementation (Figure 2).

Figure 2: An Integrated Urban Hydrometeorological, Climate and Environmental Service (IUS) System has several components. Here a generic framework is shown for impact-based prediction systems. Integration may occur in the various boxes, with a mature IUS aiming for integration in all components. Source: WMO (2019).

At Reading, we have been working with a range of collaborators and stakeholders to develop UMEP (Urban Multi-scale Environmental Predictor), a city-based climate service tool that combines models and tools essential for climate simulations (Lindberg et al. 2018). Alongside climate information transformed for the urban context, detailed information about the materials and morphology of the city is integrated through GIS, together with information about residents and their behaviour through agent-based models and tools (e.g. Capel-Timms et al. 2020). UMEP has been used to identify heatwaves and cold waves; the impact of green infrastructure on runoff; the effects of buildings on human thermal stress; solar energy production; and the impact of human activities on heat emissions. It has been applied in many cities across the world, from London to Shanghai (Google Citations 2020).

UMEP includes tools that enable users to input atmospheric and surface data from multiple sources, to characterise the urban environment, to prepare meteorological data for use in cities, to undertake simulations and consider scenarios, and to compare and visualise different combinations of climate indicators. An open-source tool, UMEP is designed to be easily and widely used. This summer we ran the international urbisphere UMEP workshop online; it had been planned to take place in person in Reading.

Much more work is needed in this realm. IUS need to be developed to meet the special needs of cities through a combination of dense observation networks, high-resolution forecasts (weather, climate, air quality, hydrological), multi-hazard early warning systems, disaster management plans and climate services. Such an approach will give cities the tools they need to reduce emissions, build thriving and resilient communities, and implement the UN Sustainable Development Goals. This focus on urban environments is particularly important given the large and ever-increasing fraction of the world’s population that lives in cities (more than 3.5 billion people) and the importance of cities to the world’s economy, not only regionally but nationally and globally.

References: 

Baklanov A, CSB Grimmond, D Carlson, D Terblanche, X Tang, V Bouchet, B Lee, G Langendijk, RK Kolli, A Hovsepyan, 2018: From urban meteorology, climate and environment research to integrated city services. Urban Clim., 23, 330-341, https://doi.org/10.1016/j.uclim.2017.05.004

Baklanov A, B Cárdenas, T Lee, S Leroyer, V Masson, L Molina, T Müller, C Ren, FR Vogel, J Voogt, 2020: Integrated urban services: Experience from four cities on different continents. Urban Clim., 32, 100610, https://doi.org/10.1016/j.uclim.2020.100610

Capel-Timms I, ST Smith, T Sun, S Grimmond, 2020: Dynamic Anthropogenic activities impacting Heat emissions (DASH v1.0): Development and evaluation. Geosci. Model Dev., https://doi.org/10.5194/gmd-2020-52

Grimmond S, V Bouchet, L Molina, A Baklanov, J Tan, H Schluenzen, G Mills, B Golding, V Masson, C Ren, J Voogt, S Miao, H Lean, B Heusinkveld, A Hovsepyan, G Terrug, P Parrish, P Joe, 2020: Integrated Urban Hydrometeorological, Climate and Environmental Services: Concept, Methodology and Key Messages. Urban Climate, https://doi.org/10.1016/j.uclim.2020.100623

Lindberg F, CSB Grimmond, A Gabey, B Huang, CW Kent, T Sun, NE Theeuwes, L Järvi, H Ward, I Capel-Timms, YY Chang, P Jonsson, N Krave, DW Liu, D Meyer, KFG Olofson, JG Tan, D Wästberg, L Xue, Z Zhang, 2018: Urban Multi-scale Environmental Predictor (UMEP) – an integrated tool for city-based climate services. Environmental Modelling and Software, 99, 70–87, https://doi.org/10.1016/j.envsoft.2017.09.020

WMO, 2019: Guidance on Integrated Urban Hydrometeorological, Climate and Environmental Services, Volume I: Concept and Methodology. WMO-No. 1234, https://library.wmo.int/doc_num.php?explnum_id=9903


Large and irreversible future decline of the Greenland ice-sheet

By: Jonathan Gregory

Sea-level rise is one of the most serious consequences of global warming. By the end of this century, if emissions of greenhouse gases continue to increase (mostly carbon dioxide, from burning oil, natural gas and coal), global mean sea level could be more than a metre higher than now. About a quarter of a billion people currently occupy land less than a metre above present sea level (Kulp and Strauss, 2019). By the end of this century, most coastal locations around the world will annually experience extreme sea levels which have historically occurred as a result of violent storms only about once in 100 years (Ocean, cryosphere and climate change, Royal Society briefing, 2019).

Whereas global warming itself and some consequences of climate change could be mostly halted within a few decades by ceasing greenhouse-gas emissions (although that would be hard enough to achieve), sea level would continue to rise for centuries or millennia under any climate as warm as or warmer than present. By 2300 sea level is projected to rise by 0.6–1.1 metres even if climate is stabilised in coming decades, and by 2.3–5.4 metres if emissions of greenhouse gases are large. It is hard to envisage the seriousness of this for some areas of the world. For instance, two-thirds of Bangladesh is less than 5 metres above sea level, and the highest point in the Maldives is 2.4 metres above sea level.


Figure 1: An image of the Greenland ice-sheet (CPOM/UCL/ESA).

There are several contributions to sea-level rise. In recent decades, a third to a half of the total has been due to the expansion of sea water as it gets warmer (Chambers et al., 2017), and this will continue to be an important effect. The future of the Antarctic ice-sheet is the largest uncertainty in projections for the current century. Although it is smaller, the Greenland ice-sheet (Figure 1) is presently contributing more than the Antarctic, and more than all the world’s mountain glaciers together (IPCC Special Report on the Ocean and Cryosphere in a Changing Climate, 2019). As the climate gets warmer, both surface melting and snowfall on Greenland increase. (Precipitation generally increases in a warmer climate, and on Greenland most precipitation is snow.) The melting increases more rapidly with global warming, so there is a net loss of ice, which ends up as water in the ocean. If the ice-sheet were completely eliminated, global mean sea level would be 7.4 metres higher.

My colleagues Steve George, Robin Smith and I have recently studied the future of the Greenland ice-sheet under a range of climates (work under open review), illustrative of those expected in the late 21st century under various scenarios. In our experiments, the climates were constant. We wanted to see what happens if you maintain a warm climate indefinitely. We used an atmosphere general circulation climate model and an ice-sheet model coupled together. The climate model is like those used for IPCC projections but with less geographical detail because it has to run faster, since we needed to carry out experiments simulating tens of millennia. We ran about 50 experiments, at about 2000 simulated years per day of computer time. This is the first time the future of Greenland has been investigated with a model of such complexity; it has many unavoidable approximations and inaccuracies, but it’s more physically realistic and complete than previous models.


Figure 2: Contribution of the Greenland ice-sheet to global-mean sea-level rise in our experiments. The coloured lines are the results of the first set of experiments under constant climates. The colours indicate the global warming with respect to the climate of the late 20th century, from blue (little warming) to red (5 degrees Celsius warmer). The black lines show the second set of experiments. The ice-sheet at the point along a coloured line where each black line begins was instantaneously transplanted into a late 20th-century climate. The solid black lines are from states below the threshold, in which the ice-sheet regrows to around its present size; the dashed black lines are those which began above the threshold, in which it never fully regrows.

Under all climates like the present or warmer, the ice-sheet loses mass and contributes positively to sea level (Figure 2). It takes tens of thousands of years to reach a new constant state: the warmer the climate, the smaller the final ice-sheet, and the larger the sea-level rise. Unlike in some previous studies, there is no sharp threshold dividing scenarios in which the ice-sheet suffers little reduction from those in which it is mostly lost. Rather, there is a broad range of outcomes. In the warmest climate we consider (about 5 degrees Celsius warmer than recent, which is similar to the most extreme scenarios for 2100), the ice-sheet is reduced over about 10,000 years to a small ice-cap with 1.5% of its present volume. Initially it contributes about 3 millimetres per year to sea level, which is similar to the current observed rate of rise due to all effects. On the other hand, in climates resulting from strongly mitigated emissions during this century (roughly consistent with the Paris target), the final contribution to sea-level rise is less than 1.5 metres.

In a second set of experiments, we took some of the reduced states of the ice-sheet and put them back in a steady climate like the late 20th century, to see if the ice-sheet would regrow, meaning that sea level would consequently fall. The ice-sheet gained mass in all cases, taking even longer to reach a constant state than it did in a warm climate, because snowfall adds mass more slowly than melting can remove it. We found that the final states fall into two groups. If the sea-level rise under the warm climate remained below about 3.5 metres, the ice-sheet eventually regrew to around its present size. If sea level passed this threshold, the ice-sheet did not fully regrow. In this case, about 2 metres of the sea-level rise was irreversible under recent climate. (The full ice-sheet could probably be regenerated in an ice-age climate.) The reason for the irreversibility is that the ice-sheet is a large object which affects its own local climate, like a high cold mountain. Without the ice-sheet, but with present-day temperatures of the surrounding seas, Greenland would be a warmer place. In our model, the ice-sheet cannot readvance into the northern part of the island once it is ice-free, because the snowfall is less than at present.

While our result requires corroboration by other workers with their own models, it illustrates the importance of the coupling between the ice-sheet and its climate. Each affects the other, and modelling them independently may lead to unrealistic projections. The experiments also underline the global practical importance of mitigating global warming. Precautionary action to mitigate the threat of irreversible damage is a principle of the Framework Convention on Climate Change, even when there is not full scientific certainty. According to our results, in order to avoid partially irreversible loss of the ice-sheet, climate change must be reversed (not just stabilised in a warmer state, but put back to how it was) before the ice-sheet has declined to the threshold mass, which would be reached in about 600 years at the highest rate of mass-loss for this century in the IPCC assessment.

References:

Kulp, S. A. and B. H. Strauss, 2019: New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding. Nat. Commun., 10, 4844, https://doi.org/10.1038/s41467-019-12808-z

Chambers, D. P., A. Cazenave, N. Champollion, H. Dieng, W. Llovel, R. Forsberg, K. V. Schuckmann and Y. Wada, 2017: Evaluation of the global mean sea level budget between 1993 and 2014. Surv. Geophys., 38, 309-327, https://doi.org/10.1007/s10712-016-9381-3

 


Recent progress in simulating North Atlantic weather regimes

By: Alex Baker

Weather is chaotic. Low-pressure weather systems bring rainfall; areas of high pressure block the passage of these weather systems. Take this year so far, for instance. February and March were much wetter than average, and April and May much drier. May, in particular, was England’s driest—and the U.K.’s sunniest—on record, with similar conditions enjoyed across much of Western and Central Europe. Such swings in weather are down to where low- or high-pressure conditions prevail, and for how long these synoptic situations persist.

One way to make sense of this variability is by identifying so-called weather regimes: recurring patterns of high and low pressure across the central and eastern North Atlantic and Europe. Conventionally, meteorologists recognise four Euro-Atlantic regimes: the positive and negative phases of the North Atlantic Oscillation (NAO+ and NAO–, respectively), Scandinavian blocking (SB), and a North Atlantic ridge pattern (AR) – more on each presently. How often these regimes occur not only dictates regional weather, but also plays a role in whether a season becomes wetter or drier over time, and is important on longer, climatological timescales too.

Weather regimes exhibit characteristic spatial patterns of high- and low-pressure centres, visualised here using geopotential height data from which the climatological mean seasonal cycle was removed (see Figure 1). (Geopotential height is a common variable used to infer atmospheric circulation patterns; it gives the altitude of a pressure surface above mean sea level, accounting for gravitational variations over Earth’s surface.) The North Atlantic Oscillation’s positive and negative phases describe variability between the Icelandic Low and the Azores High. During NAO+, high-pressure conditions prevail over much of Central Europe and the Mediterranean. During NAO–, the high-pressure anomaly sits over Greenland and low pressure spans much of continental Europe. Scandinavian blocking is the occurrence of a high-pressure anomaly over western Scandinavia and the North Sea. The Atlantic Ridge pattern is characterised by high pressure over the central North Atlantic at a latitude of about 55°N. Each regime roughly corresponds to a preferred position of the North Atlantic jet stream.

Figure 1: The Euro-Atlantic weather regimes, based on daily geopotential height data at the 500-mb isobaric level from the ERA-40 and ERA-Interim reanalyses. The four regimes are the positive and negative phases of the North Atlantic Oscillation (NAO+ and NAO–, respectively), Scandinavian blocking (SB), and a North Atlantic ridge pattern (AR). Regime patterns are visualised following removal of the climatological mean seasonal cycle. Figure adapted from Fabiano et al., 2020.
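Regime patterns like those in Figure 1 are conventionally obtained by clustering daily circulation anomalies; k-means clustering in the space of the leading EOFs is a common recipe. Here is a minimal sketch of that general approach (the function name, the choice of 20 EOFs and 4 clusters, and the use of scikit-learn are illustrative assumptions, not necessarily the exact method of Fabiano et al., 2020):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def find_regimes(z500_anom, n_regimes=4, n_eofs=20):
    """Cluster daily 500-hPa geopotential height anomalies into regimes.
    z500_anom: array (n_days, n_lat * n_lon), seasonal cycle removed."""
    # Reduce dimensionality first: cluster in the space of the leading EOFs
    pcs = PCA(n_components=n_eofs).fit_transform(z500_anom)
    labels = KMeans(n_clusters=n_regimes, n_init=50).fit_predict(pcs)
    # Regime patterns are composites of the days assigned to each cluster
    patterns = np.array([z500_anom[labels == k].mean(axis=0)
                         for k in range(n_regimes)])
    return labels, patterns
```

The regime occupied on any given day is then simply the cluster its circulation anomaly is assigned to, and regime frequency and persistence follow from the sequence of labels.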

In a recent paper published in Climate Dynamics, led by Federico Fabiano of the Institute of Atmospheric Sciences and Climate, Consiglio Nazionale delle Ricerche, we examined how well six current-generation, fully coupled global climate models are able to represent Euro-Atlantic weather regimes—their spatial patterns, their persistence, and how realistically distinct the regimes are. Here, I focus on regime patterns, and how well those simulated by low- and high-resolution climate models compare with reference datasets: the ERA-40 and ERA-Interim reanalyses.

Establishing whether or not global climate models can reproduce each regime’s characteristic spatial pattern is important because these patterns are related to where westerly storm systems track and make landfall downstream over Europe—and where these storms’ impacts will be felt. Do high-resolution models reproduce real-world regime patterns better than standard, low-resolution models? To assess this, we calculated pattern correlations between the models and reanalyses. Overall, we found that the NAO+, SB and AR regime patterns are better represented at high resolution, but the NAO– regime is not (see Figure 2). Why NAO– is something of an outlier here will be the subject of future research. Additionally, the AR regime shows greater variance than the other regimes. We also found that simulated regimes are more realistically distinct from one another (to use the jargon, more tightly ‘clustered’) at high resolution and better match the reanalyses.

Figure 2: Pattern correlations between low- (teal) or high-resolution (red) models and reanalyses (black) for each weather regime. Perfect model representation of a regime’s observed spatial pattern is indicated by a pattern correlation coefficient of 1. From distributions of 30-yr bootstrapping for each model, the ensemble mean (dot), median (horizontal line), interquartile range (boxes), 10th and 90th percentiles (bars), and minimum and maximum values across all available ensemble members (triangles) are shown. Figure adapted from Fabiano et al., 2020.
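A pattern correlation, as used in Figure 2, is simply a correlation between two fields computed over grid points rather than over time. A minimal sketch follows; the cos-latitude area weighting is a standard choice, and the function is an illustration rather than the paper's exact code:

```python
import numpy as np

def pattern_correlation(model_field, ref_field, weights=None):
    """Centred pattern correlation between two 2-D fields (e.g. a simulated
    regime composite and its reanalysis counterpart).
    weights: optional area weights (e.g. cos(latitude)), same shape as fields."""
    x, y = model_field.ravel(), ref_field.ravel()
    w = np.ones_like(x) if weights is None else weights.ravel()
    w = w / w.sum()
    xa = x - np.sum(w * x)   # remove weighted means so that only the
    ya = y - np.sum(w * y)   # spatial patterns are compared
    return np.sum(w * xa * ya) / np.sqrt(np.sum(w * xa**2) * np.sum(w * ya**2))
```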

However, increasing models’ resolution had little impact on the frequency and duration of weather regimes. The evidence suggests that these errors are due to biases in simulated sea-surface temperatures and the mean geopotential height field. Simulating realistic regime persistence in models is important because prolonged wet and dry periods, like those seen across Europe earlier this year, are very often related to the persistence of a single regime. This research suggests that increasing model resolution alone is not enough; developments in model physics and dynamics are needed to better simulate North Atlantic weather regimes.

Author’s note

The climate models in this study participate in the sixth phase of the World Climate Research Programme’s Coupled Model Intercomparison Project (CMIP6), the modelling framework underpinning the Intergovernmental Panel on Climate Change’s Assessment Reports that are indispensable for global climate policy-making. These model simulations (hist-1950) were supported by the European Commission-funded PRIMAVERA project, the European contribution to HighResMIP, a CMIP6-endorsed and coordinated assessment of the impact of increasing model resolution, which is documented by Haarsma et al., 2016.

References

Fabiano, F. et al., 2020: Euro-Atlantic weather regimes in the PRIMAVERA coupled climate simulations: impact of resolution and mean state biases on model performance. Climate Dynamics, 54, 5031–5048, https://doi.org/10.1007/s00382-020-05271-w

Haarsma, R. J. et al., 2016: High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6. Geoscientific Model Development, 9, 4185–4208, https://doi.org/10.5194/gmd-9-4185-2016


Do urban heat islands provide thunderstorm predictability?

By: Suzanne Gray 

The UK and the rest of western Europe experienced a heatwave in the middle of August 2020, with temperatures exceeding 30°C in Reading. Fortunately for us this was broken by a heavy downpour on the afternoon of Wednesday 12th August (Figure 1(a)), yielding over 4 mm of rainfall at the Reading University Atmospheric Observatory over a period of only about 20 minutes. While watching the rain cascading down the road towards my front door (we live at the bottom of a small, but very steep, hill) I naturally looked up the latest radar images and analysis charts to see the system causing the rainfall and to try to predict how long it might last.


Figure 1: (a) Temperature observations from the Reading University Atmospheric Observatory for the 11th and 12th August 2020 and (b) cutout from a Met Office analysis with mean sea level pressure contours and marked fronts from 12 UTC on 12th August 2020 (copyright Met Office).

The Met Office had issued a yellow warning for thunderstorms and associated flooding for most of the UK but, as is usual with thunderstorm warnings, was not able to tell us exactly where and when the thunderstorms would occur. According to the Met Office’s synoptic analysis (Figure 1(b)), the mean sea level pressure gradient over the UK was very weak, and this was associated with weak easterly winds (~2–3 m s⁻¹ at our Observatory). A small-scale weak low-pressure system to the west of the UK was associated with an upper-level trough directly above, with an extension of the trough axis towards northern Spain. This synoptic situation transported a plume of warm air up from Africa, across Spain and into southern England, giving rise to the heatwave. This is not an unusual occurrence and even has a name: “the Spanish plume”. More precisely, this situation compares well with the modified Spanish plume synoptic situation described in Lewis and Gray (2010).

Spanish plumes often lead to large thunderstorms over the UK, either initiated locally or imported from France. When thunderstorms are organised into a single large cloud system, we call it a mesoscale convective system. The challenge is knowing when and where these storms will initiate, because that initiation often depends on small-scale variations in the environment (due, for example, to local hills) that are not well captured by the numerical models used to generate weather forecasts. The storm that broke the heatwave in Reading was a mesoscale convective system that initiated over west London and tracked westwards, as clearly shown by satellite imagery (Figure 2) and associated radar (Figure 3).


Figure 2: Sequence of Meteosat Infrared Satellite imagery for (a) 14, (b) 15 and (c) 16 UTC on 12th August 2020 (copyright EUMETSAT).


Figure 3: Radar imagery for 12th August 2020 at (a) 1425 UTC and (b) 1615 UTC captured from www.netweather.tv/live-weather/radar. Note that the times in the panels are in BST, so one hour ahead of UTC.

So, that got me wondering whether the existence of London as a major urban area could have had a role in initiating this event. Large urban areas are known to affect their local environment in many ways, including generating locally enhanced temperatures known as urban heat islands. A bit of research led me to published papers examining the relationship of urban heat islands to thunderstorms. For example, a review by Han et al. (2014) found that updraughts produced by heat islands initiate clouds, and rainfall can be enhanced by high aerosol levels due to pollution; the enhanced surface roughness associated with cities doesn’t play a major role in thunderstorm initiation, though it may affect systems passing over them. Of course, more analysis would be required to tell whether London’s urban characteristics were important in initiating this storm, and so potentially provided some predictability in this case, or whether the storm initiated over London for some other reason. Whatever the cause though, I appreciated the consequent temperature crash and the excuse to do some meteorology-based web surfing.

References:    

Han, J., Baik, J. and Lee, H. (2014) Urban impacts on precipitation. Asia-Pacific J Atmos Sci 50, 17–30. https://doi.org/10.1007/s13143-014-0016-7

Lewis, M. W. and Gray, S. L. (2010) Categorisation of synoptic environments associated with mesoscale convective systems over the UK. Atmospheric Research, 97, 194-213. https://doi.org/10.1016/j.atmosres.2010.04.001

 


Deep Water Formation In The Mediterranean Sea

By: Giorgio Graffino

“The Mediterranean Sea is a small-scale ocean”, as my old teacher used to tell me. All right, that was probably a bit exaggerated. Still, it’s true that the Mediterranean Sea provides an almost unique environment to study important ocean processes, such as general circulation, air-sea interactions, and climate change, in a fairly small basin. In particular, the Mediterranean Sea is home to some of the few deep water formation regions of the World Ocean.

Deep water formation occurs when surface waters sink below 2000 m depth. This happens in a few small ocean regions, usually found at high latitudes. The most famous are in the Labrador Sea (between Canada and Greenland) and in the Weddell Sea (close to Antarctica). These regions are particularly important for the global climate, for they are the sinking branches of the thermohaline circulation. The key ingredients of deep water formation are weak stratification of the surface waters and strong air–sea interactions. These conditions are usually met during winter, when large surface heat fluxes cause a large buoyancy loss from the surface waters. The effects of this are felt at great depths (even below 2000 m). For this reason, deep water formation is also called open-ocean convection.

It is not easy to explain how open-ocean convection works (see Marshall and Schott, 1999, for a comprehensive review), but let’s give it a try. Convection is triggered locally, in plumes less than 1 km wide. This happens when the surface water is forced to stay in contact with low atmospheric temperatures and strong winds. There is little mass exchange involved in open-ocean convection. Rather, convective plumes help to mix temperature and salinity over the water column. This causes buoyancy loss over great depths, right down to the abyss (the region of the ocean between 2000 m and 6000 m depth).

How can these small structures mix water properties over such a great vertical extent? Why do lateral exchanges (like entrainment) not dissipate these plumes away? Because of the Earth’s rotation! Rotation makes convective plumes more “rigid”, which inhibits entrainment. Figure 1 shows an example of convective plumes created in an experiment using a rotating tank. Two fans at the edge of the tank force the water towards the centre, where it then sinks to the bottom of the tank. This process is called Ekman pumping. A dye tracer was added to the water to show the downward convection occurring in the centre, in the form of small columns. The small shapes visible in the centre of the tank in Figure 1 are the plumes seen from above.

Figure 1: Convective plumes (seen from above) in a rotating tank.

Let’s now move to our area of interest. Although the Mediterranean Sea is not among the most famous deep water formation sites, there are still four regions where deep water formation occurs (Figure 2). For the sake of simplicity, I will focus on one region. I chose the Gulf of Lion for my master’s thesis, and I will do the same here. The formation process has three phases.

  1. Preconditioning: A cyclonic gyre, formed by the action of the wind on the sea surface and by the sea floor structure, causes subsurface waters to rise to the surface
  2. Violent mixing: The Mistral and the Tramontane (cold and dry northerly winds) cause violent mixing in small convective plumes where the surface waters are weakly stratified
  3. Lateral exchange: Lateral exchange occurs between the mixed patch and the surrounding water, which restores the initial seawater conditions

As usual, timing is fundamental. Convection can only occur if the buoyancy loss is suitably intense. This means that the buoyancy losses must be concentrated in a few strong events, rather than be evenly distributed over the whole winter season. Usually the preconditioning takes place during December and January, and the violent mixing phase occurs during February and March. Lateral exchange occurs either simultaneously with the mixing phase, or shortly thereafter.

Figure 2: Deep water formation regions in the Mediterranean Sea (Pinardi et al. 2015).

How is deep water formation measured? As the vertical mass exchange is negligible, the sinking motion itself is difficult to measure directly. Instead, we assess the seawater density. The convective plumes create a patch of water with uniform density. As every water mass has a characteristic density range, we can compute the amount of water formed in that density class over time, i.e. the water mass formation rate.
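A heavily simplified sketch of that calculation: census the volume of water in a density class at successive times and difference it. This is an illustration only; it ignores exchange with neighbouring density classes and the surrounding basin, which more careful water-mass-transformation frameworks account for:

```python
import numpy as np

def formation_rate(density, cell_volume, rho_min, rho_max, dt_seconds):
    """Estimate the water mass formation rate in one density class.
    density: array (n_times, nz, ny, nx) of seawater density (kg m^-3);
    cell_volume: array (nz, ny, nx) of grid-cell volumes in m^3;
    rho_min, rho_max: bounds of the density class;
    dt_seconds: time between snapshots.
    Returns the rate in Sverdrups (1 Sv = 1e6 m^3 s^-1)."""
    in_class = (density >= rho_min) & (density < rho_max)
    volume = (in_class * cell_volume).sum(axis=(1, 2, 3))  # m^3 per snapshot
    rate_m3_s = np.diff(volume) / dt_seconds  # change in class volume per time
    return rate_m3_s / 1e6
```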

Unfortunately, extensive seawater density measurements are hard to obtain in both space and time, but we can “fill the gaps” in observations with ocean reanalysis products, such as the data provided by the Mediterranean Forecasting System (MFS). Figure 3 shows the water mass formation rate in the Gulf of Lion from 1987 to 2012, calculated using MFS data. The black line shows monthly averages, which peak during winter. The red line shows the winter average (November to April) for each year, and the green line shows the average February and March water mass formation rate in each year. There are big variations from year to year. These depend on how much the preconditioning phase weakens the surface stratification, and on how intense the atmospheric forcing is.

Figure 3: Water mass formation rate (1 Sverdrup = 10⁶ m³ s⁻¹) in the Gulf of Lion, computed from MFS reanalysis data. The black line is the monthly average rate, the red line is the seasonal average rate and the green line is the February+March average rate. Adapted from Graffino (2015).

Deep water formation is just one example of the many processes observed in the Mediterranean Sea. To have such a rich and diverse environment right in our backyard is quite a stroke of luck for ocean and climate scientists in Europe (yes, the UK is still part of Europe). The Mediterranean Sea is currently experiencing big changes due to climate change, and its ecosystems are undergoing increasing pressure. Understanding its importance helps to protect it. So, let’s do it!

References:

Graffino, G. (2015). A study of air-sea interaction processes on water mass formation and upwelling in the Mediterranean Sea. Master’s thesis, University of Bologna, https://amslaurea.unibo.it/8337/.

Marshall, J., and F. Schott, (1999). Open‐ocean convection: Observations, theory, and models, Rev. Geophys., 37(1), 1-64, https://doi.org/10.1029/98RG02739

Pinardi, N. and Coauthors, (2015). Mediterranean Sea Large-Scale Low-Frequency Ocean Variability and Water Mass Formation Rates from 1987 to 2007: a retrospective analysis. Prog. Oceanogr. 132, 318-332. https://doi.org/10.1016/j.pocean.2013.11.003

 


Are Eurasian winter cooling and Arctic sea-ice loss dynamically connected?

By: Rohit Gosh

The observed sea ice concentration (SIC) in the Arctic has been declining in recent decades. Temperatures have been rising all over the planet, but warming has been much faster over the Arctic, a phenomenon known as Arctic Amplification. We have also seen some extremely cold Eurasian winters during the same period. These cold winters produce a Warm Arctic-Cold Eurasia (WACE) pattern in the observed surface air temperature (SAT) trend (Figure 1a). Indeed, previous studies have found links between the warming Arctic and the cooling over Eurasia. However, many opposing studies claim the observed WACE trend is simply a result of climate noise, or internal atmospheric variability (Ogawa et al. 2018). Over the last five years, the observed Eurasian cooling trend has been weakening (Figure 1), whilst SIC has continued to fall, which appears to support the theory that the links found can be explained by noise in the climate data. But does the recent reduced Eurasian cooling really imply that Arctic sea-ice loss plays no role in creating the WACE trend? We can figure out the answer if we look at the two main modes of SAT variability over Eurasia and their associated dynamics.

Figure 1: a) December-January-February (DJF) surface air temperature (SAT) trend over Eurasia (20°-90°N,0-180°E) for the period 1980 to 2014 (35 years) from ERA Interim reanalysis, and b) 1980 to 2019 (40 years). Units are in K/year.

Applying principal component analysis to the winter (December-January-February, DJF) SAT variability data over Eurasia from 1980 to 2019, the first mode (EOF1) shows a Eurasian warming pattern (Figure 2a). The associated sea level pressure (SLP) field shows a low centered on the Barents Sea (north of Scandinavia and Russia). This low is part of the Arctic Oscillation (AO), the leading mode of Northern Hemisphere SLP variability, as the AO index has a strong correlation (Pearson correlation coefficient: 0.81) with the principal component (PC1) of EOF1 (Figure 2c). The second mode of Eurasian SAT variability (EOF2) shows the WACE pattern, with a warm centre over the Barents Sea and a cold centre over central and eastern Eurasia (Figure 2b). The WACE pattern is associated with an SLP high centered on northern Eurasia/Siberia, which is known as the Ural blocking or Siberian high.

Figure 2: The spatial patterns (in shading) of the a) PC1/EOF1 and b) PC2/EOF2 principal component modes of winter (DJF) SAT variability over Eurasia (20°-90°N, 0-180°E) in the ERA Interim reanalysis (1979-2019). The upper right corner of each panel shows the explained variance fraction of each component. The EOF patterns are scaled to correspond to a one standard deviation variation of the respective principal component time series, and thus have units of K. The black contours are the SLP (in hPa) fields associated with the respective EOFs, derived by linear regression of the SLP field on the respective normalized PC time series. c) The normalized PC1 time series (in black) associated with the EOF1 pattern in a), and the Arctic Oscillation index (in red), which is the normalized PC1 time series associated with the EOF1 of Northern Hemisphere (20°-90°N, 180°W-180°E) SLP. d) The normalized PC2 Eurasian SAT time series (in black) associated with the EOF2/WACE pattern in b), and the normalized, sign-reversed time series of the winter area-averaged (74°N-80°N, 20°E-68°E) Barents Sea SIC (in blue). Light gray vertical lines in c) and d) show the year 2014, when the AO changed to a positive phase.
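For readers unfamiliar with the technique behind Figure 2, the sketch below shows one standard way to compute EOFs and PCs from gridded anomalies via a singular value decomposition. The area weighting and the function itself are generic illustrations, not the exact processing used for the figure:

```python
import numpy as np

def eof_analysis(sat_anom, lat, n_modes=2):
    """Leading EOFs/PCs of gridded SAT anomalies.
    sat_anom: array (n_times, n_lat, n_lon), seasonal cycle removed;
    lat: latitudes in degrees, used for area weighting."""
    nt, ny, nx = sat_anom.shape
    w = np.sqrt(np.cos(np.deg2rad(lat)))[None, :, None]  # area weights
    X = (sat_anom * w).reshape(nt, ny * nx)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]             # principal component series
    eofs = Vt[:n_modes].reshape(n_modes, ny, nx)   # patterns (of weighted field)
    explained = s**2 / np.sum(s**2)                # variance fraction per mode
    return pcs, eofs, explained[:n_modes]
```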

The principal component associated with the EOF2 or WACE pattern (PC2) shows a persistent positive trend, especially after 2005 (black time series in Figure 2d). This indicates a strengthening Ural blocking. Moreover, the time series is highly correlated with the SIC anomalies over the Barents Sea (Pearson correlation coefficient: 0.85). This is the area of the Arctic that has seen the steepest SIC decline (red contoured area in Figure 3), situated below the warming centre of the WACE pattern. This correlation suggests that the WACE pattern is, in fact, dynamically coupled with the Barents Sea-ice variations (Mori et al. 2014) and therefore not simply due to climate noise. Moreover, the WACE pattern has strengthened over the last five years, which by itself would enhance the Eurasian cooling. So, if the WACE-sea-ice relation holds, how did the overall Eurasian cooling decrease?

Figure 3: The winter (DJF) mean sea-ice concentration (SIC) trend in percent/year over the Arctic Ocean from HadISST-SIC data from 1979 to 2019. The red contour shows the Barents Sea region (74°N-80°N, 20°E-68°E).

The reduction of Eurasian cooling over the last five years is instead a result of the change in the PC1 trend from negative to positive after 2014 (black time series in Figure 2c). This change in trend affects the overall Eurasian SAT trends shown in Figure 1, which are a linear combination of the trends contributed by each principal component or EOF. The trend in PC1 is not significant, as it arises mainly from AO-related internal variability. Nevertheless, until 2014 PC1 had a negative trend due to the negative phase of the AO from 2009 (Figure 2c). This brought a central Eurasian cooling response, which reinforced the Barents Sea-ice forced cooling trend from the WACE pattern (Figure 2b) and enhanced the Eurasian cooling signal (Figure 1a). However, by 2019 the PC1 trend had become positive due to the positive phase of the AO after 2014. This leads to central Eurasian warming, which competes with the significant cooling trend from the WACE pattern. The net effect is a reduced Eurasian cooling signal in the overall SAT trend (Figure 1b). Hence, in spite of an increasing WACE trend, Eurasian SAT cooling has weakened over the last five years due to the phase change of the Arctic Oscillation.

References:

Mori, M., M. Watanabe, H. Shiogama, J. Inoue and M. Kimoto, 2014: Robust Arctic Sea-Ice Influence on the Frequent Eurasian Cold Winters in Past Decades. Nat. Geosci., 7, 869-873, https://doi.org/10.1038/ngeo2277

Ogawa, F., and Coauthors, 2018: Evaluating Impacts of Recent Arctic Sea Ice Loss on the Northern Hemisphere Winter Climate Change. Geophys. Res. Lett., 45, 3255–63, https://doi.org/10.1002/2017GL076502

 

 


Keeping the lights on: A new generation of research into climate risks in energy systems

By: Paula Gonzalez, Hannah Bloomfield, David Brayshaw

The Department’s Energy Meteorology Group recently hosted an online two-day workshop on the Next Generation Challenges in Energy-Climate Modelling, supported by the EU-H2020 PRIMAVERA project. The event took place on June 22-23; though it had been planned to take place physically in Reading, it evolved into a Zoom meeting due to the COVID-19 pandemic. The workshop was joined by 81 participants from 22 countries on six continents.

Climate variability and change have a two-way relationship with the energy system. On the one hand, the need to reduce greenhouse gas emissions is driving an increase in the use of weather-sensitive renewable energy sources, such as wind and solar power, and the electrification of fossil-fuel-intensive sectors such as transport. On the other, a changing climate impacts the energy system through changing resource patterns and changing needs for heating and cooling. As a result, the energy system as a whole is becoming more sensitive to climate, and energy researchers are becoming increasingly aware of the risks associated with climate variability and change.

Recent years have therefore seen a trend towards the incorporation of climate risk into energy system modelling. Significant challenges remain, and in many cases climate risk and uncertainty are neglected or handled poorly (e.g., by focussing on ‘Typical Meteorological Years’, or very limited sets of meteorological data rather than extensive sampling of long-term climate variability and change – Bloomfield et al. 2016; Hilbers et al. 2019). Many of the choices made by energy scientists concerning climate are well-founded, being driven by practical limitations (e.g., computational constraints), but in several other cases there is also a poor appreciation of the potential role of climate uncertainty in energy system applications (often focused on system resilience rather than design). Moreover, even when the two communities actively seek to collaborate, they often feel as if they ‘don’t speak the same language’.

The workshop was thus intended to encourage deeper engagement and interaction between energy and climate researchers.  It had two main objectives: to encourage an active collaboration between the relevant research communities, and to jointly pinpoint the challenges of incorporating weather and climate risk in energy system modelling while fostering opportunities to address them.  Each day of the meeting was designed around a topic and a pre-defined set of research/discussion questions. Day 1 was focused on the use of historical data to investigate climate risks in energy system modelling, whereas Day 2 was centred on the use of future climate data for the assessment of climate change impacts on the energy system. A combination of short ‘thought-provoking’ invited talks, small breakout groups and plenary sessions was used to address the proposed questions.

The outputs from the workshop are being prepared as a manuscript for submission later this summer. However, some of the key outcomes of the discussions were:

  • Climate data is abundant. The problems that energy modellers face instead concern data selection, downscaling, bias correction and sub-sampling. This point was creatively illustrated by the “data truck” in one of the invited talks, by Dr Sofia Simões (Figure 1).
  • Energy models and data are not always accessible or adequate. Information necessary to run or calibrate energy models (observed generation output, system grid and design, etc.) is not always readily available or of high quality. Additionally, climate scientists are ill-prepared to extract the weather and climate signal from those timeseries which are also impacted by non-meteorological factors (e.g., plant degradation, maintenance, cost decisions, etc.).
  • It is important to recognise that weather and climate are just one of the sources of uncertainty affecting the energy system. Energy modellers also face several other unknowns when representing the system, such as policies, market conditions, socio-economic factors, technological changes, etc. More research is needed to understand the extent to which climate uncertainty may affect the outcomes of energy-modelling studies targeting other problems (e.g., technological choices or policy design).   
  • There is a need for a common language. The complexities of each community’s tools and the use of jargon often lead to confusion. Providing training that targets people working at the interface of the two communities would be very beneficial.

Figure 1: The ‘climate data truck’ cartoon illustrates an incompatibility between climate data supply and the ability to ingest it into energy system models. Figure courtesy of Dr Sofia Simões and the Clim2Power project (https://clim2power.com/).

The switch to an online event was unexpectedly beneficial for the workshop, which ended up having a much wider reach than anticipated. Firstly, we were able to accommodate more participants than we would have done in a face-to-face workshop. And secondly, the fact that participants did not need to incur any travel expenses meant that more early career scientists (ECSs) were able to join the event. Given the nature of “energy-climate” as a very new and rapidly evolving research field, the ECS community was one that the workshop purposefully sought to target and support.

The participant feedback was overwhelmingly positive and there was strong interest in organising a similar workshop next year, as well as in exploring the provision of training opportunities such as a Summer School, a YouTube channel, webinars, etc. The members of the organising committee (itself a highly international and multi-disciplinary group of researchers) continue to work together on developing these suggestions, and warmly welcome contributions and advice from interested parties (please see the workshop website for details).

References:

Bloomfield, H.C., D.J. Brayshaw, L.C. Shaffrey, P.J. Coker, and H.E. Thornton, 2016. Quantifying the increasing sensitivity of power systems to climate variability. Environ. Res. Lett., 11(12), p.124025. https://iopscience.iop.org/article/10.1088/1748-9326/11/12/124025

Hilbers, A.P., D.J. Brayshaw, and A. Gandy, 2019. Importance subsampling: improving power system planning under climate-based uncertainty. Appl. Energy, 251, p.113114. https://arxiv.org/abs/1903.10916

 


Climate is changing. What are the risks for you and me?

Forewarned is forearmed

By: Anna Freeman

The weather conditions prevailing in an area over a long period of time influence nearly every aspect of our lives and present both a resource and a hazard. Seasonal temperature cycles conditioning crop growth and energy demands are known as ‘climate resource’, while hot spells, floods and droughts are examples of ‘climate hazards’. Changes in hazardous events and variations in climate resource together constitute climate risks. You have probably heard of the wildfires in Australia and Siberia, heatwaves in Europe and floods in Britain, so the image of just how dangerous some climate risks could be is clear.

Measure the risk

Climate change might alter the magnitude, duration, frequency, timing, and spatial extent of events, all of which could be challenging. We can use these to measure climate risk. ‘Magnitude’, for instance, can be an extreme value over several years. ‘Duration’ defines how long an event lasts or how long conditions stay within a specific range – such as the duration of the growing season. ‘Timing’ tells us when something occurs, and ‘frequency’ defines how often an event occurs. For example, heatwaves can be measured as the number of events per year (‘frequency’) or the number of days per year (‘duration’).
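To make the frequency and duration measures concrete, here is a minimal sketch of how such heatwave indicators could be computed from one year of daily maximum temperatures. The 28 °C threshold and three-day run length are illustrative values, not the Met Office heatwave definition, which uses location-dependent thresholds:

```python
import numpy as np

def heatwave_indicators(tmax, threshold=28.0, min_length=3):
    """Heatwave frequency (events per period) and duration (days per period)
    from daily maximum temperatures. An event is a run of at least
    `min_length` consecutive days with tmax >= threshold."""
    hot = np.asarray(tmax) >= threshold
    events, run = [], 0
    for day_is_hot in np.append(hot, False):  # trailing False ends a final run
        if day_is_hot:
            run += 1
        else:
            if run >= min_length:
                events.append(run)
            run = 0
    return len(events), sum(events)  # frequency, total heatwave days
```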

Then we need to consider the ‘exposure’ – the livelihoods, assets, and ecosystems that could be negatively affected by hazard or change in climate resource – plus our ‘vulnerability’ to suffering harm or loss.

Climate risks could be presented as future impacts, but to do this we really need to assume how the economy adapts to climate change. Another approach is to calculate a series of climate risk indicators, which relate to, but do not directly measure, the socio-economic impact. I’m currently working on a project, led by Prof. Nigel Arnell and Dr. Alison L. Kay, identifying and estimating these indicators for the UK.

Indicators

The project has identified several indicators relevant to climate risks:

  • Health and well-being indicators relate to ‘Met Office heatwave’ and the NHS ‘amber alert’ temperature thresholds.
  • Energy indicators are proxies for heating and cooling energy demand, based on thresholds used in building management.
  • Transport indicators are based on thresholds leading to increased operational risks of road surface melting or failure of railway track and signalling equipment etc.
  • Agri-climate indicators are proxies for agricultural productivity.
  • The drought indicator is expressed as the proportion of time spent in ‘drought’.
  • Wildfire indicators are based on fire warning systems currently used by the Met Office.
  • Water indicators are proxies for the effect of climate change on river flood risk and on water resource drought. 

Projections – 100 years ahead

The Met Office UK Climate Projections (UKCP) describe how the UK’s climate might change over the 21st century. The new UKCP18 projections (Lowe et al., 2018) combine results from the most plausible climate models at 60 km, 25 km, 12 km and even 1 km grid resolution over the country. In our study, we applied the UKCP18 changes in climate to the observed 1981-2010 baseline climatology (Met Office, 2018) to produce a series of projections of future climate, from which we calculated our climate risk indicators.
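Applying projected changes to an observed baseline in this way is often called the ‘delta change’ approach. A minimal sketch of the idea follows; the additive change for temperature, the percentage change for precipitation, and the function and variable names are illustrative, not the project’s actual workflow:

```python
import numpy as np

def delta_change(baseline, delta_t, delta_p_percent):
    """Apply projected changes to an observed baseline climatology.
    baseline: dict with 'temperature' (degC) and 'precip' (mm), each an
    array of 12 monthly values;
    delta_t: projected temperature change per month (degC);
    delta_p_percent: projected precipitation change per month (%)."""
    return {
        "temperature": baseline["temperature"] + delta_t,            # additive
        "precip": baseline["precip"] * (1 + delta_p_percent / 100.0),  # scaled
    }
```

Risk indicators are then recomputed from the perturbed series and compared with the baseline values.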

Initial results

Figure 1: Indicators for transport, agriculture, and wildfire (MOFSI – the Met Office Fire Severity Index) over 1981-2100, estimated as 30-year means. These are worst-case-scenario (high emissions) risks.

Figure 1 shows that in the worst-case scenario (high carbon emissions), climate risks for transport, agriculture, and wildfire will increase across the country. This is also true for public health, floods, and droughts. Demand for cooling energy will increase, but demand for heating energy will decline. The warmer southern and eastern England will see more heat extremes, but the rate of warming may be greater further north and west.

Bad news: if we don’t reduce carbon emissions, we will follow the high emission scenario and face dangerous climate risks. Good news: by reducing emissions, nationally and globally, the risks can be reduced, and by understanding how risks are changing we can develop adaptation and resilience strategies to lessen the impacts of climate change. For you and me this means that the severity of climate risks rests in the hands of humanity.

For more in-depth results, please follow the University of Reading’s news updates. If you want to know more about the climate risks project, please email: dr.anna.freeman@gmail.com

References:

Lowe, J.A. et al. (2018) UKCP18 Science Overview Report. Met Office Hadley Centre, version 2.0 https://www.metoffice.gov.uk/pub/data/weather/uk/ukcp18/science-reports/UKCP18-Overview-report.pdf

Met Office (2018) HadUK-Grid Gridded Climate Observations on a 12km grid over the UK for 1862-2017. Centre for Environmental Data Analysis, 15/07/2019. http://catalogue.ceda.ac.uk/uuid/dc2ef1e4f10144f29591c21051d99d39



What Does A Probability Of Rainfall Mean?

By: Tom Frame

Here is a question that you may think has a simple answer – but surveys have often indicated that people misinterpret it. So why is it difficult to answer? This blog entry is about why the probability of rainfall is sometimes misunderstood. First, however, some context: in recent decades weather forecasts have moved from simply giving a definite statement of what will happen (“Tomorrow noon it will rain”) to giving probabilistic statements (“Tomorrow noon there is a 50% chance of rain”). This is particularly true of the many mobile phone apps that issue forecasts based on your location and show information about the amount of rainfall (e.g. a dark cloud with raindrops, a word such as ‘heavy’ or ‘light’, or a numeric amount in mm) along with a probability value, usually expressed as a percentage.

So what does this probability actually mean?

To start, before considering rainfall, let’s consider a much simpler and more familiar problem. Think of rolling a standard six-sided unbiased die. What is the probability of rolling a six? Simple – there are six sides, each with equal probability of occurring, therefore the probability is 1 in 6. Within this there are some hidden assumptions – for example, it is unspoken, but assumed, that the die will always come to rest on one of its faces (not on a corner or edge), and that if it doesn’t, the roll is deemed invalid and it must be rolled again. This constraint guarantees that the result is always defined to be 1, 2, 3, 4, 5, or 6 and, more importantly, everyone understands what it means to “roll a die” and what the event “roll a six” is. The same is true, for example, of gambling on sporting events – at a bookmaker’s you are given odds on the outcome of the game; the game has a set of rules and a referee to oversee their implementation, so that the final score is defined exactly and everyone involved will know that it is 3-nil – even if they disagree with the referee’s decisions. The bookmakers will have some stated procedure to deal with other eventualities – e.g. cancellation of the match. Either way, the event (rolling a six or a 3-nil victory) is well defined, so it can be ascribed a probability and the result can be observed and verified.

Now let us consider the case of a probability of rainfall. In order for the probability of the event to be calculated, it is first necessary to define what the event is. For weather apps, the probability shown is typically the Probability of Precipitation (PoP) rather than the probability of rainfall. For the end user this is the probability of any form of precipitation (rain, sleet, snow, hail, drizzle) occurring at their location within a specified time interval (e.g. within a particular hour-long interval). These probabilities are not static, so if you look at the app’s forecast for noon tomorrow at 6am and then look again at 6pm, you might well see that the probability value has changed. These changes are associated with new information becoming available to the forecast provider. A simple (and topical!) analogy: imagine that this time last year you had been asked to estimate the probability of the whole of the UK being locked down in May this year. Chances are you would have given a value close to zero, whereas if you had been asked the same question in February this year you would probably have given a higher probability. The new information you had available about COVID-19 led you to revise your estimate. This is the essence of what a probabilistic forecast is – an estimate of the probability of an event occurring given the information available at the time it was issued.

So what exactly is the event being predicted by PoP? The simplest way to understand the definition of the event is to imagine what you could do to determine whether or not it occurs. You would simply need to stand in the same place for the designated time window (e.g. if it is a forecast of hourly precipitation, stand there for the designated hour). If there is some precipitation, the event occurred; if there is not, the event didn’t occur. If you do this many times you can then assess whether the probability forecasts were “correct” (meteorologists call this verification) – for example, if you stand in the same location every time the PoP forecast is 10%, then on 1 in 10 of those occasions you should experience precipitation (meteorologists call this property reliability).
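This check can be written down directly. Here is a minimal sketch of a reliability calculation, assuming you have a record of past PoP forecasts and matching yes/no precipitation observations:

```python
import numpy as np

def reliability_table(pop, occurred, bins=np.arange(0.0, 1.1, 0.1)):
    """For each forecast-probability bin, compare the mean forecast PoP
    with the observed frequency of precipitation. For a reliable
    forecast the two should match: of all occasions with PoP ~ 0.1,
    about 1 in 10 should turn out wet."""
    p = np.asarray(pop, dtype=float)
    o = np.asarray(occurred, dtype=float)
    idx = np.clip(np.digitize(p, bins) - 1, 0, len(bins) - 2)
    table = []
    for b in range(len(bins) - 1):
        sel = idx == b
        if sel.any():
            table.append((p[sel].mean(), o[sel].mean(), int(sel.sum())))
    return table  # (mean forecast PoP, observed frequency, count) per bin
```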

In practice, forecasting centres use much more specific quantitative definitions of PoP, because in order to verify and improve their PoP forecasts by “post-processing” raw forecast data, they need to be able to routinely observe the precipitation and recalibrate their forecasts to make them reliable. For example, PoP is usually defined as precipitation exceeding some minimal value greater than zero, related to the smallest amount of precipitation observable by rain gauges (typically around 0.2 mm), although other observations, such as rainfall radar, may be used too. There may also be some spatial aggregation involved, so that strictly speaking the probabilities are calculated not for specific geographic locations but for larger areas, with some assumptions about local homogeneity. The details of such calculations change as methodologies improve and may not be explicitly stated in publicly available forecast guidance – but the guidance will (or at least should) state how the PoP forecast should be interpreted by the end user, so it is well worth reading the guidance associated with any app you use.
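As a sketch of that definition: anything at or below the gauge threshold counts as ‘no precipitation’, and – an assumption on my part about how a raw probability might be formed, since providers’ methods differ – an uncalibrated PoP can be taken as the fraction of ensemble forecast members exceeding the threshold.

```python
import numpy as np

WET_THRESHOLD = 0.2  # mm; roughly the smallest amount a gauge can record

def event_occurred(observed_mm, threshold=WET_THRESHOLD):
    """Did the PoP event occur? A trace below the threshold counts
    as 'no precipitation'."""
    return observed_mm > threshold

def raw_pop(ensemble_mm, threshold=WET_THRESHOLD):
    """One possible raw PoP: the fraction of ensemble members whose
    forecast precipitation exceeds the threshold. Operational PoP is
    further calibrated against observations to make it reliable."""
    return np.mean(np.asarray(ensemble_mm) > threshold)

print(raw_pop([0.0, 0.1, 0.5, 1.2, 0.0]))  # 2 of 5 members wet -> 0.4
```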

So why the confusion? In surveys both long past (Murphy et al., 1980) and more recent (Fleischhut et al., 2020), the confusion seems to arise from end users not knowing the definition of the event to which the probability is being assigned, rather than from misunderstanding the nature of probability itself. One interesting result is that, when surveyed, people often erroneously interpret PoP as the fraction of the area covered by rain rather than the probability of precipitation at a specific location. While not the correct interpretation, there are cases where the PoP may be closely related to the fraction of area covered by rainfall, or is at least assumed so for practical reasons. For example, rainfall – particularly showers and convective cells – is often modelled statistically as a Poisson point process: essentially a stochastic process in which there is a fixed probability of a shower appearing at any location within a fairly large area and time window. In such a system the PoP forecast would be approximately equivalent to the fraction of the area covered by rainfall. Similarly, in the calculation of rainfall probabilities using “neighbourhood processing” (Theis et al. 2005), the probability of rainfall at a point is estimated from the fraction of the surrounding area covered by rainfall in the forecast – making an explicit link between the two.
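A minimal sketch of the neighbourhood idea, in the spirit of Theis et al. (2005) though not their exact method: the probability of rain at a grid point is estimated as the wet fraction of the surrounding forecast grid boxes.

```python
import numpy as np

def neighbourhood_pop(rain_field, i, j, radius=2, threshold=0.2):
    """Estimate PoP at grid point (i, j) as the fraction of the
    surrounding (2*radius+1)^2 neighbourhood with forecast rain above
    a threshold, clipping the window at the domain edges."""
    f = np.asarray(rain_field)
    i0, i1 = max(i - radius, 0), min(i + radius + 1, f.shape[0])
    j0, j1 = max(j - radius, 0), min(j + radius + 1, f.shape[1])
    return np.mean(f[i0:i1, j0:j1] > threshold)

# A single deterministic forecast field containing a compact shower
field = np.zeros((10, 10))
field[4:7, 4:7] = 2.0
print(neighbourhood_pop(field, 5, 5))  # 9 wet cells in a 5x5 window -> 0.36
```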

Speaking recently with people I know who are not meteorologists but regularly use weather apps, I realised that they associated the PoP value with the intensity of rainfall: higher PoP meaning more intense rainfall. This is of course not the correct interpretation of PoP, and in part these conversations motivated the subject of this blog. Thinking it over, I suspect I know the reason for their misinterpretation. Firstly, of course, they had not read the guidance for the app they were using, so were simply unaware of what the percentage values on the app actually refer to. But how did they come to associate them with rainfall intensity? My hypothesis here (which is untested) is that forecasts of heavier rainfall, particularly rainfall associated with fronts in autumn and winter, tend to come with higher PoP than weaker “showery” rain – simply because showers are inherently more uncertain, whereas coherent features such as fronts can be forecast with more confidence. Therefore, as they look at the app they see PoP increase and decrease in line with the forecast rainfall intensity, and begin to use it as a “pseudo-intensity” forecast.

References

Murphy, A.H., S. Lichtenstein, B. Fischhoff and R. L. Winkler, 1980: Misinterpretations of precipitation probability forecasts. Bull. Amer. Meteor. Soc., 61(7), 695-701. doi:10.1175/1520-0477(1980)061<0695:MOPPF>2.0.CO;2

Fleischhut, N., S. M. Herzog and R. Hertwig, 2020: Weather literacy in times of climate change. Wea. Climate Soc., 12(3), 435-452. doi:10.1175/WCAS-D-19-0043.1

Theis, S.E., A. Hense and U. Damrath, 2005: Probabilistic precipitation forecasts from a deterministic model: A pragmatic approach. Met. Apps., 12(3), 257-268. doi:10.1017/S1350482705001763
