Are Eurasian winter cooling and Arctic sea-ice loss dynamically connected?

By: Rohit Gosh

The observed sea ice concentration (SIC) in the Arctic has been declining in recent decades. Temperatures have been rising all over the planet, but warming has been much faster over the Arctic, a phenomenon known as Arctic Amplification. We have also seen some extremely cold Eurasian winters during the same period. These cold winters produce a Warm Arctic-Cold Eurasia (WACE) pattern in the observed surface air temperature (SAT) trend (Figure 1a). Indeed, previous studies have found links between the warming Arctic and the cooling over Eurasia. However, many opposing studies claim the observed WACE trend is simply a result of climate noise or internal atmospheric variability (Ogawa et al. 2018). Over the last five years, the observed Eurasian cooling trend has been weakening (Figure 1), whilst SIC has continued to fall, which supports the theory that the links found can be explained by noise in the climate data. But does the recent reduced Eurasian cooling really imply that Arctic sea-ice loss plays no role in creating the WACE trend? We can figure out the answer if we look at the two main modes of SAT variability over Eurasia and their associated dynamics.

Figure 1: a) December-January-February (DJF) surface air temperature (SAT) trend over Eurasia (20°-90°N, 0°-180°E) for the period 1980 to 2014 (35 years) from the ERA-Interim reanalysis, and b) 1980 to 2019 (40 years). Units are in K/year.

Applying principal component analysis to winter (December-January-February, DJF) SAT variability over Eurasia from 1980 to 2019, the first mode (EOF1) shows a Eurasian warming pattern (Figure 2a). The associated sea level pressure (SLP) shows a low centered on the Barents Sea (north of Scandinavia and Russia). This low is part of the Arctic Oscillation (AO), the leading mode of Northern Hemisphere SLP variability, as the AO index has a strong correlation (Pearson correlation coefficient: 0.81) with the principal component time series (PC1) of EOF1 (Figure 2c). The second mode of Eurasian SAT variability (EOF2) shows the WACE pattern, with a warm centre over the Barents Sea and a cold centre over central and eastern Eurasia (Figure 2b). The WACE pattern is associated with an SLP high centered on northern Eurasia/Siberia, known as the Ural blocking or Siberian high.
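For readers who want to see the mechanics, here is a minimal sketch of this kind of EOF analysis in Python, assuming a gridded DJF-mean SAT anomaly array; the random placeholder data, the latitude weighting choice and the use of plain NumPy SVD (rather than a dedicated EOF package) are illustrative assumptions, not the exact procedure behind Figure 2.

```python
import numpy as np

# Illustrative only: sat_anom has shape (n_years, n_lat, n_lon) and holds
# DJF-mean SAT anomalies over the Eurasian domain (20-90N, 0-180E).
rng = np.random.default_rng(0)
n_years, n_lat, n_lon = 40, 36, 90
sat_anom = rng.standard_normal((n_years, n_lat, n_lon))  # placeholder data

# Weight each grid point by sqrt(cos(latitude)) so that equal areas
# contribute equally to the variance.
lats = np.linspace(20, 90, n_lat)
weights = np.sqrt(np.cos(np.deg2rad(lats)))[:, None]
x = (sat_anom * weights).reshape(n_years, -1)

# EOFs via singular value decomposition of the (years x space) matrix.
u, s, vt = np.linalg.svd(x - x.mean(axis=0), full_matrices=False)
explained = s**2 / np.sum(s**2)          # fraction of variance per mode
pcs = u * s                              # PC time series (years x modes)
eofs = vt.reshape(-1, n_lat, n_lon)      # spatial patterns

# Scale EOF1/EOF2 to one standard deviation of their PCs (units of K),
# as in Figure 2, and normalise the PC time series.
for k in (0, 1):
    eofs[k] *= pcs[:, k].std()
pcs_norm = pcs / pcs.std(axis=0)
print(f"EOF1 explains {explained[0]:.1%}, EOF2 explains {explained[1]:.1%}")
```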

Figure 2: The spatial patterns (in shading) of the a) PC1/EOF1 and b) PC2/EOF2 principal component modes of winter (DJF) SAT variability over Eurasia (20°-90°N, 0°-180°E) in the ERA-Interim reanalysis (1979-2019). The upper right corner of each panel shows the explained variance fraction of that component. The EOF patterns are scaled to correspond to a one standard deviation variation of the respective principal component time series, and thus have units of K. The black contours are the SLP fields (in hPa) associated with the respective EOFs, derived by linear regression of the SLP field on the respective normalized PC time series. c) The normalized PC1 time series (in black) associated with the EOF1 pattern in a) and the Arctic Oscillation index (in red), which is the normalized PC1 time series associated with the EOF1 of Northern Hemisphere (20°-90°N, 180°W-180°E) SLP. d) The normalized PC2 Eurasian SAT time series (in black) associated with the EOF2/WACE pattern in b) and the normalized, sign-reversed time series of the winter area-averaged (74°N-80°N, 20°E-68°E) Barents Sea SIC (in blue). Light gray vertical lines in c) and d) show the year 2014, when the AO changed to a positive phase.

The principal component associated with the EOF2 or WACE pattern (PC2) shows a persistent positive trend, especially after 2005 (black time series in Figure 2d). This indicates a strengthening Ural blocking. Moreover, the time series is highly correlated with the SIC anomalies over the Barents Sea (Pearson correlation coefficient: 0.85). This is the area of the Arctic which has seen the strongest SIC decline (red contoured area in Figure 3), situated below the warming centre of the WACE pattern. This correlation suggests that the WACE pattern is, in fact, dynamically coupled with the Barents Sea-ice variations (Mori et al. 2014) and therefore not simply due to climate noise. Moreover, the WACE pattern has strengthened over the last five years, leading to an enhanced Eurasian cooling. So, if the WACE-sea-ice relation holds, how did the overall Eurasian cooling decrease?

Figure 3: The winter (DJF) mean sea-ice concentration (SIC) trend in percent/year over the Arctic Ocean from HadISST-SIC data from 1979 to 2019. The red contour shows the Barents Sea region (74°N-80°N, 20°E-68°E).

The reduction of Eurasian cooling over the last five years is instead a result of the change in the PC1 trend from negative to positive after 2014 (black time series in Figure 2c). This change in trend affects the overall Eurasian SAT trend shown in Figure 1, which is a linear combination of the trends contributed by each principal component or EOF. The trend in PC1 is not significant, as it arises mainly from AO-related internal variability. Nevertheless, until 2014, PC1 had a negative trend due to the negative phase of the AO from 2009 (Figure 2c). This brings a central Eurasian cooling response, reinforces the Barents Sea-ice forced cooling trend from the WACE pattern (Figure 2b) and enhances the Eurasian cooling signal (Figure 1a). However, by 2019, the PC1 trend becomes positive due to the positive phase of the AO after 2014. This leads to central Eurasian warming and competes with the significant cooling trend from the WACE pattern. The net effect is a reduced Eurasian cooling signal in the overall SAT trend (Figure 1b). Hence, in spite of an increasing WACE trend, Eurasian SAT cooling has weakened over the last five years due to the phase change of the Arctic Oscillation.
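To make the "linear combination of trends" idea concrete, here is a hedged sketch: the least-squares trend of each PC time series, multiplied by its EOF pattern, gives that mode's contribution to the SAT trend map, and the leading contributions approximately sum to the total. The array shapes follow the sketch above, but the data are random placeholders rather than ERA-Interim fields.

```python
import numpy as np

# Placeholder inputs:
# pcs_norm: (n_years, n_modes) normalised PC time series,
# eofs:     (n_modes, n_lat, n_lon) EOF patterns in K per standard deviation.
rng = np.random.default_rng(1)
n_years, n_modes, n_lat, n_lon = 40, 5, 36, 90
pcs_norm = rng.standard_normal((n_years, n_modes))
eofs = rng.standard_normal((n_modes, n_lat, n_lon))
years = np.arange(n_years)

# Least-squares linear trend of each PC (per year).
pc_trends = np.polyfit(years, pcs_norm, deg=1)[0]    # shape (n_modes,)

# Each mode's contribution to the SAT trend map is its PC trend times
# its spatial pattern; summing the leading modes approximates the total.
contribution = pc_trends[:, None, None] * eofs       # (n_modes, n_lat, n_lon)
trend_from_pc1 = contribution[0]
trend_from_pc2 = contribution[1]
approx_total_trend = contribution.sum(axis=0)
print(trend_from_pc1.shape, approx_total_trend.shape)
```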

References:

Mori, M., M. Watanabe, H. Shiogama, J. Inoue, and M. Kimoto, 2014: Robust Arctic Sea-Ice Influence on the Frequent Eurasian Cold Winters in Past Decades. Nat. Geosci., 7, 869-873, https://doi.org/10.1038/ngeo2277

Ogawa, F., and Coauthors, 2018: Evaluating Impacts of Recent Arctic Sea Ice Loss on the Northern Hemisphere Winter Climate Change. Geophys. Res. Lett., 45, 3255–63, https://doi.org/10.1002/2017GL076502

 

 

Posted in Arctic, Climate, Cryosphere, Polar, Teleconnections

Keeping the lights on: A new generation of research into climate risks in energy systems

By: Paula Gonzalez, Hannah Bloomfield, David Brayshaw

The Department’s Energy Meteorology Group recently hosted an online two-day workshop on the Next Generation Challenges in Energy-Climate Modelling, supported by the EU-H2020 PRIMAVERA project. The event took place on June 22-23 and, though originally planned to take place in Reading, it evolved into a Zoom meeting due to the COVID-19 pandemic. The workshop was joined by 81 participants from 22 countries across six continents.

Climate variability and change have a two-way relationship with the energy system. On the one hand, the need to reduce greenhouse gas emissions is driving an increase in the use of weather-sensitive renewable energy sources, such as wind and solar power, and the electrification of fossil-fuel-intensive sectors such as transport. On the other, a changing climate impacts the energy system through changing resource patterns and changing needs for heating and cooling. As a result, the energy system as a whole is becoming more sensitive to climate, and energy researchers are becoming increasingly aware of the risks associated with climate variability and change.

Recent years have therefore seen a trend towards the incorporation of climate risk into energy system modelling. Significant challenges remain, and in many cases climate risk and uncertainty are neglected or handled poorly (e.g., by focussing on ‘Typical Meteorological Years’, or very limited sets of meteorological data rather than extensive sampling of long-term climate variability and change – Bloomfield et al. 2016; Hilbers et al. 2019). Many of the choices made by energy scientists concerning climate are well-founded, being driven by practical limitations (e.g., computational constraints), but in several other cases there is also a poor appreciation of the potential role of climate uncertainty in energy system applications (often focused on system resilience rather than design). Moreover, even when the two communities actively seek to collaborate, they often feel as if they ‘don’t speak the same language’.

The workshop was thus intended to encourage deeper engagement and interaction between energy and climate researchers.  It had two main objectives: to encourage an active collaboration between the relevant research communities, and to jointly pinpoint the challenges of incorporating weather and climate risk in energy system modelling while fostering opportunities to address them.  Each day of the meeting was designed around a topic and a pre-defined set of research/discussion questions. Day 1 was focused on the use of historical data to investigate climate risks in energy system modelling, whereas Day 2 was centred on the use of future climate data for the assessment of climate change impacts on the energy system. A combination of short ‘thought-provoking’ invited talks, small breakout groups and plenary sessions was used to address the proposed questions.

The outputs from the workshop are being prepared as a manuscript for submission later this summer.  However, some of the key outcomes of the discussions were:

  • Climate data is abundant. The problems that energy modellers face centre instead on data selection, downscaling, bias-correction and sub-sampling. This point was creatively illustrated by the “data truck” in one of the invited talks by Dr Sofia Simões (Figure 1).
  • Energy models and data are not always accessible or adequate. Information necessary to run or calibrate energy models (observed generation output, system grid and design, etc.) is not always readily available or of high quality. Additionally, climate scientists are ill-prepared to extract the weather and climate signal from those timeseries which are also impacted by non-meteorological factors (e.g., plant degradation, maintenance, cost decisions, etc.).
  • It is important to recognise that weather and climate are just one of the sources of uncertainty affecting the energy system. Energy modellers also face several other unknowns when representing the system, such as policies, market conditions, socio-economic factors, technological changes, etc. More research is needed to understand the extent to which climate uncertainty may affect the outcomes of energy-modelling studies targeting other problems (e.g., technological choices or policy design).   
  • There is a need for a common language. The complexities of each community’s tools and the use of jargon often lead to confusion. Providing training that targets people working at the interface of the communities would be very beneficial.

Figure 1: The ‘climate data truck’ cartoon illustrates an incompatibility between climate data supply and the ability to ingest it into energy system models. Figure courtesy of Dr Sofia Simões and the Clim2Power project (https://clim2power.com/).

The switch to an online event was unexpectedly beneficial for the workshop, which ended up having a much wider reach than anticipated. Firstly, we were able to accommodate more participants than we would have done in a face-to-face workshop. And secondly, the fact that participants did not need to incur any travel expenses meant that more early career scientists (ECSs) were able to join the event. Given the nature of “energy-climate” as a very new and rapidly evolving research field, the ECS community was one that the workshop purposefully sought to target and support.

The participant feedback was overwhelmingly positive and there was strong interest in organising a similar workshop next year, as well as in exploring the provision of training opportunities such as a Summer School, a YouTube channel, webinars, etc. The members of the organising committee (itself a highly international and multi-disciplinary group of researchers) continue to work together on developing these suggestions, and warmly welcome contributions and advice from interested parties (please see the workshop website for details).

References:

Bloomfield, H.C., D.J. Brayshaw, L.C. Shaffrey, P.J. Coker, and H.E. Thornton, 2016. Quantifying the increasing sensitivity of power systems to climate variability. Environ. Res. Lett., 11(12), p.124025. https://iopscience.iop.org/article/10.1088/1748-9326/11/12/124025

Hilbers, A.P., D.J. Brayshaw, and A. Gandy, 2019. Importance subsampling: improving power system planning under climate-based uncertainty. Appl. Energy, 251, p.113114. https://arxiv.org/abs/1903.10916

 

Posted in Climate

Climate is changing. What are the risks for you and me?

Forewarned is forearmed

by Anna Freeman

The weather conditions prevailing in an area over a long period of time influence nearly every aspect of our lives and present both a resource and a hazard. Seasonal temperature cycles conditioning crop growth and energy demands are known as the ‘climate resource’, while hot spells, floods and droughts are examples of ‘climate hazards’. Changes in hazardous events and variations in the climate resource are together known as climate risks. You have probably heard of wildfires in Australia and Siberia, heatwaves in Europe and floods in Britain, so the picture of just how dangerous some climate risks can be is clear.

Measure the risk

Climate change might alter the magnitude, duration, frequency, timing, and spatial extent of events, all of which could be challenging. We can use these to measure climate risk. ‘Magnitude’, for instance, can be an extreme value reached over several years. ‘Duration’ defines how long an event lasts or how long conditions stay within a specific range – such as the duration of the growing season. ‘Timing’ tells us when something occurs, and ‘frequency’ defines how often an event occurs. For example, heatwaves can be counted as a number of events per year (‘frequency’) or a number of days per year (‘duration’), as in the sketch below.
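As a toy illustration of the frequency/duration distinction, this sketch counts heatwave events and heatwave days in a short series of daily maximum temperatures, using an illustrative 28°C threshold and a three-consecutive-day rule; these numbers are assumptions for the example, not the project’s definitions.

```python
import numpy as np

def heatwave_events_and_days(tmax, threshold=28.0, min_run=3):
    """Count events (frequency) and days (duration) above a threshold.

    A 'heatwave' here is any run of at least `min_run` consecutive days
    with daily maximum temperature above `threshold` (illustrative rule).
    """
    hot = tmax > threshold
    events, days, run = 0, 0, 0
    for flag in np.append(hot, False):   # trailing False closes a final run
        if flag:
            run += 1
        else:
            if run >= min_run:
                events += 1
                days += run
            run = 0
    return events, days

# A short, made-up run of daily maximum temperatures (degrees C).
tmax = np.array([25, 26, 29, 30, 29, 27, 30, 31, 29, 30, 26])
print(heatwave_events_and_days(tmax))   # -> (2 events, 7 heatwave days)
```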

Then we need to consider the ‘exposure’ – the livelihoods, assets, and ecosystems that could be negatively affected by hazard or change in climate resource – plus our ‘vulnerability’ to suffering harm or loss.

Climate risks could be presented as future impacts, but to do this we really need to assume how the economy adapts to climate change. Another approach is to calculate a series of climate risk indicators, which relate to, but do not measure, the socio-economic impact. I’m currently working on a project, led by Prof. Nigel Arnell and Dr Alison L. Kay, identifying and estimating these indicators for the UK.

Indicators

The project has identified several indicators relevant to climate risks:

  • Health and well-being indicators relate to ‘Met Office heatwave’ and the NHS ‘amber alert’ temperature thresholds.
  • Energy indicators are proxies for heating and cooling energy demand, based on thresholds used in building management.
  • Transport indicators are based on thresholds leading to increased operational risks of road surface melting or failure of railway track and signalling equipment etc.
  • Agri-climate indicators are proxies for agricultural productivity.
  • The drought indicator is expressed as the proportion of time in ‘drought’.
  • Wildfire indicators are based on fire warning systems currently used by the Met Office.
  • Water indicators are proxies for the effect of climate change on river flood risk and on water resource drought. 

Projections – 100 years ahead

The Met Office UK Climate Projections (UKCP) describe how the UK’s climate might change over the 21st century. The new UKCP18 projections (Lowe et al., 2018) combine results from the most plausible climate models at 60km, 25km, 12km and even 1km grid resolution over the country. In our study, we applied the UKCP18 changes in climate to the observed 1981-2010 baseline climatology (Met Office, 2018) to produce a series of projections of future climate, from which we calculated our climate risk indicators.
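A minimal sketch of that "apply the changes to the baseline" step, assuming an additive delta on a common grid; the field names, the synthetic data and the 15°C example threshold are illustrative assumptions, not the UKCP18 processing chain used in the project.

```python
import numpy as np

# Placeholder fields on a common grid: an observed 1981-2010 baseline
# climatology and a UKCP18-style projected change for a future period.
rng = np.random.default_rng(3)
baseline_tas = 10 + 2 * rng.standard_normal((112, 82))        # degrees C
projected_change = 2 + 0.5 * rng.standard_normal((112, 82))   # degrees C

# Additive delta-change: apply the projected change to the observed
# baseline to obtain a plausible future climatology, then compute a
# simple indicator (here, the fraction of grid cells above an
# illustrative 15 C threshold) for both periods.
future_tas = baseline_tas + projected_change
indicator_baseline = np.mean(baseline_tas > 15.0)
indicator_future = np.mean(future_tas > 15.0)
print(f"fraction above 15C: baseline {indicator_baseline:.2f}, "
      f"future {indicator_future:.2f}")
```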

Initial results

Figure 1: Indicators for transport, agriculture, and wildfire (MOFSI – the Met Office’s Fire Severity Index) between 1981 and 2100, estimated as 30-year means. These are worst-case scenario (high emissions) risks.

Figure 1 shows that in the worst-case scenario (high carbon emissions) climate risks for transport, agriculture, and wildfire will increase across the country. This is also true for public health, floods, and droughts. Demand for cooling energy will increase, but demand for heating energy will decline. The warmer southern and eastern England will see more heat extremes, but the rate of warming may be greater further north and west.

Bad news: if we don’t reduce carbon emissions, we will follow the high-emission scenario and face dangerous climate risks. Good news: by reducing emissions, nationally and globally, the risks can be reduced, and by understanding how risks are changing we can develop adaptation and resilience strategies to lessen the impacts of climate change. For you and me this means that the severity of climate risks rests in the hands of humanity.

For more in-depth results please follow the University of Reading’s news updates.  If you want to know more about the climate risks project, please email: dr.anna.freeman@gmail.com

References:

Lowe, J.A. et al. (2018) UKCP18 Science Overview Report. Met Office Hadley Centre, version 2.0 https://www.metoffice.gov.uk/pub/data/weather/uk/ukcp18/science-reports/UKCP18-Overview-report.pdf

Met Office (2018) HadUK-Grid Gridded Climate Observations on a 12km grid over the UK for 1862-2017. Centre for Environmental Data Analysis, 15/07/2019. http://catalogue.ceda.ac.uk/uuid/dc2ef1e4f10144f29591c21051d99d39

 

Posted in Climate

What Does A Probability Of Rainfall Mean?

By: Tom Frame

Here is a question that you may think has a simple answer – but surveys have often indicated people misinterpret it. So why is this question difficult to answer? This blog entry is about why the probability of rainfall is sometimes misunderstood. First, however, some context: in recent decades weather forecasts have moved from simply giving a definite statement of what will happen (“Tomorrow noon it will rain”) to giving probabilistic statements (“Tomorrow noon there is a 50% chance of rain”). This is particularly true of many mobile phone apps, which issue forecasts based on your location and show information about the amount of rainfall (e.g. a dark cloud with raindrops, a word such as heavy or light, or a numeric amount in mm) along with a probability value, usually expressed as a percentage.

So what does this probability actually mean?

To start, before considering rainfall, let’s consider a much simpler and familiar problem. Think of rolling a standard six-sided unbiased die. What is the probability of rolling a six? Simple – there are six sides, each with equal probability of occurring, therefore the probability is 1 in 6. Within this there are some hidden assumptions – for example it is unspoken, but assumed, that the die will always come to rest on one of its faces (not on a corner or edge), and that if it doesn’t, the roll is deemed invalid and it must be rolled again. This constraint guarantees that the result is always defined to be 1, 2, 3, 4, 5, or 6, and, more importantly, everyone understands what it means to “roll a die” and what the event “roll a 6” is. The same is true, for example, of gambling on sporting events – at a bookmakers you are given odds on the outcome of the game, the game has a set of rules and a referee to oversee the implementation of the rules, so that the final score is defined exactly and everyone involved will know that it is 3-nil – even if they disagree with the referee’s decisions. The bookmakers will have some stated procedure to deal with other eventualities – e.g. cancellation of the match. Either way the event (roll a die or a 3-nil victory) is well defined, so it can be ascribed a probability and the result can be observed and verified.

Now let us consider the case of a probability of rainfall. In order for the probability of the event to be calculated, first it is necessary to define what the event is. For weather apps, the probability shown is typically the Probability of Precipitation (PoP) rather than probability of rainfall. For the end user this is the probability of any form of precipitation (rain, sleet, snow, hail, drizzle) occurring at their location within a specified time interval (e.g. within a particular hour-long interval). These probabilities are not static, so if you look at the app’s forecast for noon tomorrow at 6am and then look again at 6pm you might well see that the probability value has changed. These changes are associated with new information being available to the forecast provider. A simple (and topical!) analogy would be to imagine if this time last year you had been asked to estimate the probability of the whole of the UK being locked down in May this year. Chances are you would have given a value close to zero, whereas if you had been asked the same question in February this year you would probably have given a higher probability. The new information you had available to you about COVID-19 led you to revise your estimate. This is the essence of what a probabilistic forecast is – an estimate of the probability of an event occurring given the information available at the time it was issued.

So what exactly is the event that is being predicted by PoP? To understand the definition of the event, the simplest way is to imagine what you could do to determine whether or not the event occurs. To do this you would simply need to stand in the same place for the designated time window (e.g. if it is a forecast of hourly precipitation, stand there for the designated hour). If there is some precipitation then the event occurred, if there is not, then the event didn’t occur. If you do this many times you could then assess whether the probability forecasts were “correct” (meteorologists call this verification) – for example, if you stand in the same location every time the PoP forecast is 10%, then 1 in 10 times you should experience precipitation (meteorologists call this property reliability).   
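That verification idea can be sketched in a few lines: group occasions by the issued PoP and compare each issued probability with the observed frequency of precipitation in that group. The synthetic forecasts and observations below stand in for a real forecast archive and are constructed to be perfectly reliable.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
pop_forecast = rng.choice([0.1, 0.3, 0.5, 0.7, 0.9], size=n)   # issued PoP
# Synthetic "truth": rain occurs with exactly the forecast probability,
# i.e. a perfectly reliable forecast system, for illustration only.
rain_observed = rng.random(n) < pop_forecast

# Reliability: for each issued probability, the fraction of occasions
# with precipitation should match that probability.
for p in np.unique(pop_forecast):
    occasions = pop_forecast == p
    observed_freq = rain_observed[occasions].mean()
    print(f"issued PoP {p:.1f} -> observed frequency {observed_freq:.2f}")
```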

In practice, forecasting centres define much more specific quantitative definitions of PoP, because in order to verify and improve their PoP forecasts by “post-processing” raw forecast data they need to be able to routinely observe the precipitation and recalibrate their forecasts to make them reliable. For example, PoP is usually defined as precipitation exceeding some minimal value greater than zero, related to the smallest amount of precipitation observable by rain gauges (typically around 0.2 mm), although other observations such as rainfall radar may be used too. There may also be some spatial aggregation involved, so that strictly speaking probabilities are not calculated for specific geographic locations but for larger areas, with some assumptions about local homogeneity. The details of such calculations change as methodologies improve and may not be explicitly stated in publicly available forecast guidance – but the guidance will (or at least should) state how the PoP forecast should be interpreted by the end user, so it is well worth reading through the guidance associated with any app you use.

So why the confusion? In surveys both long past (Murphy et al., 1980) and more recent (Fleischhut et al., 2020) the confusion seems to arise from end users not knowing the definition of the event to which the probability is being assigned, rather than misunderstanding the nature of probability itself. One interesting result is that, when surveyed, people often erroneously interpret PoP to refer to the fraction of the area covered with rain rather than the probability of precipitation at a specific location. While not the correct interpretation, there are cases where the PoP may be closely related to the fraction of the area covered by rainfall, or is at least assumed so for practical reasons. For example, people often model rainfall statistically, particularly showers and convective cells, as Poisson point processes – essentially a stochastic process in which there is a fixed probability of a shower appearing at any location within a fairly large area and time. In such a system the PoP forecast would be approximately equivalent to the fraction of the area covered by rainfall. Similarly, in the calculation of rainfall probabilities using “neighbourhood processing” (Theis et al. 2005) the probability of rainfall at a point is estimated from the fraction of the surrounding area covered by rainfall in the forecast – making an explicit link between the two.
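As a rough illustration of the neighbourhood idea, the sketch below turns a deterministic rainfall field into point probabilities by taking, at each grid point, the fraction of nearby points exceeding a small threshold; the square neighbourhood, its size and the synthetic field are illustrative assumptions, not the scheme of Theis et al. (2005) as implemented operationally.

```python
import numpy as np

def neighbourhood_pop(rain_mm, threshold=0.2, half_width=2):
    """Estimate PoP at each grid point as the fraction of points in a
    (2*half_width+1)^2 square neighbourhood with rain above `threshold`.
    A simple sketch of neighbourhood processing, not an operational scheme."""
    wet = (rain_mm > threshold).astype(float)
    ny, nx = wet.shape
    pop = np.zeros_like(wet)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - half_width), min(ny, j + half_width + 1)
            i0, i1 = max(0, i - half_width), min(nx, i + half_width + 1)
            pop[j, i] = wet[j0:j1, i0:i1].mean()
    return pop

# Synthetic forecast rainfall field (mm) with a few showery patches.
rng = np.random.default_rng(5)
rain = np.where(rng.random((20, 20)) > 0.9, rng.gamma(2.0, 2.0, (20, 20)), 0.0)
print(neighbourhood_pop(rain).round(2))
```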

Speaking recently with people I know who are not meteorologists but regularly use weather apps, I realised that they associated the PoP value with the intensity of rainfall: higher PoP meaning more intense rainfall. This is of course not the correct interpretation of PoP, and in part these conversations motivated the subject of this blog. Thinking it over, I suspect I know the reason for their misinterpretation. Firstly, of course, they had not read the guidance for the app they were using, so were simply unaware of what the percentage values they see on the app actually refer to. But how did they come to associate them with rainfall intensity? My hypothesis here (which is untested) is that there is a tendency for forecasts of heavier rainfall, particularly associated with fronts in autumn and winter, to carry higher PoP than weaker “showery” rain – simply because showers are inherently more uncertain, whereas coherent features such as fronts can be forecast with more confidence. Therefore, as they look at the app they see PoP increase and decrease in line with the intensity of rainfall forecast and begin to use it as a “pseudo-intensity” forecast.

References

Murphy, A.H., S. Lichtenstein, B. Fischhoff and R. L. Winkler, 1980: Misinterpretations of precipitation probability forecasts. Bull. Amer. Meteor. Soc., 61(7), 695-701, doi:10.1175/1520-0477(1980)061<0695:MOPPF>2.0.CO;2

Fleischhut, N., S. M. Herzog, and R. Hertwig, 2020: Weather literacy in times of climate change. Wea. Climate Soc., 12(3), 435-452, doi:10.1175/WCAS-D-19-0043.1

Theis, S.E., A. Hense and U. Damrath, 2005: Probabilistic precipitation forecasts from a deterministic model: A pragmatic approach. Met. Apps., 12(3), 257-268. doi:10.1017/S1350482705001763

Posted in Climate, Statistics

Practical Problems when Simulating the Earth

By: David Case

In principle, to simulate the earth should be a doddle. We know that it’s made of such things as molecules, crystals and atoms, and the forces between these derive from charged particles, and these do little more than move around and interact via Coulomb’s law (plus a little symmetry). So how hard could it be to start from this and scale up?

Unfortunately, if one consults the ancient tomes (such as my PhD thesis), one realises that all of this has been known for a while, and we aren’t there yet.  To make a calculation of molecular forces which is as accurate as an experiment, a typical cost may scale at around the seventh power of the size of the basis set; so as we double the size of our simulated system, we need to perform 2⁷ = 128 times as many calculations, and this barrier is impossibly steep. Whilst the approach has been known since before WWII, progress in these types of simulations really started to increase when people decided to just use parameterised models and call them ab initio. And even by cheating we can only get so far.

More recently, I’ve moved to Meteorology, and the approach here is to start from the big (the atmosphere/ocean) and move towards the small. Points in the system are mapped to a grid, and more of these points are added until the resolution is sufficient to describe the interesting physical phenomena. Encouraged by a manageable scaling in the number of computations required (although new to the game, I’m yet to see seven nested loops in a meteorology code), we throw more processors at it. One of the first things that I did when I joined the NCAS-CMS (Computational Modelling Services) team was to graph the scaling of the Met Office Unified Model for the atmosphere (below), so as to advise researchers on resource allocations. When we double the number of processing elements, we don’t double the rate at which we are performing calculations, because the communication between them starts to hit bottlenecks. Further profiling, especially for bigger models, reveals that the code spends ever increasing amounts of time calling things with names like ‘barrier’ or ‘waitall’, i.e. it’s stuck.

Figure 1: The amount of actual simulation achievable (y) for a typical UM job shows diminishing returns with number of cores (x).
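As a rough illustration of why such curves flatten, the sketch below evaluates a simple Amdahl-style strong-scaling model in which a fixed fraction of the work is serial or communication-bound; the numbers are invented for illustration and are not a fit to the UM profiles.

```python
def simulated_rate(n_cores, serial_fraction=0.02, base_rate=1.0):
    """Modelled simulation rate versus core count under an Amdahl-style
    assumption: a fixed fraction of the work cannot be parallelised."""
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)
    return base_rate * speedup

# Doubling the core count gives less and less extra simulation rate.
for cores in [36, 72, 144, 288, 576, 1152]:
    print(cores, round(simulated_rate(cores), 1))
```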

When scaling up the number of processors working on a problem, there is a step which appears trivial but which can be the major bottleneck: reading and writing the data (and in Meteorology there is a lot of data). As we parallelize the calculation of data, we must also try to parallelize the reading and writing of it, which can be hard because writing to disks imposes a physical bottleneck. The Met Office and NERC Cloud model (MONC) previously wrote the large 3D fields in parallel, but an optimisation that I implemented applied this to the (far smaller) 2D fields too. The message from the profiles below is that the number of times you write data may be as important as the amount written.

Figure 2: Darshan profiles of IO for processors (y-axis) vs time (x) for MONC. Blue lines indicate that the processors are writing. In the bottom profile, 2D fields are written in parallel, and both writing and overall runtimes are shorter.

Following the logic that the biggest calculations hit the most trivial problems, we note that a major consideration in huge calculations is the electricity bill. In fact, for this and other reasons, people are turning to a wide range of technologies when designing the current generation of supercomputers, some using graphical processor units or other accelerators. A practical problem with this is that you need to write code with different instructions for these different machines, which may take many hours to learn the tricks of and successfully port. A collaboration that I have started on recently with the Science and Technologies Facilities Council seeks to implement their tool for parsing and rewriting code, PSyclone, to target GPU cards, starting with the NEMOVAR data assimilation code. 

Figure 3: Fancy GPU from a well-known company

In the above, I have touched on a few of the practical problems that we face in big simulations and mentioned my own career story to get here. One last thing that I have noted since moving to Meteorology, and working within the structure of NCAS, is that there is a lot of teamwork that we apply to solving these problems. I was lying when I said that these were trivial but, between us, we can keep pushing through them.

Posted in Climate, Numerical modelling

How Can We Improve the Health Sector’s Climate Resilience?

By: Katty Huang and Andrew Charlton-Perez

The Problem

Weather and climate can have great impacts on human health. One aspect of this is in relation to temperature exposure. In the UK, around 9% of deaths are associated with too warm or too cold outdoor temperatures. The majority (around 40,000) of this is related to cold weather, but as the climate warms, around 7000 additional deaths per year will be associated with heat exposure by the 2050s if the population does not adapt to the changes. Health consequences of heat and cold include increased risks of heart attacks, strokes, and respiratory diseases. Aging of the population compounds the problem, as the elderly are particularly at risk due to their increased vulnerability.

The Current System

The adverse health impacts of heat and cold events can be avoided by taking preventative measures to minimize exposure, especially for the vulnerable. Severe weather forecasting systems for health are now common around the world and form one key part of the global impact-based forecasting system. In the UK, Public Health England has a Heatwave Plan and a Cold Weather Plan. Each contains an alert system based on temperature threshold definitions of a “severe event”, and an increase in alert level is triggered when a severe event is forecasted 2 to 3 days in advance. Each level of the alert system is associated with advice and action plans for different sectors of society, particularly in health and social services.

Our Aim

With increasing health risks associated with non-optimal outdoor temperatures in the future, there is a growing incentive to develop a UK climate service for health. We have been funded as part of the Strategic Priorities Fund UK Climate Resilience Programme to help the Met Office to develop the technology to build this climate service, and since January, we have been working to bring some recent weather and climate research to bear on this important problem.

To ensure the development of a climate service that benefits the end users, it is key that we engage the stakeholders in conversation about their needs and discuss the usability of our research in practical decision making. To this end, we have held a short workshop with Public Health England and are in active engagement with them and other public health stakeholders in Scotland, Wales, and Northern Ireland. We are keen to hear from all sectors and encourage anyone interested in a climate service for health to get in touch with us directly.

Weather Regimes

One field of weather and climate research that has seen a great deal of growth in recent years is the classification of atmospheric circulation patterns into groups, called weather regimes. These give an indication of the state of the large-scale atmospheric circulation, with particular consequences for temperature and for weather and climate impacts in, for example, the UK. Understanding and quantifying how weather regimes and their local influences on temperature might change in the future is one way to develop a more sophisticated description of future climate.

There is some evidence to suggest that there is greater forecast skill in predicting future weather regimes than specific outputs such as temperature. For health risk prevention, this offers an opportunity to give decision makers additional information in advance of a potentially severe event, allowing a longer response time for agencies and service workers.

Our Approach

We recently completed the first stage of our work looking at the relationship between weather regimes and mortality in the UK. We first use statistical modelling to establish the temperature-mortality relationship for 12 administrative regions in the UK (e.g. North East, North West). The statistical model allows us to calculate how many deaths on a given day could be attributed to non-optimal outdoor temperatures. By matching these attributed deaths with daily classification of weather regimes, we can see in detail which weather regimes lead to high mortality in the UK. Based on this analysis, we can already tell a lot about which weather conditions are most harmful to mortality in the UK.
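A minimal sketch of the attribution step, under simplifying assumptions: a placeholder U-shaped relative-risk curve stands in for the fitted statistical model, the daily attributable fraction is taken as (RR - 1)/RR, and the temperature, death and regime series are synthetic.

```python
import numpy as np

def relative_risk(temp_c, optimum=17.0):
    """Placeholder U-shaped relative-risk curve (quadratic in the departure
    from an assumed optimum temperature), standing in for the fitted model."""
    return 1.0 + 0.002 * (temp_c - optimum) ** 2

# Synthetic daily series for one region: mean temperature and deaths.
rng = np.random.default_rng(6)
n_days = 365
temp = 11 + 7 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 3, n_days)
deaths = rng.poisson(30, n_days)

# Attributable fraction (RR - 1)/RR and attributable deaths per day.
rr = relative_risk(temp)
attributable = deaths * (rr - 1.0) / rr

# The daily attributable deaths can then be grouped by the weather regime
# assigned to each day (here a random placeholder classification).
regime = rng.integers(0, 7, n_days)
for r in range(7):
    print(f"regime {r}: {attributable[regime == r].sum():.1f} attributable deaths")
```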

Winter: Mostly about NAO-

In winter, the negative phase of the North Atlantic Oscillation (NAO-) is most likely to be associated with high mortality for all regions, confirming our previous work. The NAO- regime has a weakened jet stream across the UK, with winds more often coming from the east and north-east, bringing cold and dry air from the European continent in winter. For an idea of what the regime looks like on a weather map, see Figure 1a.

Figure 1: Average sea level pressure (in black lines) and its difference from the typical sea level pressure (in colours) during a particular weather regime which is (a) associated with high winter mortality, (b) associated with high summer mortality related to heat, and (c) associated with high summer mortality related to cold.

Summer: Both Cold and Warm Days

In summer, the picture is more complicated. We find that, in the current climate, many of the days in summer with large temperature-related mortality have colder than average temperatures. In most regions, between 30 and 43% of the days with the highest 5% of temperature-related mortality are associated with summer cold spells. The ratio is lower in London and the East Midlands, but larger in Scotland and Northern Ireland.

This finding can be explained by the temperature-mortality relationship in the UK, which can be roughly described as U-shaped, with increased mortality risks at warm and cold temperature extremes (for an example, see Figure 2). Risks are also slightly elevated for moderately cold temperatures. This means that cold days in summer (where temperatures are around 10°C) can lead to similar numbers of additional deaths as mild warm days (with temperatures around 20°C).

Figure 2: The top panel is the mortality risk for each daily average outdoor temperature, here shown as an example for South West England. The mortality risk is expressed as a ratio relative to the regional optimal temperature, where the overall risk is at its lowest (17°C in this case). A relative risk of 2 indicates that the mortality risk is twice the risk at the optimal temperature. The bottom panel shows how frequently each daily average temperature occurs in South West England on average. The vertical dashed lines indicate the maximum and minimum daily average temperatures observed between 1991 and 2018.

One major difference between the mortality associated with warm and cold temperatures is how long their impact on deaths lasts. While deaths related to heat mostly occur during or within the first days after high temperatures occur, cold-related deaths can be delayed by many days or weeks. This means that even though the total number of deaths associated with summer cold spells can be significant, they are spread out over more days and can be less noticeable than the rapid spike in deaths caused by a heatwave.

High mortality due to summer heatwaves is most likely to occur when there is a high pressure system over the North Sea and Scandinavia (shown in Figure 1b), which leads to clear sunny days in the UK. Cold-related mortality is most frequently associated with weather patterns with higher than usual pressure over the North Atlantic Ocean (see Figure 1c). The jet stream forms a ridge over the mid-Atlantic and recurves southeastward as it crosses the UK, bringing cool and humid air from the ocean.

The Future

During the next phase of our work, we are looking at applying our ideas to the UK Climate Projections produced by the Met Office, to help quantify and understand how climate-related mortality could change in the future. There are also important differences at even smaller, city scales, which can make climate-related mortality worse, not least because of the socio-economic inequalities in some major cities in the UK. Urban modelling as part of the project will help us to map and understand some of these impacts.

References

Charlton-Perez, A. J., R. W. Aldridge, C. M. Grams, and R. Lee, 2019: Winter pressures on the UK health system dominated by the Greenland Blocking weather regime. Weather and Climate Extremes, 25, 100218, https://doi.org/10.1016/j.wace.2019.100218.

Gasparrini, A., and Coauthors, 2015: Mortality risk attributable to high and low ambient temperature: a multicountry observational study. The Lancet, 386, 9991, 369-375, https://doi.org/10.1016/S0140-6736(14)62114-0.

Hajat, S., S. Vardoulakis, C. Heaviside, and B. Eggen, 2014: Climate change effects on human health: projections of temperature-related mortality for the UK during the 2020s, 2050s and 2080s. J. Epidemiol. Community Health, 68, 641-648, http://dx.doi.org/10.1136/jech-2013-202449.

 

Posted in Climate, Environmental hazards, Health

How can we contribute to extreme event attribution in the Arctic?

By: Daniela Flocco

News of broken temperature records, droughts and other extreme climate events is nowadays constantly present in newspapers and on social media. The study of the connection between extreme events and global climate change has become the subject of an area of research called ‘extreme event attribution’, defined as the science of detecting whether anthropogenic (human-made) global warming contributed to the occurrence of extreme events. Scientists at the National Oceanic and Atmospheric Administration (NOAA) have produced yearly reports since 2011 with the aim of explaining the causes of the previous year’s extreme events (see www.climate.gov for more details).

Extreme climatic events have become of wide interest for their economic consequences, especially when they concern urban areas of the globe (Frame et al. 2020). The anthropogenic impact on extreme events can also be observed in less populated regions such as the Poles, even though it is more difficult to estimate its “cost” in the short term. A recent study (Kirchmeier-Young et al., 2017) looked at the recent extreme Arctic sea ice September minima (focusing on the record minimum of 2012) and assessed the human impact on these events. They found that the occurrence of extreme sea ice extent minima is consistent with a scenario including anthropogenic influence and is extremely unlikely in a scenario excluding anthropogenic influence. They also state that the inclusion of anthropogenic forcing is at present a necessary, but not a sufficient, cause to explain the observed sea ice extent lows.

Attribution of extreme events is strongly based on statistical studies of model forecasts and therefore relies on high-resolution, physically sophisticated models. This is particularly challenging in a changing climate, where the parameterization of physical processes needs to be able to capture the behaviour of unprecedented scenarios and produce representative results. In fact, the statistical analysis needed for event attribution requires climate models that have skill in forecasting the expected behaviour with respect to less predictable, rare events (Nature News, 2012).

Researchers are engaged in a common effort to improve model performance and assess it. An example of how improving the physics in a sea ice model can lead to improved prediction skill is work that our group (Centre for Polar Observations and Modelling) has carried out during the past few years: the implementation of a melt pond parameterization in the sea ice component of a global climate model and the analysis of the consequent improvements (Flocco et al., 2012; Schröder et al., 2014).

Figure 1: Melt ponds on Arctic sea ice (©NASA/Kate Ramsayer).

Changes in the Arctic and the Antarctic are faster and amplified with respect to the lower latitudes. One contributor to this ‘polar amplification’ is the so-called ice-albedo feedback: sea ice reflects almost all of the incoming solar radiation because of its high reflectance (albedo). When sea ice melts, larger areas of the ocean become exposed to sunlight; these absorb a large part of the solar radiation, inducing further melt. This is also true on the sea ice itself, where melt ponds – puddles of water forming in spring in topographic lows from sea ice and snow melt – cause a strong increase in sea ice melt, forming more melt ponds (Fig. 1). This process links the presence of ponds, in particular in the early melt season, to the amount of summer ice melt and consequently to the amplitude of the minimum ice extent in September.
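A back-of-the-envelope sketch of the feedback in terms of area-weighted albedos; the albedo values and fractions are illustrative round numbers, not model parameters.

```python
def surface_albedo(ice_fraction, pond_fraction_on_ice,
                   a_ice=0.7, a_pond=0.3, a_ocean=0.06):
    """Area-weighted albedo of an ice/ocean surface; illustrative values only."""
    ice_albedo = (1 - pond_fraction_on_ice) * a_ice + pond_fraction_on_ice * a_pond
    return ice_fraction * ice_albedo + (1 - ice_fraction) * a_ocean

# More ponds or less ice -> lower albedo -> more absorbed sunlight -> more melt.
for ice_frac, pond_frac in [(0.9, 0.0), (0.9, 0.3), (0.6, 0.3)]:
    print(ice_frac, pond_frac, round(surface_albedo(ice_frac, pond_frac), 2))
```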

Figure 2: Annual cycle of Arctic mean fraction of sea-ice area covered by exposed melt-ponds in our CICE simulation. (Schroeder et al., 2014).

Figure 3: Predicted ice extent verified by use of SSM/I data for the period 1979–2013 (Schroeder et al., 2014).

The presence of melt ponds on sea ice has increased over the past decades (Fig. 2), making it crucial to develop a parameterization suitable for a climate model that can deal with a changing sea ice state. In fact, the increase in melt pond presence could be thought of as a proxy for air temperature rise. The inclusion of the new melt pond physical description allows skilful predictions of the September sea ice extent minimum based on the presence of melt ponds in May (Fig. 3); in particular, the improved model was able to predict with high confidence the sea ice extent minimum of 2012.

References

 Flocco, D., D. Schröder, D. L. Feltham, and E. C. Hunke, 2012: Impact of melt ponds on Arctic sea ice simulations from 1990 to 2007, J. Geophys. Res. 117, C9, https://doi.org/10.1029/2012JC008195

Frame, D.J., M. F. Wehner, I. Noy, and S. M. Rosier, 2020:  The economic costs of Hurricane Harvey attributable to climate change. Climatic Change 160, 271–281, https://doi.org/10.1007/s10584-020-02692-8.

Kirchmeier-Young, M. C., F. W. Zwiers, and N. P. Gillett, 2017: Attribution of Extreme Events in Arctic Sea Ice Extent. J. Climate, 30, 553–571, https://doi.org/10.1175/JCLI-D-16-0412.1.

Schröder, D., D. Feltham, D. Flocco, and M. Tsamados, 2014: September Arctic sea-ice minimum predicted by spring melt-pond fraction. Nat. Climate Change, 4, 353–357, https://doi.org/10.1038/nclimate2203.

Nature news: Nature 489, 335–336 (20 September 2012) doi:10.1038/489335b

 

 

 

 

Posted in Climate change, Climate modelling, Cryosphere

Towards a marginal Arctic sea ice cover

By: Danny Feltham

As the winter night descends on the polar oceans, the surface mixed layer cools and begins to freeze, forming a floating layer of sea ice. Sea ice is a complex and dynamic component of the climate system; it is strongly influenced by, and in turn influences, air and ocean temperatures, winds and ocean currents, and undergoes large seasonal changes, growing in extent and thickness in winter, and receding to a minimum in late summer.

The planet is warming at ~1°C per century, and amplification processes have roughly doubled the Arctic regional warming rate in recent decades. The strong decline of Arctic sea ice is a striking indicator of climate change, with the last 15 years (2005-2019) seeing the 15 lowest September Arctic ice extents in the satellite record. This decline has been a wake-up call to scientists, policy-makers, and the general public. Studies show that the loss of sea ice has already contributed to Arctic amplification of global warming, has influenced biological productivity, species interactions and disease transmission, and is impacting indigenous peoples, trade, and oil exploration, including the promotion of a growing polar ecotourism industry.

Figure 1: Schematic cartoon of the Arctic sea ice food web.  Credit to Hugo Ahlenius.

The sea ice cover, of either pole, features a dense inner pack ice zone surrounded by a marginal ice zone (MIZ) in which the sea ice properties are modified by interaction with the ice-free open ocean, particularly ocean wave-ice interaction that can break up the ice cover. (See Figure 1.) The MIZ is some 100 km or so wide and is a region of low ice area concentration consisting of a disperse collection of small sea ice floes: the reduced sea ice cover exposes greater areas of the ocean to the atmosphere, and intensifies and prolongs air-ocean exchanges of heat, moisture, and momentum, altering the circulation and properties of air, ocean, and ice, air-sea gas exchange, and carbon exchange across the air-sea interface.

The conspicuous reduction of Arctic sea ice extent, combined with the observations that the MIZ is getting wider over the last decade, has often led to the impression that the MIZ is getting larger and, in the science journalism literature (and quite a few scientific papers also), one often comes across the assumption (assertion) that the “MIZ is increasing”, often in conjunction with comments on what this will mean for the future. (I will not mention names here to spare some blushes, except to note that I have sometimes found myself guilty of such woolly thinking and found myself in good company.)

Figure 2: Arctic sea ice extent (solid line) and MIZ extent (dashed line) from model and four remote sensing products (see legend). MIZ extent is defined as the area of ocean with sea ice area fraction of between 15 and 80%. Sea ice extent is the area of ocean with ice area fraction above 80%. An error bar of 10% has been applied to all observational products.

A recent study by Rolph et al. [2020] has analysed sea ice concentration data from a range of sources and, using a commonly-used definition of the MIZ extent as the area of ocean with ice area fraction between 15% and 80%, analysed changes in the Arctic MIZ extent for the first time. While there are some significant caveats concerning the accuracy of ice concentration data, particularly in the summer, a conclusion from this study is that there is little evidence that the MIZ extent is increasing or decreasing; in fact, it appears to have been fairly constant over the last three to four decades. You can see this from the top panel of Figure 2, which shows Arctic sea ice extent as estimated by various means (solid lines) and the MIZ extent (dashed lines).
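In code, this kind of extent calculation reduces to a masked area sum; here is a sketch assuming a sea ice concentration field with matching grid-cell areas (both synthetic placeholders), with the 15-80% band for the MIZ and the >80% band for the pack following the definitions given in the Figure 2 caption.

```python
import numpy as np

def miz_and_pack_extent(sic, cell_area_km2, lower=0.15, upper=0.80):
    """MIZ extent: total area of cells with 0.15 <= SIC <= 0.80.
    Pack ice extent (as in Figure 2): area of cells with SIC > 0.80."""
    miz = ((sic >= lower) & (sic <= upper)) * cell_area_km2
    pack = (sic > upper) * cell_area_km2
    return miz.sum(), pack.sum()

# Placeholder monthly-mean SIC field and equal-area grid cells.
rng = np.random.default_rng(7)
sic = np.clip(rng.random((100, 100)), 0, 1)
cell_area = np.full((100, 100), 625.0)   # 25 km x 25 km cells, in km^2
print(miz_and_pack_extent(sic, cell_area))
```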

What appears to have been happening is that, on a monthly average basis (to average over wind-induced fluctuations of the ice cover), there has been a decadal trend for the central pack ice of the Arctic Ocean to recede and move north, and the MIZ has moved north with the pack. While the MIZ region has widened at a rate of ~1.5 km/year, its extent has remained roughly constant. This is possible because the perimeter of the MIZ has, on average, decreased in proportion to the increase in width. (This is further evidence, should one need it, that the Earth really is round!)

So, the MIZ has been migrating north but not changing in area. Does this matter? As indicated above, the MIZ is a region of enhanced air-ocean heat, moisture and momentum exchanges and the location of these exchanges is relevant to local weather and oceanography. But, perhaps more dramatically, the MIZ is also a region of marine primary production, delivery of nutrients to the euphotic zone, and a hunting platform for polar bears and indigenous communities (Figure 1). Movement of the ice edge northwards is transforming the lives of local peoples and wildlife.

While the MIZ extent may not have been changing in recent times, the fraction of the Arctic ice cover that is MIZ has been increasing; see the bottom panel of Figure 2. So, among other things, it seems the processes that dominate in the MIZ, such as wave-ice interaction, are becoming increasingly important for the remaining Arctic ice cover, whereas in the past they have been of only marginal significance (pun intended).

Figure 3: Left: current and projected changes in the MIZ [Strong and Rigor, 2013; Aksenov et al, 2017]. Right: projected MIZ in the 2030s in summer (June-August) [Aksenov et al, 2017].

If the increasing trend of MIZ fraction were to continue, one may expect the entire Arctic Ocean to eventually become marginal (before being eliminated entirely). Figure 3 shows a projection of Arctic MIZ area fraction and a snap-shot map in 2030. The implications of loss of Arctic sea ice cover are still being worked out in climate modelling and field studies (notably the recent US Office of Naval Research MIZ field program) but, likely as not, there will be as many unknown unknowns as known unknowns. One thing, however, seems fairly clear: the nature of air-sea ice-ocean exchanges and feedbacks will alter in the coming decades and these interactions will depend on physical representations of MIZ sea ice processes that have never needed to be included in models before.

References

Aksenov, Y., E. E. Popova, A. Yool, A. J. G. Nurser, T. D. Williams, L. Bertino, and J. Bergh, (2017) On the future navigability of Arctic sea routes: High-resolution projections of the Arctic Ocean and sea ice, Mar. Policy, 75, 300-317, https://doi.org/10.1016/j.marpol.2015.12.027

Strong, C. and I. G. Rigor, (2013) Arctic marginal ice zone trending wider in summer and narrower in winter, Geophys. Res. Lett., 40(18), 4864–4868, https://doi.org/10.1002/grl.50928

Rolph, R. J., D. L. Feltham, and D. Schroeder, (2020) Changes of the Arctic marginal ice zone during the satellite era. The Cryosphere. ISSN 1994-0424 doi: https://doi.org/10.5194/tc-2019-224  (In Press)

 

 

Posted in Climate

Covid-19: Using tools from geophysics to assess, monitor and predict a pandemic

By: Alison Fowler, Alberto Carrassi, Javier Amezcua

The emergence of a new coronavirus disease, known as Covid-19, that could be transmitted between people was identified in China in December 2019. By 3rd March 2020 it had spread to every continent except Antarctica, totalling 92,840 confirmed cases and 3,118 deaths.

As scientists worldwide scrambled to understand this new virus, a fundamental and immediate question was how many more people are likely to die and what impact can governmental interventions have?

To answer this question, we have two valuable resources available to us. The first is numerical models, which encode the key equations that can be used to explain a pandemic. The second is observational data, which detail the number of deaths and hospitalisations due to Covid-19 that have occurred to date. Neither models nor observations are perfect, but by combining (assimilating) them we can utilise the best parts of both whilst minimising their flaws. A huge benefit of data assimilation is that it also provides a robust estimate of the uncertainties of the output, offering an understanding of the worst, best and most likely situations.

Assimilating observational data with models is routinely performed in geophysics. In fact, data assimilation is fundamental to modern day weather forecasting. Evidence for this is provided by the step change in the accuracy of weather forecasts that has been possible with the increasing availability of information from satellites orbiting the earth.

 Can data assimilation tools that have been developed for the geosciences be applied to pandemic modelling?

A team of scientists from 8 different countries (Argentina, Canada, England, France, Netherlands, Norway, Brazil and the United States of America) diverted their attention from geophysics for a few months to examine this very question. Each employed a state-of-the-art data assimilation tool typically used for geophysical problems to explain and predict the course of the pandemic in their own host country. The evolution of the epidemic is seen to vary widely between these 8 countries. Factors affecting this include differences in location (e.g. hemisphere), population densities, social habits, health-care systems, and importantly the government interventions employed. 

It was found that, by using data assimilation to derive key parameters of the pandemic, we could fit a classic metapopulation model to explain the reported deaths and hospitalisations in each of the 8 countries. The model itself is a version of the Susceptible-Exposed-Infected-Recovered (SEIR) compartment model that has been adapted to Covid-19 by including age stratification and additional compartments for quarantine and care homes. This is analogous to the compartmental models often used in geophysics, such as those used for studying carbon dynamics. Using this approach, we were able to successfully represent the impact of the (very different) interventions taken in the 8 countries, visualising the rapid drop off in person-person transmission on the different dates of lockdown.
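For orientation, here is a stripped-down sketch of the basic SEIR equations, without the age stratification, quarantine or care-home compartments described above and without any data assimilation; the parameter values and the step change in R on day 60 are illustrative only.

```python
import numpy as np

def seir_step(s, e, i, r, beta, sigma=1 / 5.5, gamma=1 / 7.0, dt=1.0):
    """One Euler step of a basic SEIR model with transmission rate beta,
    incubation rate sigma and recovery rate gamma (illustrative values)."""
    n = s + e + i + r
    new_exposed = beta * s * i / n
    ds = -new_exposed
    de = new_exposed - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr

# Illustrative lockdown experiment: R drops from 3 to 0.8 on day 60.
s, e, i, r = 56_000_000 - 100, 0.0, 100.0, 0.0
for day in range(180):
    r_number = 3.0 if day < 60 else 0.8
    beta = r_number * (1 / 7.0)        # beta = R * gamma in this simple model
    s, e, i, r = seir_step(s, e, i, r, beta)
print(f"infected after 180 days: {i:,.0f}, ever infected: {e + i + r:,.0f}")
```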

Given the success of data assimilation in explaining the reported deaths, the next step is to provide predictions under different possible scenarios. For England we took three possible scenarios from the 1st June, when lockdown began to be eased. These were defined in terms of the now-familiar R number, which quantifies the average number of people an infected person will pass the virus on to. The three values chosen were 0.5 (reduction in the number of cases with time), 1 (steady number of cases with time) and 1.2 (increase in the number of cases with time).

As of 1st June, approximately 45,000 deaths were attributed to Covid-19 in England in all settings (source: ONS). Our projections under the three different scenarios predict that by the 1st September the total deaths will be 57,000±1,900 (R=0.5), 63,600±2,700 (R=1) and 76,400±4,900 (R=1.2). Given how widespread Covid-19 already is in England, these results highlight the potential of measures that reduce a large amount of person-person contact to save tens of thousands of lives. The uncertainty in the numbers reflects the uncertainty in the simple model and the uncertainty in the reported values. The collection of data on deaths, hospitalisations and the number of positive cases is marred by a myriad of political and social complications, problems we do not normally need to consider when measuring winds and rainfall.

Figure: Evolution of the Covid-19 epidemic in England. Top: total deaths. Middle: number in hospital. Bottom: the estimated R value (average number of person-to-person transmissions). The black dots show the reported values up to 5th June for deaths (source: ONS) and up to 12th June for the number in hospital (source: daily Government press conference). Blue lines indicate the initial estimates and red lines the values after assimilation, with the bold line indicating the most likely value. After 1st June, three predictions are made based on the three different R values.

References

Evensen, G., J. Amezcua, M. Bocquet, A. Carrassi, A. Farchi, A. Fowler, P. Houtekamer, C. K. R. T. Jones, R. de Moraes, M. Pulido, C. Sampson, and F. Vossepoel, 2020: An international assessment of the COVID-19 pandemic using ensemble data assimilation. Submitted to Foundations of Data Science; preprint on medRxiv, https://doi.org/10.1101/2020.06.11.20128777


Finding the skill of forecasts of extreme precipitation in Southeast Asia

By: Samantha Ferrett

Forecasting weather in Southeast Asia

Southeast (SE) Asia is prone to high-impact weather and is often subject to flooding and landslides as a result of heavy rainfall. Just last month, Indonesia was hit by floods and landslides after a rainy season that lasted longer than initially forecast. Global computer models used for Numerical Weather Prediction (NWP) have been known to fail to accurately capture Maritime Continent rainfall, limiting predictions of high-impact weather in the region. I am a Research Scientist on a Weather and Climate Science for Service Partnership (WCSSP) Southeast Asia project, “FORecasting for SouthEast Asia” (FORSEA), which aims to improve forecasts in SE Asia to reduce social and economic losses from high-impact weather events. In this blog, I will provide an overview of some of my recent work examining how well newly developed ensemble forecasts reproduce extreme precipitation in SE Asia.

What is an ensemble forecast?

A deterministic forecast is a single forecast from a computer model, using one initial condition and producing one estimate of future weather. The initial condition is an estimate of the observed weather at the start of the forecast. There are multiple reasons why a forecast can be incorrect. For example, the model may not be able to fully replicate the processes that drive weather in the real world, which is why forecasts are only ever estimates of future weather. Unfortunately, there is also some uncertainty in the observed weather itself, and this can result in large errors in the final forecast even with a ‘perfect’ forecast model. An ensemble forecast consists of multiple forecasts from the same model, each with slightly different initial conditions representing the uncertainty in the observations. This results in an ensemble of estimates of future weather that can then be used to gain an understanding of the uncertainty of the forecast.
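The idea can be illustrated with a toy chaotic system rather than a full weather model. The Python sketch below runs a small ensemble of the classic Lorenz (1963) equations, each member starting from a slightly perturbed initial state; the spread of the ensemble at the end of the run gives a measure of forecast uncertainty. This is purely illustrative and is not how the Met Office ensemble discussed below is constructed.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) equations, a classic toy model of chaotic 'weather'."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

rng = np.random.default_rng(1)
truth0 = np.array([1.0, 1.0, 1.0])

# A small ensemble of forecasts, each starting from a slightly perturbed
# estimate of the initial state (mimicking observational uncertainty)
n_members = 17
finals = []
for _ in range(n_members):
    perturbed0 = truth0 + rng.normal(scale=0.01, size=3)
    sol = solve_ivp(lorenz63, (0, 15), perturbed0)
    finals.append(sol.y[0, -1])          # x-value at the end of each forecast

print("Ensemble mean x:", np.mean(finals))
print("Ensemble spread (std):", np.std(finals))   # measure of forecast uncertainty
```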

Are convection-permitting ensemble forecasts worth it?

Figure 1: Schematic of rainfall in a coarser resolution forecast model (left) and rainfall in a high-resolution convection permitting model (right). Darker blues indicate more rainfall.

A forecast model divides the region to be forecast into a grid. Convection-Permitting (CP) forecasts are those that use NWP models with grid sizes small enough to better represent the processes associated with rainfall. A schematic showing the difference between a coarser grid and the high-resolution grid used in CP models is shown in Fig. 1. The downside is that CP models, and ensembles of them, are more computationally expensive. Modellers face a difficult task in striking a balance between cost and benefit; this is where those of us who analyse such models hope to be useful! It is important for the modelling community to know whether the resources invested in these forecasts are worth it.

In my work I examine how “skilful” forecasts of extreme rainfall are for CP ensembles in Malaysia, Indonesia and the Philippines. These are ensembles of 17 forecasts at a resolution of 4.5 km (like the schematic in Fig. 1 shows) and were run by the Met Office between October 2018 and March 2019. SE Asia has a strong daily cycle of precipitation, with rainfall over land during the day moving over the ocean at night. A question to answer is whether these regular daily variations of rainfall remove the need for CP forecasts – is rainfall so dominated by the daily cycle that there is no need for such high-resolution forecasts?

Figure 2: Fractions Skill Score (FSS) of 3-hourly accumulated precipitation at 8pm-11pm local time (Malaysia) exceeding the 95th percentile, aggregated over all forecasts in Oct 2018-Mar 2019, as a function of spatial scale (x-axis) for a) Malaysia, b) Indonesia and c) the Philippines. The horizontal line shows the FSS=0.5 “skilful” threshold. Solid lines show results from the ensemble forecast for 1, 3 and 5 days into the forecast (black, mid grey and light grey); dashed lines show results from a forecast based on observed weather from 1, 3 and 5 days before the day to be forecast (black, mid grey and light grey).

I compare the skill (using a metric called the Fractions Skill Score) of the ensemble forecasts, shown by the solid lines in Fig. 2, to a “persistence” forecast, shown by the dashed lines. The persistence forecast does not use a model; instead it uses the observed weather from the days prior to the day being forecast as the prediction. The forecast is considered skilful at the spatial scale shown on the x-axis if the metric exceeds the threshold marked by the horizontal black line. The ensemble forecast is much more skilful than the persistence forecast. Its larger skill at smaller spatial scales means that smaller-scale features can be forecast more accurately by the ensemble. Even the skill five days into the ensemble forecast (light grey solid line) is higher than that of the first day of the persistence forecast (black dashed line). This means there is value in using such a forecast in all three regions.
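For those interested in the metric itself, the sketch below computes a Fractions Skill Score along the lines of Roberts and Lean (2008), listed in the references: rainfall is converted to a binary exceedance field, neighbourhood fractions are computed at a given spatial scale, and the FSS compares the mean-square difference of those fractions to a reference value. The synthetic data, threshold and neighbourhood sizes are illustrative only and do not reproduce the configuration used for Fig. 2.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold, scale):
    """Fractions Skill Score (after Roberts and Lean, 2008) for one 2-D field.

    forecast, observed : 2-D rainfall arrays on the same grid
    threshold          : rainfall threshold (e.g. the 95th percentile)
    scale              : neighbourhood width in grid points
    """
    # Convert rainfall to binary exceedance fields
    fcst_exceed = (forecast >= threshold).astype(float)
    obs_exceed = (observed >= threshold).astype(float)

    # Fraction of exceeding points within a (scale x scale) neighbourhood
    fcst_frac = uniform_filter(fcst_exceed, size=scale)
    obs_frac = uniform_filter(obs_exceed, size=scale)

    # FSS = 1 - MSE(fractions) / reference MSE
    mse = np.mean((fcst_frac - obs_frac) ** 2)
    mse_ref = np.mean(fcst_frac ** 2) + np.mean(obs_frac ** 2)
    return 1.0 - mse / mse_ref

# Example: skill of a displaced 'forecast' against a synthetic 'observation'
rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.5, scale=4.0, size=(200, 200))
fcst = np.roll(obs, shift=5, axis=1)        # same field, displaced by 5 grid points
thresh = np.percentile(obs, 95)
for n in (1, 11, 41):
    print(n, round(fractions_skill_score(fcst, obs, thresh, n), 3))
```

As expected for a displaced but otherwise perfect field, the score improves as the neighbourhood (spatial scale) grows, which is the behaviour summarised along the x-axis of Fig. 2.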

It’s not over…

This is promising news for the use of CP models in the tropics, but questions still remain to be addressed in FORSEA:

  • How do common large scale features known to modulate SE Asia rainfall, such as the Madden Julian Oscillation or equatorial waves, influence forecast skill?
  • Shall we go smaller? This suite of forecasts also includes sub-kilometre scale forecasts. Is there benefit to using these?

References

Clark, P., N. Roberts, H. Lean, S. P. Ballard and C. Charlton‐Perez, 2016: Convection‐permitting models: a step‐change in rainfall forecasting. Met. Apps, 23, 165-181. https://doi.org/10.1002/met.1538

Ferrett, S., G.‐Y. Yang, S. Woolnough, et al., 2020: Linking extreme precipitation in Southeast Asia to equatorial waves. Q J R Meteorol Soc., 146, 665– 684. https://doi.org/10.1002/qj.3699

Love, B. S., A. J. Matthews, and G. M. S. Lister, 2011: The diurnal cycle of precipitation over the Maritime Continent in a high-resolution atmospheric model. Quarterly Journal of the Royal Meteorological Society, 137, 934–947, https://doi.org/10.1002/qj.809

Roberts, N.M. and H.W. Lean, 2008: Scale-Selective Verification of Rainfall Accumulations from High-Resolution Forecasts of Convective Events. Mon. Wea. Rev., 136, 78–97, https://doi.org/10.1175/2007MWR2123.1
