Spotlight on aviation CO2 emissions

By Emma Irvine

Climate change, resulting from emissions of CO2 amongst other factors, is a major topic of research here at Reading. This blog focusses on the emissions from one particular sector, aviation, and progress on tackling them.

I write this as the 2016 Farnborough Airshow is taking place, with major aircraft manufacturers such as Boeing and Airbus showcasing their latest technological innovations, and eco-efficiency is the buzzword. Just this week at Farnborough, General Electric tweeted that their technological developments to the engines of Boeing 737s make them 15% more fuel efficient. Not to be outdone by their US rivals, Airbus have been showing off their A350-XWB, which is claimed to be 25% more fuel efficient than its (unnamed) nearest competitor (although probably not when it’s doing this near-vertical take-off).

Back in 2009, when 2020 seemed a long way off and 2050 the distant future, the International Air Transport Association (IATA) set itself environmental targets which included achieving carbon-neutral growth by 2020 and halving CO2 emissions by 2050 (relative to 2005 levels). Other international aviation organisations have made similar pledges. So how is the industry doing? The European Environment Agency’s latest annual greenhouse gas report is not particularly encouraging. In 2014, emissions from international aviation rose by 1.4% while those from domestic aviation fell by 0.8%.

So while the technological developments from the manufacturers are encouraging, they aren’t enough by themselves. Other initiatives to reduce CO2 emissions include increasing the use of biofuels (disappointingly, British Airways’ project to turn London’s rubbish into biofuel for their planes was recently scrapped) and improvements to air traffic management. In Europe, the big air traffic initiative is Single European Sky Air Traffic Management Research (SESAR), which is aiming for a 2.8% reduction in environmental impact per flight, as well as a 40% reduction in accident risk and a 27% increase in capacity. Here at Reading University we are part of one of the new SESAR projects investigating the potential to reduce the overall environmental impact of European flights by optimising the routing of aircraft over Europe. Having proven the feasibility of climate-optimised routing over the relatively unconstrained airspace of the North Atlantic, we are applying this novel concept to some of the busiest airspace in the world. With over 28,000 flights a day occurring in or passing through European airspace, optimising the routes to minimise their environmental impact will be quite a challenge.

This brings me to the ultimate in climate-optimal flight: Solar Impulse. This innovative aircraft produces zero CO2 emissions (or emissions of any kind) as it flies, being powered purely by the solar energy it receives through the 17,000 solar cells in its wings. It is about to embark on the final leg of its round-the-world tour, from Cairo to Abu Dhabi (you can follow its progress here). Setting records along the way, it made the first trans-Atlantic crossing without using fuel, flying from New York to Seville in 70 hours (at the same time achieving the more dubious accolade of ‘selfie of the year’). Although solar power is unlikely to prove the answer to aviation’s CO2 problems – at least with current technology – Solar Impulse is an inspiring demonstration project harnessing the power of ‘green’ energy. #futureisclean


An artefactually introduced monthly cycle in the ensemble fields constrained by HadISST2

By Xiangbo Feng

ECMWF has recently developed two major ensemble products for 20th century climate, ERA-20CM and CERA-20C, within the ERA-CLIM and ERA-CLIM2 projects. ERA-20CM is now among ECMWF’s public datasets, and CERA-20C is scheduled for dissemination in the near future. There is no doubt that these two exciting climate datasets will become hugely popular in the research community and beyond. It is worth being aware of some uncertainties in their data before using them.

ERA-20CM is a 10-member ensemble of atmosphere model integrations, using the observationally based HadISST2 reanalysis to prescribe SST and sea ice, and CMIP5 radiative forcings to drive the model; no data assimilation is applied. CERA-20C is a 10-member ensemble of coupled ocean-atmosphere reanalyses, also using HadISST2 to constrain the model SST via a heat relaxation scheme, with data assimilation applied in the ocean and atmosphere individually. Both provide daily atmospheric and SST fields (and ocean fields in CERA-20C) at 3-hourly resolution. For more details on these two products, read Hersbach et al. (2015) and Laloyaux et al. (2016) respectively. One key point is that in both products the SST is strongly constrained by HadISST2, a 10-member ensemble of realisations produced by the UK Met Office within the ERA-CLIM project. Thanks to my involvement in the ERA-CLIM2 project, I have limited internal access to CERA-20C data.

Interestingly, we recently found an unexpected monthly cycle in the ensemble spread of the daily SST field in both ERA-20CM and CERA-20C. The shape of the cycle is similar in both products, but its phase is slightly lagged in CERA-20C (suspected to be due to the relaxation scheme applied in the data assimilation). For demonstration, the monthly cycle in January (2005-2007) seen in CERA-20C is shown in Figure 1. It can be characterised by the following features:

  • A monthly cycle consistently exists in the SST ensemble spread at all latitudes and in all months of all years from 1900 to 2010! In January 2005-2007, the global average of the amplitude is 0.015 °C (Figure 1, top left), which is about 15% of the mean spread.
  • The amplitude is larger in the summer hemisphere and smaller in the winter hemisphere. This follows the pattern of the mean SST spread (not shown).
  • In regions with strong western boundary currents, such as the Gulf Stream and the Kuroshio, where SST uncertainty is usually largest, the monthly cycle is no more significant than in other regions. This indicates that the signal is more likely produced at large scales.
  • The cycle generally has its lowest and highest values around the 5th and 20th of each month, but with noticeable seasonal variations (5-10 days) at mid-to-high latitudes (Figure 1, top right).
  • The signal also exists in the forecast fields (Figure 1, bottom). The amplitude gradually becomes smaller at longer forecast lead times. This means that the signal is presumably propagating into the atmosphere through ocean-atmosphere coupling.


Figure 1. Amplitude (top left) and phase lag (top right) of the monthly cycle statistically fitted to the time series of the ensemble spread (standard deviation) of the daily SST analysis in January 2005-2007, and time series of the global average (60°N-60°S) of the ensemble spread of daily SST at different forecast lead times (0-24 h) on each day of January 2005-2007 (bottom). Note that the monthly cycle shown in the maps is significant at the 95% confidence level. Data are from CERA-20C.

It turns out that this is imposed by the SST reconstruction method, as an artefact of the HadISST2 data processing, which is briefly reported in section 3.1.1 of Hersbach et al. (2015). HadISST2 is constructed as a 10-member ensemble of realisations on a monthly window, based on methods that treat large-scale variability and small-scale perturbations separately. Daily fields are then obtained by temporal interpolation of the monthly analysis fields from adjacent months, with weights chosen such that the average of all daily fields in one analysis window recovers the monthly analysis. However, because locally strong small-scale perturbations dominate the area-averaged ensemble spread on the monthly window, this interpolation leads to interference between the independent small-scale perturbations of adjacent months. As a result, the ensemble spread appears smaller at the start of each month and larger in the middle of the month. In other words, a monthly cycle that should not exist is artefactually introduced by the interpolation.
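To see how interpolating independently perturbed monthly fields manufactures such a cycle, here is a minimal numerical sketch (Python; the 10-member ensemble matches the products, but the perturbation magnitude and the simple linear mid-month-to-mid-month weighting are illustrative assumptions, not the actual HadISST2 scheme):

```python
import numpy as np

rng = np.random.default_rng(42)
n_members, n_months, days = 10, 12, 30

# One independent small-scale perturbation per member per month,
# standing in for the HadISST2 monthly ensemble (0.1 degC is arbitrary).
monthly_pert = rng.normal(0.0, 0.1, size=(n_members, n_months))

# Interpolate linearly between adjacent mid-month analyses to obtain
# daily fields, and record the ensemble spread on each day.
daily_spread = []
for m in range(n_months - 1):
    for d in range(days):
        w = d / days  # 0 at mid-month m, 0.5 at the month boundary
        daily = (1 - w) * monthly_pert[:, m] + w * monthly_pert[:, m + 1]
        daily_spread.append(daily.std())

# Averaged over the year, the spread dips where w ~ 0.5 (the start of a
# calendar month), because the independent perturbations partially
# cancel: the variance scales as (1-w)**2 + w**2.
cycle = np.array(daily_spread).reshape(n_months - 1, days).mean(axis=0)
print(cycle.round(3))
```

The spread is smallest where the two months share the interpolation weight equally, and largest mid-month, just as described above.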

So the next question is: to what extent could this monthly spread variability modulate the atmospheric response through air-sea interactions? It is expected that any changes in SST uncertainty will be reflected in the lower-atmospheric fields, depending of course on the time scales and regions considered. This is especially expected in the case of CERA-20C, which uses a fully coupled ocean-atmosphere model.

The answer is that, at large scales, we have unfortunately not found a clear indication of this artefactual signal in the atmosphere so far. This is not surprising: at daily time scales the atmosphere usually has higher-frequency variations and much larger ensemble uncertainty than the SST does, which makes it difficult to statistically distinguish a cycle of relatively small amplitude. For example, in CERA-20C the global mean of the ensemble spread of daily 2 m temperature (T2m) in January 2005-2007 is about 0.3 °C, three times that of SST. This is true even where the atmosphere is forcing the SST, such as in the western tropical Pacific with its strong deep convection.

However, in dry and calm regions of the ocean, where the sensible heat flux is thought to be more responsible for atmospheric heating, a monthly cycle in T2m was found, although the indication is much less significant than in SST. Figure 2 shows the regional average of the ensemble spread of SST at different forecast lead times (0-24 h) in the south-east Pacific, with the corresponding T2m ensemble spread (note that the ensemble spread in T2m increases with forecast lead time). Despite the strong high-frequency variations, T2m does show a hint of a monthly cycle which matches well the phase and amplitude observed in SST. All in all, we believe that this monthly variability does have an impact on the atmosphere, but better methods may be needed to extract it from a very noisy background.


Figure 2. Time series of the regional average (south-east Pacific, 10°S-40°S, 120°W-80°W) of the ensemble spread of SST (top) and of T2m (bottom) at different forecast lead times (0-24 h) on each day of January 2005-2007. Note that the time series of T2m spread has been detrended. Data are from CERA-20C.

Suggestions:

  • Be aware of this monthly cycle artefactually introduced into the ensembles of the daily SST field in both ERA-20CM and CERA-20C, and of its potential impact on the atmosphere. If you use the daily fields of these two datasets in your work, keep in mind that the ensemble uncertainty varies systematically with the date. This adds more uncertainty than originally intended.
  • This problem only exists at the daily scale, and is not expected to influence long-term assessments.
  • This problem can only be solved by improving the daily data processing scheme in HadISST2.

REFERENCES:

Hersbach, H., Peubey, C., Simmons, A., Berrisford, P., Poli, P. and Dee, D., 2015. ERA-20CM: a twentieth-century atmospheric model ensemble. Q.J.R. Meteorol. Soc., 141: 2350–2375. doi: 10.1002/qj.2528

Laloyaux, P., Balmaseda, M., Dee, D., Mogensen, K. and Janssen, P., 2016. A coupled data assimilation system for climate reanalysis. Q.J.R. Meteorol. Soc., 142: 65–78. doi: 10.1002/qj.2629


A month’s worth of rain …

By Ben Harvey

Phrases like ‘a month’s worth of rain fell in just one day’ are often seen in media reports of extreme precipitation. But what does this statistic actually mean? How rare is it to see a month’s worth of rain fall in a day? Are certain locations or seasons more susceptible than others to such events? This blog post takes a brief look at some UK raingauge observations to find out.

To achieve the status of a month’s worth of rain in a day, a daily accumulation (by convention, the 09-09 UTC total) must exceed the corresponding climatological mean monthly precipitation value. The blue bars in Figure 1 show the climatology of mean monthly precipitation at the Reading University Atmospheric Observatory for the period 1981-2010. During the last one hundred years (1916-2015), the monthly climatology has been exceeded by a daily accumulation on only ten occasions (as indicated by the solid line). The most recent events were 9 August 1999 and 18 August 2011. Interestingly, all ten events occurred during July-September, so were presumably associated with intense convective storms rather than large-scale frontal systems.

Figure 1. The climatology of mean monthly precipitation in Reading (blue bars) for 1981-2010 and the number of occurrences of each of the three thresholds discussed in the text (lines) during the 100-year period 1916-2015.

The other two lines show similar but less severe thresholds also seen in media headlines: the number of daily accumulations exceeding just half the monthly climatology (dashed) and the number of two-day accumulations exceeding the full monthly climatology (dot-dashed). As for the solid line, both are largest in summer. The numbers of occurrences in the 100-year period are 134 and 38 respectively: whilst a month’s worth of rain falls in a day typically only once a decade in Reading, half a month’s worth of rain falls in a day typically more than once a year.
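As an aside, these exceedance counts are easy to compute once a daily rain-gauge series and a monthly climatology are to hand. Here is a minimal sketch (Python; the inputs daily and monthly_clim are assumed variable names, not a published dataset interface):

```python
import numpy as np
import pandas as pd

def count_threshold_events(daily: pd.Series, monthly_clim):
    """Count exceedances of the three thresholds discussed in the text.

    daily        : 09-09 UTC daily rainfall totals (mm), datetime-indexed
    monthly_clim : 12 values of mean monthly precipitation (mm), Jan-Dec
    """
    # Climatological monthly total appropriate to each day's month
    clim = np.asarray(monthly_clim, dtype=float)[daily.index.month - 1]
    month_in_a_day = int((daily.values > clim).sum())
    half_month_in_a_day = int((daily.values > 0.5 * clim).sum())
    # Two-day accumulations exceeding the full monthly climatology
    two_day = daily.rolling(2).sum().values
    month_in_two_days = int((two_day > clim).sum())
    return month_in_a_day, half_month_in_a_day, month_in_two_days
```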

Do these numbers vary much across the UK? The total occurrences for each threshold from stations across the UK are shown in Figure 2 (each threshold is based on the local climatology). These data are from the MIDAS database and only cover the 30-year period 1981-2010. The 47 stations are those climate network stations which consistently reported daily rainfall amounts during the period. The number of occurrences of a month’s worth of rain falling in a day varies from 0 to 3 (except for one outlier station which recorded 8 such events) – Reading had 2 events in that time – and the number of occurrences of half a month’s worth of rain falling in a day varies from 0 to 53 – Reading had 27. Crucially then, how rare a given month’s worth of rain event is depends strongly on location.


Figure 2. The total number of occurrences of each of the three thresholds discussed in the text during the 30 year period 1981-2010. Data from 47 climate data stations are shown. The numbers in the top right corners are the number of days where the threshold is met at at least one station.

What factors influence the spatial variations? Scotland provides an interesting case study: there is a striking east-west difference in the occurrence of all three thresholds. A closer look at the data reveals that this difference is due predominantly to the monthly climatologies being much smaller in the east than the west, rather than any given daily event being larger there.

Finally, can we tell from these data how often a month’s worth of rain falls in a day somewhere in the UK? In other words, how often can we expect to see headlines like the first sentence of this post, even if only for a small area? Figure 2 also shows the number of days on which each threshold was exceeded at at least one station. On average, a month’s worth of rain in a day was received at at least one station 1.3 times a year, and on 16 days a year at least one station received half a month’s rain in a day. However, care is needed with these numbers: since many of the events are localised to small areas, it is likely that many events have been missed here. Using a higher density of observations would increase these numbers substantially.


Why does it always rain on me?

By Helen Dacre

Last Monday morning I got so wet on my cycle to work that I had to spend 10 minutes under the hand dryer in the toilets to stop myself looking like a drowned rat. Being the keen meteorologist that I am, however, my next steps took me to the coffee room to look at the synoptic charts to find out exactly why I’d got so wet. A fairly cursory glance at the chart for 00 UTC on Monday 20 June (Figure 1) showed me an occluding low-pressure system sitting to the north-west of the UK with a long trailing front extending over the entire length of the country (so I doubt I was the only person standing under the hand dryer that morning).

Figure 1. Synoptic chart for 00 UTC on Monday 20 June 2016.

You don’t need to have studied meteorology to know that fronts mean clouds, and clouds, more often than not, mean rain – particularly those associated with an active low-pressure system like the one passing through on Monday morning. For most of the morning we sat under low cloud in the warm sector (that’s the bit between the warm front and the cold front) and I was glad for once to be stuck at my desk with no need to trek through the wilderness to a meeting on the other side of campus.

Having experienced the passage of a low-pressure system over the UK many times over the last 30+ years (and taught Introduction to Weather Systems often enough), I knew things were about to change and sure enough around 12 o’clock the cloud began to lift, the rain stopped, the sunshine broke through and by the time I left work to cycle home (wearing my soggy shoes from the morning) there were glorious blue skies overhead with no trace of a cloud to be seen.

A quick look at the Reading Atmospheric Observatory measurements in the foyer as I left the Department confirmed the passage of the cold front at 12:00 (Figure 2), marked by an increase in pressure (known as a pressure kick), a wind shift from southerly to westerly (known as a wind veer) and a lifting of the cloud base (measured by our ground-based lidar), but not the expected decrease in temperature. Why not? Probably because the decrease in cloud cover allowed solar radiation to reach the surface and warm the air above. Other than that, a pretty classic frontal passage.

Figure 2. Reading Atmospheric Observatory measurements showing the passage of the cold front at 12:00 on 20 June 2016.

This all got me thinking on my cycle home about the demise of the synoptic chart. In an age where text-based postcode forecasts are growing in popularity, my phone can tell me, hour by hour, the chance of rain in my backyard. But it doesn’t tell me at a glance why it’s raining or why it’s going to stop, or whether, if it’s raining in Reading, it’s also raining in Liverpool. It’s like the Indian proverb of trying to describe an elephant blindfolded whilst only touching its leg, trunk or tail. It’s very difficult to explain the weather in my backyard without knowing what’s going on elsewhere.

So, whilst I continue to use my phone to find out whether to pack my waterproofs, please let’s keep the tried and tested synoptic chart so we can understand at a glance why the weather is doing what it’s doing. Forget the cloud appreciation society (sorry, Gavin Pretor-Pinney), how about a synoptic chart appreciation society – because a picture really is worth 1000 words (well, 571 words according to my word count).


Understanding Summer Flash Flooding

By Adrian Champion

‘Flash flooding’ is flooding that lasts between a few hours and a day and typically comes with very little warning. There are many causes of flash flooding, from the meteorological conditions that produce the rainfall to the ground conditions that turn it into flooding. Flash flooding is generally very localised, but it can be very costly and result in significant disruption.

Flash flooding is due to intense rainfall that lasts only a short period – from less than an hour to a few hours. The amount of rain recorded over the course of the day may be low in comparison with rainy winter days; the difference, however, is that this amount of rain falls in perhaps only a few hours (Figure 1). The difficulty in forecasting such rain events is that the meteorological conditions that lead to intense rainfall are very small in scale. The predominant cause of hourly extreme rain is a convective storm, or a feature with convective elements. These are only a few kilometres in size, smaller than the forecast resolution of any national weather centre’s forecast model. There may also be other processes, or other factors from the prevailing wind conditions to the orography, that act to enhance the convective system. It is the small size of the meteorological processes causing intense rainfall that makes such events so difficult to forecast.


Figure 1. Short-period depth-duration extremes of rainfall in the United Kingdom. Source: Met Office climate extremes

Once the rain reaches the ground there are also significant difficulties in predicting what will happen to all of the water. Outside of these hourly extreme rain episodes, we’re able to model how much of the water will be absorbed by the ground via infiltration and how much will run off the ground into rivers and drains (Figure 2). We’re also able to model the resulting changes in river flow from this over-land run-off and from water release from the ground. The natural (and man-made) systems are also able to respond to ‘normal’ rainfall intensities. During extreme rainfall it is a lot harder to model what will happen to the water. The ground is not able to absorb the water as quickly as it is falling, and other factors, such as how wet the ground already is, play a significant role. Therefore, it can be expected that the majority of the water will flow over the surface (a toy sketch of this split follows Figure 2).


Figure 2. An example of a hydrology model: Newcastle University’s SHETRAN model.
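To make the infiltration/run-off split concrete, here is a toy bucket model in Python (all parameter values are invented for illustration; real hydrology models such as SHETRAN are vastly more sophisticated):

```python
def runoff_split(rain_mm, soil_moisture_mm,
                 infiltration_capacity_mm=10.0, soil_capacity_mm=150.0):
    """Split one hour of rainfall (mm) into infiltration and run-off."""
    # The ground absorbs water no faster than its infiltration capacity
    # and no further than its remaining storage; the rest runs off.
    space_left = max(soil_capacity_mm - soil_moisture_mm, 0.0)
    infiltrated = min(rain_mm, infiltration_capacity_mm, space_left)
    runoff = rain_mm - infiltrated
    return infiltrated, runoff, soil_moisture_mm + infiltrated

# 30 mm falling in a single hour on already-moist ground:
# two thirds of it runs off.
print(runoff_split(30.0, soil_moisture_mm=120.0))  # (10.0, 20.0, 130.0)
# The same 30 mm spread over six hours would all soak in.
```

The same total rainfall therefore produces very different run-off depending on its intensity and on how wet the ground already is, which is exactly the difficulty described above.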

This surface run-off is difficult to model and is highly dependent on the type of land use. In towns and cities, tarmac and concrete surfaces result in fast run-off and a rapid accumulation of water in low-lying areas, e.g. under a railway bridge where the road dips (Figure 3). It may take only tens of minutes for the water to collect and exceed the drainage capacity. In rural areas there are natural barriers, e.g. trees and hedgerows; however, intense rainfall can still result in rapid rises in local river levels, causing localised flooding, typically of natural floodplains, as the river is unable to carry away the excess water quickly enough.


Figure 3. A recent example of flooding underneath a railway bridge in an urban area, which would have accumulated quickly and taken drivers by surprise – south London, 7 June 2016. Source: BBC News website, photograph credited to the London Fire Brigade.

Because the rainfall events last only a few hours, the flooding also lasts only a few hours, as drainage systems, either natural (rivers) or man-made (drains), recover and move water further downstream. However, the speed at which the flooding occurs can often have large consequences due to the lack of warning and the speed and volume of water. We usually only see such flooding in summer, because the convective processes that dominate hourly extreme rain are driven by the stronger incoming solar radiation (it’s summer, it’s warmer). Such convective processes cause ‘summery showers’ that last only a few hours, or sometimes minutes.


Standing up for Science

By Joanne Thomas, Project & Events Coordinator, Sense about Science

Voice of Young Science (VoYS) is a dynamic network of more than 2000 early career researchers and scientists across science, engineering and medicine. VoYS members are committed to playing an active role in public discussions about science; they challenge pseudoscientific claims, tackle popular misconceptions around controversial issues and respond to misinformation in all kinds of media. These early career researchers don’t wait until later in their careers to stand up for science.


VoYS members meet at one of four Standing up for Science media workshops organised each year by the charity Sense about Science. These workshops encourage the early career researchers to voice their opinions in public debates about science. During the full-day events, participants discuss science-related controversies in media reporting, and have the chance to hear directly from respected science journalists about how the media works, how to respond and comment, and what journalists want and expect from scientists.

Previous attendees have said:

  • “Incredibly useful workshop. I definitely feel more prepared to engage with the media about my research!”
  • “Great speakers, lots of useful stuff, well-focused on what we can do”
  • “Found the panellists’ comments very helpful and thought provoking”
  • “An enjoyable & relevant discussion”

Inspired and engaged by the peers they meet during the events, VoYS members are empowered to do more to stand up for science and have launched many successful mythbusting and evidence-hunting campaigns. They’ve published a detox dossier debunking common marketing claims associated with ‘detox’ products, written an open letter to the World Health Organisation, prompting several disease department directors to clarify that they do not condone the use of homeopathy to treat serious diseases, and most recently launched a weather quiz to address the misuse of weather terms. This latest project was initiated by meteorologists at the University of Reading and launched in January 2016. Frustrated by sensationalised stories and misleading use of meteorological terms, and concerned that this could undermine public trust in meteorology, they launched this quiz to challenge everyone to test their weather know-how and arm themselves with the facts to decipher the truth behind weather stories.


The next media workshop is sponsored by the Department of Meteorology at the University of Reading and will take place in London on Friday 16 September – priority places are available for early career researchers at the University of Reading (PhD students, post-docs or first-job equivalents).


The interaction between aerosols and clouds

By Nicolas Bellouin

As part of the Copernicus Atmosphere Monitoring Service (CAMS), I lead an activity that will provide, in August, new estimates of the radiative forcing of climate due to changes in atmospheric composition.

One of the radiative forcing mechanisms that we are working to quantify is the interaction between aerosols and clouds. Aerosols are the small liquid and solid particles in suspension in the atmosphere. Human activities emit aerosols into the atmosphere, adding to natural levels and causing the formation of liquid clouds with droplets that are more numerous and smaller than in unpolluted clouds. A cloud made of more numerous droplets is brighter, reflecting more radiation back to space. A cloud made of smaller droplets may evaporate more easily, becoming thinner or even disappearing completely. Alternatively, smaller droplets may take longer to form rain, causing the cloud to linger in the atmosphere and reflect sunlight for longer. The physics of aerosol-cloud interactions are complex and have been the subject of many scientific studies, summarised in the latest assessment report of the Intergovernmental Panel on Climate Change.
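As a back-of-envelope illustration of the brightening effect, the sketch below combines two textbook ingredients: the two-stream albedo approximation for a non-absorbing cloud layer and the Twomey scaling of optical depth with droplet number at fixed liquid water. All numbers are illustrative, not CAMS values:

```python
def cloud_albedo(tau, g=0.85):
    # Two-stream approximation for a non-absorbing cloud layer with
    # scattering asymmetry parameter g (~0.85 for water droplets).
    return (1 - g) * tau / (2 + (1 - g) * tau)

# At fixed liquid water, optical depth scales as the cube root of the
# droplet number: tau ~ N**(1/3) (the Twomey effect).
tau_clean = 10.0
tau_polluted = tau_clean * 2.0 ** (1.0 / 3.0)  # doubled droplet number

print(f"clean cloud albedo:    {cloud_albedo(tau_clean):.3f}")     # ~0.43
print(f"polluted cloud albedo: {cloud_albedo(tau_polluted):.3f}")  # ~0.49
```

Doubling the droplet number brightens this idealised cloud by several percentage points of albedo, which is why the worldwide aerosol perturbation could plausibly exert a strong forcing.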

Radiative forcing is a measure of the imbalance in the Earth’s energy budget caused by perturbations external to the natural climate system, such as the emission of aerosols into the atmosphere by human activities. Our preliminary CAMS estimate of radiative forcing due to aerosol-cloud interactions, based on satellite observations of aerosol amounts and cloud reflectivity, is –0.6 W m−2. The negative sign indicates a loss of energy for the climate system. Climate models estimate a stronger forcing for the same mechanism, typically beyond –1 W m−2. What causes that discrepancy? Over the past few months, I have discussed this with experts in aerosol-cloud interactions, and there are reasons to expect that aerosol-cloud interactions are weaker than simulated by climate models – and perhaps even weaker than the preliminary CAMS estimate.

The modification of cloud properties by external perturbations is observed routinely. Ship tracks are emblematic examples: the aerosols emitted by ship engines provide additional sites for water vapour to condense into cloud droplets, forming linear clouds along the ship’s route. If a single ship can create new clouds, surely the masses of aerosols emitted worldwide by transport and power generation must exert a strong radiative forcing. But crucially, ship tracks do not happen all the time, otherwise the busy shipping lanes linking Europe, Asia, and North America would leave a noticeable and persistent trail of clouds on satellite pictures (Figure 1). This is not the case.

Figure 1. Ship tracks off the Atlantic coasts of France and Spain, as observed by NASA’s MODIS satellite instrument in January 2003.

Another event casts doubt on the possibility of a strong radiative forcing from aerosol-cloud interactions. In late 2014/early 2015, the Holuhraun volcano erupted in Iceland. This eruption injected masses of aerosols into the atmosphere – so many aerosols, in fact, that at one point the volcano emitted as much in a day as the entire European Union combined. Such a large and precisely located perturbation was the perfect laboratory for studying aerosol-cloud interactions. And indeed, satellite instruments reported that clouds in the North Atlantic were composed of smaller droplets than normal, as expected from the physics of aerosol-cloud interactions. But were North Atlantic clouds brighter than normal during that period? Observations are inconclusive. It may be that aerosol-cloud interactions are lost in the noise of natural variability in cloud properties, but for such a large perturbation, the impacts are surprisingly hard to isolate.

In the end, aerosol-cloud scientists reckon that it will come down to counting how often clouds happen to show strong sensitivity to aerosol perturbations. Those discussions leave me with the feeling that such situations occur infrequently, and radiative forcing of aerosol-cloud interactions may need to be revised down to weaker values.

I thank Graham Feingold, Johannes Quaas, Annica Ekman, Leo Donner, and Ilan Koren for interesting discussions on current understanding of aerosol-cloud interactions. Note that they do not all agree that aerosol-cloud radiative forcing is weak: some argue that a value of up to −1.2 W m−2 remains consistent with scientific understanding.


Predictions and errors

By Javier Amezcua

Predicting is one of the most ambitious goals of science. It goes beyond describing and explaining, and it attempts to “tell the future”. The prediction process has the following basic steps:

  1. We have an estimate of the present conditions of a system, for instance, the atmosphere.
  2. We have a model – i.e. a set of mathematical rules derived from physical principles – which we evolve forward (or integrate) in time.
  3. We get an estimate of the future state of our system at any given time.

When computing a prediction, it is very important to provide a measure of the quality of this prediction. Intuition tells us that we are more certain, for example, in predicting the temperature in our neighborhood for tomorrow, than in predicting the temperature in the same place a year from now. Where does this certainty/uncertainty come from? Let us explore this next.

For the sake of this discussion, consider that the model mentioned in step 2 is perfect. That is, let us assert that we have completely captured in our equations all the processes we are interested in, and that we can solve these equations perfectly with a computer code (this is not true in reality, but we will leave that for another blog entry). In this case the quality of a prediction is determined by the error of our estimate mentioned in step 1 – i.e. the error in our initial conditions – and by the growth of that error in time.

As it turns out, errors grow differently in different dynamical systems. In some systems, making a tiny mistake is irrelevant for a future prediction, while in others a tiny initial error can ruin a forecast after a certain lead time. Let’s take a quick look at different families of dynamical systems with the help of Figure 1. The figure has four panels; in each panel the x-axis corresponds to time, while the y-axis corresponds to the value of a physical variable (it can be wind speed, temperature, etc.). Let us run a trajectory started from a given initial condition; we label this the reference trajectory (shown in black in the figure). Also, let us evolve trajectories initialised from ‘nearby’ initial conditions – i.e. initial conditions with errors; we label these the perturbed trajectories. In the figure, red lines indicate initial perturbed values larger than the initial true value, while blue lines indicate initial perturbed values smaller than the initial true value. The error growth is different in each case:

  a) In this example, the perturbed trajectories tend towards the reference trajectory. This is a typical dissipative system. Regardless of the initial conditions, the system evolves towards a fixed point, and any initial error disappears. Think of a pendulum with friction: it does not matter from what height you release it, it will use its gravitational potential energy to swing for a while, but it will eventually stop.
  b) In this example, the errors of the perturbed trajectories grow as time increases, and they do not stop growing; the perturbed trajectories tend towards plus and minus infinity. Such a system – in which errors grow without limit – is not feasible in reality, since it would require infinite energy. However, if we want to make predictions within a finite time frame, the accuracy of the initial conditions is crucial, and we will see the quality of the forecast decrease with time.
  c) In this example, the initial error of the perturbed trajectories is preserved as time evolves; it neither grows nor decreases with time. This is typical of periodic systems, such as those found in celestial mechanics, or of physical processes related to them, like the tides. If we are wrong about the position of the moon tonight and make a forecast for the following days, the error will stay constant as time progresses. There is another type of system, called quasi-periodic, with similar characteristics, but I will not discuss it further.
  d) The last kind of system is perhaps the most interesting to us: chaotic systems. The atmosphere is a typical forced-dissipative system that presents chaotic behaviour. In this case, errors initially grow slowly, then the growth becomes faster, and eventually the perturbed trajectories do not resemble the reference trajectory at all; in fact, they do not even resemble each other. The accuracy of the initial conditions is crucial for a good forecast, and the quality of a forecast decreases with time. In fact, even the tiniest initial errors will ruin a forecast after a given lead time. What is different with respect to panel (b)? Errors do not grow forever and without limit; instead, they saturate. After a long time, the trajectories – both the reference and the perturbed ones – evolve and live within a permissible range of values (without going to plus or minus infinity). This set of values is known as the attractor (or climatology). A minimal numerical sketch of this saturation is given below, after Figure 1.

Figure 1. Error growth for different families of dynamical systems.
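For readers who want to experiment, here is a minimal sketch of case (d), using the classic Lorenz (1963) chaotic equations with a simple Euler integration (the step size and perturbation size are arbitrary choices):

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s, dt=0.001, n_steps=25000):
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        s = s + dt * lorenz_rhs(s)  # forward Euler: crude but adequate here
        traj[i] = s
    return traj

ref = integrate(np.array([1.0, 1.0, 1.0]))          # reference trajectory
pert = integrate(np.array([1.0 + 1e-6, 1.0, 1.0]))  # tiny initial error

err = np.linalg.norm(ref - pert, axis=1)
# The error grows roughly exponentially at first, then saturates at the
# size of the attractor: the trajectories become completely unrelated.
for step in [0, 5000, 10000, 15000, 24999]:
    print(f"t = {step * 0.001:5.1f}   error = {err[step]:.2e}")
```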

Let us discuss chaotic systems a little further using our example in panel (d). A forecast for time t=0.5 is more reliable than one for time t=1, and after approximately t=1.5 we have lost our capacity to predict. Something similar happens in the atmosphere. For large-scale features, this limit of predictability is about two weeks. Operational centres release forecasts for up to 5 or 7 days in advance, and they equip these forecasts with some probabilistic measure (representing, in simple terms, how different trajectories initiated from similar initial conditions become). Unfortunately, some commercial forecast providers give no information on the accuracy of their forecasts at all. Furthermore, they are known for (irresponsibly) releasing ‘valid’ deterministic forecasts for up to 45 days in advance (do not confuse these with the proper seasonal outlooks generated by meteorological agencies). As expected, these forecasts change considerably when updated every day, and they keep changing until the lead time falls within the predictability window. Such 45-day ‘forecasts’ are not predictions; they can be considered quasi-random draws from the climatology of different regions. In the end these forecasts have no value, and they end up stating the obvious: July will be relatively warm and December will be relatively cold.


A Random Blog

By Peter Clark

As a young scientist I was introduced to turbulent flow in the traditional way – we consider an ‘infinite ensemble of realisations’ of a random flow, and split each realisation into the average over the ensemble and the ‘random’ fluctuations. I remember being unsatisfied by this approach. Classical physics is not random! What actually is this ‘ensemble’? Why treat the fluctuations as just random noise when any curious eye can see there is a rich structure to the flow?

Many of these questions have (at least partially) been answered by the revolution in mathematics and thinking that is chaos theory (and siblings such as ergodic theory). Perhaps the most remarkable result is that some systems in which the future state is perfectly predictable in terms of the current state (‘deterministic’), evolve to become indistinguishable from a random system. The system ‘forgets’ its initial state, in the sense that to track backwards to find it out requires increasingly accurate knowledge of the current state the further one goes back, to a degree which soon becomes beyond any kind of practicality. This is the converse of the problem of forecasting.

At the same time, the computer revolution has enabled us to simulate the evolution of at least a finite sample of an ‘ensemble’ explicitly – in weather forecasting, sampling the ‘ensemble of initial states’ was pioneered with considerable success (and rigour) by ECMWF and is now a standard methodology.

Ensemble techniques are now a widespread practice in expressing (often poorly defined) ‘uncertainty’.  This powerful approach has become so universal we often forget to ask the question ‘what ensemble?’ The mere use of an ensemble technique is sometimes taken to give credibility to a piece of work. Too often, arbitrary random perturbations, or worse, an arbitrary mixture of model configurations are used to express ‘uncertainty’, even though it is difficult to know exactly what the results actually mean. While all science is uncertain, perhaps unsurprisingly, some users reject ‘uncertain’ advice with the cry ‘I need to be sure!’

We can, however, return to real physical ensembles arising from the turbulent processes in the atmosphere as an example where uncertainty really matters. When we build weather and climate models, we have to approximate (‘parametrize’) small-scale aspects of the flow (which, depending on the model and application, may mean anything smaller than a few km up to several hundred km). We simply don’t know how to do this, and there is no reason to suppose it is even possible. However, we do know that, with some restrictions, we can accurately predict an ‘ensemble mean’ behaviour of the small-scale flow. So we use that instead.

The trouble is, we don’t live in an ‘ensemble mean’ world – we live in ‘one realisation’. However, by returning to the quite rigorously defined ensemble, we can also make predictions about the variability of realisations. Figure 1 illustrates this with a very simple model of a real turbulent system. In practical weather forecast models we have shown that using physically realistic random variability can significantly improve the performance of a model (even if the ensemble system we use remains a simplification of the real world) – for example, thunderstorms may form at a more realistic time and evolve more realistically. The downside is that so-called ‘deterministic’ forecasts are an impossibility. Behaving like the real world means behaving, to a certain extent, randomly. Physical realism and not being sure go hand in hand.


Figure 1. Results using an ensemble of 10,000 realisations of the Lorenz (1963) simple model of Rayleigh-Bénard convection. (a) Two realisations of the rate of heating at z = 0.75 of the height of the system; the ensemble mean must be zero. (b) The position of each realisation in phase space; the ensemble is randomly distributed over the ‘Lorenz attractor’. (c) The standard deviation of the time-averaged heating rate as a function of averaging time; the red line varies as 1/(averaging time).
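The gap between the ensemble mean and a single realisation can also be felt with something far simpler than the Lorenz model: a toy ensemble of AR(1) ‘flux’ time series (an invented stand-in, not any operational parametrization):

```python
import numpy as np

rng = np.random.default_rng(1)
n_members, n_steps, phi = 10000, 500, 0.95

# Each member is an AR(1) process: persistent, noisy 'turbulence'.
flux = np.zeros((n_members, n_steps))
for t in range(1, n_steps):
    flux[:, t] = phi * flux[:, t - 1] + rng.normal(0.0, 1.0, n_members)

# The ensemble mean is ~0 by construction, but any single realisation -
# the world we actually live in - wanders several units away from it.
print(f"ensemble mean, final step:   {flux[:, -1].mean():+.2f}")
print(f"one realisation, final step: {flux[0, -1]:+.2f}")
print(f"ensemble spread, final step: {flux[:, -1].std():.2f}")  # ~3.2
```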

Reference

Lorenz, E.N., 1963. Deterministic Nonperiodic Flow. Journal of the Atmospheric Sciences, 20 (2): 130–141. doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2


Characterising extreme event occurrence

By Reinhard Schiemann

When presented with a new data sample, the first thing many of us scientists do is to characterise it in terms of two numbers: the average or mean value of the sample, and the spread or variance of the sample values around the mean. This has become second nature and we rarely stop to think twice about it. Yet it is indeed quite remarkable that data as different as Reading summer temperatures, the chest circumference of Scottish soldiers, or the sum of points obtained by rolling several identical dice can all be characterised by just these two numbers. Essentially, this is a consequence of the Central Limit Theorem in statistics, which states that in the examples above and many other situations, where the data arise as an average of more elementary data (for example tossing individual dice, averaging temperature throughout a season), the samples will tend to follow a Gaussian or normal distribution. The bell-shaped curve of this distribution is ubiquitous in all areas of quantitative science and may be the only mathematical function that has made it onto a bank note (Figure 1). The curve is described by two numbers, the mean determining the location of the bell, and the variance determining the width of the bell.


Figure 1. Carl-Friedrich Gauß (1777-1855) and the distribution named after him on the former 10 Deutsche Mark note (source: Wikipedia).
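The dice example is easy to verify numerically (a toy experiment, unrelated to the precipitation data discussed below):

```python
import numpy as np

rng = np.random.default_rng(0)
# Sum of 10 dice, repeated 100,000 times: the histogram of `totals`
# is already very close to a Gaussian bell.
totals = rng.integers(1, 7, size=(100_000, 10)).sum(axis=1)
print(f"mean = {totals.mean():.2f}  (theory: 35.00)")
print(f"std  = {totals.std():.2f}   (theory: {np.sqrt(10 * 35 / 12):.2f})")
```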

In meteorology we are often interested in extreme events such as strong windstorms, rain and flooding, heatwaves or drought. When we want to describe extreme behaviour, we have to change the way we collect data samples and characterise them. One option is to collect samples that comprise all strongest events in a block of data: the example I am presenting here is maximum daily winter precipitation (rain and snow) that falls over a river basin in each year. Unfortunately, such data samples can no longer be described by the tried and tested Gaussian distribution and its mean and variance. But mathematical statistics comes to the rescue in this situation too: there is an analogue of the Central Limit Theorem, called Extremal Types Theorem, telling us that we can replace the familiar Gaussian bell with a different function called the Generalized Extreme Value (GEV) distribution. We now need three numbers (or parameters) to characterise the GEV. They are called location μ, scale σ, and shape ξ, and their meaning is best illustrated graphically by so-called Gumbel diagrams shown in Figure 2. The vertical axis of these diagrams shows return values indicating the strength of an event (here daily river basin precipitation) and the horizontal axis shows return times, which tell us about the frequency of an event. The bold lines in the diagrams show different GEV distributions and they tell us how to relate a return time to an expected return value. For example, the brown curve in the top panel of Figure 2 shows that the expected return value for a return time of 20 years is 21 mm. We have to wait 20 years on average for a precipitation event of this amount to occur. The location parameter μ determines the vertical position of the GEV curve in the diagram – increasing it to μ=15 mm yields the green curve and the 20-year return value increases to 27 mm. The scale parameter σ determines the slope of the GEV curve in the Gumbel diagram as illustrated in the middle panel of Figure 2. The greater the σ, the more maximum precipitation will vary from year to year, and the more return values will increase with an increase in return time. Finally, the shape parameter ξ describes the curvature of the GEV curve (Figure 2, bottom panel).


Figure 2. Illustrative Gumbel diagrams showing GEV distributions with different values for the location parameter (top), for the scale parameter (middle), and for the shape parameter (bottom).
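In practice, the GEV parameters are estimated by fitting the distribution to a sample of block maxima. Here is a minimal sketch using scipy with synthetic data (note that scipy parametrises the shape as c = −ξ relative to the μ, σ, ξ convention used in the text):

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for 100 years of annual maximum daily winter
# precipitation (mm); real data would come from a rain-gauge record.
rng = np.random.default_rng(0)
annual_max = rng.gumbel(loc=12.0, scale=3.0, size=100)

c, mu, sigma = genextreme.fit(annual_max)  # scipy shape: c = -xi
xi = -c

# The 20-year return value is the quantile exceeded with an annual
# probability of 1/20.
rv20 = genextreme.ppf(1 - 1 / 20, c, loc=mu, scale=sigma)
print(f"mu = {mu:.1f} mm, sigma = {sigma:.1f} mm, xi = {xi:.2f}")
print(f"20-year return value: {rv20:.1f} mm")
```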

What is all this good for? One application is model evaluation, the process where we assess how realistically numerical models simulate the observed weather and climate. Here, I am interested in how well two versions of a climate model, a low-resolution version (named N96 in Figure 3) and a high-resolution version (N512, also in Figure 3), simulate the extremes of daily winter precipitation over European river basins. To obtain a summary assessment of this performance, I estimate the three GEV parameters for each of the models (N96, N512) and for a reference dataset (E-OBS) based on observed precipitation data from rain gauges. The results are shown in Figure 3. The top row shows the location, scale and shape values for the observations, and the middle and bottom rows show differences between the two models and the observations. We see that both models tend to produce precipitation extremes that are too high over large parts of Europe, especially over the northern European plains from the Loire river basin in the west to the Vistula basin in the east (greenish colours for the model-observation differences in the location and scale parameters). We also see that this problem is alleviated in the high-resolution (N512) model, where these differences are smaller than in the coarse (N96) model.

The statistical summary assessment shown here is only the first step in model evaluation and many questions remain. How do our two models represent rain-producing Atlantic storms, and how do these storms interact with the European landmass and, in particular, major mountain chains, such as the Alps? Trying to answer such questions is called process-based model evaluation and is an important part of the meteorological research here at Reading. But we will have to leave that for another blog.


Figure 3. Estimated GEV parameters for daily winter precipitation over European river basins. Top: precipitation observations (E-OBS), middle: difference between coarse model simulation (N96) and E-OBS, bottom: difference between high-resolution model simulation (N512) and E-OBS. Left: location parameter μ, centre: scale parameter σ, right: shape parameter ξ. Stippling shows statistically significant differences between N96 and E-OBS (middle row) and between N512 and N96 (bottom row).
