Time scales of atmospheric circulation response to CO2 forcing

By Paulo Ceppi

An important question in current climate change research is how the atmospheric circulation will change as the climate warms. When simulating future climate scenarios, models commonly predict a shift of the midlatitude circulation to higher latitudes in both hemispheres – generally referred to as a “poleward circulation shift”. As an example, under a “business as usual” future emissions scenario the North Atlantic jet stream is predicted to shift northward during the summer months (Figure 1). If this prediction is correct, it would likely affect the average amount of precipitation, wind, and sunshine experienced in the UK and more generally across Western Europe.


Figure 1: Change in eastward wind speed at 850 hPa in the RCP8.5 (“business as usual”) experiment during the 21st century for June-August. The response is calculated as the mean of 2070-2100 minus 1960-1990. Grey contours indicate the wind climatology, while colour shading denotes the change (in m/s). Results are averages over 35 coupled climate models.

Since such circulation shifts are caused by global warming, it is natural to assume that the more the planet warms, the larger the circulation shift will be. But is that assumption generally true? More specifically: as the planet warms in response to CO2 forcing, do circulation shifts scale with the change in global-mean temperature? Here we are interested in the time evolution of the transient response to CO2 forcing, i.e. the period during which the climate adjusts to the change in CO2. This evolution is best represented in climate model experiments in which CO2 concentrations are increased abruptly and then held constant; since the forcing happens all at once, the various time scales of climate response are cleanly separated. Below I will present results from the so-called “abrupt4xCO2” experiment, in which a set of climate models were subjected to a sudden quadrupling of CO2 concentrations and then run for 150 years.

It turns out that as climate changes following a sudden quadrupling of CO2, circulation shifts do not generally scale with global warming. Instead, two distinct phases of circulation change occur: during the first 5 years or so, the planet warms quickly and the jet streams shift poleward; but thereafter the jets tend to stay at a constant latitude, despite the fact that the planet continues to warm substantially. This is summarised in Figure 2 below, where the curves indicate changes in the latitude of the jet stream (averaged over a set of 28 climate models). In the North Pacific region, the change in jet stream latitude even changes sign over the course of the experiment (Figure 2b).


Figure 2: Change in annual-mean jet stream latitude (measured as the latitude of peak eastward wind at 850 hPa) in climate models during the first 140 years following a quadrupling of atmospheric CO2, as a function of global-mean surface temperature. The curves indicate means across 28 climate models. Shading denotes the 75% range of responses. The jet shifts are in degrees latitude and positive anomalies are defined as poleward. Circles denote individual years until year 10; diamonds denote decadal means between year 11 and year 140. The black crosses indicate the means of years 5-10 and 121-140, respectively.
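As an aside for readers who want to reproduce this diagnostic, here is a minimal sketch (my own illustration, not the study’s code) of how jet latitude can be estimated from a profile of zonal-mean eastward wind at 850 hPa, with a quadratic refinement around the gridded maximum so the estimate is not limited to grid resolution:

```python
import numpy as np

def jet_latitude(lat, u850):
    """Estimate jet latitude as the latitude of peak eastward wind at 850 hPa,
    refined with a quadratic fit around the gridded maximum (illustrative sketch)."""
    i = np.argmax(u850)
    if i == 0 or i == len(lat) - 1:
        return lat[i]  # maximum at the domain edge: no refinement possible
    # Fit a parabola through the three points around the maximum
    p = np.polyfit(lat[i - 1:i + 2], u850[i - 1:i + 2], 2)
    return -p[1] / (2 * p[0])  # latitude of the parabola's vertex

# Idealised example: a jet centred near 45.3N on a 1-degree grid
lat = np.linspace(20, 70, 51)
u = 12 * np.exp(-((lat - 45.3) / 8) ** 2)
print(jet_latitude(lat, u))  # ~45.3
```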

How can we explain this peculiar time evolution of circulation shifts? The evolution of changes in atmospheric temperature and circulation is mainly controlled by how the ocean surface warms in response to greenhouse gas forcing. Like the atmosphere, the ocean has its own circulation and in some regions deep, cold water rises to the surface – a process known as “upwelling”. Due to this and other processes, the ocean surface does not warm at the same rate everywhere; in particular, upwelling regions like the Southern Ocean experience delayed warming.

We find that the time scales of ocean surface warming determine the time scales of change in atmospheric circulation, via the changes in atmospheric temperature. In particular, the patterns of ocean surface warming before and after year 5 of the experiment are strikingly different; when imposed in an atmospheric climate model, we obtain circulation changes consistent with the results shown in Figure 2, confirming the role of the ocean surface in controlling the atmospheric response.

While the scenario of abrupt CO2 increase described in Figure 2 is unlikely to happen in the real world, further analysis shows that the two time scales of circulation shift are also present in more realistic scenarios of gradual greenhouse gas increase. This indicates that care must be taken when extrapolating transient circulation shifts to estimate changes in future warmer climates.

References

Ceppi et al., 2017. Fast and slow components of the extratropical atmospheric circulation response to CO2 forcing. Submitted to Journal of Climate.

Zappa et al., 2015. Improving climate change detection through optimal seasonal averaging: the case of the North Atlantic jet and European precipitation. Journal of Climate, doi: 10.1175/JCLI-D-14-00823.1


What’s in a number?

By Nancy Nichols

Should you care about the numerical accuracy of your computer? After all, most machines now retain about 16 digits of accuracy, yet only about 3-4 significant figures are needed for most applications; so what’s the worry? To see why it can matter, consider that there have been a number of spectacular disasters due to numerical rounding error. One of the best known is the failure of a Patriot missile to track and intercept an Iraqi Scud missile in Dhahran, Saudi Arabia, on 25 February 1991, resulting in the deaths of 28 American soldiers.

The failure was ultimately attributable to poor handling of rounding errors. The computer doing the tracking calculations had an internal clock whose values were truncated when converted to floating-point arithmetic, with a relative error of about 2⁻²⁰. The clock had run up a time of 100 hours, so the calculated elapsed time was too long by 2⁻²⁰ x 100 hours = 0.3433 seconds, during which time a Scud would be expected to travel more than half a kilometre.
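The arithmetic is easy to reproduce. Below is a minimal sketch in Python, assuming (following the account linked below) a 24-bit fixed-point register holding 0.1 with effectively 23 fraction bits:

```python
from fractions import Fraction

# The clock counted tenths of a second; elapsed time was obtained by
# multiplying the count by 0.1 stored in fixed point. 0.1 has no finite
# binary expansion, so chopping it introduces a small per-tick error.
frac_bits = 23
stored = Fraction(int(Fraction(1, 10) * 2**frac_bits), 2**frac_bits)
abs_err = Fraction(1, 10) - stored           # ~9.5e-8 per tick of 0.1 s
ticks = 100 * 3600 * 10                      # tenths of a second in 100 hours
drift = float(abs_err * ticks)               # accumulated clock error
print(f"clock drift after 100 hours: {drift:.4f} s")  # ~0.3433 s
```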

(See http://wwwusers.math.umn.edu/~arnold/disasters/patriot.html)

The same problem arises in other algorithms that accumulate and magnify small round-off errors due to the finite (inexact) representation of numbers in the computer. Algorithms of this kind are referred to as ‘unstable’ methods. Many numerical schemes for solving differential equations have been shown to magnify small numerical errors. It is known, for example, that L.F. Richardson’s original attempts at numerical weather forecasting were essentially scuppered by the unstable methods used to compute the atmospheric flow. Much time and effort have now been invested in developing and carefully coding methods for solving algebraic and differential equations so as to guarantee stability, and excellent software is publicly available. Academics and operational weather forecasting centres in the UK have been at the forefront of this research.
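A classic illustration of instability (a textbook example, not Richardson’s scheme) is the recurrence I_n = 1 − n·I_(n−1) for the integrals I_n = ∫₀¹ x^n e^(x−1) dx: run forwards, it multiplies any rounding error in I_0 by n!; run backwards, the same relation divides the error by n at every step:

```python
import math

# I_n = integral of x^n * e^(x-1) over [0,1] satisfies I_n = 1 - n*I_{n-1},
# with I_0 = 1 - 1/e. All I_n are positive and decrease towards zero, yet the
# forward recurrence amplifies the rounding error in I_0 by n!  -- unstable.
I = 1 - math.exp(-1)
for n in range(1, 21):
    I = 1 - n * I
print(f"forward  I_20 = {I:.6g}")   # garbage: error amplified by 20! ~ 2.4e18

# Running the recurrence backwards divides the error by n at each step, so
# even the crude starting guess I_30 = 0 gives an accurate I_20  -- stable.
J = 0.0
for n in range(30, 20, -1):
    J = (1 - J) / n
print(f"backward I_20 = {J:.6g}")   # ~0.0455
```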

Even with stable algorithms, however, it may not be possible to compute an accurate solution to a given problem. The reason is that the solution may be sensitive to small errors  –  that is, a small error in the data describing the problem causes large changes in the solution. Such problems are called ‘ill-conditioned’. Even entering the data of a problem into a computer  –  for example, the initial conditions for a differential equation or the matrix elements of an eigenvalue problem  –   must introduce small numerical errors in the data. If the problem is ill-conditioned, these then lead to large changes in the computed solution, which no method can prevent.   

So how do you know if your problem is sensitive to small perturbations in the data? Careful analysis can reveal the issue, and for some classes of problems there are measures of the sensitivity, or the ‘conditioning’, of the problem that can be used. For example, it can be shown that small perturbations in a matrix can lead to large relative changes in its inverse if the ‘condition number’ of the matrix is large. The condition number is the product of the norm of the matrix and the norm of its inverse. Similarly, small changes in the elements of a matrix will cause its eigenvalues to have large errors if the condition number of the matrix of eigenvectors is large. Of course, determining a condition number exactly is itself a computational problem (it involves the inverse), but accurate methods for estimating condition numbers are available.

An example of an ill-conditioned matrix is the covariance matrix associated with a Gaussian distribution. Figure 2 below shows the condition number of a covariance matrix obtained by taking samples from a Gaussian correlation function at 500 points, using a step size of 0.1, for varying length-scales [1]. The condition number increases rapidly to 10⁷ for length-scales of only L = 0.2 and, for length-scales larger than 0.28, the condition number exceeds the reciprocal of the machine precision and cannot even be calculated accurately.

Figure 2
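The calculation behind the figure is straightforward to sketch; the exact numbers depend on the convention used for the correlation function, and this sketch assumes the form exp(−r²/2L²):

```python
import numpy as np

# Condition number of a Gaussian correlation matrix sampled at 500 points
# with grid spacing 0.1, for a few length-scales (sketch of the figure).
n, dx = 500, 0.1
x = np.arange(n) * dx
r = np.abs(x[:, None] - x[None, :])   # pairwise distances between points

for L in (0.1, 0.2, 0.28):
    C = np.exp(-0.5 * (r / L) ** 2)   # assumed Gaussian correlation function
    print(f"L = {L}: cond(C) ~ {np.linalg.cond(C):.2e}")
    # beyond ~1e16 the computed value itself is no longer trustworthy
```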

This result is surprising and very significant for numerical weather prediction (NWP), as the inverses of covariance matrices are used to weight the uncertainty in the model forecast and in the observations during the analysis phase of weather prediction. The analysis is achieved by the process of data assimilation, which combines a forecast from a computational model of the atmosphere with physical observations obtained from in situ and remote sensing instruments. If the weighting matrices are ill-conditioned, then the assimilation problem becomes ill-conditioned too, making it difficult to obtain an accurate analysis and subsequently a good forecast [2]. Furthermore, the worse the conditioning of the assimilation problem, the more time it takes to do the analysis. This matters because the forecast needs to be produced in ‘real’ time, so the analysis needs to be done as quickly as possible.

One way to deal with an ill-conditioned system is to rearrange the problem so as to reduce the conditioning whilst retaining the same solution. A technique for achieving this is to ‘precondition’ the problem using a transformation of variables. This is used regularly in NWP operational centres, with the aim of ensuring that the uncertainties in the transformed variables all have a variance of one [1][2]. In Table 1 we can see the effect of the length-scale of the error correlations in a data assimilation system on the number of iterations it takes to solve the problem, with and without preconditioning [1]. The conditioning of the problem is improved and the work needed to solve it is significantly reduced. So checking and controlling the conditioning of a computational problem is always important!

Table 1
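As a toy illustration of why this works (my sketch, with made-up covariances and an identity observation operator, not the operational formulation): the Hessian of the variational problem is S = B^(-1) + H^T R^(-1) H, and the control-variable transform v = B^(-1/2) x replaces it by I + B^(1/2) H^T R^(-1) H B^(1/2), whose eigenvalues are bounded below by one:

```python
import numpy as np

n, dx, L = 200, 0.1, 0.2
x = np.arange(n) * dx
r = np.abs(x[:, None] - x[None, :])
B = np.exp(-0.5 * (r / L) ** 2)        # background-error covariance (Gaussian)
Rinv = (1 / 0.1) * np.eye(n)           # observation-error precision
H = np.eye(n)                          # observe every grid point (toy choice)

S = np.linalg.inv(B) + H.T @ Rinv @ H  # unpreconditioned Hessian

w, V = np.linalg.eigh(B)
Bh = V @ np.diag(np.sqrt(w)) @ V.T     # symmetric square root B^(1/2)
Sp = np.eye(n) + Bh @ H.T @ Rinv @ H @ Bh  # preconditioned Hessian

print(f"cond before: {np.linalg.cond(S):.1e}")   # large, inherited from B
print(f"cond after : {np.linalg.cond(Sp):.1e}")  # ~50: bounded by 1 + max eig
```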

References

[1] S.A. Haben, 2011. Conditioning and Preconditioning of the Minimisation Problem in Variational Data Assimilation. PhD Thesis, Department of Mathematics and Statistics, University of Reading: https://www.reading.ac.uk/web/files/maths/HabenThesis.pdf

[2]  S.A. Haben, A.S. Lawless and N.K. Nichols,  2011. Conditioning of incremental variational data assimilation, with application to the Met Office system, Tellus, 63A, 782–792. (doi:10.1111/j.1600-0870.2011.00527.x)


A Presidential address …

By Ellie Highwood

I have been President of the Royal Meteorological Society (RMetS) for almost a year now (I will serve two years in total) and people keep asking me “how’s it going?” or “are you enjoying it?” Before I answer those questions let me describe the role.

The role of President of a small society like RMetS is a bit of everything, to be honest. Obviously there are the formal things – I chair Council meetings three times a year as well as the Awards Committee, and then present the awards at the AGM. Since we are developing our next three-year strategy, there are also meetings and workshops with a group of members and Council to do that. Ahead of each Council meeting there are about three hours of work to do on the papers to make sure I understand what is in them – these can be about new initiatives the RMetS wants to pursue, reports from the other committees, or the annual accounts. Some people who have experienced me chairing a Termly Staff Meeting at Reading will perhaps be surprised to learn that all the Council meetings have tended to over-run! It is definitely a challenge to make sure every voice is heard on some quite complex issues without getting bogged down in the detail. In truth, it’s not my favourite part of the job, but it comes in chunks, as between times the working groups, committees and Society staff are busy getting on with things. In fact, having served four years as Vice-President prior to becoming President (not usual, but a variety of exceptional circumstances led to me “filling in”), I was in more regular committee meetings in that role. Not only did I attend the same meetings as the President, but I also chaired the Strategic Programme Board and was a member of the Membership Development Committee.

On top of this, there are the less predictable items – dealing with requests/complaints from members and supporting the Chief Executive, Liz Bentley, in the day to day running of the Society and in strategic planning. Liz runs a small office with 8 or so employees and sometimes it’s good to have someone outside the line management structure to chat things through with. We meet once per month – it’s certainly very handy that RMetS Headquarters is in Reading! Unusually this year there are significant anniversaries of the formation of the Canadian Meteorological and Oceanographic Society and Australian Meteorological and Oceanographic Society from the previous “local” branches of the RMetS. To mark this, I will be travelling to Melbourne in August along with Liz and Brian Golding (outgoing Chair of Meetings Committee) to represent the Society (fitting in a seminar at Monash University on the way). I sent a video birthday message to Canada because the timing didn’t fit with the rest of my life to do both. I will also be playing a reasonably big role at the RMetS national conferences and at some point will have to give a Presidential address both at a national meeting, and in Scotland.

I expect I spend on average half a day per week, if that, on RMetS business unless there is a crisis – which rarely happens due to the excellent work of staff and volunteers. I am also lucky that my predecessor Jennie Campbell had to do the negotiation with our publishers Wiley and that’s not due again for another couple of years (I hope). So, back to the original questions:

How is it going? Well, I think (apart from the length of those Council meetings). I wouldn’t expect an RMetS President to come in and suddenly change everything – it just isn’t that kind of Society, and two years is too short a term to do that. Instead I see my role as nudging and encouraging movement in directions that may already have started, e.g. reviewing what it means to be a Fellow of the Royal Meteorological Society, tightening up our nominations and awards processes and making them more transparent, and getting discussions about diversity and inclusion happening (well, you would expect nothing less given my day job, right?).

Am I enjoying it? Hmm. Interesting. It is certainly a great honour to be President, but somewhat intimidating every time I walk up the staircase at Headquarters and see my picture there alongside centuries of great meteorologists (imposter syndrome klaxon). I am proud to be involved with shaping our learned society and, dare I say, moving it along a little to be fit for the next generation of meteorologists. The volunteers on Council and the various committees are, every one of them, fascinating. I loved handing out the awards at the AGM, and I love working with the RMetS staff on conferences and such like. It is great fun. But it is weird. It doesn’t feel like a thing most of the time. Which is probably as it should be. We wouldn’t want Presidents to let power go to their heads now, would we?

If you’d like to get involved with the Royal Meteorological Society there are many ways to do so. I started being involved as a postdoc and got a lot of my formal meeting experience and contacts through the RMetS. Visit the website to see what they are up to and whether you can help, attend a meeting or a conference, or nominate someone for Council or an award.


Belmont Forum: joined-up thinking from science funders

By Vicky Lucas

The Belmont Forum supports ‘international transdisciplinary research providing knowledge for understanding, mitigating and adapting to global environmental change’.

The Belmont Forum funds research on themes including sustainability, climate predictability, ecosystem services and Arctic observing.  The group considers research to be part of a value chain which is socially responsible, inclusive and provides innovative solutions.  Furthermore, open data policies and principles are considered essential to making informed decisions in the face of rapid changes affecting the Earth’s environment.

Belmont Forum Funded Projects
Andy Turner, of NCAS and the University of Reading Meteorology Department, leads a Belmont Forum funded project, BITMAP (‘Better understanding of Interregional Teleconnections for prediction in the Monsoon and Poles’), which is jointly funded by JPI Climate.  The research is an Indo-UK-German collaboration between the Indian National Centre for Medium Range Weather Forecasting and the universities of Reading and Hamburg.

As is regularly the case with Belmont funding calls, a multi-national consortium was required and each of the participating countries contributed support, with NERC the relevant funder in the UK.  Andy says that his project is ‘encouraging international collaboration and bridging the gap between academic climate science and the more applied needs of weather forecasting’.  The project is going well: only six months after starting, a paper on an algorithm for tracking storms is already in preparation by Kieran Hunt.  Andy observes that, in addition to regular virtual meetings between the three countries, ‘as papers from the individual countries begin to be published, the collaboration on the project will increase and more ideas will be shared’.

Scott Osprey of the University of Oxford leads GOTHAM, the ‘Globally Observed Teleconnections and their role and representation in Hierarchies of Atmospheric Models’ project, also funded by the Belmont Forum.  When asked about the role of the Belmont Forum, Scott pointed to the ability of this international group to encourage ‘new international research communities for tackling large and complex environmental issues beyond the purview of most national research centres’.

Data Intensive Environmental Research
The e-Infrastructure and Data Management sub-group of the Belmont Forum was set up to concentrate on overcoming barriers in data sharing, use and management for environmental and global change research.  By improving data sharing, data intensive research will be accelerated.  The University of Reading has been involved for several years as Robert Gurney co-chairs the e-I&DM group.

The e-I&DM group promotes the FAIR data principles, which in detail include the use of rich metadata, standards for vocabularies and data formats, along with persistent identifiers, clear licensing and provenance, to ensure that data are:

  • Findable
  • Accessible
  • Interoperable
  • Reusable

These principles have been embraced by many, including the European Commission and Horizon 2020 funded projects.

The number of countries participating in the e-I&DM group is smaller than the parent Belmont Forum, with the active roles provided by France (ANR), Taiwan (MOST), Japan (JST), US (NSF) and the UK (NERC).

Back-of-the-envelope calculation for a data management plan

The Belmont Forum e-I&DM group is currently developing a template for data plans, intended to be light touch, to highlight issues such as cost, documentation, anticipated restrictions on accessing data, and data management after the lifetime of the project.  Organisations such as NERC already have guidelines, but the Belmont Forum can help to standardise the actions of a number of countries.  Andy Turner noted that flexibility from funders is key when asking for a data management plan at the proposal stage, since ‘only very rough estimates of data sizes might be possible and it is difficult to say at the outset how much data produced will have long-term value’.  Scientists are asked to make projections in the knowledge that the research may change track during a project, changing the data management needs.  Nevertheless, considering data management issues from the outset can only help to raise awareness of the value of the data projects produce and to highlight the potential value in reuse.

Summary
The Belmont Forum provides the opportunity to produce joined-up thinking from science funders and councils.  The group uses its global reach to influence and fund collaborative research and to work on specific issues for data intensive environmental research.  The data behind this research, which is channelled into discussions, analyses and papers, is also being more widely acknowledged as a valuable resource in itself.  The Belmont Forum is providing leadership and agreement to develop and disseminate best practice for the data themselves.

Vicky Lucas is the Human Dimensions Champion, Belmont Forum e-I&DM, and Training Manager, IEA


Soil Moisture retrieval from satellite SAR imagery

By Keith Morrison

Soil moisture retrieval from satellite synthetic aperture radar (SAR) imagery exploits the fact that the signal reflected from a soil is related to its dielectric properties. For a given soil type, variations in the dielectric constant are controlled solely by changes in moisture content. Thus, the backscatter value at a pixel can be inverted via scattering models to obtain the surface moisture. However, this retrieval is complicated by the additional sensitivity of the backscatter to surface roughness and overlying vegetation biomass.

For the simplest cases of bare or lightly vegetated soils, extraction of accurate soil moisture information relies on an accurate model representation of the relative contributions of soil moisture and surface roughness. Models to invert backscatter into soil moisture can be broadly categorised as physical, empirical, or semi-empirical. Empirical models use experimental results to derive explicit relationships between the radar backscattering and moisture. However, these models tend to be site-specific, only being applicable to situations where radar parameters and soil conditions are close to those used in the initial model derivation. Semi-empirical models start with a theoretical description of the scene, and then use simulated or experimental data to direct the implementation of the model. Such models are useful as they provide relatively simple relationships between surface properties and radar observables that capture much of the physics of the radar-soil interaction. Their key advantages are that they are much less site-dependent than empirical models, and can be applied when little or no information about the surface roughness is available. Theoretical, or physical, models are based on a rigorous mathematical description of the radar-soil interaction, and retrieve moisture through a formal inversion of the backscatter. Their generality means they are applicable to a wide range of site conditions and sensor characteristics. In practice, however, because these models require a large number of input variables, their parameterisation is complex and their implementation consequently difficult. As such, semi-empirical models have generally been the most favoured.
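To make the empirical approach concrete, here is a deliberately simple sketch with hypothetical coefficients; real empirical models fit relations of this kind to site data, which is precisely why they transfer poorly to other sites:

```python
# Illustrative sketch of an empirical inversion (hypothetical coefficients):
# for a fixed site, roughness and sensor set-up, many empirical models fit a
# near-linear relation between backscatter in dB and volumetric soil moisture,
# e.g. sigma0_dB = a + b * mv. Inverting it is then trivial, but the fitted
# a, b are site-specific -- exactly the limitation noted above.
a, b = -20.0, 30.0   # assumed fit: sigma0 in dB, mv in m^3/m^3

def moisture_from_backscatter(sigma0_db):
    """Invert the assumed linear backscatter-moisture model."""
    return (sigma0_db - a) / b

print(moisture_from_backscatter(-12.5))  # ~0.25 m^3/m^3
```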

The approaches outlined above only use the incoherent component – backscatter intensity – to characterise the soil moisture, discarding potentially useful information contained in the phase. Recently, however, a causal link between soil moisture and interferometric phase has been demonstrated, and the development of phase-derived soil moisture products will see increasing attention. The figure below shows the first demonstration of phase-retrieved soil moisture, applied across agricultural fields (De Zan et al., 2014). Here, the differential phase (in degrees) between two SAR images clearly shows delineation along field boundaries, associated with differing moisture states.

Reference

De Zan, F., et al., 2014. IEEE Transactions on Geoscience and Remote Sensing, 52, 418–425


Can observations of the ocean help predict the weather?

By Amos Lawless

It has long been recognized that there are strong interactions between the atmosphere and the ocean. For example, the sea surface temperature affects what happens in the lower boundary of the atmosphere, while heat, momentum and moisture fluxes from the atmosphere help determine the ocean state. Such two-way interactions are exploited in forecasting on seasonal or climate time scales, for which computational simulations of the coupled atmosphere-ocean system are routinely used. More recently, operational forecasting centres have started to move towards representing the coupled system on shorter time scales, with the idea that even for a weather forecast a few hours or days ahead, knowledge of the ocean can provide useful information.

A big challenge in performing coupled atmosphere-ocean simulations on short time scales is to determine the current state of both the atmosphere and ocean from which to make a forecast. In standard atmospheric or oceanic prediction the current state is determined by combining observations (for example, from satellites) with computational simulations, using techniques known as data assimilation. Data assimilation aims to produce the optimal combination of the available information, taking into account the statistics of the errors in the data and the physics of the problem. This is a well-established science in forecasting for the atmosphere or ocean separately, but determining the coupled atmospheric and oceanic states together is more difficult. In particular, the atmosphere and ocean evolve on very different space and time scales, which is not very well handled by current methods of data assimilation. Furthermore, it is important that the estimated atmospheric and oceanic states are consistent with each other, otherwise unrealistic features may appear in the forecast at the air-sea boundary (a phenomenon known as initialization shock).
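The ‘optimal combination’ idea is easiest to see in a scalar example (a textbook sketch, not any centre’s operational scheme): weight the model background and the observation by the inverses of their error variances, and the analysis error variance comes out smaller than either input:

```python
# Minimal scalar example of the optimal-combination idea behind data
# assimilation: blend a background (model) estimate xb with an observation y,
# each weighted by the inverse of its error variance.
def analysis(xb, var_b, y, var_o):
    k = var_b / (var_b + var_o)   # gain: how much to trust the observation
    xa = xb + k * (y - xb)        # analysis state
    var_a = (1 - k) * var_b       # analysis error variance (< both inputs)
    return xa, var_a

# Illustrative SST case: model says 14.0 C (error variance 0.5),
# a satellite observation says 14.6 C (error variance 0.3)
xa, var_a = analysis(14.0, 0.5, 14.6, 0.3)
print(f"analysis = {xa:.2f} C, error variance = {var_a:.3f}")  # 14.38 C, 0.188
```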

However, testing new methods of data assimilation on simulations of the full atmosphere-ocean system is non-trivial, since each simulation uses a lot of computational resources. In recent projects sponsored by the European Space Agency and the Natural Environment Research Council we have developed an idealised system on which to develop new ideas. Our system consists of just one single column of the atmosphere (based on the system used at the European Centre for Medium-range Weather Forecasts, ECMWF) coupled to a single column of the ocean, as illustrated in Figure 1.  Using this system we have been able to compare current data assimilation methods with new, intermediate methods currently being developed at ECMWF and the Met Office, as well as with more advanced methods that are not yet technically possible to implement in the operational systems. Results indicate that even with the intermediate methods it is possible to gain useful information about the atmospheric state from observations of the ocean. However, there is potentially more benefit to be gained in moving towards advanced data assimilation methods over the coming years. We can certainly expect that in years to come observations of the ocean will provide valuable information for our daily weather forecasts.

References

Smith, P.J., Fowler, A.M. and Lawless, A.S., 2015. Exploring strategies for coupled 4D-Var data assimilation using an idealised atmosphere-ocean model. Tellus A, 67, 27025, http://dx.doi.org/10.3402/tellusa.v67.27025.

Fowler, A.M. and Lawless, A.S., 2016. An idealized study of coupled atmosphere-ocean 4D-Var in the presence of model error. Monthly Weather Review, 144, 4007-4030, https://doi.org/10.1175/MWR-D-15-0420.1


It melts from the top too …

By David Ferreira

The global sea level rises at about 3 mm/year. Oceans absorb nearly 90% of the heat trapped in the atmosphere by anthropogenic gases like carbon dioxide. As water warms, it expands: this effect explains about half of the observed sea level rise. The other half is due to the melting of ice stored over land, that is, glaciers, the Greenland ice sheet and the Antarctic ice sheet.

Although the latter has been a relatively small contributor, recent estimates suggest an increased mass loss from Antarctica in the last decade. Until now, Antarctica was thought to lose most of its mass at its edges.

The Antarctic ice sheet behaves a bit like a pile of dough that slowly collapses under its own weight. The ice spreads over the whole continent, and then over the oceans as floating ice, known as ice shelves. Ice shelves are usually found at the end of fast ice streams channelled by mountains (there are hundreds of these around the continent). The ice shelves are in contact with the “warm” ocean (~2-4°C) and melt slowly. Occasionally the process is more abrupt: the ice shelves shed icebergs, some of which are many kilometres in size (an iceberg much larger than Greater London is about to break loose from the Larsen ice shelf). On long timescales, the ice loss at the edges is compensated by snow falling on top of the ice sheet. In recent decades, however, the mass loss at the edges has been slightly larger than the gain through snowfall (a transfer of water to the oceans and a contribution to sea level rise). The leading explanation for this recent imbalance is that the rate at which warm water is brought to the ice shelves has increased, possibly because of a strengthening of the winds that drive the ocean currents.

A recent paper brings a new element into the picture: the Antarctic ice sheet does not only melt at the edges but also from the top (Kingslake et al., 2017). The surface melt process was thought to be exclusive to Greenland, as Antarctica is too cold, even in summer, for the temperature to rise above 0°C. So, how is this happening? Melt water in Antarctica seems to originate next to blue ice or exposed rocks. Within the white world of Antarctica, blue ice and rocks are dark. That is, they absorb more sunlight than snow and could (locally) create the conditions for melting. The melt water then gathers into elongated ponds that can grow by kilometres within weeks. Kingslake et al. have documented this process for hundreds of ice streams around Antarctica, sometimes deep into the continent, highlighting a much more widespread phenomenon than previously thought.

What are the possible consequences? These ponds can accelerate the mass loss to the ocean. For example, if they form over land, they may flush to the base of the ice sheet, “lubricate” the ice-ground interface and speed up the ice flow to the coast. If the ponds form over the ice shelves, the added pressure due to the weight of liquid water can help fracture the ice shelves and create icebergs.

Then, the natural question is whether the Antarctic ice sheet is more susceptible to rising temperatures than we think. Unlike melting at the edges, which involves indirect mechanisms through changing winds and ocean currents, surface melting could be directly influenced by increasing temperatures. How important could this be in terms of sea level rise? That remains to be quantified, as modern ice sheet models either do not take this effect into account or underestimate it.

Reference

Kingslake et al., 2017. Nature, doi: 10.1038/nature22049


Reducing climate change from aviation: could climate-friendly routing play a part?

By Emma Irvine

It’s commonly known that burning fossil fuels, as in jet engines, leads to the emission of carbon dioxide (CO2), which causes global warming. It is perhaps less well known that, particularly in the case of aviation, carbon dioxide is not the only (nor necessarily the largest) problem. When it comes to determining its climate impact, the aviation sector is complicated, with an entire range of ‘non-CO2’ impacts to consider. Burning jet fuel also releases water vapour, another greenhouse gas, as well as oxides of nitrogen, which lead to changes in ozone and methane, two other greenhouse gases. In addition, aircraft cruising at high altitude through very cold and moist air form long-lasting contrails. These long-lived contrails have potentially as large a climate impact as aviation CO2 emissions (Reference 1).

Figure 1.  Long-lived contrail cirrus criss-crossing the morning sky over Reading.

How can we continue to fly, but with a smaller impact on climate? It is encouraging to note that not only are many possibilities being investigated, but some are already being introduced. There is gradual technological development, which brings, for example, more fuel-efficient aircraft engines. There is development of cleaner aviation fuels which are not petroleum-based (although at present the alternative fuels certified for use have to be blended 50:50 with traditional kerosene). Improvements to the way air traffic is managed may also play a role, and it is in this latter category that climate-optimised aircraft routing falls.

Traditionally, aircraft routes are optimised for factors such as operating cost, fuel use and flight time.  A new idea, investigated under the European project REACT4C (Reference 2), was to find the optimal route that minimises the impact of a flight on climate, where the climate impact includes not only the CO2 emissions (i.e. fuel burn) but also the non-CO2 impacts. This approach, whilst having the advantage of not requiring any developments in aircraft technology or different air traffic control procedures, is made non-trivial by the non-CO2 impacts. Just the detailed computation of the impact of emissions from one day of trans-Atlantic flights required huge computational effort, since some of the emissions have a long lifetime and their impact depends not just on where they are emitted but on where the emissions are transported to and the chemical processes they undergo during their lifetime.

An additional complexity of the project, which was also one of its greatest strengths, was the involvement of scientists and engineers across a range of disciplines, from atmospheric and climate modelling to aeronautical engineering and air traffic control, spanning six different countries. Establishing a common scientific language across disciplines can sometimes be a challenge – for example, meteorologists are very attached to pressure as an indicator of altitude, whilst engineers and air traffic control specialists use flight levels in thousands of feet (which, despite their units, are really pressure levels in disguise). In the end, however, the project benefited from the individual expertise of each of its members, ensuring greater applicability of the results to the aviation industry.
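For the curious, the disguise is easy to lift: a flight level is a pressure altitude in hundreds of feet, convertible to pressure with the ICAO standard atmosphere. A sketch of the conversion (troposphere only, below roughly 36,000 ft):

```python
# Convert a flight level (pressure altitude in hundreds of feet) to pressure
# using the ICAO standard atmosphere: p = p0*(1 - 0.0065*h/T0)^5.25588,
# valid in the troposphere (below ~36,000 ft).
def flight_level_to_hpa(fl):
    h = fl * 100 * 0.3048                                  # flight level -> metres
    return 1013.25 * (1 - 0.0065 * h / 288.15) ** 5.25588  # pressure in hPa

for fl in (100, 200, 300):
    print(f"FL{fl} ~ {flight_level_to_hpa(fl):.0f} hPa")
# FL100 ~ 697 hPa, FL200 ~ 466 hPa, FL300 ~ 301 hPa
```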

The study simulated the full set of trans-Atlantic flights on each of five days with different weather conditions (in terms of their upper-level winds). By varying the flight path of each flight, we were able to find safe combinations of flights through North Atlantic airspace whose total climate impact was smaller than that of the set of flights with the smallest economic cost. One of the headline results from the project was that it should be possible to achieve a 10% reduction in total aviation climate impact for a 1% increase in economic cost (Reference 2). Any cost increase may be unpalatable to an industry run on tight margins, but the study also showed that this cost increase could be compensated for by incentivising climate-optimised routing through market-based measures. Our current research, under the umbrella of the Single European Sky Air Traffic Management Research programme (SESAR) (Reference 3), seeks to apply this idea to the more congested airspace over Europe.
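Conceptually, the trade-off can be pictured as minimising a weighted sum of economic cost and climate impact and scanning the weight, which traces out a Pareto frontier like the 10%-for-1% result quoted above. A sketch with purely illustrative numbers:

```python
# Toy picture of climate-optimised routing: each candidate route has an
# economic cost and a climate impact (arbitrary units, invented for
# illustration); scanning the weight lambda traces out the trade-off curve.
routes = [
    (100.0, 50.0),   # cheapest route, largest climate impact
    (101.0, 45.0),   # ~1% dearer, ~10% smaller climate impact
    (104.0, 42.0),
    (110.0, 41.0),
]

for lam in (0.0, 0.5, 2.0):
    best = min(routes, key=lambda r: r[0] + lam * r[1])  # weighted total cost
    print(f"lambda={lam}: cost {best[0]}, climate impact {best[1]}")
```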

References

  1. Burkhardt and Kärcher 2011: http://www.nature.com/nclimate/journal/v1/n1/full/nclimate1068.html
  2. Grewe et al. 2017: http://iopscience.iop.org/article/10.1088/1748-9326/aa5ba0
  3. https://www.sesarju.eu/

Why has there been a rapid increase in heat-related extremes in Western Europe since the mid-1990s?

By Buwen Dong

In the last few decades, Europe has warmed not only faster than the global average, but also faster than expected from anthropogenic greenhouse gas increases (van Oldenborgh et al., 2009). With the warming, Europe experienced record-breaking heat waves and extreme temperatures, such as the 2003 European heatwave, 2010 Russian heatwave, and 2015 European heatwave, which imposed disastrous impacts on individuals and society.

Illustrated in Figure 1 are time series of the area-averaged summer (June to August, JJA) surface air temperature (SAT), and of summer or annual temperature extreme anomalies, over Western Europe (35°N-70°N, 10°W-40°E, land only), relative to the climatology over the full period of the time series. One of the most important features is the abrupt surface warming since the mid-1990s, together with rapid increases in temperature extremes. The changes in SAT and temperature extremes during the recent 16 years (1996-2011) relative to the early period 1964-1993 are more than 1.0 degrees Celsius (degC).

Figure 1. Time series of summer (JJA) or annual mean anomalies relative to the climatology (mean of the whole period) averaged over Western Europe (35°N-70°N, 10°W-40°E). (a) Surface air temperature (SAT, degC), Tmax, Tmin, and daily temperature range (DTR, degC); (b) annual hottest day temperature (TXx) and warmest night temperature (TNx) (degC); TXx and TNx based on the two data sets HadEX2 and E-OBS. Black and red range bars indicate the earlier period 1964–1993 and the recent period 1996–2011.
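The indices themselves are simple to compute from daily data; the sketch below uses synthetic numbers purely to show the definitions (anomalies would then be taken relative to a base-period climatology):

```python
import numpy as np

# Compute the extreme indices defined above from one synthetic year of daily
# Tmax/Tmin: TXx is the annual maximum of daily Tmax, TNx the annual maximum
# of daily Tmin, and DTR the mean of Tmax - Tmin.
rng = np.random.default_rng(0)
days = 365
tmax = 15 + 8 * np.sin(np.linspace(0, 2 * np.pi, days)) + rng.normal(0, 3, days)
tmin = tmax - (8 + rng.normal(0, 1, days))   # nights ~8 degC cooler

TXx = tmax.max()                 # annual hottest day temperature
TNx = tmin.max()                 # annual warmest night temperature
DTR = (tmax - tmin).mean()       # mean daily temperature range
print(f"TXx = {TXx:.1f} degC, TNx = {TNx:.1f} degC, DTR = {DTR:.1f} degC")
```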

What has caused the rapid summer warming and increases in high temperature extremes over Western Europe? Relative to the early period of 1964-93, sea surface temperatures (SSTs) have warmed, particularly in the North Atlantic and Indian Oceans, and sea ice extent (SIE) has decreased. Due to air quality legislation, anthropogenic aerosol (AAer) precursor emissions in Europe and North America have decreased since the 1980s, while greenhouse gas (GHG) concentrations have increased. In order to understand the relative importance of these forcing factors for the rapid Western European summer warming and the increases in hot temperature extremes, numerical experiments with the atmospheric component of a state-of-the-art global climate model were performed in a study by Dong et al. (2016).

Area-averaged summer or annual changes in temperature extreme indices over Western Europe between the two periods, for both observations and model simulations, are illustrated in Figure 2. There is good agreement between the model forced by changes in all forcings and the observed changes in summer seasonal mean SAT, Tmax, and Tmin. In response to changes in all forcings, the model simulates an area-averaged summer mean SAT change of 1.16 ± 0.21 degC over Western Europe, which is very close to the observed change of 0.93 degC. The changes in SST/SIE explain 62.2 ± 13.0% of the area-averaged SAT signal, with the remaining 37.8 ± 13.6% explained by the direct impact of changes in GHGs and AAer. Both changes in SST/SIE and AAer lead to an increase in Tmax, while the increase in Tmin is predominantly due to the change in SST/SIE. The direct impact of AAer changes acts to increase the Daily Temperature Range (DTR), but this is countered by the direct impact of GHG forcing; the DTR change in response to all forcings is nevertheless overestimated by the model. Results also suggest that the direct impact of AAer changes plays an important role in the increase in the annual hottest day temperature (TXx), explaining 45.5 ± 17.6% of the signal in the response to all forcings, while the increase in the annual warmest night temperature (TNx) is mainly mediated through the warming of the ocean.

Figure 2: Observed and model-simulated summer seasonal mean (JJA) changes between the two periods for SAT, Tmax, Tmin, and DTR, and annual changes in annual hottest day (TXx) and warmest night (TNx) temperature, averaged over Western Europe. SAT, Tmax, Tmin, DTR, TXx, and TNx are in degC. (a) Observed changes (based on the CRUTS3.2 and HadEX2 data sets), and simulated responses to changes in SST/SIE, GHG concentrations, and AAer precursor emissions. The coloured bars indicate the central estimates and the whiskers show the 90% confidence intervals based on a two-tailed Student’s t-test. (b) Model-simulated changes in response to individual forcings: SST & SIE is the response to changes in SST/SIE, GHG is the response to changes in GHG concentrations, and Aerosols is the response to changes in AAer precursor emissions.

Whilst each forcing factor causes summer mean surface warming and associated temperature extreme changes over Western Europe, the physical processes are distinct in each case. For example, SST/SIE changes lead to more or less uniform summer mean warming at the surface. In contrast, changes in AAer lead to a band of surface warming and temperature extreme changes in the latitude band 40°N-55°N. The results of this study illustrate the important role of the direct impact of changes in AAer, not only on summer mean temperature but also on temperature extremes. The reduction of AAer precursor emissions not only increases downward solar radiation through aerosol-radiation and aerosol-cloud interactions, but also induces local positive feedbacks between surface warming and reductions in cloud cover, precipitation, soil moisture, and evaporation.

Looking forward in the next few decades, greenhouse gas concentrations will continue to rise and anthropogenic aerosol precursor emissions over Europe and North America will continue to decline. Our results suggest that the changes in seasonal mean SAT and temperature extremes over Western Europe since the mid-1990s are most likely to be sustained or amplified in the near term, unless other factors intervene.

References

Dong, B.-W., R. T. Sutton, and L. Shaffrey, 2016: Understanding the rapid summer warming and changes in temperature extremes since the mid-1990s over Western Europe. Clim. Dyn. doi:10.1007/s00382-016-3158-8

van Oldenborgh GJ, Drijfhout S, van Ulden A, Haarsma R, Sterl A, Severijns C, Hazeleger W, Dijkstra H, 2009: Western Europe is warming much faster than expected. Clim Past 5(1):1–12


The physics behind a physics scheme

By Alan Grant

When I joined the Met Office (or, as it was then, The Meteorological Office), I was posted to the boundary layer group. I spent a number of years investigating the atmospheric boundary layer, using data from aircraft and tethered balloons. The justification for the work was to increase our understanding of the boundary layer, which would hopefully lead to improvements in the parametrization of the boundary layer in forecast models. Fast forwarding to the present, I now work on the boundary layer that forms below the surface of the ocean, using high resolution large eddy models, instead of autonomous underwater vehicles (AUVs) and gliders.  The aim of the work remains the same, to develop better parametrizations.

Figure 1. The sea surface in a North Pacific Storm. Photo credit – NOAA

A simple approach to parametrizing the ocean boundary layer is to use parametrizations developed for the cloud-free atmospheric boundary layer, but upside down (making appropriate changes to account for the different densities and heat capacities of air and water). This is a reasonable strategy, but it turns out that there is more to the ocean boundary layer than this, and unsurprisingly the source of the difference between the oceanic and atmospheric boundary layers is the boundary condition.

The possible effects of surface waves, one of the more striking features of the ocean surface, are an obvious difference between the oceanic and atmospheric boundary layers. Breaking waves, and the interaction between turbulent currents and the Stokes drift of the surface waves (a Lagrangian drift which arises from the non-linearity of the Navier-Stokes equations), have a dramatic effect on the properties of the turbulence in the boundary layer.
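For concreteness, the Stokes drift of a deep-water monochromatic wave has the textbook profile u_s(z) = ωka²e^(2kz) (a standard expression, not a result of this work), which shows how strongly the drift is trapped near the surface:

```python
import numpy as np

# Stokes drift profile for a deep-water monochromatic wave: the drift decays
# as exp(2kz) below the surface, so it is concentrated in the top few metres.
g = 9.81
a, wavelength = 1.0, 100.0         # assumed wave amplitude (m) and wavelength (m)
k = 2 * np.pi / wavelength         # wavenumber
omega = np.sqrt(g * k)             # deep-water dispersion relation

z = np.array([0.0, -1.0, -2.0, -5.0, -10.0])   # depths (m, negative downward)
u_s = omega * k * a**2 * np.exp(2 * k * z)     # Stokes drift (m/s)
for zi, ui in zip(z, u_s):
    print(f"z = {zi:6.1f} m : u_s = {ui:.3f} m/s")
```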

A more fundamental difference between the oceanic and atmospheric boundary layers is the effect that the surface stress has on the boundary layer flow. In the atmospheric boundary layer momentum is transferred to the surface, so that the surface exerts a drag on the atmosphere. Along with the transport of momentum to the surface, there is also a transport of the mean kinetic energy of the flow (not to be confused with turbulent kinetic energy) from the outer region of the boundary layer towards the surface. This flux of mean kinetic energy maintains the flow near the surface, and supplies the energy needed for the large dissipation rates that occur in the surface layer. The ultimate source of this kinetic energy flux is the work done by the pressure gradient.

In the oceanic boundary layer, the momentum transferred from the atmosphere to the surface acts to generate the mean current. Along with the transfer of momentum into the ocean there is, again, a transfer of mean kinetic energy, but now it is directed away from the surface into the ocean.  This energy flux supports the generation of turbulence at the base of the well-mixed portion of the boundary layer, and turbulent mixing in the stratified layer below. The turbulent mixing in the stratified layer is an important feature of the upper ocean, but is poorly represented in current parametrizations.

To improve parametrizations of the mixing in the stratified layer we need to understand in more detail the process outlined above, and large-eddy simulation can be used to make detailed studies of this and other processes. By understanding the fundamental physics that lies behind the physics scheme we can hopefully improve the parametrization of the surface boundary layer in ocean models.
