Stronger windstorms and higher wind risk in a warmer climate

By Oscar Martínez-Alvarado

The most devastating winter storms to affect north-west Europe are characterised by a descending jet of air, known as a sting jet, which can produce strong, localised surface winds and wind gusts in a region of the storm not normally associated with strong surface winds. The Great Storm, which ravaged south-east England 30 years ago on 16 October 1987, is a prominent example of this type of storm and the first published case in which a sting jet was identified. Since then, sting jets have been formally identified in several other storms, and the term ‘sting jet’ has become common in the media, as shown by the recent BBC and Guardian coverage of the revolution in weather forecasting triggered by the Great Storm.

Last year, a team of researchers from the Department of Meteorology published a study of the frequency of sting-jet windstorms between 1979 and 2012 (Hart et al. 2017). They found that about 32% of cyclonic storms over the North Atlantic between September and May have the potential to generate sting jets. Applying the same techniques as Hart et al. (2017), we have gone one step further and produced the first study of how sting-jet windstorms might differ in a warmer climate (Martínez-Alvarado et al. 2018). Our study assumes the most extreme climate-change scenario considered by the Intergovernmental Panel on Climate Change (IPCC), in which greenhouse gas concentrations continue to rise throughout the 21st century.

Our results show that the proportion of cyclonic storms with the potential to generate sting jets increases to around 45% in the warmer climate. Furthermore, while the proportion of explosively-developing storms (low-pressure systems whose central pressure falls very rapidly) is similar in the two climate simulations, the proportion of these storms with the potential to generate sting jets increases from 9% to 14% in the warmer climate (Figure 1). In a previous blog entry, Giuseppe Zappa discussed the changes that cyclonic storms might undergo under climate change. Among these changes he mentioned an increase in cyclones associated with extreme rainfall, related to a larger amount of moisture in the atmosphere. We think that this larger atmospheric moisture content is the reason behind the increase in the frequency of storms capable of generating sting jets. However, more work is needed to confirm this.

Figure 1: Infographic illustrating the number per winter season and percentage of all identified cyclones categorised by type of development (explosive or non-explosive) and potential to generate sting jets. A mixed symbol is used to represent the dominant types of cyclones where the rounded percentages do not add to 100% (Martínez-Alvarado et al. 2018).

We also looked at the wind risk posed by these storms for the UK and northern Europe. We found that the risk of wind speeds larger than 35 m/s over both regions increases, and that a large proportion of that increase is due to explosively-developing sting-jet storms (Figure 2). One factor to consider when looking at these results is that the models we used tend to underestimate wind speed. Therefore, this wind risk is likely to be larger in the real world.

Figure 2: Events per year of strong resolved-wind events in storms with (red shading) and without (blue shading) the potential to produce sting jets (Martínez-Alvarado et al. 2018).

References

Hart, N.C., S.L. Gray and P.A. Clark, 2017. Sting-Jet Windstorms over the North Atlantic: Climatology and Contribution to Extreme Wind Risk. J. Climate, 30, 5455–5471, DOI: 10.1175/JCLI-D-16-0791.1 

Martínez-Alvarado, O., S.L. Gray, N.C.G. Hart, P.A. Clark, K.I. Hodges and M.J. Roberts, 2018. Increased wind risk from sting-jet windstorms with climate change. Environ. Res. Lett., DOI: 10.1088/1748-9326/aaae3a


High speed mathematics: reducing the computation time for weather forecasting

By Sarah Dance

Several times a day, around 10 million observations of the atmosphere are processed by operational weather services, in order to produce the next weather forecast. At the University of Reading, we have been using mathematics to understand and control the amount of computer-time taken in the forecasting process.

Why?
In numerical weather prediction, heterogeneous observations are weighted according to their uncertainty, to create our best estimate of the current state (winds, pressures, temperatures, moisture) across the globe.  This process is called data assimilation. A computer model then solves equations based on physical laws, to calculate the forecast from a few minutes to several days ahead.  The amount of computer-time taken in the data assimilation process is very important: a weather forecast that arrives after the weather has already happened is pretty useless!
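The weighting idea can be illustrated with a one-variable analogue: combine a model background and an observation, each weighted by the inverse of its error variance. The numbers below are purely illustrative, not operational values:

```python
# Combine a model background x_b and an observation y, each weighted by
# the inverse of its error variance (a one-variable analogue of 3D-Var).
x_b, var_b = 281.0, 1.0**2      # background temperature (K) and its error variance
y, var_o = 283.0, 0.5**2        # observation and its error variance

weight = var_b / (var_b + var_o)             # Kalman-type gain: trust the obs more
analysis = x_b + weight * (y - x_b)          # best estimate of the current state
var_a = (1.0 / var_b + 1.0 / var_o) ** -1    # analysis error variance (reduced)
print(analysis, var_a)                       # 282.6 K, 0.2 K^2
```

The analysis lies closer to the observation because the observation is the more certain of the two, and the analysis variance is smaller than either input variance.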

How?
The data assimilation process uses weighting matrices, describing our knowledge of the uncertainty in the observations.  We have shown how the sensitivity of the data assimilation solution, and the speed of the computer code in finding that solution, depends on the mathematical properties of the weighting matrices.
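The dependence of solver speed on the conditioning of these matrices can be illustrated with a toy example. The sketch below is illustrative only (a simplified Hessian, not the operational system): it solves a quadratic minimisation by conjugate gradients for a well-conditioned and an ill-conditioned observation-error weighting matrix R, counting the iterations each needs.

```python
import numpy as np

def cg_iterations(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b by conjugate gradients; return the iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return k
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return max_iter

rng = np.random.default_rng(0)
n = 50
# Two observation-error weighting matrices R: well- and ill-conditioned
R_good = np.eye(n)
R_bad = np.diag(np.geomspace(1e-4, 1.0, n))   # widely spread obs-error variances
for R in (R_good, R_bad):
    # Toy cost-function Hessian: B^{-1} + H^T R^{-1} H, with B = H = I
    A = np.eye(n) + np.linalg.inv(R)
    print(np.linalg.cond(A), cg_iterations(A, rng.standard_normal(n)))
```

The ill-conditioned R produces a Hessian with a much larger condition number, and the iterative solver needs many more iterations to converge, which is exactly the computer-time issue described above.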

What now?
Observation uncertainty cannot be measured directly and must be estimated in a statistical sense.  However, these estimated matrices may be noisy, and require “cleaning up” before they can be used practically.  Our results could be used to inform this clean-up process and, in turn, reduce the computational time taken for data assimilation.
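One simple way to “clean up” a noisy covariance estimate, shown here as an illustrative sketch only (the methods studied in the paper may differ), is to symmetrise the matrix and floor its eigenvalues so a target condition number is not exceeded:

```python
import numpy as np

def recondition(R_est, kappa_max=100.0):
    """Symmetrise a noisy covariance estimate and floor its eigenvalues
    so that the condition number does not exceed kappa_max."""
    R_sym = 0.5 * (R_est + R_est.T)           # enforce exact symmetry
    vals, vecs = np.linalg.eigh(R_sym)
    floor = vals.max() / kappa_max            # smallest eigenvalue allowed
    vals_clipped = np.clip(vals, floor, None)
    return vecs @ np.diag(vals_clipped) @ vecs.T

rng = np.random.default_rng(42)
n = 30
samples = rng.standard_normal((20, n))        # too few samples: noisy, rank-deficient
R_noisy = np.cov(samples, rowvar=False)       # raw estimate (singular here)
R_clean = recondition(R_noisy)
print(np.linalg.cond(R_clean))                # bounded near kappa_max
```

The cleaned matrix is symmetric positive definite with a bounded condition number, so an iterative solver using it converges in a predictable number of iterations.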

Reference
Tabeart, J. M., Dance, S. L., Haben, S., Lawless, A., Nichols, N. and Waller, J., 2018. The conditioning of least squares problems in variational data assimilation. Numerical Linear Algebra with Applications. (In Press)


Thoughts on Standing up for Science workshop in London

By Amulya Chevuturi

When I recently attended the “Standing up for Science” workshop in London, organised by Voice of Young Science (VoYS), I got a glimpse of the implications of my science beyond my own desk at work. I went to this workshop without many preconceptions about the topic. Though I follow science news avidly, I never thought I could be part of it just yet, if ever. Over the course of the workshop, my ideas about the involvement of young scientists in the dissemination of science changed rapidly.

Speakers from a range of backgrounds and experiences offered ideas from very different points of view. The casual atmosphere and easy interaction allowed people to communicate and raise questions without hesitation. I loved VoYS’s insistence on strong scientific evidence, since the propagation of falsehoods is one of the biggest challenges in science, especially in my field of climate science.

Another issue that made me wary about standing up for science is the idea of speaking to a large audience. Personally, I have overcome my fear of public speaking, but I still don’t feel I would be articulate enough in front of a large audience. This workshop, however, showed us ways to avoid the pitfalls of media appearances and public debates. I was most encouraged by the experiences of the young speakers, whom I could definitely relate to.

I met young scientists from very different specialisations than mine, and it was heartening to hear that though we are doing very different things, we all seem to face the same basic challenges and share the same underlying fears. This cemented how cohesive the VoYS community is. Being part of such an active community makes me feel comfortable while simultaneously driving me towards a feeling of ‘let’s do something’.

For early-career researchers who, like me, may never have entertained the idea of their voices being heard, such a workshop is a good starting point. And for those who want an active role in public discussions about science but don’t know how to go about it, Voice of Young Science provides a launch pad for speaking about science to the public on any platform.

Sense about Science is an independent campaigning charity that challenges the misrepresentation of science and evidence in public life. We advocate openness and honesty about research findings, and work to ensure the public interest in sound science and evidence is recognised in public discussion and policymaking.

Voice of Young Science is a unique and dynamic network of early career researchers across Europe committed to playing an active role in public discussions about science. By responding to public misconceptions about science and evidence and engaging with the media, this active community of 2,000+ researchers is changing the way the public and the media view science and scientists.

For more information on future VoYS workshops, see the links above. The University of Reading and the Royal Meteorological Society are two of many partners of Sense about Science and VoYS.

Captions for photographs:

Audience (young scientists) listening intently

Scientists on ‘how to present your work to the media’

Journalists on ‘what the media is looking for from scientists’

Q&A

Getting to know each other


Stronger turbulence causes a stir

By Paul Williams and Luke Storer

Our new study calculating that climate change will strengthen aviation turbulence has caused a stir on social media. Most of the online comments about the article have been positive – albeit expressing a little anxiety at the prospect of experiencing a doubling of the amount of severe turbulence later this century.

The new paper, as well as our previous study on this topic in Nature Climate Change, was peer-reviewed by international experts in aviation turbulence and found to be scientifically correct. However, as is commonplace in the public discussion about climate science today – at a time when opinions seem to count more than evidence and facts – a small number of non-expert commentators have misunderstood the scientific details and attempted to discredit the findings.

Some commentators say they have experienced less turbulence on their recent flights. While we do not doubt such claims, one individual person’s encounters with turbulence are obviously a very small sample from a very large distribution of possibilities. The volume of global airspace sampled by even the most frequent of fliers is tiny. Also, as we have pointed out in a third study on this topic, aircraft bumpiness depends on a number of extraneous factors in addition to the strength and frequency of atmospheric turbulence.

Some commentators assert that we “fudge” the input parameters to obtain the answers we want. This is simply untrue, as anyone reading our paper can see. The key input parameters are the fractions of the atmosphere containing light, moderate, and severe turbulence. We know these fractions from detailed in-flight measurement campaigns. Our input parameters are objectively constrained by these measurements. For example, we know that severe turbulence is found in around 0.1% of the atmosphere at typical flight cruising altitudes. This percentage value allows us to define thresholds for severe turbulence in our calculations, and to count how often those thresholds are exceeded when the climate changes. There is no fudging, because the in-flight measurements give us no freedom of choice to do anything other than what we have done.
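The threshold-counting logic described above can be sketched with synthetic data. The lognormal distributions below are purely illustrative stand-ins, not our actual turbulence diagnostics; the point is only that the 0.1% measurement objectively fixes the threshold:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic values of a clear-air turbulence diagnostic (illustrative only)
control = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

# Severe turbulence occupies ~0.1% of the atmosphere at cruise altitudes,
# so the severe threshold is pinned at the 99.9th percentile of the
# control-climate distribution: no free parameter to "fudge".
severe_threshold = np.quantile(control, 0.999)

# A hypothetical shifted distribution standing in for the warmer climate
warmer = rng.lognormal(mean=0.2, sigma=1.0, size=1_000_000)
frac_control = np.mean(control > severe_threshold)   # ~0.001 by construction
frac_warmer = np.mean(warmer > severe_threshold)     # exceedances in new climate
print(frac_control, frac_warmer, frac_warmer / frac_control)
```

Because the threshold is set entirely by the in-flight measurement constraint, the only output is how often that fixed threshold is exceeded once the distribution changes.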

A final source of confusion seems to be the response of the jet streams to climate change. Although much research remains to be done in this area, we know that the jet streams are driven by the equator-to-pole temperature difference: the stronger the temperature difference, the more sheared the jet stream. In the lower atmosphere, melting Arctic sea ice is causing the polar regions to warm more quickly than the tropical regions. Therefore, the lowest part of the Northern Hemisphere jet stream is expected to weaken with climate change.

Many online critics mistakenly think the same conclusion applies at flight cruising altitudes. In fact, the opposite is true. In the upper atmosphere, water vapour feedbacks are causing the tropical regions to warm more quickly than the polar regions. Therefore, the upper jet stream is expected to become more strongly sheared with climate change, increasing the fluid-dynamical instabilities that generate turbulence.

Our three peer-reviewed studies represent the cutting-edge scientific knowledge regarding how turbulence in the atmosphere is changing, and the impacts those changes could have on aviation. As scientists, that is all we can do.

Biomass burning in South America: Impacts on the regional climate

By Gillian Thornhill

Deforestation in South America has many environmental impacts, including loss of habitat, soil erosion, changes to the water cycle and a reduced capacity of the CO2 sink that the vegetation provides. Where the vegetation is burned, an additional climate impact comes from the release of smoke aerosols into the atmosphere, which affect the regional climate through changes in the radiation reaching the surface and changes in cloud cover resulting from atmospheric heating by the aerosol.  The wind circulation, surface temperatures and precipitation in the region are also affected by increases in aerosol from biomass burning.

In order to investigate the impact of biomass burning aerosols (BBA) on the regional climate, we compared two simulations of the global atmosphere using the Met Office Unified Model HadGEM3. This work was undertaken as part of the South American Biomass Burning Analysis (SAMBBA) project, which included aircraft observations of biomass burning aerosols to provide constraints on the aerosol properties in the model. We used two realistic levels of biomass burning emissions, one case taken from a high emissions year and one from a low emissions year, and ran the model for 30 years to average out inter-annual variability. The model output from the two cases was compared for September (the month with highest smoke emissions), by taking the September means over the 30 year run.

Figure 1 shows the September mean difference in the biomass burning aerosol optical depth between the high emissions case and the low emissions case, the largest differences being over the areas with largest smoke emissions, as we might expect. The aerosol can affect cloud cover by increasing cloud burn-off as the aerosol absorbs radiation and heats up the atmosphere around it, reducing the cloud fraction at the altitude of the aerosol layer (referred to as the semi-direct effect). In Figure 2 we see the decrease in cloud cover over the area of the main biomass burning aerosol, extending up to the north-east and slightly beyond the main area of biomass burning. There are also changes in the boundary layer height and an increase in the boundary layer stability due to the increased amount of aerosol, which can affect the formation of higher convective clouds; we think this mechanism is responsible for reducing higher level clouds in this area. The absorption of downwelling shortwave radiation by the BBA (Figure 3) results in a reduction at the surface, which lowers the surface temperature slightly (Figure 4); this effect competes with the reduction in cloud cover, which tends to increase shortwave radiation at the surface. In areas with the highest biomass burning, the reduction in the downwelling shortwave from absorption by the aerosol is the stronger process. Finally there is a drying effect in the region, with a reduction in the precipitation occurring in the high emissions case (Figure 5).
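The kind of comparison behind these figures, a 30-year September-mean difference with significance stippling, can be sketched as follows. The fields are synthetic and the per-gridpoint test is an approximate two-sample t-test (the study's actual statistical method may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
nyears, nlat, nlon = 30, 20, 30
# Synthetic September-mean fields for the low- and high-emission runs
low = rng.normal(0.0, 1.0, size=(nyears, nlat, nlon))
high = rng.normal(0.0, 1.0, size=(nyears, nlat, nlon))
high[:, 5:10, 5:15] += 1.5          # an imposed "real" signal in one region

diff = high.mean(axis=0) - low.mean(axis=0)     # H-L composite difference
# Welch-type t statistic per grid point, across the 30 Septembers
se = np.sqrt(high.var(axis=0, ddof=1) / nyears + low.var(axis=0, ddof=1) / nyears)
t = diff / se
stipple = np.abs(t) > 2.0           # ~two-sided 95% critical value for ~58 dof
print(diff.shape, stipple.mean())
```

Stippled points are where the 30-year samples separate clearly; averaging over 30 years is what suppresses the inter-annual variability mentioned above so that the emissions signal stands out.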

Figure 1 Difference in the Aerosol Optical Depth (AOD) at 0.44 microns for September between the high emissions case and the low emissions case (H-L). Stippling represents the 95% confidence level.

As the amount of biomass burning varies from year to year, investigating the impact of high emissions versus low emissions gives us an insight into how the level of biomass burning may affect the regional climate.

Further details and discussion

Figure 2 Difference in cloud fraction for September between the high and low emissions case (H-L). Stippling represents the 95% confidence level.

Figure 3 Difference in downwelling shortwave radiation at the surface for high-low emissions cases. Stippling represents significance at the 95% confidence level.

Figure 4 Difference in surface temperature in Sep. for high-low emissions cases. Stippling represents significance at the 95% confidence interval.

Figure 5 Difference in precipitation in September for high-low emissions cases. Stippling represents significance at the 95% confidence interval. (Note different contour colour scale)


Bali volcanic eruption: Research to help reduce flight disruption caused by ash clouds

By Helen Dacre and Andrew Prata

The volcanic ash clouds released into the atmosphere by Mount Agung in Indonesia late last year brought back memories of the 2010 eruption of Eyjafjallajökull in Iceland, which caused chaos for holidaymakers in Europe. Airlines operating flights to and from Bali and its neighbouring Indonesian islands were disrupted in late-November last year; research is ongoing to reduce the impact of volcanic eruptions on aviation in the future.

Mount Agung had been showing signs of increased seismic activity since mid-September and throughout October (Figure 1). A new phase began on 21 November, when an eruption produced ash and gas up to 12,000 ft (3,600 m) above sea level. The height of the ash column increased during 25–28 November, reaching as high as 23,000 ft (7,000 m) on 28 November.

Figure 1. Time series of seismic activity for Mount Agung. The y-axis indicates frequency of earthquakes/eruptions per day. Data and graphic courtesy MAGMA Indonesia (https://magma.vsi.esdm.go.id).

On 27 November, ash was advected toward the south-south-west which eventually forced authorities to close Denpasar International Airport, where there had been reports of ash at ground level accumulating on aircraft. Satellite imagery captured glimpses of an ash-rich plume (Figure 2), but it was often obscured by meteorological clouds. Since the eruptions in November, Mount Agung has continued to produce minor puffs of steam and volcanic ash while favourable winds have allowed Denpasar Airport to remain open.

Figure 2. Himawari-8 true colour imagery on 26 November 2017. The true colour imagery was produced following the “hybrid, atmospherically corrected” (HAC) method described by Miller et al. (2016).

Due to the damaging effect of volcanic ash on jet engines – molten ash blocks engine cooling holes, causing engines to overheat and shut down – air travel is restricted in ash-contaminated airspace. A prolonged eruption, such as the 2010 Eyjafjallajökull eruption in Iceland that grounded flights across Europe, would inevitably damage the economy of Bali and the surrounding area through lost tourism and productivity. Indeed, there are already reports of significant impacts on the tourism industry in Bali due to recent activity at Mount Agung.

The 2010 ash crisis exposed the fragility of air travel and raised questions about the resilience and vulnerability of the world’s critical airspace infrastructure. Since 2010, understanding of ash damage to aircraft has developed rapidly. In particular, aircraft engine manufacturers are now in a much better position to advise on the levels of ash that their engines can safely tolerate.

New research will aid decision-making
In a report published in July 2016, Rolls Royce (the UK’s largest engine manufacturer) outlined new engine susceptibility guidelines, which describe engine tolerance limits in terms of a dosage (i.e. accumulated concentration over time). These guidelines are based on the latest field studies carried out on aircraft engines.

At the University of Reading we are working with Rolls Royce, British Airways and the Civil Aviation Authority (CAA) to develop a tool that is able to calculate the ash dosage encountered by an aircraft along its flight path, and its associated uncertainty, for the first time.

The tool demonstrates a method by which airline operators can calculate ash dosage along time-optimal flight routes during volcanic eruptions. It also provides an assessment of the uncertainty in ash concentration forecasts. In order to represent this uncertainty, we have constructed an ensemble: a set of model realisations created by perturbing various uncertain parameters used in the model. We then use “model agreement maps” to represent the percentage of ensemble members that resulted in an ash concentration above a certain peak concentration threshold. The percentages are then discretised into three categories: less likely (0–10%), likely (10–50%) and very likely (50–100%). This approach gives the stakeholder an appreciation for uncertainty in the model and encourages the use of uncertain information in operational decision-making procedures.
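The model agreement maps can be sketched as follows. The ensemble here is synthetic and the member count arbitrary, but the percentage-exceedance calculation and the three likelihood bands follow the description above:

```python
import numpy as np

rng = np.random.default_rng(3)
n_members, nlat, nlon = 18, 40, 60
# Synthetic ensemble of forecast peak ash concentrations (mg m-3)
ensemble = rng.gamma(shape=1.5, scale=2.0, size=(n_members, nlat, nlon))

threshold = 4.0                                             # peak concentration, mg m-3
# Model agreement: % of ensemble members exceeding the threshold at each point
agreement = 100.0 * np.mean(ensemble > threshold, axis=0)

# Discretise into the three likelihood bands used in the text
# (band edges at 10% and 50%; boundary conventions are illustrative)
category = np.digitize(agreement, bins=[10.0, 50.0])        # 0, 1 or 2
labels = np.array(["less likely", "likely", "very likely"])
print(labels[category].shape)
```

Presenting the banded map rather than a single deterministic field is what lets the stakeholder weigh forecast uncertainty directly in their decision-making.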

Figure 3 shows an annotated screenshot of the web-tool for a hypothetical eruption of Katla volcano (Iceland) in January 2017. In this example, a peak concentration of 4 mg m-3 was used to construct the model agreement maps. The tool comprises four components: (1) model agreement maps, (2) flight route information, (3) the duration of engine exposure vs. ash concentration (DEvAC) chart (see Clarkson et al. 2016 for details) and (4) the along-flight ash concentration and dosage.
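Since a dosage is an accumulated concentration over time, the along-flight dosage can be approximated by integrating the forecast ash concentration over flight time. A minimal sketch, with a hypothetical concentration profile and flight (not real forecast output):

```python
import numpy as np

def ash_dosage(concentration_mg_m3, time_s):
    """Dose = time-integrated ash concentration along the flight path
    (units: mg s m-3), here via the trapezoidal rule."""
    c = np.asarray(concentration_mg_m3, dtype=float)
    t = np.asarray(time_s, dtype=float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

# Hypothetical along-flight concentrations sampled every 5 minutes
time_s = np.arange(0.0, 3 * 3600.0 + 1.0, 300.0)     # a 3-hour flight
conc = np.where((time_s >= 3600.0) & (time_s <= 5400.0), 2.0, 0.0)
dose = ash_dosage(conc, time_s)
print(dose)   # 4200 mg s m-3: a 2 mg/m3 plateau for 30 min plus the ramps
```

A dosage computed this way can then be placed on an engine-tolerance chart such as the DEvAC chart to judge whether the route is acceptable.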

Figure 3. Annotated screenshot of the ash dosage web-tool currently under development at the University of Reading.

The new knowledge developed in the project will be used by the CAA to support strategic decision-making, and will enable new regulations to be developed that are based on the latest understanding of volcanic ash risk to aircraft engines, resulting in a more resilient UK airspace infrastructure.

References

Clarkson, R. J., E. J. E. Majewicz, and P. Mack, 2016. A re-evaluation of the 2010 quantitative understanding of the effects volcanic ash has on gas turbine engines. Proc. Inst. Mech. Eng. G J. Aerosp. Eng., 230(12), 2274–2291, doi:10.1177/0954410015623372.

Miller, S. D., T. L. Schmit, C. J. Seaman, D. T. Lindsey, M. M. Gunshor, R. A. Kohrs, Y. Sumida, and D. Hillger, 2016. A sight for sore eyes: The return of true color to geostationary satellites. B. Am. Meteorol. Soc., 97, 1803–1816, doi:10.1175/BAMS-D-15-00154.1.


Chaotic Convection

By Todd Jones

In the traditional global climate model (GCM) configuration, models simulate atmospheric motions explicitly on spatial grids with spacings on the order of 100 km. Motions on finer scales are not directly simulated. Instead, we use parameterizations: mathematically simpler, often partially empirical descriptions of the effects of these motions. To facilitate a harmonious and practical interaction between the resolved and unresolved scales, they are assumed to be distinct and separate, interacting only through time tendencies of temperature and moisture.

Of course, in the real atmosphere, motions occur in a continuum between and beyond these scales, and it’s unlikely that a specific mean state taken over the area of a typical GCM cell would ever yield exactly the same small-scale motions if that same mean were to recur. Instead, the small-scale response would be similar but with some chaotic variability, due to the atmosphere’s sensitive dependence on initial conditions. Unfortunately, a deterministic parameterization won’t provide that variability. Historically, such schemes have had no knowledge of variability on the small scale, of the convection that occurred previously, or of how either should influence their representation of convection [Reference 1].

There are clues that these missing pieces are needed for models to produce correct large-scale phenomena, such as the Madden-Julian Oscillation (MJO) or even the stratospheric Quasi-Biennial Oscillation [Refs 2-3], and correct precipitation statistics, particularly the timing of precipitation and the occurrence of extreme events [Ref 4]. More recently, there have been efforts to address this deficiency by adding various representations of small-scale variability and memory to existing convection schemes [Refs 5-7], and one of the remaining questions is how to represent these ideas correctly in convective parameterizations.

Rather than somewhat arbitrarily perturbing an existing convective parameterization stochastically, some modellers employ the superparameterization (SP) framework [Ref 8; Figure 1]. In SP, the conventional parameterizations are replaced by a cloud-permitting model (CPM): a 32-column, 2-dimensional curtain on a 4 km grid is placed in each GCM column of the Community Atmosphere Model (CAM). Changes in the state of the GCM column force better-resolved motions within the curtain, while the microphysics, radiation, and turbulence are parameterized on the 4 km grid, that is, at much finer time and space scales, which should give more accurate results. The CPM then reports to the GCM how much precipitation was produced and what temperature and moisture changes resulted from the convective-scale motions.

Figure 1. Schematic representation of the interaction between the global model’s resolved and unresolved scales in CAM, SP-CAM, and the new MP-CAM.

SP-CAM provides individual convective realizations, with their sensitive dependence on initial conditions and small-scale structures, as well as convective memory, because the convection within the curtain is initialized only at the start of the full simulation rather than at each GCM time step. We know that SP-CAM provides an improved solution compared to CAM for these reasons. What we would like to find out, though, is whether it is possible to create a more deterministic parameterization (like that of CAM) that retains the benefits of SP-CAM. To aid in understanding some aspects of this issue, a model was developed for my PhD research at Colorado State University. Shown at the bottom of Figure 1 is the multiple-superparameterization (MP) configuration of CAM. In MP-CAM we employ 10 CPMs running independently. Each CPM is initialized with a different thermal perturbation field to get things moving, and, owing to sensitive dependence on initial conditions, the CPMs will always be doing something different. In the CPM-domain-mean sense, though, they will remain close together, as they each see the same GCM state. Following the CPM computations, their mean column tendencies are averaged in an ensemble sense and passed to the GCM. In this way, the convective effects are more like the “expected mean” that a deterministic parameterization tries to produce, while the benefits of simulation at finer scales are retained.
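The ensemble-averaging step at the heart of MP-CAM can be sketched as follows, with a stand-in function replacing the real CPM (everything here, including the toy tendency formula, is illustrative):

```python
import numpy as np

n_cpm, n_levels = 10, 30

def cpm_tendency(gcm_state, perturbation_seed):
    """Stand-in for one cloud-permitting model: returns a column heating
    tendency that depends on the GCM state plus chaotic, seed-dependent noise."""
    noise = np.random.default_rng(perturbation_seed).standard_normal(n_levels)
    return 0.1 * gcm_state + 0.02 * noise     # illustrative only

gcm_state = np.linspace(1.0, 0.0, n_levels)   # a toy temperature profile
# Each CPM starts from a different thermal perturbation; their mean column
# tendencies are then averaged in an ensemble sense before the GCM sees them.
tendencies = np.array([cpm_tendency(gcm_state, s) for s in range(n_cpm)])
ensemble_mean_tendency = tendencies.mean(axis=0)   # the "expected mean" forcing
spread = tendencies.std(axis=0)                    # convective unpredictability
```

Averaging smooths the chaotic member-to-member differences while keeping the finer-scale physics, and the spread across members is itself useful, as the next section describes.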

Comparing multi-decadal climate simulations in these frameworks, a number of interesting results emerge [Ref 9]. The models produce slightly different climate features, but of interest to many is the representation of intraseasonal variability (Figure 2). In these wavenumber-frequency power spectra of outgoing longwave radiation (OLR), we see that the MP simulation, with its smoother and more deterministic representation of the small scale, shows only slight degradation in the MJO signal (the power peak near eastward wavenumber 1, at periods longer than 30 days). By this estimate, losing the stochastic nature of the SP tendencies has a negative impact on the result, though one may also reasonably conclude that the bulk of the improvement over the standard CAM is retained, as a function of better-resolved motions with convective memory.

Figure 2. Ratios of symmetric spectral power to a smoothed background power for OLR for NOAA observations, CAM, SP-CAM (Control), and MP-CAM (Ensemble). Dispersion curves of the linear shallow water equations are shown in solid black for equivalent depths of 12, 25, and 50 metres. Wave types are Equatorial Rossby (ER), inertio-gravity (IG), and Kelvin.

The nature of the MP approach also allows for study of the range of potential solutions under the same large-scale state. For instance, each CPM produces a different value for grid-cell precipitation, and analysis of that spread can provide insight into the geographic locations and large-scale atmospheric structures that are associated with unpredictable convective precipitation (Figure 3). I encourage those interested in seeing how difficult-to-predict precipitation relates to measures of CAPE, atmospheric stability, and critical column water vapour to check out my dissertation [Ref 9] and to keep an eye out for two papers currently in preparation for submission to J. Adv. Model. Earth Syst. (JAMES).

Figure 3. Average values across 5 Aprils of CPM-ensemble mean (left) and standard deviation (right). The MP-CAM framework allows for identification of regions of difficult-to-predict precipitation.

References

[1] Jones, T. R., and D. A. Randall, 2011: Quantifying the limits of convective parameterizations. J. Geophys. Res. Atmos., 116 (D8), doi:10.1029/2010JD014913.

[2] Ricciardulli, L., and R. R. Garcia, 2000: The excitation of equatorial waves by deep convection in the NCAR Community Climate Model (CCM3). J. Atmos. Sci., 57 (21), 3461–3487, doi: 10.1175/1520-0469(2000)057⟨3461:TEOEWB⟩2.0.CO;2.

[3] Neelin, J. D., O. Peters, J. W. B. Lin, K. Hales, and C. E. Holloway, 2008: Rethinking convective quasi-equilibrium: Observational constraints for stochastic convective schemes in climate models. Philos. Trans. R. Soc. A, 366 (1875), 2581–2604, doi:10.1098/rsta.2008.0056.

[4] Li, F., D. Rosa, W. D. Collins, and M. F. Wehner, 2012: “Super-parameterization”: A better way to simulate regional extreme precipitation? J. Adv. Model. Earth Syst., 4 (2), doi:10.1029/ 2011MS000106.

[5] Buizza, R., M. Miller, and T. N. Palmer, 1999: Stochastic representation of model uncertainties in the ecmwf ensemble prediction system. Q.J.R. Meteorol. Soc., 125 (560), 2887–2908, doi: 10.1002/qj.49712556006.

[6] Plant, R. S., and G. C. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65 (1), 87–105, doi:10.1175/2007JAS2263.1.

[7] Berner, J., U. Achatz, L. Batté, L. Bengtsson, A. d. Cámara, H. M. Christensen, M. Colangeli, D. R. Coleman, D. Crommelin, S. I. Dolaptchiev, C. L. Franzke, P. Friederichs, P. Imkeller, H. Järvinen, S. Juricke, V. Kitsios, F. Lott, V. Lucarini, S. Mahajan, T. N. Palmer, C. Penland, M. Sakradzija, J. von Storch, A. Weisheimer, M. Weniger, P. D. Williams, and J. Yano, 2017: Stochastic Parameterization: Toward a New View of Weather and Climate Models. Bull. Amer. Meteor. Soc., 98, 565–588, doi:10.1175/BAMS-D-15-00268.1.

[8] Khairoutdinov, M., D. Randall, and C. DeMott, 2005: Simulations of the atmospheric general circulation using a cloud-resolving model as a superparameterization of physical processes. J. Atmos. Sci., 62 (7), 2136–2154, doi:10.1175/JAS3453.1.

[9] Jones, T. R. (2017), Examining chaotic convection with super-parameterization ensembles, PhD Dissertation, Colorado State University, Fort Collins, CO.

Impacts of climate variability and change on the energy sector: A case study for winter 2009/10

By Emma Suckling

Secure and reliable energy supplies are an essential part of modern economic life. But the national and global infrastructures that deliver energy are changing rapidly in the face of new and unprecedented challenges, including the need to meet ever-increasing global demand for energy services, whilst reducing CO2 emissions caused by burning fossil fuels. Responding to these challenges will likely involve the development of new technologies, as well as increased deployment of weather-dependent renewables, such as wind and solar power, in the energy mix. This new energy landscape exposes stakeholders in the energy sector to a greater risk from weather and climate than ever before. Better understanding the impacts of climate variability on energy supply and demand therefore has the potential to aid policy and decision makers in evaluating risks.

The European Climatic Energy Mixes (ECEM) project is a Copernicus Climate Change Service (C3S) whose aim is to enable the energy industry and policy makers to assess the impact of climate variability and change on energy supply and demand over Europe. A proof-of-concept service, or Demonstrator, is being developed, including datasets that bring together climate and energy data, produced in a consistent way and covering a range of time scales and countries in Europe. The ability of the tool to provide insight into past events, anticipate future risks and ask ‘what if’ questions is illustrated here in the context of the unusually cold winter of 2009/10.

Record power demand in winter 2009/10
Many countries across Europe experienced unusually high levels of gas and electricity demand due to cold weather conditions during winter 2009/10 (December 2009 to February 2010). In the UK and France, high levels of day-to-day weather-sensitive electricity demand are seen in the ECEM dataset (Figure 1), with demand exceeding normal levels by 10-20% for several days over the winter.

Figure 1: Daily fluctuations in modelled weather-sensitive electricity demand. Normalised anomalies expressed as difference in percentage from the long-term average (1979-2016).
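The ‘normalised anomalies’ in Figure 1 are simply percentage departures of demand from its long-term average. A minimal sketch of the calculation (the values below are invented for illustration, not ECEM data):

```python
import numpy as np

def percent_anomaly(demand, climatology):
    """Express demand as a percentage departure from the
    long-term average (climatology) for the same calendar day."""
    return 100.0 * (demand - climatology) / climatology

# Toy example: demand of 55 GW against a long-term average of 50 GW
# is a +10% anomaly, comparable to the winter 2009/10 peaks.
print(percent_anomaly(np.array([55.0, 45.0]), np.array([50.0, 50.0])))
```

Applied day by day over 1979-2016, this yields a series like the one plotted in Figure 1.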

A cold winter
Winter 2009/10 made headlines for being unusually cold across much of northern Europe and saw some of the lowest temperatures of the last 40 years in the UK. There was a strong contrast between conditions in northern and southern Europe (as seen in Figure 2 for winter mean temperature differences in 2009/10), consistent with a southward-displaced jet stream and a prolonged negative phase of the North Atlantic Oscillation [1].

Figure 2: Temperature anomaly (degC – differences from 1981-2010 mean) across Europe for winter 2009/10.

What if winter 2010 happened today?
The ECEM historical dataset also provides estimates of renewable energy supply based on today’s energy mix and the historical climate drivers. This allows us to investigate the potential impacts of past climatic events if they happened today. A winter like 2009/10, which saw persistent cold and still conditions in the UK, would have a larger impact on the energy sector today because of the increased share of renewables in the energy mix. For example, in 2010 renewables accounted for around 3% of UK energy consumption, rising to around 8% in 2015 (with wind power accounting for 4%) [2]. The low wind conditions in a repeat of winter 2009/10 would result in a substantial reduction in wind power production over the season (Figure 3), which could increase risks to electricity supply availability when combined with the increased demand due to low temperatures.

Figure 3: Estimated winter mean wind power production based on the historical wind speeds and today’s wind power generation capacity for the UK.

Anticipating cold, still winters and their impacts in future
Whilst winter 2009/10 was unusually cold compared with recent winters (i.e. the last 40 years), it was warmer than winter 1962/63, despite exhibiting very similar atmospheric conditions (a prolonged negative phase of the North Atlantic Oscillation, NAO-). It has been suggested that winter 2009/10 might have been even colder had the overall global warming trend observed in the 20th century not occurred [3]. Climate projections of winter temperatures over Europe generally show a warming trend out to the end of the century, suggesting that cold winters such as 2009/10 may become less likely in future (Figure 4). This has implications for winter demand, which has a negative relationship with temperature over most of northern Europe. Projections of wind speed, by contrast, typically show no clear trend. However, with the increase in installed wind power generation over Europe, it is likely that any future power system will be more sensitive to weather-dependent renewables generation than to temperature-driven demand. Gaining a better understanding of the impacts of climate variability and change on the energy sector is therefore an essential area of research [4-7].
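The negative relationship between winter demand and temperature is often captured with heating degree days: demand rises roughly linearly as the daily-mean temperature falls below a comfort threshold. A minimal sketch of such a model; the threshold and coefficients here are illustrative assumptions, not the values used in ECEM:

```python
def heating_degree_days(t_mean, base=15.5):
    """Degree days for one day: how far the daily-mean temperature
    (degC) falls below the heating threshold (zero if above it)."""
    return max(base - t_mean, 0.0)

def daily_demand(t_mean, baseline=40.0, sensitivity=0.8):
    """Demand (GW) = weather-independent baseline + linear HDD response."""
    return baseline + sensitivity * heating_degree_days(t_mean)

# A cold snap (0 degC) versus a mild day (12 degC):
print(daily_demand(0.0))   # colder day -> higher demand
print(daily_demand(12.0))
```

Fitting the baseline and sensitivity per country to observed demand is what makes a ‘weather-sensitive demand’ series like Figure 1 possible.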

Figure 4: Projections of winter mean temperature from the RCP4.5 climate scenario over the UK. The shaded region shows the smoothed upper and lower bounds from an ensemble of models, the red lines indicate the 1981-2010 winter mean temperature (top) and the 2009/10 winter temperature (bottom). The green line illustrates the variability from one model run from the full ensemble.

References
[1] G. Ouzeau, et al., 2011. European cold winter 2009-2010: How unusual in the instrumental record and how reproducible in the ARPEGE-Climate model? Geophysical Research Letters, 38, 11.

[2] The UK’s Energy Supply: security or independence? 26 May 2011

[3] J. Cattiaux, et al., 2010. Winter 2010 in Europe: A cold extreme in a warming climate.  Geophysical Research Letters, 37, L20704

[4] D. Brayshaw, et al., 2012. Wind generation’s contribution to supporting peak electricity demand: meteorological insights. Journal of Risk and Reliability, 266, 44-50

[5] D. Cannon, et al., 2015. Using reanalysis data to quantify extreme wind power generation statistics: A 33 year case study in Great Britain. Renewable Energy, 75, 767-778

[6] D. Drew, et al., 2015. The impact of future offshore wind farms on wind power generation in Great Britain. Resources, 4, 1, 155-171

[7] H. Bloomfield, et al., 2016. Quantifying the increasing sensitivity of power systems to climate variability. Environmental Research Letters, 11, 12

Posted in Climate, Climate change, Climate modelling, Renewable energy, Seasonal forecasting

Exploring the impact of the Atlantic Multidecadal Variability (AMV)

By Dan Hodson

After 140 years of observations, we now know that the temperature of the surface of the Atlantic Ocean varies slowly over time, cooling and warming over periods of decades (Figure 1). These slow variations sit atop the background global warming trend (Figure 1A); the contrast with other regions of the globe can be seen clearly in spatial maps of SST difference (Figure 1B). The term Atlantic Multidecadal Oscillation (AMO) was initially coined to describe these variations around the global mean trend, but more recently the more general term Atlantic Multidecadal Variability (AMV) has been adopted by the community.

Figure 1. A) Black: Atlantic Multidecadal Variability (AMV) index (mean over the black box). Green: mean over the region outside the black box. B) Annual-mean sea surface temperature: (1965–75) minus (1951–61).
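An AMV index like the one in Figure 1A can be sketched as an area-weighted North Atlantic SST mean with a simple estimate of the global trend removed and a running mean applied. A minimal version, assuming annual-mean SST on a regular (year, lat, lon) grid with longitudes from -180 to 180; the exact region, trend removal and filter differ between studies:

```python
import numpy as np

def amv_index(sst, lat, lon, n_smooth=11):
    """Sketch of an AMV index from annual-mean SST, shape (year, lat, lon).

    North Atlantic mean (0-60N, 80W-0) minus the global mean (a crude
    proxy for removing the global warming trend), then smoothed with
    an n_smooth-year running mean.
    """
    # Area weights: proportional to cos(latitude), broadcast to 2-D
    w = np.cos(np.radians(lat))[:, None] * np.ones(lon.size)

    def region_mean(mask):
        return (sst * w * mask).sum(axis=(1, 2)) / (w * mask).sum()

    natl = ((lat >= 0) & (lat <= 60))[:, None] & ((lon >= -80) & (lon <= 0))[None, :]
    glob = np.ones_like(natl, dtype=bool)
    raw = region_mean(natl) - region_mean(glob)
    kernel = np.ones(n_smooth) / n_smooth
    return np.convolve(raw - raw.mean(), kernel, mode="same")
```

Operational definitions also detrend more carefully and use observational SST products such as HadISST, but the structure of the calculation is the same.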

The origin of the AMV, and the mechanisms by which it arises, are still a matter of debate. It is ultimately impossible to deduce the origins from observations alone (although we can hazard some educated guesses), so we have to turn to model studies. Some argue that the AMV arises from internal ocean variability, involving variations in the heat transported by ocean dynamical processes such as the Atlantic Meridional Overturning Circulation (perhaps responding to stochastic forcing from the atmosphere), and many coupled climate models do display AMV that arises in this way (e.g. Menary et al., 2015). Others argue that models show that the historical AMV arose from changes in external forcings (Booth et al., 2012), or question the role of ocean dynamics altogether (Clement et al., 2015).

Jon Robson has recently written about ongoing efforts to predict the evolution of the AMV by using ocean observations to initialize ocean models; these studies suggest that an ocean origin for the AMV is the more likely. Whatever the origin of the AMV, and independent of our ability to predict it, we can still ask: what are the climatic impacts of the AMV? Again, we have to turn to models to start to answer this question. Multiple attempts have been made over the past two decades to examine the possible impacts of the AMV on climate. Ten years ago we examined the idealized impact of a fixed AMV pattern on climate in an atmosphere-only model (Sutton and Hodson, 2005; 2007). We discovered significant, and potentially important, impacts on surface temperatures, rainfall and atmospheric circulation (Figure 2); notably, these were consistent with the observational record in a number of regions.

Figure 2. (A to C) Observed JJA differences (warm minus cold AMV periods): (A) sea-level pressure, (B) land precipitation (mm/day), (C) land surface air temperature (°C). (F to H) As in (A to C), but the model response to the AMV (warm minus cold). (D and E) AMV warm minus cold composites from a model run with historical SSTs.

Motivated by this, during the DYNAMITE project we repeated these experiments in a range of other atmosphere-only models. We found broadly similar responses, but also a number of key uncertainties, e.g. in the magnitude of the impact on rainfall.

Experiments such as these are the first step in elucidating the climatic impact of the AMV. However, since these experiments used atmosphere-only models with fixed sea surface temperatures (SSTs), it wasn’t possible to investigate dynamical feedbacks, for example how the atmospheric response to the AMV in turn affects the ocean; such feedbacks may ultimately modify the final atmospheric response. Modelling studies to date suggest that such feedbacks could be significant.

In order to address this, a new international multi-model experiment is underway to resolve these questions. It will run as part of the Decadal Climate Prediction Project (DCPP), a component of the sixth Coupled Model Intercomparison Project (CMIP6). The experiments within DCPP will examine the impact of the AMV in coupled climate models. Each of the models in the experiment ensemble will allow SSTs to evolve with the underlying ocean model, but will periodically nudge those in the Atlantic towards a warm AMV pattern. The idea is to drive the models with a warm AMV without restricting the ocean coupling or responses. Reading is taking part in this international effort using MetUM-GOML2, a coupled mixed-layer ocean model developed by Nick Klingaman and Linda Hirons here in Reading. First results are just beginning to arrive, and it looks like we may have some interesting differences from the old AGCM results; most notably, the AMV appears to have a significant impact on oceans across the globe, including the Pacific. If these results are borne out in other models, it may point to a greater role for the Atlantic in modulating global climate than has hitherto been expected. Watch this space!
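The periodic nudging described above amounts to relaxing the model SST toward the target AMV pattern on some restoring timescale, while leaving the ocean free elsewhere. A minimal sketch of one relaxation step; the timescale and masking here are illustrative assumptions, not the DCPP protocol values:

```python
import numpy as np

def nudge_sst(sst, target, tau_days=30.0, dt_days=1.0, mask=None):
    """One nudging step: relax SST toward a target pattern.

    Implements dT/dt = -(T - T_target) / tau, applied only where
    mask is True (e.g. the Atlantic), so SSTs evolve freely with
    the ocean model everywhere else. tau_days is a hypothetical
    restoring timescale.
    """
    if mask is None:
        mask = np.ones_like(sst, dtype=bool)
    increment = -(sst - target) * dt_days / tau_days
    return sst + np.where(mask, increment, 0.0)
```

Called once per coupling step, this pulls the masked region a fraction dt/tau of the way toward the target each step, while the response outside the mask (e.g. in the Pacific) is entirely the model’s own.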

References

Hodson, D. L. R., J. I. Robson, and R. T. Sutton, 2014: An anatomy of the cooling of the North Atlantic Ocean in the 1960s and 1970s. Journal of Climate, 27 (21), 8229-8243.

Sutton, R. T., and D. L. R. Hodson, 2005: Atlantic Ocean forcing of North American and European summer climate. Science, 309 (5731), 115-118.

Sutton, R., and D. Hodson, 2007: Climate response to basin-scale warming and cooling of the North Atlantic Ocean. Journal of Climate, 20 (5), 891-907.

Hodson, D. L. R., R. T. Sutton, C. Cassou, N. Keenlyside, Y. Okumura, and T. Zhou, 2010: Climate impacts of recent multidecadal changes in Atlantic Ocean Sea Surface Temperature: a multimodel comparison. Climate Dynamics, 34 (7-8), 1041-1058.

Booth, B. B. B., N. J. Dunstone, P. R. Halloran, T. Andrews, and N. Bellouin, 2012: Aerosols implicated as a prime driver of twentieth-century North Atlantic climate variability. Nature, 484, 228-232.

Clement, A., K. Bellomo, L. N. Murphy, M. A. Cane, T. Mauritsen, G. Rädel, and B. Stevens, 2015: The Atlantic Multidecadal Oscillation without a role for ocean circulation. Science, 350 (6258), 320-324, doi:10.1126/science.aab3980.

Robson, J., I. Polo, D. L. R. Hodson, et al., 2017: Decadal prediction of the North Atlantic subpolar gyre in the HiGEM high-resolution climate model. Climate Dynamics.

Menary, M. B., D. L. R. Hodson, J. I. Robson, R. T. Sutton, and R. A. Wood, 2015: A mechanism of internal decadal Atlantic Ocean variability in a high-resolution coupled climate model. Journal of Climate, 28 (19), 7764-7785.

Posted in Climate, Climate change, Climate modelling, Numerical modelling, Oceans

Domestic implications of climate science

By Jonathan Gregory

I’m a climate scientist. I’ve been working in climate change research since 1990. During those years scientific information has become ever more detailed and convincing regarding the magnitude of climate change in both the past and the future due to human activities, principally the emission of carbon dioxide from fossil fuel combustion. I was an author of the most recent assessment (published in 2013) of the Intergovernmental Panel on Climate Change, which concluded, “Continued emissions of greenhouse gases will cause further warming and changes in all components of the climate system. Limiting climate change will require substantial and sustained reductions of greenhouse gas emissions.”

Although news reports about the IPCC may contain remarks such as, “In their latest international report, climate scientists warn that the world must cut greenhouse gas emissions to avoid disaster,” actually the role of the IPCC is only to provide policy-relevant information based on its assessment of published scientific work; it does not propose or advocate any policy. That is the job of policy-makers. One of the responses of the UK government is the Climate Change Act, which established a target for the UK to reduce its emissions by at least 80% from 1990 levels by 2050. This target is an appropriate UK contribution to global emission reductions consistent with limiting global temperature rise to as little as possible above 2°C.

UK carbon dioxide emissions come from many activities which use energy. Houses consume about 30% of the total. This arises mainly from burning gas in our boilers, for central heating and hot water, and partly from electricity use. The UK housing stock is old relative to that of most European countries, with many houses dating from Victorian times.

I’m a home-owner as well as a climate scientist, and this conjunction leads me to the conclusion that I ought to reduce my domestic carbon dioxide emissions. My house was built in 1873. It’s semi-detached, and has solid brick walls with no cavity. Over the last several years, I have been improving its energy efficiency. The energy “import” (from the gas and electricity mains) has fallen from about 30,000 kWh per year when I first moved in, to about 11,000 kWh per year recently (Figure 1).

Figure 1. Energy import per annum

This has been achieved through a variety of alterations, including insulation of roofs and walls, double and triple glazing, a condensing boiler, a wood-burning stove, and more energy-efficient electrical appliances. I am impressed by the performance of my new A+++ fridge/freezer, which has a greater capacity than my two old ones combined, but uses only 13% of the energy. That is significant, because the fridges were the largest consumer of electricity in the house. I also have solar panels installed on the roof (Figure 2). These are of two kinds. The photovoltaic (PV) panels take up more space, and generate more than half of our annual electricity consumption. The solar thermal panel heats all the hot water we need for showers and washing-up during the summer months. The solar thermal panel is better at collecting usable energy per unit area, because the efficiency of conversion of sunlight into electricity by PV panels is quite low.

Figure 2. Roof-mounted solar panels

The most dramatic and unusual undertaking to date has been to build a new wall on the outside of the gable-end wall of the house (Figure 3). The purpose of this was to create an insulated cavity. I decided to have it magnificently well-insulated, since the insulation itself is cheap compared with the cost of the work. I’m very pleased with it. Most people don’t notice it in daylight, because the bricks are a good match in colour, but you can see its effect in a thermal image (Figure 4) comparing my house with my neighbours’ on a cold day last winter. The side-wall of my house (on the left) is much colder than theirs, because less heat is leaking through it. The new wall has reduced the gas consumption by more than a third (the downward step after 2010 in Figure 1).

Figure 3. Adding a new cavity wall to the existing house

Figure 4. Infrared image of Jonathan’s house side wall and his neighbour’s house side wall, showing the much lower heat loss (lower surface temperature) from the insulated wall (left)

The thermal conductivity of the insulating material (solid polyisocyanurate foam, or PIR) is 0.023 W m-1 °C-1. Hence a layer of thickness 190 mm has a U-value of 0.023/0.19 = 0.12 W m-2 °C-1, so, for instance, if the difference between the temperatures inside and outside the house is 10 degC, the heat flux through the insulator is 1.2 W m-2. I don’t know the thermal conductivity of the old bricks exactly, but it is probably more than ten times greater than for PIR; it is generally assumed that a traditional solid brick wall, two bricks or nine inches thick, has a U-value of about 2 W m-2 °C-1. Thus, with the addition of the insulator, the side-wall of the house conducts between ten and twenty times less heat than before.
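The arithmetic above can be packaged as a small calculation, treating the brick and PIR layers as thermal resistances in series (resistances R = 1/U add, like electrical resistors); the brick U-value of 2 W m-2 °C-1 is the assumed figure from the text:

```python
def u_value(conductivity_w_per_m_c, thickness_m):
    """U-value (W m-2 C-1) of a single homogeneous layer."""
    return conductivity_w_per_m_c / thickness_m

def combined_u(*u_values):
    """U-value of layers in series: thermal resistances (1/U) add."""
    return 1.0 / sum(1.0 / u for u in u_values)

u_pir = u_value(0.023, 0.19)         # 190 mm of PIR
u_brick = 2.0                        # assumed nine-inch solid brick wall
u_wall = combined_u(u_brick, u_pir)  # brick + insulated cavity

print(round(u_pir, 2))               # 0.12
print(round(u_pir * 10, 1))          # 1.2  (W m-2 for a 10 degC difference)
print(round(u_brick / u_wall, 1))    # 17.5 (times less heat than bare brick)
```

The ratio of about 17.5 is consistent with the ‘between ten and twenty times less heat’ estimate, and shows why the insulation dominates: the brick’s resistance is almost negligible next to the PIR’s.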

SuperHomes are old houses which have been refurbished by their present owners to reduce their carbon dioxide emissions by at least 60%. The SuperHomes scheme has a register of over 200 such properties. The aim of the scheme is to provide information about refurbishment for energy efficiency, through holding open days in SuperHomes each September. In August 2012, my house qualified as a SuperHome, but there’s still plenty more to be done!

Posted in Climate, Greenhouse gases