What’s in a number?

By Nancy Nichols

Should you care about the numerical accuracy of your computer? After all, most machines now carry about 16 digits of precision, while most applications need only 3 or 4 significant figures, so what's the worry? The worry is that there have been a number of spectacular disasters caused by numerical rounding error. One of the best known is the failure of a Patriot missile to track and intercept an Iraqi Scud missile in Dhahran, Saudi Arabia, on February 25, 1991, resulting in the deaths of 28 American soldiers.

The failure was ultimately attributable to poor handling of rounding errors. The computer doing the tracking calculations had an internal clock whose values were truncated when converted to floating-point arithmetic, with a relative error of about 2⁻²⁰. The clock had been running for 100 hours, so the calculated elapsed time was too long by 2⁻²⁰ × 100 hours ≈ 0.3433 seconds, during which time a Scud would be expected to travel more than half a kilometre.

 

(See The Patriot Missile Failure)
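The arithmetic is easy to check in a few lines of Python (the Scud speed of roughly 1676 m/s used below is the figure usually quoted for this incident, not something computed here):

```python
# Rough check of the Patriot clock-drift arithmetic (illustrative only).
elapsed_hours = 100
relative_error = 2.0 ** -20          # truncation error quoted above
drift_seconds = elapsed_hours * 3600 * relative_error
scud_speed = 1676.0                  # m/s, approximate figure usually quoted
print(f"clock drift  : {drift_seconds:.4f} s")                 # about 0.3433 s
print(f"miss distance: {drift_seconds * scud_speed:.0f} m")    # over half a kilometre
```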

The same problem arises in other algorithms that accumulate and magnify the small round-off errors due to the finite (inexact) representation of numbers in the computer. Algorithms of this kind are referred to as ‘unstable’ methods. Many numerical schemes for solving differential equations have been shown to magnify small numerical errors. It is known, for example, that L.F. Richardson’s original attempts at numerical weather forecasting were essentially scuppered by the unstable methods used to compute the atmospheric flow. Much time and effort have since been invested in developing and carefully coding methods for solving algebraic and differential equations so as to guarantee stability, and excellent software is publicly available. Academics and operational weather forecasting centres in the UK have been at the forefront of this research.

Even with stable algorithms, however, it may not be possible to compute an accurate solution to a given problem.   The reason is that the solution may be sensitive to small errors  –  that is, a small error in the data describing the problem causes large changes in the solution.  Such problems are called ‘ill-conditioned’.   Even entering the data of a problem into a computer  –  for example, the initial conditions for a differential equation or the matrix elements of an eigenvalue problem  –   must introduce small numerical errors in the data.  If the problem is ill-conditioned, these then lead to large changes in the computed solution, which no method can prevent.

So how do you know if your problem is sensitive to small perturbations in the data? Careful analysis can reveal the issue, but for some classes of problem there are measures of the sensitivity, or ‘conditioning’, of the problem that can be used. For example, it can be shown that small perturbations in a matrix can lead to large relative changes in its inverse if the ‘condition number’ of the matrix is large. The condition number is the product of the norm of the matrix and the norm of its inverse. Similarly, small changes in the elements of a matrix will cause large errors in its eigenvalues if the condition number of the matrix of eigenvectors is large. Of course, determining a condition number exactly is itself a computational problem, but accurate methods for estimating condition numbers are available.
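As a concrete (and deliberately artificial) illustration of these two statements, the short Python fragment below computes the condition number of a nearly singular 2 × 2 matrix directly from the definition and shows how a tiny perturbation of the matrix is amplified in its inverse:

```python
import numpy as np

# Condition number from its definition, ||A|| * ||A^-1|| (2-norm here),
# compared with numpy's built-in routine.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])                    # nearly singular
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(kappa, np.linalg.cond(A, 2))               # both about 4e4

# A perturbation of relative size ~5e-6 in A changes A^-1 by roughly 9%:
# an amplification of the order of the condition number.
dA = np.array([[0.0, 0.0],
               [0.0, 1.0e-5]])
rel_change = (np.linalg.norm(np.linalg.inv(A + dA) - np.linalg.inv(A), 2)
              / np.linalg.norm(np.linalg.inv(A), 2))
print(rel_change)
```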

An example of an ill-conditioned matrix is the covariance matrix associated with a Gaussian distribution. The following figure shows the condition number of a covariance matrix obtained by sampling a Gaussian correlation function at 500 points, using a step size of 0.1, for varying length-scales [1]. The condition number rises rapidly to 10⁷ for length-scales of only L = 0.2 and, for length-scales larger than 0.28, the condition number exceeds what can be represented at machine precision and cannot even be calculated accurately.
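This behaviour is easy to reproduce. The sketch below assumes the commonly used squared-exponential form exp(-r²/(2L²)) for the Gaussian correlation function; the precise convention used in [1] may differ slightly, so treat the numbers as indicative only:

```python
import numpy as np

def gaussian_corr_matrix(n=500, dx=0.1, L=0.2):
    """Correlation matrix C_ij = exp(-(x_i - x_j)^2 / (2 L^2)) on a regular grid."""
    x = np.arange(n) * dx
    r = x[:, None] - x[None, :]
    return np.exp(-r**2 / (2.0 * L**2))

for L in [0.05, 0.1, 0.2, 0.3]:
    C = gaussian_corr_matrix(L=L)
    print(f"L = {L:4.2f}  condition number = {np.linalg.cond(C):.2e}")

# The condition number grows explosively with the length-scale; once it
# approaches 1/eps for double precision the printed value is no longer reliable.
```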

This result is surprising and very significant for numerical weather prediction (NWP), as the inverses of covariance matrices are used to weight the uncertainty in the model forecast and in the observations used in the analysis phase of weather prediction. The analysis is achieved by the process of data assimilation, which combines a forecast from a computational model of the atmosphere with physical observations obtained from in situ and remote-sensing instruments. If the weighting matrices are ill-conditioned, then the assimilation problem also becomes ill-conditioned, making it difficult to obtain an accurate analysis and subsequently a good forecast [2]. Furthermore, the worse the conditioning of the assimilation problem, the longer the analysis takes to compute. This matters because the forecast must be produced in ‘real time’, so the analysis needs to be done as quickly as possible.

One way to deal with an ill-conditioned system is to rearrange the problem so as to reduce the conditioning whilst retaining the same solution. A technique for achieving this is to ‘precondition’ the problem using a transformation of the variables. This is used regularly in operational NWP centres, with the aim of ensuring that the uncertainties in the transformed variables all have a variance of one [1][2]. A table in [1] shows the effect of the length-scale of the error correlations in a data assimilation system on the number of iterations needed to solve the problem, with and without preconditioning. The conditioning of the problem is improved and the work needed to solve it is significantly reduced. So checking and controlling the conditioning of a computational problem is always important!
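To make the idea concrete, here is a minimal toy sketch of the standard control-variable transform used in variational assimilation (broadly the setting analysed in [2], but with invented numbers and in no way the operational Met Office code): the Hessian inv(B) + H' inv(R) H of the variational problem is replaced by I + B^(1/2) H' inv(R) H B^(1/2), whose background term is the identity and which is typically far better conditioned.

```python
import numpy as np

# Toy incremental-Var problem: n state points, p direct observations.
n, p, dx, L = 100, 25, 0.1, 0.2
x = np.arange(n) * dx
B = np.exp(-(x[:, None] - x[None, :])**2 / (2 * L**2))   # background error covariance
R_inv = np.eye(p) / 0.1                                   # observation errors, variance 0.1
H = np.zeros((p, n))
H[np.arange(p), np.linspace(0, n - 1, p).astype(int)] = 1.0   # observe 25 grid points

w, V = np.linalg.eigh(B)
B_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T    # symmetric square root of B

S     = np.linalg.inv(B) + H.T @ R_inv @ H                    # unpreconditioned Hessian
S_pre = np.eye(n) + B_half @ H.T @ R_inv @ H @ B_half         # preconditioned Hessian
print(f"condition number before preconditioning: {np.linalg.cond(S):.2e}")
print(f"condition number after  preconditioning: {np.linalg.cond(S_pre):.2e}")
# The preconditioned condition number is smaller by many orders of magnitude.
```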

[1] S.A. Haben, 2011: Conditioning and Preconditioning of the Minimisation Problem in Variational Data Assimilation. PhD thesis, Department of Mathematics and Statistics, University of Reading.

[2]  S.A. Haben, A.S. Lawless and N.K. Nichols.  2011. Conditioning of incremental variational data assimilation, with application to the Met Office system, Tellus, 63A, 782–792. (doi:10.1111/j.1600-0870.2011.00527.x)

WMO Symposium on Data Assimilation

by Amos Lawless

In the middle of September, scientists from all round the world converged on a holiday resort in Florianopolis, Brazil, for the Seventh World Meteorological Organization Symposium on Data Assimilation. This symposium takes place roughly every four years and brings together data assimilation scientists from operational weather and ocean forecasting centres, research institutes and universities. With 75 talks and four poster sessions, there was a lot of science to fit into the four and a half days spent there.

 

The first day began with presentations of current plans by various operational centres, and both similarities and differences became apparent. It is clear that many centres are moving towards data assimilation schemes that are a mixture of variational and ensemble methods, but the best way of doing this is far from certain. This was apparent from just the first two talks, in which the Met Office and Meteo-France set out their different strategies for the next few years. For anyone who thought that science always provides clear-cut answers, here was an example of where the jury is still out! Many other talks covered similar topics, including the best way to get information from small samples of ensemble forecasts in large systems.

 

In a short blog post such as this, it is impossible to cover the wide range of topics discussed in the rest of the week, ranging from theoretical aspects of data assimilation to practical implementations. Subjects included challenges for data assimilation at convective scales in the atmosphere, ocean data assimilation, assimilation of new observation types (including winds from radar observations of insects, lightning and radiances from new satellite instruments) and measuring the impact of observations. Several talks proposed the development of new, advanced data assimilation methods: particle filters, Gaussian filtering and a hierarchical Bayes filter were all covered. Of particular interest was a presentation on data assimilation using neural networks, which achieved comparable results to an ensemble Kalman filter at a small fraction of the computational cost. This led to a long discussion at the end of the day as to whether neural networks might be a way forward for data assimilation. The final session on the last day covered a variety of different applications of data assimilation, including assimilation of soil moisture, atmospheric composition measurements and volcanic ash concentration, as well as application to coupled atmosphere-ocean models and to carbon flux inversion.

 

Outside the scientific programme the coffee breaks (with mountains of Brazilian cheese bread provided!) and the social events, such as the caipirinha tasting evening and the conference dinner, as well as the fact of having all meals together, provided ample opportunity for discussion with old acquaintances and new. I came home excited about the wide range of work being done on data assimilation throughout the world and enthusiastic to continue tackling some of the challenges in our research in Reading.

The full programme with abstracts is available at the conference web site, where presentation slides will also eventually be uploaded:

http://www.cptec.inpe.br/das2017/

Can cars provide high quality temperature observations for use in weather forecasting?

By Diego de Pablos

I am an undergraduate student at the University of Reading who has recently finished a UROP (Undergraduate Research Opportunities Programme) placement, funded by the University and carried out in partnership with the Met Office. Since I am currently taking the Environmental Physics course in the Meteorology department, this project interested me for two reasons: first, I plan to do a PhD at Reading and wanted to get a feel for the research experience; and secondly, the topic seemed to have the potential to improve weather forecasting and road safety. The project consisted of taking a first look at temperature observations from the built-in thermometer of a car and comparing them with UKV model surface temperatures and with observations from nearby WOW [1] sites.

Although the use of vehicles in weather forecasting has been studied before [2], in most cases dedicated thermometers were installed on the vehicles to obtain the observations, or other parameters were used instead (e.g. the state of the antilock brakes or windscreen wipers). This project aimed to assess the potential of the native ambient air temperature sensor that most modern cars (less than ten years old) already have, with the goal of making these observations available for predicting the road state in the near future.

Temperature observations registered by a car’s built-in thermometer over a series of days were studied. The observed temperatures were extracted using an OBD dongle connected to the car’s engine management system via the standard OBD port fitted behind the steering wheel. The dongle sends this information to the driver’s phone via Bluetooth. In the phone app, the observations and other parameters available from the dongle are decoded and then sent to a selected URL over a 3G/4G connection. The data are then stored in MetDB, the observations database used by the Met Office in the UK, and made available for forecasting.
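For readers curious about the decoding step, here is a minimal sketch (purely illustrative) assuming the standard OBD-II convention in which ambient air temperature is mode 01, PID 0x46, encoded as a single byte A with temperature A − 40 °C; the actual dongle firmware, phone app and message format used in the trial are not described here:

```python
def decode_ambient_temperature(response_hex: str) -> float:
    """Decode a standard OBD-II response for ambient air temperature.

    Assumes mode 01, PID 0x46, where the reply '41 46 A' encodes the
    temperature as A - 40 degrees Celsius. Illustrative only: the trial's
    dongle and app may use a different format.
    """
    parts = response_hex.split()
    if parts[0] != "41" or parts[1] != "46":
        raise ValueError("not an ambient-air-temperature response")
    return int(parts[2], 16) - 40.0

# Example: a raw reply of '41 46 3C' corresponds to 0x3C - 40 = 20 degrees C.
print(decode_ambient_temperature("41 46 3C"))
```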

 

The trial showed a need for further testing of the thermometers, as it was suggested that the sensor readings could be biased with height and speed. However, the potential availability of data is outstanding by sheer quantity alone: around 20 million cars in the UK could take part in the data collection.

All in all, using car sensors for weather forecasting seems to have potential and will be studied thoroughly in the near future, hopefully tying its advances to those of car technology.

References:

[1] Weather Observations Website – Met Office. https://wow.metoffice.gov.uk/. Accessed: 10th of August 2017.

[2] William P. Mahoney III and James M. O’Sullivan, 2013: Realizing the Potential of Vehicle-Based Observations. Bulletin of the American Meteorological Society, 94 (7), 1007–1018. doi: 10.1175/BAMS-D-12-00044.1.

Wetropolis flood demonstrator

By Onno Bokhove, School of Mathematics, University of Leeds, Leeds.

  1. What is Wetropolis?

The Wetropolis flood demonstrator is a conceptual, live installation showcasing what an extreme rainfall event is and how such an event can lead to extreme flooding of a city; see Fig. 1 below. A Wetropolis day is chosen to be 10 s, and on average about every 5.5 min it rains for 90% of a Wetropolis day, i.e. for 9 s, in two locations: an upstream reservoir and a porous moor in the middle of the catchment. This is extreme rainfall and it causes extreme flooding in the city. It can rain for either 10%, 20%, 40% or 90% of a day, and either nowhere, only in the reservoir, only on the porous moor, or in both locations. Rainfall amount and rainfall location are drawn randomly via two asymmetric (skewed) Galton boards, each with four outcomes; see Fig. 2. Each Wetropolis day, so every 10 s, a steel ball falls down each Galton board and determines the outcome, and we can follow the outcome visually: at the first split there is a 50% chance of the ball going left and a 50% chance of it going right; at the next two splits one route can only go right (100% chance) while the other splits evenly 50%–50% again; subsequent splits are even again. An extreme event occurs with probability 7/256, so about 3% of the time; in 100 wd, or 1000 s, this amounts to about one extreme event every 5.5 min on average. When a steel ball rolls through one of the four channels of a Galton board it optically triggers a switch, and via Arduino electronics each Galton board then steers pump actions of (1, 2, 4, 9) s, causing it to rain in the reservoir and/or on the porous moor.

Fig. 1. Overview of the Wetropolis flood demonstrator with its winding river channel of circa 5.2 m and the slanted flood plains on one side, a reservoir, the porous moor, the (constant) upstream inflow of water, the canal with weirs, the higher city plain, and the outflow into the water tank/bucket with its three pumps. Two of these pumps switch on randomly for (1, 2, 4, 9) s of the 10 s ‘Wetropolis day’ (SI-unit: wd). Photo compilation: Luke Barber.

 

Wetropolis’ construction is based on my mathematical design, with a simplified one-dimensional kinematic model representing the winding river, a one-dimensional nonlinear advection-diffusion equation for the rainfall dynamics in the porous moor, and simple time-dependent box models for the canal sections and the reservoir, all coupled together with weir relations. The resulting numerical calculations were approximate but led to the design by providing estimates of the strength of the pumps (1–2 l in total for the three aquarium pumps), the length and hence the size of the design (with the river water residence time typically being 15–20 s), and the size of the porous moor. The moor visually shows the dynamics of the groundwater level during no or weak rainfall as well as during strong rainfall, and how it can delay the through-flow by circa 2–3 wd (20–30 s) when conditions are dry prior to the rainfall. When the rainfall is strong, e.g. for two consecutive days of extreme Boxing Day rainfall (see the movie in [2]), the moor displays surface-water overflow and thus drains nearly instantly into the river channel.
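To give a flavour of the kind of model behind the design (and only a flavour: the fragment below is a generic, linearised illustration, not the actual Wetropolis equations, which are nonlinear and coupled to the river and canal models), a one-dimensional advection-diffusion equation h_t + u h_x = nu h_xx can be stepped forward with a simple explicit finite-difference scheme:

```python
import numpy as np

def step(h, u, nu, dx, dt):
    """One explicit step of a 1D advection-diffusion equation on a periodic grid.
    Upwind differencing for advection (u > 0), centred differencing for diffusion.
    Generic illustration only, not the Wetropolis design model."""
    hm, hp = np.roll(h, 1), np.roll(h, -1)        # left and right neighbours
    advection = -u * (h - hm) / dx                 # first-order upwind
    diffusion = nu * (hp - 2 * h + hm) / dx**2
    return h + dt * (advection + diffusion)

# Stability of this explicit scheme requires roughly
#   u * dt / dx <= 1   and   2 * nu * dt / dx**2 <= 1.
```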

Fig. 2. Asymmetric Galton board. Every Wetropolis day, i.e. every 10 s, a steel ball is released at the top (mechanism not shown here). The 4 × 4 possible outcomes of the two such boards, registered in each by four electronic eyes (not shown here either), determine the rainfall amount and rainfall location in Wetropolis, respectively. Photo: Wout Zweers.

Wetropolis’ development and design was funded as an outreach project in the Maths Foresees’ EPSRC Living with Environmental Change network [1].

  2. What are its purposes?

Wetropolis was first designed as a flood demonstrator for outreach to the general public. It fits in the back half of a car and can be transported easily. Comments from everyone, including the public, have been positive. Remarks from scientists and flood practitioners, such as people from the Environment Agency, however, made us realise that Wetropolis can also be used and extended to test models and explore concepts in the science of flooding.

 

  3. Where has Wetropolis been showcased hitherto?

The mathematical design and modelling were done and presented in early June 2016 at a seminar for the Imperial College/University of Reading Mathematics of Planet Earth Doctoral Training Centre. Designer Wout Zweers and I started Wetropolis’ construction a week later. One attempt failed (see the June 2016 posts in [2]) because I made an error in using the Manning coefficient in the calculations, necessitating an increase of the channel length to 5 m to obtain sufficient residence time of water in the 1:100 sloped river channel. Progress was made over the summer, with a strong finish in late August 2016, so that we could showcase it at the Maths Foresees General Assembly in Edinburgh [1]. It was subsequently shown at the Armley Mills Museum in Leeds at the public Boxing Day exhibit on December 8th, 2016 and again in March 2017. In late January 2017 I gave a presentation on the science of flooding, including Wetropolis, to 140 flood victims at the Churchtown Flood Action Group Workshop in Churchtown. We have showcased it further at the Be Curious public science festival, University of Leeds; at the Maths Foresees Study Group (see Fig. 3), held at the Turing Gateway to Mathematics, Cambridge; and at a workshop of the River and Canal Trust in Liverpool.


Fig. 3. Wetropolis at the Turing Gateway to Mathematics. Photo TGM. Duncan Livesey and Robert Long (Fluid Dynamics’ CDT, Leeds) are explaining matters.

  4. What are its strengths and weaknesses?

The strength of Wetropolis is that it is a live visualisation of probability (for combined extreme rainfall and flooding events), river hydraulics, groundwater flow, and flow control (the reservoir has valves so that we can store and release water interactively). It is a conceptual model of flooding rather than a literal scale model. This is both a weakness and a strength, because one needs to explain the translation of a 1:200 return-period extreme flooding and rainfall event to one with a return period of about 5.5 min, and to explain that the moor and reservoir are conceptual valleys in which all the rain falls, since rain cannot fall everywhere. This scaling and translation is part of the conceptualisation, which the audience, whether public or scientific, needs to grasp. The visualisations of flooding in the city and of the groundwater level changes will be improved.

  5. Where does Wetropolis go from here?

A revisited Wetropolis is under design to illustrate aspects of Natural Flood Management, such as slowing the flow by inserting or taking out roughness features, leaky dams and the great number of such dams needed to create significant storage volume for flood waters, as well as the risk of their failure. Wetropolis will (likely) be shown alongside my presentation at the DARE international workshop on high-impact weather and flood prediction in Reading, November 20–22, 2017. Finally, analysis of river-level gauges combined with the peak discharge of the Boxing Day 2015 floods of the River Aire, which led to the extreme flooding in Kirkstall, Leeds, reveals that the estimated flood-excess volume is about 1 mile by 1 mile by 1.8 m deep (see [3] and Fig. 4). Storing all of this excess flood volume in 4 to 5 artificially induced and actively controlled flood plains upstream of Leeds seems possible; moreover, it could possibly have prevented the floods. Active control of flood plains via moveable weirs is now being considered, including in a research project with Wetropolis featuring as a conceptual yet real test environment. (PhD and/or DARE postdoc posts will be available soon.)
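As a rough arithmetic check of that flood-excess figure (the split over storage sites below is purely illustrative, not a design number):

```python
# Rough check of the estimated flood-excess volume quoted above.
mile = 1609.34                                   # metres
flood_excess_volume = mile * mile * 1.8          # 1.8 m of water over one square mile
print(f"{flood_excess_volume / 1e6:.1f} million cubic metres")       # about 4.7 million m^3

# Spread over 4 to 5 upstream storage sites this is of the order of
# one million cubic metres per site (illustrative division only).
print(f"{flood_excess_volume / 4.5 / 1e6:.2f} million m^3 per site")
```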

Fig. 4. Leeds’ flood levels at Armley Mills Museum: 1866: bottom, 2015: top, 5.21m. Photo O.B. with Craig Duguid (Fluid Dynamics’ CDT, Leeds) showcasing Wetropolis.

 References and links

[1] Maths Foresees, UK EPSRC LWEC network.

[2] Resurging Flows: public page with movies of experiments, river flows and the Boxing Day 2015 floods in Leeds and Bradford, plus photos and comments on fluid dynamics. Two movies dated 31-08-2016 show Wetropolis in action; in one case two consecutive extreme rainfall events led to a Boxing Day 2015 type of flood. (What is the chance of this happening in Wetropolis?) Recall that record rainfall over 48 hrs in Bingley and Bradford, Yorkshire, contributed in large part to the Boxing Day floods in 2015.

[3] ‘Inconvenient Truths’ about flooding: my introduction at the 2017 Study Group.

Coupled atmosphere-ocean data assimilation re-interpreted


By Polly Smith

So my original plan for this blog was to write something about my research on coupled atmosphere-ocean data assimilation, but then my PI Amos Lawless beat me to it with his recent post. I was pondering how I might put a new spin on things when […]

Improving Aircraft Observations using Data Assimilation

By Jeremy Holzke

I am halfway through my six-week summer research placement, which is funded by the EPSRC DARE project. As a second-year Robotics student at the University of Reading, I am interested in collecting data from various sources and processing it so that it can be used by a robot to interact with its environment. I am undertaking this project because it has a very similar goal to a robot sensing its environment, except that here the processed data will be used to produce better estimates of temperature in our atmosphere. I am also taking part in this summer placement to see whether I would be interested in pursuing research as a career. I will be investigating how data from aircraft and data from a numerical weather prediction (NWP) model can be combined to give the best estimate of the true temperature at the location of the observation.

Collecting observations from aircraft for meteorological purposes is by no means a new concept; in fact, it has been around since World War I. The number of observations collected has grown ever since, especially with the wide range of applications that weather nowcasting and forecasting now serve, including military applications, agriculture and, in particular, air traffic management. A major advantage of using aircraft-derived observations in the 21st century is that there are around 13,000–16,000 planes around the world in the air at any time that can transmit valuable meteorological data.

Most commercial airplanes transmit a report called Mode Selective Enhanced Surveillance (Mode-S EHS), which contains data such as the speed, direction and altitude of the plane, as well as the Mach number, from which temperature and horizontal wind observations can be derived. The advantage of Mode-S EHS reports is that they are transmitted at high frequency; however, because the reports are short, the data precision is reduced, and hence large errors can appear in the derived temperature.
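As a rough sketch of how a temperature can be backed out of such a report (assuming the standard dry-air relation between the speed of sound, temperature, Mach number and true airspeed; the operational derivation in Mirza et al. (2016) includes further corrections and must cope with the coarse quantisation of the reported values):

```python
GAMMA = 1.4      # ratio of specific heats for dry air
R_DRY = 287.05   # specific gas constant for dry air, J kg^-1 K^-1

def temperature_from_mode_s(tas_ms: float, mach: float) -> float:
    """Air temperature (K) implied by true airspeed (m/s) and Mach number,
    using M = TAS / sqrt(gamma * R * T). Illustrative only."""
    return (tas_ms / mach) ** 2 / (GAMMA * R_DRY)

# Example: TAS = 240 m/s at Mach 0.78 implies roughly 236 K (about -37 C).
print(temperature_from_mode_s(240.0, 0.78))

# Because Mach is reported with limited precision, a small change in it gives
# a noticeably different temperature, which is the error source mentioned above.
print(temperature_from_mode_s(240.0, 0.784))
```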

The aim of this project is to take aircraft-derived observations and combine them with modelled weather data from the Met Office UKV (UK variable resolution) model to get a better estimate of the temperature observation. A technique known as optimal interpolation, which takes account of the relative uncertainties in the two data sources, was implemented in MATLAB. I have carried out some initial tests of the method using observation data from the National Centre for Atmospheric Science’s research plane, the Facility for Airborne Atmospheric Measurements (FAAM).
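In its simplest scalar form the idea looks like the fragment below (a Python illustration of the general technique, not the MATLAB configuration used in the project; the numbers are invented):

```python
def optimal_interpolation(background, obs, var_background, var_obs):
    """Scalar optimal interpolation: analysis = background + K * (obs - background),
    with gain K = var_background / (var_background + var_obs).
    Minimal illustration of the technique; the project setup may differ."""
    gain = var_background / (var_background + var_obs)
    analysis = background + gain * (obs - background)
    var_analysis = (1.0 - gain) * var_background
    return analysis, var_analysis

# Example: model background 285.0 K (error variance 1.0), aircraft observation
# 286.5 K (error variance 4.0, reflecting the reduced precision of Mode-S EHS).
print(optimal_interpolation(285.0, 286.5, 1.0, 4.0))   # analysis about 285.3 K
```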

References:

A.K. Mirza, S.P. Ballard, S.L. Dance, P. Maisey, G.G. Rooney and E.K. Stone, 2016: Comparison of aircraft-derived observations with in situ research aircraft measurements. Quarterly Journal of the Royal Meteorological Society, 142 (701), 2949–2967. doi: 10.1002/qj.2864.

 

Can observations of the ocean help predict the weather?

by Dr Amos Lawless

It has long been recognized that there are strong interactions between the atmosphere and the ocean. For example, the sea surface temperature affects what happens in the lower boundary of the atmosphere, while heat, momentum and moisture fluxes from the atmosphere help determine the ocean state. Such two-way interactions are made use of in forecasting on seasonal or climate time scales, with computational simulations of the coupled atmosphere-ocean system being routinely used. More recently operational forecasting centres have started to move towards representing the coupled system on shorter time scales, with the idea that even for a weather forecast of a few hours or days ahead, knowledge of the ocean can provide useful information.

A big challenge in performing coupled atmosphere-ocean simulations on short time scales is to determine the current state of both the atmosphere and ocean from which to make a forecast. In standard atmospheric or oceanic prediction the current state is determined by combining observations (for example, from satellites) with computational simulations, using techniques known as data assimilation. Data assimilation aims to produce the optimal combination of the available information, taking into account the statistics of the errors in the data and the physics of the problem. This is a well-established science in forecasting for the atmosphere or ocean separately, but determining the coupled atmospheric and oceanic states together is more difficult. In particular, the atmosphere and ocean evolve on very different space and time scales, which is not very well handled by current methods of data assimilation. Furthermore, it is important that the estimated atmospheric and oceanic states are consistent with each other, otherwise unrealistic features may appear in the forecast at the air-sea boundary (a phenomenon known as initialization shock).

However, testing new methods of data assimilation on simulations of the full atmosphere-ocean system is non-trivial, since each simulation uses a lot of computational resources. In recent projects sponsored by the European Space Agency and the Natural Environment Research Council we have developed an idealised system on which to explore new ideas. Our system consists of a single column of the atmosphere (based on the system used at the European Centre for Medium-range Weather Forecasts, ECMWF) coupled to a single column of the ocean, as illustrated in Figure 1. Using this system we have been able to compare current data assimilation methods with new, intermediate methods currently being developed at ECMWF and the Met Office, as well as with more advanced methods that are not yet technically possible to implement in the operational systems. Results indicate that even with the intermediate methods it is possible to gain useful information about the atmospheric state from observations of the ocean. However, there is potentially more benefit to be gained in moving towards advanced data assimilation methods over the coming years. We can certainly expect that in years to come observations of the ocean will provide valuable information for our daily weather forecasts.
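A toy two-variable example (with entirely made-up error statistics, far simpler than the single-column system described above) illustrates why an observation of the ocean can correct the atmospheric state: if the background errors of the two components are correlated, the analysis update spreads the ocean information into the atmosphere.

```python
import numpy as np

# State: [near-surface air temperature, sea surface temperature] in K.
# The background error covariance B is invented for illustration; the
# cross-correlation of 0.6 is what lets an ocean observation move the
# atmospheric component.
xb = np.array([288.0, 290.0])                 # background (model) state
B = np.array([[1.0, 0.6],
              [0.6, 1.0]])
H = np.array([[0.0, 1.0]])                    # observe the ocean component only
R = np.array([[0.25]])                        # observation error variance
y = np.array([290.8])                         # the ocean observation

# Standard analysis update: xa = xb + K (y - H xb), K = B H^T (H B H^T + R)^-1.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
print(xa)   # both components move towards the observation: roughly [288.38, 290.64]
```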

Figure 1: A single column of the atmosphere coupled to a single column of the ocean, as used in the idealised system described above.

References

Smith, P.J., Fowler, A.M. and Lawless, A.S. (2015), Exploring strategies for coupled 4D-Var data assimilation using an idealised atmosphere-ocean model. Tellus A, 67, 27025, http://dx.doi.org/10.3402/tellusa.v67.27025.

Fowler, A.M. and Lawless, A.S. (2016), An idealized study of coupled atmosphere-ocean 4D-Var in the presence of model error. Monthly Weather Review, 144, 4007-4030, https://doi.org/10.1175/MWR-D-15-0420.1

Tales from the Alice Holt Forest: carbon fluxes, data assimilation and fieldwork

by Ewan Pinnington

Forests play an important role in the global carbon cycle, removing large amounts of CO2 from the atmosphere and thus helping to mitigate the effect of human-induced climate change. The assessment of the state of the global carbon cycle in the IPCC AR5 report suggests that the land surface is the most uncertain component of the global carbon cycle. The response of ecosystem carbon uptake to land-use change and disturbance (e.g. fire, felling, insect outbreak) is a large component of this uncertainty. Additionally, there is much disagreement over whether forests and terrestrial ecosystems will continue to remove the same proportion of CO2 from the atmosphere under future climate regimes. It is therefore important to improve our understanding of ecosystem carbon cycle processes in the context of a changing climate.

Here we focus on the effect on ecosystem carbon dynamics of disturbance from selective felling (thinning) at the Alice Holt research forest in Hampshire, UK. Thinning is a management practice used to improve ecosystem services or the quality of a final tree crop and is globally widespread. At Alice Holt a program of thinning was carried out in 2014 where one side of the forest was thinned and the other side left unmanaged. During thinning approximately 46% of trees were removed from the area of interest.

Figure 1: At the top of Alice Holt flux tower.

 

Using the technique of eddy covariance at flux tower sites we can produce direct measurements of the carbon fluxes in a forest ecosystem. The flux tower at Alice Holt has been producing measurements since 1999 (Wilkinson et al., 2012); a view from the flux tower is shown in Figure 1. These measurements represent the Net Ecosystem Exchange of CO2 (NEE). The NEE is composed of both photosynthesis and respiration fluxes. The total amount of carbon removed from the atmosphere through photosynthesis is termed the Gross Primary Productivity (GPP). The Total Ecosystem Respiration (TER) is made up of autotrophic respiration (Ra) from plants and heterotrophic respiration (Rh) from soil microbes and other organisms incapable of photosynthesis. We then have NEE = -GPP + TER, so that a negative NEE value represents removal of carbon from the atmosphere and a positive NEE value represents an input of carbon to the atmosphere. A schematic of these fluxes is shown in Figure 2.

Figure 2: Fluxes of carbon around a forest ecosystem.

 

The flux tower at Alice Holt is on the boundary between the thinned and unthinned forest. This allows us to partition the NEE observations between the two areas of forest using a flux footprint model (Wilkinson et al., 2016). We also conducted an extensive fieldwork campaign in 2015 to estimate the difference in structure between the thinned and unthinned forest. However, these observations alone are not enough to understand the effect of disturbance. We therefore also use mathematical models describing the carbon balance of our ecosystem; here we use the DALEC2 model of ecosystem carbon balance (Bloom and Williams, 2015). In order to find the best estimate for our system we use the mathematical technique of data assimilation to combine all our available observations with our prior model predictions. More information on the novel data assimilation techniques developed can be found in Pinnington et al., 2016. These techniques allow us to find two distinct parameter sets for the DALEC2 model, corresponding to the thinned and unthinned forest. We can then inspect the model output for both areas of forest and attempt to further understand the effect of selective felling on ecosystem carbon dynamics.
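Schematically, the assimilation amounts to minimising a cost function that penalises departures from the prior parameter values and misfits to the observed NEE. The fragment below is a deliberately crude sketch with a made-up two-parameter model; the real system uses DALEC2 and the variational methods described in Pinnington et al. (2016):

```python
import numpy as np
from scipy.optimize import minimize

np.random.seed(42)
days = np.arange(30)

def model_nee(p):
    """Toy 'model': NEE over 30 days from two parameters (GPP rate, respiration rate)."""
    gpp = p[0] * (1.0 + 0.3 * np.sin(2 * np.pi * days / 30.0))   # crude seasonal cycle
    ter = p[1] * np.ones_like(days, dtype=float)
    return -gpp + ter                                            # NEE = -GPP + TER

p_prior = np.array([5.0, 3.0])                                   # prior parameters
B_inv = np.diag([1.0, 1.0])                                      # prior error weighting
obs = model_nee(np.array([6.0, 3.5])) + 0.3 * np.random.randn(days.size)
R_inv = np.eye(days.size) / 0.3**2                               # observation error weighting

def cost(p):
    dp = p - p_prior
    dy = model_nee(p) - obs
    return 0.5 * dp @ B_inv @ dp + 0.5 * dy @ R_inv @ dy

p_analysis = minimize(cost, p_prior).x
print(p_analysis)   # parameters pulled from the prior towards the observations
```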

Figure 3: Model-predicted cumulative fluxes for 2015 after data assimilation. Solid line: NEE; dotted line: TER; dashed line: GPP. Orange: model prediction for the thinned forest; blue: model prediction for the unthinned forest. Shaded region: model uncertainty after assimilation (± 1 standard deviation).

 

In Figure 3 we show the cumulative fluxes for both the thinned and unthinned forest after disturbance, in 2015. We might assume that removing 46% of the trees from the thinned section would reduce its carbon uptake in comparison with the unthinned section. However, we can see that both forests removed a total of approximately 425 g C m⁻² in 2015, despite the thinned forest having had 46% of its trees removed in the previous year. From our best modelled predictions, this unchanged carbon uptake is possible because of significant reductions in TER. So, even though the thinned forest has lower GPP, its net carbon uptake is similar to that of the unthinned forest. Our model suggests that GPP is a main driver of TER, so removing a large number of trees has significantly reduced ecosystem respiration. This result is supported by other ecological studies (Heinemeyer et al., 2012; Högberg et al., 2001; Janssens et al., 2001). It has implications for future predictions of land-surface carbon uptake and for whether forests will continue to sequester atmospheric CO2 at similar rates, or will be limited by increased GPP leading to increased respiration. For more information on this work please see Pinnington et al., 2017.

 

References

Wilkinson, M. et al., 2012: Inter-annual variation of carbon uptake by a plantation oak woodland in south-eastern England. Biogeosciences, 9 (12), 5373–5389.

 

Wilkinson, M., et al., 2016: Effects of management thinning on CO2 exchange by a plantation oak woodland in south-eastern England. Biogeosciences, 13 (8), 2367–2378, doi: 10.5194/bg-13-2367-2016.

 

Bloom, A. A. and M. Williams, 2015: Constraining ecosystem carbon dynamics in a data-limited world: integrating ecological “common sense” in a model data fusion framework. Biogeosciences, 12 (5), 1299–1315, doi: 10.5194/bg-12-1299-2015.

 

Pinnington, E. M., et al., 2016: Investigating the role of prior and observation error correlations in improving a model forecast of forest carbon balance using four-dimensional variational data assimilation. Agricultural and Forest Meteorology, 228–229, 299–314, doi: 10.1016/j.agrformet.2016.07.006.

 

Pinnington, E. M., et al., 2017: Understanding the effect of disturbance from selective felling on the carbon dynamics of a managed woodland by combining observations with model predictions, J. Geophys. Res. Biogeosci., 122, doi:10.1002/2017JG003760.

 

Heinemeyer, A., et al., 2012: Exploring the “overflow tap” theory: linking forest soil CO2 fluxes and individual mycorrhizosphere components to photosynthesis. Biogeosciences, 9 (1), 79–95.

 

Högberg, P., et al., 2001: Large-scale forest girdling shows that current photosynthesis drives soil respiration. Nature, 411 (6839), 789–792.

 

Janssens, I. A., et al., 2001: Productivity overshadows temperature in determining soil and ecosystem respiration across European forests. Global Change Biology, 7 (3), 269–278, doi: 10.1046/j.1365-2486.2001.00412.x.

2017 Annual European Geosciences Union (EGU) Conference

    by Liz Cooper

The 2017 Annual European Geosciences Union (EGU) conference was held at the International Centre in Vienna from 23rd to 28th April. During that time over 14,000 scientists from 107 countries shared ideas and results in the form of talks, posters and PICOs. The PICO (Presenting Interactive COntent) format is a relatively new idea for presenting work, where participants prepare an interactive presentation. In each PICO session the presenters first take turns to give a two-minute summary of their work to a large audience. The PICOs are then each displayed on an interactive touch screen, and conference delegates can chat to the presenters and get further details of the research, with the PICO for illustration. This format has features of both traditional poster and oral presentations and provides great scope for audience participation. I saw several which took advantage of this, including a very popular flood-forecasting adventure game by a fellow Reading PhD student, Louise Arnal.

I was delighted to be able to present some of my own recent results at EGU, in a talk titled ‘The effect of domain length and parameter estimation on observation impact in data assimilation for inundation forecasting.’ (see photo)

Presenting at an international conference was a really valuable and enjoyable experience, if a little daunting beforehand. I found it a really useful opportunity to get feedback from experts in the field and find out more about work by people with related interests.

The EGU conference has many participants and covers a huge range of topics from atmospheric and space science to soil science and geomorphology. My research deals with data assimilation for inundation forecasting, so I was most interested in sessions within the Hydrological Sciences and Nonlinear Processes in Science programmes. Even within those disciplines there was a huge breadth of research on display and I saw some really interesting work on synchronization in data assimilation, approaches to detection of floods from satellite data and various methods for measuring and characterizing floods.

As well as subject-specific programmes, there was also a very good Early Career Scientist (ECS) programme at EGU, with networking events, discussion sessions and a dedicated ECS lounge with much appreciated free coffee!

EGU was a hugely enjoyable experience and Vienna is a beautiful city with excellent transport links. With so many parallel sessions it’s really essential to plan which talks and posters are a priority in advance but I would heartily recommend it to anyone involved in geosciences research.

Mathematics of Planet Earth Jamboree

 by Jemima Tabeart

On 20th-22nd March the Mathematics of Planet Earth Centre for Doctoral Training (MPE CDT) held its third annual Jamboree event. This is a celebration of the work of the staff and students of the CDT and includes seminars from industrial and academic speakers, as well as the opportunity for students to present their research. For the first time this year, the first two days of the Jamboree were used to host an Industrial study group. Representatives from EDF Energy and AIR Worldwide (catastrophe modelling for the insurance and re-insurance industry) posed real-world problems to cross-cohort groups of students, who then attempted to provide some new mathematical insight into possible solutions.

 

Our group was given a task by EDF Energy to investigate the interaction of extreme wind and rain events in the UK. EDF Energy’s assets in the UK include nuclear and other types of power plant, so an understanding of extreme events is important so that they can take appropriate safety measures. Currently extreme rain and wind events are considered separately, and we were asked to consider ways of defining and dealing with combined extreme wind-rain events. We were given hourly reanalysis data from the last 40 years, on a coarse 1-degree grid over the UK. The group split into two parts: one looking at more conceptual ideas about how extreme events can be caused by an interaction of factors, and the other considering the data provided.

 

Our part of the group identified some known extreme weather events and focused on the data for these time periods. We looked at which events had both extreme wind and extreme rain, and mapped these to geographical locations to see where extreme wind-rain events occur most frequently. We also tried to see whether there was a time lag between rain and wind events at the same location. Initial plots indicated that the most likely lag time was 0 hours, although this might be due to the relatively coarse resolution of the data. Other members of the group also suggested a method for combining the threshold values for extreme wind and rain to create a combined parameter. As well as a presentation of the main ideas to industry representatives on the day, written reports will be sent to the respective companies so that they can take the suggestions further.

 

The study group was a great opportunity for cross-cohort work that brought together students with contrasting research interests. The challenge of producing something in a short amount of time is very different from what we normally face as PhD students, and the ideas of getting stuck in straight away and not spending hours agonising over every decision will be useful going forward. I really enjoyed working with real-world data and on problems outside my usual subject area – applying techniques I’ve learned during my PhD to other applications is very satisfying, and gives me confidence that I am developing transferable skills through my research!