Why does it always rain on me?

By Helen Dacre

Last Monday morning I got so wet on my cycle to work that I had to spend 10 minutes under the hand dryer in the toilets to stop myself looking like a drowned rat. Being the keen meteorologist that I am, however, my next steps took me to the coffee room to look at the synoptic charts to find out exactly why I’d got so wet. A fairly cursory glance at the chart for 00 UTC on Monday 20 June (Figure 1) showed me an occluding low-pressure system sitting to the north-west of the UK with a long trailing front extending over the entire length of the country (so I doubt I was the only person standing under the hand dryer that morning).

[Figure 2]

You don’t need to have studied meteorology to know that fronts mean clouds: and clouds, more often than not, mean rain – particularly those associated with an active low-pressure system like that passing through on Monday morning. For most of the morning we sat under low cloud in the warm sector (that’s the bit between the warm front and the cold front) and I was glad for once to be stuck at my desk with no need to trek through the wilderness to a meeting on the other side of campus.

Having experienced the passage of a low-pressure system over the UK many times over the last 30+ years (and taught Introduction to Weather Systems often enough), I knew things were about to change and sure enough around 12 o’clock the cloud began to lift, the rain stopped, the sunshine broke through and by the time I left work to cycle home (wearing my soggy shoes from the morning) there were glorious blue skies overhead with no trace of a cloud to be seen.

A quick look at the Reading atmospheric observatory measurements in the foyer as I left the Department confirmed the passage of the cold front at 12:00 (Figure 2), marked by an increase in pressure (known as a pressure kick), a wind shift from southerly to westerly (known as a wind veer) and a lifting of the cloud base (measured by our ground-based lidar), but not the expected decrease in temperature. Why not? Probably because the decrease in cloud cover allowed solar radiation to reach the surface and warm the air above. Other than that, a pretty classic frontal passage.

[Figure 1]

This all got me thinking on my cycle home about the demise of the synoptic chart. In an age where text-based postcode forecasts are growing in popularity, my phone can tell me, hour by hour, the chance of rain in my backyard. But it doesn’t tell me at a glance why it’s raining or why it’s going to stop, or whether, if it’s raining in Reading, it’s also raining in Liverpool. It’s like the Indian proverb of trying to describe an elephant blindfolded whilst only touching its leg, trunk or tail. It’s very difficult to explain the weather in my backyard without knowing what’s going on elsewhere.

So, whilst I continue to use my phone to find out whether to pack my waterproofs, please let’s keep the tried and tested synoptic chart so we can understand at a glance why the weather is doing what it’s doing. Forget the cloud appreciation society (sorry Gavin Pretor-Pinney), how about a synoptic chart appreciation society? Because a picture really does tell 1000 words (well, 571 words according to my word count).

Posted in Measurements and instrumentation, Weather, Weather forecasting

Understanding Summer Flash Flooding

By Adrian Champion

‘Flash flooding’ is flooding that lasts between a few hours and a day and typically comes with very little warning. It has many causes, from the meteorological conditions that produce the rainfall to the state of the ground that turns that rainfall into flooding. Flash flooding is generally very localised, but it can be very costly and cause significant disruption.

Flash flooding is due to intense rainfall that lasts only a short period – from less than an hour to a few hours. The amount of rain recorded over the course of the day may be low in comparison with rainy winter days; the difference is that this amount of rain falls in perhaps only a few hours (Figure 1). The difficulty in forecasting such rain events is that the meteorological conditions that lead to intense rainfall are very small in scale. The predominant cause of hourly extreme rain is a convective storm, or a feature with convective elements. These are only a few kilometres in size – smaller than the grid resolution of any national weather centre’s forecast model. Other processes and factors, from the prevailing wind conditions to the orography, may also act to enhance the convective system. It is the small scale of the meteorological processes that cause intense rainfall that makes it so difficult to forecast.


Figure 1. Short-period depth-duration extremes of rainfall in the United Kingdom. Source: Met Office climate extremes

Once the rain reaches the ground there are also significant difficulties in predicting what will happen to all of the water. Outside of these hourly extreme rain episodes, we’re able to model how much of the water will be absorbed by the ground via infiltration and how much will run off into rivers and drains (Figure 2). We’re also able to model the resulting changes in river flow from this overland run-off and from water released by the ground. The natural (and man-made) drainage systems are also able to respond to ‘normal’ rainfall intensities. During extreme rainfall it is a lot harder to model what will happen to the water: the ground cannot absorb the water as quickly as it falls, and other factors, such as how wet the ground already is, play a significant role. We can therefore expect the majority of the water to flow over the surface.
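The infiltration-versus-run-off contrast can be sketched with a toy infiltration-excess (‘Hortonian’) calculation. This is a deliberate simplification, not the SHETRAN model or any operational scheme: the infiltration capacity is held constant, whereas in reality it depends on soil type and on how wet the ground already is.

```python
def surface_runoff(rain_rate_mm_hr, infiltration_capacity_mm_hr, duration_hr):
    """Infiltration-excess run-off: any rain falling faster than the ground
    can absorb it flows over the surface instead of soaking in."""
    excess = max(0.0, rain_rate_mm_hr - infiltration_capacity_mm_hr)
    return excess * duration_hr  # depth of water (mm) running off

# The same 30 mm of rain, over ground that can absorb at most 10 mm per hour:
print(surface_runoff(30.0, 10.0, 1.0))   # 30 mm in one hour -> 20.0 mm runs off
print(surface_runoff(2.5, 10.0, 12.0))   # 30 mm in 12 hours -> 0.0 mm runs off
```

The daily totals are identical, but only the intense burst produces surface water – the point made by Figure 1.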


Figure 2. An example of a hydrology model: the SHETRAN model used at Newcastle University.

This surface run-off is difficult to model and is highly dependent on the type of land use. In towns and cities, tarmac and concrete surfaces produce fast run-off, resulting in a rapid accumulation of water in low-lying areas, e.g. where the road dips under a railway bridge (Figure 3). It may take only tens of minutes for the water to collect and exceed the drainage capacity. In rural areas there are natural barriers, e.g. trees and hedgerows; however, intense rainfall can still cause rapid rises in local river levels, leading to localised river flooding, typically of natural floodplains, as the river is unable to carry away the excess water quickly enough.


Figure 3. A recent example of flooding underneath a railway bridge in an urban area, which would have accumulated quickly and taken drivers by surprise – south London, 7 June 2016. Source: BBC News website, photograph credited to the London Fire Brigade.

Because the rainfall events last only a few hours, the flooding also lasts only a few hours, as drainage systems, either natural (rivers) or man-made (drains), recover and move the water further downstream. However, the speed at which the flooding occurs can have large consequences, due to the lack of warning and the speed and volume of the water. We usually only see such flooding in summer, as the convective processes behind hourly extreme rain are driven by the stronger incoming solar radiation (it’s summer, it’s warmer). Such convective processes cause “summery showers” that last only a few hours, or sometimes minutes.

Posted in Environmental hazards, Hydrology, Numerical modelling, Urban meteorology, Weather

Standing up for Science

By Joanne Thomas, Project & Events Coordinator, Sense about Science

Voice of Young Science (VoYS) is a dynamic network of more than 2000 early career researchers and scientists across science, engineering and medicine. VoYS members are committed to playing an active role in public discussions about science; they challenge pseudoscientific claims, tackle popular misconceptions around controversial issues and respond to misinformation in all kinds of media. These early career researchers don’t wait until later in their careers to stand up for science.


VoYS members meet at one of four Standing up for Science media workshops organised each year by the charity Sense about Science. These workshops encourage the early career researchers to voice their opinions in public debates about science. During the full-day events, participants discuss science-related controversies in media reporting, and have the chance to hear directly from respected science journalists about how the media works, how to respond and comment, and what journalists want and expect from scientists.

Previous attendees have said:

  • “Incredibly useful workshop. I definitely feel more prepared to engage with the media about my research!”
  • “Great speakers, lots of useful stuff, well-focused on what we can do”
  • “Found the panellists’ comments very helpful and thought provoking”
  • “An enjoyable & relevant discussion”

Inspired and engaged by the peers they meet during the events, VoYS members are empowered to do more to stand up for science and have launched many successful mythbusting and evidence-hunting campaigns. They’ve published a detox dossier debunking common marketing claims associated with ‘detox’ products; written an open letter to the World Health Organisation, prompting several disease department directors to clarify that they do not condone the use of homeopathy to treat serious diseases; and most recently launched a weather quiz to address misuse of weather terms. This latest project was initiated by meteorologists at the University of Reading and launched in January 2016 (see the sample below). Frustrated by sensationalised stories and misleading use of meteorological terms, and concerned that this could undermine public trust in meteorology, they devised the quiz to challenge everyone to test their weather know-how and arm themselves with the facts to decipher the truth behind weather stories.

[Weather quiz sample: ‘Haven’t the foggiest’]

The next media workshop is sponsored by the Department of Meteorology at the University of Reading and will take place in London on Friday 16 September. Priority places are available for early career researchers at the University of Reading (PhD students, post-docs or first-job equivalents).


Posted in Climate

The interaction between aerosols and clouds

By Nicolas Bellouin

As part of the Copernicus Atmosphere Monitoring Service (CAMS), I lead an activity that will provide, in August, new estimates of the radiative forcing of climate due to changes in atmospheric composition.

One of the radiative forcing mechanisms that we are working to quantify is the interaction between aerosols and clouds. Aerosols are the small liquid and solid particles in suspension in the atmosphere. Human activities emit aerosols in the atmosphere, adding to natural levels and causing the formation of liquid clouds with droplets that are more numerous and smaller than in unpolluted clouds. A cloud made of more numerous droplets is brighter, reflecting more radiation back to space. A cloud made of smaller droplets may evaporate more easily, becoming thinner or even disappearing completely. Alternatively, smaller droplets may take longer to form rain, causing the cloud to linger in the atmosphere and reflect sunlight for longer. The physics of aerosol-cloud interactions are complex and have been the subject of many scientific studies, summarised in the latest assessment report of the Intergovernmental Panel on Climate Change.
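To put rough numbers on the ‘brighter cloud’ effect, one can use the standard linearised Twomey susceptibility, dA/dlnN ≈ A(1 − A)/3, which holds when the cloud’s liquid water is held fixed. This back-of-envelope sketch is purely illustrative and is not part of the CAMS calculation:

```python
import math

def twomey_albedo_increase(albedo, droplet_number_ratio):
    """Linearised Twomey effect: with liquid water held fixed,
    dA/dlnN = A(1 - A)/3, so an increase in droplet number by a
    factor r brightens the cloud by roughly A(1 - A)/3 * ln(r)."""
    return albedo * (1.0 - albedo) / 3.0 * math.log(droplet_number_ratio)

# A moderately reflective cloud (albedo 0.5) whose droplet number doubles:
print(round(twomey_albedo_increase(0.5, 2.0), 3))  # -> 0.058
```

A few per cent more reflected sunlight from a doubling in droplet number – small per cloud, but potentially significant when integrated over the globe.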

Radiative forcing is a measure of the imbalance in the Earth’s energy budget caused by perturbations external to the natural climate system, such as the emission of aerosols into the atmosphere by human activities. Our preliminary CAMS estimate of the radiative forcing due to aerosol-cloud interactions, based on satellite observations of aerosol amounts and cloud reflectivity, is –0.6 W m−2. The negative sign indicates a loss of energy for the climate system. Climate models estimate a stronger radiative forcing, typically beyond –1 W m−2. What causes that discrepancy? Over the past few months, I have discussed this with experts in aerosol-cloud interactions, and there are reasons to expect that aerosol-cloud interactions are weaker than simulated by climate models – and perhaps even weaker than the preliminary CAMS estimate.

The modification of cloud properties by external perturbations is observed routinely. Ship tracks are emblematic examples: the aerosols emitted by ship engines provide additional sites for water vapour to condense into cloud droplets, forming linear clouds along the ship’s route. If a single ship can create new clouds, surely the masses of aerosols emitted worldwide by transport and power generation must exert a strong radiative forcing. But crucially, ship tracks do not appear all the time; otherwise the busy shipping lanes linking Europe, Asia and North America would leave a noticeable and persistent trail of clouds on satellite pictures (Figure 1). This is not the case.

Figure 1. Ship tracks off the Atlantic coasts of France and Spain, as observed by NASA’s MODIS satellite instrument in January 2003.

Another event casts doubt on the possibility of strong radiative forcing from aerosol-cloud interactions. In late 2014/early 2015, the Holuhraun volcano erupted in Iceland. This eruption injected masses of aerosols into the atmosphere – so many, in fact, that at one point the volcano emitted as much in a day as the entire European Union combined. Such a large and precisely located perturbation was the perfect laboratory for studying aerosol-cloud interactions. And indeed, satellite instruments reported that clouds in the North Atlantic were composed of smaller droplets than normal, as expected from the physics of aerosol-cloud interactions. But were North Atlantic clouds brighter than normal during that period? Observations are inconclusive. It may be that aerosol-cloud interactions are lost in the noise of natural variability in cloud properties, but for such a large perturbation, the impacts are surprisingly hard to isolate.

In the end, aerosol-cloud scientists reckon that it will come down to counting how often clouds happen to show strong sensitivity to aerosol perturbations. Those discussions leave me with the feeling that such situations occur infrequently, and radiative forcing of aerosol-cloud interactions may need to be revised down to weaker values.

I thank Graham Feingold, Johannes Quaas, Annica Ekman, Leo Donner, and Ilan Koren for interesting discussions on current understanding of aerosol-cloud interactions. Note that they do not all agree that aerosol-cloud radiative forcing is weak: some argue that a value of up to −1.2 W m−2 remains consistent with scientific understanding.

Posted in Aerosols, Climate modelling, Numerical modelling

Predictions and errors

By Javier Amezcua

Predicting is one of the most ambitious goals of science. It goes beyond describing and explaining, and it attempts to “tell the future”. The prediction process has the following basic steps:

  1. We have an estimate of the present conditions of a system, for instance, the atmosphere.
  2. We have a model – i.e. a set of mathematical rules derived from physical principles – which we evolve forward (or integrate) in time.
  3. We get an estimate of the future state of our system at any given time.

When computing a prediction, it is very important to provide a measure of the quality of this prediction. Intuition tells us that we are more certain, for example, in predicting the temperature in our neighborhood for tomorrow, than in predicting the temperature in the same place a year from now. Where does this certainty/uncertainty come from? Let us explore this next.

For the sake of this discussion, consider that the model mentioned in step (2) is perfect. That is, let us assert that we have completely captured in our equations all the processes we are interested in, and that we can solve these equations perfectly with a computer code (this is not true in reality, but we will leave that for another blog entry). In this case the quality of a prediction is determined by the error of our estimate mentioned in step (1) – i.e. the error in our initial conditions – and the error growth in time.

As it turns out, errors grow differently in different dynamical systems. In some systems, making a tiny mistake is irrelevant for a future prediction, while in others a tiny initial error can ruin a forecast after a certain lead time. Let’s take a quick look at different families of dynamical systems with the help of Figure 1. The figure has four panels; in each panel the x-axis corresponds to time, while the y-axis corresponds to the value of a physical variable (it can be wind speed, temperature, etc.). Let us run a trajectory started from a given initial condition; we label this the reference trajectory (shown in black in the figure). Also, let us evolve trajectories initialised from ‘nearby’ initial conditions – i.e. initial conditions with errors; we label these the perturbed trajectories. In the figure, red lines indicate that the initial perturbed values are larger than the initial true value, while blue lines indicate that they are smaller. The error growth behaves differently in each case:

  a) In this example, the perturbed trajectories tend towards the reference trajectory. This is a typical dissipative system. Regardless of the initial conditions, the system evolves towards a fixed point, and any initial error disappears. Think of a pendulum with friction: it does not matter at what height you drop it, it will use its gravitational potential energy to swing for a while, but it will eventually stop.
  b) In this example, the errors of the perturbed trajectories grow as time increases, and they do not stop growing; instead, the perturbed trajectories tend towards plus and minus infinity. Such a system – in which errors grow without limit – is not feasible in reality, since it would require infinite energy. However, if we want to make predictions within a finite time frame, the accuracy of the initial conditions is crucial, and we will see the quality of the forecast decrease with time.
  c) In this example, the initial error of the perturbed trajectories is preserved as time evolves; it neither grows nor decreases. This is typical of periodic systems, such as those found in celestial mechanics, or of physical processes related to them, like the tides. If we are wrong about the position of the moon tonight and do a forecast for the next few days, the error will stay constant as time progresses. There is another class of systems, called quasi-periodic, with similar characteristics, but I will not discuss them further.
  d) The last kind of system is perhaps the most interesting to us: chaotic systems. The atmosphere is a typical forced-dissipative system that presents chaotic behaviour. In this case, errors initially grow slowly, then the error growth accelerates, and eventually the perturbed trajectories do not resemble the reference trajectory at all – in fact, they do not even resemble each other. The accuracy of the initial conditions is crucial for a good forecast, and the quality of a forecast decreases with time. In fact, even the tiniest initial errors will ruin a forecast after a given lead time. What is different with respect to panel (b)? Errors do not grow forever and without limit; instead they saturate. After a long time, the trajectories – both the reference and the perturbed ones – evolve and live within a permissible range of values (without going to plus or minus infinity). This set of values is known as the attractor (or climatology).
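The chaotic error growth of panel (d) can be reproduced with the classic Lorenz (1963) system. Below is a minimal sketch (simple forward-Euler integration with the standard parameter values): a reference trajectory and a perturbed trajectory that initially differ by only 10−8 eventually bear no resemblance to each other, yet their separation saturates because both stay on the attractor.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

ref = np.array([1.0, 1.0, 1.0])          # reference trajectory
pert = ref + np.array([1e-8, 0.0, 0.0])  # tiny error in the initial conditions

separation = []
for _ in range(5000):                    # 25 time units
    ref, pert = lorenz_step(ref), lorenz_step(pert)
    separation.append(np.linalg.norm(ref - pert))

print(f"initial error of 1e-08 grows to order {max(separation):.0f}, "
      f"then saturates: both trajectories stay on the attractor")
```

The growth is not steady: the error hovers near its tiny initial value for a while, then explodes, exactly as sketched in panel (d).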

Figure 1. Error growth for different families of dynamical systems.

Let us discuss chaotic systems a little further using our example in panel (d). A forecast for time t=0.5 is more reliable than one for time t=1, and after approximately t=1.5 we have lost our capacity to predict. Something similar happens in the atmosphere. For large-scale features, this limit of predictability is about two weeks. Operational centres release forecasts for up to 5 or 7 days in advance, and they equip these forecasts with some probabilistic measure (representing, in simple terms, how different trajectories initiated from similar initial conditions become). Unfortunately, some commercial forecast providers give no information on the accuracy of their forecasts at all. Furthermore, they are known for (irresponsibly) releasing ‘valid’ deterministic forecasts for up to 45 days in advance (do not confuse these with the proper seasonal outlooks generated by meteorological agencies). As expected, these forecasts change considerably when updated every day, and they keep changing until the lead time falls within the predictability window. Such 45-day ‘forecasts’ are not predictions; they can be considered quasi-random draws from the climatology of different regions. In the end these forecasts have no value, and they end up stating the obvious: July will be relatively warm and December will be relatively cold.

Posted in Numerical modelling, Weather forecasting

A Random Blog

By Peter Clark

As a young scientist I was introduced to turbulent flow in the traditional way – we consider an ‘infinite ensemble of realisations’ of a random flow, and split each realisation into the average over the ensemble and the ‘random’ fluctuations. I remember being unsatisfied by this approach. Classical physics is not random! What actually is this ‘ensemble’? Why treat the fluctuations as just random noise when any curious eye can see there is a rich structure to the flow?

Many of these questions have (at least partially) been answered by the revolution in mathematics and thinking that is chaos theory (and siblings such as ergodic theory). Perhaps the most remarkable result is that some systems in which the future state is perfectly predictable in terms of the current state (‘deterministic’), evolve to become indistinguishable from a random system. The system ‘forgets’ its initial state, in the sense that to track backwards to find it out requires increasingly accurate knowledge of the current state the further one goes back, to a degree which soon becomes beyond any kind of practicality. This is the converse of the problem of forecasting.

At the same time, the computer revolution has enabled us to simulate the evolution of at least a finite sample of an ‘ensemble’ explicitly – an approach to sampling the ‘ensemble of initial states’ in weather forecasting pioneered with considerable success (and rigour) by ECMWF and now a standard methodology.

Ensemble techniques are now widespread practice for expressing (often poorly defined) ‘uncertainty’. This powerful approach has become so universal that we often forget to ask the question ‘what ensemble?’ The mere use of an ensemble technique is sometimes taken to lend credibility to a piece of work. Too often, arbitrary random perturbations or, worse, an arbitrary mixture of model configurations are used to express ‘uncertainty’, even though it is difficult to know what the results actually mean. While all science is uncertain, perhaps unsurprisingly, some users reject ‘uncertain’ advice with the cry ‘I need to be sure!’

We can, however, return to real physical ensembles arising from the turbulent processes in the atmosphere as an example where uncertainty really matters. When we build weather and climate models, we have to approximate (‘parametrize’) small-scale aspects of the flow (which may be smaller than anything from a few km to several hundred km, depending on the model and application). We simply don’t know how to do this, and there is no reason to suppose it is even possible. However, we do know that, with some restrictions, we can accurately predict an ‘ensemble mean’ behaviour of the small-scale flow. So we use that instead.

The trouble is, we don’t live in an ‘ensemble mean’ world – we live in ‘one realisation’. However, by returning to the quite rigorously defined ensemble, we can also make predictions about the variability of realisations. Figure 1 illustrates this with a very simple model of a real turbulent system. In practical weather forecast models we have shown that using physically realistic random variability can significantly improve the performance of a model (even if the ensemble system we use remains a simplification of the real world) – for example, thunderstorms may form at a more realistic time and evolve more realistically. The downside is that so-called ‘deterministic’ forecasts are an impossibility. Behaving like the real world means behaving, to a certain extent, randomly. Physical realism and not being sure go hand in hand.
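As a toy stand-in for this ensemble-versus-realisation distinction (far simpler than the Lorenz system used for Figure 1, and purely illustrative): an ensemble of red-noise realisations, each of which fluctuates strongly, while the ensemble mean stays close to zero everywhere.

```python
import numpy as np

rng = np.random.default_rng(1963)
n_members, n_steps, alpha = 10000, 200, 0.9

# Each member is an AR(1) ("red noise") realisation; the ensemble mean is zero
x = np.zeros((n_members, n_steps))
for t in range(1, n_steps):
    x[:, t] = alpha * x[:, t - 1] + rng.standard_normal(n_members)

one_realisation = x[0]          # fluctuates with std ~ 1/sqrt(1 - alpha**2) ~ 2.3
ensemble_mean = x.mean(axis=0)  # near zero; its spread is smaller by sqrt(n_members)

print(f"|single member| up to {np.abs(one_realisation).max():.2f}, "
      f"|ensemble mean| up to {np.abs(ensemble_mean).max():.3f}")
```

The smooth ensemble mean is predictable; the single realisation we actually live in is not – which is the whole point.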

Figure 1a

Figure 1b

Figure 1c


Figure 1. Results using an ensemble of 10000 realisations of the Lorenz (1963) simple model of Rayleigh-Bénard convection.
1a) Two realisations of the rate of heating at z=0.75 of the height of the system. The ensemble mean must be zero.
1b) The position of each realisation in phase space – the ensemble is randomly distributed over the ‘Lorenz attractor’ – see animation.
1c) The standard deviation of the time-averaged heating rate as a function of averaging time. The red line varies as 1/averaging time.


Lorenz, E. N., 1963: Deterministic Nonperiodic Flow. Journal of the Atmospheric Sciences, 20 (2), 130–141. doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.

Posted in Numerical modelling, Weather forecasting

Characterising extreme event occurrence

By Reinhard Schiemann

When presented with a new data sample, the first thing many of us scientists do is to characterise it in terms of two numbers: the average or mean value of the sample, and the spread or variance of the sample values around the mean. This has become second nature and we rarely stop to think twice about it. Yet it is quite remarkable that data as different as Reading summer temperatures, the chest circumference of Scottish soldiers, or the sum of points obtained by rolling several identical dice can all be characterised by just these two numbers. Essentially, this is a consequence of the Central Limit Theorem in statistics, which states that in the examples above and many other situations, where the data arise as an average of more elementary data (for example rolling individual dice, or averaging temperature throughout a season), the samples will tend to follow a Gaussian or normal distribution. The bell-shaped curve of this distribution is ubiquitous in all areas of quantitative science and may be the only mathematical function that has made it onto a bank note (Figure 1). The curve is described by two numbers: the mean, determining the location of the bell, and the variance, determining its width.

2016 05 12 Reinhard Schiemann Figure 1

Figure 1. Carl-Friedrich Gauß (1777-1855) and the distribution named after him on the former 10 Deutsche Mark note (source: Wikipedia).

In meteorology we are often interested in extreme events such as strong windstorms, rain and flooding, heatwaves or drought. When we want to describe extreme behaviour, we have to change the way we collect data samples and characterise them. One option is to collect samples that comprise all strongest events in a block of data: the example I am presenting here is maximum daily winter precipitation (rain and snow) that falls over a river basin in each year. Unfortunately, such data samples can no longer be described by the tried and tested Gaussian distribution and its mean and variance. But mathematical statistics comes to the rescue in this situation too: there is an analogue of the Central Limit Theorem, called Extremal Types Theorem, telling us that we can replace the familiar Gaussian bell with a different function called the Generalized Extreme Value (GEV) distribution. We now need three numbers (or parameters) to characterise the GEV. They are called location μ, scale σ, and shape ξ, and their meaning is best illustrated graphically by so-called Gumbel diagrams shown in Figure 2. The vertical axis of these diagrams shows return values indicating the strength of an event (here daily river basin precipitation) and the horizontal axis shows return times, which tell us about the frequency of an event. The bold lines in the diagrams show different GEV distributions and they tell us how to relate a return time to an expected return value. For example, the brown curve in the top panel of Figure 2 shows that the expected return value for a return time of 20 years is 21 mm. We have to wait 20 years on average for a precipitation event of this amount to occur. The location parameter μ determines the vertical position of the GEV curve in the diagram – increasing it to μ=15 mm yields the green curve and the 20-year return value increases to 27 mm. 
The scale parameter σ determines the slope of the GEV curve in the Gumbel diagram as illustrated in the middle panel of Figure 2. The greater the σ, the more maximum precipitation will vary from year to year, and the more return values will increase with an increase in return time. Finally, the shape parameter ξ describes the curvature of the GEV curve (Figure 2, bottom panel).
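The mapping from return time to return value can be written down directly from the GEV quantile function. Below is a minimal sketch with illustrative parameter values that are not fitted to any of the data shown here:

```python
import math

def gev_return_level(T, mu, sigma, xi):
    """Return level z_T exceeded on average once every T blocks (years).
    Solving GEV_cdf(z_T) = 1 - 1/T gives
    z_T = mu + (sigma/xi) * (y**(-xi) - 1), with y = -ln(1 - 1/T);
    the xi -> 0 (Gumbel) limit is z_T = mu - sigma * ln(y)."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** -xi - 1.0)

# Illustrative parameters: mu = 10 mm, sigma = 3 mm, xi = 0.1
for T in (2, 20, 100):
    print(f"{T:3d}-year return value: {gev_return_level(T, 10.0, 3.0, 0.1):.1f} mm")
```

Increasing μ shifts the whole curve upwards, increasing σ steepens it, and ξ controls its curvature – exactly the three knobs described above.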

2016 05 12 Reinhard Schiemann Figure 2

Figure 2. Illustrative Gumbel diagrams showing GEV distributions with different values for the location parameter (top), for the scale parameter (middle), and for the shape parameter (bottom).

What is all this good for? One application is model evaluation, the process where we assess how realistically numerical models simulate the observed weather and climate. Here, I am interested in how well two versions of a climate model, a low-resolution version (named N96 in Figure 3) and a high-resolution version (N512, also in Figure 3), simulate the extremes of daily winter precipitation over European river basins. To obtain a summary assessment of this performance, I estimate the three GEV parameters for each of the models (N96, N512) and for a reference dataset (E-OBS) based on observed precipitation data from rain gauges. The results are shown in Figure 3. The top row shows the location, scale and shape values for the observations, and the middle and bottom rows show differences between the two models and the observations. We see that both models tend to produce precipitation extremes that are too high over large parts of Europe, especially over the northern European plains from the Loire river basin in the west to the Vistula basin in the east (greenish colours for the model-observation differences in the location and scale parameters). We also see that this problem is alleviated in the high-resolution (N512) model, where these differences are smaller than in the coarse (N96) model.

The statistical summary assessment shown here is only the first step in model evaluation and many questions remain. How do our two models represent rain-producing Atlantic storms, and how do these storms interact with the European landmass and, in particular, major mountain chains, such as the Alps? Trying to answer such questions is called process-based model evaluation and is an important part of the meteorological research here at Reading. But we will have to leave that for another blog.


Figure 3. Estimated GEV parameters for daily winter precipitation over European river basins. Top: precipitation observations (E-OBS), middle: difference between coarse model simulation (N96) and E-OBS, bottom: difference between high-resolution model simulation (N512) and E-OBS. Left: location parameter μ, centre: scale parameter σ, right: shape parameter ξ. Stippling shows statistically significant differences between N96 and E-OBS (middle row) and between N512 and N96 (bottom row).


Posted in Climate, Numerical modelling

When Did Fronts First Appear in the Met Office’s Daily Weather Report?

By David Livings

One of the good things that can now be found on the web is a complete series of the Met Office’s Daily Weather Report going back to 1860. An overview of the early history of the report can be found on the Met Office’s web site. The reports themselves are available at the Met Office Digital Library and Archive.

On discovering this resource a few months ago, the first thing that I wanted to know (after finding out the weather on my birthday) was when fronts first appeared on the charts in the reports. Jon Shonk wrote about fronts on this blog in 2014. Although the concept of a weather front dates back almost a century, it still plays an important role in understanding mid-latitude weather systems.

The following images sample the evolution of the charts in the Daily Weather Report from the first charts in 1872 to a time when fronts had become an established part of the Met Office’s operations. Click on an image to see a larger version. The images show how successive generations of meteorologists have tackled the problem of presenting multiple meteorological variables in a compact, easily assimilated form. The reports also include pages of purely tabular or textual information, which are not shown here.


Figure 1. These charts are for 12 March 1872. Charts first appeared in the Daily Weather Report the previous day, but high pressure dominated then and it is doubtful that any fronts would have been shown, even if the concept had been available. On the 12th, there was low pressure to the NW of Scotland. A mixture of graphical, numerical, and verbal presentation is used, described in the keys to each of the four charts. Isobars are labelled in inches of mercury (inHg). The contour interval for isobars is not constant, being 0.1 inHg (about 3.4 hPa) in some cases, 0.2 inHg in others.
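For readers who want to convert the 1872 units themselves, the arithmetic is simple; a small sketch (the constant is the standard conversion factor, the function name is just for illustration):

```python
HPA_PER_INHG = 33.8639  # hectopascals per inch of mercury (standard conversion)

def inhg_to_hpa(inches: float) -> float:
    """Convert a pressure in inches of mercury to hectopascals."""
    return inches * HPA_PER_INHG

# The 1872 contour intervals in modern units:
print(round(inhg_to_hpa(0.1), 1))  # → 3.4
print(round(inhg_to_hpa(0.2), 1))  # → 6.8
```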



Figure 2. Twenty-five years later (3 March 1897) and the four charts of 1872 have been merged into two. This has been achieved partly by omitting the attempt to show the general motion of the air and partly by replacing words with shading (for sea disturbance) or with Beaufort letters (for weather). Isobars are now regularly spaced every 0.1 inHg (about 3.4 hPa).



Figure 3. 1 March 1922. There is now one large main chart and three smaller ones. Since 1897, quantitative rainfall has been dropped from the variables plotted, but isallobars (contours of constant pressure change) and low cloud have been added. Isobars are now labelled in millibars and spaced every 2 mbar (1 mbar = 1 hPa). There is redundancy in this way of plotting the data: temperature, weather, and wind speed are all plotted twice. Fronts have not appeared yet, but the concept is still new.



Figure 4. Ten years later (6 March 1932) and the multiple UK charts have been merged into one. Some information has been lost (such as low cloud), but in compensation there is now a full-page chart covering much of the extratropical northern hemisphere. The representation of observations on the UK chart is converging towards the idea of the station model (see Figure 7). There are still no fronts.



Figure 5. 1 April 1941. The style of the charts is similar to that of nine years earlier, although the style of the wind arrows has changed, and weather on the hemispheric chart is now shown using symbols rather than Beaufort letters. Wartime has brought a reduction in the availability of observations: there are none from most of the European continent. On some days there are observations from Russia, America, or the western Atlantic, but not always (compare Figures 6 and 7). There are still no weather fronts.




Figure 6. One month later – 1 May 1941. Look carefully and you will see the first fronts to appear in this section of the Daily Weather Report. In the SE corner of the UK chart there are two occluded fronts: one heading SE, one heading NE. A key to the fronts has been added by hand under the Further Outlook. Fronts are not yet shown on the hemispheric chart.



Figure 7. Ten months later – 1 March 1942. The UK chart has been expanded to a full page. A full station model has been adopted for plotting the UK observations. This enables the reinstatement of the low cloud that was lost in the early 1930s, but the sea disturbance, which had been plotted since the first charts in 1872, has been dropped. Fronts now appear on the hemispheric chart, and there is a printed key to a rather elaborate system of fronts.

Examination of the copies of the Daily Weather Report available on the web therefore gives the impression that fronts did not appear until 1 May 1941, but that is not the full story. In 1919 the Daily Weather Report was split into three sections: a British Section, an International Section, and an Upper Air Section. We have been looking at the British Section, which is the only section currently available on the web. Fronts first appeared in the International Section on 1 March 1933. Nevertheless, 1 May 1941 (75 years ago this month) is an important date, for it represents the arrival of fronts at the heart of the Met Office’s activities.


Posted in Historical climatology, History of Science

A PhD student’s overview of the European Geosciences Union (EGU) General Assembly 2016

By David Flack

Last week (18-22 April) 13,650 scientists from 109 countries descended upon Vienna for the European Geosciences Union (EGU) General Assembly. The assembly covers a wide range of disciplines, not just those associated with meteorology and hydrology, and among the attendees was a large contingent of scientists from the UK (around 1,300). As a member of the Flooding From Intense Rainfall (FFIR) project I was naturally most interested in the work associated with precipitation and flash flooding, hence the angle of this post. In it I'm going to try to give a brief overview of what EGU is like from a PhD student's point of view, and highlight some of the interesting topics in the hydro-meteorology community at EGU.

Indeed, for the FFIR team EGU started first thing on Monday, with Matt Perks (Newcastle University) scheduled to give one of the first talks of the conference. His talk summarised his recent work on unmanned aerial vehicles (UAVs) and their use for taking observations while floods are occurring, and how these observations can be used in modelling the water flow in flash-flooding situations. Other highlights from the morning session included a talk from ECMWF (the European Centre for Medium-Range Weather Forecasts) on a global flash-flood forecasting system that they are developing, and on the links with high-resolution weather forecasts that are able to improve the representation of heavy rain.

After a range of other talks came an evening poster session, in which Adrian Champion (a member of the Meteorology Department here at Reading) presented his work on atmospheric precursors to flash flooding, amongst various other interesting posters.

One thing that struck me, as a PhD student at my first international conference, was the sheer size of the event. There are so many interesting posters and presentations that you cannot get to all of them, or find all the people you wanted to speak to. However, that size isn't necessarily a bad thing, as it allows you to meet a wide range of presenters.

Another good aspect of EGU is its location in Vienna: you are never too far from the city centre via the underground, so in the evenings we were able to go into the city, look at the wonderful architecture and experience the Viennese culture (see below).

Figure: Vienna

Throughout the week there were lots of talks and posters on precipitation, including how intense rainfall might vary with climate change in terms of frequency and intensity, and hence the implications for flash flooding. Also interesting from my point of view were the various talks on modelling precipitation, the different ways of measuring it, and the advances in satellite technology.

I had a couple of posters at EGU, presenting my previous work on convective regimes in the UK and my current work on model uncertainty in these regimes. Many people were interested, which is always encouraging when presenting material at a conference. The poster sessions were for me the most useful part, as you are able to interact with many different people, make useful contacts and develop ideas for future collaboration.

Before my first international conference I thought the range of disciplines and the size of the event would put me off, but having attended I found the complete reverse, and would definitely go back.

Posted in Conferences, Environmental hazards, Hydrology, Students

Polar Prediction School

By Jonny Day

Over the last two weeks Dr Jonny Day has been lecturing at and coordinating a Polar Prediction School for graduate students and early career researchers. The school is a joint initiative of the World Weather Research Programme (WWRP) Polar Prediction Project, the World Climate Research Programme (WCRP) Polar Climate Predictability Initiative, and the Bolin Centre for Climate Research.

The school was based at the Abisko Scientific Research Station in northern Sweden – an appropriately Arctic environment (Figure 1). It brought together 28 PhD students and early career researchers from all over the world, and from a wide range of disciplines, for nine days of lectures and practical exercises on the theme of polar prediction. Organised by Jonny Day (University of Reading) and Gunilla Svensson (Stockholm University), the invited lecturers included Ian Brooks (University of Leeds), James Screen (University of Exeter), Helge Gossling and Thomas Jung (AWI), Cecilia Bitz (University of Washington), Don Perovich (CRREL), Erik Kolstad (University of Bergen), Jen Kay (University of Colorado), and Matthew Chevallier (Meteo France).


Figure 1. Fieldwork at the Abisko Scientific Research Station in northern Sweden

Polar regions are experiencing rapid changes to their climate. This is opening up new possibilities for businesses such as tourism, shipping, and oil and gas extraction; at the same time it brings new risks to these delicate environments. Effective weather and climate prediction is essential to managing these risks. The complexity of polar environmental systems, and the very limited measurements available in these remote regions, make them very challenging environments for which to provide accurate forecasts on any time scale from days to decades.



Figure 2. Making measurements of near-surface wind and temperature profiles and the surface energy budget using a micro-meteorology mast erected on the frozen surface of Lake Torneträsk, northern Sweden

As well as an intensive programme of lectures and modelling exercises, students conducted practical work based around measurements of near-surface wind and temperature profiles and the surface energy budget made from a micro-meteorology mast erected on the frozen surface of Lake Torneträsk (Figure 2). Radiosondes were released each day, with one day of intensive measurements where radiosondes were released every 3 hours for 24 hours to study the diurnal cycle of boundary layer structure (Figure 3). All the observations were drawn together on the final day to study the full range of processes governing the surface energy balance over the previous week. Other lectures and exercises covered chaotic systems and predictability, operational ocean prediction, modelling polar boundary layer processes, ensemble climate prediction, sea ice processes, and polar lows.

Links: Storify by Denis Sergev (UEA)


Figure 3. Radiosondes were released each day, with one day of intensive measurements where radiosondes were released every 3 hours for 24 hours to study the diurnal cycle of boundary layer structure


Posted in Climate, Climate change, Cryosphere, Measurements and instrumentation, Polar, University of Reading