A Random Blog

By Peter Clark

As a young scientist I was introduced to turbulent flow in the traditional way – we consider an ‘infinite ensemble of realisations’ of a random flow, and split each realisation into the average over the ensemble and the ‘random’ fluctuations. I remember being unsatisfied by this approach. Classical physics is not random! What actually is this ‘ensemble’? Why treat the fluctuations as just random noise when any curious eye can see there is a rich structure to the flow?

Many of these questions have (at least partially) been answered by the revolution in mathematics and thinking that is chaos theory (and siblings such as ergodic theory). Perhaps the most remarkable result is that some systems in which the future state is perfectly predictable in terms of the current state (‘deterministic’) evolve to become indistinguishable from a random system. The system ‘forgets’ its initial state, in the sense that tracking backwards to find it requires increasingly accurate knowledge of the current state the further one goes back, to a degree which soon becomes beyond any kind of practicality. This is the converse of the problem of forecasting.
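This ‘forgetting’ can be seen in a few lines of code. Below is a minimal sketch using the Lorenz (1963) system with its standard parameters (σ = 10, ρ = 28, β = 8/3); the step size, the integration length and the 10⁻⁸ initial perturbation are illustrative choices, not anything prescribed. Two realisations that begin almost identically end up as far apart as any two random points on the attractor.

```python
# Sensitive dependence in the Lorenz (1963) system: two trajectories
# starting 1e-8 apart become completely decorrelated.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance one state (x, y, z) by a single 4th-order Runge-Kutta step."""
    def rhs(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # perturb one coordinate by 1e-8

for _ in range(2500):                 # ~25 time units of evolution
    a, b = lorenz_step(a), lorenz_step(b)

# The tiny initial separation has grown until it is limited only by the
# size of the attractor itself; recovering the initial state from the
# final one would require impossibly precise knowledge of it.
print(np.linalg.norm(a - b))
```

Run forwards, the error growth is exponential; run backwards, the same growth is what makes the initial state unrecoverable in practice.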

At the same time, the computer revolution has enabled us to simulate the evolution of at least a finite sample of an ‘ensemble’ explicitly. In weather forecasting, sampling the ‘ensemble of initial states’ was pioneered with considerable success (and rigour) by ECMWF and is now a standard methodology.

Ensemble techniques are now widespread practice for expressing (often poorly defined) ‘uncertainty’. This powerful approach has become so universal that we often forget to ask the question ‘what ensemble?’ The mere use of an ensemble technique is sometimes taken to give credibility to a piece of work. Too often, arbitrary random perturbations or, worse, an arbitrary mixture of model configurations are used to express ‘uncertainty’, even though it is difficult to know exactly what the results actually mean. While all science is uncertain, perhaps unsurprisingly, some users reject ‘uncertain’ advice with the cry ‘I need to be sure!’

We can, however, return to real physical ensembles arising from the turbulent processes in the atmosphere as an example where uncertainty really matters. When we build weather and climate models, we have to approximate (‘parametrize’) small-scale aspects of the flow – where ‘small-scale’ may mean anything below a few km up to several hundred km, depending on the model and application. We simply don’t know how to predict an individual realisation of these small scales, and there is no reason to suppose it is even possible. However, we do know that, with some restrictions, we can accurately predict an ‘ensemble mean’ behaviour of the small-scale flow. So we use that instead.

The trouble is, we don’t live in an ‘ensemble mean’ world – we live in ‘one realisation’. However, by returning to the quite rigorously defined ensemble, we can also make predictions about the variability of realisations. Figure 1 illustrates this with a very simple model of a real turbulent system. In practical weather forecast models we have shown that using physically realistic random variability can significantly improve the performance of a model (even if the ensemble system we use remains a simplification of the real world) – for example, thunderstorms may form at a more realistic time and evolve more realistically. The downside is that so-called ‘deterministic’ forecasts are an impossibility. Behaving like the real world means behaving, to a certain extent, randomly. Physical realism and not being sure go hand in hand.

[Figure 1: three panels, a–c; see caption below]

Figure 1. Results using an ensemble of 10,000 realisations of the Lorenz (1963) simple model of Rayleigh-Bénard convection.
1a)     Two realisations of the rate of heating at z = 0.75 times the height of the system. The ensemble mean must be zero.
1b)     The position of each realisation in phase space – the ensemble is randomly distributed over the ‘Lorenz attractor’ – see animation.
1c)     The standard deviation of the time-averaged heating rate as a function of averaging time. The red line varies as 1/averaging time.
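The flavour of this ensemble experiment can be reproduced in a short script. The sketch below is a much reduced version (500 members rather than 10,000), with the x variable of the Lorenz (1963) model standing in as a proxy for the heating rate; the spin-up length, step size and ensemble size are illustrative choices, not those used for the figure.

```python
# A reduced-size sketch of the Figure 1 experiment: spread an ensemble
# over the Lorenz attractor, then watch the ensemble spread of a
# time-averaged quantity shrink as the averaging window lengthens.
import numpy as np

rng = np.random.default_rng(0)
sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.01

def step(s):
    """One RK4 step applied to an (n, 3) array of ensemble members."""
    def rhs(s):
        x, y, z = s[:, 0], s[:, 1], s[:, 2]
        return np.stack(
            [sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=1)
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# 500 members with slightly different initial states, then a spin-up so
# the ensemble distributes itself over the attractor.
ens = rng.normal(0.0, 1.0, size=(500, 3)) + np.array([0.0, 0.0, 25.0])
for _ in range(2000):
    ens = step(ens)

# Record x for each member, then compare the ensemble spread of the
# time-averaged x for short and long averaging windows.
n_steps = 4000
xs = np.empty((n_steps, ens.shape[0]))
for i in range(n_steps):
    ens = step(ens)
    xs[i] = ens[:, 0]

for window in (100, 1000, 4000):
    spread = xs[:window].mean(axis=0).std()
    print(f"averaging window {window * dt:6.1f} time units: spread {spread:.3f}")
```

Individual realisations remain wildly different, but the spread of their time averages falls steadily as the window lengthens – the sense in which the ensemble mean is predictable even when a single realisation is not.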


Lorenz, E.N., 1963: Deterministic Nonperiodic Flow. Journal of the Atmospheric Sciences, 20(2), 130–141. doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.

This entry was posted in Numerical modelling, Weather forecasting.
