Don’t (always) blame the weather forecaster

By: Ross Bannister

There are (I am sure) numerous metaphors suggesting that a small, almost immeasurable event can have a catastrophic outcome – that adding the proverbial straw to the camel’s load will break its back. In 1972, the mathematical meteorologist Ed Lorenz famously gave a presentation titled “Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?” Unlike the camel’s straw, this title was not intended to be taken literally; rather, it asked how a system like the Earth’s atmosphere responds to vanishingly small perturbations. But could a butterfly’s flap really have such consequences? Without the ability to experiment on two or more otherwise identical Earths, demonstrating this directly is impossible.

Learning from computer simulations

Atmospheric scientists are acutely aware that computer-derived forecasts are sensitive to the ‘initial conditions’ provided to them. Modern weather forecasting is done by representing the atmosphere at an initial time with vast sets of numbers stored inside a computer (this set is called the initial conditions of the model). The computer marches this state forward in steps into its version of the future. The rules that the computer uses for this task boil down to Newton’s laws of motion (i.e. how forces acting on air masses change their motion), together with other processes that affect the behaviour of the atmosphere, like heating and cooling by radiation and by condensation/evaporation of water. Unlike in the real world, it is possible in the computer to create two sets of initial conditions that are identical apart from small differences, and then to let the computer calculate the two possible future states.

Sensitivity to initial conditions

So, what do scientists find from these experiments? At first the forecasts are virtually indistinguishable, but at some point they start to show noticeable differences. These typically appear on small scales and then start to affect larger scales (a process known as the inverse energy cascade). Lorenz discovered this serendipitously in 1961 when he ran simplified weather simulations on a research computer (a valve/diode-based Royal-McBee LGP-30 with the equivalent of 16 kilobytes of memory). He found that if he stopped a simulation and restarted it with similar, but rounded, sets of numbers representing the weather, the computer simulated weather patterns that became very different from those it would have forecast had he not stopped and restarted it. Lorenz had discovered sensitive dependence on initial conditions (or colloquially, the “butterfly effect”, in connection with the title of his presentation). Faced with two such different outcomes, which one, if any, is the better forecast? Hmm …

Figure 1:

Numerical solutions of two x, y, z trajectories obeying the (non-linear) Lorenz-63 equations to demonstrate sensitive dependence on initial conditions (red and yellow lines/points). At t = 0 the initial conditions are indistinguishably close and at t = 3 the two trajectories virtually overlap. At t = 6 small differences appear, which become more obvious at t = 9. By t = 12 and t = 15 the two trajectories are so different that they occupy separate branches. The beauty of the structure that emerges by solving the Lorenz-63 equations is quite amazing. For the record, the Lorenz-63 equations are: dx/dt = σ(y − x), dy/dt = −xz + rx − y, and dz/dt = xy − bz, with σ = 10, r = 28, and b = 8/3. The products of pairs of the variables (xz and xy) give these equations their non-linear property.

Try this at home

This effect is also seen in simple non-linear equations. In 1963, Lorenz published a seminal work, “Deterministic nonperiodic flow”, in which he introduced equations that describe how three variables, x, y, and z, change in time. These equations may be regarded as representing a highly simplified version of the atmosphere. It is only possible to solve them approximately, with the help of a computer (note to reader – try this, it’s fun!). One can visualise the solution by taking the x, y, and z values at a given time as the co-ordinates of a point in space. Joining the points up in time shows the forecasts as trajectories, and one may think of different positions as representing different kinds of weather. Figure 1 shows two such trajectories (red and yellow), whose initial conditions are nearly identical at time t = 0. As time progresses they diverge, slowly at first, until by t = 12 they represent completely different states (note the resemblance to a butterfly).
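If you do want to try this at home, a few lines of Python suffice. The sketch below integrates the Lorenz-63 equations from the figure caption with a standard fourth-order Runge–Kutta scheme; the step size, initial states, and size of the nudge are my own illustrative choices, not values from Lorenz’s paper.

```python
# Lorenz-63 integrated with a fourth-order Runge-Kutta scheme.
# Step size, initial states, and perturbation are illustrative choices.

def lorenz63(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = s
    # dx/dt, dy/dt, dz/dt as given in the figure caption.
    return (sigma * (y - x), -x * z + r * x - y, x * y - b * z)

def rk4_step(s, dt):
    def nudge(state, k, h):
        return tuple(v + h * kv for v, kv in zip(state, k))
    k1 = lorenz63(s)
    k2 = lorenz63(nudge(s, k1, dt / 2))
    k3 = lorenz63(nudge(s, k2, dt / 2))
    k4 = lorenz63(nudge(s, k3, dt))
    return tuple(v + dt / 6 * (a + 2 * p + 2 * q + d)
                 for v, a, p, q, d in zip(s, k1, k2, k3, k4))

def forecast(s, t_end, dt=0.01):
    for _ in range(round(t_end / dt)):
        s = rk4_step(s, dt)
    return s

start = (1.0, 1.0, 1.0)
nudged = (1.0 + 1e-5, 1.0, 1.0)  # differs by one part in 100,000

early = forecast(start, 3.0), forecast(nudged, 3.0)
late = forecast(start, 25.0), forecast(nudged, 25.0)
print("t = 3: ", early)   # still essentially identical
print("t = 25:", late)    # the two forecasts no longer resemble each other
```

Plotting the full trajectories (rather than just the end states) reproduces the butterfly-shaped structure of Figure 1.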

Ensemble weather prediction

Scientists routinely run large models from many initial conditions, each subject to a slight variation – a technique called ensemble forecasting. The initial conditions differ by amounts comparable to the uncertainty in our knowledge of the current weather, estimated from observations and previous forecasts. These are combined in a physically consistent way using data assimilation (which is my area of research). As a rule of thumb, differences in the small-scale weather patterns emerge first. Indeed, if the forecast grid is fine enough to resolve cloud systems then the ensemble members will likely first disagree in their forecasts of convective events, like showers and thunderstorms. This is why patterns of convective precipitation are so hard to predict beyond a few hours. One forecast may predict heavy rain at a particular location between 4.00 and 4.10pm, another between 4.30 and 4.35pm, and another may predict no heavy rain at all. Ensemble forecasting allows forecasters to understand the range of likely outcomes (usually all ensemble members will predict heavy showers, but in slightly different locations), and to give probabilistic forecasts for individual locations. While small-scale features will differ, large-scale weather patterns (such as high and low pressure systems) are usually predicted accurately at these early stages. As forecast time progresses the uncertainty spreads to larger scales and eventually even the large-scale systems become unpredictable.
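The idea can be mimicked with the Lorenz-63 equations from the figure caption standing in for a weather model: run many copies from slightly different starting points and watch the ensemble spread grow. The member count, perturbation sizes, and simple Euler time-stepping below are my own illustrative choices, not anything used operationally.

```python
# A toy ensemble forecast with the Lorenz-63 equations. Member count,
# perturbation size, and step size are illustrative choices only.

def euler_step(s, dt=0.002):
    x, y, z = s
    return (x + dt * 10.0 * (y - x),
            y + dt * (-x * z + 28.0 * x - y),
            z + dt * (x * y - 8.0 / 3.0 * z))

def spread(members):
    # Population standard deviation of the members' x coordinates.
    xs = [m[0] for m in members]
    mean = sum(xs) / len(xs)
    return (sum((v - mean) ** 2 for v in xs) / len(xs)) ** 0.5

# Ten members whose initial x values differ by parts in 100,000.
members = [(1.0 + (i - 4.5) * 1e-5, 1.0, 1.0) for i in range(10)]

spreads = {}
t, dt = 0.0, 0.002
for target in (1.0, 10.0, 25.0):
    while t < target:
        members = [euler_step(m, dt) for m in members]
        t += dt
    spreads[target] = spread(members)

print(spreads)  # the spread grows by orders of magnitude with lead time
# A crude probabilistic forecast from the ensemble:
print("fraction of members with x > 0 at t = 25:",
      sum(m[0] > 0 for m in members) / len(members))
```

Early on, the members agree closely (a confident forecast); by the end they are scattered over the attractor, and only probabilistic statements remain useful.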

Fundamental limits

As a rule of thumb, km-scale motion is predictable to no more than about half an hour, 10 km scales to about one hour, 100 km scales to about 10 hours, 1000 km scales to about one day, 10000 km scales to about four or five days, and the largest scales to no more than about a week or two. In extra-tropical regions, for example, a particular kind of atmospheric instability (baroclinic instability) operates between scales of around one to three thousand km and can lower predictability at those scales. For this reason, observing the weather at these scales is given special attention, so that the uncertainty in the initial conditions is reduced there.
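The finite nature of these limits shows up even in the Lorenz-63 toy system from the figure caption: because errors grow roughly exponentially until they saturate, shrinking the initial error by a factor of a thousand buys only a fixed extra increment of useful forecast time. The sketch below (the divergence threshold, step size, and initial states are my own illustrative choices) measures how long a ‘forecast’ stays close to the ‘truth’ as the initial error shrinks.

```python
# How long does a Lorenz-63 'forecast' track the 'truth' as the initial
# error shrinks? Threshold, step size, and states are illustrative.

def euler_step(s, dt=0.002):
    x, y, z = s
    return (x + dt * 10.0 * (y - x),
            y + dt * (-x * z + 28.0 * x - y),
            z + dt * (x * y - 8.0 / 3.0 * z))

def lead_time(error, threshold=1.0, dt=0.002, t_max=100.0):
    # Integrate truth and forecast together until they differ by more
    # than the threshold, and return the elapsed model time.
    truth, fcst = (1.0, 1.0, 1.0), (1.0 + error, 1.0, 1.0)
    t = 0.0
    while t < t_max:
        truth, fcst = euler_step(truth, dt), euler_step(fcst, dt)
        t += dt
        gap = sum((a - b) ** 2 for a, b in zip(truth, fcst)) ** 0.5
        if gap > threshold:
            break
    return t

times = {e: lead_time(e) for e in (1e-3, 1e-6, 1e-9)}
print(times)  # each 1000-fold error reduction buys a similar, fixed gain
```

The diminishing return is the point: even a millionfold improvement in the initial conditions extends the usable forecast by only a bounded amount of time.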

[We should note that climate models make projections many years, decades, or centuries into the future, and use the same building blocks as weather models. Climate models, though, predict different things: long-time-averaged conditions rather than the weather at particular times. This is a meaningful exercise as long as realistic forcings (e.g. the radiative forcing associated with changes in greenhouse gas concentrations in the atmosphere) are known.]

Room for improvement?

So what hope is there of improving weather prediction given these fundamental limits? There are other factors that can be improved. The spread in the ensemble’s initial conditions can be reduced with more observations and better assimilation. Model error can also be reduced. No model is perfect, but there is room for improvement by decreasing the grid size and time step (severely restricted by cost and available computer power), and by improving the representation of physical processes (also restricted by computing power and by research activity). While scientific and technological barriers can be broken, the fundamental limits of nature cannot. As the air motion of the butterfly’s flap mixes with all the other fluctuations, it is impossible to say exactly how it will change the course of the atmosphere – just that it will.


Lorenz E.N., The Essence of Chaos, UCL Press Ltd., London (1993), ISBN-13: 978-0295975146. A readable and thought-provoking popular account of chaos theory.

Lorenz E.N., Deterministic nonperiodic flow, Journal of the Atmospheric Sciences 20 (1963), 130–141, DOI:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2. An exploration of the derivation and interpretation of the Lorenz-63 equations.

Lorenz E.N., The predictability of a flow which possesses many scales of motion, Tellus 21 (1969), 289–307, DOI:10.1111/j.2153-3490.1969.tb00444.x. This paper explores different kinds of predictability and how predictability depends on scale.

Tribbia J.J. and Baumhefner D.P., Scale interactions and atmospheric predictability: An updated perspective, Monthly Weather Review 132 (2004), 703–713, DOI:10.1175/1520-0493(2004)132<0703:SIAAPA>2.0.CO;2. An update on earlier work of Lorenz with more modern weather prediction models.

Palmer T.N., Döring A., and Seregin G., The real butterfly effect, Nonlinearity 27 (2014), R123–R141, DOI:10.1088/0951-7715/27/9/R123. A discussion of how the term “butterfly effect” properly refers to a finite-time limit to predictability in fluids with many scales of motion.

Data Assimilation Research Centre, What is data assimilation?, A brief introduction to data assimilation.

Met Office, The Met Office ensemble system, An introduction to the Met Office’s ensemble prediction system.

University of Hamburg, Forecast diagrams for Europe, Choose a European city for ensemble forecasts of temperature and precipitation. A graphic illustration of the growth of uncertainty with forecast time from weather forecast models.
