Hate quants? But it’s awesome!

If you’re the average first-year undergraduate student (yes, I know, nobody’s really the average, but anyhow), you’ll either really hate quants (econometrics), or you’ll feign dislike to avoid seeming like a geek.

My hope is that as you learn more about economics, you’ll learn to enjoy and even love the subject more, but also realise that data, and hence econometrics, is utterly central to all of it. All of the theories we teach you in micro and macro need to be verified out there in the real world, and the only way to do that properly is to collect data about the real world. Testing theories properly also requires that we learn what the data can tell us, and what it actually is telling us. This bit is econometrics. It’s absolutely essential if we’re going to determine which economic theories are worth taking seriously, and which we can safely discard.

Data can be pretty awesome at times, too. For example, in this day and age betting is ubiquitous on all kinds of events – see www.oddschecker.com/ if you want to get some sense of this. There you’ll find data on the odds multiple bookmakers offer for events as diverse as the Premiership (Leicester City, really?!) and the next elimination on Strictly Come Dancing. These are predictions, or forecasts, about as-yet-unknown future events. Economic activity relies entirely on predictions about future events – how many sales will my company get with that new product, will that job be just right for me, should I take out insurance on my new phone, and so on…

If you’re more concerned about conventional data, though, and the important messages a proper, detailed look can reveal, here’s an example from yesterday on earnings. Hopefully it makes the point really clear: it’s vital for our good as a nation, and as a society, that we understand our statistics. The stagnant earnings growth that spawned the whole “cost of living crisis” (however real it felt for your dear lecturer over that period ;-)) may well have been an artefact of bad statistics: a misleading calculation of the average that treats new entrants to the labour market, who are on low wages, the same as existing members of the workforce receiving more “normal” pay rises. Worth a read.
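To see how that averaging problem can bite, here’s a toy sketch in Python (the numbers, and the snippet itself, are entirely made up for illustration – this is not the ONS’s actual calculation): every worker employed in both years gets a 3% pay rise, yet the simple average of wages falls, because new entrants join on much lower pay and drag the mean down.

```python
# Toy illustration of a composition effect (invented numbers, not real data):
# every continuing worker gets a 3% pay rise, but the average wage still falls
# because new entrants join the workforce on much lower pay.

incumbent_wages_last_year = [30000, 35000, 40000, 45000, 50000]
incumbent_wages_this_year = [w * 1.03 for w in incumbent_wages_last_year]  # 3% rise for everyone

new_entrant_wages = [18000, 19000, 20000]  # joined this year, on low pay

avg_last_year = sum(incumbent_wages_last_year) / len(incumbent_wages_last_year)
avg_this_year = (sum(incumbent_wages_this_year) + sum(new_entrant_wages)) / (
    len(incumbent_wages_this_year) + len(new_entrant_wages)
)

print(f"Average wage last year: {avg_last_year:,.0f}")  # 40,000
print(f"Average wage this year: {avg_this_year:,.0f}")  # roughly 32,900
print("...yet every worker employed in both years received a 3% rise.")
```

A statistic that tracked only the workers present in both years (or weighted the two groups appropriately) would tell a very different, and arguably more honest, story about pay growth.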

It makes the bigger point, though: there’s an issue with how our statistics are calculated, and that needs to be investigated. Thankfully that is happening. I’m no fan of the Chancellor of the Exchequer, but this is one of his better moves by some distance: he has set up a review into how statistics like GDP are calculated, particularly in this day and age of masses of data (think about how much data Tescos and Sainsburys must have on you). Dry stuff, I’ll grant you, but this section is particularly relevant for the first week of term after Christmas:

The Review was prompted by the increasing difficulty of measuring output and productivity accurately in a modern, dynamic and increasingly technological economy. In addition, there was a perception that ONS were not making full use of the increasingly large volume of information that was becoming available about the evolution of the economy, often as a by-product of the activities of other agents in the public and private sectors. Finally, frequent revisions to past data, together with several recent instances where series have turned out to be deficient or misleading, have led to a perception by some users that official data are not as accurate and reliable as they could be.

Evidence? Who needs evidence…

Economics is an “observational science”. What does that mean? It means that, by and large, it’s something we observe rather than something we recreate in the laboratory under controlled conditions.

Why does this matter? Because it makes it much harder to make strong statements about what caused what. If we can’t be sure that something else didn’t cause what we’re looking at, then that casts doubt on our proposed explanation.

This is perhaps most acute in the area of public policy, particularly when it comes to the macroeconomy. Earlier in the week the Chancellor of the Exchequer (in charge of fiscal policy) was able to make a lot of concessions to a lot of people because of better-than-expected growth (growth that is, of course, still only expected to happen). But is that better-than-expected growth due to great fiscal policy since 2010 (as some might argue), or due to other, more favourable factors (less austerity than the government intended since 2010)? Or something else entirely – better growth among our trading partners? The answer to this question matters a lot, since it tells us which kind of fiscal policy would be more effective.

It’s always interesting to draw parallels with other fields, particularly when things are topical. As you’ll be aware, at the moment there are reforms being proposed for the NHS – namely to turn it into a seven-day service from its current (supposedly) non-seven-day service. A huge amount has been written and said and shouted about this, but the bottom line is that if you’ve ever been in hospital at a weekend you’ll know the NHS does indeed function at weekends.

As such, the argument has switched to whether it’s riskier to be admitted at a weekend. Two studies have been relied upon, but the problem is that these studies are based on observational data. A comment published two days ago in the British Medical Journal (BMJ) says:

1. This is an observational study and cannot determine causation.

This is, alas, true. We cannot determine it. We can have a bloody good go at it, nonetheless, and you’ll learn a lot in econometrics about how we can go about this. But we cannot determine causation; we can only be confident about it to some degree – usually 95%.
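To make the confounding problem concrete, here’s a purely made-up simulation (the numbers and variable names are invented for illustration; this is not a claim about what actually drives the NHS figures). In this toy world, weekend admission has no causal effect on mortality at all – sicker patients simply happen to arrive at weekends – and yet a naive comparison of the raw observational data makes weekends look deadly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented world: patients admitted at the weekend tend to be sicker
# (severity is the hidden confounder), but the weekend itself has NO
# causal effect on whether a patient dies.
weekend = rng.random(n) < 0.25                       # ~25% of admissions at weekends
severity = rng.normal(0.0, 1.0, n) + 0.5 * weekend   # weekend admissions are sicker on average
p_death = 1.0 / (1.0 + np.exp(-(-3.0 + severity)))   # mortality depends on severity only
died = rng.random(n) < p_death

# Naive observational comparison: weekend admissions look riskier...
print("Mortality rate, weekday admissions:", round(died[~weekend].mean(), 4))
print("Mortality rate, weekend admissions:", round(died[weekend].mean(), 4))

# ...but comparing only similarly sick patients shrinks the gap considerably.
sickest = severity > 1.0
print("Among the sickest patients:",
      round(died[sickest & ~weekend].mean(), 4), "vs",
      round(died[sickest & weekend].mean(), 4))
```

Adjusting for the confounder is easy here because we invented it; in real observational data the whole difficulty is that severity, and a dozen things like it, may be measured badly or not recorded at all. Much of your econometrics training is about what can (and can’t) be done in that situation.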

This doesn’t nullify economics as a field of study, nonetheless – there’s a lot we can understand much more clearly through further study – but it is important to keep this in mind when studying economics…