Hate quants? But it’s awesome!

If you’re the average first year undergraduate student (yes, I know, nobody’s really the average, but anyhow) you’ll either really hate quants (econometrics), or you’ll feign dislike in order to avoid seeming to be a geek.

My hope is that as you learn more about economics, you’ll learn to enjoy and even love the subject more, but also realise that data, and hence econometrics, is utterly central to all of it. All of the theories we teach you in micro and macro need to be verified out there in the real world, and the only way to do that properly is to collect data about the real world. Testing theories properly also requires that we learn what the data can tell us, and what it actually is telling us. This bit is econometrics. It’s absolutely essential if we’re going to determine which economic theories are worth taking seriously, and which we can safely discard.

Data can be pretty awesome at times, too. For example, in this day and age betting is ubiquitous on all kinds of events – see www.oddschecker.com/ if you want to get some sense of this. There you’ll find data on the odds multiple bookmakers offer for events as diverse as the Premiership (Leicester City, really?!) and the next elimination on Strictly Come Dancing. These are predictions, or forecasts, about as-yet-unknown future events. Economic activity relies entirely on predictions about future events – how many sales will my company make with that new product, will that job be right for me, should I take out insurance on my new phone, and so on…

If you’re more concerned with conventional data, though, and the important messages we can learn from a proper and detailed look at it, here’s an example from yesterday on earnings. Hopefully it makes the point really clear: it’s vital for our good as a nation, and as a society, that we get our statistics right. The stagnant earnings growth that spawned the whole “cost of living crisis” (however real it felt for your dear lecturer over that period ;-)) may well have been bad statistics: a misleading calculation of the average that weights new entrants to the labour market, on low wages, equally with existing members of the workforce who are receiving more “normal” pay rises. Worth a read.
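To see how this composition effect can work, here is a minimal sketch with entirely made-up wage figures: every continuing worker gets a 3% pay rise, yet the simple average falls because new entrants arrive on low wages.

```python
# Hypothetical illustration of the composition effect: all numbers invented.
# Year 1: five existing workers.
year1 = [30000, 32000, 35000, 40000, 45000]

# Year 2: the same five workers, each with a 3% pay rise,
# plus two new entrants to the labour market on £20,000.
year2 = [w * 1.03 for w in year1] + [20000, 20000]

avg1 = sum(year1) / len(year1)
avg2 = sum(year2) / len(year2)
growth = (avg2 / avg1 - 1) * 100

# The average falls even though every continuing worker's pay rose 3%.
print(f"Average pay growth: {growth:.1f}%")
```

Nobody in that example had their pay cut, yet the headline average drops by around ten percent – which is why how the average is calculated matters so much.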

It makes a bigger point though: there’s an issue with how our statistics are calculated, and that needs to be investigated. Thankfully that is happening. I’m no fan of the Chancellor of the Exchequer, but this is one of his better moves by some distance: he has set up a review into how statistics like GDP are calculated, particularly in this day and age of masses of data (think about how much data Tesco and Sainsbury’s must have on you). Dry stuff I’ll grant you, but this section is particularly relevant for the first week of term after Christmas:

The Review was prompted by the increasing difficulty of measuring output and productivity accurately in a modern, dynamic and increasingly technological economy. In addition, there was a perception that ONS were not making full use of the increasingly large volume of information that was becoming available about the evolution of the economy, often as a by-product of the activities of other agents in the public and private sectors. Finally, frequent revisions to past data, together with several recent instances where series have turned out to be deficient or misleading, have led to a perception by some users that official data are not as accurate and reliable as they could be.

Discussions on Deaton

Slow start today to blogging; my urgent attention was directed towards preparing the discussion at our weekly Conversations slot this week, the topic being birthday boy and recent Nobel prize-winner Angus Deaton.

I’ve prepared these slides: https://www.dropbox.com/s/7n6u1wgq6eiqfq2/deaton.pdf?dl=0

Hopefully they are of use, I’ll be delighted if they generate some interesting discussion.

Deaton seems to be, by and large, a very empirical man, which means he has less inclination towards wild outspoken comments that might make a discussion spicy.

He has advocated using more and more micro data when saying things about the aggregate level, the macroeconomy, so he’s far from irrelevant when it comes to our thinking about macroeconomics in the Spring. Such a focus on individual-level, or disaggregated, data does raise the question of how widely applicable any conclusions drawn from it will be. Individual data is messy as a general rule, whereas aggregate data is that bit less messy (albeit full of puzzles due to the effects of aggregation).

Interestingly enough, this appears to be his main criticism of the use of Randomised Control Trials (RCTs) in a lot of economic development work: do their findings really apply more widely? An RCT is a trial in economics that resembles a medical trial – a treatment is randomly assigned within the population of interest, allowing more to be learnt about the impact of that treatment than would be the case without random assignment.
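The logic of random assignment can be sketched in a few lines of simulation (all numbers here are invented): because a coin flip decides who is treated, the treated and control groups are comparable on average, and the simple difference in mean outcomes recovers the treatment effect.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

n = 10_000          # number of people in the trial
true_effect = 5.0   # the effect we hope the RCT recovers (invented)

treated, control = [], []
for _ in range(n):
    baseline = random.gauss(50, 10)   # outcome this person would have anyway
    if random.random() < 0.5:         # random assignment: a coin flip
        treated.append(baseline + true_effect)
    else:
        control.append(baseline)

# Difference in group means estimates the treatment effect.
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated treatment effect: {estimate:.2f}")  # close to 5
```

Deaton’s worry, roughly, is not with this internal logic but with external validity: an estimate recovered this cleanly in one trial population may not carry over to a different time or place.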

Probably most controversially, Deaton has argued that foreign aid is useless – mainly because it undermines local governments, and he argues that poorer countries need their governments to function better in order to grow in the long term.

See you at 1pm in HumSS 125!