With fifteen minutes of play left in most matches on Tuesday, our scoreline predictions were spot on in seven matches, and just one goal in a number of other matches could have yielded more exact scores. It looked like it might just be a bonus week.
Then Leeds equalised at Swansea, and one by one all seven disappeared, the last when Crawley equalised with the final kick of the evening against nine-man Swindon. None of the possibilities materialised.
At the same time, while we bemoan how close we have been, we’ve also been spectacularly out. We had Stoke to start strongly and win at Leeds. We picked QPR for a surprise 1-0 win at West Brom. The actual score was 7-1 to West Brom. Last night we thought Scunthorpe would beat Fleetwood 1-0 at home, but by the 29th minute they were 4-0 down, and they eventually succumbed 5-0.
Humblings all around. We predicted no exact scores, below expectation, and we got 14 results right out of 34 matches, about 41%. That’s about the same as the frequency of home wins, meaning had we just predicted a home win in every match, we’d have done about as well.
Which brings to light the question of how we evaluate our forecasts. Do we just record a zero for a 1-7 when we picked 1-0, and a 1 when the 2-1 we predicted does actually happen? Or do we sum up how many goals out we were, so that we were 7 out for West Brom, |0-7| + |1-1| = 7, and 6 out for Scunthorpe, |1-0| + |0-5| = 6?
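The two scoring rules above can be sketched in a few lines of code. This is only an illustration of the arithmetic, not our actual evaluation pipeline; the function names and the way scorelines are passed around are assumptions for the example.

```python
def exact_score(predicted, actual):
    """All-or-nothing rule: 1 if the predicted scoreline is exactly
    right, 0 otherwise."""
    return 1 if predicted == actual else 0

def goals_out(predicted, actual):
    """Goals-out rule: sum of the absolute errors on each team's
    goal count."""
    return abs(predicted[0] - actual[0]) + abs(predicted[1] - actual[1])

# Tuesday's two worst misses, as (home, away) scorelines:
# West Brom 7-1 QPR, where we had predicted a 1-0 QPR win (i.e. 0-1)
print(exact_score((0, 1), (7, 1)))  # 0
print(goals_out((0, 1), (7, 1)))    # 7
# Scunthorpe 0-5 Fleetwood, where we had predicted 1-0
print(goals_out((1, 0), (0, 5)))    # 6
```

The all-or-nothing rule treats a near miss and a seven-goal miss identically, while the goals-out rule gives partial credit, which is exactly the trade-off posed above.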
We plan to develop the ways we evaluate forecasts a little further, not least because how we score our own forecasts shapes how we try to make them better.