Melbourne Cup: Quant Style Going Max Active
Thanks to a subscriber for this report, which arrived a day too late for me to post before the “race that stops a nation”. Here’s to next year. This section may be of interest:
As the Melbourne Cup rolls around again for 2017, we turn our attention back to that most worthy and intellectually satisfying of pursuits: figuring out how to take a good punt at the races. Despite the lacklustre performance of the Macquarie Quant Halpha Model at the 2016 Melbourne Cup, we believe that the fundamental approach behind the model – to pick undervalued horses rather than those with the greatest absolute probability of winning – remains the rational objective for the fiscally minded punter. Our model does this by identifying factors that other punters systematically overvalue.
For example, punters tend to overvalue the form of a horse. Hence, while horses with good form are indeed more likely to win, the odds offered on them are typically too short to justify them as a systematically profitable bet. As with our standard Macquarie Quant Alpha Model, the Halpha model is designed to statistically capture inherent biases in the preferences of other market participants. These biases skew both betting odds and stock prices away from fair valuation. Quantitative models such as the Quant Halpha Model (and our regular Alpha Model) then take advantage of these inefficiencies by betting (or trading) against the direction of the skew.
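The logic described above – that a horse can be likely to win yet still be a bad bet – reduces to an expected-value check: a bet is attractive only when the estimated win probability exceeds the probability implied by the quoted odds. The sketch below is illustrative only; the function names and the probabilities and odds used are assumptions, not values from the Macquarie model.

```python
def implied_probability(decimal_odds: float) -> float:
    """Win probability implied by decimal odds (ignoring the bookmaker's margin)."""
    return 1.0 / decimal_odds

def expected_value(model_prob: float, decimal_odds: float, stake: float = 1.0) -> float:
    """Expected profit per bet: win stake * (odds - 1) with probability
    model_prob, lose the stake otherwise."""
    return stake * (model_prob * (decimal_odds - 1.0) - (1.0 - model_prob))

# A short-priced horse in good form: likely to win, but overbet by the market.
print(round(expected_value(model_prob=0.30, decimal_odds=3.0), 3))  # -0.1
# A longer-priced horse the market underrates: an undervalued ("value") bet.
print(round(expected_value(model_prob=0.15, decimal_odds=9.0), 3))  # 0.35
```

The second horse is far less likely to win, but because its model probability (0.15) exceeds the market-implied probability (1/9 ≈ 0.11), it is the systematically profitable side of the trade.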
To improve the confidence and robustness of the Halpha model, this year we have a (not-so-secret) secret weapon: more data. Thanks to Luke Byrne and Jared Pohl, the fine folks behind Kaggle’s Horses for Courses dataset, we have been provided access to data for an additional 3,700 horse races from 2017 to complement the existing dataset of 3,400 races from 2016. The expanded sample both allows a larger training set for constructing the Halpha model and enables us to partition out-of-sample validation and test sets.
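The partitioning the report describes can be sketched as a simple chronological split: fit on the older 2016 sample and divide the newer 2017 races into held-out validation and test sets. The 50/50 split ratio and the use of race counts as stand-ins for the actual race records are assumptions for illustration.

```python
import random

def split_races(races_2016, races_2017, val_frac=0.5, seed=42):
    """Train on the older sample; shuffle the newer races and divide them
    into a validation set (for tuning) and a test set (for final evaluation)."""
    held_out = list(races_2017)
    random.Random(seed).shuffle(held_out)  # fixed seed for reproducibility
    n_val = int(len(held_out) * val_frac)
    return list(races_2016), held_out[:n_val], held_out[n_val:]

# Sample sizes taken from the report: 3,400 races from 2016, 3,700 from 2017.
train, val, test = split_races(range(3400), range(3700))
print(len(train), len(val), len(test))  # 3400 1850 1850
```

Holding out the most recent races, rather than a random slice of the pooled data, guards against the model quietly learning from races it will later be evaluated on.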
While inventing strategies for horse-racing betting markets is mostly just a bit of fun, the quantitative processes we apply here (i.e. identifying the forecast parameters and detecting pricing inefficiencies) largely reflect those used in the more sophisticated cash equity markets. Compared with the latter, betting markets provide a cleaner environment for predictions based on behavioural biases, with less interference from macroeconomic cycles and idiosyncratic news-flow. As such, the Halpha model is a useful didactic tool for exploring the concepts underlying quantitative equities models.
Here is a link to the full report.
I have fond memories of the Melbourne Cup from my time living down under in 1999/2000 and thought this report would offer a retrospective look at what might have worked and how to plan for next year’s speculation. Max Dynamite placing in today’s race will have offered at least some comfort to the Macquarie team’s buy-low strategy.
The pace of technological innovation and the simultaneous digitisation of just about every data set mean that quantitative research is permeating every area of our lives. Horse racing might not be the most productive use to which it is put, but the reality is that every cashflow, every keystroke, every clickthrough, every mile driven and, increasingly, every word spoken forms part of an ever-larger dataset that can be analysed to optimise how each consumer’s monthly cashflow is allocated.