If you have read my books you will know that I consider race profiling a useful tool when analysing races. It is preferable to ten- or twenty-year trends simply because a race profile is based on a much larger sample of “similar” races, which makes trends in the type of horse that wins a particular race more reliable.
Furthermore, because a larger number of races is used to form the profile, subtle changes to the trends can be highlighted more quickly than by analysing the trends of a specific race. Most changes to long-term race profiles result from the BHA tinkering with race conditions. The obvious recent example from flat racing is the increase in Stakes races for two-year-olds, but changes to race conditions also affect jumps racing.
Not so many years ago it was common to see novice chasers arrive at Cheltenham in March with a string of ones next to their name, sometimes as many as eight. This resulted from the better horses dominating the pre-Cheltenham events and then facing off at the Festival in widely anticipated contests.
This was good for racing in general: horses with long unbeaten runs were often mentioned in general sports programmes, and even on the evening news, because of the perceived magnitude of the forthcoming races in which they would meet.
However, allowing the best to dominate was perceived to have a negative impact on betting, so changes were made to the novices’ chase calendar to make it more difficult for horses to run up a sequence of wins. Such changes have an immediate effect on trends and need to be accounted for by anyone following systems.
To generate the race profiles I use custom-written software. This allows the user to define a race in general or specific terms and then creates a profile based on those attributes. More relevant to this article, the software also allows me to test and save betting systems using a range of data associated with the runners and race conditions.
Over the years I have tested, and often discarded, hundreds of systems, as well as saving a few for my own use and for inclusion in books and articles. Whilst searching through my PC recently I came across a file detailing a system that I had completely forgotten about; in fact, it was one I cannot remember ever using.
According to the date on the file it was generated in August 2011, and I suspect I had intended to use it during the 2011/12 winter jumps season. Two things about the system surprised me: firstly, it applies to handicap hurdle races, a race category that has never appealed to me from a betting perspective; secondly, it is quite complicated, with several conditions. Here it is:
For handicap hurdle races, consider all runners aged 8 or younger that won last time out, within the last 30 days, over a distance equal to or greater than today’s distance, when not favourite.
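The rules read more clearly when written out as a filter. The sketch below is purely illustrative: the data structure and field names (`age`, `days_since_last_run`, and so on) are my own assumptions for the purpose of this example, not the format my profiling software actually uses.

```python
# Hypothetical encoding of the system's rules as a runner filter.
# Field names are illustrative assumptions, not from any real data feed.

def qualifies(runner, race):
    """Return True only if the runner meets every condition of the system."""
    return (race["race_type"] == "handicap hurdle"
            and runner["age"] <= 8
            and runner["won_last_time_out"]
            and runner["days_since_last_run"] <= 30
            and runner["last_run_distance"] >= race["distance"]
            and not runner["favourite"])

# Example: a 7yo last-time-out winner, 21 days ago, over a longer trip.
race = {"race_type": "handicap hurdle", "distance": 16.0}   # distance in furlongs
runner = {"age": 7, "won_last_time_out": True, "days_since_last_run": 21,
          "last_run_distance": 17.0, "favourite": False}
print(qualifies(runner, race))  # True
```

Writing the rules this way also makes it obvious how many separate conditions a runner must satisfy, which is relevant to the back-fitting concern discussed later.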
For the period of analysis the method produced 1,359 bets and made a level-stake profit of 4p/£ at Bookmakers’ starting price. This seemed a reasonable rate of return, so I thought it might be worth testing over the intervening years just to see how it performed subsequently.
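For readers unfamiliar with the notation, “p/£” is simply level-stake profit (or loss) per £1 staked. A minimal sketch of the calculation, using made-up results rather than the system’s actual bets:

```python
def profit_per_pound(results):
    """Level-stake return. results is a list of (decimal_odds, won) pairs,
    each backed to a £1 stake; returns profit (or loss) per £1 staked."""
    staked = len(results)                                   # £1 on every bet
    returned = sum(odds for odds, won in results if won)    # £1 x odds back per winner
    return (returned - staked) / staked

# Four hypothetical £1 bets: one winner at 5.0 (4/1), three losers.
sample = [(5.0, True), (3.0, False), (6.0, False), (4.5, False)]
print(f"{profit_per_pound(sample):+.2f}/£")  # +0.25/£, i.e. 25p/£
```

So a 4p/£ return over 1,359 bets corresponds to roughly £54 profit at £1 level stakes.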
After entering the conditions into the profiling software, I could immediately see why I had bothered to save the system: from 2007/08 to 2010/11 the annual profits were +21p/£, +1p/£, +12p/£, and +15p/£. No doubt I would have been confident going into the first betting year, 2011/12.
However, as is often the case, and a good reason for live testing any approach, the return for 2011/12 was paltry: 38 winners from 204 bets for a loss of 17p/£, which is possibly why I had forgotten about this approach. So, what went so badly wrong?
It could have been a change of race conditions, although handicap hurdles are pretty consistent in that regard. Or, more simply, back-fitting may have over-conditioned the method, producing one that would not perform as well in subsequent races.
But in this case, and with the benefit of hindsight, I’m not sure it was either of these. One reason that is often forgotten when a system fails within a short time period is simply statistical randomness. In the four years up to 2011/12 the method had produced win rates of 26%, 23%, 25%, and 26%. For the first “live” year the win rate was just 19%. Maybe it was a freak year and fewer horses won than would have been expected.
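The “freak year” explanation can be put on a rough footing with an exact binomial tail probability: if the true strike rate really were around 25%, how often would a 204-bet season throw up 38 winners or fewer? The figures are from the seasons above; the calculation itself is just a standard binomial model, which ignores the varying odds and circumstances of the individual selections.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 204 bets, assumed true win rate of 25%: chance of 38 winners or fewer.
p_bad_season = binom_cdf(38, 204, 0.25)
print(f"P(38 or fewer wins) = {p_bad_season:.3f}")
```

The tail probability comes out small, on the order of a couple of per cent: unlucky in any single season, but far from impossible, and across the hundreds of systems I have tested a few such outliers are almost guaranteed to appear.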
Fortunately, we have the luxury of being able to look at years beyond that first year to verify this. In 2012/13 the win rate was back up to 27% and didn’t drop below 23% for each subsequent season to 2016/17.
Clearly, 2011/12 was an outlier; unfortunately, it happened to coincide with the first season in which I would have used the method and, as a result, I would possibly have discarded it.
Over the full ten-year period to the end of last season the method produced 1,855 bets and made a 2p/£ profit at Bookmakers’ starting price. The profit is lower than for the original test period, but that is to be expected, and to make a profit at all at industry starting price is impressive, because we know that bets should never be placed with bookmakers near to the off time.
Switching to exchange prices, this profit increases from 2p/£ to 10p/£ (19p/£ for the 222 bets in 2016/17) with only one losing year, and you know which one that was.
Analysing the selections by other factors shows a reasonable level of consistency, although seven-day winners have a much higher win rate, at 37%, and current race favourites won at 35%. The level of consistency across the other variables suggests this method may be worth employing in the future, and it will be interesting to see how it gets on this season. Maybe I should reanalyse some of my other short-lived systems just to see whether they really were failures or merely victims of statistical variability.