Neil Dewart

An Examination of the Inaugural World Test Championship

With the ongoing series between England and India signalling the beginning of the second World Test Championship, it felt like a great time to look back at the inaugural competition, which concluded recently with New Zealand crowned champions after beating India in a dramatic final in Southampton. Specifically, it was a chance to re-examine a previous post on the site, which discussed the various pros and cons of the points system used to rank the teams, and how likely we were to see an ultimately fair outcome.


As we noted then, there were various oddities to the league system used in the World Test Championship - mostly derived from fitting the competition around a pre-existing and irregular fixture schedule - which gave rise to a number of concerns about the integrity of the competition. Some commentators even went as far as to declare the competition an "unjust farce".


In that post we identified three factors that might unfairly skew the outcome of the competition - 1) some teams had an 'easier' set of opponents, 2) differences in series lengths meant that some Test matches counted for more points than others, and 3) some teams played a higher proportion of their matches at home. We then applied three different scoring systems to a hypothetical set of World Test Championship results and compared the outcomes to see if we really were likely to see the best two sides in the final.


We will do so again here, only this time armed with the actual results from the tournament. We will look at the same three systems - the official system used in the tournament, an adjusted 'points per game' system based on a more traditional league table structure, and a Bradley-Terry ranking system - a modification of a logistic regression that we use extensively elsewhere on the site.


First, we will look at the final standings under the system used in the tournament itself. It's worth noting that, because several series were cancelled owing to the COVID-19 pandemic, some teams did not have the opportunity to earn as many points as others, so the scoring system was adjusted and teams were instead ranked by the percentage of points they earned out of those they contested.
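As a quick illustration of how that adjustment works, here is a small sketch of the calculation. The team names and point totals below are invented purely for the example and are not the real WTC tallies.

```python
# Illustration of the adjusted standings rule: rank teams by the percentage of
# contested points they actually won, so sides whose series were cancelled are
# not penalised for having fewer matches. Figures are invented for the example.
standings = {
    # team: (points won, points contested)
    "Team A": (430, 600),
    "Team B": (340, 480),
    "Team C": (500, 720),
}

ranked = sorted(standings.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for team, (won, contested) in ranked:
    print(f"{team}: {100 * won / contested:.1f}% of contested points")
```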



Intuitively, a New Zealand vs India final seems like a fair outcome. They are both highly respected teams who have been at the top of their game over the last few years, and this is also reflected in their positions as the top two sides in the ICC Test World Rankings. It should be noted, however, that Australia received a four-point deduction for a slow over rate - without that they would have been dead level with New Zealand and would actually have contested the final ahead of them thanks to a superior runs-per-wicket ratio.


For our first point of comparison, we will look at a more traditional league points scoring system - similar to what we might see in 'round robin' sports tournaments like the 2019 World Cup or most major professional football leagues. We have gone with 4 points for a win and 2 for a draw. Since the teams played different numbers of matches, these points have been standardised to a 'points per game' figure, which is then multiplied by 13.33 - the average number of games played by each team.
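For clarity, here is a small sketch of that calculation. The win/draw/loss record is a placeholder; the 4/2 points values and the 13.33 multiplier are the ones described above.

```python
# Sketch of the 'points per game' adjustment: 4 points per win, 2 per draw,
# rescaled by each team's match count and multiplied by the average number of
# matches played across the league (13.33). The record below is hypothetical.
AVG_MATCHES = 13.33

def adjusted_points(wins, draws, losses):
    played = wins + draws + losses
    points_per_game = (4 * wins + 2 * draws) / played
    return points_per_game * AVG_MATCHES

# e.g. a hypothetical 9-3-5 record over 17 matches
print(round(adjusted_points(wins=9, draws=3, losses=5), 1))
```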


A small but significant change here sees Australia leapfrog New Zealand into second place, suggesting that Australia might count themselves a little unlucky not to have made it to the showpiece final. The fact that it was their slow over rate that ultimately cost them, however, means there is not necessarily much cause for sympathy. We also see Pakistan leapfrog South Africa into 5th, but elsewhere things remain as they were in the official standings.


As we noted in the previous article referenced above, this reflects well on the WTC scoring system, since an alternative scoring system gives broadly similar results. Even so, this comparison still doesn't account for some of the more contentious aspects we identified earlier - in particular the uneven fixture schedule and the fact that some teams played more of their games at home than away.


For that we have the Bradley-Terry model, which is used extensively elsewhere on the site. For the uninitiated, it is essentially an adaptation of a logistic regression model that estimates the underlying abilities of a set of competing entities from a set of pairwise comparisons. It is particularly useful for ranking teams in sports with an irregular fixture schedule, such as international cricket, where a standard points system may not work.


So, in this case, it looks at all the results in the WTC and fits an ability for each team such that those abilities best explain the observed results. This ensures that variations in fixture difficulty are accounted for and gives a more accurate reflection of the respective strengths of the competing teams.
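To make that a little more concrete, here is a minimal sketch of how such a model can be fitted, assuming each decisive match is treated as a single pairwise comparison and, for simplicity, a draw is split as half a 'win' for each side. The teams, results and code are purely illustrative - this is the general idea rather than the model actually used on the site.

```python
# Minimal Bradley-Terry sketch: fit one 'ability' per team by maximising the
# likelihood of the observed pairwise results under a logistic win probability.
# Teams and results are illustrative only.
import numpy as np
from scipy.optimize import minimize

teams = ["India", "New Zealand", "Australia", "England"]
idx = {t: i for i, t in enumerate(teams)}

# (winner, loser, weight); a drawn match appears twice with weight 0.5
results = [
    ("New Zealand", "India", 1.0),
    ("Australia", "New Zealand", 1.0),
    ("India", "Australia", 1.0),
    ("England", "India", 0.5), ("India", "England", 0.5),  # a draw, split both ways
]

def neg_log_likelihood(free_abilities):
    # Pin the first team's ability at 0 to fix the scale of the model
    abilities = np.concatenate(([0.0], free_abilities))
    nll = 0.0
    for winner, loser, weight in results:
        diff = abilities[idx[winner]] - abilities[idx[loser]]
        # logaddexp(0, -diff) = -log P(winner beats loser), numerically stable
        nll += weight * np.logaddexp(0.0, -diff)
    return nll

fit = minimize(neg_log_likelihood, np.zeros(len(teams) - 1), method="BFGS")
abilities = np.concatenate(([0.0], fit.x))
for team, ability in sorted(zip(teams, abilities), key=lambda pair: -pair[1]):
    print(f"{team:12s} {ability:+.2f}")
```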


We have also introduced an 'order effect' for home advantage. This effectively means that away wins are given greater weight than home wins. The weight is derived automatically from the rate at which home teams tend to win, and it ensures that the final ability ratings are not influenced by a home or away skew in the fixtures - this is vital given the difference in win rates for home and away teams: 55% of games were won by the home team versus just 30% won by away sides.
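The sketch below extends the previous one with a single shared home-advantage term as our stand-in for that 'order effect': the home side's win probability becomes sigmoid(h + ability_home - ability_away), so h absorbs the general benefit of playing at home and the team abilities are not distorted by an uneven home/away split. Again, the fixtures and results are invented for illustration and the variable names are ours.

```python
# Bradley-Terry sketch with a shared home-advantage ('order effect') term h.
# Each match is recorded from the home side's perspective:
# 1.0 = home win, 0.0 = away win, 0.5 = draw. Data is illustrative only.
import numpy as np
from scipy.optimize import minimize

teams = ["India", "New Zealand", "Australia", "England"]
idx = {t: i for i, t in enumerate(teams)}

matches = [
    ("Australia", "New Zealand", 1.0),   # home win
    ("New Zealand", "Australia", 0.5),   # draw in the return fixture
    ("Australia", "India", 0.0),         # away win
    ("India", "New Zealand", 0.5),       # draw
    ("New Zealand", "India", 1.0),       # home win
    ("England", "Australia", 0.5),       # draw
]

def neg_log_likelihood(params):
    h = params[0]                                    # shared home-advantage term
    abilities = np.concatenate(([0.0], params[1:]))  # first team pinned at 0 to fix the scale
    nll = 0.0
    for home, away, outcome in matches:
        z = h + abilities[idx[home]] - abilities[idx[away]]
        # -log P(home win) = logaddexp(0, -z); -log P(away win) = logaddexp(0, z)
        nll += outcome * np.logaddexp(0.0, -z) + (1.0 - outcome) * np.logaddexp(0.0, z)
    return nll

fit = minimize(neg_log_likelihood, np.zeros(len(teams)), method="BFGS")
print(f"estimated home advantage: {fit.x[0]:+.2f}")
abilities = np.concatenate(([0.0], fit.x[1:]))
for team, ability in sorted(zip(teams, abilities), key=lambda pair: -pair[1]):
    print(f"{team:12s} {ability:+.2f}")
```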



In these standings we don't see any change in the rankings compared with our 'traditional' points method, but interestingly we do see Australia pull further ahead of New Zealand, so it's worth examining the fixtures to see why that might be.


Both sides had home series against Pakistan and India. Whilst both registered 2-0 wins over Pakistan, New Zealand also beat India 2-0, whereas Australia fell to a 2-1 defeat in their thrilling four-match series. Both had drawn series away from home - New Zealand's was a 1-1 draw with Sri Lanka, who we have as one of the weaker sides in the competition, whereas Australia managed 2-2 against England, who are considerably stronger. New Zealand also recorded a 2-0 home win over the West Indies, but this is unlikely to have had a significant impact on their rating here given the weakness of the West Indies side.


Most crucial of all, however, was Australia's 3-0 home series whitewash of New Zealand which, according to the Bradley-Terry ranking system, firmly establishes them as the superior side.


Looking at it more holistically sheds further light on why Australia are ranked higher in our model - in their four series they played each of the other teams in the top five, and their only series defeat was a narrow one against the top-ranked side, India. New Zealand, on the other hand, had two of their five series against weaker sides, and one of those was the aforementioned 1-1 draw with Sri Lanka.


All of this tells us that New Zealand, whilst clearly an excellent team and deserving finalists (and winners), did benefit from a slightly more favourable fixture list than their closest rivals Australia.


More broadly, this tells us that the WTC scoring system did a good job of giving us two worthy finalists: the final standings are nearly identical even when we control for differences in fixture difficulty and home advantage, especially when we consider that Australia only missed out because of a slow over rate in their series with India.

 

This article is not meant to be either an endorsement or a criticism of the World Test Championship - only an examination of the scoring system and how well it delivers a fair sporting competition despite its structural irregularities. In that respect it seems broadly to have been a success, and we can draw an almost identical conclusion to that of our original article:


"It does seem as though the World Test Championship will at least provide us with two worthy finalists – effectively debunking the idea that the tournament is some kind of unfair aberration ... however, a favourable fixture schedule might give the opportunity for an outsider to make a good run at the top two."


It's also worth reiterating from that piece that we do not necessarily think it is a bad thing should the final not be contested by the two best teams. We already have the ICC Rankings to give us a more 'objective' view of who the best teams are, so this can be seen as a race to become World Test Champion alongside that. As we noted then:


"A good sporting competition should see the best team come out as winners most of the time – but it’s also important that there is some scope for an unfancied team to have a chance if they get a bit of luck along the way"


 

That wraps up our analysis for the time being. Hopefully, as the competition evolves, it will throw up interesting stories and discussion points for future analysis - but for now, thanks for reading! If you liked this, please check out some of the other articles on the site, follow us on Twitter, and keep an eye out for upcoming posts!


