Why Tournament Inclusion is an Issue
Nevertheless, there is one sense in which the choice to include a tournament has an impact on outcomes and the predictive accuracy of the ratings. Ratings based exclusively on head-to-head data still depend on every team's opponents being measurably related to every other team's opponents. At a basic level, this means that sample size is important, but sample size can be easily addressed by excluding teams whose sample is too small. The harder problem to overcome is the distortion that can be caused by teams whose results are based on an excessively regionalized set of opponents. Since the meaning of a team's rating is really nothing more than their relative position within the overall network of teams, Glicko ratings run into a problem when a team is not adequately "networked."
Take an extreme example. Imagine a small regional tournament in which none of the participants ever attend another tournament. Since the ratings are determined relationally, the winner may hypothetically emerge from the tournament with a rating of 1750 or higher (1500 is average; a team rated 1750 would be considered a 4:1 favorite over an "average" team). Since their and their opponents' ratings were never influenced by the ratings of teams outside the tournament, this rating would be highly unreliable. Basically, what we have is a "big fish in a little pond" problem.
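As a concrete check on that "4:1 favorite" figure, here is a minimal sketch of the expected-score formula used by Elo-family systems. Glicko additionally weights this by each team's rating deviation, which I've omitted here for simplicity, and the 400-point scale is an assumption of the sketch, not something taken from this ratings system:

```python
def expected_score(rating_a, rating_b, scale=400):
    """Probability that team A beats team B under a logistic
    (Elo-style) model. Glicko adds a rating-deviation factor
    that is omitted here for simplicity."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / scale))

# A 1750-rated team against an "average" (1500) team:
p = expected_score(1750, 1500)
print(round(p, 3))  # 0.808, i.e. roughly a 4:1 favorite
```

The point is only that a 250-point gap translates to about a 4:1 edge on this scale; the problem described above is that the 1750 number itself may be meaningless if the team's pond is too small.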
A less extreme, but still noticeable, version of this problem exists when the ratings attempt to include small regional tournaments. Many of the teams that attend these tournaments never reach a point where their set of opponents is representative enough of the larger pool of debaters.
People already account for this problem, consciously or not, when they make judgments of team quality. That's why, for instance, at-large bid voters may credit wins at regional tournaments but weigh them less, and why it is generally very difficult for a team to get a second-round bid if they haven't attended at least a couple of national-level tournaments.
I believe that I have figured out a way to include (nearly) all tournaments that doesn't undermine the reliability of the ratings.
Social Network Analysis & Eigenvector Centrality
Eigenvector centrality (EV) is a metric used in social network analysis to determine how connected any given individual is to the larger network. A basic definition can be found here. In short, "Eigenvector centrality is calculated by assessing how well connected an individual is to the parts of the network with the greatest connectivity. Individuals with high eigenvector scores have many connections, and their connections have many connections, and their connections have many connections … out to the end of the network" (www.activatenetworks.net). Eigenvector centrality has been used in a number of applications, most notably Google's PageRank algorithm.
While it might theoretically be possible to use eigenvector centrality as the central mechanism for an entire ratings system, for my purposes I am using it in a much more limited capacity. It can be used to set a baseline degree of centrality that a team must meet for their results to be included in the Glicko calculation. Here, a high eigenvector centrality is not necessarily an automatic virtue (you could be highly interconnected and yet lose a lot of debates), but a very low EV score could be grounds for exclusion because of the unreliability of the data.
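To illustrate how such an EV score can be computed, here is a power-iteration sketch on a toy head-to-head network. The adjacency matrix and the five-team setup are invented for the example; the actual ratings presumably build this matrix from a full season of results:

```python
import numpy as np

def eigenvector_centrality(adj, iters=200):
    """Power iteration: repeatedly multiply by the adjacency
    matrix and renormalize; the vector converges toward the
    dominant eigenvector of the network."""
    v = np.ones(adj.shape[0])
    for _ in range(iters):
        v = adj @ v
        v = v / np.linalg.norm(v)
    return v / v.max()  # scale so the most central team scores 1.0

# Toy network: entry [i][j] = rounds team i debated team j.
# Teams 0-2 form a connected cluster; teams 3 and 4 only
# ever debated each other (the "little pond").
adj = np.array([
    [0, 2, 1, 0, 0],
    [2, 0, 3, 0, 0],
    [1, 3, 0, 0, 0],
    [0, 0, 0, 0, 4],
    [0, 0, 0, 4, 0],
], dtype=float)

scores = eigenvector_centrality(adj)
# The isolated pair ends up with near-zero centrality, so a
# baseline of 0.15-0.2 would exclude their rounds.
```

A team's raw number of rounds doesn't rescue it here: teams 3 and 4 debated each other four times, but because their connections have no further connections, their centrality collapses toward zero.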
Below I hope to show that setting a low EV hurdle is both a means to sidestep the tournament inclusion problem and a way to improve the overall accuracy of the ratings.
Effect of EV Baselines on Ratings Accuracy
First, the NDT. Lower scores mean less error in the predictions. The number of teams listed includes a lot of partial repeats due to debaters switching partners.
MSE and MAE are pretty basic measures of error, but they have the advantage of being intuitive. MAE, or mean absolute error, simply measures the average amount that the ratings miss per round. Here it is represented as a percentage, meaning that all of the ratings miss by an average of roughly 25%. If this sounds like a lot, it is important to keep in mind the way that prediction error is calculated. Let's say that the ratings pick Team A as a 75% favorite (3:1) over Team B. If Team A wins in front of a single-judge panel, then they score "1," or "100%." The ratings error is calculated by subtracting the predicted outcome (0.75) from the actual outcome (1.00), meaning that despite picking Team A as a heavy favorite, there was still an error of 25%. It is also this difference between actual outcome and prediction that forms the basis of the calculation that determines how and how much the ratings change for each team after the round. MSE, or mean squared error, is only mildly more complicated. Its importance here is that it more heavily weights "big misses."
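The worked example above can be sketched in a few lines; the single-round data here is invented purely for illustration:

```python
def mae(predictions, outcomes):
    """Mean absolute error: the average size of each miss."""
    return sum(abs(o - p) for p, o in zip(predictions, outcomes)) / len(predictions)

def mse(predictions, outcomes):
    """Mean squared error: squaring weights 'big misses' more heavily."""
    return sum((o - p) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Team A is picked as a 75% favorite and wins (outcome 1.0):
# a 25% miss even though the prediction favored the right team.
preds = [0.75]
outs = [1.0]
print(mae(preds, outs))  # 0.25
print(mse(preds, outs))  # 0.0625
```

Note how squaring shrinks a small miss (0.25 becomes 0.0625) but would leave a confident wrong pick comparatively large, which is why MSE punishes "big misses."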
Looking at the numbers, we can spot a few things:
- Including only majors & large regionals does pretty well at reducing MAE in the NDT predictions, but it performs the absolute worst when big misses are weighted.
- Including all rounds from all tournaments has the worst MAE, but an average MSE.
- Setting an EV baseline of 0.2 produces both the best MAE and the best MSE, but we see diminishing returns with the more restrictive 0.25 baseline. In fact, the 0.25 baseline produces the second worst MSE of the group.
Now to the simulated tournaments. You may notice a significant difference in the size of the numbers. This is because of the way that the ratings calculate error in rounds with multi-judge panels. The ballot count on a panel produces a fraction of a win ranging from 0 (unanimous loss) to 1 (unanimous win), with various percentages in between (for example, a 3-2 ballot count would count as a 0.6 win). This allows the predictions to get much closer to the correct outcome on panels than when the outcome is a single-judge binary win (1) or loss (0). Since the NDT is judged entirely by panels, whereas the bootstrap simulation draws from the entire season of results (the majority of which are standard single-judge prelims), the simulation's error numbers are naturally larger.
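The ballot-fraction idea, and why it shrinks panel errors, can be sketched like so (the prediction value is just the 3:1 favorite from the earlier example):

```python
def win_fraction(ballots_won, panel_size):
    """Fractional win on a panel: 0 = unanimous loss,
    1 = unanimous win; a 3-2 decision counts as 0.6."""
    return ballots_won / panel_size

prediction = 0.75  # ratings pick Team A as a 3:1 favorite

# Single-judge prelim: the outcome is binary, so the miss is large.
binary_error = abs(win_fraction(1, 1) - prediction)   # 0.25

# Five-judge panel, 3-2 for Team A: the fractional outcome
# lands much closer to the prediction.
panel_error = abs(win_fraction(3, 5) - prediction)    # 0.15
```

Averaged over a whole season of mostly single-judge prelims, those larger binary misses are what inflate the simulation's error numbers relative to the NDT's.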
To help address this difference, I've also added a third metric that is a bit better at evaluating the error in binary outcome situations called binomial deviance.
EDIT: In the original version of this post, there was a significant error in the way that binomial deviance was being calculated. As a result, I had to rerun the simulation. In the end, the correction only very mildly influenced my conclusions, giving a tiny bit more evidence in favor of the EV .15 ratings.
Bootstrap Simulation Tournament Predictions
Things to note:
- Including only the majors and large regionals is the worst by every metric and by a large margin.
- Including all rounds from all tournaments is second worst at both MAE and MSE and only average at BDV.
- The 0.15 EV baseline comes out best in both MSE and BDV, and only barely behind in MAE. EV .2 performs well in terms of MAE and MSE, but is average according to BDV.
Both analyses are based on a single season's worth of data, but they both seem to confirm the same basic conclusion: basing the ratings on either just the majors & large regionals or on nearly all tournaments produces the least accurate predictions. By contrast, setting an eigenvector centrality baseline of somewhere around 0.15 or 0.2 produces the most accurate ratings.
Next, I am going to look at how these ratings match up with 1st & 2nd round at-large bid voting. While this voting admittedly comes from a small group of individuals, it should provide us at least one point of comparison to see whether the ratings produce outcomes in line with community expectations.
Effect of EV baselines on At-Large Bid Ranking
First Round At-Large Ranking
The first thing you should notice is a striking amount of similarity. The rankings produced by the Glicko system are pretty similar across the board with minor variations. Additionally, they produce a nearly identical set of bid teams. If Harvard HX gets excluded for being the 3rd Harvard team, then:
- The majors & large regionals ratings pick 16 of 16
- The All Rounds ratings pick 15 of 16, preferring Oklahoma LM over Kansas BC
- The EV 0.15 ratings also pick 15 of 16, with Oklahoma LM again stealing the 16th spot
One may wonder about the Wake Forest MQ discrepancy. This was discussed in my previous post about round robins, and it is largely due to the distorting effect of the Kentucky RR, the removal of which would bump them up into the 9-11 range.
Some notable, but relatively small, gainers are Michigan AP, Wake Forest LW, Georgetown EM, Michigan State RT, West Georgia AM, and Harvard HX. Moderate losers include Towson JR, Berkeley MS, and Texas FM. On the whole, however, teams are pretty close to the spots assigned by the voters.
The average (mean) deviation of each of the rankings from the voters is in the neighborhood of 2.1 spots. And though I didn't bother to run the calculations again with fixes made to the Kentucky RR, my rough estimate is that making those changes could get the mean deviation somewhere close to 1.5 or 1.6. Such a deviation would be in line with the average voter's difference from the aggregate voting preference.
One final observation: it makes intuitive sense that the different Glicko ratings wouldn't be too far off from one another for these teams. They mostly attend majors and probably don't compete as much against those teams that would either be included with the All Rounds rating or those that would be excluded by the EV baseline ratings.
Now, for second rounds. One important factor here is that the Glicko ratings did not include district tournaments in their calculations. If a team had an exceptionally strong or weak district tournament, it may have been accounted for by the voters, which would produce a discrepancy.
Second Round At-Large Voting
- The All Rounds and EV .15 ratings produce the same set of teams, and match the voters on 15 of 17 bids. The Majors & large regionals ratings match the voters on 14 of 17.
- The Majors & large regionals ratings really hurt Oklahoma BC, who is highly preferred in the other sets of ratings but drops to 11th (and out of the bids due to being behind too many other "3rd teams").
- As I noted in an earlier post, the ratings really like Baylor EB, who jumps more than 10 spots compared to the voters, but this is one that could be heavily influenced by the fact that the ratings didn't include their poor districts results. Nevertheless, I don't think there's a world in which they'd drop out of the top 17.
- The All Rounds and EV .15 ratings were not as high as the voters on Wake Forest DL, but the Majors rating did like them. My best guess for the reason for this is just that the ratings for teams 4 through 9 are quite compact. It wouldn't take much to push a team up or down a couple of spots.
Overall, the mean deviance of the ratings from the voting average was 2.83 for the Majors, 2.5 for All Rounds, and 2.39 for EV .15, making EV .15 the closest. In fact, all of the ratings were closer to the voting average than the average individual voter, whose mean deviance was 3.13 spots. More narrowly, only 3 individual voters were closer to the voting average than the EV .15 ratings (DCH, Lee, and Repko).
One might wonder as to why I've said "nearly" all tournaments. This is mostly a function of time. It takes a not insignificant amount of time to format, clean, and process each tournament's results. Then, I would need to further test them to make sure that nothing super kooky happens with extremely small tournaments. So, for now, I'm keeping it at tournaments that have at least 20ish teams. I might dip below that if it's close enough to round up. Or I might include a smaller tournament if there is a specific reason that it ought to be included.
Here is the list of tournaments that fell into each category above...
Majors & Large Regionals (15 tournaments): UMKC, GSU, Kentucky RR, Kentucky, UNLV, Harvard, Wake Forest, USC, UTD, CSUF, UNT, Dartmouth RR, Pittsburgh RR, Weber RR, Texas.
(Nearly) All Tournaments (30 tournaments): UMKC, Northeast Regional Opener, GSU, Gonzaga, Kentucky RR, Kentucky, JMU, KCKCC, ESU, UNLV, Harvard, Clarion, UCO, Vanderbilt, CSUN, Wake Forest, Weber, USC, UTD, CSUF, UNT, Chico, Navy, Dartmouth RR, Indiana, Pittsburgh RR, Weber RR, Wichita State, Georgia, Texas.