My initial impression was that the first-round bids would be easier for the ratings, because teams at the top achieve greater differentiation: they tend to debate each other more often due to elims, round robins, and the way power matching produces something like a normal distribution of records. Further down the ratings, teams become more clustered. Here is a histogram of the adjusted weighted ratings distribution for the regular season last year.
So how would the ballots produced by the ratings compare to the actual votes? Here's the table:
Rather than walk through each difference, I think it will be more useful to look at the instances where there's a noticeably large gap between the ratings and the voters. In nearly every case, the discrepancy can probably be explained by relatively strong performances at smaller regional tournaments that were not included in the data set.
Oklahoma BC is one of the big losers: even though they rank 11th in both sets of ratings, they fall behind other third teams and get eliminated. OU is most harmed by the exclusion of Wichita State, where they closed out finals. Looking at that field, I might have to reconsider whether Wichita State should be included in the ratings, especially because it contained at least a few second-round applicants.
Cal-Berkeley EM dropped 13 spots, most likely because of the exclusion of the Gonzaga and UNLV tournaments, where Cal performed quite well. That said, the ratings also excluded Chico, where they had a fairly poor performance.
Gonzaga BJ dropped 10 spots due to their focus on mid-level regional tournaments (Gonzaga, Lewis & Clark, UNLV, Weber, Navy). They attended only 3 tournaments included in the ratings (UMKC, Fullerton, Texas), where their performances were not strong.
One team that significantly benefited was Baylor BE, who jumped 10 spots. They also had a few of their regular season tournaments not counted (UNLV, UCO, WSU). However, my guess is that they benefited most from 2 things. First, the ratings gave them a substantial amount of credit for their performance at Texas. That single tournament, where they went 5-3 and failed to break, jumped them from 24th to 16th. The reason for the jump is that they were significant underdogs in all 3 of their losses (Towson JR, West Georgia AM, Oklahoma BC), which would have heavily mitigated any ratings losses, and they also had a pretty big win against Liberty CE. The second factor is more speculative, but I suspect that the exclusion of districts results from the ratings probably benefited Baylor: their quite poor performance there may have influenced the voters.
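To illustrate why underdog losses are so heavily mitigated, here is a minimal Elo-style sketch. The actual rating system presumably differs in its formula, scale, and K-factor; everything here (the 400-point logistic scale, K=32, the example ratings) is an illustrative assumption, not the real model.

```python
def expected_score(rating_a, rating_b):
    """Probability that team A beats team B under a logistic (Elo-style) model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating, expected, actual, k=32):
    """Elo-style update: the ratings change is scaled by how surprising the result was."""
    return rating + k * (actual - expected)

# A heavy underdog (rated 1400) losing to a favorite (rated 1700):
e_upset = expected_score(1400, 1700)          # ~0.15: expected to lose anyway
after_upset = update(1400, e_upset, actual=0) # drops only ~5 points

# The same team losing to an evenly matched opponent (rated 1400):
e_even = expected_score(1400, 1400)           # 0.5
after_even = update(1400, e_even, actual=0)   # drops a full 16 points
```

Under this kind of model, a 5-3 against a tough field can genuinely raise a team's rating: the three expected losses cost almost nothing, while an upset win like the one over Liberty CE pays out heavily.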
In the end, I think that the results indicate that the specific needs of second round bids (who tend to travel to more regional-level tournaments) might require further consideration of which tournaments get included in the data set. Some of this may have already been remedied for the 2014-15 ratings through the inclusion of the UNLV and Weber State tournaments. I'll have to take a second look at Wichita State. In the future, I may rerun the ratings with a more inclusive tournament schedule to see how it affects the results. One possible middle ground could be to include elimination round results but not prelims from smaller tournaments. This might give teams additional credit for strong performances but avoid the potential distorting effects of an isolated pool of debaters.
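The elims-only middle ground could be implemented as a simple round filter: keep every round from the tournaments already in the data set, but only elimination rounds from the smaller regionals. This is a hypothetical sketch; the tournament lists and the round-record layout are illustrative assumptions, not the actual data format.

```python
# Tournaments whose full results (prelims + elims) are included; names are
# taken from the post but the set itself is an illustrative assumption.
FULL_RESULTS = {"UMKC", "Fullerton", "Texas"}

def include_round(round_record):
    """round_record: dict with 'tournament' (str) and 'is_elim' (bool) keys."""
    if round_record["tournament"] in FULL_RESULTS:
        return True                    # keep everything from included tournaments
    return round_record["is_elim"]     # smaller tournaments: elims only

rounds = [
    {"tournament": "Texas", "is_elim": False},  # kept: prelim at an included tournament
    {"tournament": "UNLV",  "is_elim": True},   # kept: elim at a smaller tournament
    {"tournament": "UNLV",  "is_elim": False},  # dropped: prelim at a smaller tournament
]
kept = [r for r in rounds if include_round(r)]
```

This rewards deep elim runs at regionals without letting an isolated prelim pool distort the ratings.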
The addition of Wichita State tournament results bumps Oklahoma up to the 4-5 range, much more consistent with their rank by the voters.