The table below shows what the "ballot" produced by the weighted and unweighted ratings would have been for the 2013-14 First Round at-large bids. Again, it's important to note that no effort was made to "fit" the results to match the coaches' ballots. To the extent that any fitting has been done, it was exclusively to optimize how well the ratings predicted actual round results.
It's interesting to note that both ratings systems would have produced almost exactly the same set of bids as the actual voters did. The only point of disagreement is that the ratings didn't like Kansas BC quite as much as the voters did. This is pretty remarkable, especially considering that Kansas held the 16th and final bid.
The ratings would have preferred a few teams ahead of Kansas: Harvard HX (who was ineligible), Oklahoma LM, and Minnesota CE. However, it should be noted that in raw rating score Kansas, Oklahoma, and Minnesota were virtually identical, with only a couple of points separating them (OU and UM are even tied in one). It would be interesting to go back and examine each team's results more closely. In broad strokes, I can see why KU and OU would be so close to one another; there are few major differences in their performances. KU made it to the finals of UMKC, whereas OU attended the Kentucky RR. OU didn't break at Harvard, but KU didn't break at Wake. KU made it a little further at Fullerton, but OU made it a little further at Texas. The bigger surprise is the presence of Minnesota, who regularly struggled in early elims. However, they did break at every tournament. The biggest piece in their favor, though, is probably their performance at the Pittsburgh RR, where they substantially outshone KU and OU. Without going back to dig into the data, I suspect that it was at Pittsburgh that Minnesota got boosted back into the conversation.
Out of curiosity, I compared the ratings to each voter's ballot. The voter whose ballot most resembled the weighted ratings was Will Repko, with an average (mean) difference of only 1.25 spots. The voter who most resembled the unweighted ratings was Dallas Perkins, with an average difference of 1.75 spots. To put those numbers into a little bit of perspective, Dallas's average difference from Repko was 2.33 spots.
Also, the weighted ratings were slightly more aligned with the overall preferences of the voters. The average deviation of the weighted ratings from voter preferences was 2.1 spots, whereas the average deviation of the unweighted ratings was 2.3 spots. I suspect that a big part of that gap comes from the significant difference in how the two ratings evaluated Wake MQ. The unweighted version was not very friendly to them, a huge factor being the difference in how the weighted ratings evaluated the quality of their opponents at the Kentucky tournaments.
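For anyone curious how these ballot-to-ballot numbers are computed: each comparison is just the mean absolute difference in rank ("spots") across the teams both ballots ranked. Here's a minimal sketch with made-up ballots; the team names and ranks are illustrative, not the actual 2013-14 data.

```python
def mean_spot_difference(ballot_a, ballot_b):
    """Average absolute difference in rank for teams appearing on both ballots."""
    common = ballot_a.keys() & ballot_b.keys()
    return sum(abs(ballot_a[t] - ballot_b[t]) for t in common) / len(common)

# Hypothetical ballots: team -> rank (1 = top at-large bid)
ratings_ballot = {"Team A": 1, "Team B": 2, "Team C": 3, "Team D": 4}
voter_ballot   = {"Team A": 2, "Team B": 1, "Team C": 4, "Team D": 3}

print(mean_spot_difference(ratings_ballot, voter_ballot))  # 1.0
```

Running the same function over every voter's ballot and averaging the results gives the overall deviation figures quoted above.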