[h=2]How the NCAA abuses statistics to stack the deck against small schools.[/h]
By Ken Pomeroy
<section class="content">
<figure class="image inline " style="width: 590px; display: block;margin: 0 auto;float: none;">
<figcaption class="caption">Wichita State’s Zach Brown and Illinois State’s Paris Lee chase a loose ball during the Missouri Valley Conference tournament championship on March 5 in St. Louis, Missouri.</figcaption> Dilip Vishwanat/Getty Images
</figure>
This year, as it has every season since 1981, the selection committee for the NCAA men’s basketball tournament relied on something called the
Rating Percentage Index as its primary analytical tool to pick the teams and seed the field. The RPI was a useful tool in 1981, when computer rankings were far more rudimentary. But in the three-plus decades since, many folks—
including myself—have noted the metric’s flaws. For one thing, the RPI doesn’t account for a team’s margin of victory. The strength of schedule component is also quite primitive, as it’s mostly based on an opponent’s record. This allows shrewd schedulers to game the system by loading up on teams that are weaker than their record indicates.
Now the good news: In January, the NCAA invited me and several other people
to discuss using new metrics to support the tournament selection process. It is encouraging that the people in charge of men’s basketball at the NCAA are interested in using the best tools available. Change does not come easy for large organizations, so it’s worth celebrating the fact that we’re even having a discussion about dropping the RPI for something better. But while I’m happy to help with the process of replacing the RPI, I’m most invested in changing the way we think about whatever statistical system we end up using.
Consider the case of Illinois State, the highest-rated team in the RPI that missed the 68-team tournament. It wasn’t a surprise that the No. 33 Redbirds were excluded from the field.
According to bracketmatrix.com, just nine out of the 100 prognosticators that posted a projected bracket on Sunday had ISU in the field.
Why did Illinois State miss out? We have a system designed to rank teams based on their records and the quality of their schedules, but when it comes to a small-conference school like ISU, the human evaluators don’t trust the computer ranking. Instead, they rely on metrics like “quality wins” and “bad losses,” where “quality” and “bad” have arbitrary definitions.
Let’s compare Illinois State to Marquette, whose RPI ranking was nearly 30 spots worse but received an at-large bid. It’s true that Illinois State has more “bad losses” than Marquette—if we’re defining bad losses as those happening against teams ranked outside the top 100 of the RPI, then Illinois State had two compared to Marquette’s zero—but that fact is deceiving. Marquette played 12 games outside the RPI top 100 while Illinois State played 26. It should be obvious that the team that plays more bad teams is more likely to incur losses to bad teams.
<aside class="pullquote"> There’s a line of thinking that since the field is large, a team that doesn’t make it has nobody to blame but itself. That’s a cop-out.
</aside>
But it’s even less fair than that for the Redbirds. Power-conference teams are also less likely to lose to bad teams, because the power-conference teams usually play those games at home. Most of their poor opponents are found in the nonconference schedule, where they have the economic leverage to schedule programs from lesser conferences. Indeed, of the 12 games that Marquette played against teams outside the top 100, just one—Big East rival DePaul—was on the road. To make things even easier for Marquette, four of its “bad” opponents were really, really bad, falling in the bottom 100 of Division I basketball’s 351 teams. Those games are virtually automatic wins for any team with even remote tournament dreams.
Teams from a competitive mid-major conference like the Missouri Valley play a much different kind of schedule. Most games against teams outside the top 100 are conference games, which are just as likely to be on the road as they are at home. Also, very few of those “bad” opponents are going to be as bad as Howard or Western Carolina, whom Marquette played. Although it played many more teams outside the top 100, Illinois State still had fewer games (three) against teams in the bottom 100 than Marquette. As a consequence, a whole lot more of Illinois State’s games against poorer teams were potentially losable, if the Redbirds had a particularly bad night or their opponent was feeling it. And the Redbirds did lose two of them—road games to Murray State and Tulsa.
Those bad losses were part of the justification for keeping Illinois State out of the field. Those bad losses also explain why it’s virtually impossible for a team from outside of the top 10 conferences to get an at-large bid.
If Marquette and Illinois State swapped schedules, the Golden Eagles would almost surely lose some games to teams outside the top 100. If you put Illinois State in the Big East, it would have earned some quality wins. No doubt, though, the Redbirds would do much worse than their 17-1 Missouri Valley Conference record when facing the tougher competition. But consider that Xavier went 8-10 against Big East teams not named DePaul and easily earned an at-large bid. The standard for small-conference teams is incredibly high, while the standard for major-conference teams is not as high as you think.
Regardless of what system is used, humans need to be smarter about how to think about that system. It’s a mistake to ignore game location, and it’s a mistake to use arbitrary thresholds to bundle together quality wins and bad losses. Under the current process, when comparing teams of similar quality, you can throw out all the analysis of wins and losses and simply use the size of a school’s basketball budget as the tiebreaker. It works just as well.
Truth be told, Illinois State does not deserve to be 30 spots higher than Marquette in any ranking system. But almost any algorithm that ignores margin of victory has the two teams in the same neighborhood. In other words, each team’s accomplishments are approximately the same if you look at their records and whom they played. It’s only when you start using arbitrary standards like wins in the top 50 or losses outside the top 100 that the picture gets skewed.
Complete fairness is going to be difficult to achieve, but that doesn’t mean we shouldn’t try. There’s a line of thinking that since the tournament field is large, a team that doesn’t make it has nobody to blame but itself. That’s a cop-out. Illinois State performed as an NCAA tournament at-large team should against the schedule it played and got sent to the NIT. We can do better.
</section> <footer> <section class="about-the-author multi">
Ken Pomeroy has been providing analysis of college basketball at kenpom.com for eight years. You can follow him on Twitter.
</section>
</footer>