
The Best Fan Bases in the Big 10 & Some Details on Our Methodology


Our post on the Best Fan Bases in college basketball generated several interesting comments and questions.  One common request was to see how other schools stacked up.  There were also a number of questions related to the methodology.

Today we start with the complete results for the Big Ten Conference (our next post will examine the Pac-12). Indiana comes out on top, followed by Minnesota, Ohio State, and Wisconsin. At the bottom of the list are Penn State and Michigan. Nebraska is not included in these ratings due to a lack of data.

The difference between Indiana and national runner-up Michigan highlights the way our method works. For most of the last decade, Michigan and Indiana both struggled on the court. Consequently, Michigan fans stayed away, while Indiana continued its streak of ranking in the nation's top 15 in attendance. We should also add, for those who want to claim some sort of bias, that Professor Mike Lewis is a diehard Illini fan, and it pains him to have Indiana rank number one.

It may also be useful to provide a bit more detail on the methodology used to generate the rankings. We start with information on men's basketball revenues reported by the Department of Education. As an aside, we should point out that the analyses reported on the website all rely on publicly available data. While this data may not be perfect (like just about any other data set), we have no reason to believe that it is systematically biased.

We then build a regression model that predicts these revenues as a function of data that corresponds to team quality and market potential.  The following equation is a portion of the model used (we are trying to keep the stats to a minimum as we expect that 95% of readers just want to see the results):
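In stylized form, with winning percentage standing in for team quality and a market-size measure standing in for market potential (the variable names here are illustrative, not the exact regressors), the core of the specification looks like:

$$\mathrm{Rev}_{i,t} = \beta_0 + \beta_1\,\mathrm{WinPct}_{i,t} + \beta_2\,\mathrm{Market}_{i} + \varepsilon_{i,t}$$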

The actual statistical model included a number of other factors, such as dummy variables for each conference and several nonlinear measures of team quality (for example, a quadratic term for winning percentage).

We use this model to predict revenue for each school (i) in each year (t). We call this prediction Revhat(i,t). We next compute the residual for each observation in the data: Rev(i,t) - Revhat(i,t). This residual represents the difference between actual revenues and the revenues expected based on market potential and on-court performance. The fan equity rankings are based on the sum of each school's residuals over the last five years (the model is estimated using ten years of data).
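For readers who want to see the mechanics, here is a minimal sketch of the calculation in Python. The data file and column names (school, year, revenue, win_pct, market_size, conference) are hypothetical placeholders, not our actual data set:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical ten-year panel: one row per school (i) per year (t).
df = pd.read_csv("mbb_revenue_panel.csv")

# Revenue model with a quadratic winning-percentage term and
# conference dummies, as described above.
model = smf.ols(
    "revenue ~ win_pct + I(win_pct ** 2) + market_size + C(conference)",
    data=df,
).fit()

# Revhat(i,t) and the residual Rev(i,t) - Revhat(i,t).
df["rev_hat"] = model.fittedvalues
df["residual"] = df["revenue"] - df["rev_hat"]

# Fan equity: sum each school's residuals over the five most recent years.
recent = df[df["year"] > df["year"].max() - 5]
fan_equity = recent.groupby("school")["residual"].sum().sort_values(ascending=False)
print(fan_equity)
```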

A couple of points are worth noting. First, we do not use a school fixed effect because we are interested in how this residual changes; a fixed effect would absorb the very quantity we want to rank. Using the last five years is a compromise between eliminating the noise that occurs in any single year and capturing fan equity that is enduring but evolving.

A second issue that merits discussion is the role of conferences. In our model we estimate a conference effect. The reason we do this is to eliminate the benefits that a weak school can collect simply by being in the right conference. For example, if we do not control for conference, schools like Northwestern actually do very well in the rankings because their revenues are extreme given their (lack of) on-court success.
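Continuing the hypothetical sketch above, dropping the conference dummies shows how much of a school's apparent equity is really a conference premium:

```python
# Same specification as before, but without the conference dummies. For a
# school like Northwestern, the residual from this fit is much larger,
# because the Big Ten premium stays in the residual instead of being
# controlled away by the conference effect.
no_conf = smf.ols(
    "revenue ~ win_pct + I(win_pct ** 2) + market_size",
    data=df,
).fit()
df["residual_no_conf"] = df["revenue"] - no_conf.fittedvalues
```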

The issue of conferences is a tough one, and it goes beyond the type of analyses we do for the website. The problem is that it is difficult to disentangle the conference effects from the school effects. The result is that a school like Indiana ends up suffering in the overall ratings, because some of the Big Ten "equity" should really be allocated to the Hoosiers.

The table below shows the rankings of conferences. As expected, the Big Ten leads the way, followed by the ACC. The key caveat for this chart is that the Big Ten Network is what pushes the Big Ten ahead of the ACC.

Look out for our next post, which will examine the Pac-12.
