
“Revenue Premium” Versus Survey-Based Attitudinal Measures

A criticism of our previous rankings of fan bases is that our approach is overly financial and doesn't capture the "passion" of fans.  This critique has some validity, but probably less than our critics realize.  When we talk about quantifying customer loyalty, whether in sports or in general marketing contexts, we very quickly run into some challenges.

For example, when I speak to classes about what loyalty means, the first answer I get is that loyal customers engage in repeat buying of a brand.  I will then throw out the example of the local cable company.  The key to this example is that cable companies have very high repeat-buying rates, but they also frequently have fairly unhappy customers.  When asked whether a company can have loyal but unhappy customers, students quickly realize that it is difficult to cleanly measure loyalty.

Another distinction I make when teaching is the difference between observable and unobservable measures of loyalty.  As a marketer, I can often measure repeat buying and customer lifetime.  I can even convert these into some measure of customer lifetime value.  These are observable measures.  On the other hand, other loyalty-oriented factors, such as customer satisfaction, preference, or likelihood of repurchase, are unobservable unless I conduct an explicit survey.

I think what our critics are getting at is that they would prefer to see primary/survey data on customer preference or intensity (questions such as "On a 1-to-7 scale, rate how much you love the Florida Gators").  BUT, what our critics don't seem to realize is that this type of primary data collection would also suffer from some significant flaws.  First, whenever we field a consumer survey we worry about response bias.  The issue is: how do we collect a representative sample of college or pro sports fans?  This is an unsolvable problem that we tend to live with in marketing, since anyone who is willing to answer a survey (i.e., spend time with a marketing researcher) is by definition non-representative (a bit weird, I know).

A second and more profound issue is that it would be impossible to separate out the effects of current-season performance from underlying loyalty using a survey.  I suspect that if you surveyed Michigan basketball fans this year you would find a great deal of loyalty to the team.  But I think we all know that fans of winning teams will be much happier, and will therefore respond much more positively, during a winning season.

Related to the preceding two issues is that our critics seem to assume that they know what is in the heart of various fan bases.  Mike Decourcey took exception to our college basketball rankings, which rated Louisville over Kentucky and Oklahoma State over Kansas.  A key mistake he makes is assuming that he somehow just knows that Kentucky fans are more passionate than Louisville's, or that Kansas fans love their team more than Oklahoma State fans love theirs.  He knows this based not on any systematic review of data, but on a few anecdotes (this is especially convenient, since a reliance on anecdotes means there is no need to control for team quality) and his keen insight into the psyches of fans everywhere.

The other issue is whether our "Revenue Premium" captures fan passion or just disposable income.  This is another impossible question to fully answer, but in our defense, the nice thing about this measure is that it is observable, and willingness to pay for a product is about the best measure of preference you can get short of climbing into someone's head.  I think another way in which our critics are confused is that they equate noise with loyalty.  Is an active and loud student section a true measure of fan-base quality?  Perhaps so, but do we really believe that the 19-year-old face painter is a better fan than the alumnus who has been attending for 40 years but no longer stands up for the entire game?
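To make the idea concrete, a revenue-premium-style measure can be framed as the portion of a team's revenue not explained by observable drivers such as winning and market size.  The sketch below is purely illustrative and is not our actual model: the variables, data, and the choice of controls (win percentage and market size) are assumptions for demonstration, and the numbers are made up.

```python
import numpy as np

# Hypothetical teams: columns are [win_pct, market_size_millions].
# All figures are invented for illustration only.
X = np.array([
    [0.70, 5.0],
    [0.55, 2.0],
    [0.60, 8.0],
    [0.45, 3.0],
    [0.80, 1.5],
])
revenue = np.array([120.0, 95.0, 140.0, 60.0, 110.0])  # $ millions

# Fit revenue ~ intercept + win_pct + market_size by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, revenue, rcond=None)

# The residual is the "premium": revenue above (or below) what
# performance and market size alone would predict.
premium = revenue - A @ coef

# Rank fan bases by premium, highest first.
ranking = np.argsort(premium)[::-1]
```

The appeal of a residual-based measure is that it addresses the team-quality problem directly: a winning season raises predicted revenue too, so a fan base only earns a high premium by paying more than its team's performance and market would justify.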
