99 years of economic insights for Indiana

The IBR is a publication of the Indiana Business Research Center at IU's Kelley School of Business.

Executive Editor, Carol O. Rogers
Managing Editor, Brittany L. Hotchkiss

Overwhelmed by all those ratings, rankings and lists? Advice on being a smart consumer of information

Research Associate, Indiana Business Research Center, Indiana University Kelley School of Business

Deputy Director and CIO, Indiana Business Research Center, Indiana University Kelley School of Business


Most of us do not go a day without seeing some sort of ranking or rating list in the news or social media. We can find a list to compare just about anything. For example, U.S. News and World Report’s college rankings can help us make a complicated choice like what school to attend—a big decision with lots of consequences. Others, like a list of the city’s best restaurants, help us choose where to go for dinner tonight. Yes, rankings and lists can definitely be useful.

They help us make quick decisions because they take tons of measures and data and boil it all down into one easy-to-understand number (Ovadia, 2004). Say you’re considering getting an MBA. Most of us don’t have the time to find and compare every little aspect—the graduation rates and starting salaries, GMAT/GRE scores, acceptance rate, undergrad GPA, etc. Besides, could we even get to that information? Or say you want to eat dinner out. We often turn to crowd-sourced rankings (e.g., Yelp) or expert evaluations (e.g., a food critic’s column).

But while ratings put a wealth of information at our fingertips, not all of them have the same implications for our lives. When we choose a college, we’re making a decision that potentially affects the rest of our career, demands thousands of hours and costs tens of thousands of dollars. Selecting a restaurant involves the same kinds of considerations, but on a much smaller scale (e.g., less than a day of our time and the cost of a single meal).

These personal decisions give us some insight into understanding and using state ratings. Specifically, think about location rankings. In some cases, we might use them just to brag about our city. In other cases, they influence large, wide-reaching policies. Rankings can steer businesses toward the best place to put their next location, and when a city sits low on a list, they can influence big budget decisions, like how much money to put into business infrastructure—roads, utilities, tax breaks and so on.

Below is a short list of popular state rankings, specifically the ones Indiana currently points to in promoting our top placements:

Which list should you use for which decision? We at the Indiana Business Research Center aim to help you be a discerning consumer by asking some simple questions about any rating or ranking system (see Figure 1).

Figure 1: Understand a ranking or rating

graphic

Source: Indiana Business Research Center

What is the ranking?

This seems like a dumb question, but it’s really a fundamental one. State best/worst rankings each have their own features that distinguish them from others (e.g., one may use different sources of data than the others). The name doesn’t always give us a good feel for what is actually being ranked. For example, S&P and the Tax Foundation both put out information on state finances, but they couldn’t be more different.

S&P specifically rates states’ creditworthiness. They rank (grade) states primarily to inform investors about government bonds. While a state’s bond rating could have an effect on business (e.g., states can raise taxes to pay off their debt), it usually doesn’t have a direct impact.

In contrast, the Tax Foundation’s index has different aims. Rather than measuring how much states collect in taxes, it provides information on how well states structure and manage their tax systems. The foundation does provide best/worst ranks, and it works explicitly to inform a variety of policymakers and stakeholders. Furthermore, it suggests how to interpret the scores: some of the top 10 states levy fewer types of taxes, while others, like Indiana, levy more taxes at lower rates. The foundation clearly explains that the index is just a starting point for understanding how tax-friendly a state is.

A ranking might also measure something different from what the publisher intended. For example, some researchers find that some state ranking indicators don’t necessarily correlate with economic performance (Motoyama & Konczal, 2013; Fisher, 2005).

Most importantly, an organization’s ideology comes into play. What makes a state friendly to business? For example, what does it mean to be tax friendly? Is it the number, type or extent of taxes? Is it the net effect on business income? Does it matter how well those taxes correspond to the benefits businesses receive?

Take-away: Even though rankings have similar names, they measure different things.

What information goes into it?

Indiana’s business friendliness ranking varies across the different lists:

  • #7 on Business Facilities’ best business climate
  • #16 on CNBC’s top states for business
  • #1 on the Pacific Research Institute’s Small Business Index
  • #10 on the Tax Foundation’s state business climate
  • #11 on Forbes’ best states for business

Why? Different publishers have different ideas of what makes a good business climate. Their categories may include tax climate, workforce, quality of life, industry diversity, capital investment, cost of doing business, foreign investment and more. Not only do they use different categories, they can also put a different emphasis on each: workforce quality can count more than tax climate, or vice versa. Generally, we call this weighting the data (see Figure 2 and the sketch that follows it).

Figure 2: What information goes into a ranking?

graphic

Source: Indiana Business Research Center
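
To make the idea of weighting concrete, below is a minimal Python sketch. The states, category scores and weighting schemes are hypothetical, invented purely for illustration; they are not drawn from any published index.

# Minimal sketch: the same category scores, weighted two different ways.
# All states, scores (0-100) and weights below are hypothetical.
states = {
    "State A": {"tax": 90, "workforce": 60, "quality_of_life": 70},
    "State B": {"tax": 70, "workforce": 85, "quality_of_life": 75},
    "State C": {"tax": 60, "workforce": 80, "quality_of_life": 90},
}

# Two publishers "weight the data" differently (weights sum to 1).
tax_focused       = {"tax": 0.6, "workforce": 0.2, "quality_of_life": 0.2}
workforce_focused = {"tax": 0.2, "workforce": 0.6, "quality_of_life": 0.2}

def rank(states, weights):
    """Compute a weighted composite score per state and sort best-first."""
    scores = {name: sum(cats[c] * w for c, w in weights.items())
              for name, cats in states.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print("Tax-focused publisher:      ", rank(states, tax_focused))
print("Workforce-focused publisher:", rank(states, workforce_focused))

Run with the numbers above, the tax-focused weighting puts State A first (composite score 80 vs. 74 and 70), while the workforce-focused weighting puts State B first (80 vs. 78 and 68). Same underlying data, different weights, different #1.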

Just because two publishers use the same category doesn’t mean they measure it the same way. For example, there are lots of ways to measure quality of life: physical health, job satisfaction, wealth, education, rights and freedoms, safety, etc. We cannot observe “life happiness” directly, so we use several different measures to approximate it, but not everyone agrees on which measures are most important (Ovadia, 2004). Additionally, some of these measures may lead to a high quality of life or be a product of it (Fisher, 2005). In other words, did someone’s wealth lead to a high quality of life, or did a high quality of life lead to their wealth? It can be hard to tell.

Each of these measures may come from various sources. A publisher may use publicly available government data sets, which feel more objective. Alternatively (or additionally), it may survey CEOs to get expert opinions. Both approaches have their pros and cons. Not all rankings provide detailed and easy-to-find information on their source data: Chief Executive notes in many of its articles that it relies on CEO surveys, while Business Facilities tends not to advertise its data sources as much. The reasons for differing transparency vary; for example, some indexes require more math than others, making them harder to explain.

Take-away: There’s no one definition of business climate.

How do they transform that information into a simple number?

Some rankings see-saw: one year a state is #1, and the next year the same state is at #10. News outlets jump on these changes—they make great headlines—but the reasons behind the changes are often overlooked or too obscure to explain. How many places can really change enough in a single year to make such a leap? Government generally isn’t known for being that agile.

So what is happening? The raters may be “cooking the numbers,” but more likely it’s an issue of substance versus significance. The difference between #1 and #2 could be small or huge. A familiar example is a high school valedictorian: their GPA is 3.97, but the #2 student might have a GPA of 3.95 (a tiny difference) or 3.1 (a much bigger one). There is no way to tell from the ranks alone without digging into the underlying scores and methodology.

Business rankings have the same issue. Two states’ ratings may be statistically indistinguishable yet receive different ranks, because there can’t be two #1s (Gnolek, Falciano, & Kuncl, 2014). That’s why it helps when a publisher includes the ratings or index scores behind the rankings, as the sketch below illustrates.
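
As a hypothetical illustration (the index scores below are invented, not taken from any actual ranking), this short Python sketch shows why published scores matter: ranks alone hide whether the gap between #1 and #2 is trivial or large.

# Hypothetical index scores for two years of the same ranking.
# The ranks read the same both years, but the score gaps differ a lot.
year_1 = {"State A": 7.52, "State B": 7.49, "State C": 5.10}  # A and B nearly tied
year_2 = {"State A": 9.10, "State B": 6.40, "State C": 5.90}  # A clearly ahead

def rank_with_gaps(scores):
    """Sort states best-first and report each state's gap to the leader."""
    ordered = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    top_score = ordered[0][1]
    return [(pos + 1, name, score, round(top_score - score, 2))
            for pos, (name, score) in enumerate(ordered)]

for label, scores in (("Year 1", year_1), ("Year 2", year_2)):
    print(label)
    for pos, name, score, gap in rank_with_gaps(scores):
        print(f"  #{pos} {name}: score {score}, gap to #1 = {gap}")

In both years the list reads #1, #2, #3, but only the scores reveal that Year 1’s top two are essentially tied (a gap of 0.03) while Year 2’s are not (a gap of 2.7).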

Take-away: The difference between #1 and #2 can be small or huge.

Who published the ranking and why?

In many cases, finding the methodology is a challenge. This is especially true when the publisher profits from its ratings.

If ads appear on the same page as the ranking, there’s likely a profit motivation. If the publisher sells the details behind its rankings, a profit motivation is obvious. Of course, we aren’t saying that profiting is bad! But just as you would evaluate any product, you want to know something about the company and how it manufactures that product. This is a lot like buying a television: we don’t know exactly how it’s made, but we can check the company’s general reputation and whether TVs are an important product line for it.

In contrast to businesses, universities and research centers tend to be pretty transparent. Even so, some have their own motivations based on their mission and funding (e.g., the Cato Institute vs. the Center for American Progress). While we definitely aren’t saying one is better than another, all products derive from some motivation, even if it is not profit.

Take-away: No ranking is objective. Different publishers have different motivations.

What impacts do ratings have?

Not all consumers respond to rankings the same way, which has implications for how important they are to both businesses and states.

We can learn something from college ratings. Moving up one spot in the rankings can increase a college’s applications by about 1 percent (Luca & Smith, 2013), but the effect isn’t necessarily uniform. Rankings might only come into play as a tie-breaker: some researchers found that students apply to a list of colleges and then choose the highest-rated one that accepted them (Griffith & Rask, 2007). Businesses looking for locations may operate the same way.

Also, college rankings illustrate the “front page” effect: the top 10 of any list tend to get more press than those lower down (Bowman & Bastedo, 2009). The additional attention might pressure these colleges to make different decisions and policy changes than schools ranked between #11 and #39.

Additionally, the power of ratings changes over time. Using colleges as an example again, about 10 percent of students cared about school rankings in 1995, but that share had doubled by 2015 (Eagan et al., 2016). For a business example, green energy might matter more now than it did in the past.

Take-away: Pay attention to how much a ranking influences your stakeholders—even if the ranking system is technically a bad one.

Conclusion

Based on all this background information, how should we use ratings, rankings and lists? Our advice:

  • Ratings are a good place to start a discussion about what makes a good business location.
  • Consider the effects of ranking. How important are these numbers to businesses, governments and the public?
  • Never make an important decision based on rankings alone.
  • Don’t feel like you need to check out the methodology behind every single rating. Don’t get overwhelmed, but remember there is more to it than just a number.

References

  • Bowman, N. A., & Bastedo, M. N. (2009). Getting on the front page: Organizational reputation, status signals, and the impact of U.S. News and World Report on student decisions. Research in Higher Education, 50(5), 415-436.
  • Eagan, K., Stolzenberg, E. B., Ramirez, J. J., Aragon, M. C., Suchard, M. R., & Rios-Aguilar, C. (2016). The American freshman: Fifty-year trends, 1966-2015. Los Angeles: Higher Education Research Institute, UCLA.
  • Fisher, P. S. (2005). Grading places: What do the business climate rankings really tell us? Washington, DC: Economic Policy Institute.
  • Gnolek, S. L., Falciano, V. T., & Kuncl, R. W. (2014). Modeling change and variation in U.S. News & World Report college rankings: What would it really take to be in the top 20? Research in Higher Education, 55(8), 761-779.
  • Griffith, A., & Rask, K. (2007). The influence of the US News and World Report collegiate rankings on the matriculation decision of high-ability students: 1995-2004. Economics of Education Review, 26(2), 244-255.
  • Luca, M., & Smith, J. (2013). Salience in quality disclosure: Evidence from the U.S. News college rankings. Journal of Economics & Management Strategy, 22(1), 58-77.
  • Motoyama, Y., & Konczal, J. (2013). How can I create my favorite state ranking? The hidden pitfalls of statistical indexes. Journal of Applied Research in Economic Development, 10.
  • Ovadia, S. (2004). Ratings and rankings: Reconsidering the structure of values and their measurement. International Journal of Social Research Methodology, 7(5), 403-414.