How TeamRankings Makes College Football Preseason Rankings

This post describes our methodology and process for creating college football preseason rankings for all 130 teams in the Football Bowl Subdivision (FBS).

As one would expect from TeamRankings, our CFB preseason rankings are driven by stats and modeling, and not by less objective methods like film study or media scouting reports. However, we still apply a dose of subjectivity to fine tune these rankings.

Before we dive into the details of our approach, let’s cover a few basics.

What Our College Football Preseason Rankings Represent

First, it’s important to know that at TeamRankings, our preseason rankings simply represent the rank order of preseason predictive ratings that we generate for every college football team.

So to create our preseason rankings, the first thing we do is calculate preseason ratings for every team.

Predictive Rating Definition

In simple terms, a team’s predictive rating is a number that represents the margin of victory we expect when that team plays a “perfectly average” FBS team on a neutral field.

This rating can be a positive or negative number; the higher the rating, the better the team. A rating of 0.0 indicates a perfectly average team.

Finally, because our predictive rating is measured in points, the difference in rating between any two teams indicates the projected winner and margin of victory in a neutral-site game between them.


How Ratings Translate To Predictions

For example, our system would expect Alabama, which has a 2022 preseason rating of 33.4, to beat an average FBS team (with a 0.0 rating) by about 33 points on a neutral field.

It would expect Alabama to beat New Mexico State, which has a -26.9 rating, by about 60 points. And New Mexico State would be expected to lose to an average team by about 27 points.
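In code, the translation from ratings to a predicted margin is as simple as it sounds; here’s a minimal sketch using the 2022 numbers above:

    def predicted_margin(rating_a, rating_b):
        # Projected margin of victory for team A over team B on a neutral field.
        return rating_a - rating_b

    alabama, average_fbs, new_mexico_state = 33.4, 0.0, -26.9

    print(predicted_margin(alabama, average_fbs))       # 33.4 -> wins by about 33
    print(predicted_margin(alabama, new_mexico_state))  # 60.3 -> wins by about 60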

Ratings Are More Precise Than Rankings

Understanding the nature of predictive ratings is critical, because they are a more precise metric than a simple ranking.

For example, Oklahoma fans may cringe to see Clemson ranked ahead of them, at No. 4 in our 2022 preseason rankings. However, Oklahoma’s predictive rating is 18.7, only 3.2 points lower than Clemson’s rating.

So yes, if you put a gun to our head and forced us to rank order every team, we’d say Clemson is going to be better than Oklahoma this season. But the difference isn’t that significant.

However, there’s a 7-point drop in preseason rating between No. 3 Georgia and No. 4 Clemson, which is more significant. That means we have a pretty clear top three teams in 2022.

So don’t place too much stock in a team’s ranking. Ratings tell the more refined story.

When and Why We Make College Football Preseason Ratings

Once the college football season starts, our predictive ratings go on autopilot. As game results from CFB Week 1 and beyond come in, our system automatically adjusts team ratings (and the resulting rankings) within a few hours of receiving a new box score.

Teams that win by more than our ratings had predicted see their ratings increase. Teams that suffer worse than expected losses see their ratings drop. Software code controls all of the adjustments and no manual intervention is required.
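The actual formula is more involved, but the core idea can be sketched as a simple error-driven update (the k and home_edge values below are purely illustrative, not our real parameters):

    def update_ratings(home_rating, away_rating, actual_margin, k=0.05, home_edge=2.5):
        # actual_margin is the home score minus the away score.
        predicted = home_rating - away_rating + home_edge
        error = actual_margin - predicted  # positive if the home team beat expectations
        # Beating the prediction nudges a rating up; falling short nudges it down.
        return home_rating + k * error, away_rating - k * error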

Generating preseason ratings, however, involves a more labor-intensive process that we go through before every new season starts. What we are trying to do, in basic terms, is to pre-calibrate our predictive ratings system. We want to give it a smarter starting point than simply having every team start out with a 0.0 rating.

Put another way, our preseason ratings are our first prediction of what we think every college football team’s predictive rating will be at the end of the upcoming season. And we need to make that prediction before any regular season games are actually played.

Despite the substantial data challenges involved, our approach to this process is still mostly data-driven and objective. However, there are some judgment calls incorporated, which we’ll explain below.

Why We Make Preseason Ratings

Before we get into the details, it helps to review the brief history of how and why our current preseason ratings process evolved:

  • In the way old days (early 2000s), every team would start the season with a 0.0 rating, and we’d put a note on the site not to trust our ratings until Week 5 or Week 6. Before then, with such a tiny sample size of games, big surprises or lopsided results could produce some really funky ratings.
  • In the semi-old days (mid to late 2000s), we started having each team begin the season with its end-of-season rating from the prior year. Until Week 5, the impact of the prior year rating would gradually decay to zero, and by Week 6, we’d only consider current season results (a simple sketch of this blending appears after this list). Better, but still not the best.
  • Starting in 2011, we implemented the framework we use today. We looked at years of historical data and built a customized model to generate preseason ratings for college football. This approach is completely divorced from our automated in-season ratings updates.
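For the curious, the blending scheme from that middle era looked roughly like this (a linear decay is assumed here purely for illustration):

    def blended_rating(prior_year_rating, current_rating, week, cutoff_week=6):
        # The prior-year rating's weight decays to zero by the cutoff week.
        w = max(0.0, 1.0 - week / cutoff_week)
        return w * prior_year_rating + (1.0 - w) * current_rating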

Why we took that final step is simple. Generating preseason team ratings using a customized model significantly improved the in-season game predictions made by our ratings — and not only in early season games, where one would logically expect to see the biggest improvement.

In fact, still giving the preseason ratings some weight even at the very end of the season improved our prediction performance over the final weeks too.

The payoff was clear and measurable. In 2018, our predictive ratings ranked fifth in the NCAA Football Prediction Tracker (out of 60+ systems tracked) for predicting game winners. In 2017, we ranked sixth. Outside of the updated Vegas line, TeamRankings was the only ratings system to crack the top six in both years. We were also the only rating system to finish in the Top 10 every year from 2017 to 2020.

When We Make Preseason Ratings

During every college football offseason, we first put in work to improve our preseason ratings methodology. We investigate new potential data sources, and refit our preseason ratings model using an additional year of data.

After implementing any offseason refinements to our process and model, we then gather the necessary data from various sources, and generate our preseason ratings for the upcoming season. We typically complete the process a week or two before the regular season starts.

How We Make College Football Preseason Ratings

Step 1: Generate Data-Driven Ratings

Now let’s get to the meat. By analyzing years of historical college football data, we’ve identified a short list of descriptive factors that have correlated strongly with end-of-season power ratings.

Some of these predictive factors include:

  • Prior season performance. How good a team was in the latest season, measured by predictive rating, not win-loss record.
  • Program success history. How good a team has been in recent history, not including the latest season. A basic measure of legacy of success, the ability to recruit and develop new talent, etc.
  • Returning strength. The percentage of last season’s production in key stat categories we’ve identified, such as passing and rushing effectiveness, that returns for the upcoming season.
  • Prior season luck. Last year’s performance in stat categories highly impacted by luck, or not very reproducible for other reasons (e.g. turnover margin, red zone defense).

We use a regression model to determine each factor’s weight in our preseason ratings. As a result, the relative importance of each stat factor is based on its demonstrated level of predictive power in past seasons.
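Here’s a simplified sketch of that fitting step in Python; the file and column names are placeholders, not our actual data schema:

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # One row per historical team-season: the predictive factors plus the
    # end-of-season rating the model is trying to predict.
    history = pd.read_csv("team_seasons.csv")  # placeholder file

    factors = ["prior_rating", "program_history", "returning_strength", "prior_luck"]
    model = LinearRegression().fit(history[factors], history["end_of_season_rating"])

    # The fitted coefficients act as the factor weights.
    for name, weight in zip(factors, model.coef_):
        print(f"{name}: {weight:+.2f}")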

Example Predictive Factor: Defensive Lineman Continuity

To illustrate how we identify and incorporate specific predictive factors, let’s look at an actual stat used by our model: the percentage of defensive lineman games played returning. Let’s call it RetDL% for short.

At a very basic level, RetDL% measures the continuity of a team’s defensive line. High-usage defensive linemen from the previous season can graduate, get drafted into the NFL, or otherwise leave the team, so we wanted to see what happened based on how much of that experience needed to be replaced.

So first, we counted up all the games played in the prior season by all players whose main position was on the defensive line. That’s the denominator.

Then we did the same thing, but only counted players who are on this season’s roster. That’s the numerator.

The second number divided by the first number equals the RetDL%. And as it turns out, entering the 2019 season, in our historical training data set there were:

  • About 25 teams with RetDL% higher than 90%. Their “next season” ratings increased by an average of 2.5 points.
  • About 200 teams with RetDL% between 75% and 90%. Their “next season” ratings increased by an average of 1.8 points.
  • About 600 teams with RetDL% between 55% and 75%. Their “next season” ratings, on average, stayed the same.
  • About 200 teams with RetDL% between 40% and 55%. Their “next season” ratings decreased by an average of 1.1 points.
  • About 25 teams with RetDL% lower than 40%. Their “next season” ratings decreased by an average of 4.6 points.

Across a sample size of more than 1,000 team-seasons, that’s some fairly convincing evidence that defensive line continuity is a plus when it comes to predicting the upcoming season.
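Here’s a simplified sketch of the RetDL% computation and the bucket analysis, again with placeholder column names rather than our actual data schema:

    import pandas as pd

    def ret_dl_pct(players):
        # players: one row per player-season, with columns team, season,
        # position, games_played, and on_next_roster (a boolean).
        dl = players[players["position"] == "DL"]
        total = dl.groupby(["team", "season"])["games_played"].sum()  # denominator
        returning = dl[dl["on_next_roster"]].groupby(["team", "season"])["games_played"].sum()
        return (returning / total).fillna(0.0)  # numerator / denominator

    def bucket_summary(teams):
        # teams: one row per team-season, with columns ret_dl_pct and
        # rating_change (next season's rating minus this season's).
        buckets = pd.cut(teams["ret_dl_pct"], bins=[0.0, 0.40, 0.55, 0.75, 0.90, 1.0])
        return teams.groupby(buckets)["rating_change"].agg(["count", "mean"])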

If all you knew was the percentage of defensive lineman games played returning for every team, you could already begin to make an educated guess as to whether the team will be better or worse this year.

A regression model does this level of analysis for all of the metrics we examine, only it breaks the data down much more granularly.

Step 2: Review & Refine The Initial Results

After our model generates its 100% data-driven preseason ratings for college football, we then compare those ratings (and the resulting team rankings) to the betting markets and human polls.

If our assessment of a specific team seems way out of whack in comparison to those benchmarks, we’ll investigate more. Primarily, we’re looking to identify some factor not taken into account by our model (e.g. a coaching change or an abnormally good or bad recruiting class) that is likely to impact the expected performance level of a team.

In most of those cases, we end up adjusting our rating to be closer to the consensus. As a result, this final step does inject some subjective judgment calls into our process.
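In pseudocode terms, the review step looks something like this (the threshold and blend weight are just illustrative; the real adjustments are case-by-case judgment calls):

    def review_and_adjust(model_ratings, consensus_ratings, threshold=6.0, blend=0.5):
        # Both arguments map team name -> rating. Teams whose model rating
        # sits far from the market/poll consensus get flagged for review.
        adjusted = dict(model_ratings)
        for team, ours in model_ratings.items():
            consensus = consensus_ratings.get(team)
            if consensus is not None and abs(ours - consensus) > threshold:
                # After investigating, most flagged ratings end up moved
                # partway toward the consensus.
                adjusted[team] = blend * ours + (1 - blend) * consensus
        return adjusted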

Why Adjust CFB Ratings Manually?

We’re data guys, so it typically takes a lot of convincing for us to incorporate some level of subjectivity into our predictions.

In the case of college football preseason ratings, though, we think it makes sense. College football teams play so few games each season that there’s not much historical data to work with in the first place.

Yet there’s still a very high statistical bar to reach in order to anoint a particular stat as generally predictive of future performance. Consequently, very few stats pass the test.

That’s a good thing. One of the biggest challenges of predictive modeling is separating the signal from the noise, and “false positives” based on small sample sizes can ruin the future accuracy of a model.

At the same time, lots of different factors are still likely to impact the future performance of a particular team in some significant way. But until we have a large enough sample size of similar events to analyze, it would be very risky to incorporate them into our model.

The “Intangibles” Of College Football Preseason Ratings

In some cases, factors our model doesn’t currently consider are simply in areas where we haven’t yet built up an extensive historical database of relevant information. For example, in the future we plan to do more analysis of coaching changes and recruiting impact, after acquiring deeper data.

In other cases, the factors are more unique or complicated: true outliers. Consider the hiring of a new offensive coordinator who, based on his 10-year track record, appears to be more skilled than the previous coordinator at implementing a scheme that best fits a team’s personnel, who are undersized but athletic.

There are so many contextual variables at play in a situation like that, it’s hard to say whether any obviously similar events have even happened in recent college football history.

In those cases, manual adjustments to our initial data-driven ratings, in order to incorporate either crowd wisdom or betting market pricing, may continue to be our best solution for the foreseeable future.

Side Note: We Still Take Stands…

As a final point, it’s important to remember that predicting how good a team will be before the season starts (especially for a sport like college football with 130 teams to project) is one area where the betting markets and “expert” crowd wisdom have proven to be good predictors overall — but certainly not perfect.

And while it has its blind spots, our methodology is rooted in a level of statistical analysis that is far more rigorous than what most other rankings-makers apply. As a result, only in some cases will we adjust our numbers all the way to match the market consensus.

For example, in 2018 our single biggest preseason outlier vs. the AP Poll was Iowa. Our preseason rankings had Iowa at No. 26, while the AP Poll had Iowa at No. 41. The Hawkeyes ended up going 9-4, finished 15th in our predictive rankings, and 25th in the final AP Poll.

Conclusion

There are many different ways to make preseason rankings for college football. The approaches can vary greatly, from media polls to “expert” analysis, from building a complex statistical model to making inferences from futures odds in the betting markets.

And speaking frankly, there’s plenty of crap out there. But there’s also no Holy Grail (yet).

The primary goal of our preseason analysis is to provide a baseline (or “prior” in statistical terms) that makes our ratings better at predicting regular season games. For that purpose, we’ve settled on a mostly data-driven, but still subjectively adjusted approach for preseason ratings. For our goals, this approach has proven valuable.

Whose College Football Preseason Rankings Are The Best?

Are our preseason rankings “the best” out there, compared to other methods? Honestly, we’re not sure. We haven’t yet done extensive historical comparisons to sources like the preseason AP Poll, ESPN’s preseason FPI rankings, or Bill Connelly’s preseason SP+ rankings. We hope to do more comparative research in the future.

However, asking which rankings are “best” is also a loaded question. Why? Because the goal of AP voters in ranking teams isn’t the same as our goal.

For example, we’d bet a lot of money that most AP voters care about a team’s win-loss record when they form their opinions on which teams should be ranked highest. Our ratings don’t give a hoot about win-loss record. All we care about is predicting margins of victory in future games.

Within ten seconds of looking over our preseason rankings, you’ll probably find several rankings you disagree with (vehemently), or that differ from what most other “experts” or systems think. That’s to be expected.

And when the end of this upcoming season arrives, we’ll have gotten plenty of teams wrong. (Not to mention that in some seasons, our preseason ratings will end up being a lot more accurate overall than in others.)

On balance, though, the system we’ve built has proven its value (to us, at least) over the long term.

Remember, if you’re in a football pool or planning on betting some games this football season, check out our Football Pick’em Pool Picks, NFL Survivor Picks and College Football Betting Picks.