NFL Survivor Picks Historical Performance Review

In 2017, our NFL Survivor Pool Picks customers reported winning more than four times as much prize money as one would expect based on their pool sizes and their own number of entries.

This post reviews in detail how we measure the success of our subscribers in NFL survivor pools, and breaks down how our picks did compared to expectations across a number of different performance angles.

Why This Post Focuses On 2017 Survivor Pick Results

First, a quick history. We began posting survivor pool pick advice on TeamRankings.com with Week 1 of the 2010 NFL season. At first we would just make a single “official pick” in our survivor blog post each week, assuming a modestly sized, standard-rules survivor pool.

(Although that kind of one-size-fits-all pick strategy is far from optimal for many types of pools, as skill and luck would have it, our official survivor picks finished the 2011 season 17-0. Still, we didn’t have a great idea of how many people were using them or how much money they won as a result.)

After three seasons of blogging about survivor picks, we invested the resources to build a full-fledged “product” featuring a range of survivor pool data and features. We released the first version of our premium survivor picks product for the 2013 NFL season.

However, before 2017, we didn’t have a precise way of knowing what types of pools our subscribers were playing in, how they were applying our advice to real-world survivor pools, or how they ended up doing compared to expectations for an “average” player. Early iterations of our survivor picks product simply provided a ranked list of pick options for each pool, along with some fairly generic written advice on how to split up multiple survivor entries. At first, we weren’t even saving each user’s past weekly picks, let alone taking those picks into account when recommending picks for the current week.

Most recently, our NFL Survivor Picks product underwent a huge redesign and upgrade before the 2017 season. The level of data we collect and the pick customization we apply are now leagues beyond what we offered in previous years. As a result, 2017 is the first year for which we can provide highly detailed breakdowns of our survivor pick performance.

NFL Survivor Picks Product Overview

In case you aren’t familiar with our NFL Survivor Pool Picks product, here’s a quick overview:

  • We’ve built proprietary analytics to figure out your best picking strategy for winning NFL Survivor pools.
  • Our data-driven approach is based on thousands of computer simulations of pools similar to yours, and factors in details like the size of your pool, your pool rules (like strikes or multiple picks), and what teams you’ve already used.
  • We support “portfolios” of multiple entries across multiple pools (up to 30 total picks per week), and provide a pick suggestion for each specific entry.

As far as we know, we are the only site with a system that optimizes survivor pool picks for such a broad range of pool rules and, secondarily, for multiple-entry survivor pick portfolios.

The Implications Of Customized Picks

Because we take so many strategy factors into account, the weekly survivor picks we suggest can differ by pool, by entry, and by customer.

First, the rules of your survivor pool can have a massive effect on what the best pick is each week. For example, in a pool that requires multiple picks per week late in the season, saving teams with cushy late-season matchups is more important. The size of your pool also matters: far-in-the-future matchups are less important in smaller pools, which are more likely to end earlier in the season.

Second, every survivor entry is different, because teams can only be picked once. Even if the Patriots are the best pick in a given week for a given pool, some of a customer’s entries may have already used them earlier in the season, so the best available pick can differ by entry.

Finally, even for entries with the same teams available, every survivor pick portfolio is different. A player with only a single entry generally wants to pick the best available team for that entry. But if a player has a 10-entry portfolio, and the Rams look like the best available pick for all 10 entries, it generally makes sense to pick a team other than the Rams with some of those entries, to avoid putting all ten eggs in one basket.
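To make that diversification logic concrete, here’s a toy calculation with made-up win probabilities. The team names and numbers below are purely illustrative assumptions, not output from our models:

```python
# Toy "eggs in one basket" calculation with made-up win probabilities.
# Assume the Rams win with probability 0.80 and the next-best available
# option, the Chargers, wins with probability 0.75 (illustrative numbers).

p_rams, p_chargers = 0.80, 0.75

# All 10 entries on the Rams: the whole portfolio busts if the Rams lose.
p_bust_concentrated = 1 - p_rams                # 0.20

# 5 entries on each team: the portfolio only busts if BOTH teams lose
# (treating the two games as independent).
p_bust_split = (1 - p_rams) * (1 - p_chargers)  # 0.05

print(f"All on Rams: {p_bust_concentrated:.0%} chance of losing every entry")
print(f"Split 5/5:   {p_bust_split:.0%} chance of losing every entry")
```

The split sacrifices a little expected survival on each individual entry, but it sharply reduces the chance of the entire portfolio being wiped out in a single week.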

The combined effect of accounting for all of these strategy factors in our pick advice is a very high level of computational complexity. It can also lead to a fairly wide variety of recommended picks, depending on a specific subscriber’s situation. That’s especially true late in the season, when the available teams on each entry dwindle. In Week 17 of the 2017 season, for example, we suggested 17 different teams as a pick to at least one customer. (Of course, most of those were extreme corner cases. Only 8 teams were suggested to more than 1% of users, and only 3 were common pick suggestions in standard pools.)

How We Measure Survivor Pick Success

The high level of customization that we now apply to survivor picks means that there is no single “official TeamRankings survivor pick” in a given week. Instead, we have a distribution of picks representing everything we recommended across our entire subscriber base, based on each customer’s pool rules, past weekly picks, and so on.

Consequently, there is no single set of picks we can track to tell us whether our suggestions did well. More importantly, what we really care about is whether our pick recommendations give our subscribers an edge in their survivor pools, and it’s impossible to determine that if all we know is that the top pick we advised to a given person survived 5 weeks, or 7, or 15.

So there’s really only one good way to measure the effectiveness of our NFL Survivor Pool Picks advice: we ask subscribers directly how our pick recommendations did for them, via a survey we email out at the end of the season.

In order to get custom pick advice from our NFL Survivor Picks product, customers have to set up their pool(s) on the site. That involves telling us their pool rules, the overall pool size, and how many entries they personally have in the pool.

The end-of-season survey asks customers how they did in each specific pool they set up in our system. This allows us not only to get an idea of the overall performance of our pick suggestions, but also to look at how they fared across various splits of the data (by pool rules, by pool size, etc.).

Calculating Survivor Pool Win Expectations

Knowing how many customers won their pool is nice. But to get a real sense of whether our picks are providing an edge, we need to know what the baseline expectation should be. Is winning a pool 5% of the time good? 10%? 20%?

To define our baseline expectations, we assume every player in a given survivor pool is equally skilled. Then we calculate what percent of the prize pool our subscriber would expect to win, based on the number of entries they submitted and the overall pool size. That math is simply the number of customer entries divided by the total number of pool entries.

For example:

  • 1 entry in a 10-entry pool … 1/10 … 10% expected prize share
  • 1 entry in a 100-entry pool … 1/100 … 1% expected prize share
  • 5 entries in a 100-entry pool … 5/100 … 5% expected prize share
  • 10 entries in a 5,000-entry pool … 10/5000 … 0.2% expected prize share
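In code form, that baseline is a one-liner. Here’s a minimal sketch (the function name is ours, for illustration):

```python
def expected_prize_share(user_entries: int, total_pool_entries: int) -> float:
    """Expected share of the prize pool for an average, no-edge player."""
    return user_entries / total_pool_entries

# The examples from the list above:
print(expected_prize_share(1, 10))     # 0.1   -> 10%
print(expected_prize_share(1, 100))    # 0.01  -> 1%
print(expected_prize_share(5, 100))    # 0.05  -> 5%
print(expected_prize_share(10, 5000))  # 0.002 -> 0.2%
```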

This gives us the expected prize share for every customer in every pool. It tells us how much our customers would expect to win if our pick advice was not providing any edge in the pool.

To calculate the actual prize share, we ask customers (1) if they won their survivor pool(s), and (2) how many other entries they had to split the pot with.

If they won, then their actual prize share is simply 100% divided by the total number of entries splitting the pot. If they won the whole pot, their prize share is 100%. If they split the pot with 1 other entry, their prize share is 50%. If they split the pot with 2 other entries, their prize share is 33.3%. And so on.

Dividing the actual prize share by the expected prize share gives us a “Winnings Multiplier,” such as 2 or 3. That number tells us our customers won 2x or 3x as much prize money as an average player in the pool would expect to win.

If our Multiplier is greater than 1, that means our pick advice has been delivering an edge, on average.
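Putting the last few paragraphs together, here’s a minimal sketch of that math (function and variable names are ours, for illustration):

```python
def actual_prize_share(won_pool: bool, entries_splitting_pot: int) -> float:
    """Share of the pot actually won: 100% split among the winning entries."""
    if not won_pool:
        return 0.0
    return 1.0 / entries_splitting_pot

def winnings_multiplier(actual_share: float, expected_share: float) -> float:
    """Ratio of actual to expected prize share; above 1 indicates an edge."""
    return actual_share / expected_share

# Example: a customer with 1 of 100 entries wins and splits the pot
# with 1 other entry (2 entries total splitting the pot).
actual = actual_prize_share(won_pool=True, entries_splitting_pot=2)  # 0.5
expected = 1 / 100                                                   # 0.01
print(winnings_multiplier(actual, expected))                         # 50.0
```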

2017 NFL Survivor Picks Performance Results

Now that we’ve explained our methodology for measuring success, let’s examine how our NFL Survivor Picks customers did in 2017, compared to an “average” pool player:

Year   % Won Pool   Avg % of Pot Won   Actual Share   Expected Share   Multiplier
2017   24.3%        49.4%              12.0%          2.8%             4.3

Our customers won a prize in 24% of their pools in 2017. Their average “% of Pot Won” was 49%, which indicates that on average the winning customers split the pot with one other entry.

That gave our customers an average actual prize share of 12%. Based on their number of entries and the overall size of their pools, we’d expect them to earn only a 2.8% prize share if our advice provided no edge over the rest of the pool. What we actually saw, though, was that our customers won over 4 times as much as expected, for a 4.3 Winnings Multiplier.

Survivor Pick Performance Splits

The numbers above show overall performance. However, we provide picks for a wide variety of pool rules and sizes. It’s worth looking at performance by pool type or by other factors, to see if only certain types of pools perform well, or if the edge holds across various types and sizes.

By Type Of Survivor Pool

First, here is customer performance by type of pool. This table is sorted from the most common pool type to the least. Also, note that we support combinations of these types, but breaking the data down any further would make the sample sizes too small to be meaningful:

Pool Features             % Won Pool   Avg % of Pot Won   Actual Share   Expected Share   Multiplier
Standard Rules            20.1%        52.1%              10.4%          2.3%             4.6
Multiple Picks            22.1%        29.0%              6.4%           1.4%             4.5
Starts Midseason          22.1%        52.5%              11.6%          4.2%             2.8
Strikes                   31.3%        59.0%              18.4%          4.0%             4.6
Buybacks                  30.7%        43.6%              13.4%          3.1%             4.4
Season Wins Tiebreaker    21.3%        51.5%              11.0%          1.9%             5.9
Continues Into Playoffs   31.6%        59.7%              18.9%          6.7%             2.8
Byes                      6.3%         3.0%               0.2%           3.8%             0.0

As you can see, in 2017 our picks delivered an edge in all types of supported pools, except for pools featuring Byes. It’s worth noting that:

  1. Bye pools are our smallest sample, so this could just be noise.
  2. Performance across pool types is bound to vary by season, which is another reason this could just be noise.
  3. We made major improvements to the Bye pool logic midway through last season, so that the relative value of a Bye pick versus other picks now changes dynamically each week, rather than being fixed at a constant value. This should improve Bye pool performance, but it may have been implemented too late last season to make a difference.

By Survivor Pool Size

Now, here is performance by pool size:

Pool Size   % Won Pool   Avg % of Pot Won   Actual Share   Expected Share   Multiplier
0-24        32.3%        69.6%              22.5%          10.6%            2.1
25-49       29.0%        65.5%              19.0%          4.5%             4.2
50-99       27.0%        62.5%              16.9%          2.6%             6.5
100-249     21.3%        50.1%              10.7%          2.0%             5.3
250-499     30.3%        39.2%              11.9%          1.1%             11.0
500-999     24.1%        31.9%              7.7%           0.7%             11.4
1000-9999   19.6%        18.9%              3.7%           0.2%             16.6
10000+      0.0%         n/a                0.0%           0.0%             0.0

This is a pattern we’ve seen before in our office pool product performance. As pool sizes go up, the absolute win rate goes down, but the edge delivered by our picks goes up. This makes some sense. If you start a pool with, say, 20% win odds, there’s a hard ceiling on how much we can improve that: since no one can win more than 100% of the pot, the maximum possible multiplier is 1 divided by your expected share, or 5x in this example. We also suspect there is more “dead weight” in huge pools: players who make poor picks because they either don’t know any better or don’t put in the required effort.

One note on the 10,000+ pool size bin, which shows a 0% win rate: the sample size in that bin (fewer than 100 pools) is small enough that even if we delivered a 10x multiplier, we wouldn’t expect to see any wins. A 10x multiplier would move a single entry’s win odds from 1 in 10,000 to 1 in 1,000. So this sample is simply too small to tell us anything meaningful about our edge in giant pools.
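As a rough sanity check on that claim, here’s the back-of-the-envelope math, using assumed round numbers (about 100 pools, each treated as a single entry in a 10,000-entry pool):

```python
# Back-of-the-envelope check on the 10,000+ bin (all numbers assumed).
n_pools = 100                   # rough upper bound on the sample size
baseline_win_prob = 1 / 10_000  # single entry in a 10,000-entry pool
multiplier = 10                 # hypothetical 10x edge

boosted_win_prob = baseline_win_prob * multiplier  # 1 in 1,000
expected_wins = n_pools * boosted_win_prob

print(expected_wins)  # 0.1 -> even with a 10x edge, zero observed wins is the likely outcome
```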

By Number Of User Survivor Pool Entries

Finally, here is performance by number of user entries in a pool:

Number of User Entries   % Won Pool   Avg % of Pot Won   Actual Share   Expected Share   Multiplier
1                        20.2%        55.9%              11.3%          2.9%             3.9
2                        24.6%        47.9%              11.8%          2.7%             4.4
3                        31.2%        52.6%              16.4%          3.2%             5.1
4                        30.6%        46.5%              14.2%          2.2%             6.5
5                        31.3%        37.1%              11.6%          2.3%             5.0
6-10                     32.6%        45.5%              14.8%          1.6%             9.0
11-30                    20.0%        18.0%              3.6%           2.9%             1.2
31-65                    33.3%        66.7%              22.2%          12.6%            1.8

We delivered an edge for our customers no matter how many entries they had in a pool. The sweet spot seems to be around 6 to 10 entries.

Seeing smaller edges at the highest entry counts makes some logical sense: if there is one ideal entry, then every successive entry you place in a pool has a lower expected return on investment than the previous one. The sample sizes (not shown) in some of these bins are fairly low, though, so we’re not totally sure how much of this trend is real and how much is random.

Year 1 Survivor Pick Results: So Far, So Good

Our first year of highly customized, automatically-updating survivor portfolio picks covering a huge variety of pool types is in the books.

Based on these subscriber survey results, moving from generic weekly write-ups (which by their nature can’t cover every rules wrinkle, and can’t update as input data changes) to a customized, automated system was almost certainly a highly profitable upgrade for our customers. That was, of course, the motivation for making massive improvements to our NFL Survivor Picks product during the summer of 2017, so it was great to see an immediate impact.

Even in great years for our picks overall, not every customer is going to win their pool (not even close). But our customer base winning, on average, over four times as much as expected is a clear demonstration of the edge our product delivers. If that edge holds for long-term customers, the investment in TeamRankings survivor picks should pay off extremely well.