How Our NCAA Bracket Picks Customers Did In 2016

We are the only site that collects and publishes extensive data on how our bracket pick advice performs in actual customer pools.

Every year after the NCAA tournament ends, we ask our NCAA Bracket Picks customers how our recommended brackets performed in the bracket contests they entered. This post breaks down our 2016 results.

In 2016, our customers won prizes 2.3 times as often as expected

Based on our post-tournament customer survey, here’s how our bracket picks delivered an edge in 2016:

  • 41% of our customers (about 2 out of every 5) won a prize in at least one bracket pool, compared to an expectation of 20%.
  • Our customers won a prize in 24% of the bracket pools they entered (nearly 1 out of every 4), compared to an expectation of 11%.

In other words, compared to expectations, our NCAA Bracket Picks customers were 2.3 times as likely to win a prize in any bracket pool they entered.

Those results are very solid. Winning a prize in a bracket pool typically nets a very large return on pool entry fees, and winning prizes more than twice as often as expected should generate extremely compelling profits in the long run.

What worked for us in 2016

2016’s performance demonstrated the effectiveness of our value- and portfolio-based approaches to winning bracket pools. Three factors in particular appeared to drive our success:

1. Several of our Alternative Brackets had Villanova going deep

Our algorithmic tournament predictions saw eventual champion Villanova as only the 7th most likely team to win it all, with a 5.0% chance. However, the Wildcats were a relatively unpopular NCAA champion pick, having been picked by only 2.5% of the public.

As a result, Villanova ended up being our second most undervalued champion pick, and we picked them either to win the title or to make a deep run in several of our Alternative Brackets. (These brackets are designed to be played along with our Best Bracket when putting multiple entries into a pool.)
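
To make the “undervalued” idea concrete, here is a minimal sketch of one way to quantify value at the champion slot, using the Villanova figures quoted above. The metric shown (projected title odds divided by public pick rate) illustrates the general principle; it is not our exact formula.

```python
# A minimal sketch of one way to flag an "undervalued" champion pick:
# compare a team's projected title odds to how often the public picks
# that team to win it all. Villanova's numbers are the ones quoted above;
# the ratio metric itself is illustrative only.

model_title_odds = 0.050   # our projection: 5.0% chance to win the title
public_pick_rate = 0.025   # 2.5% of public brackets picked Villanova as champ

value_ratio = model_title_odds / public_pick_rate
print(f"Villanova value ratio: {value_ratio:.1f}x")  # 2.0x -> undervalued (ratio > 1)
```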

Many of our customers played multiple brackets either in the same pool or across different pools, and by doing so, increased the chance that they played one of our brackets that made a big bet on Villanova.

2. Strong early round picking made a difference in smaller pools

Our most undervalued champion pick in 2016 was Virginia, and we had the Cavaliers as our NCAA Champion pick in most of our Best Brackets for various pool types. As a result, our #1 recommended bracket was unlikely to take first place in most larger bracket pools.

However, some of our Best Brackets still performed very well in smaller pools because of highly accurate early-round pick performance.

For example, in the First Round, our Best Bracket for small pools with traditional 1-2-4-8-16-32 scoring went a perfect 8-for-8 on the #8 vs. #9 and #7 vs. #10 games, and it also correctly predicted two of the three #11-over-#6 upsets.

3. Compared to the public, we “faded” Michigan State in most pool types

Although Michigan State had strong odds to win the tournament, we saw the Spartans as an overvalued team in many types of pools in 2016, and our Best Brackets for the popular 1-2-4-8-16-32 scoring system didn’t have Michigan State advancing past the Elite 8.

We certainly didn’t predict their first-round loss to Middle Tennessee, but compared to the general public, most of our top recommended brackets weren’t hurt nearly as badly by MSU’s early exit. (The primary exception was pools in which seed number factored prominently into the scoring system in later rounds.)

Results by pool characteristics

While we’re most interested in the overall frequency with which our customers win prizes, investigating performance in different types of pools is also informative.

Our post-tournament survey asks our customers for information about every pool that they defined in our system. Consequently, we can review how our resulting bracket pick recommendations did based on factors such as:

  • Pool size
  • Scoring system
  • Number of brackets entered into the pool

Results by pool size

The results by pool size look about as we’d expect. As pool size increases, the absolute win rate decreases, but the edge our picks provide generally rises.

Pool Size | Expected To Win A Prize | Actually Won A Prize | Win Rate vs. Expectation
10 or fewer entries | 21% | 38% | 1.8x
11 to 30 entries | 14% | 30% | 2.1x
31 to 50 entries | 12% | 25% | 2.1x
51 to 100 entries | 10% | 25% | 2.6x
101 to 250 entries | 7% | 17% | 2.4x
251 to 1,000 entries | 4% | 20% | 5.0x
1,001 to 9,999 entries | 4% | 10% | 2.6x
10,000 or more entries* | <1% | 0% | 0.0x
Grand Total | 11% | 24% | 2.3x

*Our system didn’t let people enter a pool size of more than 10,000, so this is a catch-all bin for really giant pools. We ask customers to name their pools, and many of the names for pools this large are “ESPN,” “CBS,” “Yahoo!,” etc. In other words, many of these pools were likely our customers competing for some major site’s grand prize against millions of other brackets, so our 0% prize-winning rate is not a big surprise.

Small pools

Even though most of our Best Brackets had Virginia or Kansas winning it all, our picks still provided a decent edge in smaller pools, thanks to generally strong early-round picks.

In the end, our customers in pools with 50 or fewer entries won a prize 80% to 110% more often than one would expect.

Large pools

Customers tend to use our supplemental Alternative Brackets more often in large pools than in small pools, since many are playing more than one entry in a pool. That dynamic likely drove a lot of the success you see in the above table.

Most 5-bracket portfolios that our system recommended, for instance, had at least one bracket with Villanova making the title game or winning the title.

With most pools heavily rewarding late round picks, and with Villanova being an unpopular pick, those supplemental brackets generally fared very well.

Results by scoring system

With hundreds of different scoring systems entered by users in our custom bracket picks tool, we have to group them into broad categories here.

In the table below, we grouped pools by whether the points awarded for each correct pick (“base scoring”) took a winning pick’s seed number into account. If not, we then subdivided them based on whether or not they awarded upset bonus points.

Finally, we split out the most popular 1-2-4-8-16-32 round-based scoring system.
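
As a rough sketch, here is how that grouping logic might look when applied to a hypothetical pool-settings record. The field names and function are illustrative, not our actual schema.

```python
# Sketch of the scoring-system grouping described above.
# Inputs are hypothetical pool settings, not our real data model.

def scoring_category(round_points, seed_based, upset_bonus):
    """round_points: points per correct pick by round, e.g. [1, 2, 4, 8, 16, 32]."""
    if seed_based:
        return "Seed-Based"
    if upset_bonus:
        return "Round-Based w/ Upset Bonus"
    if round_points == [1, 2, 4, 8, 16, 32]:
        return "Round-Based (1-2-4-8-16-32)"
    return "Round-Based (Other)"

print(scoring_category([1, 2, 4, 8, 16, 32], seed_based=False, upset_bonus=False))
# -> "Round-Based (1-2-4-8-16-32)"
```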

When you look at it this way, our performance was solid across the board, but in 2016 it was better in pools with non-traditional scoring of some kind.

Scoring Type | Expected To Win A Prize | Actually Won A Prize | Win Rate vs. Expectation
Round-Based (1-2-4-8-16-32) | 10% | 16% | 1.7x
Round-Based (Other) | 10% | 28% | 2.7x
Round-Based w/ Upset Bonus | 11% | 35% | 3.3x
Seed-Based | 12% | 32% | 2.6x
Grand Total | 11% | 24% | 2.3x

We suspect this result has to do with two factors.

First, many people don’t properly take into account their pool’s scoring system when making picks. As we’ve written about, pool scoring systems can make a big difference in what types of strategies are rewarded.

For example, beyond the basic scoring system, upset bonuses impact strategy in ways that many people don’t fully grasp, making early round upset picks super valuable.
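
To see why, here is a toy expected-value comparison under a hypothetical scoring rule (1 point per correct first-round pick plus a 2-point upset bonus) and an assumed 35% upset probability. The numbers are purely illustrative, not from our models.

```python
# Toy example: how an upset bonus can flip the expected value of a pick.
# The scoring rule and win probability here are hypothetical.

p_upset = 0.35      # assumed chance the #11 seed beats the #6 seed
base_points = 1     # points for any correct first-round pick
upset_bonus = 2     # extra points if a correct pick is an upset

ev_favorite = (1 - p_upset) * base_points            # 0.65 expected points
ev_underdog = p_upset * (base_points + upset_bonus)  # 1.05 expected points

print(f"EV of picking the favorite: {ev_favorite:.2f}")
print(f"EV of picking the underdog: {ev_underdog:.2f}")
```

In this toy setup, the underdog pick has a higher expected point total even though it is more likely to be wrong, which is exactly the kind of tradeoff a pool-specific strategy needs to weigh.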

Our algorithms do a great job of optimizing around the risks and rewards presented by a particular scoring system, especially since for some upset-heavy systems, many of the optimal picks will seem pretty crazy to the typical human fan.

Second, as mentioned previously, our early round picks did very well in most pool types in 2016. Non-traditional scoring systems tend to place more emphasis on early round picks than on later round picks.

Results by number of brackets entered

In theory, as a smart player enters more brackets into a specific pool, two things should happen:

  • Their chance of winning a prize should increase
  • Their overall edge against the competition should decrease

To explain the second point, consider that for any type of bracket pool, there will be one combination of picks that gives you the absolute best chance to win (i.e. the maximum edge over your opponents). A smart player does their best to identify that bracket, and play it as their first entry.

By definition, then, any additional brackets the smart player enters, assuming those brackets have some different picks than the first bracket, are projected to be not quite as likely to win compared to the optimal, first bracket played.

Playing more brackets in a pool therefore should increase your overall odds of winning a prize, but your expected return on investment also decreases a bit, since you’re paying the same price to enter the pool with a second, third, etc. bracket that is not quite as good as the first bracket you entered.
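
Here is a simplified sketch of that tradeoff. It assumes hypothetical per-bracket win probabilities, a single flat prize, and roughly independent outcomes, none of which reflect our actual model; the point is just the pattern, where the chance of winning something rises with each added bracket while the expected return per dollar slips.

```python
# Simplified sketch of the "more brackets in one pool" tradeoff.
# Probabilities, prize, and entry fee are all hypothetical, and the
# expected payout assumes at most one prize is collected.

entry_fee = 10.0
prize = 200.0
bracket_win_probs = [0.10, 0.09, 0.08, 0.07]  # best bracket first

p_no_prize = 1.0
for n, p in enumerate(bracket_win_probs, start=1):
    p_no_prize *= (1 - p)
    p_any_prize = 1 - p_no_prize
    expected_roi = (p_any_prize * prize - n * entry_fee) / (n * entry_fee)
    print(f"{n} bracket(s): P(win a prize) = {p_any_prize:.1%}, "
          f"expected ROI = {expected_roi:+.0%}")
```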

However, that’s not what we saw in 2016. Our customers’ chance of winning did go up with more brackets, as expected. But so did their win rate vs. expectation:

Number of Brackets Entered | Expected To Win A Prize | Actually Won A Prize | Win Rate vs. Expectation
1 | 9% | 18% | 2.0x
2 | 11% | 26% | 2.4x
3 | 13% | 31% | 2.4x
more than 3 | 14% | 39% | 2.7x
Grand Total | 11% | 24% | 2.3x

This is almost certainly due to Villanova being a more common pick suggestion in our supplemental Alternative Brackets than in our Best Brackets. The more brackets a user entered, the more likely they were to use a bracket with Villanova making a deep run.

The pattern becomes more evident if we look at the numbers again, but this time take pool size into account:

Pool Size: 50 or fewer
Number of Brackets Entered | Expected To Win A Prize | Actually Won A Prize | Win Rate vs. Expectation
1 | 12% | 24% | 2.0x
2 | 17% | 37% | 2.2x
3 | 21% | 47% | 2.2x
more than 3 | 31% | 50% | 1.6x
Grand Total | 15% | 30% | 2.0x

Pool Size: 51 to 250
Number of Brackets Entered | Expected To Win A Prize | Actually Won A Prize | Win Rate vs. Expectation
1 | 4% | 9% | 2.4x
2 | 8% | 19% | 2.5x
3 | 12% | 30% | 2.5x
more than 3 | 18% | 45% | 2.6x
Grand Total | 9% | 21% | 2.5x

Pool Size: 251 to 9,999
Number of Brackets Entered | Expected To Win A Prize | Actually Won A Prize | Win Rate vs. Expectation
1 | 1% | 0% | 0.0x
2 | 4% | 17% | 3.9x
3 | 3% | 13% | 4.2x
more than 3 | 7% | 39% | 5.7x
Grand Total | 4% | 18% | 4.6x

The data above is a bit noisy, since some of these bins have a pretty small sample size; not many people enter more than 3 brackets in a pool with 50 or fewer total entries. Still, a couple of trends seem evident:

  • In small pools, even our Best Brackets (generally with Virginia or Kansas as champ) provided an edge, most likely on account of good early-round performance.
  • In large pools, that good early-round performance wasn’t enough to win a prize, so users entering only a single bracket in large pools generally didn’t win anything.
  • However, if users entered multiple brackets in large pools, our pick suggestions made them 4 to 6 times as likely to win a prize.

Most impressively, when our customers entered more than 3 brackets in pools of 251 to 9,999 total entries, they won a prize 39% of the time. That’s an extremely high win rate for pools that large.

Results by individual customer

The previous section explored our results on a pool by pool basis. But what’s more interesting to many people is the percentage of our customers that won a prize in at least one pool.

Data from our customers indicates that most people place more value on winning something in their pools more frequently than on winning a big prize once in a long while.

As a result, we encourage customers who enter more than one pool to diversify their picks across those pools.

For example, if a customer has one entry in each of three pools, using one of our recommended brackets with a Virginia champion pick, another with a Kansas champion pick, and a third with a Villanova champion pick would be a better strategy than using three brackets that all have Virginia as their champion.

The result of a “diversified portfolio” approach like the one above should be that a customer’s average pool win rate decreases slightly, but their chance to win a prize in at least one pool increases. Put another way, this strategy reduces the extremely high variance inherent in bracket pool contests.
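
Here is a toy model of that diversification effect. It assumes, purely for illustration, that winning a pool hinges on your champion pick being correct, and it uses hypothetical title odds and conditional win probabilities rather than our actual projections.

```python
# Toy model of the diversification tradeoff described above.
# q is the assumed chance of winning a pool *given* your champion wins the
# title; p_a, p_b, p_c are hypothetical title odds for three different teams.

q = 0.50
p_a, p_b, p_c = 0.15, 0.12, 0.05

# Strategy 1: ride the single strongest champion pick in all three pools.
p_any_concentrated = p_a * (1 - (1 - q) ** 3)
avg_concentrated = p_a * q

# Strategy 2: use a different champion in each pool. Only one team can win
# the title, so the three "win a pool" events are mutually exclusive.
p_any_diversified = (p_a + p_b + p_c) * q
avg_diversified = (p_a + p_b + p_c) * q / 3

print(f"Concentrated: P(win >= 1 pool) = {p_any_concentrated:.1%}, "
      f"avg per-pool win rate = {avg_concentrated:.1%}")
print(f"Diversified:  P(win >= 1 pool) = {p_any_diversified:.1%}, "
      f"avg per-pool win rate = {avg_diversified:.1%}")
```

With these made-up numbers, the diversified portfolio wins at least one pool about 16% of the time versus about 13% for the concentrated one, even though its average per-pool win rate is lower, which is the variance-reduction tradeoff described above.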

In the tables below, we combine the effects of pool size, number of pools entered, and number of brackets entered in each pool to get the expected chance of an NCAA Bracket Picks subscriber winning a prize in at least one pool in 2016.

We then compare that expectation to the percent of customers that actually won a prize in at least one pool.

Results By Number Of Brackets Entered Across All Pools

# of Brackets Entered Across All Pools | Expected To Win A Prize | Actually Won A Prize | Win Rate vs. Expectation
1 | 11% | 27% | 2.5x
2 | 15% | 30% | 2.0x
3 | 19% | 36% | 1.9x
4 | 21% | 41% | 2.0x
5 | 21% | 50% | 2.4x
6 to 10 | 31% | 61% | 2.0x
more than 10 | 36% | 63% | 1.8x
Grand Total | 20% | 41% | 2.0x

As you’d expect, the more brackets a customer entered across all pools, the higher their expected and actual rates to win at least one prize were.

However, we expected the edge provided by our pick suggestions to decrease as the number of brackets increased, and that wasn’t the case.

We realized that this was because not all 5-entry customers are created equal. Entering 5 brackets in a single 10-person pool is very different from entering 1 bracket each in 5 different 1,000-person pools.

Results By Customer Expected Prize Win Rate

This next table accounts for that confounding effect by grouping customers by their expected chance to win at least one prize.

Remember, when we say a customer had a 50% chance to win at least one prize, that’s the baseline without using our pick suggestions, assuming everyone in their pools is equally skilled. So a customer with a baseline chance that high is either entering multiple brackets per pool, entering very small pools, or (most likely) doing both.

We were a bit worried that our pick suggestions would have trouble improving such already-high odds, but it turned out we shouldn’t have been concerned:

Customer "Chance To Win At Least One Prize" BinExpected To Win At Least One PrizeActually Won At Least One PrizeWin Rate vs. Expectation
less than 5%3%10%3.5x
5% to less than 10%7%25%3.4x
10% to less than 20%15%29%2.0x
20% to less than 30%25%58%2.3x
30% to less than 50%38%68%1.8x
50% or more60%90%1.5x
Grand Total20%41%2.0x

These results show the trends that we expect. As our customers’ baseline expected chances to win a prize went up, their actual win rates increased, but the relative edge they got from our pick suggestions decreased a bit too.

Some of that is simply a ceiling effect. When a customer starts with a 60% chance to win a prize, the maximum theoretical ratio of actual win rate to expected win rate is capped at 100% / 60% ≈ 1.67x. In that context, we’re very happy with a 1.5x rate.

Closing thoughts

When evaluating bracket pick performance, it’s imperative to understand the nature of bracket pools. You’re pretty much never expected to win — but when you do win, the return you earn more than makes up for multiple past losing entries.

If you use the right strategies and commit to playing for the long term, the expected returns from bracket pools are extremely compelling. That’s why we’ve made pool picks an area of focus for TeamRankings, even though we know that the best advice in the world still won’t generate pool wins every year.

As objectively as we can measure, our customized bracket picks delivered a strong edge to our customer base as a whole in 2016, and particularly to customers who played several of our recommended brackets in their pools. On average, our customers were more than twice as likely as expected to take home at least one bracket pool prize come April, along with the year’s worth of bragging rights that comes with it.

Throw in the fact that our subscribers outsource all of the time and stress of bracket pick research to us — especially for pools with uncommon or downright crazy scoring systems for which it’s much more difficult to optimize picks — and the overall value proposition passes the test.

Of course, we also realize that not all of our customers were happy with our picks in 2016. Some of our subscribers only played one bracket in a small or mid-sized pool and may not have come that close to winning a prize, or maybe their pool(s) used a particular scoring system for which our picks didn’t happen to do well this year.

As much as it bothers us, that is an inevitable reality of selling bracket advice. Playing in bracket pools is inherently risky, and our sophisticated approach isn’t afraid to take educated gambles that we expect to pay off in the long run, even though they won’t hit big every year.

Our commitment, as always, is to keep improving and refining our methods every year, so that we offer our customers the best possible chance to win and deliver the best possible ROI over the long term.

Since we pioneered data-driven bracket picks and analysis tools over fifteen years ago, our success has cultivated a base of loyal customers who win bracket pool prizes much more often than they used to, and much more often than they are expected to. That’s the only reason we’ve been able to build a successful business.

We’re improving the NCAA Bracket Picks product even more for 2017, and you can now take a free product tour until Selection Sunday.

Appendix 1: How we define success

As you’ve likely surmised if you’ve read this far, we really only care about one metric when it comes to measuring the success of our bracket picks: How often do our customers win bracket pool prizes using our picks?

There are a number of other objective metrics we could use to measure the performance of our bracket advice. We could look at how many picks our top brackets got right, how many points they scored, or their finishing percentile in a national bracket contest like ESPN’s Tournament Challenge.

But those metrics are secondary to our customers, and there is no consistent benchmark to measure against across years. Some years, getting two Final Four picks right qualifies as an outstanding performance; other years, you may need to get three Final Four picks right to have a shot at a top finish in your pool.

Perhaps more importantly, if your goal is to win a prize in a pool, avoiding picking all the most likely winners (and using more of a contrarian picking strategy instead) is often the best move.

In the end, what matters most to our paying subscribers is winning prizes, and making a positive financial return on their investment in TeamRankings. So that’s what we measure.

Appendix 2: How we measure success

If you’re curious how we come up with the customer prize win rates and baseline expectations quoted in the post, here are some key details:

  • To calculate expectations for prize wins, we assume that all competitors in customer pools are equally skilled. So presuming that no one uses our picks, each contestant in a 100-person, single-entry, winner-take-all bracket pool would have a 1% chance to win the pool.
  • We adjust prize winning expectations to account for cases where our customers played multiple brackets in the same pool, or played in multiple bracket pools. (If the baseline expectations for winning a prize quoted in the post seem high, that’s why. A simplified sketch of this calculation appears after this list.)
  • To get our picks, subscribers provide us with details about each bracket pool they intend to enter (e.g. scoring system, total number of entries, prize structure). After the tournament ends, we email them a custom-built survey to ask whether they actually played our picks in each pool, and if so, how they finished.
  • We typically collect results data on around 1,000 customer pools each year. As far as we know, no other site even comes close to measuring the real-world performance of their bracket advice on anywhere near that scale.
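
Here is a simplified sketch of that baseline calculation, following the description above. It assumes every entry in a pool has an equal chance at each prize and that results across pools are independent; the adjustments we actually apply in the survey analysis involve more detail than this.

```python
# Equal-skill baseline: in a pool with n total entries and k prizes, a
# customer playing b of those entries has roughly a k * b / n chance of
# winning at least one prize. Chances across pools are combined assuming
# independence. This is a sketch, not our exact adjustment procedure.

def baseline_prize_chance(pools):
    """pools: list of (total_entries, num_prizes, customer_brackets) tuples."""
    p_none = 1.0
    for n, k, b in pools:
        p_pool = min(1.0, k * b / n)   # approximate chance in this pool
        p_none *= (1 - p_pool)
    return 1 - p_none

# Example: one bracket in a 100-entry winner-take-all pool (a 1% chance),
# plus two brackets in a 20-entry pool that pays three prizes.
print(f"{baseline_prize_chance([(100, 1, 1), (20, 3, 2)]):.1%}")  # ~30.7%
```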