Now is right around the time when NFL teams are supposed to be getting hot for the postseason. If there’s any narrative about the postseason taken more seriously than the power of having nobody believe in you, it’s the power of peaking at the right time. It’s an argument that’s easy to understand and ingrained in the culture of the game. Players believe it. Media members believe it. Fans believe it.
All that belief leaves just one problem: There’s scant evidence that any late-season peaking effect actually exists.
In truth, there are a lot of problems with nailing down the actual argument itself. The gist of it is clear — teams who get hot at the end of the regular season do well in the postseason — but the specifics can be gerrymandered to fit just about any situation you’d like. Are teams supposed to start getting hot in Week 14? Week 16? Week 17? In a way, the narrative can become a tautology if you suggest that teams who raise their game and get hot at the right time actually do so when the postseason begins, since one team inevitably has to win the Super Bowl each year. Say that the Super Bowl winner got hot at the right time and it’s going to be hard to argue against it.
Heck, if we include those games from the final few weeks of the regular season, you don't even need to win to get hot. The 2007 Giants lost two of their last three regular season games, but because they came close to defeating the 16-0 Patriots in Week 17, the story goes, they had created momentum for the playoffs. But isn't the opposite just as plausible? Couldn't a team that narrowly beats a weak opponent at the end of the season actually lose some measure of momentum or hotness? You'll never hear that come up in a discussion of whether this stuff actually makes any sense, of course, because the anecdotal examples used to justify the end-of-season peaking theory are subject to massive amounts of confirmation bias. When a team succeeds in the playoffs after a late-season run, it's because they got hot and played their best football at the right time. When a team wins the Super Bowl without one, the late-peak theory simply doesn't get discussed, and some other aspect of the team's performance comes up as a positive instead.
There’s also a fair amount of selection bias that creates some false positives for this theory. A team that peaks at the end of the regular season and wins a bunch of games there doesn’t necessarily do so because they’re raising their game and playing at the highest level; they’re often winning those games because they’re great teams who win the vast majority of their games, regardless of when they’re played.
Take the 36 teams who've made it to the Super Bowl since 1994. Over their final four matchups of the regular season, those teams won 75 percent of their games. That sounds impressive, but it's nothing out of the ordinary, since those same teams also won 77.1 percent of their first 12 games. In fact, if you split the season up into four quarters, those 36 teams posted their best winning percentage in the first quarter of the year, winning at a .799 clip. Split their winning percentage out by week and you can see how the teams tended to decline over the final three weeks of the year:
Part of that decline undoubtedly owes to successful teams resting their stars and taking their lumps over the final couple of weeks of the regular season to stay healthy for the playoffs, but that in and of itself undercuts the argument that getting hot at the end of the regular season means anything. You don't have to look far to find a team who slowed down at the end of the year before heating back up again and succeeding. The 2009 Saints started 13-0, lost their final three games of the year, and then rolled off three consecutive victories to win the Super Bowl. For every team like the 2010 Packers, who finished the regular season with two wins over stiff competition (the Giants and Bears) before making an impressive Super Bowl run on the road, there's a team like the '09 Saints.
In a way, though, that doesn't address the argument implied by the get-hot-at-the-right-time crowd. The implication is that teams who get hot at the right time, ones who know when to peak, will be better in the playoffs than similar teams who don't peak at the appropriate time. We want to compare apples to apples. So let's do that. In looking at every playoff team from 1994-on, I split them into groups by win total. Then, based on what they did in the playoffs, I gave each a "score" for their playoff performance. The score is arbitrary; it's merely supposed to be a quick-and-dirty way of capturing playoff performance. The scale goes as follows:
• Teams who won the Super Bowl earned 10 points.
• Teams who lost in the Super Bowl earned 7 points.
• Teams who lost in the conference championship game earned 5 points.
• Teams who lost in the divisional round after winning in the wild-card game earned 2 points.
• Teams who failed to win a playoff game did not earn any points (even if they got a bye).
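For readers who want to replicate this, the scale above amounts to a simple lookup. Here's a minimal sketch; the outcome labels (`"won_sb"` and so on) are my own invented shorthand, not anything from the article's dataset:

```python
# The article's arbitrary playoff-performance scale as a lookup table.
# Outcome labels are hypothetical shorthand invented for this sketch.
PLAYOFF_SCORE = {
    "won_sb": 10,                 # won the Super Bowl
    "lost_sb": 7,                 # lost in the Super Bowl
    "lost_conf_champ": 5,         # lost in the conference championship game
    "lost_div_after_wc_win": 2,   # won in the wild-card round, lost in the divisional round
    "no_playoff_win": 0,          # failed to win a playoff game (byes included)
}

def playoff_score(outcome: str) -> int:
    """Map a playoff outcome label to its quick-and-dirty score."""
    return PLAYOFF_SCORE[outcome]
```

Note that a one-and-done divisional-round loser scores the same zero as a wild-card flameout: the scale only rewards playoff wins, not seeding.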
If we take teams with similar win totals and separate them by how they performed at the end of the year, the teams who were peaking with impressive winning stretches should fare better in the playoffs than the ones who were coasting into the postseason, right?
Well, that’s not the case. Take the 55 teams who made it into the playoffs over that time frame with 10 wins, the most common total. Eleven of those 55 teams went 3-0 over their final three regular season games, seemingly peaking for the playoffs. Once they got in, though, not a single one of those 11 squads made it to the conference championship, and they averaged just 0.7 points per postseason berth. Meanwhile, the 30 teams who went 2-1 in their final three regular season games averaged an even point each, while the 12 teams who finished 1-2 produced 2.3 playoff points each. These are small sample sizes, sure, but they offer nothing for the getting-hot-in-December crowd to hang their hats on.
The total irrelevance is even more obvious if you look at teams with 12 or more wins. There, 30 teams finished 3-0 in their final three regular season games while averaging 4.0 points. Not bad, right? Well, 35 such teams closed out their year by going 2-1 in their final three, and they averaged 4.1 points in the postseason. The 11 teams that went 1-2? Of course, they averaged 4.0 points! And the lone 0-3 team was the aforementioned 2009 Saints, who won the Super Bowl. There’s just no evidence if you take anything resembling a long-term view of the playoff picture. (Running the same data on the even smaller sample sizes created by only looking at teams from 2002-on, when the league last changed its playoff rules, doesn’t change much of anything.)
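The grouping behind those numbers is straightforward: bucket each playoff team by its win total and its record over the final three games, then average the playoff scores within each bucket. A minimal sketch of that pipeline, using made-up placeholder teams rather than the real 1994-on dataset:

```python
from collections import defaultdict
from statistics import mean

# Each entry: (regular-season wins, record over final three games, playoff score).
# These rows are invented placeholders; the article's averages come from every
# playoff team since 1994.
teams = [
    (12, "3-0", 10),
    (12, "3-0", 0),
    (12, "2-1", 5),
    (12, "1-2", 7),
    (10, "3-0", 0),
    (10, "2-1", 2),
]

# Bucket by (win total, finishing record), then average the playoff scores.
groups = defaultdict(list)
for wins, finish, score in teams:
    groups[(wins, finish)].append(score)

averages = {key: mean(scores) for key, scores in groups.items()}
```

On this toy data, the 12-win teams that went 3-0 average 5.0 points; with the real dataset, the same computation produces the near-identical 4.0/4.1/4.0 averages cited above.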
And once you get past all the numbers suggesting that the whole thing’s hogwash, well, throw the numbers out and just apply a bit of common sense. If it were really so easy for teams to raise their game at the end of the year and start playing their best football in December and January, why wouldn’t they all do it? And if they all did it, wouldn’t the level of play from team to team be exactly the same as it was during the previous months? Getting hot at the right time is an idea that makes sense when it first comes up, but the more critical thought you throw in its direction, the quicker it falls apart. The truth is that the team that executes the best and gets the luckiest during the playoffs wins. That’s a strategy, though, that teams would also do well to apply to any other subset of the season.