For the first two months of the 2014 season, oglers of sexy minor league stat lines suspected that the Pittsburgh Pirates were wasting Gregory Polanco. The outfielder had entered the year as the game’s 10th-best prospect, according to Baseball America, and initially hit like he should have been ranked even higher, slashing .347/.405/.540 in 274 Triple-A plate appearances through June 8. With Pirates right fielders Jose Tabata and Travis Snider off to unimpressive starts, it seemed as though, despite GM Neal Huntington’s denials, only service-time considerations could be keeping Polanco down.
When the Pirates finally promoted Polanco, he went 15-for-39 with a homer, a double, and three walks in his first eight big league games, reinforcing the perception that the team had dragged its feet for financial reasons, not because its budding star wasn’t ready. Over his next 81 big league games, though, Polanco hit only .210/.288/.319 in 270 plate appearances sandwiched around another one-week stay in the minors. As the Pirates made their final push for a playoff spot, manager Clint Hurdle left Polanco out of the starting lineup for 11 consecutive games.
Polanco’s slump seemed to mirror those of several other high-profile players who had torn up Triple-A but struggled to sustain that success in the majors. Cubs prospect Javier Baez struck out in 41.5 percent of his MLB plate appearances and batted .169. Astros prospect Jon Singleton whiffed at a 37 percent rate and batted .168. The late Oscar Taveras homered in his debut but batted .238/.278/.299 thereafter. The Red Sox underperformed their projections in part because of disappointing offensive seasons from Jackie Bradley Jr. and Xander Bogaerts. Those struggling rookies and others like the Rangers’ Michael Choice and the Orioles’ Jonathan Schoop had all appeared on preseason top-prospect lists, in some cases in the upper tier. With highly touted young hitters like Kris Bryant and Joey Gallo on the verge of making the majors, it’s worth considering whether we’re about to see more of the same.
Where there’s an apparent trend, there’s a trend piece (or two).1 In an article earlier this month titled “For prospects, gap between Triple A and majors is growing,” Boston Globe Red Sox beat writer Alex Speier argued that sophisticated advance scouting and defensive positioning, among other developments, have made it harder than ever for hitters to transition to the majors.2 The same day, Baseball Prospectus cofounder and Sports Illustrated contributor Joe Sheehan tackled the topic in his newsletter. Sheehan noted that Marcus Semien, who walked more than he struck out in Double-A in 2013 and whiffed only 1.2 times for every walk in Triple-A over the past two seasons, had struck out 4.2 times per walk in the majors. Sheehan saw Semien’s slow MLB start and breakdown in discipline as emblematic of a new challenge facing all hitters upon their arrival in the majors.
A year ago, we were writing about what looked like the opposite trend.
With some organizations now implementing defensive shifting in the upper minors, though, more prospects will be prepared for what they’ll face at the major league level.
“Right now, the gap between the high minors and the major leagues is huge, not merely because of talent, but because the strike zone being called in the major leagues is nothing like the one being called in the minors,” Sheehan wrote. “It’s not quite like they’re playing a different game in the majors, but it’s not entirely unlike that, either.”
Speier and Sheehan aren’t wrong. The offensive environments in the majors and high minors have diverged over the last several seasons, as scoring has declined in the majors — suppressed partly by an expanded strike zone — while remaining fairly consistent in Triple-A.3 The graph below shows the annual league-average OPS in the majors and in the two Triple-A leagues since 2000, revealing that the average MLB OPS, which stood above both the Pacific Coast League and International League marks as recently as 2006, was well below both last season. Since 2009, the average MLB OPS has fallen by 51 points, but both the PCL and IL averages were higher last year than they had been five years before.
Naturally, minor league batters’ stats are bound to decline when they graduate to a league where offense is scarcer, just as major league hitters’ stats wither when they move from Coors Field to Petco Park. The question is whether a transition to a lower-scoring league where scouting, the strike zone, and in-game tactics are different has a compounding effect that puts hitters at greater risk of collapse — whether, as Sheehan put it, “The transition from the upper minors to the majors for hitters is now doubly hard.”
In response to Speier’s piece, Jeff Sullivan pointed out that while rookie hitters’ production has indeed declined — rookie non-pitchers produced a collective .654 OPS last season, down from .713 in 2009 — much of that dip is attributable to the offensive malaise that’s afflicted all major leaguers. To determine whether rookie production has declined disproportionately, we can use Weighted Runs Created Plus (wRC+), a league- and park-adjusted metric that measures offensive production on a scale where 100 is league average and anything lower is worse. The following graph displays the annual wRC+ produced by rookie non-pitchers going back to 1977, the first year for which the minor league data I’ll rely on later in this article is fully available. The red line just below 85 marks the weighted average over the whole 38-season span.
Rookie production has consistently sat between 80 and 90 percent of league average, dipping just below 80 once and climbing just above 90 once, but snapping back both times. Last year’s rookie wRC+ was 83, its most modest level since 2004, but not historically low.4 In many years over the last four decades — even those that came before the advances in scouting sophistication, public prospect scrutiny, and strikeout rate that some have linked to certain young players’ struggles last season — rookies hit worse relative to the league than they did in 2014.5
As Sullivan observed, the average over the last three years is 85, or almost exactly the long-term average since the late 1970s. And the pattern is similar even if the sample is restricted to top-100 prospects.
One could speculate that PEDs — to which established major leaguers might have enjoyed easier access — could have contributed to a bigger gap between rookies and veterans in the 1990s and early 2000s, but that wouldn’t explain the rookies’ weak years in the decade before the Bash Brothers.
If adjusting to major league life were harder than ever for today’s rookies, we would expect to see that higher hurdle reflected in weaker-than-ever overall rookie production. Not seeing that suggests, as Sullivan concluded, that it’s probably too soon to declare a rookie crisis.
However, there are two ways in which the wRC+ approach could be obscuring a real change in the Triple-A/MLB balance. The first potential problem is that not all rookies come from the American minor leagues. Jose Abreu, who won the 2014 AL Rookie of the Year award, was one of the five best hitters in baseball last season, but he was a 27-year-old free agent from Cuba. Abreu accounted for less than 3 percent of rookie plate appearances in 2014, but he hit so well in his opportunities that he bumped up the average appreciably. Drop Abreu from the rookie sample, and the wRC+ of the remaining players falls to 81. Abreu’s success tells us a lot about Abreu and a little about baseball in Cuba, but nothing about the relationship between Triple-A and the majors.
The second potential problem is variation in the quality of each rookie class. Any given year might have an unusual number of major-league-ready rookies, which could skew the rookie wRC+ one way or another — not because of a change in the gap between Triple-A and MLB, but because of a change in the rookie class’s composition. So the best question to ask isn’t, “How well did minor leaguers hit after their promotions to the majors?” but “How well did minor leaguers hit after their promotions to the majors, relative to their Triple-A production?”
Fortunately, there’s a way we can measure the gulf between Triple-A and the majors: Major League Equivalencies, or MLEs. The concept behind MLEs (originally a Bill James joint) is simple: They’re baseball exchange rates that tell us what minor league stats translate to in the majors. We can assume that a minor league hitter’s true talent doesn’t magically jump the instant he makes the majors; he’s essentially the same guy, with the same skills, in a different environment. Therefore, the difference between his Triple-A and MLB stat lines should roughly reflect the difference in difficulty between the leagues. For example, if in the same season a hitter has a 25 percent strikeout rate in Triple-A and a 30 percent strikeout rate in the majors, we can estimate that the level change inflated his strikeout rate by a factor of 1.2 (30 divided by 25). With any one hitter, the samples are too small to produce any conclusions, but if we run the same comparison for every hitter who changes leagues in a year, we get a good sense of the relationship between levels.
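The pooling step described above can be sketched in a few lines of Python. This is a toy illustration of the basic MLE arithmetic, not Szymborski’s actual method, and the player numbers are hypothetical:

```python
# Estimate a strikeout-rate "exchange rate" from hitters who logged time
# at both Triple-A and the majors in the same season. Pooling every
# hitter's plate appearances lets bigger samples count for more.

def k_rate_translation_factor(paired_lines):
    """paired_lines: list of (aaa_pa, aaa_k, mlb_pa, mlb_k) tuples,
    one per hitter who played at both levels."""
    aaa_pa = sum(p[0] for p in paired_lines)
    aaa_k = sum(p[1] for p in paired_lines)
    mlb_pa = sum(p[2] for p in paired_lines)
    mlb_k = sum(p[3] for p in paired_lines)
    aaa_rate = aaa_k / aaa_pa
    mlb_rate = mlb_k / mlb_pa
    return mlb_rate / aaa_rate  # >1 means the majors inflate strikeouts

# The single-hitter example from the text: a 25 percent K rate in
# Triple-A and a 30 percent K rate in the majors, 400 PA at each level.
print(k_rate_translation_factor([(400, 100, 400, 120)]))  # 1.2
```

In practice the sample would include every qualifying league changer in a season, which is what smooths out the noise that makes any one hitter’s split uninformative.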
In practice, the process is a little more complicated, requiring additional adjustments that, for most of us, move MLEs out of the realm of “I could do that” and into the realm of “Maybe I’ll ask an expert.” As we’ve already covered, each league has its own average run-scoring rate, a distinctive surface gravity that exerts its pull on every player’s stats. Last year, the average OPS in the Pacific Coast League (the higher-scoring of the two Triple-A leagues) was .771, while the average OPS in the majors was .700. Just by going from the PCL to MLB, then, a hitter’s stats were likely to suffer a superficial drop, like an astronaut who weighs less on the moon than on Earth despite having the same mass. Of course, there’s also a real, talent-related drop that comes from facing superior competition: Even if the PCL and MLB had essentially the same league-average lines (as they have in the past), a hitter moving from the lower league to the higher one would hit worse in the majors. That’s the penalty we want to measure, the variable that can tell us whether the majors have truly gotten tougher relative to Triple-A.
Though MLEs are primarily translations of past performance, they’re also one of the building blocks of the ZiPS player projection system, designed by ESPN’s Dan Szymborski. He first “neutralizes” every stat line, taking the statistical fingerprint of each league and ballpark and then removing those fingerprints from individual players’ performance. Once a player’s Triple-A and MLB stats are neutralized, his stats at each level can be compared in a kind of computational clean room, free of statistical contaminants. For this article, Szymborski dug through his minor league data from 1977 to 2014 and ran the translations for every hitter (rookie or otherwise) who had at least 200 plate appearances in both Triple-A and MLB in the same season or in consecutive seasons.6 The result is a single number, derived from the performance of actual players who moved between leagues, that captures the inflationary or deflationary effect of a big league promotion on any given stat.
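A stripped-down version of that “neutralize, then compare” step might look like the following. The league-average OPS figures are the 2014 numbers cited above; the player line and function names are hypothetical, and this ignores the park adjustments and other refinements a real translation system applies:

```python
# Toy neutralization: express a player's OPS as a multiple of his
# league's average, so the two levels' run environments drop out of
# the comparison and only the competition-quality penalty remains.

PCL_AVG_OPS = 0.771   # Pacific Coast League, 2014
MLB_AVG_OPS = 0.700   # majors, 2014

def neutralized(ops, league_avg_ops):
    return ops / league_avg_ops

def talent_penalty(aaa_ops, mlb_ops):
    # Ratio of environment-neutral production: how much of a hitter's
    # relative output survived the jump in competition quality.
    return neutralized(mlb_ops, MLB_AVG_OPS) / neutralized(aaa_ops, PCL_AVG_OPS)

# A hitter with an .850 OPS in the PCL and a .700 OPS in the majors lost
# 150 points of raw OPS, but only about 9 percent of his neutral value.
print(round(talent_penalty(0.850, 0.700), 3))  # 0.907
```

The point of the clean-room comparison is exactly this separation: the raw 150-point drop mixes the environment effect with the talent effect, while the neutralized ratio isolates the latter.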
Separating promotions from demotions doesn’t make a difference, so he studies both. When comparing stats from consecutive seasons, he also applies an aging adjustment to account for the passage of time.
Here are the players who most exceeded their MLEs last season and the players who fell the furthest short, according to Szymborski’s method:
| Name | 2014 MLB PA | 2014 MLB OPS | 2014 MLE OPS | OPS Difference |
|------|-------------|--------------|--------------|----------------|
| Scott Van Slyke | 246 | .910 | .740 | .170 |
The graph below shows the estimated year-by-year impact of a big league promotion on a Triple-A hitter’s Runs Created (Per 27 Outs), an offensive rate stat. The numbers on the Y-axis represent Triple-A league strength as a percentage of MLB league strength; the higher the number, the smaller the divide between the two levels. I’ve included a two-year moving average (the black line) to smooth out the sharp peaks and valleys produced by the relatively small single-season samples of league changers.
Over the period covered by Szymborski’s data, the average Triple-A-to-MLB adjustment for RC/27 has been 0.80. In other words, after adjusting for the offensive environments in each league, about 80 percent of a Triple-A hitter’s production survives his promotion to the majors in a typical year. Last season, the exchange rate was 78 percent, lower than average but higher than the 75 percent figure from 2013. The combined rate over the past two years, 76 percent, is on the low end of the fairly narrow historical range.
We can assess the impact on component stats the same way. This graph gives us the translation factors for hitters’ strikeout, walk, and home run rates.
This graph shows something similar: When the going gets harder, strikeouts become more common, while walks and home runs grow rare. The 2014 strikeout factor (1.26) was higher than the historical average (1.20), but the 2014 walk factor (0.80) and home run factor (0.77) were lower than their historical averages (0.83 and 0.84, respectively). In other words, the skill disparity between Triple-A and the majors is currently larger than normal — but not to a dramatic degree.
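To make the factors concrete, here is what the 2014 numbers above imply for a hypothetical Triple-A hitter’s environment-neutral rates. The input line is invented for illustration; only the factors come from the article:

```python
# Apply the article's 2014 Triple-A-to-MLB component factors to a
# hypothetical, already-neutralized Triple-A stat line (per-PA rates).

FACTORS_2014 = {"k_rate": 1.26, "bb_rate": 0.80, "hr_rate": 0.77}

def translate(aaa_line):
    """aaa_line: dict of per-PA rates, neutralized for park and league."""
    return {stat: round(rate * FACTORS_2014[stat], 3)
            for stat, rate in aaa_line.items()}

# Hypothetical hitter: 20% K, 10% BB, 3.5% HR per plate appearance.
print(translate({"k_rate": 0.20, "bb_rate": 0.10, "hr_rate": 0.035}))
# → {'k_rate': 0.252, 'bb_rate': 0.08, 'hr_rate': 0.027}
```

Even a hitter with solid Triple-A peripherals sees his strikeouts climb and his walks and homers shrink, which is the shape of the penalty the graphs describe.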
With the effect of run environment removed, we can also tell that promoted pitchers (minimum 50 IP at both levels) don’t have it any easier.
The ERA factor is also above the historical norm. Although that’s counterintuitive given the ascendance of pitching and defense over the past several seasons, it does make sense. Pitchers who’ve spent time in Triple-A might luxuriate in the less restrictive strike zone they encounter in the majors, but all of the pitchers they compete against enjoy the same extra real estate, and those with more major league experience have already discovered how to make the most of it. Theoretically, any great discrepancy across levels in the way strikes are called should make the transition between them more difficult for both batters and pitchers, since it leads to a steeper learning curve. And sure enough, the recent strikeout and walk factors reveal a larger adjustment.
Szymborski also ran the translations from Double-A to Triple-A based on the performance of players who played at both of those levels to see whether Triple-A has grown any weaker relative to the next-highest level.
Historically, Double-A has been about 90 percent as challenging as Triple-A, which would make it roughly 72 percent of major league quality. Judging by last season’s Double-A/Triple-A time-splitters, 2014 was perfectly average in that respect.
So what have we learned? The competition gap between Triple-A and the majors does appear to be bigger than it was several years ago. In fact, it’s about as big as it’s ever been. But it’s not bigger than it’s ever been: 10 years ago, 15 years ago, or even 35 years ago, it was just as difficult for players to slap the last actuator on their summit of baseball’s Aggro Crag.
In baseball analysis, there are almost always possibilities that we can’t rule out completely, so just as I added a couple of caveats to Sullivan’s research, I’ll attach two to mine. The first is that there could be some selection bias at work in these stats: Because Szymborski applied playing-time minimums at each level, this method might be overlooking a greater tendency for players to slump after a promotion and fall out of the sample by failing to reach 200 plate appearances or 50 innings pitched. However, if that were the case, we would expect to see the number of league changers who qualify for those playing-time minimums decline, which hasn’t happened. Neither (as Sullivan observed) has there been any drop in the percentage of major league non-pitcher plate appearances made by rookies, which has barely budged from 13 percent over the last decade.
It’s also possible that only a player’s first exposure to the majors is really disruptive; maybe the initial shock of playing under different conditions — that first, jarring dip into an unheated pool — wears off after one trip on the Triple-A shuttle. However, first-time major leaguers make up a significant portion of the league-switcher sample, so if that “first promotion” penalty had grown more severe, we would expect to see that reflected in the overall translation factor.
Although the trajectory is worth watching, we’re left without any proof that the leap from Triple-A to the majors is tougher than it has been at various points in the not-so-distant past. So why does it seem as if young hitters have it so hard? Part of it is the recency effect: The transition has gotten harder in the last few years, so the problem seems more serious if we don’t take a longer-term look. Part of it is selective memory. While some young hitters with great expectations tanked in 2014, others, like George Springer and Mookie Betts, hit the ground running, and many less highly touted rookies — such as Danny Santana (a BABIP beneficiary), Kevin Kiermaier, C.J. Cron, Joe Panik, Travis d’Arnaud, and Brock Holt — made major contributions. Lastly, we’re probably suffering from a failure to adjust our expectations for the new offensive context. Just as we need to remember that today’s MLB league leaders aren’t posting the stat lines they did a decade ago, we need to temper our expectations for Triple-A hitters who are entering a lower-offense league.
While there are legitimate reasons to bemoan the game’s suppressed scoring and look forward to a revised strike zone, the current width of the minors-to-majors gap doesn’t appear to be sufficient cause for a greater fear of relying on rookies.