With a bolstered cadre of arms, bullpen management has become a hot topic in the Nation. How should Raisel Iglesias be used to limit his risk of injury? What about Michael Lorenzen? Is Tony Cingrani the closer? Why does Bryan Price continue to use Ross Ohlendorf in high-leverage circumstances?
Different managers take different approaches to handling their bullpens. A key variable is the extent to which distinct roles are assigned to distinct relievers. Dearly departed Dusty Baker was well known for his rigid bullpen roles: Arthur Rhodes as the LOOGY (Lefty One Out GuY), Jonathan Broxton in the 8th, and Aroldis Chapman in the 9th, with only occasional variation.
The argument in favor of this management style is a simple one: relief pitchers, like all workers, perform best when they know what is expected of them. Pitchers perform better if they know when (early? late?) and under what circumstances (close? blowout?) they’ll be called upon, and what will be expected of them (one out? multiple innings?). As Jonathan Papelbon summarized earlier this year, “when you’re in the bullpen and you can feel comfortable about when you’re going to pitch, it makes your life a lot easier.”
Bullpen roles are not all sunshine, however. Many analysts argue that these roles prevent a team’s best relievers from pitching in the most important moments or against the best hitters. These analysts laud managers who use their bullpen less rigidly.
Amid this variety, one factor is consistent: every manager wants a closer. Some go long stretches without finding one, due to injury or poor performance, but all managers prefer a single 9th-inning security blanket.
This is very bizarre. The appearances of closers—typically the best relievers in a bullpen—are dictated by the parameters of the arbitrary save statistic. The role of the closer is to collect saves: to get the last three outs in games where his team is ahead, but the game is not a blowout. In no other circumstance, in baseball or other sports, is strategy so consistently and transparently beholden to statistical accounting.
This seems the epitome of backwards thinking. But is it? Is there anything behind the argument that closers are more effective when fulfilling a role? Is the frailty of closer psyches—which are only human psyches, after all—such that it may actually be beneficial for managers to remain beholden to the save?
This post will make the statistical case that, yes, the closer role makes a meaningful difference in relief pitcher performance.
One fact that stands out in the attempt to make this case is that closers perform noticeably better—according to advanced metrics—when pitching in traditional save situations. Or, at least they did throughout the 2011-2015 seasons.
For purposes of this analysis, a closer is a pitcher (a) with 40 or more relief appearances who (b) makes at least 40% of these relief appearances in a traditional save situation: 9th inning, 1-3 run lead, no outs, and no runners on base.
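As a sketch, the definition above can be encoded like this (the `Appearance` record is a hypothetical structure for illustration, not Retrosheet's actual format):

```python
from dataclasses import dataclass

@dataclass
class Appearance:
    inning: int       # inning in which the reliever entered
    lead: int         # pitching team's lead at entry, in runs
    outs: int         # outs already recorded at entry
    runners_on: int   # baserunners at entry

def is_traditional_save_situation(app: Appearance) -> bool:
    """9th inning, 1-3 run lead, no outs, no runners on base."""
    return (app.inning == 9 and 1 <= app.lead <= 3
            and app.outs == 0 and app.runners_on == 0)

def is_closer(appearances: list[Appearance]) -> bool:
    """(a) 40+ relief appearances, (b) at least 40% of them
    made in a traditional save situation."""
    if len(appearances) < 40:
        return False
    traditional = sum(is_traditional_save_situation(a) for a in appearances)
    return traditional / len(appearances) >= 0.40
```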
| Closer Situation | K% | BB% | HR% |
| --- | --- | --- | --- |
| Difference | 7% better | 11% better | 6% worse |
As you can see in the table above, the average closer has a 7% better strikeout rate, an 11% better walk rate, and a 6% worse home run rate in save situations. Using an A/B test, the improvements in K% and BB% are statistically significant, while the worse HR% is not.
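One plausible form of this A/B test is a two-proportion z-test on the underlying counts; the sketch below uses made-up numbers for illustration, not the actual sample:

```python
from math import erf, sqrt

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int):
    """Two-sided z-test for whether rate A differs from rate B."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts: 50 strikeouts in 200 save-situation PAs
# vs. 30 strikeouts in 200 non-save PAs.
z, p = two_proportion_z(50, 200, 30, 200)
```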
At a glance, these numbers are consistent with the theory that closers perform better when pitching in their expected role.
However, these outcomes could also be plausibly explained by the pressure of the 9th inning. Down to their final at bat, hitters are tempted to be more aggressive and swing for the fences. This riskier approach sometimes pays off, which would explain the closers' worse HR%, but more often it does not, which would explain the closers' better K% and BB%.
We need to unpack this data a bit more to demonstrate that the role-ness of the 9th inning makes a difference to closers.
One way to isolate the impact of the closer role from the impact of 9th inning batter aggression is to separate first-time closers from veteran closers.
The narrative here is straightforward: being a closer is not an easy job, and it takes pitchers a while to become comfortable with the role.
The following tables show the performance of first-time closers and veteran closers. First-time closers are those who did not close in the previous season. Veteran closers are those who did. With 2011 used as a baseline, this data includes only 2012-2015.
| First-Time Closer Situation | K% | BB% | HR% |
| --- | --- | --- | --- |
| Difference | 4% better | 8% better | 11% worse |
| Veteran Closer Situation | K% | BB% | HR% |
| --- | --- | --- | --- |
| Difference | 7% better | 14% better | 3% better |
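The split behind these two tables (a closer counts as a veteran in season Y if he also qualified as a closer in season Y-1, which is why 2011 serves only as a baseline) can be sketched as:

```python
def split_closers(closer_seasons: dict[str, set[int]], season: int):
    """Partition the closers of a given season into first-timers and
    veterans, based on whether they also closed the previous season.
    closer_seasons maps a (hypothetical) player id to the set of
    seasons in which that player qualified as a closer."""
    first_time, veteran = [], []
    for player, seasons in closer_seasons.items():
        if season not in seasons:
            continue
        bucket = veteran if (season - 1) in seasons else first_time
        bucket.append(player)
    return first_time, veteran
```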
There are two notable differences between these two groups. First, veteran closers significantly improve their K% and BB% in save situations. First-time closers also improve their performance in these areas, but the gain is smaller and not statistically significant.
Second, veteran closers actually reduce their HR% in save situations—in contrast to first-time closers, whose HR% increases in the same situations. (Note that neither of these variations is statistically significant.)
This data supports the narrative stated above. Comfortable in their role, experienced closers raise their game in save situations. First-time closers, by contrast, show the pattern one would expect if the variation were driven solely by batter aggressiveness.
This is compelling evidence that roles can bring out the best in a closer.
It’s clear that a rigid closer role is in the best interest of the pitcher. In addition to racking up saves, consistently pitching in save situations has historically improved a reliever’s peripheral stats—a recipe for a larger contract. Even outside of the competitive drive to become the bullpen ace, it makes financial sense for relievers—such as, say, Raisel Iglesias—to angle for this role.
It is not clear that a rigid closer role is in the best interest of the team. There may be moments more important to the outcome of a game than the last three outs. It may make sense for managers to use their best reliever—presumptively, the team’s closer—in the 7th inning to escape a 2-on, no-out jam and preserve a 1-run lead. Or in the 8th inning against the heart of the opposing team’s lineup. Etc., etc.
However, in deciding to use a closer in high-leverage but non-save situations, managers should be aware that they are likely not getting the best version of their reliever.
Closers pitch in two types of non-save situations: (1) in a blowout, to get in regular work; and (2) in close games, along the lines described above. The following table excludes blowouts and compares closer performance in save situations to closer performance in high-leverage non-save situations—when the game is within three runs.
| Closer Situation | K% | BB% | HR% |
| --- | --- | --- | --- |
| Difference | 17% better | 19% better | 9% worse |
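The bucketing behind this comparison can be sketched as follows; the blowout cutoff here is my working definition (a margin of more than three runs), the complement of the "within three runs" criterion:

```python
def classify_closer_appearance(inning: int, lead: int, outs: int,
                               runners_on: int):
    """Bucket a closer appearance as 'save', 'high_leverage', or None
    (excluded as a blowout). 'lead' is the pitching team's lead in runs,
    negative if trailing; a blowout is a margin of more than three runs."""
    if abs(lead) > 3:
        return None  # blowout: excluded from the comparison
    if inning == 9 and 1 <= lead <= 3 and outs == 0 and runners_on == 0:
        return "save"
    return "high_leverage"
```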
The difference in K% and BB% is fairly astounding (note that the HR% change is not statistically significant). Considering K%-BB%, closers in save situations pitch like 2016 Madison Bumgarner, while closers in high-leverage non-save situations pitch like 2016 Jon Gray. I’d rather have Jon Gray pitch the 8th inning of a tie game than Ross Ohlendorf, but I may be tempted to roll the dice with Ohlendorf if it saves Bumgarner for the 9th—or for tomorrow night’s game.
(I’m lying. I would never roll the dice with Ohlendorf. But you get the point.)
Managers can go overboard, but it’s clear that the inclination to slot relievers into particular roles—or at least into the closer role (8th-inning guys showed no comparable boost)—does tend to make a difference in performance. Managers would be foolish to ignore this comfort level when making bullpen management decisions.
A brief comment on method: I used Retrosheet event files to isolate reliever situations and extract rate data. I found Retrosheet a very rich source of data, and am happy to share my code if others are interested.
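As a rough sketch of the extraction, Retrosheet "play" records are comma-separated lines of the form `play,inning,side,batter,count,pitches,event`. The event-code handling below is deliberately simplified—it ignores edge cases like intentional walks (scored `IW`) and no-plays (`NP`), and treats every play record as a plate appearance, which isn't strictly true:

```python
import csv
import io

def count_outcomes(event_text: str) -> dict:
    """Tally strikeouts, walks, and home runs from Retrosheet play
    records. Simplified: skips non-play records and approximates PA
    as the count of play records."""
    counts = {"K": 0, "BB": 0, "HR": 0, "PA": 0}
    for row in csv.reader(io.StringIO(event_text)):
        if not row or row[0] != "play" or len(row) < 7:
            continue
        event = row[6]
        counts["PA"] += 1
        if event.startswith("K"):
            counts["K"] += 1
        elif event.startswith("W") and not event.startswith("WP"):
            counts["BB"] += 1  # walk, but not a wild pitch
        elif event.startswith("HR"):
            counts["HR"] += 1
    return counts
```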
The information used here was obtained free of charge from and is copyrighted by Retrosheet. Interested parties may contact Retrosheet at “www.retrosheet.org.”