Thursday, November 1, 2018


Willie McCovey's passing reminds us of those paradoxical times when people thought it was amazing when a ballclub had four players who could hit 20+ HRs a year. (Examining that phenomenon as it existed in the four decades in which McCovey played is not part of this post, but it'd make an interesting follow-up...we'll try to remember to do that.)

What we're here to do today is a bit different. We want to examine McCovey's peak, which was a bit later in manifesting itself than what is usually characterized by sabermetric theory. (Truth told, peaks built around single seasons, such as the age 27 shibboleth, are ultimately not very useful--either for predictions or evaluations. We need more years, and we need to see peaks in the context of some number of years.)

And so, here, we're dusting off one of our favored "flavors of peak"--the six-year version. We require that a player collect at least 2500 plate appearances over those six years to be eligible to appear on a peak list--which we resolutely (read: stubbornly) sort by the old-school, "highly flawed" OPS+.
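For the curious, here's a minimal sketch of how such a peak list gets assembled for a single player--season lines are (age, PA, OPS+) tuples, and the sample data below is illustrative, not McCovey's actual record:

```python
# Best six-year peak by PA-weighted OPS+, requiring 2500+ PA in the window.
# Season lines are (age, PA, OPS+) tuples; the data here is illustrative.

def best_six_year_peak(seasons, min_pa=2500, window=6):
    """Return (start_age, weighted OPS+) for the player's best six-year span."""
    seasons = sorted(seasons)                      # order by age
    best = None
    for i in range(len(seasons) - window + 1):
        span = seasons[i:i + window]
        pa = sum(s[1] for s in span)
        if pa < min_pa:
            continue                               # not enough playing time
        ops_plus = sum(s[1] * s[2] for s in span) / pa   # PA-weighted average
        if best is None or ops_plus > best[1]:
            best = (span[0][0], ops_plus)
    return best

# Illustrative season lines only -- not an actual player's record.
player = [(27, 620, 150), (28, 600, 165), (29, 540, 174),
          (30, 623, 209), (31, 567, 188), (32, 495, 170)]
print(best_six_year_peak(player))
```

Run the same routine over every player in history and sort the results, and you have the leaderboards we'll be referring to below.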

OPS+ is flawed, but what isn't? "Better" measures quickly become far too wonky and don't provide significant advances in understanding offensive ability--which is 90% responsible for the selection criteria involved in Hall of Fame arguments. (The other 10% is what people spend countless hours chasing their tails about.)

So, a useful basic approach to evaluating a hitter's historical achievement can still be found in OPS+, which is league- and park-adjusted--the basic, necessary adjustments. And, as noted, six-year increments give us a benchmark for how well a player can sustain a "peak" performance. When measured against all the hitters in baseball history, it's both illuminating and meaningful.
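For those who want the nuts and bolts, the commonly cited form of the calculation looks like this (the league OBP/SLG figures should already reflect the player's park factor; the sample numbers below are illustrative):

```python
def ops_plus(obp, slg, lg_obp, lg_slg):
    """OPS+ in its commonly cited form: 100 * (OBP/lgOBP + SLG/lgSLG - 1).
    lg_obp and lg_slg should already be park-adjusted."""
    return round(100 * (obp / lg_obp + slg / lg_slg - 1))

# e.g. a .400 OBP / .550 SLG hitter in a .330 OBP / .420 SLG league context
print(ops_plus(0.400, 0.550, 0.330, 0.420))
```

A league-average hitter comes out at exactly 100, which is what makes the stat easy to read across eras.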

When we do this for McCovey, we see he has a great "stretch" (pun definitely intended...) from age 27-36 clustered in five six-year measures (ages 27-32, 28-33, 29-34, 30-35 and 31-36). In these five age ranges, McCovey ranks in the all-time top ten for OPS+ for six-year averages.

Most of this is driven by the three great seasons he had in a row from 1968-70, where his aggregate OPS+ tops out at 188. But the seasons surrounding these, particularly the years from 1965-67, weren't exactly chopped liver: McCovey's OPS+ for those years was 159. The backside (1971-74) was lower, but not that much lower (148 OPS+). All of that, and the forces of age as they impose themselves upon hitters, explains McCovey's continuing lofty position in the six-year rankings all the way out into the age 31-36 range.

And regarding the Hall of Fame, it's instructive to look at the leaderboards for six-year OPS+ to get a sense of how many hitters with lofty rankings over these "half a HoF" snapshots wind up in the Hall of Fame. We won't do anything systematic with that here--but let's look at the age 27-32 range with respect to who's in/out of the HoF.

The guys in the Hall are the ones in white type. As you can see, 21 of the Top 30 guys on this list have been inducted. (Several others ought to be inducted eventually--Barry Bonds, Miguel Cabrera, Manny Ramirez, Albert Pujols. And we can always hope that the Vets Committee--or whatever they're calling that overdetermined clump of misdirected ersatz lobbyists these days--will see fit to put in Dick Allen before he dies. That would bring us up to 26 out of 30.)

Rights, wrongs, and rants notwithstanding, we can see McCovey here at #10, where he's in fine company. RIP, Willie...

Tuesday, October 30, 2018


Following up on our discussion of what happened to the Astros in their ALCS matchup with the 2018 World Champion Boston Red Sox, we thought it might be worthwhile to see if playoff teams--and World Series winners in particular--showed a pattern of hitting well against relief pitchers. (You should know that the Dodgers, the first team to lose two consecutive World Series since the 2010-11 Rangers, had their bullpen shredded by both of the teams that beat them--the Astros in 2017, and the Red Sox this year...the Dodger relievers posted a 5.48 ERA in the just-completed Fall Classic.)

Thinking more globally, we went back to the data at Forman et fils to look at the overall performance vs. relievers over the past decade. (That gives us nine years of data to work with--a total of 270 data points--just shy of what's needed to win twelve slightly used Dodger Blue™ cupcakes.) Do teams that make the playoffs hit better than average against relievers? And do teams that win the World Series exceed the average of "garden variety" post-season teams?

The answer to both these questions is "yes." The table at right breaks it all out for you. Teams that made the post-season have their OPS+ vs. relievers displayed in bold type. Teams with a 120+ "sOPS+" (Sean's acronym, not ours!) are shown in scalding orange; we've also color-coded teams with 110-119 (pale orange), 100-109 (yellow), and--on the opposite side of the spectrum--teams whose "sOPS+" is less than 85 (pale, pale blue).

At the bottom of the chart you have some averages--these are yearly "sOPS+" averages for the playoff teams. As you can see, the figures are uniformly above league average: for the nine years in question, the average "sOPS+" is 107.

Finally, note the cells with the double-thick lines around them. These are the World Series winners. And, yes, the World Series winners (as seen in the double-thick-lined box at the very bottom right of the table) are better yet on average than their post-season also-rans. Over the past nine years, World Series winners have an aggregate "sOPS+" of 112 vs. relievers.

Now, doing well in this statistic doesn't guarantee you a trip to the post-season; after all, it's only one component of team performance. There are many examples of teams doing well in this statistic who didn't make it to the playoffs at all. But if you do make it, having an offense that is able to do damage against the opposition's bullpen seems to give you a measurable advantage with respect to winning the World Series.

(And to complete another historical tidbit that was given a teaser above: teams that lost consecutive appearances in the World Series include not only the 2017-18 Dodgers and 2010-11 Rangers, but the 1991-92 Braves, the 1977-78 Dodgers, the 1963-64 Yankees, the 1952-53 Dodgers, the 1936-37 Giants, the 1923-24 Giants, the 1921-22 Yankees, and two teams--the 1911-13 Giants and the 1907-09 Detroit Tigers--who are the only teams to lose three World Series in a row. By doing it this year and last, however, the Dodgers have joined the Giants as the only teams to have three instances of "two-time loser" syndrome in the World Series. To match their Bay Area rivals, they'll need to make it back to the World Series next year--and hit the skids again...)

Friday, October 19, 2018


...of your opinion about the call that clearly affected the outcome of Game 4 in the ALCS between the Red Sox and the Astros, there is one incontrovertible (and ironic) fact.

The Astros' bullpen, which had posted only one sub-par month (July, 5.19 ERA) during a season of exemplary achievement, picked a most unfortunate time to regress, giving serious ground in three games during the ALCS. Their overall ERA for the series (5.79) actually understates how poorly they performed.

The Red Sox bullpen, considered suspect by many, managed to bend but not break during the series--and that made all the difference.

Peeking out from the stats is the fact that both pitching staffs were having trouble with their control. Red Sox pitchers averaged 5.09 BB/9 during the series, which looks a lot more like 1949 than 2018. The Astros were better (four walks per 9 IP), but this is still well above the regular season MLB average.

Thursday, October 18, 2018


So, OK, this is not really a "post-season" snapshot. There are no stats on post-season bullpen performance in this post.

What we do have, however, is a meditation on the changing perspective on the bullpen and its strategic importance for success that reverts back to more straightforward stats in order to capture those changes.

No one needs to be reminded that relief pitching is undergoing a transformation--the Tampa Bay Rays have made sure of that. We can expect more relief innings over the next couple of years as other teams attempt to emulate their "opener/delayed-starter-in-relief" strategy that was seemingly such a success.

But the value modeling that sabermetrics has imposed upon the game doesn't see it that way. Those numbers suggest that relievers did less to help their teams win games in 2018 than was the case in previous seasons. Those modeling stats presume a different reality than what people see when they watch an individual game. And they tell us, year in and year out, that relief pitching has a net negative value in the overall model. This season, relief pitching had its most negative overall value, as measured by Wins Above Average, in nearly half a century.

Is this hard to believe? Not for some. We find it hard to imagine, however, that teams whose bullpens post similar ERAs over a season can have significantly different WAR values. Of course, ERA has been "proven" problematic at the individual pitcher level by the recent attempts to use batter vs. pitcher (BvP) stats as the go-to measure; but individual pitchers are not the only measurement aspect that we need to define and evaluate.

In fact, with the increase in reliever innings, it actually becomes more important to develop better aggregate measures for overall team performance in this area. And part of that effort should be to more tightly relate it to actual wins and losses.

And that's what we can at least start to do by putting these various measures together in scatter-chart relationships. The ERA+/WAA scatter correlation shows a lot of discrepancy in the -2.5 WAA range, with ERA+ values all over the chart. Some high-achieving WAA teams actually have sub-par ERA+ values.

How, then, is this any real advance over an ERA+/WPCT scatter correlation? There are always teams that "beat" their WPCT projections based on their ERA+ (typically because such teams give up extra runs in games already lost), but those are the exceptions rather than the rule.

The solid correlation in the 90-110 ERA+ region with WPCT demonstrates that there's a general set of principles that remain in operation in the mid-range of the distribution, but that it frays a bit at the extremes. That's a more natural set of relationships based on real-life game situations. It suggests that there's more meaning in reliever WPCT than has been claimed for the past twenty years.

And the game is changing in ways that will likely reinforce this. When we look at the final month of the 2018 season, we see several interesting aspects of how this is manifesting itself.

In September 2018, we can see that certain teams experienced "make-or-break" months with respect to the post-season in terms of bullpen performance. The table at left sorts relief pitching in descending order of ERA.

Looking at it, we can see how one team (the Brewers) clearly rode their bullpen performance into the post-season.

And we can see how two teams (the Cardinals and Diamondbacks) wound up falling short of playoff appearances due to the poor performance of their relievers in the final month. (The third team coded in green, the Mariners, were caught and passed earlier in the year by the A's, who rode their bullpen into the playoffs.)

The color coding here is of some interest. Seven of the teams with the best performance from their bullpens (seven of the twelve with better-than-average ERAs) wound up in the post-season. Three of the top four teams in September bullpen performance are still competing in the post-season at this time (Brewers, Dodgers, Astros--only the Red Sox had a subpar performance from their relievers in September, and they managed to keep a lid on things in enough of their appearances to generate more relief wins than losses).

Relief pitching is not broken out sufficiently in the otherwise overly-parsed situational data for us to know why the Braves could go 9-3 with a 5.05 ERA, but we can make an educated guess: their mop-up relievers in already lost games gave up a lot of runs. A team like the Indians (who suffered a virtually complete reversal in bullpen performance in 2018 after a fine season the year before) managed to blow leads at crucial moments, saddling themselves with losses, but they did not pitch poorly in already lost games.

What we can tell you is that the playoff teams in 2018 posted an aggregate 65-34 record in September games where decisions were picked up by relief pitchers. We'll let you decide if you think that is as meaningless as many still seem to think is the case.

Sunday, October 7, 2018


Ah, the September song. Is it a preview of "coming attractions" as regards offense?

Joe P., who felt the urge a couple months back to double down on the "take and rake" offense, would doubtless point to the fact that run scoring levels are still relatively robust (4.45 per game in 2018, and 4.44 in September) and dismiss the dip in batting average (BA) for the month (.243) as being meaningless. (After all, batting average is meaningless, n'est-ce pas? So long as isolated power (ISO) can remain at all-time highs, offense can remain "robust enough.")

But such is not going to be the case if one other factor continues to follow its trend line. The rise in strikeouts--more specifically, the rise in the percentage of two-strike plate appearances that end in a strikeout--will at some point have a cratering effect on batting average, which will domino into on-base percentage (OBP) and slugging average (SLG).

Our chart at right shows the K-to-2-strike percentage as it's evolved from 1988 to 2018 (these are the only years where the play-by-play data is detailed enough to capture this info). As you can see, this percentage is slowly but inexorably on the rise and has risen by 30% over thirty years--with the fastest rate of gain occurring in the past decade after a long lull due to the "baked-in" effects of the offensive explosion.

Additionally, BA and tOPS+ values for two-strike situations have been decaying over this time frame. (The tOPS+ stat measures the OPS value of the two-strike PA against the overall OPS. This was close to fifty percent in 1988 and remained relatively constant throughout the explosion until 2009; now, however, that figure has moved down into the low forties.) BA is declining along with it, with two-strike PAs reaching a new low this year (.173).
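For reference, a split-relative stat like tOPS+ can be computed like so (this mirrors the commonly cited formula; the split and overall lines below are illustrative, not actual 1988 or 2018 league figures):

```python
def t_ops_plus(split_obp, split_slg, total_obp, total_slg):
    """OPS+ of a split (e.g. two-strike PAs) measured against the
    hitter's/league's own overall line; 100 = no split effect."""
    return round(100 * (split_obp / total_obp + split_slg / total_slg - 1))

# Illustrative: a .240 OBP / .280 SLG two-strike split
# against a .320 OBP / .410 SLG overall line
print(t_ops_plus(0.240, 0.280, 0.320, 0.410))
```

Values in the low forties, then, mean the two-strike offensive environment is running at well under half of overall production.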

The direction of the trend lines means that the so-called "smart adjustment" that Joe P. touts (admirably defending the tenets of an increasingly senile sabermetrics) is pushing everything into perilous territory. Not only is the game becoming more two-dimensional offensively, but it's skating into a risky region where additional pitcher adjustments will bring BA down to levels seen in two months during 2018 (September: .243, June: .245) for the entire season.

Note that HR totals did recede this year (from the absurd 1.27 in 2017 back down to 1.15). But keep in mind that such levels have to at least be sustained in order to keep offense "robust enough." Pitching adjustments, in two forms--experiments with in-game pitcher usage, and analyses to counteract the "launch angle" phenomenon that was partially responsible for the HR spike--are beginning to make themselves felt. It's unlikely that hitters are going to adjust to such alterations by pitchers in a short period of time--leaving it highly likely that home runs will drop and strikeouts will continue to rise...

...Which will result in batting averages that look disturbingly similar to what we saw in the mid-to-late 1960s.

How far can HRs drop? That's harder to predict until we see more evidence of pitching staffs improving. In 2018, we had an unusually high number of really bad teams and really good teams. Several of the really bad teams barely improved their HR allowed rates, while the really good teams showed a higher rate of improvement. When such improvement becomes more uniform, it will begin to affect teams that managed to improve their HRs hit in 2018, and a more pronounced decline will set in.

As you can see in the final comparison chart for R, HR, and BB (each month in 2018 is compared with the R/G, HR/G and BB/G from its corresponding month in 2017), the decline here was consistent and relatively uniform. (BB/G has a frequent pattern of being higher in April and September, due to weather and/or roster expansion.) June was the biggest outlier, but that's because the HR rate was simply insane in June 2017 and it drove R/G up toward "offensive explosion" levels.

A uniform year, such as was the case in 2018, is often followed by a more jagged change in the following years.

Next year we'll run these numbers for two parallel years, showing the months of 2019 against their analogous months for 2018 and 2017. And we'll be back a bit later this month with a look at big swings in BA, OBP and SLG at the league level over the history of baseball. Stay tuned...

Friday, August 31, 2018


Too much planning and writing for various manifestations of French film noir--and, frankly, the more mainstream versions of what we like to do here at BBB are kinda sorta covered in the sedimented river sludge of content at, so we await the conclusion of our book on French noir and our two upcoming festivals in Los Angeles and San Francisco to actually make a reasonable effort to cover baseball in something resembling our old, er, "panache."

But if you happen to find yourself in LA or SF on the dates specified in our accompanying illustrations, do come see us--we might just talk baseball for (kindly pardon the pun...) a "change of pace."

Meanwhile, in the land of homeostatic homeritis, the August numbers are within moments of being final and official. Warmer weather propped batting average up a bit (.253), and HR/G rose slightly as compared with July (1.19 vs. 1.15), but a decline in walks (2.99/game as opposed to 3.26 in July) contributed to a slight decline in run scoring (4.46 per team per game as opposed to 4.7 for July).

When we say homeostatic, we mean it: the HR/G ratios for this year might be as consistent across the months as any we've seen: 1.09 in April, 1.17 in May, 1.16 in June, 1.15 in July, 1.19 in August.

The last year the STDEV for HR/G by month was this low was in 2010. Such lack of volatility is actually pretty rare. The least volatile year for HR/G fluctuation was, of all years, the strike season in 1981.
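The month-to-month spread can be checked directly from the figures above (population standard deviation over the five completed months):

```python
from statistics import mean, pstdev

# HR per game, April through August 2018 (from the monthly figures above)
hr_per_game = [1.09, 1.17, 1.16, 1.15, 1.19]

print(round(mean(hr_per_game), 3))    # 1.152
print(round(pstdev(hr_per_game), 3))  # 0.034
```

A standard deviation of about three hundredths of a homer per game is about as flat as monthly HR rates get.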

Here is the vital sign comp chart updated through August 30 [at right], measured in percentage change from the 2018 month in question to its 2017 counterpart.

In case you're wondering, September has tended to produce HR/G at a 5% lower rate than the overall yearly average over the 2000-2017 time frame. That would suggest that the September HR/G rate will clock in at around 1.09.

Thursday, August 2, 2018


So July is in the books, and offense went up--but not because HRs returned to 2017 levels. No, it came about due to a rise in BA and OBP, with HR levels staying steady. The summary is provided at right.

Of course, HRs are still near historical highs (leaving the 2017 spike unto itself), so this "new normal" has to be taken with the same box of salt needed when we contemplate the continuing specter of the Orange Menace, but that's another story (and comparison) for another time.

More interestingly, we spent what little spare time we have right now investigating the effects of temperature on 2018 offense. Our approach is more statistically inclusive than elsewhere, of course (we continue to consider "shape" in how offense is created), and as a result there are some nuances here that bear further examination via additional breakouts in prior years.

As you might expect from what you've read elsewhere on the topic, there is generally a linear relationship between run scoring and temperature at game time. There are nuances, however--and these seem to have been overlooked. First, here's the full data set:

Note that we break up the categories into five-degree intervals (a different approach than everyone else). And when we do that we get something a bit different than a strictly linear result vis-a-vis run scoring and temperature.
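The binning itself can be sketched roughly like this--the game records below (game-time temperature, total runs) are made-up examples, not the 2018 data:

```python
# Bucket games into five-degree temperature bins and average the runs
# scored per game in each bin; the game records here are hypothetical.

from collections import defaultdict

def five_degree_bins(games, low=60, high=85):
    """Group (temp, runs) records into '<60', '60-64', ..., '80-84', '85+'
    bins and return the average runs per game for each bin."""
    buckets = defaultdict(list)
    for temp, runs in games:
        if temp < low:
            key = "<60"
        elif temp >= high:
            key = "85+"
        else:
            lo = (temp // 5) * 5                 # floor to the bin's low edge
            key = f"{lo}-{lo + 4}"
        buckets[key].append(runs)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

games = [(55, 10), (62, 8), (67, 7), (72, 9), (78, 8), (83, 9), (91, 11)]
print(five_degree_bins(games))
```

The open-ended bins at each extreme (below 60, 85 and up) are what let the cold-weather and hot-weather effects discussed below stand out.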

Run scoring (at least in 2018) is actually higher in the coldest games (59 degrees or lower) than is the case for all games in the various temperature ranges between 60-84. It's only when we get to 85 degrees or higher that the game reverts to the scoring levels that prevailed during the long offensive explosion (1993-2008).

Note that HR/G is uniform at 85+ degrees, with an intermediate plateau in the low 80s, followed by a virtually uniform set of results from 65-79. This is the so-called "smart game" referenced by Joe P., where homers prop up an offense that would otherwise be remarkably similar to what we saw in the 1963-72 era. (We'll get around to formally debunking Joe's characterization in subsequent posts.)

Note the odd fluctuations in BB/G--but pay especial attention to the extremes. Hot weather produces highest BA, ISO, and HR/G--it also produces more walks, which are likely due to pitchers being extremely careful in the increased number of men-on-base situations they face in these games. Cold weather suppresses (to an extent, at least) HR/G, but something about the weather conditions has an effect on BB/G, which shoots up to levels that look more like the late 40s "walk spike" phenomenon.

As a result, OBP is higher in these games than at any time other than the games played in 85+ degree conditions. And this counterintuitive shape combination produces more R/G than the so-called "intelligent" process of what's been called "take and rake."

It will be interesting to generate this exact breakout for 2017, for 2000 (height of the offensive explosion), 1992 (last year prior to the explosion), 1987 (first mega-HR year), and some other selected years in the past. We need to see if the "evolving" strategy of "take and rake" has caused changes in the relationship between run scoring and temperature.

From the data above, however, we can draw one tentative conclusion. Someone in baseball needs to figure out how to buck the temperature trend: an enterprising team (let's say the Rays...) should investigate all of the ways in which they can counteract the effects of hot weather. Is it a humidor, or is it something more strategically comprehensive in terms of how pitchers approach these games? And might this be somehow related to the Rays' recent antithetical deployment of pitchers in a game? Perhaps a still-TBD combination of all the above?

While you contemplate what those adjustments might be, we'll look for some more spare time to run the additional breakouts...stay tuned.

Monday, July 23, 2018


Ortiz: "You're batting leadoff? I thought you were the batboy!!"
Mookie Betts is having a great season, and that's not just great news for Red Sox fans. Mookie is not a "big man"--he's only 5'9", 180lbs. It has become extremely unusual in recent times for players of such (relatively) diminutive stature to have such a dominant offensive profile. So far, however, Mookie is flying high with a 193 OPS+ and is very much in the MVP discussion despite missing the better part of three weeks due to injury.

It turns out that Mookie is one of nine hitters 5'10" or shorter who are having robust offensive years (defined as a 120+ adjusted OPS) in 2018.
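The screen we ran is simple enough to sketch--heights are in inches (5'10" = 70), and apart from Betts's 193 OPS+ quoted above, the names and numbers below are illustrative stand-ins, not the full 2018 leaderboard:

```python
# Filter a season's hitters to "smallfry" (5'10"/70 inches or shorter)
# with an adjusted OPS of 120 or better; sample data is illustrative.

def smallfry(hitters, max_height_in=70, min_ops_plus=120):
    """Return the sorted names of short hitters having big offensive years."""
    return sorted(name for name, height, ops_plus in hitters
                  if height <= max_height_in and ops_plus >= min_ops_plus)

hitters = [("Betts", 69, 193), ("Altuve", 66, 135),
           ("Judge", 79, 151), ("Segura", 70, 122)]
print(smallfry(hitters))   # ['Altuve', 'Betts', 'Segura']
```

Run the same filter against each season back through 2000 and you get the year-by-year counts shown in our table below.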

Now, nine may not sound like a lot--and it's not. But it's a helluva lot better than the numbers we see for "smallfry" in the recent past. Our table below shows that in recent years, hard-hitting little guys were a seriously endangered species.

Clearly two forces have been at work historically to chip away at the short-statured player. First, the general trend that people are getting taller. Second, the pervasive typecasting of small players as bereft of power, and a selection bias that favored middle infielders who had little or no chance of developing it.

What may be pushing things the other way is the perception that all players should possess a higher degree of power (measurable in ISO), opening up the search for short players capable of hitting for power. Of course, OPS+ is not driven only by ISO or SLG--but having that in addition to a solid OBP supported by a higher-than-average BA might just be the combination that has produced a sudden "bumper crop" of good-hitting smallfry.

A decade ago, neo-sabes had predicted extinction for such players--and the numbers you see in the 2000-09 time frame would certainly have made such a prediction seem plausible. But we see here at least the possibility of a counter-trend. Remember, these are hitters whose overall offensive profile (BA, OBP, and SLG) is lifting them to prominence--most of them are not hulking low-average power hitters relying on ISO to boost their SLG. There's a chance that some of these smallfry (read: big little men) will be Hall of Famers one day--to which we say, "Hallelujah!"

Some think Mookie is the second coming of Willie Mays. Time will tell if he really has enough power to make that comparison more plausible, but size-wise, personality-wise, speed-and-defense-wise he's the same breath of fresh air that we got when the Say Hey Kid was in the prime of his golden youth. We dug out our YEPS (Year-End Projection System) spreadsheet to see what it projected for Mookie at season's end: as one might expect, it projects a dropoff over the next two months, but the overall projection is still for a season with an OPS+ in the 170 range.

That is a good first step toward being a Mays-like player.

Here are the nine "smallfry" posting 120+ OPS+ seasons thus far in 2018--if you're a "smallfry" yourself, light a candle in the window for these guys. It would be great if all nine stayed in the 120+ zone...

Ozzie Albies, Jose Altuve, Andrew Benintendi, Betts, Khris Davis, Eduardo Escobar, Scooter Gennett, Jose Ramirez, Jean Segura

Saturday, July 21, 2018


Here's the latest: run scoring and hitting in general are up, but HRs are down--just the first piece of information refuting Joe P.'s recent "defense" of what a growing cadre of disillusioned statheads are calling the "take and rake" philosophy.

We'll get back to undressing Joe's argument at a later date, but suffice it to say that it's OBP that drives offense. And there are two ways to increase OBP--draw more walks or make more hits. That's what's happening in July. Run scoring is at its highest rate because BA and OBP have recovered, while HRs are still down.

It's a sign that a sizable number of hitters are setting aside the "all or nothing" approach after they watched their collective BA push itself into 1960s levels for nearly six weeks during May-June.

It's also a sign that there's something of a starting pitcher crisis occurring this month--but not in terms of HRs allowed. No, the symptom seems to be more basic--and sabermetrically inconvenient. It appears (as shown in the table at left) that many teams' starters have suddenly become more hittable. Batting average is up, and it is strongly correlated with the often sharp rise in starting pitcher ERA thus far in July.

Sixteen teams have their starters posting July ERAs at least ten percent higher than the overall team ERA. Nineteen teams have starting pitchers who are generally more hittable in July than they've been over the course of the season to date.

(As is likely the case with most of you, we don't quite know what to do with the Tampa Bay data, since their starters are still "unto themselves." But what's clear is that the Rays are not giving up very many HRs in July, and that's how they've lowered their starters' ERA even though they are giving up more hits.) More evidence of a slow but steady adjustment from the single-minded "take and rake."