Saturday, February 28, 2015


There has been much ado--mostly in the form of noise--about the fact that assigning wins to individual pitchers is a flawed process. Much of this ado (and much of it is, in fact, about nothing) is tied to an ideology that has developed from the value measurement systems that are being forced down the public's throat by emboldened cultists wishing to actualize the work of Bill James (who would merely make a corrective notation in the historical record) by actively rewriting the official stats to suit their own desires.

Step right up for those Radioactive Tango Love Pies™!!!
Call us curmudgeonly (and believe us when we say that we've been called much worse...), but we think that it might make sense to quantify the extent of the flaw before mounting a mouth-foam crusade to toss away the historical record. In the rush to judgment and the desire to own a mandate to interpret history, these folks (as usual, a number of them aligned with the purveyors of the Tango Love Pie™) have proven to be eerie precursors of the current United States Congress, a large faction of which hope to hijack history as well as the government. They both share the same strange obsession: to unilaterally declare something utterly irrelevant and bankrupt, only to follow by attempting to replace it with something that is, in fact, far worse than what they were criticizing in the first place.

The first step to a semblance of sanity with respect to the assignment of pitcher wins is to actually anatomize the flaws and then determine the rate of their occurrence. As is so typical of the "neo-post-neo" faction now searching for market inefficiencies in the degree of tension in an athlete's jock strap, the analysts favor "big data" without actually synthesizing any of it.

We see, for example, that a number of scoring quirks existed early in the twentieth century that assigned a handful of wins to a starter who hadn't pitched five innings. And we see a few stray instances of inconsistent judgment calls by official scorers. When we add this up over the long history of the game, however, we see that these types of glitches account for 0.5% of the total games played.

Viva la revolucion, n'est-ce pas??

But there is an area where wins are assigned with a through-the-looking-glass quality: games in which an inefficient reliever surrenders a lead and then receives a win when his team retakes that lead while he--the pitcher who was lousy--is still the pitcher of record.

This occurrence is not accurately quantified in any official way thus far; there are only surrogates for it that do not directly address the actual frequency. Over at Forman et fils, they list the number of times the starting pitcher is "in line for a win" but the team goes on to lose--an interesting stat in its own right and one that might assist in understanding the efficiency of a team's bullpen, but one that doesn't get to the root of our issue at all since we are looking for "wins while pitching badly that were stolen from someone who pitched better."

There is one statistic that can get us to where we want to go, however. That stat is the "blown save."

It's a stat that is overlooked, even scorned, due to two factors: 1) its lack of historical pedigree and 2) its odd lack of precision, which creates "save situations" in the sixth, seventh and eighth innings. But it's precisely that lack of precision that affords us insight into the situations that produce what we've characterized (back up top in the title...) as "uggly relief wins."

In other words, wins as a result of blown saves.

So--how many of these are there? In 2014, there were a total of 59 "blown save wins"--wins awarded to relievers pitching badly enough to relinquish a lead, and then benefit from a go-ahead rally while they were still pitchers of record.

That works out to 2.4% of all wins for the 2014 season.
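The arithmetic behind that 2.4% can be sketched in a few lines (a back-of-the-envelope check, assuming a full 2,430-game season in which every game produces exactly one win--the variable names are ours, not anything official):

```python
# Each MLB game produces exactly one win, so a full 2014 slate of
# 2,430 games (30 teams x 162 games, divided by 2) yields 2,430 wins.
games_per_team = 162
teams = 30
total_wins = teams * games_per_team // 2  # 2430

blown_save_wins = 59  # relievers who blew a save, then got the W anyway

share = blown_save_wins / total_wins
print(f"{share:.1%}")  # -> 2.4%
```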

So what that means is that the cause celebre, this blight of all blights, constituting--apparently--the sellout of truth, justice and (God help us...) the American way, is focusing on a method that is perfectly reliable upwards of 95% of the time.

Reassigning 59 wins from relievers who've found this annoying little loophole seems a lot more reasonable than developing overwrought automated systems that reassign up to ten times as many wins in any given season.

Perhaps we need more history and more context? Would it be valuable to know if the percentage of "uggly relief wins" has changed over time? And maybe useful to have a sense of how the changes in the usage of the bullpen may (or may not) be affecting blown saves/blown save wins and this purported "conceptual crisis of the win"?

Well, of course we do. And Forman et fils is the place to acquire it. We spent some time (when we had it--as you may have noticed, we're not here a lot at this time because we have many, many other things on our plate...) looking at the data. And we've compiled a summary chart that gets us to the root of the matter.

The chart (above) starts with the actual number of blown saves in a season. We've condensed the data to reflect how the essential pattern has evolved. Accelerated bullpen use took hold in the mid-to-late 80s and was exacerbated by the offensive explosion in the 90s: we reach a peak at the very end of that decade. Things have declined a bit since, but seem to have plateaued.

Blown Save Wins (BlSvW) have also descended since the late 90s, and the percentage of "uggly relief wins" (UGG%) has declined back to pre-offensive explosion levels.

One of the other things that we thought might be significant here was to see if the ever-increasing use of relievers was having an effect on "uggly relief wins," particularly in terms of how long a reliever pitches in these. What's striking here is that from 1949 to the present, the vast majority of "uggly relief wins" come from pitchers who pitch at least one full inning (89% of them, in fact) while the percentage of blown saves that are an inning or more in duration is consistently between 50% and 60%.

That means that there's something structural about how "uggly relief wins" manifest themselves that resists any perturbing effect by the increasing number of relief appearances being made.
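For anyone who wants to replicate that inning-length split from a list of appearances, here's a minimal sketch. It assumes nothing about anyone's database--just baseball's x.1/x.2 innings-pitched notation, which our little helper converts to outs before applying the one-full-inning cutoff:

```python
def ip_to_outs(ip):
    """Convert IP notation (e.g. 1.2 = 1 inning plus 2 outs) to total outs."""
    whole, frac = divmod(round(ip * 10), 10)
    return whole * 3 + frac

def share_full_inning(appearances):
    """Fraction of appearances lasting at least one full inning (3+ outs)."""
    qualifying = sum(1 for ip in appearances if ip_to_outs(ip) >= 3)
    return qualifying / len(appearances)

# Hypothetical blown-save outings, in IP notation (not real data):
print(share_full_inning([0.1, 0.2, 1.0, 1.2, 2.0]))  # 3 of 5 -> 0.6
```

Run the same function over blown saves and over blown-save wins separately and you get the two percentages being compared above.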

And interestingly, the percentage of blown save wins (same thing as "uggly relief wins," just in case that wasn't clear) in appearances equal to or greater than a full IP is dropping. (That can be seen in the far right column, the one marked B!Sv1+%--yes, that "!" was supposed to be an "l"...fat fingers uber alles!) One wonders whether, if offense continues to decline, "uggly relief wins" will continue to drift downward.

So should we worry about the "uggly relief win" and how it has ruined the use of pitcher wins? No, of course not. But we don't expect this finding to gain much traction with the ideologues, who would like to take away texture and shape and all of the "imprecision" in value assessment that they imply, and try to do so with the zeal of a score of possessed mothers compelled to throw out bathwater and baby. (Give us spots on our apples, and leave us the birds and bees, already...)

Finally, here's an astonishing fact related to blown saves in general that just dropped out of the data collection effort. It turns out that, over the last twenty years at least, the blown save--this is in general, now, for all outings identified this way, not just those that become "uggly relief wins"--is accompanied by an exceptionally high stolen base rate. It's not uncommon for the success rate in steals during blown saves to be upwards of 85% (in 2013, there were 91 SB, 14 CS in blown saves, or 87% to the good).

What does that mean?? Hard to say. But it's strange, and interesting, and more worthy of some fuss and fol de rol than the so-called "crack in the earth" purportedly produced by the fact that pitcher wins are not a perfect laboratory product.

Friday, February 27, 2015


We are loath to traffic in the bracing but often overly brilliantined "compare/contrast" franchise that Bill James invented in order to create a framework for literary form often masquerading as analysis. Such a technique reached its apex (or its nadir) in The Politics of Glory, where the dualist approach was so pervasive as to signal a potentially dangerous compulsion. (Of late, Bill has returned to this technique, improving on it by improvisationally adding more players to the comparison.)

As a stylistic device, it's often fascinating because there is a palpable psychological undercurrent that emerges from it that often transcends the mere content being discussed. The same cannot be said, however, for those who slavishly imitate the form that Bill invented. Contrarian philosophical urgency, which oozes out of Bill's toothpaste tube of discourse almost involuntarily, is replaced by a kind of wan sophistry (as embodied by the Lindberghs and the Keris and the bland inheritors of all the "prairie fire" numberists) that instinctively chooses limpid over lumpen.

To put it another way, Bill's work in this area has always been akin to a blue plate special which relied heavily on the prominent placement of side dishes, which often were plopped on the plate first in anticipation of the main course's arrival (often plopped down with the rough panache of a proud backyard chef). His inheritors have all shown the lamentable (but market-driven) tendency to go nouvelle, serving up tiny entrees on impossibly large plates with some festive food coloring festooned round its edges.

So you can sense our reluctance to traipse through those dangerous swinging doors. But, hey, when in Rome, right?

The recent passing of Minnie Minoso reminded us of just how long there has been a case bubbling over (as opposed to a case of bubbly delivered erroneously to your address...) about his worthiness for the Hall of Fame. Our view is that he's just on this side of that paradisiacal marker, but we would lose no sleep if a lobbying campaign carried him into Cooperstown. Thinking about this again on the occasion of his passing, we're reminded of Ken Boyer--a contemporary of Minnie's who also has been heavily touted for the Hall of Fame by the numbers crowd.

So before we could stop ourselves, we tossed together our version of a "comp" for these two. As you'd expect, ours is radically simpler than what you'd get with WAR (a system that clearly distorts the importance of fielding and uses a transient combination of coarse models and crude interpretations to overstate positional difference).

This radical simplicity is, indeed, radically simple: OPS+ and triples. (Not triples...again?? Fear not: this is our version of the Jamesian "side dish," applied here because we think it's interesting to look at a category defined by its scarcity in the time frame being covered.) These are arranged in five-year totals/averages.

What we see here is (despite what is also a calculational strangeness in the offensive component of WAR) just how good Minnie was in this time frame.

That's eight straight five-year slices where he's in the Top 15 in OPS+, an overall stretch of twelve years. By contrast, Boyer has only one five-year slice where he cracks the Top 15. Minnie made it into the top ten four times.

Boyer surprises us, though, with his showing in triples. We had to remember (and without help from Brock Hanke) that Ken came up to the majors with some speed, and even played a passable center field one season early in his career. Playing in a league where ballparks conducive to triples were already giving way to the cookie-cutter stadia of the sixties, Boyer's 3B totals rank well even if they pale in comparison to Minnie's at their peak.

What probably keeps Minnie on the outside looking in with respect to Cooperstown, however, is his lack of a palpable peak at any point of his career. Numbers guys have meta-categorized such a region of players with the glib moniker of The Hall of the Very Good. In Minnie's case, he's probably more accurately in The Hall of the Very, Very Good. Boyer, a fine fielding third baseman (but not quite as good as the numbers guys have claimed) is probably straddling each of these regions.

Friday, February 13, 2015


Our title presages one of the coming features of our (admittedly idiosyncratic) coverage of the beautiful eyesore that is baseball for 2015--a series of essays in which aberrant references to and arcane interpretations of T.S. Eliot's The Waste Land will gurgle up like...well, like lilacs out of the dead land (if you must know).

And that's just what certain franchises in baseball's dizzying merry-go-round (Darren V., feel free to cue up that long-treasured copy of the Wild Man Fischer LP...) have been trying to simulate--their own chthonic canticles of rebirth (even if it all merely amounts to various psychic variations on graverobbing).

The two "unreal cities" of the 2015 offseason (not counting Oakland, of course, since Billy Beane is already well-known for defying the limits of mere unreality) are Chicago and San Diego, where three teams (whose GMs dare to step out from the shadow of this red rock) are looking to walk out of their own graves.

Actually...the thing that scares us the most is just how much the elderly Eliot resembles Bud Selig. Thank God we've never seen a photo of him cupping his hand to his right ear...

The relativism, the uncertainty, the moment of repose in the leap of faith (see? you can't really tell when I'm quoting Eliot or simply playing randomly with the gas burners on my stove...) is what finally sinks in after all the media blather and the strong, pungent lather of the off-season, waiting for dull roots to be stirred by spring (t)rain(ing).

It's a healthy shoulder-shrug for those guardians of the word, who don't actually have to play the games, who also serve by scribbling (even in the face of automaton "replacement level" journalists--our thanks to El Jefe for the sobering reminder that everyone's consciousness will, sooner than later, merge with the machine). This is the season of casting stones, to be followed in March with the regathering of those stones and the systematic stakeout of glass houses.

Anyone else see it? Jayson: if you just let that hair grow, don some thrift store duds, and hire a hag to be your mom for the photo shoot, you'll be a dead ringer for Wild Man Fischer!!

And so you might be cheered by the cheeky comfort in the ongoing transit of the cloud of unknowing represented in Jayson Stark's ESPN column, with its ersatz quantification of off-season activity, where The Man Who Would Be Us But For The Grace Of God has once again donned his reversible vest and asked the Emperors to cover their heinies. (Of course, some people make a fortune out of turning polls into blunt instruments, but Jayson is smart enough to know that corpses planted in the garden a year ago have a dangerous tendency to sprout.)

What's usually the case with teams such as the Cubs, the Padres, and the White Sox--our troika of flamboyant off-season fisher kings--is that some overlooked element in the makeup of their roster proves to be a stumbling block for the prospects of a phoenix-like rise from the ashes. For the Cubs, it will be the karma of the ruling-class clan with that most unfortunate and negatively evocative name, added to the insular prep-school arrogance of its brain trust, that will stall the "progress of the seasons"--that, and the failure of certain young prospects (Kris Bryant, Javier Baez, Jorge Soler) to meet outsized expectations. For the White Sox, it will be that the massive off-season "haul to the stockyards" (and let's remember that it was always the South Side of town that was never safe from the whiff of cattle...) is more burdensome beast than sanctifying stampede.

Madame Sosostris, right after her blind date with A.J. Preller...
And, in San Diego, the spectre of a team that (according to Madame Sosostris' wicked pack of cards, at any rate...) will have the greatest discrepancy in home-road performance in recent times (venturing, in its own inverted way, into the territory occupied by the early incarnations of the Colorado Rockies) is going to put the chill on A.J. Preller's ascent into the Empyrean, leaving him instead with a series of B-tickets for all the really tepid thrill rides at Disneyland. He's a personable kid, though, and he'll resurface ten years after his ritual beheading as the new host on (yet another) remake of Let's Make A Deal. The irony will not be lost on him, but he'll do his best to suppress it...while dimly recalling that in a land strip-mined of its values, there is not even silence in its mountains. (That thought will be hard to keep hold of, however, when he's being overrun by those hordes of housewives.)

So...will these three unreal cities--or, rather, franchises--collectively play over .500 in 2015? Neither Jayson Stark, nor I--nor even that mischievous Man in the Moon--know for sure. What's happened to analysis in the past twenty years is that it has mixed its metaphors and its ideologies into a muddle, no longer sure of which is which, filled with carious teeth that can't even spit in the midst of its off-season spew. So we all await those bats with baby faces in the violet light...


Wednesday, February 4, 2015


We're getting close to spring training, right? (Even as--especially as--blizzards pound the American landscape east of the Mississippi.) So we can start writing "series" just like the overdetermined media folks do. (OK, we will refrain from the overdetermined "ask a stupid transparent leading question and make it the god-damned-mega-overdetermined-title-of-our-goddamned-stupid-article" ploy. We'll just use a lot more parentheses...)

And what better place to start a "series" than with our long-time, long-term semi-nebulous concept of the "blue collar" starting rotation. Sounds good, n'est-ce pas? It's got that "throwback" feel to it (even if no one can quite remember just what "blue collar" was supposed to mean).

So, goddamn it, we are here to define it at last. (And--goddamn it--we're damned if we do and god-damned damned if we don't.)

A "blue collar" starting rotation is one where a team has no starting pitcher with 20 or more GS in a season with an ERA+ of 120 or higher.

What we're interested in determining is as follows: 1) how many of them are there, and 2) how often do they occur on teams that make the post-season.

So we have (at right) a chart that shows the team data for this over the past ten years (2005-14).

When we break out those numbers, we find that 35% of all teams have what we call "blue collar" starting rotations. (42% of all teams have one pitcher with an ERA+ of 120 or greater; 17% have two; 6% have three or more.)

Of those, 25 (or 20%) are playoff teams. We've identified most of these in the chart with a red zero. (Alas--goddamn it--we missed a few.) The most recent such team--the 2014 World Champion San Francisco Giants. The team they replaced as world champs--the 2013 Boston Red Sox--also had a "blue collar" starting rotation.

As you might expect, the more pitchers with a 120+ ERA+ a team has, the more likely it is that they'll be in the post-season. 24% of all teams with one pitcher in the "white collar" (120+ ERA+) category make the playoffs; 43% of all teams with two pitchers in that same performance region wind up in the post-season. And 71% of teams with three or more 120+ ERA+ starters don't go home when the regular season ends.
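Computing those playoff rates is a simple grouping exercise. Here's a sketch using toy records (the tuples below are illustrative only--not the actual 2005-14 team-seasons):

```python
from collections import Counter

# Toy records: (number of 120+ ERA+ starters with 20+ GS, made_playoffs).
teams = [(0, False), (0, False), (1, True), (1, False),
         (1, False), (1, False), (2, True), (2, False), (3, True)]

made = Counter(n for n, playoffs in teams if playoffs)
total = Counter(n for n, _ in teams)

for n in sorted(total):
    print(f"{n} white-collar starters: "
          f"{made[n] / total[n]:.0%} made the playoffs")
```

Feed it the real ten-year table and the 24%/43%/71% figures above fall out the same way.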

Now, of course, some pitching rotations are more "blue collar" than others. We'll discuss that--and a bit more--in our next installment.