Tuesday, March 24, 2015


We neglected to mention in our last post (so many moons ago: our "Rare Noir is Good to Find!" series just concluded in San Francisco yesterday evening...) that the advent of the five-man rotation (which begins in the mid-1970s) is the key event in the sudden and incontrovertible decline of the 20-game loser.

It's odd to note that this is also the social timeframe in which government management consultants begin to perfect their ability to distance themselves from actual accountability for their work, which leads to partisan data manipulation and an internal bureaucratic system that permits a permanent political strategy of "running against the government." Baseball's current roster situation (more and more pitchers) is a result of something analogous: dispersal of blame protects all the individuals while allowing for a permanent outcry against the system. This mindset is one of the reasons why it is so difficult to stem the tide of pitcher injury--root causes and goal-oriented science become muddled in politicized muck.

For those starting pitchers not fortunate enough to be the beneficiaries of such a system, however, the tag of "20-game loser" attains a quasi-romantic hue and becomes an odd "badge of honor." We're going to examine some of these "20-game loss seasons" via the lens of QMAX and try to separate these "big losers" into camps--those who pitched well and were inordinately unfortunate, and those who didn't. Along the way QMAX will provide shape statistics that augment and cast some new light on the more standard value statistics.

First up for such a treatment is Roger Craig, who managed to survive two 20-game losing seasons to become a longtime pitching coach and a major league manager. We focus here on his second 20-game loss campaign, the one that occurred in 1963.

You may recall that the strike zone was enlarged for this season and sent run scoring down into an abyss that hit bottom in 1968. Craig was one of the more victimized starting pitchers of this general trend: in his 31 starts in 1963, his team (the lowly New York Mets) scored only 2.29 runs per game. They were shut out in nine of his starts, and scored only one run in six more. In 23 of his starts, the team scored three runs or less. That is an incipient recipe for 20+ losses, and Craig's won-loss record was 5-22 (5-21 in starts). The Mets were 8-23 in his games started.

QMAX--and remember that it doesn't make its calculations using the actual runs allowed, but by a grid that plots hit prevention (modified for extra-base hits) and walk prevention (which also accounts for hit batsmen)--posits that Craig was actually right around a .500 pitcher in 1963.

He's a finesse pitcher, as the QMAX chart and the overall values (4.00 "S", or hit prevention; 2.39 "C", or walk prevention) demonstrate.

That "Tommy John" region (the box at lower left of the matrix chart) further confirms this. Eight of his starts fall in this region, as opposed to just one in the "Power Precipice" region (at upper right), where control is profligate but the pitcher is bordering on unhittable.
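The matrix logic here can be sketched in code. Below is a minimal Python sketch of such a region classifier, where the 1-7 "S" and "C" scales follow the QMAX description above but the exact region boundaries are our own illustrative assumptions, not the official QMAX cutoffs:

```python
from collections import Counter

# QMAX scores each start on two 1-7 scales: "S" (hit prevention) and
# "C" (walk prevention), with 1 the best and 7 the worst on each axis.
# The region boundaries below are illustrative assumptions only.

def qmax_region(s, c):
    """Classify one start by its (S, C) coordinates on the matrix."""
    if s <= 2 and c >= 6:
        return "Power Precipice"   # near-unhittable, but profligate control
    if s >= 6 and c <= 2:
        return "Tommy John"        # very hittable, but excellent control
    if s <= 3 and c <= 3:
        return "Success Square"    # good marks on both axes
    return "Mid-range"

# Four hypothetical starts, one landing in each region:
starts = [(2, 7), (7, 1), (3, 2), (5, 5)]
print(Counter(qmax_region(s, c) for s, c in starts))
```

Tallying a season's worth of (S, C) pairs this way is, in effect, what produces the "range data" that quantifies the colored regions of the chart.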

The "range data" (which quantifies the colored and boxed regions on the matrix chart) indicates that Craig was hit hard a bit more than a fourth of the time--but the basic chart tells us that when he was hit hard, he was really hit hard (all but one of these games in the "7S" row). He did manage to win one of these games (part of what was only the second two-game winning streak he had over the entire course of the 1963 season).

The photo above shows Craig changing uniform numbers: he was in the middle of an eighteen-game losing streak and he resorted to superstition (#13) in an attempt to break it. It didn't help, because Roger was in a skein of games where the Mets were scoring absolutely nothing when he took the mound (in the ten-game stretch from June 22 to August 4, they scored a total of eleven runs).

We can use the QMAX matrix chart to capture a key aspect of this bad luck. Our modified matrix diagram (at right) shows won-loss records and ERAs for key regions. On this version of the matrix chart, we can see that the woeful lack of run support actually resulted in the Mets having a losing record (3-4) in his "elite square" games (which, collectively, produced a .705 WPCT for teams in 1963).

All this in spite of a 1.79 ERA.

[Two notes: 1) these regions show the "team record" for games in these regions, not just Roger's won-loss record; 2) the mid-region in the "success square"--the more intense yellow coloring--simply repeats the W-L and ERA for both segments.]

Finally, we can demonstrate the effect of control for a finesse pitcher by looking at the "C" region breakouts. Craig was a very competitive pitcher when he had his control--a 2.49 ERA and a 7-12 team record in these games. But when his control was spotty, he was slaughtered: a 6.12 ERA and a team record of 1-11.

All current systems value Craig as just about league average (Wins Above Average is just under .500), but only QMAX can show you the component details by looking at the shape of his performance.

Sunday, March 8, 2015


We'll get back to the idea of "blue collar" starting rotations a bit later, when we can process a little more of the data related to that concept. Let's shift gears here and take a look at what's happened to yet another baseball phenomenon that's approaching extinction.

What's that? Why, the 20-game loser, of course. A conversation with a long-ago colleague reminded us of the decline in this statistical category. As the chart at right indicates, we've had exactly one 20-game loser since 1980.

As you can see, there was a little flurry of 20-game losers in the 1960s/1970s, but that fell apart with a loud thump after 1980. The last team that had two 20-game losers: the 1973 Chicago White Sox. It's a feat that we seriously doubt will ever happen again.

Saturday, February 28, 2015


There has been much ado--mostly in the form of noise--about the fact that assigning wins to individual pitchers is a flawed process. Much of this ado (and much of it is, in fact, about nothing) is tied to an ideology that has developed from the value measurement systems that are being forced down the public's throat by emboldened cultists wishing to actualize the work of Bill James (who would merely make a corrective notation in the historical record) by actively rewriting the official stats to suit their own desires.

Step right up for those Radioactive Tango Love Pies™!!!
Call us curmudgeonly (and believe us when we say that we've been called much worse...), but we think that it might make sense to quantify the extent of the flaw before mounting a mouth-foam crusade to toss away the historical record. In the rush to judgment and the desire to own a mandate to interpret history, these folks (as usual, a number of them aligned with the purveyors of the Tango Love Pie™) have proven to be eerie precursors of the current United States Congress, a large faction of which hopes to hijack history as well as the government. They both share the same strange obsession: to unilaterally declare something utterly irrelevant and bankrupt, only to follow by attempting to replace it with something that is, in fact, far worse than what they were criticizing in the first place.

The first step to a semblance of sanity with respect to the assignment of pitcher wins is to actually anatomize the flaws and then determine the rate of their occurrence. As is so typical of the "neo-post-neo" faction now searching for market inefficiencies in the degree of tension in an athlete's jock strap, the analysts favor "big data" without actually synthesizing any of it.

We see, for example, that a number of scoring quirks existed early in the twentieth century that assigned a handful of wins to a starter who hadn't pitched five innings. And we see a few stray instances of inconsistent judgment calls by official scorers. When we add this up over the long history of the game, however, we see that these types of glitches account for 0.5% of the total games played.

Viva la revolucion, n'est-ce pas??

But there is an area where wins are assigned with a through-the-looking-glass quality. These occur directly in conjunction with inefficient relief pitchers who surrender a lead and receive a win when their team retakes the lead while the pitcher who was lousy is still the pitcher of record.

This occurrence is not accurately quantified in any official way thus far; there are only surrogates for it that do not directly address the actual frequency. Over at Forman et fils, they list the number of times the starting pitcher is "in line for a win" but the team goes on to lose--an interesting stat in its own right and one that might assist in understanding the efficiency of a team's bullpen, but one that doesn't get to the root of our issue at all since we are looking for "wins while pitching badly that were stolen from someone who pitched better."

There is one statistic that can get us to where we want to go, however. That stat is the "blown save."

It's a stat that is overlooked, even scorned, due to two factors: 1) its lack of historical pedigree and 2) its odd lack of precision, which creates "save situations" in the sixth, seventh and eighth innings. But it's precisely that lack of precision that affords us insight into the situations that produce what we've characterized (back up top in the title...) as "uggly relief wins."

In other words, wins as a result of blown saves.

So--how many of these are there? In 2014, there were a total of 59 "blown save wins"--wins awarded to relievers pitching badly enough to relinquish a lead, and then benefit from a go-ahead rally while they were still pitchers of record.

That works out to 2.4% of all wins for the 2014 season.
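That percentage is easy to verify, since every one of the 2,430 regular-season games in 2014 produced exactly one winning pitcher decision. A quick check:

```python
# Every 2014 regular-season game awarded exactly one pitcher win,
# so total wins equals total games: 30 teams * 162 games / 2.
total_wins = 30 * 162 // 2        # 2,430
blown_save_wins = 59              # the "uggly relief wins" counted above
ugg_pct = 100 * blown_save_wins / total_wins
print(f"{ugg_pct:.1f}%")          # 2.4%
```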

So what that means is that the cause celebre, this blight of all blights, constituting--apparently--the sellout of truth, justice and (God help us...) the American way, is focusing on a method that is perfectly reliable upwards of 95% of the time.

Reassigning 59 wins from relievers who've found this annoying little loophole seems a lot more reasonable than developing overwrought automated systems that reassign up to ten times as many wins in any given season.

Perhaps we need more history and more context? Would it be valuable to know if the percentage of "uggly relief wins" has changed over time? And maybe useful to have a sense of how the changes in the usage of the bullpen may (or may not) be affecting blown saves/blown save wins and this purported "conceptual crisis of the win"?

Well, of course we do. And Forman et fils is the place to acquire it. We spent some time (when we had it--as you may have noticed, we're not here a lot at this time because we have many, many other things on our plate...) looking at the data. And we've compiled a summary chart that gets us to the root of the matter.

The chart (above) starts with the actual number of blown saves in a season. We've condensed the data to reflect how the essential pattern has evolved. Accelerated bullpen use took hold in the mid-to-late 80s and was exacerbated by the offensive explosion in the 90s: we reach a peak at the very end of that decade. Things have declined a bit since, but seem to have plateaued.

Blown Save Wins (BlSvW) have also descended since the late 90s, and the percentage of "uggly relief wins" (UGG%) has declined back to pre-offensive explosion levels.

One of the other things that we thought might be significant here was to see if the ever-increasing use of relievers was having an effect on "uggly relief wins," particularly in terms of how long a reliever pitches in these. What's striking here is that from 1949 to the present, the vast majority of "uggly relief wins" come from pitchers who pitch at least one full inning (89% of them, in fact) while the percentage of blown saves that are an inning or more in duration is consistently between 50% and 60%.

That means that there's something structural about how "uggly relief wins" manifest themselves that resists any perturbing effect by the increasing number of relief appearances being made.

And interestingly, the percentage of blown save wins (same thing as "uggly relief wins," just in case that wasn't clear) in appearances equal to or greater than a full IP is dropping. (That can be seen in the far right column, the one marked B!Sv1+%--yes, that "!" was supposed to be an "l"...fat fingers uber alles!) One wonders whether, if offense continues to decline, "uggly relief wins" will continue to drift downward.

So should we worry about the "uggly relief win" and how it has ruined the use of pitcher wins? No, of course not. But we don't expect this finding to gain much traction with the ideologues, who would like to take away texture and shape and all of the "imprecision" in value assessment that they imply, and try to do so with the zeal of a score of possessed mothers compelled to throw out bathwater and baby. (Give us spots on our apples, and leave us the birds and bees, already...)

Finally, here's an astonishing fact related to blown saves in general that just dropped out of the data collection effort. It turns out that, over the last twenty years at least, the blown save--this is in general, now, for all outings identified this way, not just those that become "uggly relief wins"--is accompanied by an exceptionally high stolen base rate. It's not uncommon for the success rate in steals during blown saves to be upwards of 85% (in 2013, there were 91 SB, 14 CS in blown saves, or 86% to the good).

What does that mean?? Hard to say. But it's strange, and interesting, and more worthy of some fuss and fol de rol than the so-called "crack in the earth" purportedly produced by the fact that pitcher wins are not a perfect laboratory product.

Friday, February 27, 2015


We are loath to traffic in the bracing but often overly brilliantined "compare/contrast" franchise that Bill James invented in order to create a framework for literary form often masquerading as analysis. Such a technique reached its apex (or its nadir) in The Politics of Glory, where the dualist approach was so pervasive as to signal a potentially dangerous compulsion. (Of late, Bill has returned to this technique, improving on it by improvisationally adding more players to the comparison.)

As a stylistic device, it's often fascinating because there is a palpable psychological undercurrent that emerges from it that often transcends the mere content being discussed. The same cannot be said, however, for those who slavishly imitate the form that Bill invented. Contrarian philosophical urgency, which oozes out of Bill's toothpaste tube of discourse almost involuntarily, is replaced by a kind of wan sophistry (as embodied by the Lindberghs and the Keris and the bland inheritors of all the "prairie fire" numberists) that instinctively chooses limpid over lumpen.

To put it another way, Bill's work in this area has always been akin to a blue plate special which relied heavily on the prominent placement of side dishes, which often were plopped on the plate first in anticipation of the main course's arrival (often plopped down with the rough panache of a proud backyard chef). His inheritors have all shown the lamentable (but market-driven) tendency to go nouvelle, serving up tiny entrees on impossibly large plates with some festive food coloring festooned round its edges.

So you can sense our reluctance to traipse through those dangerous swinging doors. But, hey, when in Rome, right?

The recent passing of Minnie Minoso reminded us of just how long there has been a case bubbling over (as opposed to a case of bubbly delivered erroneously to your address...) about his worthiness for the Hall of Fame. Our view is that he's just on this side of that paradisiacal marker, but we would lose no sleep if a lobbying campaign carried him into Cooperstown. Thinking about this again on the occasion of his passing, we're reminded of Ken Boyer--a contemporary of Minnie's who also has been heavily touted for the Hall of Fame by the numbers crowd.

So before we could stop ourselves, we tossed together our version of a "comp" for these two. As you'd expect, ours is radically simpler than what you'd get with WAR (a system that clearly distorts the importance of fielding and uses a transient combination of coarse models and crude interpretations to overstate positional difference).

This radical simplicity is, indeed, radically simple: OPS+ and triples. (Not triples...again?? Fear not: this is our version of the Jamesian "side dish," applied here because we think it's interesting to look at a category defined by its scarcity in the time frame being covered.) These are arranged in five-year totals/averages.

What we see here is (despite a calculational strangeness that also afflicts the offensive component of WAR) just how good Minnie was in this time frame.

That's eight straight five-year slices where he's in the Top 15 in OPS+, an overall stretch of twelve years. By contrast, Boyer has only one five-year slice where he cracks the Top 15. Minnie made it into the top ten four times.

Boyer surprises us, though, with his showing in triples. We had to remember (and without help from Brock Hanke) that Ken came up to the majors with some speed, and even played a passable center field one season early in his career. Playing in a league where ballparks conducive to triples were already giving way to the cookie-cutter stadia of the sixties, Boyer's 3B totals rank well even if they pale in comparison to Minnie's at their peak.

What probably keeps Minnie on the outside looking in with respect to Cooperstown, however, is his lack of a palpable peak at any point of his career. Numbers guys have meta-categorized such a region of players with the glib moniker of The Hall of the Very Good. In Minnie's case, he's probably more accurately in The Hall of the Very, Very Good. Boyer, a fine fielding third baseman (but not quite as good as the numbers guys have claimed) is probably straddling each of these regions.

Friday, February 13, 2015


Our title presages one of the coming features of our (admittedly idiosyncratic) coverage of the beautiful eyesore that is baseball for 2015--a series of essays in which aberrant references to and arcane interpretations of T.S. Eliot's The Waste Land will gurgle up like...well, like lilacs out of the dead land (if you must know).

And that's just what certain franchises in baseball's dizzying merry-go-round (Darren V., feel free to cue up that long-treasured copy of the Wild Man Fischer LP...) have been trying to simulate--their own chthonic canticles of rebirth (even if it all merely amounts to various psychic variations on graverobbing).

The two "unreal cities" of the 2015 offseason (not counting Oakland, of course, since Billy Beane is already well-known for defying the limits of mere unreality) are Chicago and San Diego, where three teams (whose GMs dare to step out from the shadow of this red rock) are looking to walk out of their own graves.

Actually...the thing that scares us the most is just how much the elderly Eliot resembles Bud Selig. Thank God we've never seen a photo of him cupping his hand to his right ear...

The relativism, the uncertainty, the moment of repose in the leap of faith (see? you can't really tell when I'm quoting Eliot or simply playing randomly with the gas burners on my stove...) is what finally sinks in after all the media blather and the strong, pungent lather of the off-season, waiting for dull roots to be stirred by spring (t)rain(ing).

It's a healthy shoulder-shrug for those guardians of the word, who don't actually have to play the games, who also serve by scribbling (even in the face of automaton "replacement level" journalists--our thanks to El Jefe for the sobering reminder that everyone's consciousness will, sooner than later, merge with the machine). This is the season of casting stones, to be followed in March with the regathering of those stones and the systematic stakeout of glass houses.

Anyone else see it? Jayson: if you just let that hair grow, don some thrift store duds, and hire a hag to be your mom for the photo shoot, you'll be a dead ringer for Wild Man Fischer!!
And so you might be cheered by the cheeky comfort in the ongoing transit of the cloud of unknowing represented in Jayson Stark's ESPN column, with its ersatz quantification of off-season activity, where The Man Who Would Be Us But For The Grace Of God has once again donned his reversible vest and asked the Emperors to cover their heinies. (Of course, some people make a fortune out of turning polls into blunt instruments, but Jayson is smart enough to know that corpses planted in the garden a year ago have a dangerous tendency to sprout.)

What's usually the case with teams such as the Cubs, the Padres, and the White Sox--our troika of flamboyant off-season fisher kings--is that some overlooked element in the makeup of their roster proves to be a stumbling block for the prospects of a phoenix-like rise from the ashes. For the Cubs, it will be the karma of the ruling-class clan with that most unfortunate and negatively evocative name, added to the insular prep-school arrogance of its brain trust, that will stall the "progress of the seasons"--that, and the failure of certain young prospects (Kris Bryant, Javier Baez, Jorge Soler) to meet outsized expectations. For the White Sox, it will be that the massive off-season "haul to the stockyards" (and let's remember that it was always the South Side of town that was never safe from the whiff of cattle...) is more burdensome beast than sanctifying stampede.

Madame Sosostris, right after her blind date with A.J. Preller...
And, in San Diego, the spectre of a team that (according to Madame Sosostris' wicked pack of cards, at any rate...) will have the greatest discrepancy in home-road performance in recent times (venturing, in its own inverted way, into the territory occupied by the early incarnations of the Colorado Rockies) is going to put the chill on A.J. Preller's ascent into the Empyrean, leaving him instead with a series of B-tickets for all the really tepid thrill rides at Disneyland. He's a personable kid, though, and he'll resurface ten years after his ritual beheading as the new host on (yet another) remake of Let's Make A Deal. The irony will not be lost on him, but he'll do his best to suppress it...while dimly recalling that in a land strip-mined of its values, there is not even silence in its mountains. (That thought will be hard to keep hold of, however, when he's being overrun by those hordes of housewives.)

So...will these three unreal cities--or, rather, franchises--collectively play over .500 in 2015? Neither Jayson Stark, nor I--nor even that mischievous Man in the Moon--know for sure. What's happened to analysis in the past twenty years is that it has mixed its metaphors and its ideologies into a muddle, no longer sure of which is which, filled with carious teeth that can't even spit in the midst of its off-season spew. So we all await those bats with baby faces in the violet light...


Wednesday, February 4, 2015


We're getting close to spring training, right? (Even as--especially as--blizzards pound the American landscape east of the Mississippi.) So we can start writing "series" just like the overdetermined media folks do. (OK, we will refrain from the overdetermined "ask a stupid transparent leading question and make it the god-damned-mega-overdetermined-title-of-our-goddamned-stupid-article" ploy. We'll just use a lot more parentheses...)

And what better place to start a "series" than with our long-time, long-term semi-nebulous concept of the "blue collar" starting rotation. Sounds good, n'est-ce pas? It's got that "throwback" feel to it (even if no one can quite remember just what "blue collar" was supposed to mean).

So, goddamn it, we are here to define it at last. (And--goddamn it--we're damned if we do and god-damned damned if we don't.)

A "blue collar" starting rotation is one where a team has no starting pitcher with 20 or more GS in a season with an ERA+ of 120 or higher.
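The definition reduces to a one-line test. Here's a minimal Python sketch, assuming each starter is represented as a hypothetical (games_started, era_plus) pair:

```python
# The 20-GS and 120-ERA+ thresholds come straight from the definition above.

def is_blue_collar(starters):
    """True if no starter with 20+ GS posted an ERA+ of 120 or higher."""
    return not any(gs >= 20 and era_plus >= 120
                   for gs, era_plus in starters)

# Hypothetical rotations:
print(is_blue_collar([(33, 104), (31, 115), (29, 98)]))   # True: no one above 115
print(is_blue_collar([(33, 134), (31, 115), (29, 98)]))   # False: one "white collar" arm
print(is_blue_collar([(15, 150), (31, 115)]))             # True: 150 ERA+, but under 20 GS
```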

What we're interested in determining is as follows: 1) how many of them are there, and 2) how often do they occur on teams that make the post-season.

So we have (at right) a chart that shows the team data for this over the past ten years (2005-14).

When we break out those numbers, we find that 35% of all teams have what we call "blue collar" starting rotations. (42% of all teams have one pitcher with an ERA+ equal to or greater than 120; 17% have two; 6% have three or more.)

Of those, 25 (or 20%) are playoff teams. We've identified most of these in the chart with a red zero. (Alas--goddamn it--we missed a few.) The most recent such team--the 2014 World Champion San Francisco Giants. The team they replaced as world champs--the 2013 Boston Red Sox--also had a "blue collar" starting rotation.

As you might expect, the more pitchers with a 120+ ERA+ that a team has, the more likely it is that they'll be in the post-season. 24% of all teams with one pitcher in the "white collar" (120+ ERA+) category make the playoffs; 43% of all teams with two pitchers in that same performance region wind up in the post-season. And 71% of teams with three or more 120+ ERA+ starters don't go home when the regular season ends.

Now, of course, some pitching rotations are more "blue collar" than others. We'll discuss that--and a bit more--in our next installment.

Saturday, January 31, 2015


Over at his site, Bill James is in the midst of what will likely be a book devoted to a revamped version of his fielding method for Win Shares. Aside from Bill's valiant attempt to demystify and critique the work of those who've made overly aggressive claims about matters with the glove, it's fascinating to see how the people "on the inside" are jockeying for position. (We'll get to that in a bit.)

A shameless and entirely unrelated plug for our upcoming "International Film Noir" series in San Francisco this March, where twelve of the fifteen films in the series haven't been seen in the US for more than 50 years. (Note to Bill James: resist the temptation to be a film/cultural critic.)
Bill is revising a series of models and value assessments about the various fielding positions, and there are some fascinating data perspectives that he's worked up as he attempts something like his own "big data" approach. (Not play-by-play data, but a more comprehensive agglomeration of traditional fielding data than what's been either envisioned or attempted previously.)

From looking at where he's at right now (up through first basemen), several things seem likely, and they will make for improvements in the earlier work. It's likely that the new method will at last rectify the ongoing modeling error that virtually every major fielding system has replicated--an exaggeration of the centerfielder's actual contribution to outs made. That should reduce the overall number of Win Shares assigned to that position. And this will help to eliminate a whole series of distortions that enter into other ranking systems (if those folks will pay attention to what's being said, that is).

It's also likely that Bill will stop short of--in fact, not even mention--what is (and has been for some time) our view of the most important missing factor in refining defensive evaluation. What's that? Simply, it's measuring how far fielders have to go from where they are positioned to get to the ball.

Now, clearly, one reason why Bill will at best only mention this in passing is that he wants to create a system that permits some kind of historical comparison, and the measurement above is simply not something that could be done without post-modern technology. It's clear that Bill is pretty much abdicating this aspect of things to the technocrats, who (unfortunately) are quite unlikely to ask the right questions about how to collect this data.

Coming soon! "Mini Tango Love Pies" with special containers that can be used to turn any statistical argument into a blunt instrument.
If that data is collected properly, we will know much more about the effective defense-to-pitching ranges that exist but can't currently be measured. We'll know more about such concepts as "pitcher luck" because the distance to ball data will tell us much more than the overrated BABIP stat does.

But likely the biggest battle that will come up in this new discussion, and one that is already underway from the ongoing chatter (including side conversations at the gathering place where the Tango Love Pie™ continues to bake...) has to do with what the effective range between the best and worst fielders at a position is. In his current work, Bill suggests that this range is much lower than virtually everyone else in the field believes. That has spawned some dubious modeling exercises elsewhere that try to force-fit a link between the gap in best-to-worst and the overall modeling inference about the overall importance of fielding in run prevention.

Those models are ideological holdovers from earlier, flawed representations of the data, and they persist in the thinking. The flawed result is that the effective range from best to worst is accepted as existing across all teams, when in real life team defense never operates at anything like the individual positional extremes. In short, if you think there's a 25-run difference at a position, you can't just add up the seven positions and claim that the effective impact range for team defense is 175 runs. You have to temper that "greatest possible gap" to reflect that no team--even under a team-based method that builds in the assumption that bad teams have lousier fielders (an assumption that is a modeler's compromise)--can field all of the worst fielders on the list. Doing otherwise would be tantamount to multiplying your replacement level value seven times and then applying it to the data set: the result would make the fielders look far worse than is actually the case.
Somewhere in there...an actual "effective value range" for the team-aggregated run prevention effects from the Defensive Spectrum.
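The arithmetic behind that "additive approach" fallacy can be made concrete with made-up numbers (the 25-run per-position gap is the hypothetical figure used above):

```python
# Made-up numbers illustrating why per-position gaps can't just be summed.
positions = 7                      # the seven positions referred to above
gap_per_position = 25              # hypothetical best-to-worst spread, in runs
naive_team_gap = positions * gap_per_position
print(naive_team_gap)              # 175 runs -- the "additive" claim

# No actual team fields the worst fielder at every position, so the
# effective team-level range must be tempered; the argument here is
# that the naive figure should be cut at least in half.
effective_team_gap = naive_team_gap / 2
print(effective_team_gap)          # 87.5 runs, itself an upper bound at best
```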

It's clear that the answer about the "effective quality range" for teams as a whole is at least half of what the "additive approach" claims. It looks like Bill's method overcorrects a bit for this, and the early chatter suggests that this discussion will become one of the key skirmishes in the ongoing "Fielding Wars."

At any rate, it's good to see Bill focusing himself on these issues again--and it's also heartening to see our old pal Charlie Saeger, who was actually ahead of the fielding curve in the late 90s, when his Context-Adjusted Defense (CAD) method (one of the proudest moments in the flamboyant history of BBBA) shifted the ground on which these discussions originate, right in the middle of the ongoing responses, providing his usual tongue-in-cheek sanity check.

Friday, January 16, 2015


GOOD news and a wonderful discovery are what's prompting us to dig out of our mid-winter baseball lethargy and fire up another exercise in "blogolalia."

The good news is that Terry Cannon and his minions at the Baseball Reliquary are now ensconced in a permanent home. Later today, at Whittier College, the Institute for Baseball Studies will get the proverbial bottle of champagne cracked over its doorknob and there will be a physical location for those enamored of the art of baseball history to visit in search of whatever form of baseball enlightenment they profess to seek.

You are all encouraged to visit the Institute's Facebook page and join their community. As you'll see, there's no shortage of content to be found there--and when you arrive at the Institute, you'll find a key resource that automatically makes them into a destination: the papers of peerless baseball historian Paul Dickson.

This is truly the beginning of the next phase in the life of the Baseball Reliquary and its associated activities on behalf of "the art of baseball history."

IN the midst of this comes that "wonderful discovery"--a singular blog presence that embodies "the art of baseball history" in ways that parallel--and, perhaps, augment--the work of the Baseball Reliquary.

Artist/designer/historian Gary Cieradkowski, in a humble, unassuming way, has staked a claim as one of the great practitioners of the "art of baseball history" with his Infinite Baseball Card Set. The blog is an ongoing creation of a very unusual, highly eclectic collision of baseball lore and Gary's own immense skill as an artist/poster designer.

Thus far, there are 184 entries in what could indeed be an infinite baseball card set--where hidden lore and the romance of early baseball (when it was a good bit more liberated from the mass-media manipulation that has come into being over the past half-century) can blissfully coexist.

Cieradkowski combines history and art in a uniquely entertaining way: his love for the lore and for the odd details of individual lives and unusual events is exactly what "the art of baseball history" is all about. He is mining territory similar/adjacent to what "reformed sabermetrician" Craig Wright has been doing so well for many years, but the added dimension here is the visual accompaniment. The cards (and Gary's baseball poster art) all straddle a fine line between referencing baseball's primordial "design sense"--the pre-Art Deco conventions of early twentieth-century commercial photography and his own bold-but-subtle updating of that style, as can be seen in two examples of his poster work.

Hours of entertaining forays into unknown stories, or unusual takes on familiar ones, can be found at the blog. (His most recent entry, tracing the story of the mysterious pitcher who made it safe to be mysterious--Fred Mitchell "Mysterious" Walker--is a full meal disguised as a treat.)

Even though it's criminally old-fashioned to say so, it really ought to be a book--a big, bright book of baseball love that can sustain and console the desolate baseball fan during the winter interregnum.

Tuesday, January 6, 2015


The lingering talk of a massive BBWAA clusterf*ck is going to have to drift into other topics: today's Hall of Fame voting results plowed through three first-time candidates (Randy Johnson, Pedro Martinez, John Smoltz) and elected Craig Biggio (who missed in '14 due to a couple of hanging chads), while putting Mike Piazza (just under 70% of the vote) in position to appear on the Cooperstown dais in 2016.

This year's election was the first time in sixty years that the BBWAA enshrined four players on a single ballot. The four who made it in 1955: Joe DiMaggio, Dazzy Vance, Ted Lyons, Gabby Hartnett.

We are still astonished at the across-the-board support for Smoltz, who by rights should be drawing support somewhere between Curt Schilling (who wound up with 39% of the vote this year) and Mike Mussina (25%). As noted earlier, we can only conclude that a narrative of success penetrated the collective unconscious of the voter population (cynics may wish to issue an apology to Carl Jung on our behalf).

No movement, vote percentage-wise, for the two greatest players not in the Hall of Fame.
Smoltz is, of course, deserving, but he really must be seen as one of the more curious anomalies in the often-vilified BBWAA voting process.

For 2016, we anticipate two inductees: Piazza and first-time candidate Ken Griffey, Jr. Enough ballot clearance might also permit Tim Raines and Jeff Bagwell (each with about 55%) to make significant strides with BBWAA voters.

There was very little movement in the voting percentages for Barry Bonds and Roger Clemens, and it remains clear that the Hall of Fame's action to slice off ballot time is indeed an odious effort to remove them from view just as soon as possible.

But, hey! Congrats to the new inductees. We suggest that those who wish to honor them in person this summer do so, but refrain from spending any money at the Hall of Fame itself. The organization deserves to be punished as much as the players deserve to be honored.