Sunday, September 23, 2012

HOW TO CHARACTERIZE QUALITY OF PLAY/3

Do you remember this chart?

Sorry, silly question...there are more charts thrown around in the world today than a barrel full of monkeys (a phrase as ubiquitous as the actual image of such simian abundance is elusive).

It's the chart we showed you about two weeks back, when we were discussing ways to define quality of play and how to display the discrepancies between the actual won-loss record ("what really happened") and the Pythagorean Winning Percentage ("what should have happened").

If you are truly paying attention, however, you'll notice that this chart is an update of the one we displayed earlier. About 20 games' worth of the season have passed since the earlier chart was compiled, and this updated version adds four more increments to the 30-game snapshots being tracked to show how actual WPCT and PWP vary. It's expressed in percentages...if you played .400 ball (12-18) but scored runs at a pace that suggests a .333 WPCT (10-20), you'd be playing about 16% better than your projected level of performance.
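For those who want to check the math, here's a minimal sketch of that arithmetic in Python. It assumes the classic exponent-2 version of the Pythagorean formula (the chart's actual PWP calculation may use a tweaked exponent), and it assumes the "percent better" figure is taken relative to the actual WPCT rather than the projection:

```python
# A minimal sketch of the arithmetic described above. Assumes the classic
# exponent-2 Pythagorean formula; the chart's PWP may use a different exponent.

def pythag_wpct(runs_scored, runs_allowed, exponent=2.0):
    """Pythagorean winning percentage: RS^x / (RS^x + RA^x)."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

def pct_over_projection(actual_wpct, projected_wpct):
    """Gap between actual WPCT and the projection, expressed as a share of the
    actual WPCT (using the projection as the baseline instead would make the
    same gap read as roughly 20%)."""
    return (actual_wpct - projected_wpct) / actual_wpct

# The worked example from the text: a .400 team (12-18) whose runs suggest .333 (10-20).
print(f"{pct_over_projection(12 / 30, 10 / 30):.1%}")  # -> 16.7%, i.e. "about 16% better"
```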

And this chart shows the fluctuations over 30-game snapshots for the teams in the 2012 AL East, capturing that data at five-game intervals (1-30, 6-35, 11-40...116-145, 121-150).
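If you want to build those snapshots yourself, here's one way it could be done--a sketch only, assuming a game log stored as a list of (runs scored, runs allowed, won) tuples in season order:

```python
# A sketch of the 30-game snapshots, stepping five games at a time
# (1-30, 6-35, 11-40, ..., 121-150). The game-log format is an assumption.

def rolling_snapshots(game_log, window=30, step=5):
    """Yield (first_game, actual_wpct, pwp) for each 30-game window."""
    for start in range(0, len(game_log) - window + 1, step):
        chunk = game_log[start:start + window]
        wins = sum(1 for _, _, won in chunk if won)
        rs = sum(g[0] for g in chunk)
        ra = sum(g[1] for g in chunk)
        actual = wins / window
        pwp = rs ** 2 / (rs ** 2 + ra ** 2)  # exponent-2 Pythagorean WPCT
        yield start + 1, actual, pwp         # 1-based number of the window's first game

# For a 150-game log this yields the 25 snapshots plotted above: 1-30 through 121-150.
```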

What we see here is a bunch of jagged movement, which is to be expected. We also see something rather novel--one team (the Orioles) consistently outplaying its 30-game projections across the entire span of the data.

Now this chart doesn't tell us how well these teams are actually playing. We see whether they are exceeding their projections or coming up short--but their won-loss records aren't shown here. It would make for a messy chart, to say the least. We'll keep looking for a way to display the full data set, but we can push this approach somewhat further in a couple of ways that should prove interesting.

Here's the same data as the chart above expressed in cumulative terms:

As we can see, the Orioles have been remarkably consistent in exceeding their performance projection. The Tampa Bay Rays began in a similar mode, but as the season went on, their ability to exceed their projection decayed--ironically, because they were preventing runs so well that their actual record simply couldn't keep up with what their run differential was doing to their projection. (It's a bit of a glitch in the PWP formula, particularly in smaller sample sizes.)

It might become something of a vicious pattern in a team's psychology to be playing so well that you can't keep up with the number of games you should be winning. There's no way to verify such an idea without looking at an array of good teams that played below their Pythagorean projection, but it's interesting to see how linear the Rays' decline in their "actual vs. shoulda been" numbers has been over the course of the 2012 season.

Again, this chart doesn't give us a sense of the quality of play, just a sense of how much better or worse than that phantom WPCT the team has been doing on a cumulative basis.
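For the numerically inclined, the cumulative comparison is a straightforward variation on the same sketch--again assuming the exponent-2 formula and the same hypothetical game-log format:

```python
# A sketch of the cumulative comparison: after each game, the team's cumulative
# WPCT versus its cumulative PWP, and the percentage gap between them.

def cumulative_gap(game_log):
    """Yield (games_played, actual_wpct, pwp, pct_over) after each game."""
    wins = rs = ra = 0
    for n, (runs_scored, runs_allowed, won) in enumerate(game_log, start=1):
        wins += 1 if won else 0
        rs += runs_scored
        ra += runs_allowed
        actual = wins / n
        pwp = rs ** 2 / (rs ** 2 + ra ** 2)
        pct_over = (actual - pwp) / actual if actual else float("nan")
        yield n, actual, pwp, pct_over
```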

There's one more conversion of this data that we can do, one that might be the chart to display for both mainstream fans and those who are more "numerically possessed." We can take this data, map it against the expected number of wins as they accumulate over the course of the season, and cumulatively track how many games above or below the PWP a team is. Playing consistently above one's level would produce a kind of linear progression up the chart as a team puts more games between its PWP and its actual won-loss record.
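Here's a sketch of that conversion--expected wins through game N are simply the cumulative PWP multiplied by N, and the margin is actual wins minus expected wins (same assumptions as the sketches above):

```python
# A sketch of the "games above projection" view: expected wins through game N
# are cumulative PWP times N; the margin is actual wins minus expected wins.

def games_above_projection(game_log):
    """Yield (games_played, wins_above_pwp) after each game."""
    wins = rs = ra = 0
    for n, (runs_scored, runs_allowed, won) in enumerate(game_log, start=1):
        wins += 1 if won else 0
        rs += runs_scored
        ra += runs_allowed
        pwp = rs ** 2 / (rs ** 2 + ra ** 2)
        yield n, wins - pwp * n

# A team that keeps winning more close games than its run differential implies
# traces a roughly linear climb in wins_above_pwp over the season.
```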

So when we do that, we get a chart like this one:



And, yes, what we see for the Orioles is exactly what was described: a team that slowly, relentlessly, inexorably gained ground on its own projected won-loss record. By continuing to win close games (their record in one-run games right now is the best in major league history, and their record in games decided by two runs or less is the second best), they've kept their margin (as expressed in games) on something very close to a linear ascent.

We're still grappling with how to show the WPCTs (actual and Pythagorean) in conjunction with this approach to charting the differences between them. The amount of data necessary to provide a full picture of what's going on here may force us to focus on one team at a time. We'll be cogitatin' on it over the off-season.