We bloggers are spur-of-the-moment, react-just-to-react types, you know? We live for the present. And for the snark. And dogmatically defending metrics because we just can’t handle uncertainty.
Sometimes, though, we like an “evergreen” series of posts. Even we dwellers in the ephemeral like to have something we can fall back on. In my case, I have chosen to do catcher fielding rankings on a (somewhat) monthly basis. It seems like a good idea for an easy, quasi-monthly post, but then I realize how clunky my spreadsheet is, how it needs to be touched up and checked every time, and how the first post of the year, especially, is brutal since I have usually forgotten how a lot of it is set up. SIGH. Blogging: it’s hard, y’all.
I started doing the catcher defensive rankings at the end of 2009 for a now-defunct site, and even though I think there are catcher fielding metrics available now that are probably better, this is somewhat expected of me and people seem to like it, so I am going to try and stop apologizing for it. (For some of that, here are last year’s final rankings.) Anyway, it is always fun to start them early enough in the year so that someone surprising will end up on top (or bottom) and people will throw a fit about it. So forget sample size qualifications, true talent-versus-observed performance reminders, and methodological admissions (brief notes about the method can be found at the very bottom of the post — please read that before complaining), let’s get to it!
I might get into issues about methodology later this year again, but for now I will just remind people of what some of the metrics are (and again, how they are calculated is at the end of the post). “FERuns” is runs saved above or below average in terms of fielding errors made. “TERuns” is the same for throwing errors (they have different run values per event). “WPPBRuns” is runs saved above or below average in terms of blocking pitches — passed balls and wild pitches. “CSRuns” is runs above and below average for caught stealing.
With that out of the way, here is a little commentary before you get to the table. We have a fun, complaint-inducing leader so far this year. Who is it?
Later in the year, as things shake out more and the sample is more interesting, I might break down the categories in more detail, but for now let’s just look at our overall leaders and trailers. As of today, your overall leader for runs above average for catcher defense (as defined here) is… the legendary Yan Gomes! Oh, calm down, people. It is a small sample, and teams have tried to run on him, badly, which is the primary reason he is on top so far. He has thrown out eight of the 13 attempts. He is followed by Yadier Molina, last year’s winner and super-stud. They are closely followed by A.J. Ellis, who continues to show that his defense, like his on-base skills, is no fluke, and the bizarrely underrated Matt Wieters.
On the very bottom we have Cleveland’s Carlos Santana, who probably is not this bad, and frankly, who cares, given that his power has caught up with his plate discipline such that even with poor defense behind the plate he might be the best overall catcher in the American League. Michael McKenry, sad Pittsburgh backup, is just above him, but in very little playing time. Third from bottom is the Incredibly Disappointing Jesus Montero — not for his defense, but for his putrid bat.
Concluding Methodological Postscript
I should make clear that for reasons of simplicity I am not including such debated areas as pitch framing or the more amorphous “game calling.” I am not taking a position one way or the other on either of those, simply making clear the bounds of these rankings. When I discuss “catcher defense,” like most others, I will be discussing preventing stolen bases, blocking pitches, etc.
One of the difficulties with evaluating catcher defense with regard to even these issues is that, much more than with other fielding positions, the catcher’s performance is dependent on another player — namely, the pitcher. No matter how strong or weak the catcher’s arm is, he can’t escape the reality that he depends on the pitcher’s skill with regard to holding runners, quickness to the plate, etc. While the catcher’s skill with regard to blocking pitches that are off the mark is clearly important, catching Tim Wakefield poses a unique challenge — just ask Josh Bard. And so on.
For these reasons, probably the best way of measuring catcher defense is Tom Tango’s WOWY (With or Without You) method of defensive evaluation, as detailed in the 2008 Hardball Times Annual. You can read about the details in the links provided. Versions of WOWY for catchers have also been done by Brian Cartwright and Dan Turkenkopf. I would do it that way if I could. The main issues are that 1) it’s pretty complicated, and beyond my present capabilities, and 2) it requires something like Retrosheet, which isn’t available until after the World Series is over, so even if I could do it, I couldn’t get the numbers during the season, or even now…
While the method used here is neither terribly subtle nor original, I think it holds up fairly well against things like the Fans’ Scouting Report and WOWY. Just keep in mind the acknowledged limits (e.g., not taking into account the pitchers’ contributions like WOWY does).
The Method Used Here
For non-WOWY catcher defense, the basic idea is to 1) choose what events you’re going to deal with, 2) determine each catcher’s performance with respect to league average, and 3) decide the run value of each event.
Stolen Bases/Caught Stealing (CSRuns): First, we figure out the league rate for caught stealing. One cool thing about the new Baseball Reference is that it separates out the catcher caught stealings from the pitcher pickoffs, so we can exclude the pickoffs (not under the catcher’s control) from the equation. So we total CSctch + SB to get total stolen base attempts (SBA), then divide total CSctch by total SBA to get the league CS rate (lgCSrate). We use a weight of .63 runs for each caught stealing, which represents the average linear weight of the caught stealing (.44 runs) plus the weight of the stolen base not achieved (.19 runs). The formula for runs above/below average for each catcher is thus (CS – lgCSrate * SBA) * 0.63.
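For the spreadsheet-averse, the CSRuns formula can be sketched as a few lines of Python. This is just an illustration of the arithmetic above, not my actual spreadsheet; the function name and the sample league rate (27%) are made up for the example.

```python
def cs_runs(cs, sba, lg_cs_rate, run_value=0.63):
    """Runs above/below average from caught stealing.

    cs          -- catcher caught stealings (pickoffs excluded)
    sba         -- stolen base attempts against (CS + SB)
    lg_cs_rate  -- league rate: total CSctch / total SBA
    run_value   -- 0.44 (the CS) + 0.19 (the SB not achieved)
    """
    return (cs - lg_cs_rate * sba) * run_value

# Yan Gomes so far: 8 CS on 13 attempts, with an assumed 27% league rate
print(round(cs_runs(8, 13, 0.27), 1))  # about +2.8 runs
```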
Wild pitches/passed balls (WPPBRuns): The league rate is (WPlg + PBlg)/lgPA. The linear weight for each passed ball/wild pitch is 0.28 runs, which we make negative since the more WP/PBs a catcher has, the worse his defense is. The formula for each player is ((WP + PB) – (lgWPPBrate * PA)) * -0.28.
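The blocking formula works the same way, with the sign flipped via the negative weight. Again a hedged sketch of the arithmetic above; the inputs in the example are invented.

```python
def wppb_runs(wp, pb, pa, lg_wppb_rate, run_value=-0.28):
    """Runs above/below average from blocking pitches.

    wp, pb        -- wild pitches and passed balls charged while catching
    pa            -- plate appearances caught
    lg_wppb_rate  -- league rate: (lgWP + lgPB) / lgPA
    run_value     -- negative: more WP/PB means worse defense
    """
    return ((wp + pb) - lg_wppb_rate * pa) * run_value

# Hypothetical: 8 WP+PB in 300 PA against a league rate of 2 per 100 PA
print(round(wppb_runs(5, 3, 300, 0.02), 2))  # (8 - 6) * -0.28 = -0.56
```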
Errors (FERuns and TERuns): I deal with three different kinds of catcher error recorded by Baseball Reference: throwing errors, catching errors, and fielding errors. I’ve assimilated catching errors to throwing errors. There are separate linear weights for throwing (including catching) errors (-0.48) and fielding errors (-0.75). The method is the same as above. Get the league rate, then see how far over/under the player is. For throwing errors: (TE – (lgTErate * PA)) * -0.48. Fielding errors: (FE – (lgFErate * PA)) * -0.75.
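Both error components follow the same rate-versus-league pattern, just with different weights. A minimal sketch of the two formulas above, with hypothetical inputs:

```python
def error_runs(te, fe, pa, lg_te_rate, lg_fe_rate):
    """TERuns and FERuns: runs above/below average from errors.

    te, fe      -- throwing errors (incl. catching errors) and fielding errors
    pa          -- plate appearances caught
    lg_*_rate   -- league errors per PA for each error type
    """
    te_runs = (te - lg_te_rate * pa) * -0.48
    fe_runs = (fe - lg_fe_rate * pa) * -0.75
    return te_runs, fe_runs

# Hypothetical: 4 TE and 2 FE in 300 PA, league rates 0.01 and 0.005 per PA
te_runs, fe_runs = error_runs(4, 2, 300, 0.01, 0.005)
print(round(te_runs, 2), round(fe_runs, 2))  # -0.48 and -0.38
```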
Then you just add them all up to get the total runs above/below average. It’s not perfect, and hopefully there will be some improved options soon, but the results do seem to reflect reality. I round to one decimal: I am aware that gives an illusion of precision that isn’t there; I simply do it to expedite sorting and ranking. I thought about coming up with a “rate” version like UZR/150, but that isn’t as simple as prorating for innings caught or PA; one would need to normalize each sort of event separately, and the chart is confusing enough as it is. For now, this is just a value measurement of what each player did this season.