Archive for the ‘Soccer Analytics’ Category

Reading v Leicester City - Sky Bet Football League Championship

This week, if the Telegraph is correct, Manchester City and Paris Saint-Germain will have a punishment handed down to them for violating rules meant to prevent clubs from spending in excess of what they earn to compete in Europe. Readers should err on the side of caution, particularly as UEFA will not comment officially on the matter.

The news comes just before the apex of the league season, around that time of year when assembled newspaper transfer hounds pick up faint scents of deals yet to be done and of struggling managers about to meet their sorry fate, the intoxicating odour of more money spent and losses cut in the endless pursuit of a few fleeting moments of footballing glory.

At the heart of all this is a central question: what does it take to win at the football? Particularly at the highest levels of the sport?

Certainly a good dose of luck, demonstrated by a Premier League title race in which the leaders are tightly packed together at the top, separated only by individual goals in single matches.

But what about winning consistently, over years and even decades? The answer, at least over the last 25 years or so, is money and a global supporter base that only history can provide.

What if you have neither?

This is less and less of an academic question every year. Should City or PSG be punished by UEFA in a way that sticks (with or without the legal help of angry third party clubs), UEFA will have made their point. If you want to compete, you cannot spend far in excess of what you earn.

Yet even without FFP, we’re reaching peak spending in European football. There are only so many oil barons interested in buying football clubs, and only so many clubs for them to buy. Spending hundreds of millions of pounds on a Premier League squad won’t give you an edge if your competitors are doing the same.

So if money can no longer buy glory, are there any means left for clubs to get an edge?

Football is a romantic sport. It is marked by passion, by courage, by luck, belief in your team, your supporters, yourself. These are the qualities that make it so compelling for the millions who have made it a part of their lives.

The problem, however, comes when we turn exclusively to these qualities to explain why one club wins Premier League trophies every other year while another consistently finishes in fourth or fifth place.

Think of your knee jerk, impassioned response to some poor results from your club. Sack the manager. Sell the overpaid donkey up front. Get new owners, the club is inept, run by idiots. They’re all greedy and selfish. They go out and party while the team continues to lose.

Keep in mind, it’s possible the club should do none, one, some or all of these things. If you make the wrong decision however, like sacking a manager suffering from a short-lived poor run of form driven more by variance than skill, it can cost the club even more down the line and put you in an even worse mood when you call for the head of the old manager’s replacement (the author writes this as someone who was glad when Paul Lambert replaced Alex McLeish as manager of Aston Villa).

Critics might claim there was no way anyone could have known beforehand whether the problem with the team was a personnel issue or a managerial one, but in 2014, with so much good, publicly available analysis on team diagnostics (including the impact managers have on good predictive metrics like TSR), this rings hollow (Statsbomb, incidentally, did a lot of good work this week on evaluating managers).

Nothing any columnist writes will convince skeptics of the value of data analysis in sports, but analytics boosters might gain from reframing their work around helping clubs make educated decisions based on good evidence. Most of the time this is implied in their work, but it puts data in a wider context, moving from the pitch to the front office to the boardroom.

Your team sucks. Okay, do they have a decent TSR but a low PDO? Can you identify affordable options to radically improve the team? Are there ways the club can plan five or even ten years into the future to build on their current form, rather than simply pray to stay up every year? This becomes less about data and more about evidence-based decision-making.
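For readers who haven’t run into those acronyms before, here is a rough sketch of how the two numbers are usually computed from a team’s season totals. The figures below are invented for illustration: TSR is simply the share of all shots in your matches that your team takes, and PDO adds shooting percentage to save percentage (conventionally scaled to 1000).

```python
# Rough sketch of TSR and PDO from a team's season totals.
# All figures below are invented for illustration.

shots_for, shots_against = 610, 480   # total shots taken / conceded
goals_for, goals_against = 68, 45     # total goals scored / conceded

# Total Shots Ratio: the share of all shots in a team's matches that the team takes.
tsr = shots_for / (shots_for + shots_against)

# PDO: shooting percentage plus save percentage, conventionally scaled to 1000.
shooting_pct = goals_for / shots_for
save_pct = 1 - goals_against / shots_against
pdo = 1000 * (shooting_pct + save_pct)

print(f"TSR: {tsr:.3f}")   # above 0.5 means the team out-shoots its opponents
print(f"PDO: {pdo:.0f}")   # values far from 1000 tend to drift back over time
```

The classic “unlucky” profile is a healthy TSR paired with a PDO well under 1000: the underlying shot numbers are good, the bounces just haven’t been.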

FFP has set the stage for a club to take the lead in this regard. Even if the clubs are skeptical, surely they have nothing to lose in making decisions based on reasonable, tested evidence.

Liverpool v Tottenham Hotspur - Barclays Premier League

Resident hockey expert Justin Bourne wrote a stellar piece on “score effects” in hockey this morning. Here is a good definition of score effects via the popular Toronto Maple Leafs blog, Pension Plan Puppets:

Teams that are behind tend to get more shots and scoring chances because they press to get back into the game, and often the team with the lead naturally sits back and absorbs pressure. Conversely when the game is tied, or close (within a goal, or within 2 in the 3rd period) teams tend to play a much more balanced approach, giving up as little as possible, and working to score more goals on offense.

Interestingly, this effect persists in soccer too where it’s generally referred to as ‘game states’.

Bourne offers up a few theories for the root cause of this effect in hockey. One in particular however stood out for me: simple psychology. When their team has a lead, coaches tend to put out less talented players and their more talented players are under greater pressure to avoid mistakes:

What they do want, is Jay McClement to chip the puck out of the zone because, like fans, they’re less stressed when the puck isn’t in their zone. So, it gets out, coach feels relief, sees who made the clear, and the rat has been rewarded. He wants more of that.

If we were behavioral psychologists, we might refer to the psychological response to the scoreline as a kind of heuristic. To borrow the Wiki definition:

In psychology, heuristics are simple, efficient rules, learned or hard-coded by evolutionary processes, that have been proposed to explain how people make decisions, come to judgments, and solve problems typically when facing complex problems or incomplete information.

Now I’m bending the rules a bit here, but in soccer, game states are a persistent effect, to the point where there is clearly something going on which goes beyond conscious tactical adjustment. I haven’t conducted a study but I’m inclined to think the persistence of score effects/game states underscores a fairly natural team response to a lopsided scoreline.

Anyone who’s ever played a team sport knows it. You’re losing, your team gets desperate, you all push up the pitch to try to get back in the game, and in doing so you leave yourself open at the back. You see it in football matches all the time.

Now in behavioral psych, heuristics become interesting when they lead to cognitive biases: decisions that feel right but are in fact ‘illogical’ (not to sound too much like Spock). If you’re aware of this pitfall, you can use it to your advantage either by avoiding it yourself or taking advantage of it in others.

And this is where we get to Brendan Rodgers’ 2013-14 Liverpool FC.

They are top of the table on 71 points and the talk of the league, playing a breathtaking, attacking style of football that puts asses in the seats. Great stuff for the neutral.

Some, like Michael Cox on yesterday’s Guardian Football Weekly podcast, point to Liverpool’s incredible counterattacking ability. That view fits with some telling statistics courtesy of the great and vital work of Ben Pugsley.

First, a primer on some advanced stats in soccer.

Liverpool are third overall in the Premier League, behind Man City and Arsenal, in TSR at a tied Game State, but are eighth in TSR Close (which includes tied and +1 Game States). Moreover, Liverpool shoot once per 11 passes, the second smallest ratio in the league (only Newcastle’s is smaller).

From this we can glean a few possibilities. At a tied game state, Liverpool are effectively dominant, outshooting their opponents and pushing for an opening goal. However, we can safely assume that Liverpool spend a good amount of time at +1, which is when the losing side tends to push up the pitch and take more shots, opening up space behind them that a quick attacking force like Sterling, Sturridge and Suarez can ruthlessly exploit. The speed of Liverpool’s transition to attack could also be reflected in their very low passes-to-shots ratio.
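For the curious, here is roughly how those game-state splits are produced. The handful of shot records and the pass total below are invented stand-ins for real event data: bucket every shot by the scoreline when it was taken, then compute TSR within each bucket, with the passes-per-shot ratio alongside.

```python
# Sketch: TSR split by game state, plus passes per shot.
# The shot records and pass total below are invented for illustration.

shots = [
    # (team, score_diff_when_taken) from the shooting team's perspective
    ("LFC", 0), ("LFC", 0), ("OPP", 0), ("LFC", +1), ("OPP", -1),
    ("LFC", +1), ("OPP", -1), ("OPP", -1), ("LFC", 0), ("LFC", +2),
]
passes_by_lfc = 110  # invented total

def tsr_at(states, team="LFC"):
    """TSR for 'team', counting only shots taken while 'team' was at one of the given states."""
    shots_for = sum(1 for t, d in shots if t == team and d in states)
    shots_against = sum(1 for t, d in shots if t != team and -d in states)
    total = shots_for + shots_against
    return shots_for / total if total else None

print("TSR tied:", tsr_at({0}))         # tied game state only
print("TSR close:", tsr_at({0, +1}))    # tied and +1, per the definition above
lfc_shots = sum(1 for t, _ in shots if t == "LFC")
print("Passes per shot:", passes_by_lfc / lfc_shots)
```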

Now you don’t need statistics to tell you the advantages of working hard to score the first goal, then sitting back to play aggressively and quickly on the break. But Liverpool’s approach also neatly fits with a statistically consistent, apparently universal predictable pattern of play observed in Game States.

It’s also clear that many top tier teams don’t adjust their play to take advantage of the Game State effect, relying instead on plodding build-up play that allows the opposition defense to track back in time to defend in numbers.

Now I don’t know what kind of data LFC and Rodgers track, but here is a clear area where a coach can take a statistically measurable effect like the Game State heuristic and use it to their advantage. See? Analytics in action, and you may not have even realized it.

Barcelona's Messi celebrates a goal against Celta Vigo's Rodriguez during their La Liga soccer match at Camp Nou stadium in Barcelona

There has been a bit of noise recently in soccer analytics circles about the nature of “finishing”—that moment where a player has the ball at their feet and has made the decision to shoot—and whether it’s subject to random variation or is a repeatable skill that provides a measurable boost in goal numbers for certain players.

The soccer-watching public strongly believes it’s the latter, but the high variability from season to season in shot conversion rates (the proportion of shots taken that go in) has led some analysts to cast doubt on (though not dismiss) the idea that finishing is an important skill, or at least one more important than others like positioning. With some exceptions, this is the consensus view in most analytics circles.

Opta analyst Devin Pleuler wrote on this subject today, publishing findings that generally support this view. However, Pleuler isolates shot conversion rates for chances based on goal-probability, and finds the data does reveal what many have long suspected: for some elite strikers, ‘finishing’ is indeed something of a repeatable skill:

With a comparatively weak r-squared value of 24% we are correct to be discounting finishing skill in favor of a players’ more repeatable ability of constantly finding themselves in goal-scoring opportunities, but finishing skill does seem to persist. Over enough samples, expect that players with exceptional finishing skill to perform better than their average – but sometimes lucky – counterparts.

Again, this doesn’t present a massive break from intuition. Good strikers are amazing finishers, chipping and curling balls around the keeper, hitting the corners from distance with the kind of accuracy only years of practice can produce. That the skill itself is faintly repeatable could provide a huge boost for technical scouts, who might be able to spot players whose shot volume is impeded by a terrible midfield but whose finishing is off the charts. You should be able to isolate near-misses too, so they aren’t counted the same as howlers.
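To make that scouting idea a little more concrete, here is a crude sketch, with invented chance qualities, of how you might estimate finishing above expectation and then shrink it toward zero so a hot streak over a handful of shots isn’t mistaken for skill. The shrinkage constant is hand-picked for illustration, not derived from anything.

```python
# Sketch: finishing above expectation, with a crude shrinkage for sample size.
# Chance qualities (probability an average finisher scores each shot) are invented.

chance_quality = [0.08, 0.31, 0.05, 0.12, 0.44, 0.07, 0.22, 0.09, 0.15, 0.06]
goals = 3  # actual goals from those ten shots (also invented)

expected = sum(chance_quality)
raw_overperformance = goals - expected        # goals above an average finisher

# Shrink toward zero: with only a handful of shots, most of the gap is noise.
# k is a hand-picked "prior" shot count; larger k means more skepticism.
k = 100
shots_taken = len(chance_quality)
shrunk = raw_overperformance * shots_taken / (shots_taken + k)

print(f"Expected goals from these chances: {expected:.2f}")
print(f"Raw over/under-performance: {raw_overperformance:+.2f}")
print(f"Shrunk estimate of finishing skill: {shrunk:+.2f}")
```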

Yet there is another, perhaps even more cost-effective way that mediocre clubs can use this data: coaching. It looks absurd to write out, but shooting quickly, efficiently and accurately under pressure from defenders is a learned skill. All players have it in some basic form—what if there was a way to improve it across an entire team?

The assumption is “of course they train for this,” but in my admittedly limited experience, club training sessions involve a host of complex exercises with varying degrees of emphasis on everything from positional discipline to transitions from defense to attack. Some training sessions take finishing as an already formed skill, under the assumption that if you’re in the big leagues, you already know how to shoot.

Not all clubs make that assumption. Rene Meulensteen for example did some intense work with Cristiano Ronaldo specifically on his finishing.

Let’s put it another way: if your newly promoted Premier League team is likely going to post an average Shots Ratio against your opponents, what can you do? What if there was some way to move the conversion needle up, even slightly, across the whole team? Could data on average shot location for goals scored on low-probability attempts help in any way? Or is it a chimera? Is this faint noise in finishing an in-born talent only available to the best of the best? Or can it be improved?

Blackburn Rovers v Burnley - Sky Bet Football League Championship

So I didn’t quite anticipate the flood of responses when I opened the floor on Twitter to help me out for today’s post:

While I anticipated the usual hang-ups, I received a flurry of well thought out, serious objections to the application of analytics to the sport. Because of the sheer volume of both, today I’ll stick to common reader objections to stats analysis, and next week I’ll look at serious objections from those within the field itself. Apologies if I missed something obvious, but please do let me know.

1. Analytics and stats in general do not take context into consideration often enough

This one from @Dbaser92. I think this encapsulates a lot of what people don’t like about the use and application of advanced stats in football, so I’ll use this point to get a common problem with analytics writing out of the way.

I think as readers we need to pay close attention to what analysts are actually saying when they publish their findings. For example, there’s a big difference between “Joe Allen had a great performance because he had a 91% pass completion rate” and “shot dominance tends to correlate with a higher points total.” Both require context, but only one oversteps its bounds.

Simply posting a stat under the assumption that it is a good or bad number without having done the work to see if it’s repeatable or significant is bad analysis, but it doesn’t make analytics bad per se, much in the same way Adrian Durham doesn’t make football writing bad per se. In my experience, the majority of analysts don’t make huge empirical leaps, and if they do, they’re usually called out on it within minutes of posting.

One needs to be vigilant, both as writers (Twitter means we’re all writers now) and readers, in ensuring that analysis writes cheques that its butt can cash. That’s on you, man.

2. “Stats are like miniskirts, they don’t reveal everything”

That one from Prozone analyst and all around smart guy @OmarChaudhuri. That is a direct quote from a comment on an excellent post on Man United he wrote a few weeks ago.

Again, you as the reader need to be careful to judge what the analyst is telling you. Analytics isn’t about ‘revealing everything.’ It’s a method. Some use it to build better betting models, some use it to make better decisions in player recruitment, some use it to see if there’s a better way to play the game. It’s far better to view analytics as a tool, rather than a world-explaining philosophy. This is why I cringe when I read about the “Analytics Movement.” It’s like referring to the “Mathematics Movement.” Math is a method, a means of measurement. It’s not a world view. Your job is simply to judge whether the analyst accomplished what they set out to do.

3. Analytics doesn’t take into consideration the ‘intangibles’ like heart and romance

You know? It doesn’t. But the intangibles vs tangibles thing is a false dilemma. If you run a regression and notice a strong correlation between, say, final third touches and goal differential, that is literally all you know. You can make all sorts of inferences from that information: possession-based teams win more games, build-up play is superior to counter-attacking football, the big teams in Europe prefer short passing, etc. But in the end, all you have is a sample size and a strong correlation between two variables. None of this discounts the importance of things like team spirit, or heart, or passion, or desire.
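To make the point literal, here is a toy version of that regression with invented team totals. The slope and correlation are all the analysis gives you; every inference beyond that is on you.

```python
# Sketch: correlation between final-third touches and goal difference.
# The eight team-season totals below are invented for illustration.

final_third_touches = [5200, 4800, 4500, 4100, 3900, 3600, 3400, 3100]
goal_difference     = [  45,   30,   22,   10,    2,   -8,  -15,  -25]

n = len(final_third_touches)
mean_x = sum(final_third_touches) / n
mean_y = sum(goal_difference) / n

cov = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(final_third_touches, goal_difference)) / n
var_x = sum((x - mean_x) ** 2 for x in final_third_touches) / n
var_y = sum((y - mean_y) ** 2 for y in goal_difference) / n

slope = cov / var_x                       # extra goal difference per extra touch
r = cov / (var_x ** 0.5 * var_y ** 0.5)   # correlation: strength, not explanation

print(f"Slope: {slope:.4f} goals of GD per final-third touch")
print(f"Correlation r: {r:.2f}  (r^2 = {r*r:.2f})")
```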

I think what gets analysts’ knickers in a twist is when people make huge empirical leaps based on those intangibles. “Chelsea lost the game because they weren’t confident.” This is an exaggeration, but pundits say this kind of thing all the time. Well, single matches involve a lot of complex interplay, unforeseeable outcomes, lucky bounces, fortunate decisions, incredible technique, momentary lapses in concentration, bad decisions from the refs, etc. To single out something as abstract as a ‘state of mind,’ one that the pundit presumes is equally shared by all eleven players, with no evidence to support the claim, is to open yourself to reasonable criticism.

This isn’t to say confidence doesn’t matter, or trust in the manager isn’t important. But we have no clear idea if they matter more or less than the weather or the state of the pitch. It sure feels like they do.

4. A lot of analytics is just pointing out the obvious

This is probably the most common complaint I hear about stats analysis. “Oh, teams that shoot more than they concede will win more games? Bravo Einstein.” “Wow, so you’re saying if teams shoot from better positions they’ll score more goals? Here’s your Nobel!”

A couple of things here. First, imagine someone asked you how far it is from New York to Los Angeles. You could say, “it’s far.” Or you could say, “4,491.0 km.” Or you could say, “one day and 16 hours of driving.” The value of the answer depends on what you’re looking to do.

If the analyst set out to find a revolutionary new way to win at football, well, the correlation between shot dominance and table position may not help.

But if they’re building a betting model? Or giving managers a concrete means to tell if their team is in okay shape? Well, measuring an “obvious” data point is going to be very helpful indeed.

5. It’s never going to be like baseball

Oh man this, from @Rui_xu, is defo a common one. Baseball is a sport of discrete events; soccer is a complex game of flow. Ergo advanced stats work in one and not the other.

That soccer is the way soccer is puts analysts at a disadvantage, if the aim of the analyst is to find market inefficiencies in how players and teams are evaluated. If your goal is to do something else, like predict which team will likely finish first and which will likely be relegated, or which player is creating better chances on a regular basis than another, analytics can be very helpful indeed. Plus, X,Y positioning data can offer more depth, though I’m not sure it will foment a revolution in how teams play the game. I doubt it.

But again, it all depends on what analysts are trying to do. This is the running theme, in case you missed it.

6. Stats are boring and stat blogs are poorly written

When you see something stats related, don’t read it. Don’t know what else to tell you. It’s the Internet. You have personal agency, comrade. Don’t like it, avoid it.

7. I’m not good at math

Well, getting your head around certain statistical concepts is tricky for us all. I think, though, that at the core it’s just common sense: teams that do this over a long period of time tend to score this many goals or earn this many points. Here is a graph showing a regression analysis. Google ‘regression analysis.’ I would also recommend reading Michael Mauboussin’s The Success Equation, which is a great primer on most of this stuff. But if you don’t want to be bothered, see number 6.

Arsenal Training

Arsenal travel to Munich today to face Bayern Munich in the Champions League needing to overcome a two goal deficit in order to progress in the competition against one of the best teams in Europe. They are the clear underdogs in what could be a captivating European tie (or a Bayern steamroll job).

Now, I could give you a dispassionate, tactical account of what Arsenal need to do in order to win, but it would be an incomplete hack job from someone who has barely scratched the surface of fully understanding football tactics at the elite level. I could talk about how underdogs traditionally win in life by changing the parameters of the game, rather than facing their opponent on equal terms. So in soccer speak, Arsenal should press high for an early goal, then play a little deeper to hit on the counter. Over to you, Arsene.

Wenger’s own strategy however has been to focus on the little things. Like priming the refs. From the Guardian:

“We played now a few times with 10 men in Europe and under always very special circumstances,” Wenger said, appearing to begin to say “suspicious” before checking himself to say “special”. “In the Champions League final … now against Bayern and at Barcelona when we were in a position to qualify.

“It’s the only time that I’ve seen that since I watched European football when Van Persie was sent off. So I hope we will get a fair chance to play with 11 against 11 until the end.”

This may seem spurious, but it is in fact Wenger “making his luck,” sowing the immoveable seed of self-doubt in the referees’ minds like someone kindly asking you NOT to think of a white whale, which invariably forces you to do just that.

That’s because even Wenger the Idealist Economist knows that in single games, these little elements matter. “Luck,” by which I mean random (or unaccounted for) variation, has a tremendous influence in knockout competitions, which is why we accept that while Chelsea won the European Cup against Bayern in 2012, they were not, in fact, the superior team. Variables that are inconsequential in the long term can overcome imbalances in talent in a single ninety-minute span. It’s in the individual match that the abstract ideas talked up by most European football pundits—heart, passion, togetherness—might exert a big influence. That’s part of what makes football great.

Yet while these things are important, they don’t replace the importance of footballing talent in building a successful football club in the long term. This should be stupidly obvious, but in practice, there is a lot of disagreement over what, exactly, it means to be a good footballer, or even what makes a good team. Or to put it another way, there is a lot of disagreement over how exactly we should measure footballing talent.

This past week, Tim Lewis examined these questions as part of a column for the Observer on the influence of Prozone, Opta, and analytics in general on football. In it, we hear from a lot of analysts stressing the endless possibilities provided by improved data collection, and we hear from Everton manager Roberto Martinez on why he evaluates players based on “the way he speaks to other team-mates after missing a chance, the way he celebrates a goal, the way his team-mates react when he scores.”

And here again another writer portrays stats analysis in football as a dichotomy between tangibles and intangibles. But what if intuition and stats analysis are just two ways of answering the same questions? Questions like: what does it mean to be a great footballer? What does it mean to be a great team? How do we bridge the two things?

If I’m looking to answer the question from a statistical point of view, I would only concern myself with player traits that are repeatable and strongly correlate to improved team performance. I would also want to get a good idea as to whether those traits carry over from team to team. In practice, at least in the public arena of amateur stats analysis, those kinds of metrics are very elusive. What I wouldn’t want to do is pick through whichever data points I think are important without testing them.
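One simple, public-data test of “repeatable” is to check how well a metric in one season predicts itself in the next for the same players. A sketch with invented player numbers:

```python
# Sketch: season-to-season repeatability of a player metric.
# A trait worth building scouting decisions on should predict itself year over year.
# The per-player values below are invented for illustration.
from statistics import correlation  # Python 3.10+

season_one = [0.45, 0.38, 0.52, 0.30, 0.41, 0.48, 0.35, 0.44]  # e.g. some per-90 metric
season_two = [0.43, 0.35, 0.49, 0.33, 0.40, 0.50, 0.31, 0.42]  # same players, next season

r = correlation(season_one, season_two)
print(f"Year-to-year correlation: {r:.2f}")
# Close to 1 suggests a repeatable trait; close to 0 suggests noise or context effects.
```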

If I’m looking at the question from a more intuitive point of view, I would want to know how well the player is settling into the team, whether they understand managerial direction, whether they feel comfortable with the manager’s overall tactical approach. I also think you can be intuitive in a way that’s smart, as in my idea for a scouting ‘Apgar score’, which systematizes what are essentially subjective views and, almost by magic, improves their reliability. While I’m a tad skeptical of Martinez’s preference for a positive attitude—Arjen Robben’s team-mates at Bayern often look morose when he scores, and yet Robben is still Robben—you can see what he’s after. A team of ego-maniacal sad sacks is going to be more difficult to motivate after some unlucky results than a team of sober professionals. While these things may not matter in the long term, they might make the difference in crucial moments of a crucial match.
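For what it’s worth, here is a minimal sketch of what that scouting ‘Apgar score’ could look like. The criteria and ratings are entirely made up; the value is in forcing every scout to score the same fixed checklist and recording only the sum.

```python
# Sketch of a scouting 'Apgar score': a fixed checklist scored 0-2 per item.
# The criteria and ratings are invented; the point is the fixed, summed checklist.

CRITERIA = ["first touch", "movement off the ball", "decision speed",
            "pressing work-rate", "composure in the box"]

def apgar(ratings):
    """Sum a scout's 0-2 ratings over the fixed criteria."""
    assert set(ratings) == set(CRITERIA), "every criterion must be scored"
    assert all(r in (0, 1, 2) for r in ratings.values())
    return sum(ratings.values())

scout_a = {"first touch": 2, "movement off the ball": 1, "decision speed": 2,
           "pressing work-rate": 1, "composure in the box": 2}

print("Scout A total:", apgar(scout_a), "out of", 2 * len(CRITERIA))
```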

The thing to remember is that neither the statistical nor the intuitive approach alone will give you a definitive answer. They simply tell you what they tell you, and it might work out and it might not. And maybe there isn’t a definitive answer to the question anyway. One man’s Messi is another man’s Bramble, depending on what you’re looking for. A manager might turn down a player with fantastic numbers for a slightly less capable one simply because the former has a massive ego and they don’t feel like babysitting a borderline alcoholic for a couple of years. Maybe qualities that don’t help a team in the long term (like the league, for example) turn out to be very important in single matches (like cup competitions).

Furthermore, neither approach is going to eliminate the element of risk (I’m not even certain you’d want to eliminate risk—far better to establish a team strong enough to survive and even thrive on uncertainty). Risk need not always be negative, after all, as when underrated players become unexpected stars in defiance of both stats and intuition.

But to rely exclusively on one without any reference to the other? I’m not sure anyone does that, anyway. So why are the two always framed in endless opposition when it comes to player scouting and team building?

Chelsea Press Conference & Training

Yesterday Dan Altman tweeted this:

and it got me thinking about how we evaluate player signings in European football. Some people in the analytics world would ideally like clubs to see players only in terms of how they contribute to wins, i.e., as cogs, variables in a spreadsheet, who can be replaced should they not perform. Not that we’re anywhere close to having the data to do this exactly, at least in the public realm. Still, the idea is that clubs, armed with as much information as possible, should negotiate player contracts based solely on the value a player provides on the pitch. After all, what do clubs live for except to win?

The problem with this view is that it doesn’t take into consideration a host of factors off the pitch that affect player value. One is, as Altman points out, the commercial value a player provides to the team. Rooney may not be the same Rooney in five years’ time, but his presence on the team shores up Man United’s global brand (barf, puke, etc.). We know who Rooney is, and we’ve known for some time, and this carries import with dumb-dumb marketers. Before you dismiss this as a bastardization of the game, remember that players who significantly improve brand visibility help foster fatter commercial deals, which clubs can use to buy lesser known but superior talent down the line. This strategy is partly why MLS overpays for aging but widely recognized talent: it generates commercial revenue which, over time, allows the league to attract better talent at younger ages.

One could make the argument that any player who wins trophies will sell t-shirts, so why not sign less-recognized but more talented players in that endeavour? That’s not exactly true though, else the world would be awash with Mandzukic Bayern replica kits. But the real reason is that the big clubs can simply afford to pay a premium for visibility. For them, the inefficiency on the pitch is counterbalanced by the leverage off it.

What does this mean? Well, I think it means that expecting the Big Clubs to play it smart in the near future is probably a waste of time. Progress in analytics comes from innovators who can’t simply buy whomever they wish in order to win (see the Oakland A’s, blah blah blah). This is probably why guys like Big Sam are proponents of analytics-driven performance on the pitch, though the real magic trick would be finding market inefficiencies in player recruitment. We like to think certain clubs do this, but it’s all hearsay.

The general point here though is that we should be careful in how we frame our criticisms of club signings. Though it’s fun and it provokes debate, not all clubs are in the market for the same reasons. And some are even willing to sacrifice the best possible outcome on the pitch in return for a better brand. That’s terrible, awful, modern football at its worst, I know. But it means that the future of progress in player analytics lies with the clubs that get the least attention.

Manchester United v Fulham - Barclays Premier League

Last week I wrote a little bit on Expected Goals. While not a new metric, it’s starting to come into vogue as more and more analysts demonstrate its predictive value.

The idea behind Expected Goals, or ExpG, is simple: it uses average conversion rates by shot type (location, foot or head, distance, etc.) to add another layer of analysis to raw shot data like Total Shots Ratio.
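In its most basic form the calculation is just a lookup and a sum: each shot gets the historical conversion rate for shots of its type, and those probabilities are added up. The rates and the shot list below are invented placeholders, not real Opta averages.

```python
# Minimal sketch of Expected Goals: sum historical conversion rates per shot.
# The conversion rates and the shot list are invented placeholders.

CONVERSION_RATE = {
    ("six_yard_box", "foot"): 0.35,
    ("penalty_area", "foot"): 0.12,
    ("penalty_area", "head"): 0.08,
    ("outside_box",  "foot"): 0.03,
}

shots = [
    ("penalty_area", "foot"),
    ("outside_box",  "foot"),
    ("six_yard_box", "foot"),
    ("penalty_area", "head"),
]

exp_g = sum(CONVERSION_RATE[(zone, body_part)] for zone, body_part in shots)
print(f"Expected goals from these four shots: {exp_g:.2f}")
```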

There has been an outpouring of work on this subject over a short period of time, so it’s hard to keep track of every new development. My tentative opinion, based on the very early returns, is that the truly “repeatable” element (that is, the part of ExpG which involves skill rather than random variation) is, as with shots, the volume of higher-probability chances created, rather than the ratio of actual goals to ExpG, which I think, like total team shot percentages, comes down to as-yet-unknown variables and skews a lot higher for the “best” teams.

Now as I’ve discovered over the years, whenever you write about what you think is clearly an exciting development, someone inevitably leaves a comment like this one:

“Wait. So you’re telling me the team that creates more, high quality chances, will win?

BRILLIANT!”

(This is a real comment).

And this very real commenter has a point. What do football clubs do except try to create as many high-quality scoring chances in a game as they can with the players they can afford? What is the added value of this kind of statistical analysis?

Well, as Daniel Altman eloquently argued last week, it depends on just what exactly you’re looking to do with the data. If you’re a gambler, this kind of data can help improve your predictive model and put more money in your pocket. But what if you’re a manager?

Before I answer, I would urge you to take a quick look at Michael Caley’s Premier League table, which incorporates a host of data on Expected Goals for and against, shots from various areas of the pitch, etc, and reveals a fairly distinct correlation (this season at least) between the ratio of ExpGs for and against and place in the table.

Done? Good.
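For the statistically inclined, the relationship Caley’s table shows is easy to sanity-check yourself once you have per-team totals: compute each side’s ExpG ratio and see whether its ordering roughly matches the points ordering. The team names and numbers below are invented stand-ins for that kind of table.

```python
# Sketch: ExpG ratio (for / (for + against)) per team, compared to points.
# The team totals below are invented stand-ins for a league table.

teams = {
    # team: (ExpG for, ExpG against, points) - all numbers invented
    "Team A": (62.0, 28.0, 74),
    "Team B": (55.0, 35.0, 66),
    "Team C": (48.0, 42.0, 55),
    "Team D": (40.0, 45.0, 47),
    "Team E": (33.0, 52.0, 38),
    "Team F": (29.0, 58.0, 31),
}

by_ratio = sorted(teams.items(),
                  key=lambda kv: kv[1][0] / (kv[1][0] + kv[1][1]),
                  reverse=True)
for name, (xg_for, xg_against, points) in by_ratio:
    ratio = xg_for / (xg_for + xg_against)
    print(f"{name}: ExpG ratio {ratio:.2f}, points {points}")
# If the metric is doing its job, the ratio ordering should roughly match the points ordering.
```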

Now imagine you’re a Premier League manager and you don’t have access to this kind of league-wide information. All you know is that you have to improve your club’s performance or you’re going to lose your job (and maybe get your team relegated in the process). This leaves you with a vague set of hunches about what to do. Sure, you have your team analyst giving you sophisticated performance data about your squad. But what you know is in a vacuum. You don’t have a clear idea of how your side is performing against other teams in the league in various areas, or even whether improving these aspects of your team’s performance makes a significant difference over the long term.

So you’re left to tinker, to adjust your tactics, to play around with formations, to try out better team talks. Here’s hoping!

This is of course a caricature. Perhaps Premier League clubs already have access to detailed, league-wide ExpG data and every team has already made any and all tactical adjustments to try to improve. Perhaps what we’re seeing in the league right now is in fact perfect competitive equilibrium, solely influenced by luck and filthy lucre. Maybe, but I have my doubts.

“Okay fine,” says our interlocutor. “But what good is analytics unless it tells managers WHAT they should do, tactically speaking, to improve their team?” And at this point they’ll send you Kirk Goldsberry’s Grantland article on the NBA using SportVu data to develop an Expected Possession Value.

This is where I hear the ghost of Nassim Nicholas Taleb scream “Platonicity!” in my ear. The idea of using extremely complex mathematical probability models to establish “idealistic” attacking scenarios sounds cool and sci-fi and stuff, but at some point you have to translate these abstract, momentarily ideal decisions to human players who are human and have years of experience playing a particular way and, if I haven’t said it already, are human.

Why do we need analytics to be an oracle? Why not just use it for old-fashioned diagnosis? Your team is allowing too many chances directly in front of goal–what do you do? Go back to the video, see exactly what is happening, and adjust. Try a different formation. Play a deeper defensive line. Switch to the counter against aggressive teams. There’s no magic answer. You don’t need a Harvard mathematician to figure it out.
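That kind of diagnosis can be as mundane as counting where the chances you concede come from, match by match. A sketch with invented coordinates and a made-up definition of the “central box” zone:

```python
# Sketch: count chances conceded from the central zone in front of goal.
# Coordinates are invented (pitch scaled 0-100 in both directions, defending the right end).

conceded_shots = [
    # (match, x, y) for shots faced; x near 100 is close to our goal line
    (1, 92, 48), (1, 88, 55), (1, 75, 30),
    (2, 95, 50), (2, 90, 44), (2, 93, 52), (2, 60, 70),
    (3, 85, 35),
]

def central_box(x, y):
    """Rough central-box zone directly in front of goal (invented thresholds)."""
    return x >= 88 and 40 <= y <= 60

per_match = {}
for match, x, y in conceded_shots:
    per_match[match] = per_match.get(match, 0) + central_box(x, y)

for match, count in sorted(per_match.items()):
    print(f"Match {match}: {count} chances conceded from the central box")
```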

That we can do this at all is incredible. If you can’t see the value in it, well, good luck to you in your struggle to stay up.