I know this column is about tactics, and I also know that it has often overlapped with issues that might seem more appropriate for the Tuesday analytics column. That’s intentional: the longer “analytics” gets cordoned off as its own separate niche category, the more easily it can be ignored by fans, journalists, and football managers and directors.
The isolation of analytics is a hot topic right now. If you’ve ever spoken to any of the leading lights in the football analytics world, the question you hear again and again is: “How do we as performance analysts gain the trust of the manager, the director of football, and the first team coaches?”
The advice you hear along these lines involves everything from being a nice guy to not inundating your boss with spreadsheets and statistical bafflegab. I think this is generally good advice, unless it comes at the cost of making the best decision for the team.
I think, however, that the caricature of coaches and managers as old school boors who wouldn’t know an algorithm from an albatross isn’t realistic or helpful. These are people who are in charge of getting the most out of their squad, selecting the right players for the right match, and taking them through midweek training exercises that will help propel the team to victory on the weekend.
For managers, nothing is theoretical. As a manager, you are in charge of a squad of human beings, working alongside other human beings, in order to win a game that is overwhelmingly decided by luck and the talent of your players even before you’ve scrawled on a single chalkboard. You are expected to be a teacher, a motivator, a tactician, a football expert, a judge of player quality, and sometimes, a parent to some very difficult but supremely talented people. You must do all this under intense pressure from supporters, the board, your players, and (depending on the size of the club) the media. You also must perform to a public that often ascribes far greater responsibility to the manager for the state of the team than might objectively be deserved.
Every manager is different, and each considers their training, their personal preferences, their understanding of the game, and the opinions of their coaching staff (of which the performance analyst is but one) in how they run the team. While many of the best managers are risk takers, they are also well aware of how tenuous their jobs are.
So you can see how a performance analyst might get lost in the mix.
It doesn’t happen at every club, of course. Sam Allardyce’s West Ham clearly rely heavily on their performance analyst David Woodfine, and players regularly receive data dossiers. I hope, however, that the data presented in those documents is a) something the player can actually work to improve on their own and b) actually correlated with objectively better performance.
For those analysts who might not have as great a say in how the team is run, I think there is good news: good analytics makes for good coaching, if applied in a useful, meaningful way that can be practically integrated into team training sessions and one-on-one work.
Earlier this week I looked at how statistics might be used more effectively by fans, bloggers and journalists. Here, therefore, are some completely speculative questions data analysts might consider to help craft their work to fit the needs of first team coaches and managers. Keep in mind I have NO IDEA whether this is in fact how things work. I’d love to hear from some PAs about this stuff.
Which individual/team metrics are the most important?
Here is an excellent post from Alex Olshansky on StatsBomb which makes the case that analysts would do better to look at shot volume and key pass numbers instead of goals and assists. Why? Because they’re far more repeatable from season to season.
That might seem an odd thing to herald: if a player is scoring goals and racking up assists, who cares about numbers that don’t help my team win?
Well, that repeatability hints at an underlying individual consistency. It hints at something that is shaped more by talent than by luck (although we need to be careful here to control for factors that might be influenced by other players).
The tricky part for the analyst is to explain the difference between a positive process and a positive result. So a player with great key pass/shot volume numbers but poor goal/assist numbers might have been affected by factors beyond their control. That presents a very interesting opportunity for the first team staff.
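To show what “repeatable from season to season” means in practice, here is a toy sketch: measure how strongly a metric in one season correlates with the same metric the next season, across a group of players. Every figure below is invented purely for illustration; it is not real player data.

```python
# Toy illustration of metric "repeatability": how strongly does a player's
# number in season 1 predict his number in season 2? All figures invented.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-90 figures for eight forwards across two consecutive seasons.
shots_s1 = [3.1, 2.4, 4.0, 1.8, 2.9, 3.5, 2.2, 3.8]
shots_s2 = [2.9, 2.6, 3.7, 2.0, 3.0, 3.3, 2.1, 3.6]   # tracks season 1 closely
goals_s1 = [0.45, 0.30, 0.55, 0.20, 0.40, 0.50, 0.25, 0.60]
goals_s2 = [0.25, 0.50, 0.40, 0.35, 0.20, 0.55, 0.45, 0.30]  # bounces around

print(f"shots/90 year-to-year r: {pearson(shots_s1, shots_s2):.2f}")
print(f"goals/90 year-to-year r: {pearson(goals_s1, goals_s2):.2f}")
```

In this invented data, shot volume correlates strongly across the two seasons while goals barely correlate at all, which is the sense in which shot volume would be the more “repeatable” (talent-driven) metric.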
Which individual/team metrics can be improved on through coaching, and which can’t?
As an addendum to his post, Olshansky writes that while shots and key passes are better, they’re still not good enough. But he adds:
Luckily, much work has been done on shot location/type and expected goals (here and here and many other places). As far as I know, adjusting for shot location/type hasn’t been attempted yet for shots resulting from key passes, but that is a logical next step. Theoretically, an expected goal and expected assist model would be the best predictor.
This seems to make sense, but it raises the question: can the ideal player position on a certain shot or key pass be coached? Can an individual player learn to take more effective shot types? To what degree? Or within a season does it come down mostly to random variation?
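To make the location-weighting idea behind expected goals concrete, here is a deliberately crude sketch: instead of counting every shot equally, weight each shot by a historical conversion rate for where it was taken. The zones and conversion rates below are invented for illustration and bear no relation to any real xG model.

```python
# A deliberately crude sketch of the "expected goals" idea: weight each shot
# by a historical conversion rate for its location, rather than counting all
# shots equally. Zone names and rates are invented for illustration.

ZONE_CONVERSION = {
    "six_yard_box": 0.35,   # hypothetical conversion rates by zone
    "penalty_area": 0.12,
    "outside_box": 0.03,
}

def expected_goals(shots):
    """Sum of per-shot scoring probabilities, each shot keyed by its zone."""
    return sum(ZONE_CONVERSION[zone] for zone in shots)

# Two invented shot diets with identical volume (five shots each):
patient = ["six_yard_box", "penalty_area", "penalty_area",
           "penalty_area", "outside_box"]
speculative = ["outside_box"] * 4 + ["penalty_area"]

print(f"patient xG:     {expected_goals(patient):.2f}")      # 0.74
print(f"speculative xG: {expected_goals(speculative):.2f}")  # 0.24
```

Same shot count, very different expected return, which is exactly why raw shot volume alone isn’t good enough and why the coaching question above (can a player be taught a better shot diet?) matters.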
Or maybe improving these qualities falls under a broad, ambiguous category, like decision making, which Tony McKenna and Lee Mooney touched on in a great article for the Tomkins Times, and can’t be coached at all. It’s important for both the analyst and the coach to know with some measure of certainty, while at the same time not hesitating to experiment.
How do I work with individual players and the team in order to get an improvement?
The analyst is not the first team coach, but there ideally should be good communication between both groups in order to ensure that the analyst is providing the manager not only with good and bad numbers, but some input on concrete, workable means to improve them. For example, are there ways a team of average quality can improve its possession of the ball in the final third and create better chances? If so, how could a first team coach work on them on the training ground?
Or: how exactly does game state change the behaviour of the opposition? Is there a set of tactical actions a team can take to turn the situation to its advantage?
However, this leads to another important question…
Are there potential negative trade-offs in getting the team to focus on improving certain metrics?
We can call this the Reep Trap. Here is the crudest example I can think of to explain what I mean. You know that shots-per-90 is a good, repeatable metric. So you tell the first team coach, he shrugs, and then he encourages his forwards to take more shots over the ninety minutes. Of course what happens is that shot conversion drops because the forwards are shooting willy-nilly from anywhere to drive up the numbers, possession drops because those wayward shots lead to goal kicks, and the team actually starts to suck.
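The trap in that crude example is just arithmetic: goals per 90 is shots per 90 times conversion rate, so an intervention that raises volume while depressing conversion can leave the team scoring less than before. The figures below are invented purely to illustrate the trade-off.

```python
# Toy arithmetic for the Reep Trap: pushing shot volume up can still cost
# goals if conversion falls faster. All figures are invented for illustration.

def goals_per_90(shots, conversion):
    """Expected goals per 90 = shot volume times conversion rate."""
    return shots * conversion

before = goals_per_90(shots=12.0, conversion=0.11)  # disciplined shot selection
after = goals_per_90(shots=16.0, conversion=0.07)   # shooting from anywhere

print(f"before: {before:.2f} goals/90")  # 1.32
print(f"after:  {after:.2f} goals/90")   # 1.12
```

Four extra shots per match, and the team still comes out worse, because the metric was a proxy for good attacking play, not the cause of it.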
Or, to take a more sophisticated example from above, you tell the first team coach that “these areas of the pitch tend to produce higher shot conversion rates.” But you as an analyst haven’t checked whether conversion goes up in those areas because of the habitual defensive positioning of the opposition rather than any magical property of the pitch location itself.
So your forwards take shots from or work to get in these positions regardless of what’s happening around them, and nothing much improves.
This to me strikes at the perilous, dark heart of translating analytics into coaching and management. It’s my guess that the best analysts will take these very complex problems into consideration when making their judgments.