
After digesting the post-mortem analysis of the AFL trade period, where each club is ranked on its trading performance, usually on the very day the trades are completed, I found myself concluding that this was not an overly useful exercise.  Invariably, the evaluations are weighted toward the clubs most active in the trade period, particularly teams with a net gain of players.  It had me wondering: how long after the trade period would be a reasonable point to perform the analysis?

And it also had me thinking, what about the teams who invest in the draft?  How do we measure their draft performance, and after how long?

Avid fans who follow the build-up to the draft, digesting each and every phantom draft and rating the talent on the players’ highlight reels, will also be familiar with the attempts to rate each club’s draft acquisitions in the same abrupt manner in which the trade period is evaluated.

Ultimately, a club’s draft picks in any particular season can only be fully assessed at the completion of each player’s career, and even then the assessment is very much subjective.  This means, of course, that a full evaluation may not be possible until perhaps 15 years after the draft, by which time our interest in that particular draft has very much evaporated.

When considering a fair evaluation period for the “AFL Draft Mid-Term Report Card” visualisation, I’ve taken into account the following, while noting the obvious limitations of any such approach (any evaluation is better than nothing…even if it is on draft night).

  • Players take different lengths of time to develop, particularly tall players, so the evaluation window must give them a reasonable opportunity to do so.
  • Rating players on one particular year might be unfair to a player who is injured or otherwise has a poor season despite going on to a stellar career.
  • Rating players would be best served by a combined qualitative and quantitative approach.

Thus, I’ve decided on the following metrics:

  • A player’s aggregate performance in seasons 3, 4 & 5 (since being drafted).
  • Games played as the quantitative metric.
  • AFLCA coaches votes as the qualitative metric.

Games played can also be considered qualitative, given you have to be performing well to hold your spot.  It does raise the consideration that it will be easier to break into the team at some clubs than at others; however, by choosing years 3, 4 and 5, we give players an adequate opportunity to make their case.
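
To make the aggregation concrete, here is a minimal pandas sketch of how those two metrics could be tallied over seasons 3 to 5.  The DataFrame, its column names and the numbers in it are made-up assumptions for illustration, not the actual source data behind the visualisation.

```python
import pandas as pd

# Hypothetical per-player, per-season table (one row per player per AFL season).
# Column names and values are illustrative assumptions, not the real source data.
seasons = pd.DataFrame({
    "player":      ["Player A"] * 6 + ["Player B"] * 6,
    "draft_year":  [2013] * 12,
    "season":      list(range(2014, 2020)) * 2,
    "games":       [5, 14, 20, 22, 21, 19,   2, 8, 11, 16, 18, 20],
    "coach_votes": [0, 12, 35, 48, 60, 41,   0, 1, 6, 14, 22, 30],
})

# Keep only seasons 3, 4 and 5 after the draft (draft_year + 3 .. draft_year + 5).
in_window = (seasons["season"] >= seasons["draft_year"] + 3) & \
            (seasons["season"] <= seasons["draft_year"] + 5)

# Aggregate the two report-card metrics over that window.
report = (
    seasons[in_window]
    .groupby(["player", "draft_year"], as_index=False)
    .agg(total_games=("games", "sum"), total_votes=("coach_votes", "sum"))
)
print(report)
```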

The AFLCA Coaches’ votes provide a better measure of player performance than Brownlow Medal votes.  Each coach rates their top 5 players for the game, awarding 5 votes for their best through to 1 vote for fifth best.  Often the coaches are in close alignment as to the best players, although sometimes there is a wider distribution.  The votes are also less likely to be biased toward possession count (the Brownlow is frequently a midfielder’s award), since coaches are more likely to recognise other team-oriented performances, such as a strong defensive game, and therefore assign votes to players in other positions.
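
As a toy illustration of how those ballots accumulate into a game total, assuming for simplicity a single 5–4–3–2–1 ballot from each of the two coaches and placeholder player names:

```python
# Two ballots for one game, one from each coach: 5 votes for best down to 1 for fifth.
ballots = [
    {"Player A": 5, "Player B": 4, "Player C": 3, "Player D": 2, "Player E": 1},
    {"Player B": 5, "Player A": 4, "Player F": 3, "Player C": 2, "Player G": 1},
]

# A player's total for the game is simply the sum across both ballots (max 10).
totals = {}
for ballot in ballots:
    for player, votes in ballot.items():
        totals[player] = totals.get(player, 0) + votes

print(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))
# [('Player A', 9), ('Player B', 9), ('Player C', 5), ('Player F', 3), ...]
```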

There are a few limitations with the data, and some limitations imposed on the display: the draft picks have been clipped at pick 80.  Historically, rookies were elevated to the main list using a later draft pick; these elevations have been omitted from the visualisation, because high-performing rookies unfairly distorted the analysis.  Father-son selections, however, are included.  It would have been nice to extend the analysis to more recent drafts, but I felt this would distort the analysis by not giving players enough time to establish themselves, even though some players clearly manage to do so in a shorter period.

There are potentially some data errors in the analysis.  The data was gathered from a few different public sources with no consistent player identifier, so the sources have been joined on player name, which is not ideal; duplicate names were identified and aligned.  There’ll be a player in there who hasn’t linked correctly; if you find one, feel free to shout it out to me.
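
The linking step might look something like the pandas sketch below.  The frames, the column name player_name and the manual-review step are all assumptions about the workflow, not the exact code behind the visualisation.

```python
import pandas as pd

def link_sources(drafted: pd.DataFrame, performance: pd.DataFrame) -> pd.DataFrame:
    """Join draft records to performance records on player name only.

    There is no shared player ID across the public sources, so 'player_name'
    (assumed present in both frames) is the only available key.
    """
    # Flag names that appear more than once among the draftees -- these are
    # the duplicates that need to be checked and aligned by hand.
    dupes = drafted.loc[drafted["player_name"].duplicated(keep=False), "player_name"].unique()
    if len(dupes):
        print("Review these duplicate names manually:", list(dupes))

    # Left join so every draft pick is kept, even if no performance rows matched.
    return drafted.merge(performance, on="player_name", how="left")
```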

I wasn’t sure what to expect in terms of patterns emerging from the visualisation, but if we were drawing simplistic conclusions, picks 1 to 5 are gold.  There is a distinct decline in games and votes from pick 31 onward, so trading into the top 30 would appear to be a sound strategy.
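
A rough way to check that reading is to average the report-card metrics within broad pick bands.  The sketch below assumes the per-player report frame from earlier, with a hypothetical pick column added, and uses band boundaries matching the ones mentioned above.

```python
import pandas as pd

def summarise_by_pick_band(report: pd.DataFrame) -> pd.DataFrame:
    """Average total games and coaches' votes within broad draft-pick bands."""
    bands = pd.cut(
        report["pick"],
        bins=[0, 5, 30, 80],
        labels=["picks 1-5", "picks 6-30", "picks 31-80"],
    )
    return (
        report.groupby(bands, observed=True)[["total_games", "total_votes"]]
              .mean()
              .round(1)
    )
```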

In terms of ranking the draft years, 2013 is a standout, both in games and in coaches’ votes.  A quick scroll through the players reveals the likes of Patrick Cripps, Matt Crouch and Zach Merrett all making strong voting contributions despite not being picked inside the top 10.  The tail wags in 2013 as well, with handy acquisitions in Ben Brown, James Sicily and Jeremy McGovern also plumping up this draft’s credentials.

Here’s the AFL Draft Mid-Term Report Card to enjoy: an interactive visualisation of the 2009 to 2018 drafts (click on the image to view it on Tableau Public).

Grant Henderson | LinkedIn

AFL Draft Mid-Term Report Card v2