How Long Until Decision Training Is Optimized?
The previous section showed how a video's difficulty score is a driving factor in improving referees/linesmen in uCALL. But could the NHL have expected this capability in its first year of using uCALL, back in 2018? The answer is no, and the graph above shows why.
This graph shows the "standard deviation" (see more about this way of summarizing data here) of all video difficulty scores in uCALL. Importantly, it plots these standard deviations as a function of how many unique (or new) videos referees/linesmen have worked through in uCALL. On the left, the number of videos seen is low (early in the season, when they begin using the tool), and the standard deviation (or spread) in difficulty scores is quite high. Conversely, on the right, the number of videos seen is high (late in the season, after much usage), and the standard deviation in difficulty scores is much lower (roughly 2x-3x lower).
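To make the summary statistic in the graph concrete, here is a minimal sketch of comparing the spread of difficulty scores early versus late in a season. The `scores` list is entirely made-up for illustration; it simply mimics the pattern described above, where early scores are widely scattered and later ones cluster:

```python
from statistics import stdev

# Hypothetical difficulty scores, in the order videos were seen.
# Early scores are all over the map; later ones cluster tightly.
scores = [12, 88, 35, 71, 9, 94, 55, 48, 52, 50, 47, 53, 49, 51, 50, 52]

early, late = scores[:8], scores[8:]
print(f"early-season spread: {stdev(early):.1f}")
print(f"late-season spread:  {stdev(late):.1f}")
```

Running this prints a much larger spread for the early window than the late one, which is the same narrowing the graph shows as referees/linesmen work through more videos.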
As the standard deviation of the difficulty scores drops, uCALL gets better at matching a given penalty video to a referee/linesman's capability. Like many mobile technologies, uCALL improves for an individual user as they use it more and as more people use it (see other examples of such technologies here).