Most Common Mistakes Data Scientists Make When Presenting Results to Business Stakeholders
In this section you'll learn about some of the common traps I used to fall into myself, or saw other data scientists fall into, when presenting to non-technical stakeholders.
Fixing these mistakes isn't always easy, but being aware of them is the first step towards improvement.
Walking stakeholders through your step-by-step process
This isn’t school; you don't get marks for showing your working. Stakeholders want to know what’s up, and fast. Don’t waste your time and theirs explaining the intricacies of your model or experiments. Note that this isn't the same as withholding caveats or risks from your analysis; it just means not talking through the nuts and bolts of your method.
There is a time and place to talk through the nuances, but it is not when presenting results. If you want to discuss your method, do it in a DS team meeting where the aim of the call is to go into the details.
Not presenting your results immediately
A lot of junior data scientists want to set expectations by first talking through their findings and then the implications those findings have for the business. But this is backwards.
The best presentation format I have found is to present your conclusion first, then layer in data insights as you go to back it up.
Showing them how the model works under the hood
As a general rule, don't do this.
You should absolutely discuss the technical nuances of your analysis and models, but it's important to pick the right forum.
Not using any interpretability techniques to explain your model
Black boxes are scary, especially when there is a lot on the line. Think about how you can give your stakeholders confidence in your results and explanations so that they can make key decisions with minimal risk.
There are a lot of approaches to this:
- SHAP values (see the sketch just after this list)
- Some models come with built-in interpretability measures, such as feature importances (beware that these can sometimes be misleading)
- Use an interpretable model from the beginning (not always possible, but in most cases you should be able to determine upfront whether model interpretation is a hard requirement)
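Here's a minimal sketch of the SHAP approach, assuming a tree-based model and the `shap` package; the data and model are toy stand-ins for your own:

```python
# A minimal sketch of explaining a tree-based model with SHAP values.
# Assumes scikit-learn and the `shap` package are installed; the data
# and model here are purely illustrative.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy data standing in for your real features and target
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot ranks features by their overall impact on predictions:
# a stakeholder-friendly "what drives the model" view
shap.summary_plot(shap_values, X)
```

The summary plot is usually the single most useful artefact here: it turns "trust the black box" into "here are the three things driving this prediction".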
When interpretation isn't possible you can usually run a sensitivity analysis instead. For example, "if X is wrong by 10% the result moves by 2%" tells a very different story from "if X is wrong by 10% the result moves by 20%".
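A sensitivity analysis can be as simple as nudging one input at a time and measuring how far the output moves. Here's a sketch of that idea; the model, data, and 10% perturbation are all assumptions for illustration:

```python
# A minimal sketch of a one-at-a-time sensitivity analysis: shift each
# input up by 10% and measure how much the predictions move. The model
# and data below are toy stand-ins for your own.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def sensitivity(model, X, feature_idx, pct=0.10):
    """Mean absolute change in predictions when one feature shifts by `pct`."""
    baseline = model.predict(X)
    X_shifted = X.copy()
    X_shifted[:, feature_idx] *= 1 + pct
    return np.mean(np.abs(model.predict(X_shifted) - baseline))

# Express each shift against the spread of the predictions so stakeholders
# can see which inputs the result actually hinges on
scale = model.predict(X).std()
for i in range(X.shape[1]):
    print(f"feature {i}: +10% input moves predictions by "
          f"{sensitivity(model, X, i) / scale:.1%} of one std")
```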
Not adding uncertainty or bounds to your estimates
Always, always, always add bounds to your predictions. This gives you and your stakeholders confidence in your estimate and gives you leeway for errors in your model. Guessing something exactly is incredibly hard, but giving a range takes the pressure off. Adding standard errors to your estimates is really easy, so there isn't any excuse for not including them. The first three paragraphs of this article explain it really well.
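To show how little effort this takes, here's a sketch of turning a point estimate into a range using the standard error of the mean; the numbers are made up for illustration:

```python
# A minimal sketch of reporting a range instead of a point estimate.
# Report mean +/- ~2 standard errors (an approximate 95% confidence
# interval). The data here are fabricated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
uplift = rng.normal(loc=3.2, scale=8.0, size=400)  # e.g. per-user revenue uplift

mean = uplift.mean()
sem = uplift.std(ddof=1) / np.sqrt(len(uplift))  # standard error of the mean
low, high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"Estimated uplift: {mean:.2f} (95% CI: {low:.2f} to {high:.2f})")
```

"We estimate an uplift of about 3, likely between 2.4 and 4" lands far better with stakeholders than a suspiciously exact single number.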