Dear Advisor: How Can Founders Build Customer Trust around AI?
February 2024
It seems that everyone is pitching "AI for <noun>" – without passing the "AI Startup Litmus Test." But you're not doing that! How do you stand out from the crowd in a way that builds trust?
Please note: I’ll use "customers" and "clients" interchangeably, as the advice applies whether you’re implementing a software solution for B2B customers, offering a DTC software solution that solves consumer pain points, or consulting on implementation services.
Part 1: Hear Your Customers Out
Advice #1: Build Trust with the Pitch
Customers don't care if you're using AI; they want to solve their problem!
That was a hard pill for me to swallow when I first started as a solo consultant. I would pitch why prospective clients need data – but they didn't. Nor would they pay someone to analyze it. But they would pay for outcomes!
Remember to focus on how the product/service will benefit the customers and bring them value – every time you pitch, whether fundraising or in sales! AI is just one of the tools to get there.
Advice #2: Dive Deep to Understand the Status Quo
We’re creatures of habit! We’ve also probably all gotten a gift/swag we had no use for, but someone somewhere thought we needed.
(Putting on my consultant hat) Listen and hear your customer out – before discussing even a hint of a solution! Let them vent! Focus on understanding:
The status quo of how they do things now. What’s working and not? What would they like to change?
Is it painful? And how painful?
Does it need to be solved, or is it a nice-to-have? For example, is it urgent but not important? Do they need a vitamin or a painkiller?
How did the customer get here? Try to understand their position.
Were they hurt in the past by "AI" vendors that promised the moon – and then couldn't get off the ground?
Advice #3: Understand Customer/Client’s Misclassification Risk Tolerance and Cost
If we were to make predictions, is there a large margin of error that carries a high cost if we get things wrong?
For example, this is the case when identifying defective parts while manufacturing a Class III medical device.
Part 2: 5-Tiered Solution to Help Clients Get More Comfortable with AI
Step 1: Start with “100% Sure” Use Cases
Are there scenarios/cases that always warrant a specific, exact answer we can simply look up? Use if/else conditions to predict only the cases that match these criteria – and leave everything else for review. This way of making predictions should reduce manual labor and build instant rapport with the client!
Bonus: This approach will also work even if your customers don't have any historical data to work with!
Bonus: Because we don’t always make a prediction – we only make one when we know we're right – we won't “hallucinate” an incorrect recommendation.
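A minimal sketch of this idea: hard rules predict only the "100% sure" cases and abstain (return `None`) on everything else, routing it to human review. The field names and rules below are hypothetical, borrowing the defective-parts example from earlier.

```python
# Step 1 sketch: predict only when a hard rule applies; abstain otherwise.
# Rules and record fields ("pressure_test", "visual_check") are made up.

def predict_exact(record):
    """Return a label only when a hard rule applies; otherwise None."""
    # Example rule: a part that failed its pressure test is always defective.
    if record.get("pressure_test") == "fail":
        return "defective"
    # Example rule: a part that passed every inspection is always OK.
    if record.get("pressure_test") == "pass" and record.get("visual_check") == "pass":
        return "ok"
    return None  # no confident rule matched -> route to human review

records = [
    {"pressure_test": "fail", "visual_check": "pass"},
    {"pressure_test": "pass", "visual_check": "pass"},
    {"pressure_test": "pass", "visual_check": "unknown"},
]
auto, review = [], []
for r in records:
    label = predict_exact(r)
    (review if label is None else auto).append((r, label))
```

Note that nothing here requires historical data – just the client's own domain rules, which is why this works even on day one.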
Step 2: Make Suggestions for Those Most Sure
Of the remaining events, what scenarios are we fairly confident about? Use if/else conditions to suggest predictions only for those that match these criteria; include the certainty in each and leave everything else for review. This approach should reduce manual labor further and build more rapport with the client!
For example, with one of my old FinTech clients, we only predicted a specific class if the model was at least 95% sure of its prediction – much higher than the typical 50% threshold for deciding which of two classes something belongs to.
Bonus: Because we won't always make a prediction, we’re less likely to “hallucinate” – unlike ChatGPT (and friends), which always reply and don’t share the underlying certainty of the reply.
Please note: This assumes that the customers have historical data to develop a predictive model; if that’s not the case, see Part 3 below.
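Here is a sketch of the thresholding logic, using the 95% bar from the FinTech example. The `score` stands in for any model's class-1 probability (e.g., from logistic regression); the class names are placeholders.

```python
# Step 2 sketch: surface a suggestion only when the model's confidence
# clears a high bar; everything in between goes to human review.

THRESHOLD = 0.95  # much stricter than the usual 0.50 cutoff

def suggest(score):
    """Return (label, confidence) when confident enough, else None."""
    if score >= THRESHOLD:
        return ("class_1", score)
    if score <= 1 - THRESHOLD:
        return ("class_0", 1 - score)
    return None  # too uncertain -> human review

for s in (0.97, 0.60, 0.02):
    print(s, "->", suggest(s))
```

Surfacing the confidence alongside each suggestion (rather than just the label) is what lets the client calibrate their trust over time.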
Step 3: Process Side-by-Side
Offer to have the predictive algorithm run side-by-side with the person currently doing the work without changing the current process or customer habits.
Afterward, validate and review every prediction (from Steps 1 and 2) together, tracking what the human missed vs. the computer and how long each took to make each prediction. This process will help you quantify how much manual labor you've saved, translate it into $$$, and serve as the showcase for scaling the work at a lower cost. The client will also appreciate this!
Bonus: Because we won’t always make a prediction, the error rate (and associated costs) should be low!
Bonus: Because you've just validated a new data set, you can now update the predictive model and potentially reduce the error rate even more!
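The side-by-side bookkeeping can be as simple as the sketch below: log both answers plus the time each took, then tally agreement and time saved. All records and timings here are invented for illustration.

```python
# Step 3 sketch: run the algorithm alongside the human, log both
# answers and durations, then quantify accuracy and labor saved.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Trial:
    truth: str            # validated ground truth agreed on in review
    human: str            # the human's answer
    human_seconds: float
    model: Optional[str]  # None = model abstained (routed to review)
    model_seconds: float

trials = [
    Trial("ok", "ok", 40, "ok", 0.1),
    Trial("defective", "ok", 55, "defective", 0.1),  # human missed this one
    Trial("ok", "ok", 35, None, 0.1),                # model abstained
]

automated = [t for t in trials if t.model is not None]
model_correct = sum(t.model == t.truth for t in automated)
human_correct = sum(t.human == t.truth for t in trials)
seconds_saved = sum(t.human_seconds - t.model_seconds for t in automated)

print(f"model: {model_correct}/{len(automated)} correct on automated cases")
print(f"human: {human_correct}/{len(trials)} correct overall")
print(f"~{seconds_saved:.0f}s of manual review saved")
```

Multiplying `seconds_saved` by the reviewer's hourly cost is the "translate it into $$$" step from above.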
Step 4: Evaluate Algorithm Performance Using Business Metrics over Model Metrics
The client won’t care that the model is better because the BIC went down, but if they’re a lender, for example, they will appreciate that the loan write-off rate did. That will also showcase your expertise and build trust and rapport with the customer.
As an aside, when evaluating AUCs, be suspicious of anything over 88%. If that happens, look for overfitting or data leakage!
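To make the lender example concrete, here is a toy sketch of reporting the business metric (write-off rate) before and after the model's involvement. The loan outcomes are invented numbers, not real results.

```python
# Step 4 sketch: report a business metric the client cares about
# (loan write-off rate) rather than a model metric like BIC or AUC.

def write_off_rate(loans):
    """Share of issued loans that were written off."""
    return sum(loan["written_off"] for loan in loans) / len(loans)

# Hypothetical portfolios: 2 of 10 written off before, 1 of 10 after.
before = [{"written_off": w} for w in (1, 0, 0, 1, 0, 0, 0, 0, 0, 0)]
after  = [{"written_off": w} for w in (0, 0, 0, 1, 0, 0, 0, 0, 0, 0)]

print(f"write-off rate: {write_off_rate(before):.0%} -> {write_off_rate(after):.0%}")
```

"Write-off rate went from 20% to 10%" lands with a lender in a way that "BIC dropped" never will.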
Step 5: Focus on Low-Hanging Fruit
By tackling those business questions with high ROI and low effort, we’ll build even more rapport with the client!
Part 3: Tackling the Cold Start Problem of No Historical Customer Data
You’ve signed your first service customer (or just launched your product) – only to find they have little or no (usable) historical data to work with. What can you do? You still have options!
Option 1: Propose to act only on those events you are 100% sure about (see Step 1 above).
Option 2: Start with a hard-coded recommendation and let the users modify it to get user-in-the-loop feedback on what’s working for them and what's not.
For example, if you’re a travel marketplace, your first recommendation to every customer may be the same 3 items (same brands, colors, and sizes): a suitcase, headphones, and joggers.
By capturing customers’ engagement and activity around those recommendations – from color and size options to category and item swaps – you’ll learn far more about what is and isn’t working for your customers (to inform a better future model) than you would by launching the marketplace MVP with a predictive algorithm that lacks this information.
In future iterations, you can adapt this rules-based recommendation by geography (deduced from the user’s IP address), for example; alternatively, you can vary the brands, colors, and sizes. But you don’t need AI as part of the MVP!
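Option 2 can be sketched in a few lines: a hard-coded starter recommendation plus simple event logging, so user reactions become the training data for a future model. Item names and event strings are illustrative, borrowing the travel-marketplace example.

```python
# Part 3, Option 2 sketch: hard-coded recommendations + feedback capture.
# No model yet; the logged events seed one later.

STARTER_RECS = ["suitcase", "headphones", "joggers"]  # same for everyone at MVP

feedback_log = []

def recommend(user_id):
    """MVP recommender: everyone sees the same starter items."""
    return STARTER_RECS

def record_event(user_id, item, event):
    """Log swaps, color/size changes, clicks, etc. for later modeling."""
    feedback_log.append({"user": user_id, "item": item, "event": event})

record_event("u1", "joggers", "swapped_for:sneakers")
record_event("u2", "suitcase", "changed_color:red")
```

Swapping `recommend` for a geography-aware rule (or eventually a learned model) later doesn't change this interface at all, which is the point: the feedback loop ships first.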
Parting Advice
Let the client guide the vision for how the solution should work in practice to build more rapport and have the customer feel heard.
Hear their feedback and input!
Allow them to opt in/out.
Iterative development, (Beta) testing, and validation are encouraged!
No one said you can only have one algorithm to tackle all the use cases!
Start with the simplest algorithms first:
If/else: If A happens, do B; if C happens, do D; otherwise, don’t make a prediction
Consider logistic regression or tree-based models to predict the cases we’re most sure of. These algorithms make it easier to verify what’s working and diagnose what went wrong.
The minimum sample size depends on many things, including the use case, cleanliness of the data, how prevalent the outcome we're looking for is (e.g., does it happen every 1M views or every 10?), etc. Try something and evaluate!
Validate after every try!
Don’t forget to tie it back to how you’re solving customer pain points, such as how hard/costly it is to scale/hire.
The algorithm doesn’t always have to make a recommendation – and can pass things on to customer support, e.g., a human-in-the-loop.
For example, Amazon’s chat does this well.
For those still hesitant, see if there’s a way to gamify.
For example, can someone earn a badge based on the % complete they’ve reviewed?
Good luck!