Dear Advisor: How do I prepare for AI Technical Due Diligence?
9 Focus Areas to Help Start-Ups Prepare for AI Technical Due Diligence
Originally published in July 2023; updated in August, November, and December 2023
Congratulations! You signed (or are about to sign!) the term sheet and are now told to prepare for technical due diligence focused on AI. Sigh. To help you be more prepared to fundraise, close the round faster, and stand out from the crowd, I’m here to share what I look for during this process.
Goals for Technical Due Diligence focused on AI
Before we dive into its potential scope, the goals of due diligence are:
For the start-up: use this as an opportunity to keep building on the momentum, rapport, and credibility you’ve established with (potential) investors earlier throughout the fundraising process – and prove (as best as possible) the feasibility and defensibility of AI as (part of) your company’s moat that will help the company scale.
For the evaluator: put together a summary of what’s working well, not so well, what can be improved, as well as thoughts around any challenges, risks, and opportunities, which the VCs can use to make a decision for themselves around investing/passing on a particular start-up.
Spoiler alert: The goal is not to evaluate AI per se, but to see how it ties into your product, to bring your customers value – and help you scale. It's less about the specific algorithm -- and more about how you're thinking about the algorithm(s).
As you may know, until recently most technical due diligence didn’t focus much, if at all, on a start-up’s use of AI. Now that seemingly every start-up is either thinking about or already trying to incorporate "AI" into its product, VCs are adding AI to the scope of technical due diligence to help them evaluate whether it’s a potential moat or a detractor for the start-up – and they may need support figuring out what’s hype and what’s real.
When I help VCs with technical due diligence focused on AI (why me?), I typically wear the customer hat (e.g., does this solve my pain point and how?), and the MLE hat (e.g., if I had to develop this myself, what do I need to watch out for?). Because the code we write today is tomorrow’s legacy code, I focus my assessment on progress toward the solution and milestones you’re pitching, not perfection.
As a result, there are nine key areas I’ll typically discuss with you. Because every start-up, product offering, and technical implementation is different, the 9 focus areas are not an exhaustive list, or a list of required questions, but a jumping-off point for additional things to discuss as they come up. Having said that, it should be enough to help you prepare to put your best foot forward during technical due diligence focused on AI.
#1 Revenue Risk
Is the proposed revenue model actionable? That is,
How does it align incentives between the company and customers? Do the customers have the budget to pay for the product/service, given your revenue model?
(If in Health/MedTech) How does it tie into the current quality of care practices?
Are all of the contracts and policies in place to be able to execute the proposed revenue model? While I won't be reviewing the contracts, can you walk me through, at a high level: what the revenue model is and what's covered in the agreements with customers, vendors, and affiliates?
For example, if there's a play to resell customer data, do you have permission from customers, vendors, and affiliates, where appropriate, to do so?
Are there any other potential risks to reaching the milestones you propose?
#2 How is AI Core to Product?
How is your company becoming more efficient and scalable while -- at the same time -- bringing your customers more value, with the help of AI? e.g., How is AI a means-to-an-end?
#3 Show, Don’t Tell
You’ve pitched and discussed how you solve your customers’ pain point(s) end-to-end – and said that you have a working product.
Please demo your solution to your customers’ problem(s)! Walk me through it start-to-finish: from getting started (including any on-boarding, if it exists), to reaching your customers’ end goal, whether that’s helping with a diagnosis – or making a real-time recommendation.
Please ensure your audience can follow along -- and can see themselves using it, or get excited about your potential customers using it. Consider UI over Terminal demos whenever possible.
Screenshots are not Demos
I often see demos that are just app screenshots. I’d like to know what’s actually developed and what works, which may not always be the same thing…
If the product makes real-time recommendations, I’d love to see it live! I may ask to see how the product responds to specific, typical customer behavior in the product. I’d love to see what still works and what errors out or silently fails, to help understand your knowledge of your customers and how robust the technical implementation is.
If you’ve introduced your audience to a specific solution(s) to your customers’ problem(s) in prior meetings, I recommend starting with a demo that touches on that, and we’ll dive in from there.
If you haven’t shared specifics in prior meetings, consider adding it to your pitch deck! I may also ask you to walk me through how to solve a common request your customers may have based on what you’ve pitched previously.
#4 Glaring Data Bias
Depending on what you’re trying to predict – and how, if there’s a fundamental bias in your data -- or you're potentially missing key data sources, your ML may be moot. Be ready to discuss this!
For example, say you’re trying to predict whether a condition exists. You get data for this when a patient gets a referral or sees a specialist for the specific test(s), and the specialist and your start-up examine the results of those tests. Knowing that the patient got the referral to see the specialist, and/or just knowing that they got the specific test, will be very predictive of the condition (!). To better understand this problem, called "data leakage," and how to prevent it, please see this blog post.
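To make this concrete, here is a minimal sketch of the leakage pattern above. All feature names and values are invented for illustration: the `got_referral` feature is effectively a proxy for the label, so a "model" that memorizes it looks perfect on this sample while telling you nothing about the real population.

```python
# Hypothetical illustration of data leakage (all names/values invented).
# Patients only appear in this data *after* a specialist referral,
# so "got_referral" is almost a proxy for the diagnosis label itself.

rows = [
    # (got_referral, other_signal, has_condition)
    (1, 0.9, 1),
    (1, 0.2, 1),
    (1, 0.7, 1),
    (0, 0.8, 0),
    (0, 0.1, 0),
]

def leaky_predict(got_referral, other_signal):
    # "Model" that just memorizes the leaked feature.
    return 1 if got_referral else 0

accuracy = sum(leaky_predict(r, s) == y for r, s, y in rows) / len(rows)
print(accuracy)  # 1.0 on this biased sample -- but meaningless in production,
                 # since the referral happens *because* a condition is suspected.
```

In production, where you want to predict the condition *before* any referral exists, this feature is unavailable (or absent), and the apparent accuracy evaporates.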
#5 AI Risk
Please be ready to discuss AI scope, focus and risks as part of customer on-boarding and as a part of everyday customer workflow, where appropriate, including:
How are you tailoring the data, model and outputs to each customer, industry and use case, as you on-board each customer and support them over time?
How and when does re-training happen?
How do you deal with inaccurate predictions or results? What's done automatically? What has a human-in-the-loop?
#6 Efficiency and Costs of Cloud Storage and Computing
Come prepared to walk me through your data stack and cloud costs, especially if you store or analyze data for your customers and/or your product is based on a high volume of data, such as IoT (including sensors and satellite imagery), ad-tech, or healthcare that accesses EHRs. Do you pay for every data touch-point, from storage to computation to download? I’d be curious what those monthly costs are, at a minimum, for each focus area. Is that $10K+ per month? More? Less?
Did you know that it costs OpenAI $700K per day, to run ChatGPT?!
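A back-of-the-envelope cost model is a good way to prepare for this conversation. Here is a minimal sketch; all unit prices below are made-up placeholders, not real cloud rates, so substitute your provider's actual pricing:

```python
# Back-of-the-envelope monthly cloud cost estimate.
# All unit prices are placeholders (assumptions), not real cloud pricing.

PRICE_PER_GB_STORED = 0.023      # $/GB-month stored (assumed)
PRICE_PER_GB_EGRESS = 0.09       # $/GB downloaded (assumed)
PRICE_PER_COMPUTE_HOUR = 3.06    # $/hour for a GPU instance (assumed)

def monthly_cost(stored_gb, egress_gb, compute_hours):
    """Sum the per-touch-point costs: storage + egress + compute."""
    return (
        stored_gb * PRICE_PER_GB_STORED
        + egress_gb * PRICE_PER_GB_EGRESS
        + compute_hours * PRICE_PER_COMPUTE_HOUR
    )

# e.g. 50 TB stored, 5 TB egress, 720 GPU-hours (one instance, full month)
print(round(monthly_cost(50_000, 5_000, 720), 2))  # ~3803.2 under these made-up rates
```

Even a rough model like this lets you answer "is that $10K+ per month?" with a number and show which line item dominates as you scale.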
#7 Maintenance
Getting real-time analytics/AI into the product is never a one-and-done. To make sure that AI is adding value and helping the company grow, maintenance is key. This includes:
Infrastructure and people/teams to help integrate AI into the product.
Ability to alert when models are out-of-date/out-of-sync and need to be updated – and to re-train the model(s).
People/teams on-call for support if things aren’t working.
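The alerting item above can be as simple as a staleness-and-quality check. Here is a minimal sketch, where the retraining cadence, accuracy floor, and function names are all assumptions for illustration:

```python
# Minimal model-staleness check (illustrative; thresholds are assumptions).
from datetime import datetime, timedelta

MAX_MODEL_AGE = timedelta(days=30)   # assumed retraining cadence
MIN_ACCURACY = 0.85                  # assumed quality floor

def needs_retraining(trained_at: datetime, recent_accuracy: float,
                     now: datetime) -> bool:
    """Flag the model for retraining if it's stale or its live accuracy
    (measured against delayed ground-truth labels) has degraded."""
    too_old = now - trained_at > MAX_MODEL_AGE
    degraded = recent_accuracy < MIN_ACCURACY
    return too_old or degraded

now = datetime(2023, 8, 1)
print(needs_retraining(datetime(2023, 6, 1), 0.91, now))   # True: stale
print(needs_retraining(datetime(2023, 7, 20), 0.80, now))  # True: degraded
print(needs_retraining(datetime(2023, 7, 20), 0.91, now))  # False
```

A check like this, wired to an alert, is the difference between "we retrain when someone remembers" and a defensible maintenance story.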
Bonus points if you score highly on the ML test. I may not ask for your score, as it’s a very high bar to pass, and none of the companies I’ve worked at -- or start-ups mentored -- have gotten there yet (!). We may touch on topics outlined in the test as they relate to your product, to evaluate how prepared you are to defend the AI moat that’s helping your company grow.
#8 The Right Team
Since AI is core to the product, do you have the right people for the job – or a plan to hire them – to help you defend this AI moat? This may include any fractional leaders, full-time employees, remote and off-shore contractors, and/or talent to help align AI to answering business questions, implementing algorithms, and MLOps.
#9 Potential Gaps in Solution
Depending on how your previous conversations went, this may become a big focus area of due diligence. It's one of the topics I discuss with VCs, to help us scope out AI due diligence; e.g., what hasn't been clear -- and are there any priorities/gaps they'd like to see us dig into?
It may also be the case that the solution you proposed to solve your customers’ problem(s) seems, at first glance, to either be:
making too many assumptions about the customers’ habits – or, on the flip side,
(most often) too technical to understand, and/or
(most relevant, if any apply) one of the 9 combinations of "promise, complexity, and risk" around AI discussed in this blog post, and/or
showing gaps in business/product/other expertise.
Be prepared to dive into this, at a 10K-foot view along with some details!
Hope this helps you close your round faster! Good luck!
Next Steps for More Support
- VC Funds: Calendly link to learn more about your fund, and discuss the scope for the next AI due diligence you need support with.
- Start-ups: To help you tell your AI story, please schedule this (flat-fee) session with me.
- Everyone else: Please complete the Google form.
Frequently Asked Questions
(From LinkedIn discussion of this post) How do startups that are thin wrappers around OpenAI pass through due diligence?
Great question! It would really depend on (1) "how thin", (2) how differentiated it is right now from OpenAI around what it's solving for its customers, and (3) how easy/hard it is for OpenAI to do the same.
What's the appetite of VCs funding an AI-driven SaaS company?
Another great question! It really depends on how key AI is to solving the customers' pain point(s) and helping the company scale, along with all of the advice shared here and in my advice on pitch decks.
You may also like
Jason Yeh's series on "What's The Deal With Due Diligence?"
National Association of Corporate Directors (NACD) on AI:
The [Board] Director's Playbook for Generative AI, by Jim DeLoach for NACD
Director FAQs and Essentials: Director Essentials: AI and Board Governance, by NACD and Data & Trust Alliance Staff
Cartoon: XKCD (#1838)