Dear Advisor: What should (not) be YOUR AI roadmap?

(or) Why You Don't Need AI in your SaaS MVP

This post was originally published in April 2022 and was updated in May and August 2022 for relevance.

It seems that every start-up is doing AI, and getting VC money for it! How can you get started?

I’m flattered that start-ups recognize my expertise in AI – and that they reach out for support on AI strategies and roadmaps. It seems that everyone else is telling them to add AI capabilities to their MVPs.

I think it’s too early! They’re usually surprised to hear this -- especially when they're ready to hire me and I talk them out of it (!) by sharing this contrarian advice and the reasoning behind it. This tends to build rapport. Then we end up tabling discussions of AI capabilities in favor of strategies for developing data-driven products that understand and solve customers' pain points. Here’s what I typically share.

Part 1: Definitions

First, let’s start with my definition of “AI”.

AI: A software tool to help you automate outcomes you understand, by predicting when they may happen next, based on observed events.

Let’s unpack that.

  • Outcomes you understand: That is, what are those repeatable customer actions, interactions, attributes, etc. that help you infer that they’re reaching the desired outcome, or – even better – that this outcome signals a high-valued, ideal customer?

      • This assumes that you know what that outcome is and understand (at least the 10K-foot view of) the process to get there.

  • Automate: That is, do you have enough signals in the data about your customers to automatically identify outcomes of interest? To prevent “garbage in, garbage out”, I don’t recommend automating a process you don’t understand, as it will be really hard to validate and justify the approach when results inevitably look “weird”.

  • Predict: I highly recommend first understanding what’s happening in your product and with your customers now and historically (e.g. doing descriptive analytics), before tackling predictions of what will happen with your product and customers in the future (e.g. predictive analytics or “AI”).

  • Observed events: The predictions are only as good as the data that’s fed into the model. Are you collecting everything you’re ethically and legally allowed to, about your products, customers and their engagement?

  • Software tool: AI is a tool that can try to find signals from the data it’s given, based on assumptions about what we’re trying to find; it’s not a silver bullet.

      • This also assumes that software -- and data -- are core to your business. If not, it will be harder to build, iterate and incorporate infrastructure to personalize and scale the service.

If this sounds hard to do, it is. If you’re not able to answer the above questions affirmatively, that’s OK. You’re too early for AI! Please see the next section for advice on how to start getting ready.

There are many different opinions on what should and shouldn't be in scope for an MVP. In my mind, an MVP should do the following:

MVP: (Typically) A mock-up/wireframe to help you demo and test out the idea of your product, including:

  • who it should resonate with,

  • what problem you’re actually trying to solve,

  • what is the smallest step to solving this problem,

  • how to begin aligning incentives of solving your customer’s pain points with your product offering and pricing,

  • all of which will also help you decide what to do next.

That's not an easy ask. Because the goal of an MVP is (typically) a product that solves a small part of your customer’s big pain point, that – at the same time – also adds value and builds rapport with your customers, you’ll still be learning about your customers at this point. Especially in the beginning, there may not be enough signal – or enough customers – or enough runway – to develop a predictive model with actionable outcomes to uncover/confirm these trends for you. It’s too early for AI!

Part 2: Alternatives to AI for MVP

I’ve just told you AI should not be a high priority early on – and maybe you mostly believe me :). While it may be too early for AI, it’s never too early to be data-driven. Here are some recommendations (in no particular order) on how to set up your start-up for success in making data-driven decisions, which may in the long run pave the way to AI.

Approach 0 (tackling the cold start problem): Is there another industry (or even competitor) that you can borrow the initial hard-coded/rules-based recommendations from, to get your customers to start engaging with the platform?

  • Example: If you're a new start-up trying to provide personalized gift recommendations, consider starting with recommendations from popular competitors, by occasion. That is, which top 10 items can you pull from Amazon, Harry and David, <other competitors>, to formulate your Top 10 recommendations for <occasion X>.

  • In a real-life example, the CEO of Whatnot (YC '20) shares with YC what they did to tackle the cold start problem for their marketplace.
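To make this concrete, here is a minimal sketch of what hard-coded, rules-based seed recommendations might look like. All occasions and item names below are hypothetical placeholders, not real competitor data:

```python
# Hard-coded, rules-based recommendations to bootstrap engagement
# before any customer data exists. Occasions and items are made up
# for illustration only.

SEED_RECOMMENDATIONS = {
    "birthday": ["fruit basket", "scented candle", "gift card"],
    "anniversary": ["flower bouquet", "chocolate box", "photo frame"],
}

# Fallback list when we have no seed data for an occasion.
DEFAULT = ["gift card", "flower bouquet"]

def recommend(occasion: str, limit: int = 10) -> list[str]:
    """Return the top seed items for an occasion, falling back to defaults."""
    items = SEED_RECOMMENDATIONS.get(occasion.lower(), DEFAULT)
    return items[:limit]

print(recommend("birthday"))  # → ['fruit basket', 'scented candle', 'gift card']
```

The point is that a plain dictionary lookup is enough to start collecting engagement data; once customers click (or ignore) these seed lists, you have your first signals to refine against.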

Approach 1: Take a step back and reflect on what your product would look like at scale, with product market fit – and try to work backwards from there. That is, what would you like to be able to do, when you reach this end goal?

  • That is, to get to <end goal> we need to be able to do X, Y and Z. To get to X, we need to unblock A, B and C, etc. To get to A, …

  • This then becomes a loose roadmap to get there.

  • Notice that the steps (and roadmap) will be outcome-based, not method- (such as “AI”) based.

Approach 2: Evaluate your capability to make data-driven decisions based on historical data, by trying to answer a currently pressing business question, following the advice here:

  • One (simplified) way to do this is by:

      1. tracking everything,

      2. looking for drop-offs and reducing that friction (here's an example of reducing friction);

      3. then looking to see what's repeatable, and

      4. focusing on understanding the most recent historical events, before predicting future outcomes.

  • It’s never too early to start collecting data; I recommend starting as early as your Alpha release (!). This way, (ideally) from the very beginning – or as early as possible – you’ll be able to see (and evaluate whether you can see) engagement, and especially repeat engagement, on your platform. This will save you literally millions of dollars as you scale!

      1. Here are the types of data to (legally and ethically) collect

      2. It’s OK if you examine the engagement manually at this point (or with whatever tool you’re most comfortable). The goal here is to figure out what’s going on in the business first, before automating summaries about it.
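The "track everything, then look for drop-offs" steps above can be sketched in a few lines. The event names and customer IDs here are invented for illustration; a real event log would come from your own instrumentation:

```python
# Toy event log: (customer_id, event_name) pairs.
# Event names are hypothetical examples of a SaaS onboarding funnel.
events = [
    ("c1", "signup"), ("c1", "create_project"), ("c1", "invite_teammate"),
    ("c2", "signup"), ("c2", "create_project"),
    ("c3", "signup"),
    ("c4", "signup"), ("c4", "create_project"),
]

# Ordered funnel steps to inspect for drop-offs.
FUNNEL = ["signup", "create_project", "invite_teammate"]

# Distinct customers who reached each step.
reached = {step: {c for c, e in events if e == step} for step in FUNNEL}

prev = None
for step in FUNNEL:
    n = len(reached[step])
    if prev:
        # Conversion relative to the previous step highlights where friction is.
        print(f"{step}: {n} customers ({n / prev:.0%} of previous step)")
    else:
        print(f"{step}: {n} customers")
    prev = n
```

In this toy funnel the biggest drop-off is between creating a project and inviting a teammate, which is exactly the kind of friction point Approach 2 says to investigate before reaching for predictive models.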

Approach 3: Identify any repeatable steps/processes/decisions – these can be very small – that you can automate with a rules-based/workflow approach (such as via Zapier, non-affiliated) to make simple suggestions of next steps or outcomes. This will be the smallest step you can take now, to help you scale your product offering.

By identifying these rules-based steps or workflows, you in turn will be creating “labels” of desired outcomes. In the long run, if these outcomes are of interest, you’ll have labeled customer and/or product attributes to work with, to help you validate an “AI” predictive model.

  • In the ML/AI world, this becomes a “supervised” approach, as the “labels” are guiding/supervising model estimation. That is, it becomes easier to evaluate model performance when there is an outcome/label to compare/predict against (vs an “unsupervised” approach, which doesn’t have “labels”).

  • A canonical example of this is customer lifetime value (LTV). That is, we’d love to be able to target our Marketing spend toward those who most look like our current high-valued customers.

      1. Unsupervised approach: We don’t have any customer purchases yet. Let's predict customer LTV from their activity on our platform, to see what attributes we may need to target.

      2. Supervised approach: Let’s focus on understanding our customers who’ve made purchases totaling 3x+ CAC in their first year on our platform. Where did we find them?

  • It may seem like a chicken-and-egg problem: you can’t automate a workflow if you don't know your customers – and you don’t know your customers if you don’t know what outcomes you’re looking for.

  • By tailoring your MVP to add value by solving one (small, lovable) aspect of your customers' pain point, you get them excited about the solution and your product – and break this chicken-and-egg cycle; in turn, you’ll capture their engagement, which helps you prioritize the next feature/pivot.
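The supervised example above – labeling customers whose first-year purchases total 3x+ CAC – can be expressed as a simple rule. The CAC value, channel names, and customer records here are all made up for illustration:

```python
from collections import Counter

# Hypothetical customer acquisition cost (CAC), in dollars.
CAC = 50.0

# Toy customer records; channels and purchase totals are invented.
customers = [
    {"id": "c1", "channel": "referral", "first_year_purchases": 200.0},
    {"id": "c2", "channel": "paid_ads", "first_year_purchases": 40.0},
    {"id": "c3", "channel": "referral", "first_year_purchases": 180.0},
]

def label_high_value(customer: dict, cac: float = CAC) -> bool:
    """Rules-based label: first-year purchases total at least 3x CAC.

    These labels can later serve as ground truth for a supervised model.
    """
    return customer["first_year_purchases"] >= 3 * cac

labeled = [{**c, "high_value": label_high_value(c)} for c in customers]

# "Where did we find them?" -- count high-value customers per channel.
by_channel = Counter(c["channel"] for c in labeled if c["high_value"])
print(by_channel)  # → Counter({'referral': 2})
```

The rule itself is trivial, and that's the point: you get interpretable labels today, and a validation set for any future "AI" model, without training anything.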

Part 3: What if I'm still not convinced – and still need it?

That’s OK! This is just one person’s opinion. :)

If you do decide to continue developing AI for your MVP, I’ll leave you with parting advice. Remember:

  • You don’t need an AI roadmap for your MVP (and beyond); you need a roadmap for how to scale your product offering. One way to do this is to work backwards from the end goal; another is via PLG (as discussed above).

  • “AI" is a software tool to predict outcomes from signals; it's not perfect – and takes time to develop. Do you have enough quality data – and enough runway and time to do so? It may take many months (and maybe even years) to develop an algorithm to get the accuracy needed for real-time predictions.

  • Try to avoid the most common AI mistakes start-ups make.

  • Whatever you develop, you’ll have to maintain and support; if it’s not core to your product offering, I recommend buying an out-of-the-box vendor solution for it.

Part 4: You’re a Product Market Fit and AI consultant. How does AI come into play for PMF? Where do you come in?

Great question! Remember: the goal is not getting AI to work, but scaling your product offering. We’ll start small:

  1. We'll discuss what the short- and long-term goals are for your product, and what you think is the next (big) step for your product.

  2. We’ll discuss strategies for how we can go about (a) identifying and/or solving our customer pain points, if we aren’t there already, or (b) how to work backwards from your end goal in (1) above.

  3. We’ll use approaches in Part 2, to build up our knowledge to understand our customers, their pain points, and how they are (and aren’t) using the product offering to solve their needs, to learn about what can be improved. To help us focus on scope and outcomes, we'll:

      1. Try to answer the most pressing business question while simultaneously evaluating our data quality. We may need to go back to previous steps based on our findings.

      2. Focus on identifying and understanding the high-valued customers and their pain points and improving the product offering around that.

  4. And we’ll iterate! That is, we may need to go back to previous steps and update our hypothesis based on our findings. With each iteration, the goal will be to check that it’s resonating more and more with our core customer base, improving our product market fit.

AI may (or may not) be the tool to get us there, based on the customer’s needs as well as the business and technical requirements to make real-time decisions.

I’m here to guide every step of the way: to help you grow your product with market fit, and discuss trade-offs and strategies for AI once you know what's happening in your business currently and historically. Please reach out.
