What are some of the Biggest ML/AI Mistakes I Continually see Start-ups Make?
This post was originally published in June 2020 and updated in August and September 2020 for relevance.
I hosted a "making data-driven decisions" office hour on the 805 Startups Discord server on June 15th. One of the questions co-founder Gary Livingston asked was: what are some of the biggest mistakes you continually see startups make?
Without throwing anyone under the bus, here are the recurring themes I've seen in the last 5+ years.
1. It’s impossible to get insights from data you didn’t collect. Are you tracking everything about your business?
Assessment: Do you know how many active users there are on your platform today?
(On the flip side) Tracking everything about the business, but across many different data providers and vendors that don't link to one another.
Assessment: Do you know how many new active users there are on your platform today?
2. Too early for ML -- all of the above and:
Starting with ML/AI (e.g. predictive analytics), before understanding what's happening with customers and product historically and now (e.g. descriptive analytics).
Assessment: Is a Data Scientist one of your first technical hires?
Assessment: Do you currently know who cancelled your service within the last week, and how you acquired that customer in the first place?
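The two assessments above are descriptive analytics, and they don't require any ML. A minimal sketch of the second one, assuming hypothetical cancellation records and an acquisition-channel lookup (the names and rows here are illustrative, not from any real system):

```python
from datetime import date, timedelta
from collections import Counter

# Hypothetical export: in practice these rows would come from your
# product database or analytics vendor.
cancellations = [
    {"user_id": 1, "cancelled_on": date(2020, 6, 12)},
    {"user_id": 2, "cancelled_on": date(2020, 6, 1)},
    {"user_id": 3, "cancelled_on": date(2020, 6, 14)},
]
acquisition = {1: "paid_search", 2: "referral", 3: "organic"}

def cancellations_last_week(events, channels, today):
    """Who cancelled in the last 7 days, broken down by acquisition channel."""
    cutoff = today - timedelta(days=7)
    recent = [e for e in events if e["cancelled_on"] >= cutoff]
    by_channel = Counter(channels[e["user_id"]] for e in recent)
    return recent, by_channel

recent, by_channel = cancellations_last_week(
    cancellations, acquisition, today=date(2020, 6, 15)
)
```

If answering this takes more than a query or two, that's a sign the descriptive layer isn't in place yet.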
3. Treating ML/analytics as a Silver Bullet -- all of the above and:
Doing analytics for the sake of analytics, without understanding how the result will solve the business's or customer's pain point, how the stakeholder will use it to solve that pain point, and what data is or isn't available.
Assessment: Are you trying to do ML for a process that you don't understand?
Inadvertently scoping the MVP of a product/feature so that it requires better-than-state-of-the-art ML.
Example: Many pitch decks that end with "... and we'll use AI to do this" :(
Buying a multi-year vendor license without doing an internal POC to see if the software actually solves the problem you bought it for.
4. Executing on ML products as if they're software engineering tasks -- all of the above and:
Thinking you have clean data :)
Developing biased models because of biased data or 4 other reasons.
Example: IBM, Amazon and Microsoft stop selling "general purpose" facial recognition (2020)
Example: Gender-recognition technology and its bias (2019)
Example: Amazon's AI recruiting tool showed bias against women (2018)
Not treating ML as data products that you scope down and iterate on, from proof of concept (POC) to v1, v2, etc.
Assessment: Do you have an ad-hoc/simple model that answers your business question, that you can compare + evaluate the next iteration of the ML model against?
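One sketch of what that comparison can look like, assuming a hypothetical classification task: a majority-class heuristic as the ad-hoc baseline, and a promotion decision gated on beating it (all data and the `candidate_preds` below are made up for illustration):

```python
from collections import Counter

def majority_baseline(y_train):
    """Ad-hoc baseline: always predict the most common historical label."""
    most_common = Counter(y_train).most_common(1)[0][0]
    return lambda X: [most_common] * len(X)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical held-out labels and a candidate model's predictions.
y_train = [0, 0, 0, 1, 0, 1]
X_test = [None] * 4             # features unused by the baseline
y_test = [0, 1, 0, 0]
candidate_preds = [0, 1, 0, 0]  # output of the "next iteration" model

baseline = majority_baseline(y_train)
base_acc = accuracy(y_test, baseline(X_test))
model_acc = accuracy(y_test, candidate_preds)
# Only promote the new iteration if it beats the simple baseline.
ship_it = model_acc > base_acc
```

The point isn't the accuracy metric itself; it's that without a baseline you have no way to tell whether the fancier model earned its complexity.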
Asking for a guarantee on ML model performance (based on ML metrics, KPI, etc.):
Guarantee that offline model performance will be at least X -- which is impossible to guarantee because it depends on data quality, or
Guarantee that live model performance will be just as good as, or better than, the offline model -- which is impossible to guarantee, because customer behavior or the product offering may change, the data collection/processing pipeline may break or change, the model may have (inadvertently) overfit the offline data, etc., or
Guarantee that a live model will never need to be updated
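The "thinking you have clean data" point above is worth making concrete. A minimal sketch of the kinds of sanity checks that routinely fail on real exports, using a hypothetical user table (real pipelines would add type, range, and freshness checks):

```python
def data_quality_report(rows, required_fields):
    """Count rows with missing/empty required fields and exact duplicates."""
    report = {"total": len(rows), "missing": 0, "duplicates": 0}
    seen = set()
    for row in rows:
        if any(row.get(f) in (None, "") for f in required_fields):
            report["missing"] += 1
        key = tuple(sorted(row.items()))
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
    return report

# Hypothetical export: one duplicate row and one row missing an email.
rows = [
    {"user_id": 1, "email": "a@example.com"},
    {"user_id": 1, "email": "a@example.com"},
    {"user_id": 2, "email": ""},
]
report = data_quality_report(rows, ["user_id", "email"])
```

Running checks like these before any modeling usually surfaces surprises, which is exactly the point.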
5. Difficulty hiring DS/ML/AI candidates, most likely because job descriptions list software requirements rather than a 30/60/90-day plan of what the expectations and deliverables look like.
6. Not knowing about the Hidden Technical Debt in Machine Learning Systems
Assessment: Do you set aside time to tackle software engineering and machine learning debt?
7. Not understanding what the algorithm is doing.
Assessment (start-up): Can you give a 1-2 sentence overview of what the algorithm is doing?
Assessment (data scientist): Can you give a 1-2 sentence non-technical overview of what the algorithm is doing, why you picked it, and why you chose the parameters you did?
8. Not executing on (aspects of) ML products as if they're software engineering tasks -- all of the above and:
Not testing and monitoring ML products in production
Assessment: How high do you score on the ML Test?
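One of the simplest monitoring checks from that family: compare the live model's average prediction to what you saw offline and flag large shifts. The threshold and numbers below are illustrative assumptions, not recommendations:

```python
def drift_alert(offline_mean, live_preds, tolerance=0.1):
    """Flag when the live mean prediction drifts far from the offline mean.

    A crude proxy for distribution shift; real monitoring would also
    track input features, latencies, and label feedback where available.
    """
    live_mean = sum(live_preds) / len(live_preds)
    return abs(live_mean - offline_mean) > tolerance, live_mean

# Offline, 20% of users were predicted likely to churn; live traffic
# suddenly shows 45% -- worth investigating before trusting the model.
alert, live_mean = drift_alert(0.20, [1, 0, 0, 1, 0, 1, 0, 1, 0, 0,
                                      1, 0, 1, 0, 0, 1, 0, 1, 0, 1])
```

A check this small still catches broken pipelines and silent behavior changes far earlier than customer complaints do.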
Nobody is perfect, has clean data, or has ML running with no downtime in production. Now that you know what to focus on, start small and iterate. If you need more support on what that process looks like for you -- or how to execute it, please reach out.
Keywords: AI, ML, start-ups, data strategy, data products, customer understanding