How Large Companies Evaluate AI Startups
As an AI Consultant, I often have to evaluate AI startups to determine if their solutions would make sense for us and if they can become a real business partner over the long term. Our process for evaluating startups is more or less similar to what VCs call due diligence.
This need to evaluate ML startups stems both from the fact that we must digitalize our processes to maintain our position in the industry, and from the multiplication of startups whose tagline is often a mix of the words AI, algorithms, deep learning, platform, deployment, etc.
After several successful collaborations, I started to better notice the early signs of a failing ML startup and understand the issues of Machine Learning. In this article, I will present some of the most frequent problems I have encountered.
Despite the hype, relying on Machine learning is not always the best solution for business issues. In many cases, we have been disappointed by startups …
Before getting into details, I would like to define what I mean by failing. My definition covers startups that either will not deliver what was expected, due to the nature of Machine Learning, or will not grow into a viable, self-sustainable business and will never truly find a good business model. They may still get acquired by a larger company, but they won't survive as standalone companies.
From a technological point of view, I see a growing number of startups working on exciting business applications using “new” AI approaches such as Zero-shot learning, GANs, or Federated Learning but when it comes to making businesses more profitable, they struggle just as much as a non-tech startup.
Machine Learning vs …
The first thing that comes to my mind is that a significant number of AI startups tend to underestimate both their non-AI and AI competition. In reality, for many business issues the competition is fierce and the market is already quite crowded when it comes to AI applied to specific business problems (e.g., Machine Learning to predict customer churn).
To be honest, Machine Learning in itself doesn't give you much of an advantage: the real challenge is selling the solution to users and meeting their needs precisely. Most companies will need a custom-made solution and cannot dedicate extra resources, both financial and time-related, to the management of a so-called "easy-to-handle AI solution".
Pilot projects issue
The other issue I noticed is that most AI teams start with pilot customers, which is already quite a challenge. I can assure you that it is extremely difficult to convince a large firm to trust an unknown startup.
Before agreeing to build a PoC with an ML startup, we ask a series of questions.
The investment is significant (typically between 20k and 150k USD, plus maintenance costs) for obvious reasons, since the solution will probably be custom-made. For startups, it is a great way to finance the development of the product over the long term and to gain access to more data, which is crucial.
A white paper released by Pactera Technologies states that 85% of AI projects fail. Oops.
However, these pilot projects often solve very specific problems that are difficult to transfer to other companies and use cases. Unfortunately, most large firms will require a high level of customization, which can be difficult for startups to achieve. This difficulty in adapting pilot projects to other customers can become a real issue: few startups can survive without regularly "shipping".
ML-based solutions usually need to be retrained regularly to adapt to the dynamic nature of data, and their algorithms must be written and tweaked for specific customer or application workloads.
When it comes to the project itself, it is often difficult for startups to go beyond the PoC stage. Numerous reasons can explain the failure of AI projects, but I'll focus on accuracy expectations, production, management costs, edge-case management, and decision management.
I am usually skeptical about the following startups:
- AI startups promising 100% accuracy. Such claims are suspicious: do not expect perfect accuracy, particularly for complex decision making. Moreover, do not forget that accuracy can drop over time and with market or societal change (e.g., the impact of Covid-19 on your dataset).
- AI startups that do not define constraints for when the AI will be allowed to make decisions autonomously, when it will be supervised by a human, and when it will simply guide a human making a decision.
- AI startups that do not try to anticipate or better understand edge cases. Depending on your business issue, you might be confronted with edge cases, and the effort required to handle them can be significant in terms of resources and can ultimately ruin your project.
- AI startups that rarely mention the management of AI applications once in production. The absence of a plan to deal with data drift, a black swan event (Covid-19), or use of the AI outside its valid domain can be a major setback. It is crucial to update models frequently and to introduce new, competing models. Governance around deployment and alerts should also be strongly considered.
- AI startups unable to answer the following questions: How often does the solution get updated? When do they intend to integrate new data to make the solution more accurate over time?
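To make the data-drift point above more concrete, here is a minimal sketch of the kind of monitoring I would expect a production-minded startup to have in place. It computes a population stability index (PSI) between a feature's training-time distribution and its live distribution; the function name, thresholds, and synthetic data are my own illustration, not taken from any particular vendor.

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare the distribution of a feature at training time
    (expected) with its distribution in production (observed).
    PSI < 0.1 is usually read as stable, > 0.25 as drifted."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    o_counts, _ = np.histogram(observed, bins=edges)
    # Convert counts to proportions, clipping to avoid log(0)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    o_pct = np.clip(o_counts / o_counts.sum(), 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # feature values seen at training time
live = rng.normal(0.5, 1.0, 10_000)   # production values with a shifted mean
psi = population_stability_index(train, live)
print(f"PSI = {psi:.3f}")  # the shifted mean pushes PSI past the 0.1 threshold
```

A check like this, run on every scoring batch with an alert on the threshold, is a cheap first answer to the question "how will you know when the model has gone stale?".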
On top of these elements, failing AI startups often underestimate the importance of one word: SCALABILITY. I agree with Mike Lynch, founder of Invoke Capital, when he says that the reality of the current AI hype is that many of the companies that succeed in securing funding show a woeful lack of understanding of how difficult their problem is and of how to test their AI for robustness in the real world.
Too many times, I have seen promising AI startups land an interesting pilot project with a large firm but later fail to create a solid business model or a performing sales team, or underestimate how hard it is to sell to large firms and to scale.
The long purchasing cycles of large firms can put the life of an AI startup at risk. I have met many CEOs who couldn't afford to wait for large firms to figure out whether they needed an AI solution. Below are the most common threats I have observed for AI startups.
As a result, most AI startups I have dealt with struggle to find the right balance between doing projects for early customers and investing in a scalable product (large revenue). I share the opinion of Stefan Seltz-Axmacher, CEO of Starsky Robotics, when he says that "many AI teams end up as engineering consultants. They make good revenue with clients, but do not really sell a product at the end."
Beyond my perspective as a prospect for AI startups, I know too well that VCs do not view this situation positively. Their goal is to see scalable products with limited implementation costs.
In reality, cloud operations can be more complex and costly than traditional approaches, particularly because there aren’t good tools to scale AI models globally.
Once again, I agree with Mike Lynch when he says “Although AI techniques are useful in solving certain problems, they are not yet applicable in every case. Adding an off-the-shelf algorithm to old software is not necessarily going to teach it new tricks.”
Financial Margin vs SaaS
Another interesting aspect is the relatively low average margin of AI startups. It is often in the 50–60 percent range, well below the 60–80 percent benchmark for comparable SaaS businesses.
Perhaps this “low” margin can be explained by the high costs of cloud compute time for ML and the human resources required to clean up the data needed to train and maintain the accuracy of AI systems.
Even though the human work required to clean the training data can be outsourced to third parties, these services can represent a significant investment, even when outsourced to low-income countries, and can become a privacy challenge.
I also realized that data preparation steps are often underestimated in AI projects. The simple tasks related to collecting and cleaning data can end up becoming long and frustrating processes. This can be explained by miscommunication and bad expectation setting.
Deployments tend to also be underestimated…
Machine Learning vs Expectations
The greatest issues we have with ML are related to edge cases, human interventions, and interpretability. At the beginning of a project, we might believe that edge cases or the need to explain results will never become a significant issue and yet…
Edge cases: Every AI application inevitably encounters scenarios in which the solution does not perform as expected.
The more accurate your model becomes, the harder it is to find datasets covering the specific edge cases that are key to the project's overall performance. Additionally, the better your model, the more accurate the data you need to improve it. This constant need for data is often underestimated by both sides.
An easy mistake is to expect exponential improvements in ML performance. In most of the computer vision projects I managed, the cost of improving accuracy increased over time, following an S-curve. AI startups that promise you 100% accuracy after only a couple of weeks, thanks to supervised Machine Learning, often end up underperforming.
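The diminishing returns described above can be sketched with a toy cost model. The numbers and the function below are purely illustrative assumptions of mine (a common rule of thumb is that halving the error rate requires a roughly constant multiple of additional labeled data), not figures from any real project:

```python
import math

def labeled_examples_needed(target_error, base_error=0.20, scale=5_000):
    """Toy model: each halving of the error rate costs roughly a
    fixed amount of extra labeled data, so the marginal cost of one
    more point of accuracy keeps growing. Illustrative only."""
    return int(scale * math.log2(base_error / target_error))

for err in (0.10, 0.05, 0.01):
    print(f"error {err:.0%}: ~{labeled_examples_needed(err):,} extra examples")
```

Under these made-up numbers, going from 20% to 10% error costs about 5,000 examples, while reaching 1% error costs more than four times that, which is why budgeting for "the last few percent" up front matters.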
Issues with final accuracy steps