Embedding AI into your product

Posted: 5 Sep 2023

Artificial Intelligence is going through a big shift that can be intimidating and hard to navigate. No company wants to be left behind in this new world, and there’s a rush to put AI into every product imaginable. For people who haven’t worked with AI before, this can be daunting.

We’ve created this guide to help pre-product-market-fit companies build products that leverage AI. If you want a general overview of AI, you can read our post on “Defining AI” here.

It’s specifically written for companies that aren’t AI natives – those that are building software and want to understand how they can embed AI into their products.

“A year spent in artificial intelligence is enough to make one believe in God.”

Alan Perlis

Roles in AI products:

There are several roles involved in creating AI products. If you’re going to plug in third-party AI, you might only need software engineers to work with external APIs. However, if you’re considering building in-house, it’s worth understanding the different roles required:

  • Software Engineer: Write code for regular software, which is needed to integrate AI into user interfaces.
  • Machine Learning / Data Engineer: Productionise models for AI algorithms and make sure that data is accessible, secure, and cost-effective.
  • Data Scientist: Understand data, and design models and algorithms that solve business problems.
  • AI Product Manager: Leads the team in deciding what to build, understanding value and feasibility, and shepherding the team towards strong outcomes. This can be a “regular” PM, but the processes and technologies are different enough that specialised PMs are useful.

These roles and responsibilities will obviously differ between companies, but in an early stage startup, this is a good rule-of-thumb guide.

Deciding where to embed AI into your product:

Your product probably already uses AI in ways that users don’t even realise. Stripe uses AI for fraud detection; AWS uses it for autoscaling; and anything with a Siri integration uses it.

However, remember your users won’t care about AI in your product. All they care about is getting good value for low cost. So make sure that you don’t shoehorn in AI for the sake of it. Customers care about AI about as much as they care about the language your application was written in.

That said, here’s an exercise to understand where you might embed AI into your product.

Map out your value flow:

What are the different steps that are required for your product to create value for customers? What is that value and when do customers receive it?

Highlight opportunities to improve:

If you’re struggling to see where these might be, you can map out your product using Airbnb’s 11-star experience exercise. If you could hire thousands of people to improve the experience, where would you put them?

Where are the high-urgency, low-risk decisions:

These are your biggest opportunities for AI.

Low-urgency decisions can be left to humans, as they can get through a backlog at a reasonable pace. High-risk decisions (e.g. about healthcare or legal issues) need a lot of focus to get AI decisions right and to handle the associated regulations.

Any high-urgency, low-risk decisions are great places to consider embedding AI within your product.
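A minimal sketch of this triage, in code. The decisions and their urgency/risk labels below are purely illustrative, not real data:

```python
# Hypothetical sketch: triaging candidate decisions by urgency and risk.
decisions = [
    {"name": "route a support ticket", "urgency": "high", "risk": "low"},
    {"name": "approve a medical claim", "urgency": "high", "risk": "high"},
    {"name": "archive stale documents", "urgency": "low", "risk": "low"},
]

def good_ai_candidate(decision):
    """High-urgency, low-risk decisions are the best first AI candidates."""
    return decision["urgency"] == "high" and decision["risk"] == "low"

candidates = [d["name"] for d in decisions if good_ai_candidate(d)]
print(candidates)  # ['route a support ticket']
```

In practice you’d score urgency and risk on a scale rather than with labels, but the filter is the same: start where speed matters and mistakes are cheap.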

Check out what other people have built:

What third party applications are on offer for the job you want to solve? Even if you want to build in-house, the wider market will help you get inspiration to see what’s possible.

Work out how you’ll measure success:

Objectively measuring the impact of AI is important for understanding what’s really happening in your product. Merely having AI in your product is not a benefit to your customers, and (as with any powerful software) it can cause harm. Think deeply about what problem you’re attacking, and about how you’ll measure whether it’s actually being solved.

Buy vs Build:

The eternal software question of “should I build this, or use someone else’s service?” is very applicable to AI. The benefits of building it yourself are that you have control of proprietary technology, which gives you patent defensibility and reduces the risk of relying on another company.

The costs are substantial, however. Building a machine learning product is very different from building other kinds of software. It takes different skillsets, expertise, and processes. Release cycles are longer, observability is weaker, and fewer colleagues intuitively understand how it works.

I generally only recommend building AI in-house if you are willing to properly invest in the people and resources required to get it right. You can’t put a team on it for a few months and see how it goes. You need to hire specialised people and help them to work in a specialised way.

Those people are expensive, and demand for their skills will grow much quicker than supply. The same is true for hardware required, like GPUs. That’s why companies building their own models need to raise such large amounts of money.

A good rule of thumb is to ask yourself: is our use case relatively unique to our company? For example, every company needs to do customer support, so decent customer-support AI will likely emerge. However, there are very few companies that offer ridesharing like Uber, so there is unlikely to be an off-the-shelf solution for their pricing algorithms.

One general argument against building your own AI is that pre-PMF you likely won’t have troves of data that are both proprietary and useful. This means that building your own thing only makes sense if you can get useful data from another source and serve your use case better than third parties would.

At the end of the day, the best thing that you can do is to scout around the market and see what’s on offer. There will always be benefits and costs to whatever route you take, so it’s best to scope out your options.


Defensibility:

Defensibility is a big topic in any startup journey, and with AI it seems to have taken on new life. Investors are concerned about how AI companies can defend against upstarts and protect their gains over time. Having proprietary data to build unique AI capabilities seems to be the leading contender for AI defensibility.

I personally believe that before you have product-market fit, defensibility is a bit of a red herring. If putting AI into your product will help you get to PMF quicker, then do it. Equally, AI can be leveraged to bolster what we call product vectors of value (PVVs): ways of accruing product value at scale, such as network effects, PLG, or becoming a system of record. (If you want to read more about PVVs, you can do so here.)

Even if your particular use case can be copied by others, your focus should be on reaching product-market fit and building PVVs rather than on defensibility.

General tips for building with AI

Building AI-driven software is similar to regular software design in many ways. But there are a bunch of differences that it’s worth being aware of.

Think about what you’re optimising for:

AI is extremely good at certain things, but it needs to be told exactly what to do. With people you can be more ambiguous, because people can infer a lot from context. Let’s say you work at Uber and the dispatch AI is favouring certain drivers, which feels unfair.

A human might say “make the dispatch system fairer”. But then you need to define fairness. Is it giving the most revenue to the quickest drivers? Or the safest? Or making sure it’s distributed as evenly as possible? Or making sure each makes at least a certain amount of money? If so should that amount be per hour, or day, or year?

Being crystal clear on your strategy, the problem to solve, and what you really care about is important when building AI.
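To make this concrete, here is a hypothetical sketch of how two of those definitions of “fairness” would become two different objective functions for a dispatch optimiser. The driver names and figures are invented for illustration:

```python
# Hypothetical sketch: each notion of "fairness" is a different objective.
drivers = [
    {"name": "Asha", "earnings": 120.0},
    {"name": "Ben", "earnings": 80.0},
    {"name": "Cara", "earnings": 40.0},
]

def spread_of_earnings(drivers):
    """'Fair' = earnings distributed as evenly as possible (minimise spread)."""
    earnings = [d["earnings"] for d in drivers]
    return max(earnings) - min(earnings)

def shortfall_below_floor(drivers, floor=60.0):
    """'Fair' = every driver earns at least a floor (minimise total shortfall)."""
    return sum(max(0.0, floor - d["earnings"]) for d in drivers)

print(spread_of_earnings(drivers))     # 80.0
print(shortfall_below_floor(drivers))  # 20.0
```

An optimiser pointed at the first objective will behave very differently from one pointed at the second, which is exactly why “make it fairer” is not a specification.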

Keep an eye out for unintended outcomes:

Regular software is relatively unlikely to have wildly unintended outcomes. AI is extremely powerful, and if you give it power over decision-making then it might exhibit some bad behaviours.

For example, if you ask it to screen CVs, it might reinforce existing bias in the system. If you ask it to give rides to the most efficient drivers, it might make it impossible for anyone else to make money. If you ask it to maximise hospitality revenue, it might charge so much that no one ever comes back.

Make sure to forecast what these side effects might be and keep an eye out for them.


Watch out for overfitting:

If you are training your own models, keep an eye out for “overfitting”. Overfitting occurs when a model learns its training data too closely, picking up patterns that don’t generalise.

For example, imagine you mapped out the relationship between age and height, and only gave the machine data on children aged 1 to 15. If you then asked it to predict the height of a 40 year old, the machine might (understandably) assume that people kept growing and make an unrealistic prediction.

Make sure the data that you’re giving it is generalised to the problem it will be solving in future.
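The age/height example above can be reproduced in a few lines. This is a toy sketch with invented numbers, using an ordinary linear fit:

```python
import numpy as np

# Toy version of the age/height example: fit only on children aged 1-15,
# then extrapolate to age 40. Numbers are illustrative.
ages = np.arange(1, 16)            # training data: ages 1..15
heights = 70 + 6.5 * ages          # roughly linear childhood growth (cm)

slope, intercept = np.polyfit(ages, heights, deg=1)
predicted_at_40 = slope * 40 + intercept
print(round(predicted_at_40))      # ~330 cm: the model assumes growth never stops
```

The fit is “perfect” on the training range, and still produces a 3.3-metre-tall adult, because nothing in the data told the model that growth stops.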

Literature reviews:

When you build a new feature for your app, you might check out competitors to see how they do it. With AI it’s harder to understand what competitors have built, but you can see what academia has produced. Reviewing the academic literature shows you how scholars have tried to solve problems: some are general (like the travelling salesman problem), and some are more specific (like understanding Uber’s price surging).

Using models and techniques refined and shared by others is a powerful way to move faster when building your own AI systems.

Use humans:

There are two major ways that human agents can be used when building AI. The first is to use people to make the specific decisions that you want the AI to make. This will let you test out the broader system that the AI will sit in, while you build the AI itself.

For example, imagine you’re building Uber’s first automatic dispatching algorithm. It was previously done by people making phone calls, and you’re building a way to do it in the app via a machine. A great first step could be to keep having a person make the decision, but have them execute via the app rather than a phone call. This will get you learning faster about how the broader system works, even if you don’t have the AI ready yet.
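One way to structure this is to put the human behind the same interface the future model will use, so the rest of the system doesn’t care who is deciding. A minimal sketch, with hypothetical names:

```python
# Hypothetical sketch: human decision-making behind the future model's
# interface, so the surrounding system can be built and tested first.
from typing import Protocol

class Dispatcher(Protocol):
    def assign(self, ride: str, drivers: list[str]) -> str: ...

class HumanDispatcher:
    """An operator picks the driver in an internal tool; stubbed here."""
    def assign(self, ride, drivers):
        return drivers[0]  # stand-in for the human's choice

class ModelDispatcher:
    """Later, the model slots in without changing any calling code."""
    def assign(self, ride, drivers):
        raise NotImplementedError("model not built yet")

def dispatch(dispatcher: Dispatcher, ride: str, drivers: list[str]) -> str:
    driver = dispatcher.assign(ride, drivers)
    # ...notify the driver, update app state, log the decision
    # (those logs become training data for the model later)...
    return driver

print(dispatch(HumanDispatcher(), "ride-42", ["Asha", "Ben"]))  # Asha
```

As a bonus, every human decision made through the app is logged in exactly the format the model will eventually need.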

The second way is to use AI tools to augment humans. This is especially useful for high-risk decisions. Flying a plane will almost always require a pilot to keep it in the air when things go wrong, but automation is responsible for almost all in-air flying and, now, even landing. Keeping humans in the loop makes rolling out big AI changes much lower risk.

Examples of AI embedded into products

GitHub Copilot:
Combines GitHub (one of Microsoft’s best strategic acquisitions) and OpenAI (one of its best strategic investments) in a powerful way. Copilot is fantastic because it cements GitHub’s core proposition of making software development more efficient.

Spotify playlists:
Spotify worked well without AI, but it has leveraged AI to elevate the service even more and to double down on its value propositions. First with the “daily mixes” specifically for you and your mood, and most recently with the “enhance” feature for playlists that you made yourself. This is executed so well because it sits neatly inside the product, doesn’t get in the way of the regular experience, and helps you to listen to more music that you love.

Notion AI:
Two things are impressive about how Notion integrated AI into its product. First, how quickly they managed to do it. Second, how they managed to cover a multitude of use cases. The value of Notion is creating and organising knowledge in many different formats. This breadth of use must have made the AI integration complex on the back-end, but they managed to make it simple for users on the front-end.

Resources to stay up to date: