Defining AI

Posted: 5 Apr 2023

By this point, reams of words and pixels have been dedicated to unpacking, exploring and forecasting the future of AI. We’re not going to add to the speculation here, but for those who want a quick primer on all things AI, let’s take a brief trip around some of the key themes and technologies that underpin the AI revolution.

“An even shorter description of what AI could be: a way to make everything we care about better.”

Marc Andreessen

“A.I. will probably most likely lead to the end of the world, but in the meantime, there'll be great companies.”

Sam Altman

What is Artificial Intelligence?

Artificial intelligence has been swirling around for so long in so many different forms that nailing down a definition is hard. Our definition is that AI is technology that enables a computer to make a decision. There are many different techniques, variations, and uses for this technology, but computer-led decisions are at the root of it.

For a more academic definition, investor Marc Andreessen calls AI “the application of mathematics and software code to teach computers how to understand, synthesise, and generate knowledge in ways similar to how people do it”.

To give some examples, all of the following are using forms of artificial intelligence:

  • ChatGPT writing your essay for you
  • Kheiron screening radiology reports for cancer
  • An autopilot that keeps the plane in the air
  • An automatic door in a supermarket
  • Netflix guessing what show you want to watch next
  • Spam filters in your email

Some AI techniques are easier, some are harder. Some are high stakes and exciting, some are mundane. But they all fall into the same bracket: machines that are making decisions.


Here are some quick definitions to help understand the commonly used terms:

  • Artificial intelligence (AI): Any computer that can make a decision.
  • Artificial general intelligence (AGI): A computer that can make a broad range of decisions. This is something of a holy grail of AI technology. The inverse is “narrow” AI, which has been trained to do a single specific task.
  • Machine learning (ML): A program that is trained on existing data to understand patterns and make predictions based on those patterns. Commonly used as a synonym for AI, although it is a subfield: all ML is AI, but not all AI is ML.
  • Deep learning: A machine learning technique that mimics human brains using neural networks.
  • Large Language Models (LLM): The type of model that ChatGPT uses to understand and create language output. The “large” refers to the number of parameters and the complexity of the model.
  • Generative AI: A type of AI that produces new media output (text, images, sound, etc) in response to inputted prompts.

Although it’s currently a hot topic, it’s worth remembering just how long artificial intelligence has been around. The first AI programs were written in the early 1950s and the first artificial neural network was developed in 1958. The reduction in compute cost since then has allowed for more complex models and larger training sets, which in turn creates more powerful models. The internet has also created enormous datasets for training new AIs.

Why is AI important to understand?

It’s easy to get drawn into the AI hype that floats around Twitter and LinkedIn threads. But there are (understandably) people stepping back and saying “why should I care?”.

At Connect we believe that AI will change almost everything about almost everything, which makes it worth understanding. There are a few reasons that we believe the impact will be enormous:

  1. AI changes the economics of cognitive work. Compared to AI, humans are extremely expensive when performing some tasks, and often not as good. Almost any industry that relies on frequent decision-making is open to disruption from AI.
  2. AI makes it cheaper to produce software, which will accelerate the rate at which “software eats the world”. This fantastic blog post from SK Ventures goes into more detail. The upshot is that AI can assist engineers and greatly reduce the cost of building software.
  3. The above two points create a reinforcing cycle: the more uses for AI, the more investment it gets, which leads to more uses, and so it goes on.

This is all scary in obvious ways. The economics of being a human worker are undergoing their biggest shift since (probably) the Industrial Revolution. Just as the Industrial Revolution reduced the cost of work otherwise done by human hands, AI reduces the cost of work otherwise done by human minds.

But, as with all change, AI also presents huge opportunities. Entire industries and their incumbents are trying to work out what comes next. Google is worried about OpenAI becoming the go-to place for information. Meta has open-sourced some of their LLMs. Adobe is wondering how many people will need Photoshop when they can just describe a perfect image.

This makes it critical for all startups to understand how AI will affect their industries in order to know how to best react.

Different types of AI

Many different types of models and algorithms make up modern artificial intelligence. Understanding them will help you work out where to best apply them in your business and product.


Classification models sort things into groups. Imagine taking pictures of food: a classification model might work out what type of food it is. A different model could classify emails as “spam” or “not spam”, or card transactions as “fraudulent” or “legitimate”.

You can read about the different types of classification algorithms here.
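To make the idea concrete, here’s a minimal sketch of one of the simplest classification techniques, a nearest-centroid classifier. All the food data (weights, “roundness” scores, and the apple/banana labels) is invented purely for illustration:

```python
from math import dist

# Toy training data: (weight in grams, roundness from 0 to 1) for two
# food classes. All numbers are invented purely for illustration.
training = {
    "apple":  [(150, 0.9), (170, 0.95), (160, 0.85)],
    "banana": [(120, 0.2), (130, 0.25), (110, 0.15)],
}

# Nearest-centroid classification: represent each class by the average
# of its examples, then assign new items to the closest class centre.
centroids = {
    label: tuple(sum(vals) / len(vals) for vals in zip(*points))
    for label, points in training.items()
}

def classify(features):
    return min(centroids, key=lambda label: dist(centroids[label], features))

print(classify((155, 0.88)))  # round and ~155g -> "apple"
print(classify((125, 0.22)))  # long and light  -> "banana"
```

Real classifiers use far richer features and more sophisticated models, but the core idea is the same: learn what each group looks like from examples, then sort new items into the closest-matching group.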


Regression models are a machine’s way of plotting out data on a graph and drawing the line of best fit. This lets them make predictions about data points they haven’t seen before.

For example, imagine plotting out the combined wage of a football team against the number of goals they score in a game. The correlation won’t be perfect as there are lots of other factors to take into account. But there will be enough of a correlation that a machine could predict some relationship between wage bill and number of goals scored.

Regressions don’t have to be just two variables. Multivariate regressions could take into account other factors about a team, such as the players’ ages and experience, stadium size, previous results, etc. This lets an AI make pretty complex and accurate predictions.
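The two-variable version can be sketched in a few lines. The wage and goal figures below are hypothetical numbers made up for illustration; the maths is ordinary least squares, which finds the line minimising the squared prediction errors:

```python
# Hypothetical data: weekly wage bill (in £m) vs average goals scored
# per game. All values are invented purely for illustration.
wages = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
goals = [0.8, 1.1, 1.3, 1.6, 1.9, 2.0]

# Ordinary least squares for a single variable: compute the slope and
# intercept of the best-fit line through the data.
n = len(wages)
mean_x = sum(wages) / n
mean_y = sum(goals) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(wages, goals)) \
        / sum((x - mean_x) ** 2 for x in wages)
intercept = mean_y - slope * mean_x

def predict(wage):
    # Predict goals per game for a wage bill the model has never seen.
    return intercept + slope * wage

print(f"predicted goals at £4m/week: {predict(4.0):.2f}")  # -> 2.32
```

A multivariate regression is the same idea with more input columns: instead of one slope, the model fits one coefficient per factor.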

Supervised vs Unsupervised vs Reinforcement Learning

There are different ways to teach an AI, and even if you use something off-the-shelf it’s useful to know how they broadly work.

Supervised learning is where you give labelled data to a machine. Labelled data about a football team would be its “wage bill”, “total years’ experience” and “previous league position”. Using the example above, the machine might use this labelled data to predict the number of goals scored. This is called “supervised” because the model is trained on previous data that it is told is “correct”.

Unsupervised learning is when the machine will analyse data and group things together on its own. This is useful when looking for fraud, for example. Rather than writing explicit rules, you can ask the machine to highlight anomalies that don’t fit previous patterns of data.
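Here’s the simplest possible stand-in for that fraud example: no labels, just a rule that flags anything sitting far from the bulk of the data. The transaction amounts are invented, and real anomaly detection uses much richer models, but the “no labels, find what doesn’t fit” idea is the same:

```python
from statistics import mean, stdev

# Hypothetical card transaction amounts in £, invented for illustration.
amounts = [12.5, 9.9, 14.2, 11.0, 13.7, 10.4, 950.0, 12.1]

# No labels here: we just ask which points sit far from the rest of
# the data. Flag anything more than 2 standard deviations from the mean.
mu, sigma = mean(amounts), stdev(amounts)
anomalies = [a for a in amounts if abs(a - mu) > 2 * sigma]

print(anomalies)  # -> [950.0]
```

Nobody told the program that £950 was fraudulent; it surfaced the anomaly purely because it didn’t fit the pattern of the other data.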

Reinforcement learning is where you define success or failure then let the machine go off and try things out. This is useful for products with clear success criteria, like AIs that play video games. It’s great because it can produce very unexpected results. This is explained in one of my favourite AI demonstrations of all time: OpenAI Plays Hide and Seek…and Breaks The Game! 🤖
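A classic minimal example of this trial-and-error loop is a “two-armed bandit”: the agent only ever sees success or failure after each attempt, and must discover on its own which option pays off more. The win rates below are invented, and this is a heavily simplified sketch of the idea rather than how game-playing AIs are actually built:

```python
import random

random.seed(0)

# Two slot-machine "arms" with different hidden win rates. The agent
# never sees these numbers -- only the reward after each pull.
true_win_rates = [0.3, 0.7]
estimates = [0.0, 0.0]   # the agent's running estimate per arm
pulls = [0, 0]

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-looking arm,
    # but explore a random arm 10% of the time.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max((0, 1), key=lambda a: estimates[a])
    reward = 1 if random.random() < true_win_rates[arm] else 0
    pulls[arm] += 1
    # Update the running average reward for the chosen arm.
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]

print(f"estimated win rates: {estimates[0]:.2f}, {estimates[1]:.2f}")
print(f"arm pulls: {pulls}")  # the agent ends up favouring the better arm
```

Notice that we only defined what “success” means (a reward of 1); the strategy of concentrating on the better arm emerged from trial and error.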

Neural networks

These are machine learning techniques that mimic how the brain works. They pass data back and forth through layers of “neurons” to help the machine learn. It’s a pretty complex topic, and if you want to learn how they work, then this video is an incredibly strong explainer.
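To give a flavour of the building block involved, here’s a single artificial “neuron” learning the logical AND function. A real network stacks many of these into layers, and this toy uses the simple perceptron update rule rather than anything state-of-the-art:

```python
# A single artificial "neuron": weighted inputs, a bias, and a step
# activation. We train it on the AND function with the perceptron rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def neuron(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - neuron(x)
        # Nudge each weight in the direction that reduces the error.
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        bias += lr * error

print([neuron(x) for x, _ in data])  # -> [0, 0, 0, 1]
```

A deep network is conceptually this, repeated: millions of neurons arranged in layers, with errors passed backwards through the layers to adjust every weight.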

As mentioned before, neural networks were invented in the late 1950s, but with lower compute costs it is now possible to run a neural network on a fairly standard laptop 🤯.

Generative AI

This is a model or algorithm that creates new media content. Part of the reason for the latest AI buzz is the huge advances that generative AI has made recently. Several years ago the quality was so low that it had almost no commercial value, but it has crossed that commercial tipping point very, very quickly.

In a recent presentation about generative AI, Linus Ekenstam gave this example to show just how quickly things have progressed:

It really is incredible how much can change in a single year.

Generative AI is very complex, and if you want to know how it works, this is a fantastic explainer video of how the models turn a text input into an image. A TL;DR explainer would say: imagine you have a labelled picture of a cup of tea, and you continuously add noise to that picture. Over time, it stops looking like a cup of tea and just looks like noise.

The process of generative AI is essentially running that in reverse: moving back from “noise” towards the original image when given a “cup of tea” prompt. The models are trained back and forth many millions of times, using a classification algorithm on the output to ask “does this look like a picture of a cup of tea yet?”. When these models are then trained on millions of images, they can start to produce new, wonderful things never seen before.
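The forward “adding noise” half of that story can be sketched very simply. Here the “image” is just a short list of pixel values, and the noising schedule is made up for illustration; real diffusion models work on millions of pixels with carefully chosen noise schedules:

```python
import random

random.seed(42)

# The "image" here is just a list of pixel values between 0 and 1.
# Real images have millions of pixels, but the idea is the same.
image = [0.1, 0.9, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4]

def add_noise(pixels, strength=0.3):
    # Blend each pixel with random noise. Repeated over several steps,
    # the original structure gradually disappears.
    return [(1 - strength) * p + strength * random.gauss(0.5, 0.5)
            for p in pixels]

noisy = image
for step in range(10):
    noisy = add_noise(noisy)

# After 10 steps only ~3% of the original signal remains (0.7 ** 10).
# A diffusion model is trained to run this process in reverse,
# step by step, guided by a text prompt.
print([round(p, 2) for p in noisy])
```

Training teaches the model to undo one small noising step at a time; chaining those small reversals together, starting from pure noise, is what produces a brand-new image.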

Midjourney and Stable Diffusion are the most widely known applications for generating images, and ChatGPT is the most well-known for text generation.

Some fun examples of generated images are “This House Does Not Exist” and “This Person Does Not Exist”.