Thinking outside the AI box


AI models provide us all with a huge capability boost, and as the models improve and start to exceed the capabilities of the human brain in terms of creativity and processing power, we’ll soon live in a very different world.

But one aspect of LLMs and generative AI has me thinking - are we individually at risk of becoming ‘average’ if we all use the same core set of foundation models?

An LLM gains its knowledge by being pre-trained on a large corpus of data (the “P” in GPT).

Once deployed, if two different humans ask exactly the same question, the LLM is designed to provide slightly different answers - to avoid every response looking the same.
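For the technically curious, that variation mostly comes from sampling randomness. Here is a minimal sketch using the OpenAI Python client - the model name, prompt and temperature value are illustrative, and it assumes the openai package is installed and an API key is configured.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

question = "Write a one-line comment congratulating a colleague on a promotion."

# Asking the same question twice with a non-zero temperature samples the
# wording slightly differently each time - different phrasing, but drawn
# from the same underlying training data.
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
        temperature=0.8,  # higher temperature = more variation between runs
    )
    print(response.choices[0].message.content)
```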

But if you’ve ever seen AI-generated comments in your X or LinkedIn DMs or comment threads, you’ll know it is very easy to spot the common theme.

AI comments: different… but the same!

I’m not an AI engineer, but from my understanding, AI can only know what it knows.

The generative element comes from connecting disparate parts of its knowledge to generate new content - but that content is the result of what it already has access to - the average, you could say.

It is great at summarising a document

It is great at suggesting connections between two known concepts

It is great at providing content ideas

But when it comes to creating something that is genuinely unique and new, it doesn’t have that capability. It doesn’t know what it doesn’t know.

And this is where so much human innovation comes from.

I visualise it like this:

If the circle represents the LLM’s training data, then any question asked will draw from different areas of that knowledge and produce a response that is the average of what it knows.

But a creative human can look beyond that circle to suggest or discover something entirely new.

As I work with LLMs daily, I keep this image in mind.

I use LLMs to help me understand and consider different existing concepts, but I then use my own creativity to build on that to make whatever I write, draw or design uniquely mine and uniquely human.

I’d encourage you to keep this visual in mind as you develop your own AI solutions.

We have always had the cliché “think outside the box”.

Maybe now it is more important than ever to do just that.
