AI Mythology Is Stopping Us Asking Questions

The mythology around AI causes us to ask fewer questions. We should resist this, and ask the very same questions we would about the quality and limits of any automated system.



Context

The label 'AI' is everywhere. You can't avoid it.

A deluge of products has suddenly become AI-enabled. There's an explosion of AI startups. Conferences about 'X and AI' - agriculture and AI, healthcare and AI, democracy and AI - are mushrooming.

There is a huge, perhaps inflated, sense of optimism about the problems that AI can solve.


Problem

The use of the label 'AI' seems to have caused many of us to step back from asking key questions - the same questions we would ask about any system that makes automated decisions.

There are two factors in particular that drive this:
  • Marketing that promotes the idea that AI-enabled products have superpowers - capabilities beyond the reach of mere humans.
  • Poor understanding of the algorithms that actually make the automated decisions, which creates a vacuum we have a psychological need to fill.

Fewer people asking questions about an AI-enabled product works well for the company selling it. Questions can highlight limitations, and that runs directly counter to the marketing message, which often portrays the product as a flawless, all-powerful entity.

It is true that many of us don't understand how machine learning techniques actually work. Yet we have a psychological tendency to fill gaps in our understanding, and the easiest way to fill them is with the most readily available and familiar explanations - those from our rich culture of science fiction and its anthropomorphised machines.

There is a third significant factor at play.

Researchers from non-technical disciplines who think deeply about the risks and ethics of AI tend to explore such questions only on foundations they can comprehend: the outputs, visible actions and effects of an AI, not its internal mechanisms. This again leads to an unbalanced focus on the effects of AI, and not enough on the causes.


Solution

To start fixing this we need to strip away the mythology and mystique around AI, and see it for what it is - just a bunch of simple logical instructions written by people, fallible humans just like you and me.

In this sense, AI is no different from any other software tool designed and created by people.

And just like any tool, software or otherwise, we can start to ask simple but essential questions (sketched in code after this list):
  • how well does it perform?
  • when does it start to fail to perform well?
  • is that failure graceful or catastrophic?
  • how broadly did we test the tool?
  • which individual is accountable for its performance and safety?
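To make this concrete, here is a minimal, hypothetical sketch of what asking the first few questions might look like for a toy machine-learning classifier. It is an illustration, not drawn from any real product: it assumes a simple scikit-learn model standing in for the 'AI', and every dataset and number in it is made up.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Hypothetical training data: two well-separated classes in 2D.
    X_train = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
    y_train = np.array([0] * 500 + [1] * 500)

    model = LogisticRegression().fit(X_train, y_train)

    # Q1: how well does it perform on data like the data it was trained on?
    X_test = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
    y_test = np.array([0] * 200 + [1] * 200)
    print("in-distribution accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # Q2 and Q3: when does it start to fail, and is the failure graceful?
    # Push the inputs progressively away from the training distribution
    # and watch whether accuracy degrades smoothly or falls off a cliff.
    for shift in (0.0, 1.0, 2.0, 4.0):
        acc = accuracy_score(y_test, model.predict(X_test + shift))
        print(f"accuracy with input shift {shift}: {acc:.2f}")

Notice that none of this requires understanding the model's internals. It only requires treating the model as a tool and measuring where it works and where it stops working.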

We've been asking these basic questions about our tools for hundreds, if not thousands, of years.

In fact, such questions are a key part of the training of scientists and engineers, from school to professional accreditation.

A key point here is that we don't all need to be experts on how machine learning algorithms work to be able to ask simple but important questions about the safe use of AI.

We just need to stop thinking of AI as a mystical ghost, and instead think of it as a tool, a tool as mundane as a hammer or saw.