Automation

Humans vs Machines: Augmented AI

I recently attended an event in Manchester, organised by BJSS and Microsoft, on the subject of AI and ML. The schedule was jam-packed with interesting insights and useful case studies that got my head buzzing. Having had some time to reflect on everything I absorbed on the day, I have realised that we are back in familiar territory with these technologies, which means a decade of debate about automation is directly relevant.

Some time ago I wrote a couple of essays on automation, including test automation. My motivation for writing these was to ground executive excitement over the capabilities of automation and to be clear on the impact test automation would have on their business. As with all new technology, the C-Suite view is skewed by hyperbole… the machines can give us 100% test automation, we don’t need humans any more! This is why I borrowed the Flight of the Conchords song title when I published The Humans are Dead (which became an Amazon Top 25 publication).

AI slots neatly into my automation philosophy. There is a great opportunity to apply what we have learned in other forms of automation, whether that is the production line or software test automation, to AI. In particular, there are tasks that play to the strengths of the machine and tasks that are better suited to humans. Routine work is where machines do a better job. Humans find it hard to maintain focus when forced to perform repetitive tasks, which is why writing 100 lines was a punishment at the school I attended. The strain routine tasks place on a human makes the tasks error-prone. Machines, though, love that kind of stuff. Conversely, until we create a machine superintelligence, humans are uniquely positioned to come up with creative innovations in a way machines cannot yet comprehend.

In terms of AI in particular, this gives us a great opportunity to leverage these human and machine strengths in an Augmented Intelligence. Think of it like this… the machine grinds its way through millions of boring tasks and highlights anomalies to a human. The human works out what to do with the anomalies.

The human does less grind, the machine gets to leverage intelligence it doesn’t possess.
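
To make that division of labour concrete, here is a minimal sketch of an augmented-intelligence triage loop. It is my own illustration, not taken from the case study: the confidence threshold and the helper names (model_score, handle_automatically, send_to_human) are assumptions for the purpose of the example, not a real API.

```python
# Illustrative sketch only: the machine handles the routine items it is
# confident about and queues anomalies for a human to review.

CONFIDENCE_THRESHOLD = 0.95  # assumed cut-off for "routine" work

def triage(items, model_score, handle_automatically, send_to_human):
    """Route routine items to the machine and anomalies to a human."""
    for item in items:
        score = model_score(item)          # machine's confidence in its own answer
        if score >= CONFIDENCE_THRESHOLD:
            handle_automatically(item)     # the machine grinds through the routine work
        else:
            send_to_human(item, score)     # the human applies judgement to the anomaly
```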

If you imagine a Venn diagram showing what humans and machines are good at, the two sets probably intersect a fair bit, so there is stuff in the middle. It will be easy to assign tasks at the edges and harder in the middle; but the middle is less important.

Augmented Intelligence

So, just as we found in testing that “following defined steps and ticking off an expected outcome” is great machine space and “thinking creatively to explore limitations” is great human space, so we will find for AI.

I’m working through some real numbers from a case study, and they support this opinion with the following comparison of accuracy:

Machine Only: 92%

Human Only: 96%

Augmented Intelligence: 99.5%

As I put it in an additional essay for The Humans are Dead:

In other words, when interpreting 1,000,000 samples, the cost of mis-applying humans to routine tasks is 35,000 mistakes and the cost of mis-applying machines to eccentric tasks is 75,000 mistakes.
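
For anyone who wants to check the arithmetic behind those figures, here is a minimal sketch. It assumes errors = (1 − accuracy) × samples, and that the "cost" of mis-applying humans or machines is the extra mistakes relative to the Augmented Intelligence baseline.

```python
# Derive the quoted mistake counts from the accuracy figures above.

SAMPLES = 1_000_000

accuracy = {
    "machine_only": 0.92,
    "human_only": 0.96,
    "augmented": 0.995,
}

# Errors for each approach: (1 - accuracy) * number of samples.
errors = {name: round((1 - acc) * SAMPLES) for name, acc in accuracy.items()}

baseline = errors["augmented"]                    # 5,000 mistakes
human_cost = errors["human_only"] - baseline      # 40,000 - 5,000 = 35,000
machine_cost = errors["machine_only"] - baseline  # 80,000 - 5,000 = 75,000

print(f"Mis-applying humans to routine tasks:    {human_cost:,} extra mistakes")
print(f"Mis-applying machines to eccentric tasks: {machine_cost:,} extra mistakes")
```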