What is it: the human tendency to blindly depend on the decisions and output of automated systems (including AI), even when our own judgement suggests otherwise…
“ChatGPT says it’s true, so it must be; it wouldn’t make a mistake, would it…”
Overview
Automation bias is the tendency for us humans to cede our thinking and critical decision-making to machines (including AIs) without appreciating that they can make mistakes. This over-reliance can lead to significant errors of both omission (failing to act when needed) and commission (following incorrect advice).
If you grew up in the 1990s, you were probably used to PCs throwing up errors. Remember the ‘blue screen of death’, anyone? As such, we probably had a greater degree of mistrust of technology. Alas, as technology has become more sophisticated and systems more ‘closed’ – have you ever tried to fix a MacBook? – we’ve become less aware of systems making mistakes.
We now simply trust that machines are infallible. And they’re not!
Although digital systems tend to be binary, with processes hardcoded, they can get things wrong in the grey areas. It’s important to understand that machines are, by and large, driven by code we create, sophisticated loops and logic included.
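As a toy illustration of that point – the rule, thresholds and figures below are invented purely for the example – here is a hardcoded decision that behaves perfectly on clear-cut inputs but misfires in the grey areas:

```python
def auto_approve_loan(annual_income: float, years_employed: float) -> bool:
    """Hardcoded rule: approve if income >= 30,000 and >= 2 years employed.
    The thresholds are arbitrary, and the rule ignores all other context."""
    return annual_income >= 30_000 and years_employed >= 2

# Clear-cut cases behave exactly as expected...
print(auto_approve_loan(60_000, 10))   # True
print(auto_approve_loan(5_000, 0.5))   # False

# ...but the grey areas expose the rigidity of the hardcoded logic:
print(auto_approve_loan(90_000, 1.9))  # False -- a well-paid freelancer, rejected
print(auto_approve_loan(30_000, 2.0))  # True  -- scraping both thresholds, approved
```

The code does exactly what it was told; the mistake was ours, baked in at design time.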
AI is no different: just because we get an answer from ChatGPT doesn’t mean it’s right. If you want more on this, search for ‘AI hallucination’.
Our over-reliance on ‘easily attained information’ is a bias in itself, linked to cognitive load. We quite simply like simple answers that require little effort or are easy to recall; see also the Availability Bias here.
What can we do to avoid this?
The most valuable skill we can hone to avoid this is to be critical thinkers ourselves, retaining a healthy portion of cynicism. Learning to see ChatGPT (other AIs are available) as a tool to support, not replace, our thinking and judgement is key.
It’s also worth noting that if you ask any system a bad question, you’ll get a bad answer. No AI is going to say, “I hear what you’re saying, but that’s not the right question!”
For those creating technical systems, automation bias can be mitigated by designing better user interactions: prompts, displays and UI that augment, rather than replace, user oversight.
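A minimal sketch of that idea, assuming a Python system – the `Recommendation` type, the confidence threshold and the fraud example are all made up for illustration, not taken from any particular framework:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the automated system suggests
    confidence: float  # the system's own uncertainty estimate, 0.0-1.0
    rationale: str     # why it suggests this, shown to the user

def human_in_the_loop(rec: Recommendation, threshold: float = 0.95) -> bool:
    """Act automatically only on very high-confidence recommendations;
    otherwise surface the suggestion and require explicit human sign-off."""
    if rec.confidence >= threshold:
        print(f"Auto-applying: {rec.action} ({rec.confidence:.0%} confident)")
        return True
    # Show the machine's reasoning rather than a bare verdict, so the
    # user can exercise judgement instead of rubber-stamping an answer.
    print(f"Suggested: {rec.action} ({rec.confidence:.0%} confident)")
    print(f"Because: {rec.rationale}")
    return input("Apply this? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    rec = Recommendation("flag transaction as fraud", 0.72,
                         "amount is 4x the account's 90-day average")
    human_in_the_loop(rec)
```

The design choice worth copying is that the system shows its rationale, not just a verdict: a bare answer invites rubber-stamping, while a visible reason gives the user something to challenge.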
Examples
- Airline pilots are taught to corroborate instrument readings, not to rely on them solely. A number of accidents have been attributed to over-reliance on systems rather than on skill and judgement.
- Autonomous driving: although a great idea, ‘drivers’ come to believe they no longer need to maintain oversight.
- Biased credit scoring or predictive policing algorithms can reinforce existing societal inequalities if human decision-makers over-rely on their outputs without question. AI can be biased if its training content is biased (see the sketch below)! Ever seen the film ‘Minority Report’? It centres on a system that is supposed never to make mistakes. Spoiler… it does!
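To make that last point concrete, here is a deliberately crude sketch with made-up data – no real scoring system is this simple – showing how a ‘model’ that learns only from historically skewed decisions ends up automating the very skew it was trained on:

```python
# Entirely fictional history: group B was approved far less often,
# for reasons unrelated to actual creditworthiness.
historical_decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def train(history):
    """'Learn' nothing more than the approval rate per group."""
    rates = {}
    for group in sorted({g for g, _ in history}):
        outcomes = [approved for g, approved in history if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group, threshold=0.5):
    """Approve whenever the group's historical rate clears the threshold."""
    return rates[group] >= threshold

rates = train(historical_decisions)
print(rates)                # {'A': 0.75, 'B': 0.25}
print(predict(rates, "A"))  # True  -- the old inequality, now automated
print(predict(rates, "B"))  # False -- and given a veneer of objectivity
```

A decision-maker who trusts this output without question isn’t removing the bias; they’re laundering it through a machine.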