Thank you, Google, for screwing up so badly

Published: 2024-03-13 16:58 +02:00 by Clive Crook

The laughable screw-ups in the Gemini chatbot’s image generation offered a salutary glimpse of an Orwellian dystopia.
Google’s investors are entitled to be furious about the stunningly incompetent roll-out of the company’s Gemini artificial intelligence system. For everybody else, including this grateful Google user and committed technology optimist, it was a blessing.

The laughable screw-ups in the Gemini chatbot’s image generation — racially diverse Nazi soldiers? — offered a salutary glimpse of an Orwellian dystopia. And in so doing, they also highlighted vital questions of opacity, trust, range of application, and truth that deserve more attention as we contemplate where AI will lead.

AI is a disruptive and potentially transformative innovation — and, like all such innovations, it’s capable of delivering enormous advances in human well-being. A decade or two of AI-enhanced economic growth is just what the world needs. Even so, the exuberance over actually existing AI is premature. The concept is so exciting and the intellectual accomplishment so impressive that one can easily get swept along. Innovators, actual and potential users, and regulators all need to reflect more carefully on what’s going on — and especially on what purposes AI can usefully serve.

People make mistakes all the time. If AI makes fewer mistakes than humans, would that be good enough?

Part of the difficulty in grappling with AI’s full implications is the huge effort that has gone into devising AI models that express themselves like humans, presumably for marketing reasons. “Yes, I can help you with that.” Thank you, but who is this “I”? The suggestion is that AI can be understood and dealt with much as one would understand and deal with a person, except that AI is infinitely smarter and more knowledgeable. For that reason, when it comes to making decisions, it claims a measure of authority over its dimwitted users. There’s a crucial difference between AI as a tool that humans use to improve their decisions — decisions for which they remain accountable — and AI as a decision maker in its own right.

In due course, AI will likely be granted ever-wider decision-making power, not just over the information (text, video and so forth) it passes to human users but also over actions. Eventually, Tesla’s “full self-driving” will actually mean full self-driving. At that point, liability for bad driving decisions will lie with Tesla. Between advisory AI and autonomous-actor AI, it’s harder to say who or what should be held accountable when systems make consequential mistakes. The courts will doubtless take this up.

‘Hallucinate’

Liability aside, as AI advances we’ll want to judge how good it is at making decisions. But that’s a problem, too. For reasons I don’t understand, AI models aren’t said to make mistakes: they “hallucinate”. But how do we know they’re hallucinating? We know for sure when they present findings so absurd that even low-information humans know to laugh. But when AI systems make stuff up, they won’t always be so stupid. Even their designers can’t explain all such errors, and spotting them might be beyond the powers of mere mortals. We could ask an AI system, but it hallucinates.

Even if errors could be reliably identified and counted, the criteria for judging the performance of AI models are unclear. People make mistakes all the time. If AI makes fewer mistakes than humans, would that be good enough? For many purposes (including full self-driving), I’d be inclined to say yes, but the domain of questions put to AI must be suitably narrow. One of the questions I wouldn’t want AI to answer is, “If AI makes fewer mistakes than humans, would that be good enough?”

The point is, judgments like this are not straightforwardly factual — a distinction that goes to the heart of the matter. Whether an opinion or action is justified often depends on values. These might be implicated by the action in itself (for instance, am I violating anybody’s rights?) or by its consequences (is this outcome more socially beneficial than the alternative?). AI handles these complications by implicitly attaching values to actions and/or consequences — but it must infer these either from the consensus, of sorts, embedded in the information it’s trained on or from the instructions issued by its users and/or designers. The trouble is, neither the consensus nor the instructions have any ethical authority. When AI offers an opinion, it’s still just an opinion.

For this reason, the arrival of AI is unfortunately timed. The once-clear distinction between facts and values is under assault from all sides. Eminent journalists say they never really understood what “objective” meant. The “critical theorists” who dominate many college social studies programmes deal in “false consciousness”, “social construction” and truth as “lived experience” – all of which call the existence of facts into question and see values as instruments of oppression. Effective altruists take issue with values in a very different way – claiming, in effect, that consequences can be judged on a single dimension, which renders values other than “utility” null. Algorithmic ethicists, rejoice!

As these ideas seep into what AI claims to know, prodded further by designers promoting cultural realignment on race, gender and equity, expect the systems to present value judgments as truths (just as humans do) and deny you information that might lead you to moral error (just as humans do). As Andrew Sullivan points out, at the start Google promised that its search results were “unbiased and objective”; now its principal goal is to be “socially beneficial”. AI systems might reason, or be instructed, that in choosing between what’s true and what’s socially beneficial, they should pick the latter — and then lie to users about having done so. After all, AI is so smart, its “truth” must really be true.

In a helpfully memorable way, Gemini proved that it’s not. Thank you, Google, for screwing up so badly. — Clive Crook, (c) 2024 Bloomberg LP