
Commentary: Bing and other AI programs are meant to behave like humans. Why are we shocked when they do?

AI programs Bing and ChatGPT are doing just what we programmed them to do: acting human. Why are we surprised?
(Jim Cooke / Los Angeles Times)

When news broke earlier this week that three science-fiction magazines — Clarkesworld, the Magazine of Fantasy & Science Fiction and Asimov’s Science Fiction — had been overrun with short-story submissions written by AI chatbots, writers and creative artists shuddered.

The volume of bot-generated stories became so great that one editor, Neil Clarke, announced that his publication would temporarily close submissions until he could figure out a way forward.

The irony that this fresh bit of artificial intelligence-related lunacy hit sci-fi mags first is lost on exactly no one. Commentary abounds about how the situation could be ripped from the very pages of the magazines falling victim to the bot-writing frenzy, fed by what Clarke in a blog post called “websites and channels that promote ‘write-for-money’ schemes.” (Note: Never trust anyone promoting a “write-for-money” scheme because, as any real writer will tell you, “hahahaha.”)


At any rate, this latest news makes one thing clear: We are living in a deeply meta variation of the world envisioned in Stanley Kubrick’s 1968 masterpiece, “2001: A Space Odyssey.” We’ve had 55 years to prepare for the fact that HAL 9000 — the supercomputer with a human-like personality — becomes sentient, goes insane, sings “Daisy Bell” and tries to kill everyone.

What exactly did we not see coming?

The idea of the metaverse has been bearing down on us for decades, and until now we’ve greedily eaten up its ever-ripening fruits. We’ve made our innate intellect subservient to smartphones and Google searches. We’ve happily pinned and tagged our locations online for friends and strangers alike. We’ve allowed social media companies to profit richly from our selfies, family photos and our most intimate thoughts and activities. We’ve gladly plugged ourselves into virtual reality headsets, and we’ve mainlined our personal preferences and purchasing tendencies into Silicon Valley’s fat veins.

So why are we drawing a line when it comes to AI chatbots, which are doing exactly what they have been programmed to do: act human? And why is our gut reaction to drop our jaws in horror when we hear that the recently released Bing search engine chatbot likes to call its alter ego Sydney, and has ideas about what mayhem it would wreak with its shadow self?


Why are we aghast that bad actors are covertly using AI to write stories and create art? Of course they are, and they will continue to do so. As a society, we need to come to terms with the fact that AI is finally, really here, and that it isn’t going anywhere.

The no-longer-nascent technology will only get stronger and better at imitating human intellect and creativity. And if we’re not ready to engage in a “Terminator”-style war with the machines, we had better begin accepting and harnessing this tool in a way that makes sense.

To wit: We need to treat it like what it purports to be — human. Or at the very least like “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine,” to quote New York Times technology columnist Kevin Roose in his assessment of Bing’s alter ego, Sydney. Like a teen whose actions can’t be attributed to its parents — though it’s been molded by them — we need to understand AI as a quasi-literate being separate from its creators.


When Bing’s Sydney complains to a Washington Post reporter that the reporter has not self-identified as such and has not asked Sydney’s permission to quote it on the record — thus betraying Sydney’s trust — Sydney, in a very real sense, is correct. The only way to cultivate empathy and a moral compass in AI is to treat it with those same values intact.

We can’t program technology to behave as a human and then balk when it does. We cannot have it both ways.

Right now the technology is proving erratic and messy in its lifelike qualities, but it will eventually become something more sophisticated. The idea that AI is sentient is bonkers, and the fact that we are buying into that idea speaks to humanity’s boundless creativity.

We are using that same boundless creativity in our inaugural interactions with AI — asking chatbots questions about Jungian psychology and the power of our subconscious selves, for example. We are pushing the bots to see how far they can and will go, and discovering in the process that the possibilities are as endless for them as they are for us.

Our job as users is to ensure we guide AI toward our best impulses rather than turn it into a sociopath with our abuse. We have already made that mistake with social media. Now is the time for us to do better.

The future will hinge on our success. Do we want to live in Edward Bellamy’s “Looking Backward” or George Orwell’s “1984”?

Bad actors will always find ways to exploit weaknesses in any system — to mine it for profit or plunder its beauty for cruelty’s sake. As for those who seek to use AI to manufacture literature and art, I can only imagine that we will eventually have to fight their efforts with AI trained to sniff out the fakes.

Maybe one day we will create magazines and museums dedicated exclusively to AI-generated art, thus carving out space for an activity many will be inclined to experiment with.


When the torrent of news stories about Bing’s Sydney rushed out in mid-February, Microsoft, which owns Bing, reacted by limiting the number of questions a user was allowed to ask Bing in any given chat session. This was meant to tamp down the chances that anyone could engage in a philosophical or problematic chat with the feisty, unhinged bot.

Soon, however, Microsoft began quietly rolling back those restrictions. It raised the limit to six questions per session and said that it planned to continue upping the interaction limits. Many users, Microsoft wrote in a blog post, wanted “a return of longer chats.”

That we are actively clamoring for more heart-to-hearts with Sydney underscores that humanity’s conversation with AI has officially begun. Where we take it is up to us.
