What do you get when you put two of Time magazine’s 100 most influential people in artificial intelligence together in the same lecture hall? If the two influencers happen to be science-fiction writer Ted Chiang and Emily Bender, a linguistics professor at the University of Washington, you get a lot of skepticism about the future of generative AI tools such as ChatGPT.
“I feel confident that they’re not thinking,” Chiang said. “They’re not understanding anything, but we need another way to make sense of what they’re doing.”
What’s the harm? One of Chiang’s foremost fears is that the thinking, breathing humans who wield AI will use it as a means to control other humans. In a recent Vanity Fair interview, he compared our increasingly AI-driven economy to “a giant treadmill that we can’t get off” — and during Friday’s forum, Chiang worried that the seeming humanness of AI assistants could play a role in keeping us on the treadmill.
“If people start thinking that Alexa, or something like that, deserves any kind of respect, that works to Amazon’s advantage,” he said. “That’s something that Amazon would try and amplify. Any corporation, they’re going to try and make you think that a product is a person, because you are going to interact with a person in a certain way, and they benefit from that. So, this is a vulnerability in human psychology which corporations are really trying to exploit.”
AI tools such as ChatGPT and DALL-E typically produce text or imagery by drawing on huge databases of existing works, recombining their elements into products that look as if they were created by humans. That artificial genuineness is the biggest reason why Bender stays as far away from generative AI as she can.
“The papier-mâché language that comes out of these systems isn’t representing the experience of any entity, any person. And so I don’t think it can be creative writing,” she said. “I do think there’s a risk that it is going to be harder to make a living as a writer, as corporations try to say, ‘Well, we can get the copy…’ or similarly in art, ‘We can get the illustrations done much cheaper by taking the output of the system that was built with stolen art, visual or linguistic, and just repurposing that.’”
Bender gave a thumbs-up to Hollywood writers and actors for winning protections from AI encroachment during this year’s contract negotiations with the studios. But she gave a thumbs-down to journalists who slip AI-produced content into their own work on the sly. (Full disclosure: No AI tools were used in the writing of this report.)
“I have Google Alerts set on the phrase ‘computational linguistics,’ for example,” she said. “I set that years ago as a way to find job opportunities for my students. Starting in November 2022, it kept sending me these news articles where people were writing about ChatGPT, and a remarkable number of them started with some ChatGPT-generated paragraph, unflagged, and then below the fold, ‘Oh, haha, that was written by a machine.’ And I thought, what kind of journalist would sacrifice their integrity like this? And also, how dare you trick me into reading fake text?”
You might argue that Bender has been assimilated into the AI ecosystem simply because she uses Google Alerts — but she draws a distinction between generative AI and special-purpose technologies such as machine translation and automatic speech-to-text transcription.
“I really appreciate having a spell-checker,” she said. “So, language technology can certainly be valuable.”
Even generative AI might have its place, Chiang said.
“A lot of times, the world calls upon us to generate a lot of bullshit text, and if you had a tool that would handle that, that’d be great,” he said. “Or, I mean, it’s not great. The problem is that the world insists that we generate all this sort of bullshit text. So having a tool that does that for you … that is arguably of some utility.”
Chiang and Bender agreed that generative AI will need regulatory guardrails.
“The guardrails that I’d like to see are things around transparency,” Bender said. “I think that we should all know whenever we’ve encountered synthetic media, it should be immediately apparent to the human eye. It should also be mechanistically encoded so that you could filter it out and not see it at all. I think we need transparency about training data. I think we need transparency about energy use. And on top of that, I would love to see accountability. I would love to live in a world where OpenAI is actually responsible for everything that ChatGPT outputs.”
“I don’t have anything to add to that,” Chiang said.
Other human-generated gems from the Town Hall Seattle chat:
- Chiang said AI-generated text will be a boon for internet scammers: “That is, I think, an example of this broader problem, of valuable human-generated text being drowned out in a sea of AI-generated nonsense.”
- AI programs have mastered complex games such as chess and Go, but Chiang noted that it took them millions of trials to gain that mastery. He then pointed to an experiment in which rats learned to drive miniature cars after just 24 trials. Based on that measure, AI programs are “not as good at skill acquisition as mice are,” Chiang said. “It’s going to be a long time before they’re as good at skill acquisition as humans are.”
- Chiang acknowledged that AI will make it harder to distinguish between student-written essays and machine-generated text. “It is a gigantic problem that might be insoluble,” he said. “It might be that essay writing has lost its usefulness as a pedagogical tool.”
- Will AI put authors like Chiang out of business? “It is not at all clear to me that AI-generated text is a game-changer for the prose fiction market,” he said. “In terms of the cost of publishing a book, the amount that you pay the author is only a tiny fraction of that. So you’re not actually saving all that much.” He said generative AI might be useful as a brainstorming tool — and noted that science-fiction author Philip K. Dick used I Ching divination coins for a similar purpose when he wrote “The Man in the High Castle.”
- Chiang is arguably best known as the author of the short story that inspired the 2016 movie “Arrival,” which features a linguistics professor as the main character. “Arrival” also reflects a controversial concept in linguistics known as the Sapir-Whorf Hypothesis. So what do linguists think of Chiang’s story? “All my linguist friends are jealous that I get to meet Ted,” Bender said. “I’m going to be speaking at the Linguistic Society of America in January, and I’m already arranging my talk so that I can brag about this.”