By Joe Arney
As an expert in generative artificial intelligence and ethics, when Casey Fiesler interacts with brands or commenters online, she’s very attuned to whether the person on the other end might actually be a chatbot.
More and more, regular internet users are having the same doubts. That’s because companies are increasingly turning to chatbots to solve problems, manage customer engagement—or because everyone else is doing it.
“I’ve heard from multiple people on social media who say the big conversations they have at work are about how to do A.I., because everyone feels like they have to integrate this new technology as quickly as possible—even if it doesn’t make sense,” said Fiesler, associate professor of information science at CMCI.
Chatbots have their uses, Fiesler said. They can spark brainstorming sessions for a writer struggling with a draft, or create non-player characters in tabletop role-playing games. The problem, she said, “is the idea that chatbots and generative A.I. need to be doing everything, everywhere. Which is absurd.”
Don’t think so? Consider that chatbots have encouraged small-business owners to break the law (City of New York), advised using glue to help cheese stick to pizza (Google) and impersonated parents to offer reassurance about local schools (Meta).
“In the Meta case, to give them some credit, the account that responded to the parent was clearly labeled as being A.I.,” Fiesler said. “But at the same time, the idea that it might impersonate a parent should have been anticipated, because large language models are not information retrieval systems—they’re ‘what word comes next?’ systems. So, it’s inevitable you’re going to have some wrong responses.”
Social media interactions that should be between people are one case where Fiesler said chatbots should be off-limits; another is dispensing legal, medical or business advice. That’s not even considering the complex social and ethical concerns about A.I.—misinformation, labor rights, intellectual property, energy consumption—that are getting short shrift from an industry waxing poetic about the golden age this technology promises to usher in.
But moving slowly and asking thoughtful questions is not a strength of Silicon Valley, and companies fearful of being left behind are missing Fiesler’s bigger point about ethical debt.
“There’s this attitude of do this now, and deal with the consequences after we see what goes wrong,” she said. “But very often, the harm is already done.
“It blows my mind that these huge tech companies, with all their resources, could be surprised that all these things keep happening. Whereas when I describe some of these A.I. use cases to undergrads in my ethics class, they come up with all the things that could go wrong.”