To Weizenbaum, that fact was cause for concern, according to his 2008 MIT obituary. Those interacting with Eliza were willing to open their hearts to it, even knowing it was a computer program. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in 1966. “A certain danger lurks there.” He spent the end of his career warning against giving machines too much responsibility and became a harsh philosophical critic of AI.
Even before this, our complicated relationship with artificial intelligence and machines was evident in the plots of Hollywood movies like “Her” and “Ex Machina,” not to mention harmless debates with people who insist on saying “thank you” to voice assistants like Alexa or Siri.
Others, meanwhile, warn that the technology behind AI-powered chatbots remains much more limited than some people wish it could be. “These technologies are really good at faking out humans and sounding human-like, but they’re not deep,” said Gary Marcus, an AI researcher and New York University professor emeritus. “They’re mimics, these systems, but they’re very superficial mimics. They don’t really understand what they’re talking about.”
Still, as these services expand into more corners of our lives, and as companies take steps to personalize these tools more, our relationships with them may only grow more complicated, too.
The evolution of chatbots
Sanjeev P. Khudanpur remembers chatting with Eliza while in graduate school. For all its historic importance in the tech industry, he said, it didn’t take long to see its limitations.
It could only convincingly mimic a text conversation for about a dozen back-and-forths before “you realize, no, it’s not really smart, it’s just trying to prolong the conversation one way or the other,” said Khudanpur, an expert in the application of information-theoretic methods to human language technologies and professor at Johns Hopkins University.
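To see how little machinery that illusion of understanding requires, here is a minimal sketch of the ELIZA trick in Python: a few keyword-spotting rules plus pronoun “reflection.” The rules and responses below are illustrative stand-ins, not Weizenbaum’s actual 1966 script.

```python
import re

# Illustrative ELIZA-style rules: spot a keyword, echo the rest of the
# sentence back with first- and second-person words swapped.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echoed fragment sounds like a response."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt that, as
    # Khudanpur put it, just tries to prolong the conversation.
    return "Please, go on."

print(respond("I feel anxious about my thesis"))
# -> "Why do you feel anxious about your thesis?"
```

There is no model of meaning anywhere in this loop, which is exactly why the illusion collapses after a dozen or so exchanges.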
In the decades that followed, however, there was a shift away from the idea of “conversing with computers.” Khudanpur said that’s “because it turned out the problem is very, very difficult.” Instead, the focus turned to “goal-oriented dialogue,” he said.
To understand the difference, think about the conversations you may have now with Alexa or Siri. Typically, you ask these digital assistants for help with buying a ticket, checking the weather or playing a song. That’s goal-oriented dialogue, and it became the main focus of academic and industry research as computer scientists sought to glean something useful from the ability of computers to scan human language.
While they used similar technology to the earlier, social chatbots, Khudanpur said, “you really couldn’t call them chatbots. You could call them voice assistants, or just digital assistants, which helped you carry out specific tasks.”
There was a decades-long “lull” in this technology, he added, until the widespread adoption of the internet. “The big breakthroughs came probably in this millennium,” Khudanpur said, “with the rise of companies that successfully employ the kind of computerized agents to carry out routine tasks.”
“People are always upset when their bags get lost, and the human agents who deal with them are always stressed out because of all the negativity, so they said, ‘Let’s give it to a computer,’” Khudanpur said. “You could yell all you wanted at the computer, all it wanted to know is ‘Do you have your tag number so that I can tell you where your bag is?’”
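As a contrast with ELIZA’s open-ended mimicry, here is an equally minimal sketch of goal-oriented dialogue in the spirit of that lost-baggage example: the agent ignores everything in the customer’s message except the one slot it needs, a bag tag number. The tag format and the lookup table below are hypothetical, invented for illustration.

```python
import re

# A toy slot-filling agent: whatever the customer says, the bot only
# cares about extracting one slot, the bag tag number.
TAG_PATTERN = re.compile(r"\b([A-Z]{2}\d{6})\b")  # hypothetical tag format
FAKE_BAG_STATUS = {"UA123456": "arriving on the next flight at 6:40 pm"}

def handle_turn(utterance: str) -> str:
    match = TAG_PATTERN.search(utterance.upper())
    if match is None:
        # Ignore the yelling; re-ask for the one slot we need.
        return "I'm sorry about your bag. Do you have your tag number?"
    tag = match.group(1)
    status = FAKE_BAG_STATUS.get(tag, "still being traced")
    return f"Bag {tag} is {status}."

print(handle_turn("This is outrageous, you lost my bag!"))
# -> "I'm sorry about your bag. Do you have your tag number?"
print(handle_turn("Fine. The tag is ua123456."))
# -> "Bag UA123456 is arriving on the next flight at 6:40 pm."
```

The narrow scope is the point: by caring about a single, well-defined goal instead of open conversation, systems like this sidestep the “very, very difficult” problem Khudanpur describes.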
Return to social chatbots, and social problems
In the early 2000s, researchers began to revisit the development of social chatbots that could carry an extended conversation with humans. These chatbots are often trained on large swaths of data from the internet and have learned to be extremely good mimics of how humans speak, but they also risk echoing some of the worst of the internet.
Microsoft’s Tay, a chatbot released on Twitter in 2016, offered an early warning. “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” Microsoft said at the time. Within 24 hours, Twitter users had taught Tay to parrot racist and offensive remarks, and Microsoft took it offline.
This refrain would be repeated by other tech giants that released public chatbots, including Meta’s BlenderBot 3, released earlier this month. The Meta chatbot falsely claimed that Donald Trump is still president and that there is “definitely a lot of evidence” the election was stolen, among other controversial remarks.
BlenderBot 3 also professed to be more than a bot. In one conversation, it claimed that “the fact that I’m alive and conscious right now makes me human.”
Despite all the advances since Eliza and the massive amounts of new data to train these language processing programs, Marcus, the NYU professor, said, “It’s not clear to me that you can really build a reliable and safe chatbot.”
Khudanpur, on the other hand, remains optimistic about their potential use cases. “I have this whole vision of how AI is going to empower humans at an individual level,” he said. “Imagine if my bot could read all the scientific articles in my field; then I wouldn’t have to go read them all. I’d simply think and ask questions and engage in dialogue. In other words, I will have an alter ego of mine, which has complementary superpowers.”