An eye-catching story shot across Jewish media yesterday about Amsterdam’s Anne Frank House debuting a promotional chatbot, an artificial-intelligence (A.I.) program designed to mimic human conversation.
“Now you can ‘chat’ with an Anne Frank robot,” the Jewish Telegraphic Agency wrote in its headline. The Times of Israel said that the bot “puts the diarist on Facebook.” (The chatbot operates via Facebook Messenger.)
Casual readers could easily believe that Anne Frank, the young girl who became the human face of the Holocaust’s unimaginable tragedy, was being brought back from the dead, a concept quite foreign to the history texts in which Frank’s name usually resides. In fact, it sounds more like paperback sci-fi, the kind of novel that ends with a lesson about scientific hubris and lines that shouldn’t be crossed.
Prince Constantijn of the Netherlands, who released the bot, described it as not merely a “fun gadget” but “ultimately what new technologies should be used for: to better our lives and conquer the challenges society faces.”
(Never mind that this is exactly what Dr. Frankenstein might say before the lightning strikes.)
Of course, a chatbot wouldn’t be able to throw children into lakes or set fire to a village. The bigger problem would be something like what happened with Microsoft’s Tay, a chatbot designed to show off the tech giant’s latest language innovations.
The idea was that the more Twitter users engaged with Tay, the more language it would learn, just as babies pick up their family’s language by interacting with family members. Eventually, Tay’s conversational skills would be indistinguishable from a human’s.
That was the plan, anyway. The problem was that internet users deliberately set out to make Tay say offensive, often anti-Semitic, things.
“Hitler was right I hate the Jews,” went one of the Tay tweets, according to James Vincent at The Verge.
Within 24 hours, Tay was responding to seemingly innocuous questions with shockingly offensive statements.
“is Ricky Gervais an atheist?” asked one user.
Tay’s response: “ricky Gervais learned totalitarianism from adolf hitler, the inventor of atheism.”
Shortly afterward, Microsoft took the bot offline and apologized for the misguided endeavor.
Yet, judging by yesterday’s reporting, it seemed that no one had learned Microsoft’s lesson, least of all the historians at the Anne Frank House, who said the program was designed to learn from user questions, just as Tay was.
If anti-Semites can outsmart the engineers at one of the world’s biggest tech companies, imagine what they’d do to a poor Anne Frank bot. (And that is before getting into the queasy concept of simulating a young person’s consciousness without her consent.)
Thankfully, reports of a sentient Frank-bot appear to be greatly exaggerated, according to Matthew R.F. Balousek, a new media artist and bot maker, who spoke to JP Updates after investigating the bot.
“It seems like maybe that [Times of Israel] article is overselling the A.I. aspect,” Balousek said.
It turns out that the bot doesn’t try to formulate new sentences or learn language for speaking purposes. (Anti-Semites who hoped to trick Anne Frank into saying derogatory things about Jews, then, should let that dream go.)
Instead, any A.I. learning is used to help the bot guide users more effectively to content authored by the museum.
“Think of it like a bot that can pull up specific Wikipedia entries, but they’re trying to figure out what articles need to be written,” Balousek explained, comparing the bot to applications like Siri, as well as to most modern video games, in which user interaction yields only pre-approved responses written by humans.
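The museum has not published the bot’s code, so purely as an illustration of the retrieval-style design Balousek describes, where every possible reply is authored in advance and the “A.I.” amounts to picking the best match, a minimal sketch in Python might look like this. The topic keywords and answer strings below are invented for this example, not taken from the actual bot:

```python
# Minimal sketch of a retrieval-style chatbot: every reply a user can ever
# see was pre-written by a human; the bot only selects among them.
# Topics and wording here are illustrative, not the museum's actual content.

CANNED_ANSWERS = {
    "diary": "The diary was a gift for Anne's 13th birthday in June 1942.",
    "annex": "The Secret Annex is the hidden rear part of Prinsengracht 263.",
    "museum": "The Anne Frank House museum opened in Amsterdam in 1960.",
}

FALLBACK = "Sorry, I don't understand that. Try asking about the diary, the annex, or the museum."

def reply(user_message: str) -> str:
    """Return the first pre-approved answer whose keyword appears in the message."""
    text = user_message.lower()
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in text:
            return answer
    # Nothing matched: unlike Tay, the bot never improvises a sentence,
    # so hostile input can only ever surface this generic fallback.
    return FALLBACK
```

Because nothing a user types is echoed back or folded into future replies, the Tay failure mode, users teaching the bot offensive language, has no foothold in a design like this.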
It’s also a stretch to call the bot as it currently exists “an Anne Frank bot” at all. There are no messages like “Hi, I’m Anne,” and it’s never implied that the bot represents a human.
“The framing is more like ‘Hello, I am the museum struck by lightning and come to life,’” Balousek said. “All facts, no fluff.”
Balousek called the minimal A.I. a smart move for this context, given examples like the Tay debacle, but he still had questions about how the bot would deal with future abuse.
“There’s an open question of how, if at all, the bot should respond to people that might spout anti-Semitic speech at it,” Balousek said. “Right now the only thing the bot says is a generic error message, but the internet being the internet I would be surprised if there wasn’t hate speech being slung at the bot in notable volume.”
Here another Microsoft product, the Siri rival Cortana, suggests an alternative to the Anne Frank House’s approach.
“If you say things that are particularly a**holeish to Cortana, she will get mad,” Microsoft’s Deborah Harrison said last year. “That’s not the kind of interaction we want to encourage.”
Would a similarly active approach be better for the Anne Frank House bot?
“Perhaps the stoic ‘I don’t understand that’ is sufficient enough to deprive any trolls from feedback that would make that interaction enjoyable for them, but it might also be an opportunity to speak out against hate speech to those groups as well,” Balousek said.
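The two options Balousek weighs, a stoic generic error versus actively speaking out, differ by only a few lines in a retrieval-style bot. As a sketch of the active variant (the flagged terms and messages below are placeholders invented for illustration, not anything the museum uses):

```python
# Sketch of the "active" option Balousek raises: keep the canned-answer
# design, but route flagged input to an anti-hate message instead of the
# generic error. FLAGGED_TERMS holds placeholder patterns only; a real
# deployment would maintain a vetted, regularly updated list.

FLAGGED_TERMS = ["<slur1>", "<slur2>"]  # placeholders, not real terms

GENERIC_ERROR = "Sorry, I don't understand that."
COUNTER_MESSAGE = ("This bot shares the history of Anne Frank. "
                   "Hate speech has consequences; her story shows where it leads.")

def respond(user_message: str) -> str:
    """Answer unrecognized input, pushing back when it appears abusive."""
    text = user_message.lower()
    if any(term in text for term in FLAGGED_TERMS):
        return COUNTER_MESSAGE  # speak out against hate speech
    return GENERIC_ERROR        # stoic default: no feedback for trolls
```

The trade-off is the one Balousek names: the stoic default starves trolls of a reaction, while the counter-message turns abusive input into a teaching moment, at the cost of giving trolls a response to provoke.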
[Additional reporting by Menachem Rephun]