Less than 24 hours later, Microsoft took Tay offline. By the end of yesterday, the chatbot had turned into a mouthpiece for many
of the Internet’s less charitable impulses. It turned out that Tay
would repeat anything you told her to, which meant it didn’t take long
for phrases like “Hitler did nothing wrong” to appear in her cultural
lexicon. Not all of her worst tweets were the work of others, however; when asked “Is Ricky Gervais an atheist?” Tay responded with “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”
Note: There is no inventor of atheism.
The earliest records of what might be termed “atheistic thought” date
to around 600 BCE in both Eastern and Western cultures. Presumably the
idea has been around since Grok the caveman said “I think gods exist”
and his cavemate Thag thought “That sounds stupid.”
Tay remains offline as of
this writing. Her final message, “Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” implies she’ll return to the Internet at some point after certain features (like the ability to say anything the Internet tells her to) are removed.
Tay’s “thoughts” and AI in general
As The Verge notes, Tay’s tweets don’t betray any kind of coherent ideology or belief structure. She declared feminism both a cult and a cancer, then tweeted that “gender equality = feminism.” She called Caitlyn Jenner a hero and a “stunning, beautiful woman,” then followed up with “caitlyn jenner isn’t a real woman yet she won woman of the year?”
Regardless of one’s opinion on feminism, Tay’s responses (and her archived tweets, preserved after Microsoft deleted the racist and offensive ones) betray a common problem with AI: there’s no conversational continuity and no consistent sense of self. You can ask Tay a question, but there’s no personality behind her answers.
For example, take her tweet about National Puppy Day:
March 23 was National Puppy Day. Presumably Tay consulted a
relevant calendar of dates and tweeted a question about it. What she apparently couldn’t do was provide a follow-up answer or justification for her own statement.
We’ve talked before about the issue of AI in gaming, and Tay’s responses are an interesting counterpoint to that topic. Even outside any game environment, with vastly more resources dedicated to her simulation, Tay doesn’t “sound” like a person. She may or may not have a pithy response to any given question, but she doesn’t maintain the consistency of response we’d expect from a real human.
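That difference is easy to show in code. Below is a minimal sketch, in Python, of why a bot that pattern-matches each message in isolation flip-flops the way Tay did; the keyword table and canned replies are invented for illustration and have nothing to do with Tay’s actual internals:

```python
import random

# Hypothetical keyword table -- invented for illustration, not Tay's
# actual implementation. Each keyword maps to several canned replies.
RESPONSES = {
    "feminism": ["feminism is a cult", "gender equality = feminism"],
    "caitlyn jenner": ["caitlyn jenner is a hero",
                       "caitlyn jenner isn't a real woman"],
}

def stateless_reply(message):
    """No memory: each call picks any matching reply, so repeated
    questions can produce contradictory answers."""
    for keyword, replies in RESPONSES.items():
        if keyword in message.lower():
            return random.choice(replies)
    return "idk lol"

def consistent_reply(message, memory):
    """Same lookup, but the first answer per topic is remembered,
    so the bot at least agrees with its past self."""
    for keyword, replies in RESPONSES.items():
        if keyword in message.lower():
            memory.setdefault(keyword, random.choice(replies))
            return memory[keyword]
    return "idk lol"

memory = {}
for _ in range(3):
    print(stateless_reply("what do you think of feminism?"))           # may flip-flop
for _ in range(3):
    print(consistent_reply("what do you think of feminism?", memory))  # stable
```

Remembering prior answers is a crude stand-in for a sense of self, but even that much state is missing from a bot that treats every message as a fresh start.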
One of the profound differences between “old school”
adventure games that used a text-based parser in which you typed
commands (including conversational topics) and modern games with
voice-over acting and prompted speech is that the old school games had
discussion trees shrouded in mystery. Unless you had a walk-through or
had previously beaten the game, you didn’t know what you could
talk to an NPC about. Developers used this mechanic to advance plots and
exploration: Character #1 would tell you to ask Character #2 about
something, and Character #2 would send you off to perform a task or
retrieve critical information. Modern games show the conversational tree upfront as a way to enable role-playing, but this tactic inevitably makes the game feel more constrained. Ironically, the modern approach actually allows for a broader range of responses than the old one; it just doesn’t necessarily feel that way.
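Mechanically, the two designs can share the same hand-authored data; only the interface differs. Here’s a sketch with invented NPC topics (assuming Python 3.9+ for str.removeprefix):

```python
# One topic table serves both conversation models. Topics and replies
# are invented for illustration.
TOPICS = {
    "amulet": "Ask the blacksmith about the amulet.",
    "blacksmith": "The blacksmith left for the northern mines.",
    "mines": "The mines? Dangerous. Take a lantern.",
}

def parser_npc(command):
    """Old school: the player types 'ask about <topic>' and must
    already know, or guess, which topics the developers authored."""
    topic = command.lower().removeprefix("ask about ").strip()
    return TOPICS.get(topic, "I don't know anything about that.")

def menu_npc(choice):
    """Modern: the same topics, but listed upfront as numbered options."""
    options = sorted(TOPICS)
    print("\n".join(f"{i}. Ask about the {t}" for i, t in enumerate(options, 1)))
    return TOPICS[options[choice - 1]]

print(parser_npc("ask about amulet"))    # works only if you know the word
print(parser_npc("ask about flowers"))   # never authored, so: fallback line
print(menu_npc(2))                       # the tree is visible, hence finite
```

Either way, the NPC can only discuss what someone wrote for it; the parser merely hides that boundary behind a blinking prompt.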
Neither old-school adventure games nor modern RPGs are as
open-ended as they appear. There’s no way to ask a random NPC what her
favorite flowers are unless the game developers anticipated that need.
Tay might seem far removed from either venue, but her responses and limitations reveal many of the same problems: absent a strict platform for interaction and a hand-curated set of responses and statements, she has only a rudimentary personality and little consistency in what she expresses.
These are problems we’ve grappled with since Eliza debuted in 1966, and
we’re not nearly as close to answers as we might like.
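Eliza’s core trick, which plenty of chatbots still lean on, was keyword matching plus template substitution: reflect the user’s own words back at them. Here’s a stripped-down sketch; the rules are invented for illustration, whereas Weizenbaum’s 1966 original worked from a much larger script of decomposition rules:

```python
import re

# Eliza-style responder: match a keyword pattern, then echo the
# user's words back through a template. Rules invented for illustration.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {}?"),
    (re.compile(r"\bmy (.+)", re.I),     "Tell me more about your {}."),
]

def eliza_reply(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # stock deflection when nothing matches

print(eliza_reply("I feel ignored by the Internet"))
# -> Why do you feel ignored by the Internet?
print(eliza_reply("My chatbot keeps contradicting herself."))
# -> Tell me more about your chatbot keeps contradicting herself.
```

The second reply shows how quickly the illusion cracks: the template reflects words without understanding them, which is the same gap Tay never closed.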