For those interested in the background (perhaps teachers thinking of discussion units for English classes), there have been several articles and programs in the news lately about the Jeopardy challenge and related topics.
A program on PBS, I think Nova, just had an hour-long episode last week all about how the IBM researchers developed 'Watson,' the computer, including clips from a lot of sample games that 'he' had to play to audition for the Jeopardy producers. It actually made viewers want to root for the computer, because when 'he' was wrong, it was often in such a weird way that everybody laughed at 'him.'
Also, Atlantic Monthly magazine this month (March 2011) had an article entitled 'Mind vs. Machine,' by Brian Christian, about the writer's experience serving on a panel of human interviewees competing in the annual 'Turing test' against a group of computers. (He's evidently written a longer version as a book.)
One of the most interesting parts was about the strategies that programmers have used over the years to make the computers' responses more human-like. The earliest programs just had stock evasive responses when they didn't know an answer, like picking up on a word in the question: 'Oh? Tell me more about ...' But evasion turned out to be more convincing if the computers were programmed to appear angry.
___________
p. 64: ... when it lacked any clear cue for what to say, MGonz* fell back not on therapy clichés like "How does that make you feel?" but on things like "You are obviously an asshole" ... It's a stroke of genius because ... argument is stateless—that is, unanchored from all context ... Each remark after the first is only about the previous remark. ... Verbal abuse is simply less complex than other forms of conversation.
*a 1989 program by Mark Humphrys at University College Dublin
___________
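Just for fun, and to make the trick concrete: a toy version of that kind of program might look something like the little Python sketch below. To be clear, this is only an illustration of the general idea -- it is not MGonz's actual code, and the keywords and canned lines are my own, much-toned-down inventions. The thing to notice is that each reply depends only on the one message just received -- no memory, no context -- which is exactly why the 'stateless argument' trick is so cheap to pull off.
___________
import random

# Canned, context-free retorts: the 'stateless argument' fallback.
# (Toned well down from the MGonz lines quoted above.)
STOCK_RETORTS = [
    "Oh, you would say that.",
    "That's rich, coming from you.",
    "And you expect anyone to believe that?",
    "Why are you telling me this?",
]

# A few keywords worth 'reflecting' back, ELIZA-style.
REFLECT_WORDS = ["family", "work", "school", "computer", "forum"]

def reply(message: str) -> str:
    """Build a reply from the current message alone -- no conversation history."""
    words = [w.strip(".,?!") for w in message.lower().split()]
    # Strategy 1: pick up on a word in the message and ask for more.
    for word in REFLECT_WORDS:
        if word in words:
            return f"Oh? Tell me more about {word}."
    # Strategy 2: no clear cue, so fall back on a context-free jab.
    return random.choice(STOCK_RETORTS)

if __name__ == "__main__":
    print(reply("I had a terrible day at work."))  # reflects 'work'
    print(reply("Do you even understand me?"))     # random stock retort
___________
The whole 'personality' fits in two short lists and one rule, which is really the author's point: verbal abuse doesn't have to keep track of anything.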
All of that, plus several examples in the article taken from conversations in which the programmers tested their programs in online chatrooms, made me think of past episodes of overt verbal abuse in this forum. Not the spammers or spambots that start junk threads, like the most recent ones, but users who had a pattern of responding with gratuitous insults. Someone could probably program a chatbot to be a troll, though I can't imagine any ethical university research department allowing such an unkind test on unwitting subjects.
But what if the next improvement on the abuse model were just to tone it down slightly, so that it was annoying enough to disrupt any expectation of logical or emotional consistency, and to keep anyone from wanting to prolong the conversation, but not so rude that it explicitly violated forum standards? Surely it's just a coincidence that our latest forum 'personality' has such a quirky, off-the-wall manner, so often answering a question with a question, and such a wounded, humble tone when challenged. Just cute enough, perhaps, to convince a certain number of users to give it the benefit of the doubt? (-;
Fortunately, Watson doesn't have to have any personality at all for Jeopardy -- though I think that's one of the reasons we feel sorry for 'him.' Why would you want to be able to answer all the trivia questions if you couldn't enjoy them, or understand the jokes? For us humans, it just doesn't compute.