Sunday, August 3, 2008

Wikipedia sez...

Artificial Stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of an AI program to adequately perform basic tasks. However, within the field of computer science, artificial stupidity is also used to refer to a technique of "dumbing down" computer programs in order to deliberately introduce errors in their responses.

Alan Turing, in his 1950 paper Computing Machinery and Intelligence, proposed a test for intelligence which has since become known as the Turing test. While there are a number of different versions, the original test, described by Turing as being based on the "Imitation Game", involved a "machine intelligence" (a computer running an AI program), a female participant, and an interrogator. Both the AI and the female participant were to claim that they were female, and the interrogator's task was to work out which of the two was the female participant by examining their responses to typed questions. While it isn't entirely clear whether Turing intended the interrogator to know that one of the participants was a computer, in discussing possible objections to his argument Turing addressed the claim that "machines cannot make mistakes":

"It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy."

Turing, 1950, page 448

As Turing then noted, the reply to this objection is a simple one: the machine should not attempt to "give the right answers to the arithmetic problems". Instead, deliberate errors should be introduced into the computer's responses.
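As a rough, hypothetical illustration (not part of the Wikipedia excerpt), Turing's suggestion might look something like this in Python; the function name, delay, and error rate are my own assumptions:

import random
import time

def humanlike_sum(a, b, error_rate=0.1, max_delay=4.0):
    # Add two numbers, but pause first and occasionally return a plausibly
    # wrong answer, hiding the "deadly accuracy" Turing mentions.
    time.sleep(random.uniform(0.5, max_delay))   # humans don't answer instantly
    answer = a + b
    if random.random() < error_rate:
        answer += random.choice([-100, -10, -1, 1, 10, 100])  # slip by a digit
    return answer

print(humanlike_sum(34957, 70764))  # the sample sum from Turing's 1950 dialogue

In Turing's own sample dialogue, the machine pauses for about thirty seconds and then gives a wrong answer to exactly this kind of sum.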

Within computer science, there are at least two major applications for artificial stupidity: the generation of deliberate errors in chatbots attempting to pass the Turing test or to otherwise fool a participant into believing that they are human; and the deliberate limitation of computer AIs in video games in order to control the game's difficulty.

The first Loebner prize competition was run in 1991. As reported in The Economist, the winning entry incorporated deliberate errors - described by The Economist as "artificial stupidity" - to fool the judges into believing that it was human. This technique has remained a part of subsequent Loebner prize competitions, and reflects the issue first raised by Turing.
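To give a flavour of what those deliberate errors can look like, here is a small hypothetical sketch in Python; it is my own illustration, not code from any Loebner entry, and the keyboard map and typo rate are arbitrary:

import random

def add_typing_errors(text, typo_rate=0.03):
    # Replace the occasional letter with a neighbouring key, so the output
    # reads like it was typed by a fallible human rather than printed by a program.
    neighbours = {'a': 's', 's': 'a', 'e': 'r', 'r': 'e',
                  'o': 'p', 'p': 'o', 'n': 'm', 'm': 'n'}
    out = []
    for ch in text:
        if ch.lower() in neighbours and random.random() < typo_rate:
            out.append(neighbours[ch.lower()])
        else:
            out.append(ch)
    return ''.join(out)

print(add_typing_errors("I am definitely not a computer program."))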

Lars Lidén argues that good game design involves finding a balance between the computer's "intelligence" and the player's ability to win. By finely tuning the level of "artificial stupidity", it is possible to create computer-controlled players that allow the player to win, but that do so "without looking unintelligent".
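In practice that balance is often exposed as a tunable parameter. A minimal, hypothetical sketch (the names and probabilities are mine, not Lidén's):

import random

def choose_move(moves, score, stupidity=0.3):
    # Usually pick the best-scoring move, but with probability `stupidity`
    # take the second-best instead: beatable, yet never obviously broken.
    ranked = sorted(moves, key=score, reverse=True)
    if len(ranked) > 1 and random.random() < stupidity:
        return ranked[1]
    return ranked[0]

moves = ["attack", "defend", "wait"]
score = {"attack": 0.9, "defend": 0.6, "wait": 0.1}.get
print(choose_move(moves, score, stupidity=0.3))

Raising the stupidity value makes the opponent easier to beat without changing its repertoire of moves, which is roughly the kind of fine tuning Lidén describes.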

By this definition, a sufficiently developed Artificial Stupidity program would be able to produce the worst-case responses for a given situation. This would enable computer programmers and analysts to find flaws immediately while minimizing errors within the code.

However, such a program would mostly be expected to be used during the development and debugging stages of computer software.
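Read that way, artificial stupidity shades into deliberate fault injection during testing. A hypothetical sketch of the idea (the wrapper and fault rate below are my own illustration):

import random

def with_faults(func, fault_rate=0.5):
    # Wrap a function so that, during testing, it sometimes returns a
    # deliberately bad result, forcing callers to show how they handle
    # worst-case responses.
    def wrapper(*args, **kwargs):
        if random.random() < fault_rate:
            return None          # simulate the worst plausible answer
        return func(*args, **kwargs)
    return wrapper

def lookup_user(user_id):
    return {"id": user_id, "name": "example"}

flaky_lookup = with_faults(lookup_user)
print(flaky_lookup(42))  # None about half the time, exposing missing checks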

Excerpt taken in whole from http://en.wikipedia.org/wiki/Artificial_stupidity

1 comment:

eekbot said...

funny you bring this up. about a month ago, my sister was listening to a book on tape discussing this very topic. the only excerpt i heard was regarding the chinese room experiment.

http://en.wikipedia.org/wiki/Chinese_room