The Trouble with the Turing Test
Mark Halpern
In the October 1950 issue of the British quarterly Mind, Alan Turing
published a 28-page paper titled “Computing Machinery and Intelligence.”
It was recognized almost instantly as a landmark. In 1956, less than six
years after its publication in a small periodical read almost exclusively
by academic philosophers, it was reprinted in The World of Mathematics,
an anthology of writings on the classic problems and themes of math-
ematics and logic, most of them written by the greatest mathematicians
and logicians of all time. (In an act that presaged much of the confusion
that followed regarding what Turing really said, James Newman, editor
of the anthology, silently re-titled the paper “Can a Machine Think?”)
Since then, it has become one of the most reprinted, cited, quoted, mis-
quoted, paraphrased, alluded to, and generally referenced philosophical
papers ever published. It has influenced a wide range of intellectual dis-
ciplines—artificial intelligence (AI), robotics, epistemology, philosophy of
mind—and helped shape public understanding, such as it is, of the limits
and possibilities of non-human, man-made, artificial “intelligence.”
Turing’s paper claimed that suitably programmed digital computers
would be generally accepted as thinking by around the year 2000, achieving
that status by successfully responding to human questions in a human-like
way. In preparing his readers to accept this idea, he explained what a
digital computer is, presenting it as a special case of the “discrete state
machine”; he offered a capsule explanation of what “programming” such
a machine means; and he refuted—at least to his own satisfaction—nine
arguments against his thesis that such a machine could be said to think.
(All this groundwork was needed in 1950, when few people had even
heard of computers.) But these sections of his paper are not what has made
it so historically significant. The part that has seized our imagination, to
the point where thousands who have never seen the paper nevertheless
clearly remember it, is Turing’s proposed test for determining whether
a computer is thinking—an experiment he calls the Imitation Game, but
which is now known as the Turing Test.

Mark Halpern has been working in and with computer software for fifty years,
starting out with IBM’s Programming Research Department just after the release
of Fortran, and going on to work for several other companies, including
Lockheed Missiles & Space Company, tiny Silicon Valley startups, and then IBM
again. He lives in the hills of Oakland, California, with his wife and
daughter. His e-mail address is markhalpern@iname.com. This article is an
abridged version of a more detailed and fully documented paper that can be
found on his website, www.rules-of-the-game.com.
The Test calls for an interrogator to question a hidden entity, which
is either a computer or another human being. The questioner must then
decide, based solely on the hidden entity’s answers, whether he has been
interrogating a man or a machine. If the interrogator cannot distinguish
computers from humans any better than he can distinguish, say, men from
women by the same means of interrogation, then we have no good reason
to deny that the computer that deceived him was thinking. And the only
way a computer could imitate a human being that successfully, Turing
implies, would be to actually think like a human being.
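
The shape of that procedure is simple enough to set down in a few lines of
code. The sketch below is only a toy illustration, not anything Turing
specified: the hidden "machine" and "human" are canned stubs, the judge's rule
is deliberately crude, and every function name is invented for the purpose.
What it does show is the bare structure of the Test: questions go to a
concealed entity, text comes back, and the interrogator must render a verdict
from the transcript alone.

import random

# Toy harness for the imitation game sketched above. Everything here is
# illustrative: the hidden "machine" and "human" are canned stubs, and the
# judge's rule is deliberately crude. Only the shape of the procedure
# matters: the judge sees nothing but the transcript of questions and answers.

def machine_reply(question: str) -> str:
    """Stand-in for the hidden computer: evasive, non-responsive answers."""
    return "That is an interesting question, but I would rather not say."

def human_reply(question: str) -> str:
    """Stand-in for the hidden human (in a real trial, a live person)."""
    canned = {
        "Please write me a sonnet on the subject of the Forth Bridge.":
            "Count me out on this one. I never could write poetry.",
        "Add 34957 to 70764.":
            "Give me half a minute with pencil and paper... 105721.",
        "Do you play chess?":
            "Yes, though not very well.",
    }
    return canned.get(question, "Sorry, could you say that another way?")

def run_trial(questions, judge) -> bool:
    """One trial: hide an entity, let the judge question it, check the verdict.
    Returns True if the judge identified the entity correctly."""
    entity_is_machine = random.choice([True, False])
    reply = machine_reply if entity_is_machine else human_reply
    transcript = [(q, reply(q)) for q in questions]
    return judge(transcript) == entity_is_machine

def naive_judge(transcript) -> bool:
    """Toy verdict rule: call the entity a machine if every answer is identical."""
    return len({answer for _, answer in transcript}) == 1

if __name__ == "__main__":
    questions = [
        "Please write me a sonnet on the subject of the Forth Bridge.",
        "Add 34957 to 70764.",
        "Do you play chess?",
    ]
    correct = sum(run_trial(questions, naive_judge) for _ in range(1000))
    print(f"Judge correct in {correct} of 1000 trials")

Turing's claim, in effect, is that by the end of the century the machine's half
of such a harness could be written well enough that no judge would do much
better than chance.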
Turing’s thought experiment was simple and powerful, but prob-
lematic from the start. Turing does not argue for the premise that the
ability to convince an unspecified number of observers, of unspecified
qualifications, for some unspecified length of time, and on an unspecified
number of occasions, would justify the conclusion that the computer was
thinking—he simply asserts it. Some of his defenders have tried to supply
the underpinning that Turing himself apparently thought unnecessary
by arguing that the Test merely asks us to judge the unseen entity in
the same way we regularly judge our fellow humans: if they answer our
questions in a reasonable way, we say they’re thinking. Why not apply the
same criterion to other, non-human entities that might also think?
But this defense fails, because we do not really judge our fellow
humans as thinking beings based on how they answer our questions—we
generally accept any human being on sight and without question as a
thinking being, just as we distinguish a man from a woman on sight.
A conversation may allow us to judge the quality or depth of another’s
thought, but not whether he is a thinking being at all; his membership in
the species Homo sapiens settles that question—or rather, prevents it from
even arising. If such a person’s words were incoherent, we might judge
him to be stupid, injured, drugged, or drunk. If his responses seemed like
nothing more than reshufflings and echoes of the words we had addressed
to him, or if they seemed to parry or evade our questions rather than
address them, we might conclude that he was not acting in good faith, or
that he was gravely brain-damaged and thus accidentally deprived of his
birthright ability to think.
Perhaps our automatic attribution of thinking ability to anyone who is
visibly human is deplorably superficial, lacking in philosophic or scientific
rigor. But for better or worse, that is what we do, and our concept of
thinking being is tightly bound up, first, with human appearance, and then with
coherence of response. If we are to credit some non-human entity with thinking,
that entity had better respond in such a way as to make us see it, in our mind’s
eye, as a human being. And Turing, to his credit, accepted that criterion.
Turing expressed his judgment that computers can think in the form
of a prediction: namely, that the general public of fifty years hence will
have no qualms about using “thinking” to describe what computers do.
The original question, “Can machines think?” I believe to be too mean-
ingless to deserve discussion. Nevertheless I believe that at the end of
the century the use of words and general educated opinion will have
altered so much that one will be able to speak of machines thinking
without expecting to be contradicted.
Note that Turing bases that prediction not on an expectation that the com-
puter will perform any notable mathematical, scientific, or logical feat, such
as playing grandmaster-level chess or proving mathematical theorems, but
on the expectation that it will be able, within two generations or so, to carry
on a sustained question-and-answer exchange well enough to leave most
people, most of the time, unable to distinguish it from a human being.
And what Turing grasped better than most of his followers is that the
characteristic sign of the ability to think is not giving correct answers, but
responsive ones—replies that show an understanding of the remarks that
prompted them. If we are to regard an interlocutor as a thinking being,
his responses need to be autonomous; to think is to think for yourself. The
belief that a hidden entity is thinking depends heavily on the words he
addresses to us being not re-hashings of the words we just said to him, but
words we did not use or think of ourselves—words that are not derivative
but original. By this criterion, no computer, however sophisticated, has
come anywhere near real thinking.
These facts have made the Test highly problematic for AI enthusiasts,
who want to enlist Turing as their spiritual father and philosophic patron.
While they have programmed the computer to do things that might have
astonished even him, today’s programmers cannot do what he believed
they would do—they cannot pass his test. And so the relationship of the
AI community to Turing is much like that of adolescents to their parents:
abject dependence alternating with embarrassed repudiation. For AI work-
ers, to be able to present themselves as “Turing’s Men” is invaluable; his
status is that of a von Neumann, Fermi, or Gell-Mann, just one step below
that of immortals like Newton and Einstein. He is the one undoubted
genius whose name is associated with the AI project (although his status
as a genius is not based on work in AI). The highest award given by the
Association for Computing Machinery is the Turing Award, and his con-
cept of the computer as an instantiation of what we now call the Turing
Machine is fundamental to all theoretical computer science. When mem-
bers of the AI community need some illustrious forebear to lend dignity to
their position, Turing’s name is regularly invoked, and his paper referred
to as if it were holy writ. But when the specifics of that paper are brought up, and
when critics ask why the Test has not yet been successfully performed, he is
brushed aside as an early and rather unsophisticated enthusiast. His ideas,
we are then told, are no longer the foundation of AI work, and his paper
may safely be relegated to the shelf where unread classics gather dust, even
while we are asked to pay its author the profoundest respect. Turing’s is a
name to conjure with, and that is just what most AI workers do with it.
Not Fooled Yet
Turing gave detailed examples of what he wanted and expected program-
mers to do. After introducing the general idea of the Test, he went on to
offer a presumably representative fragment of the dialogue that would
take place between the hidden entity and its interrogator. Perhaps the
key to successful discrimination between a programmed computer and a
human being is to ask the unseen entity the sort of questions that humans
find easy to answer (not necessarily correctly), but that an AI program-
mer will find impossible to predict and handle, and to use such questions
to unmask evasive and merely word-juggling answers. Consider Turing’s
suggested line of questioning with that strategy in mind:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: [describes an endgame position, then asks] What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
The first of these questions has no value as a discriminator, since the
vast majority of humans would be as unable as a computer to produce a
sonnet on short notice, if ever. Turing has the computer plead not just
an inability to write a sonnet on an assigned subject, but an inability to
write a poem of any kind on any subject. A few follow-up questions on
this point might well have been revealing, even decisive for Test purposes.
But Turing’s imaginary interrogator never follows up on an interesting
answer, switching instead to another topic altogether.
The second question is likewise without discriminatory value, since
neither man nor machine would have any trouble with this arithmetic
task, given 30 seconds to perform it; but again, the computer is assumed
to understand something that the questioner has not mentioned—in this
case, that it is not only to add the two numbers, but to report their sum
to the interrogator.
The third question-answer exchange is negligible, but the fourth, like
the first two, raises problems. First, it fails as a discriminator, because no
one who really plays chess would be stumped by an end-game so simple
that a mate-in-one was available; second, it introduces an assumption that
cannot automatically be allowed: namely, that the computer plays to win.
It may seem rather pedantic to call attention to, and disallow, these simple
assumptions; after all, they amount to no more than ordinary common
sense. Exactly. Turing’s sample dialogue awards the computer just that
property that programmers have never been able to give their computers:
common sense. The questions Turing puts in the interrogator’s mouth
seem almost deliberately designed to keep him from understanding what
he’s dealing with, and Turing endows the computer with enough clever-
ness to fool the interrogator forever.
But if Turing’s imaginary interrogator is fooled, most of us are not.
And if we read him with some care, we note also a glaring contradiction in
Turing’s position: that between his initial refusal to respect the common
understanding of key words and concepts, and his appeal at the conclu-
sion of his argument to just such common usage. At the beginning of his
paper, Turing says:
If the meaning of the words “machine” and “think” are to be found
by examining how they are commonly used it is difficult to escape
the conclusion that the meaning and answer to the question, ‘Can a
machine think?’ is to be sought in a statistical survey such as a Gallup
poll. But this is absurd.
But then he suggests, as quoted above, that by the end of the twentieth
century an examination of “the use of words and general educated opin-
ion” would show that the public now accepts that the computer can think,