Saturday morning, Kevin and I trotted into the Angelicum bright and early (9:30, early enough for a Saturday!) to attend a lecture on Artificial Intelligence given by Fr. Philippe-André Holzer, OP, one of my favorite professors from my philosophy program last year. The lecture was jointly part of an ongoing Angelicum course made up of lectures by different professors and an offering of the STOQ Project (Science, Theology, and the Ontological Quest), a project sponsored by the Angelicum, the Pontifical Council for Culture, the Pontifical Lateran University, the Pontifical Gregorian University, the Pontifical Athenaeum Regina Apostolorum, and several other universities.
Fr. Holzer had done his doctoral dissertation on Artificial Intelligence, but he never referred to it in class, so I was looking forward to finally hearing a little about it from him.
He began the lecture with a rather detailed but very comprehensible introduction to Turing machines (I won't even try to reproduce it here) and then moved on to the Turing test, introduced by Alan Turing in his 1950 article "Computing Machinery and Intelligence." Basically: if you asked questions and received typewritten answers, would it be possible, within a five-minute "conversation," to determine whether the one answering the questions was a person or a computer? If it were not possible to distinguish a computer respondent from a human one, Turing would be satisfied that the computer was demonstrating intelligence.
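For anyone curious what that machinery looks like in code, here is a minimal sketch of my own (a toy, not anything from the lecture): a Turing machine is just a table that says, for the current state and the symbol under the read/write head, what to write, which way to move, and which state to enter next. The "flip every bit" machine below is an arbitrary example chosen only to show the mechanics.

# A toy Turing machine simulator in Python. The transition table maps
# (state, symbol read) to (symbol to write, head move, next state); the
# machine runs until it enters the "halt" state.
def run_turing_machine(table, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# An arbitrary example machine: flip every bit, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001_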
Turing devotes the entire second half of the article to addressing objections, both to the possibility of intelligent machines and to his method for establishing their intelligence. The article (linked above) is worth a read for anyone interested in the area, if you haven't already read it (Dad).
If you want to try the 1966 approach to the Turing test yourself, just check out ELIZA, the "Rogerian-psychologist" computer program.
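ELIZA's trick, as I understand it, is mostly keyword matching: find a pattern in what you typed and echo part of it back inside a canned therapist's phrase. Here is a deliberately tiny sketch of that idea (the rules are made up for illustration; Weizenbaum's actual DOCTOR script is considerably more elaborate, with keyword rankings, pronoun swapping, and reassembly rules).

import re

# A toy ELIZA-style responder: match a keyword pattern in the input and
# echo part of it back inside a canned "Rogerian" template. The real
# program also swaps pronouns ("my" -> "your"), which is skipped here.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no keyword matches

print(respond("I am worried about the exam."))  # How long have you been worried about the exam?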
Fr. Holzer finished the lecture with John R. Searle's 1980 article, "Minds, Brains, and Programs," which effectively establishes that the emperor of Turing's test has no clothes. Searle proposes the following thought experiment:
Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a "script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call the "program."

Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view – that is, from the point of view of somebody outside the room in which I am locked – my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view – from the point of view of someone reading my "answers" – the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.
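Searle's point, put in programming terms, is that following rules over symbols you cannot interpret is exactly what a program does. Here is a toy sketch of the "room"; the rule book and the Chinese strings below are placeholders I made up purely for illustration.

# The "room" answers by looking up uninterpreted symbol strings in a rule
# book and returning whatever string the rules dictate. Nothing in the
# program represents what any of the symbols mean.
RULE_BOOK = {
    "你好吗": "我很好",        # matched purely by shape, never by meaning
    "你会说中文吗": "会",
}

def chinese_room(question):
    # Follow the rule book mechanically; if no rule matches, hand back a
    # stock string the rule book also supplies.
    return RULE_BOOK.get(question, "请再说一遍")

print(chinese_room("你会说中文吗"))

From outside the room the output looks like understanding; inside, only shape-matching happened, which, on Searle's account, is all a running program ever does.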
The area of Artificial Intelligence hasn't been getting much popular attention in the last decade or so, despite its popularity in the 1980s. The Internet swept in and grabbed the spotlight, the new millennium dawned without any revenge of the machines, and HAL and his fellow thinking machines quietly fell by the wayside, androids written off as Cold-War-induced sci-fi paranoia. Well, at least two creepy steps toward an android are out there: on the level of intelligence, there's this "baby", and on the level of appearance, would you be able to figure out whether you were talking to Hiroshi Ishiguro or Geminoid?
1 comment:
Truly AI was left in the dust by WWW.
There are so many interesting machines that I would love to string together. Bill C. and I have drifted back and forth through this stuff for decades.
Much love to you both,
Dad