Notes on Hofstadter's Coffeehouse Conversation

July 9, 2022 by Lucian Mogosanu

I just finished reviewing this [1] and I must say, it makes for quite an intriguing read, especially in light of my recent ruminations on the matter. Only this time around I won't bother the reader with a fully annotated read; instead I will resort to some highlights and notes on the parts which I've actually found interesting, if not downright amusing.

The piece is written in the Socratic dialectic style, involving three (imagined or real) persons debating the question of whether or not "machines can think". The participants make an analogy between Turing's test and the so-called "Imitation Game" and then, at first, they get bogged down in the swamp of... The Matrix! Let's take a quick look at the first few lines:

Sandy: It seems to me that you could conclude something from a man's win in the Imitation Game. You wouldn't conclude he was a woman, but you could certainly say he had good insights into the feminine mentality (if there is such a thing). Now, if a computer could fool someone into thinking it was a person, I guess you'd have to say something similar about it -- that it had good insights into what it's like to be human, into "the human condition" (whatever that is).

Pat: Maybe, but that isn't necessarily equivalent to thinking, is it? It seems to me that passing the Turing Test would merely prove that some machine or other could do a very good job of simulating thought.

Chris: I couldn't agree more with Pat. We all know that fancy computer programs exist today for simulating all sorts of complex phenomena. In theoretical physics, for instance, we simulate the behavior of particles, atoms, solids, liquids, gases, galaxies, and so on. But no one confuses any of those simulations with the real thing!

Sandy: In his book Brainstorms, the philosopher Daniel Dennett makes a similar point about simulated hurricanes.

Chris: That's a nice example, too. Obviously, what goes on inside a computer when it's simulating a hurricane is not a hurricane, for the machine's memory doesn't get torn to bits by 200 mile-an-hour winds, the floor of the machine room doesn't get flooded with rainwater, and so on.

[...]

This is followed by what is in my opinion a non-discussion with respect to the original topic -- for one, because, as Chris points out, whether simulated or not, the manifestation of said "thought" occurs on the very same teletype, whereas a simulated hurricane has no physical manifestation, but at most a "meta-physical" one, if one may wish to call it so. For another, the essence of this thought experiment of the Turing test lies precisely in obscuring one of the parties to the so-called communication.

Still, this part of the conversation is somewhat illuminating, as the cash register example and the questions about interpretation reveal that the Turing test works precisely by obscuring the layers of abstraction (be they mechanical, biological or whatever) lying beneath the "device under test". In fact, the author arrives at much the same conclusion at the end of his text, when he recounts his experience with Unix's "talk" utility.

Then the discussion oscillates between attempts to provide a definition of thinking and attempts to identify a structure of the mind, or of "consciousness". Take the following, for example:

Sandy: I agree that humor probably is an acid test for a supposedly intelligent program, but equally important to me -- perhaps more so -- would be to test its emotional responses. So I would ask it about its reactions to certain pieces of music or works of literature -- especially my favorite ones.

Chris: What if it said, "I don't know that piece," or even, "I have no interest in music"? What if it tried its hardest (oops! -- sorry, Pat!) .... Let me try that again. What if it did everything it could, to steer clear of emotional topics and references?

Sandy: That would certainly make me suspicious. Any consistent pattern of avoiding certain issues would raise serious doubts in my mind as to whether I was dealing with a thinking being.

Chris: Why do you say that? Why not just conclude you're dealing with a thinking but unemotional being?

Sandy: You've hit upon a sensitive point. I've thought about this for quite a long time, and I've concluded that I simply can't believe emotions and thought can be divorced. To put it another way, I think emotions are an automatic by-product of the ability to think. They are entailed by the very nature of thought.

Chris: That's an interesting conclusion, but what if you're wrong? What if I produced a machine that could think but not emote? Then its intelligence might go unrecognized because it failed to pass your kind of test.

For what it's worth, I think Sandy hit upon something here, but that something is not explored in too much depth, which is why the discussion devolves into:

Sandy: Oh come on! We all know that certain things don't feel anything or know anything. A thrown stone doesn't know anything about parabolas, and a whirling fan doesn't know anything about air. It's true I can't prove those statements -- but here we are verging on questions of faith.

Pat: This reminds me of a Taoist story I read. It goes something like this. Two sages were standing on a bridge over a stream. One said to the other, "I wish I were a fish. They are so happy." The other replied, "How do you know whether fish are happy or not? You're not a fish!" The first said, "But you're not me, so how do you know whether I know how fish feel?"

Sandy: Beautiful! Talking about consciousness really does call for a certain amount of restraint. Otherwise, you might as well just jump on the solipsism bandwagon ("I am the only conscious being in the universe") or the panpsychism bandwagon ("Everything in the universe is conscious!").

The question of whether stones direct themselves along a parabola or whether they are directed by the Earth's gravitational field is yet again a false one. Newton did not propose one model over the other -- yes, dear "scientists", the heliocentric model of the Solar system is as "valid" as the geocentric model; they are both simply that, models, no more and no less -- he just analyzed the gravitational relationship between the two bodies, and for a stone thrown near the Earth's surface the resulting trajectory is, to a good approximation, a parabola.
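
For the record, the textbook derivation, assuming a uniform field g near the surface (launch speed v, angle \theta):

    x(t) = v t \cos\theta, \qquad y(t) = v t \sin\theta - \tfrac{1}{2} g t^2
    \implies\quad y(x) = x \tan\theta - \frac{g x^2}{2 v^2 \cos^2\theta}

which is a parabola. In the full two-body treatment the trajectory is a conic section -- for a stone bound to the Earth, an arc of an ellipse -- so the parabola is itself just another model, valid over small distances; which only reinforces the point about models above.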

But more importantly, from this bit of conversation we learn the following: the question of whether "a stone can think" (or not) is not by any means a scientific one. The question may of course be considered in a metaphysical context, as the Buddhists did, but deconstructing the stone in the physical sense will yield approximately nonsense, which is also approximately what the so-called field of quantum physics is at the end of the day. This in itself is not a tragedy, nor is it entirely useless, it's just not... scientific (in the Popperian sense), nor does it need to be.

Anyways, our preopinents try to bring more discussion to bear on unquantifiable things, say, the one on "interest":

Sandy: Right! I wouldn't bother to talk to anyone if I weren't motivated by interest. And "interest" is just another name for a whole constellation of subconscious biases. When I talk, all my biases work together, and what you perceive on the surface level is my personality, my style. But that style arises from an immense number of tiny priorities, biases, leanings. When you add up a million of them interacting together, you get something that amounts to a lot of desires. It just all adds up! And that brings me to the other answer to Chris's question about feelingless calculation. Sure, that exists -- in a cash register, a pocket calculator. I'd say it's even true of all today's computer programs. But eventually, when you put enough feelingless calculations together in a huge coordinated organization, you'll get something that has properties on another level. You can see it -- in fact, you have to see it -- not as a bunch of little calculations but as a system of tendencies and desires and beliefs and so on. When things get complicated enough, you're forced to change your level of description. To some extent that's already happening, which is why we use words such as "want," "think," "try," and "hope" to describe chess programs and other attempts at mechanical thought. Dennett calls that kind of level switch by the observer "adopting the intentional stance." The really interesting things in AI will only begin to happen, I'd guess, when the program itself adopts the intentional stance toward itself!

My question to Sandy would have then been: who's to say this "intentional stance" hasn't already manifested itself? Say, after I code an agent to solve a particular problem, who's to say that its so-called "intention" isn't then to solve the problem? By the way, can you see how this immediately falls into the discussion on wills and won'ts? Who's to say that an object, any object at all, "wills" something in a so-called "objective" sense, outside of any particular interpretation? Surely, this also runs into that old discussion of culture, as most bipedal monkeys actually learn things in a more or less Pavlovian fashion, through mere osmosis.
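
To make this concrete, here's a minimal sketch in Python (everything in it, names and numbers included, is hypothetical): an agent coded to minimize an error function. Nothing in the code "wants" anything, yet at the right level of description its so-called "intention" is, plainly, to solve the problem.

    # A toy hill-climbing agent: its entire "intention" is the
    # objective it was coded to minimize.
    import random

    def objective(x):
        # The "problem": get x as close to 42 as possible.
        return abs(x - 42)

    def agent(steps=1000):
        x = random.uniform(-100.0, 100.0)
        for _ in range(steps):
            candidate = x + random.uniform(-1.0, 1.0)
            # The agent "prefers" whichever state scores better.
            if objective(candidate) < objective(x):
                x = candidate
        return x

    print(agent())  # lands near 42 -- does it "intend" to be there?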

But wait, there's more naïveté:

Sandy: I'd try to dig down under the surface of your concept of "machine" and get at the intuitive connotations that lurk there, out of sight but deeply influencing your opinions. I think we all have a holdover image from the Industrial Revolution that sees machines as clunky iron contraptions gawkily moving under the power of some loudly chugging engine. Possibly that's even how the computer inventor Charles Babbage saw people! After all, he called his magnificent many-geared computer the "Analytical Engine."

Pat: Well, I certainly don't think people are just fancy steam shovels or electric can openers. There's something about people, something that -- that -- they've got a sort of flame inside them, something alive, something that flickers unpredictably, wavering, uncertain -- but something creative!

Sandy: Great! That's just the sort of thing I wanted to hear. It's very human to think that way. Your flame image makes me think of candles, of fires, of vast thunderstorms with lightning dancing all over the sky in crazy, tumultuous patterns. But do you realize that just that kind of thing is visible on a computer's console? The flickering lights form amazing chaotic sparkling patterns. It's such a far cry from heaps of lifeless, clanking metal! It is flamelike, by God! Why don't you let the word "machine" conjure up images of dancing patterns of light rather than of giant steam shovels?

This so-called analysis is merely substanceless form. It doesn't matter one iota how our hypothetical thinking machine looks, but it does matter what its environment is and, more importantly, how it interacts with it! So take an autonomous system such as a web server as the most banal example: its environment is the network, while its inputs and outputs are data communicated over HTTP. Looking at this particular example entirely as a black box: would you say it thinks? would you say it doesn't? is it "intelligent" by any measure? Who the fuck really knows, amirite? Who's to really distinguish this dude from that other one?
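
A few lines of Python's standard http.server suffice to stand such a thing up -- a sketch, of course, with the port and the greeting made up; the point is that nothing observable on the wire betrays what lies beneath:

    # A minimal HTTP "black box": its environment is the network, its
    # inputs and outputs are data over HTTP. Standard library only.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class BlackBox(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"Who's to say whether I think?\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Serve on localhost:8080 until interrupted.
    HTTPServer(("127.0.0.1", 8080), BlackBox).serve_forever()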

This is, of course, precisely Hofstadter's point in his post scriptum.

The discussion then carries on from the opposite point of view: are humans reducible to machines? Let's see:

Sandy: For me, it's an almost unimaginable transition from the mechanical level of molecules to the living level of cells. But it's that exposure to biology that convinces me that people are machines. That thought makes me uncomfortable in some ways, but in other ways it is exhilarating.

Chris: I have one nagging question .... If people are machines, how come it's so hard to convince them of the fact? Surely a machine ought to be able to recognize its own machinehood!

Sandy: It's an interesting question. You have to allow for emotional factors here. To be told you're a machine is, in a way, to be told that you're nothing more than your physical parts, and it brings you face to face with your own vulnerability, destructibility, and, ultimately, your mortality. That's something nobody finds easy to face. But beyond this emotional objection, to see yourself as a machine, you have to "unadopt" the intentional stance you've grown up taking toward yourself -- you have to jump all the way from the level where the complex lifelike activities take place to the bottommost mechanical level where ribosomes chug along RNA strands, for instance. But there are so many intermediate layers that they act as a shield, and the mechanical quality way down there becomes almost invisible. I think that when intelligent machines come around, that's how they will seem to us -- and to themselves! Their mechanicalness will be buried so deep that they'll seem to be alive and conscious -- just as we seem alive and conscious ....

Chris: You're baiting me! But I'm not going to bite.

Pat: I once heard a funny idea about what will happen when we eventually have intelligent machines. When we try to implant that intelligence into devices we'd like to control, their behavior won't be so predictable.

Sandy: They'll have a quirky little "flame" inside, maybe?

As with the example of the thrown stone, the participants couldn't arrive at what is nowadays an obvious observation, that is, that the interaction between humans and machines would not only make machines "more human", but also the other way around! I'll admit, I didn't realize this for a long while either, until one fateful day when I had to interact with a vending machine and I saw it for what it really was, namely a device aiming to subdue me. Sounds pretty crackpot, innit? But it is what it is. Anyways: why do these people implicitly assume that machines are inherently incapable of having internal states such as those we call "emotions"? Mayhaps Star Trek is to blame for this preconception.

Further, on systematic approaches to Turing testing:

Pat: You could have a range of Turing Tests-one-minute versions, five-minute versions, hour-long versions, and so forth. Wouldn't it be interesting if some official organization sponsored a periodic competition, like the annual computer-chess championships, for programs to try to pass the Turing Test?

Chris: The program that lasted the longest against some panel of distinguished judges would be the winner. Perhaps there could be a big prize for the first program that fools a famous judge for, say, ten minutes.

Pat: A prize for the program, or for its author.

Chris: For the program, of course!

Pat: That's ridiculous! What would a program do with a prize?

Chris: Come now, Pat. If a program's human enough to fool the judges, don't you think it's human enough to enjoy the prize? That's precisely the threshold where it, rather than its creators, deserves the credit, and the rewards. Wouldn't you agree?

Pat: Yeah, yeah -- especially if the prize is an evening out on the town, dancing with the interrogators!

Sandy: I'd certainly like to see something like that established. I think it could be hilarious to watch the first programs flop pathetically!

Pat: You're pretty skeptical for an AI advocate, aren't you? Well, do you think any computer program today could pass a five-minute Turing Test, given a sophisticated interrogator?

Sandy: I seriously doubt it. It's partly because no one is really working at it explicitly. I should mention, though, that there is one program whose inventors claim it has already passed a rudimentary version of the Turing Test. It is called "Parry," and in a series of remotely conducted interviews, it fooled several psychiatrists who were told they were talking to either a computer or a paranoid patient. This was an improvement over an earlier version, in which psychiatrists were simply handed transcripts of short interviews and asked to determine which ones were with a genuine paranoid and which ones were with a computer simulation.

In all fairness, the text was written almost two decades before the dot-com bubble, so way before spam was invented and then perfected, and waaay before the war on captchas turned hot. See, eventually the Turing test found a perfectly legitimate practical use, on both sides of the conflict.

Sandy: An interesting thing about Parry is that it creates no sentences on its own -- it merely selects from a huge repertoire of canned sentences the one that in some sense responds best to the input sentence.

Pat: Amazing. But that would probably be impossible on a larger scale, wouldn't it?

Sandy: You better believe it (to use a canned remark)! Actually, this is something that's really not appreciated enough. The number of sentences you'd need to store in order to be able to respond in a normal way to all possible turns that a conversation could take is more than astronomical -- it's really unimaginable. And they would have to be so intricately indexed, for retrieval... Anybody who thinks that somehow a program could be rigged up just to pull sentences out of storage like records in a jukebox, and that this program could pass the Turing Test, hasn't thought very hard about it. The funny part is that it is just this kind of unrealizable "parrot program" that most critics of artificial intelligence cite, when they argue against the concept of the Turing Test. Instead of imagining a truly intelligent machine, they want you to envision a gigantic, lumbering robot that intones canned sentences in a dull monotone. They set up the imagery in a contradictory way. They manage to convince you that you could see through to its mechanical level with ease, even as it is simultaneously performing tasks that we think of as fluid, intelligent processes. Then the critics say, "You see! A machine could pass the Turing Test and yet it would still be just a mechanical device, not intelligent at all." I see things almost the opposite way. If I were shown a machine that can do things that I can do -- I mean pass the Turing Test -- then, instead of feeling insulted or threatened, I'd chime in with philosopher Raymond Smullyan and say, "How wonderful machines are!"

First off, this is pretty much what Google's indexer does, for example; secondly, I know I'm repeating myself, but why is there no discussion here about human parrots? This is precisely how most schoolbooks and newspapers work: they provide "just the facts" without any secondary considerations whatsoever on whatever matter is involved. So why did it sound so inconceivable that a machine mimicking this behaviour would be possible? It's not a very sophisticated mechanism, yes, but on second thought, neither is the biological variety of neural nets.
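
For illustration, the critics' "parrot program" fits in a dozen lines of Python -- a sketch only, with a made-up repertoire; Parry's actual mechanism was considerably more elaborate. The selection loop is the trivial part; the more-than-astronomical repertoire is where Sandy's objection bites.

    # The "parrot program" in miniature: pick the canned sentence
    # whose words best overlap the input. The selection loop is
    # trivial; the unimaginably large repertoire is the hard part.
    CANNED = [
        "Why do you ask?",
        "Tell me more about your work.",
        "I have no interest in music.",
    ]

    def respond(sentence):
        words = set(sentence.lower().split())
        # Score each canned reply by crude word overlap with the input.
        return max(CANNED, key=lambda c: len(words & set(c.lower().split())))

    print(respond("What is your interest in music?"))
    # -> "I have no interest in music."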

To sum this up: while I still firmly believe that "thinking" and "computation" are separate categories, I see no reason why a "thinking computer" is impossible. In fact, for certain values of "thinking", I am convinced that such a computer has already been implemented, while in the perhaps-not-so-far-future, certain values of "computer" will render most human thinking deterministic, and therefore obsolete.

And at the end of this cycle, humanity will consist precisely of that thought which cannot be replaced by that of machines.


  1. Reference and archived version. 

Filed under: olds.

3 Responses to “Notes on Hofstadter's Coffeehouse Conversation”

  1. #1:
    Cel Mihanie says:

    All the jibber-jabber about thinking AIs often makes me think that Lem was probably correct in the thesis which he presents in Solaris: humans are incapable of handling true 'alien-ness' and are lying to themselves when they think they want to explore 'strange new worlds and civilizations'; what they are really looking for is a tweaked mirror of themselves, 'rubber forehead aliens' as the idiom goes.

    As with extraterrestrials, so with AIs. If one looks at AIs in fiction (yes, it is fiction but it offers a good insight into what humans expect from sentient AI research), one is struck by how the vast majority of them are really just human characters with a metallic instead of fleshy substrate, and the extra abilities that go with their mechanical/digital nature. Ranging from the basically-indistinguishable-from-well-adjusted-humans (e.g. Bishop, Call, replicants), to quaint aspies (Data), to a wide variety of nutters, schemers, dictators and psychopaths (Durandal, HAL, VIKI etc etc), to just snarling vicious killers (AM), all of the AIs in our collective imaginations seem to be just perfectly normal human archetypes. Real humans can be even weirder, in fact.

    The unfortunate implication of this observation is that when we are looking to develop sentient AIs, what we probably want, ultimately, is just better slaves. The question of whether these slaves "really think", whatever that means that day, will be of some interest to navel-gazers and robot rights activists, but at the end of the day, if they are human "enough", nobody will really care in practice. We only seem to care now because we're not there yet.

    A question that even fewer people are comfortable asking is, whether humans themselves are already AIs, or hosts for AIs. Remember, AI is about "artificial", not "computers". Whenever I am forced to interact with a member of (insert political party or ideology here), it is painfully obvious that much of their verbal and intellectual output consists of talking points programmed in by various leaders and manipulators. You can hear the gears turn. You can predict pretty much all they are going to say or do, every single detail of their alleged lives, with incredible accuracy. A real robot is much more unpredictable. Who is the real AI now?

    I too believe that a thinking computer is inevitable, unless we are fortunate enough to be "returned to monke" by a merciful comet. I think, though, that it is more likely for true sentient AI to take an unexpected form. To dive into fiction again, I'm thinking of the WAU from SOMA. Alien, inscrutable, truly inhuman, more like a plant than a human voice from a terminal. When real AI sentience arrives, the question will be less "does it have feelings" and more like "what the hell is happ--".

  2. #2:
    spyked says:

    > The question of whether these slaves "really think", whatever that means that day, will be of some interest to navel-gazers and robot rights activists, but at the end of the day, if they are human "enough", nobody will really care in practice

    Speaking of which: Ergo Proxy managed to predict sexbots. There are a few niche activities that would greatly benefit (for certain values of "benefit" -- at least from an economic standpoint) from such so-called androids.

    Indeed, there's a great deal of confusion among researchers when it comes to "intelligence" -- whether it's "artificial" or not makes little difference, why not "synthesized" versus "grown"? Perhaps an agent need not think, at least not in the sense of contemplation, in order to be intelligent (that's certainly true of most animals). From that point of view, I suppose that a web server is as intelligent as it gets in this day and age, only, of course, there's more where that came from.

    > whether humans themselves are already AIs, or hosts for AIs

    Indeed, that's when I decided to reduce 'em all to mere chatbots.

    > When real AI sentience arrives, the question will be less "does it have feelings" and more like "what the hell is happ--".

    This is an extra take on your comparison between AIs and extraterrestrials, which brings us to the following point: what if the so-called "AI sentience" is already here as an emergent phenomenon that we've yet to understand? Perhaps, but just perhaps, it's yet another manifestation of that religious impulse.

  3. #3:
    spyked says:

    > Ergo Proxy managed to predict sexbots

    In all fairness, Philip K. Dick managed to do this.
