The difference between thinking and computation

February 18, 2022 by Lucian Mogosanu

The woman and I were having our usual morning conversation the other day, when she briefly recounted to me the following episode from her life, which I shall reproduce here approximately:

"So I was spending my time with these two nerds1 from Vianu2 solving SAT problems and they were solving a geometry problem with two equilateral triangles sharing two of the edges, one turned upside down on top of the other, the one on top smaller than the other. The problem statement was asking to find the area of the smaller one knowing the perimeter of the bigger one, so while they rigorously start applying the methods and procedures taught in their advanced mathematics classes, I take a look at the triangles and I observe that the smaller one's edge is half of the bigger one's. So I readily produce an answer to the problem, while those two are still rigorously applying the methods and procedures they've spent so much time learning. So after two minutes of labour, they finally find the answer, and upon finding the answer they look at me with big eyes, asking come again? So I guess it took me only a few seconds to find the answer while they were applying all those sophisticated methods for minutes."

To which I reply:

"So this is where the difference between thinking and computation comes from, huh?"

In retrospect I might not have been too clear, so let me attempt to explain: the example above elucidates, at least to my eye, a longstanding problem in the computing sciences, namely that of the relationship between the human capacity for thinking and the mechanical phenomenon of universal computation, as defined and described by Turing et al. Sure, the question isn't answered fully, and maybe not to the degree academics are expecting, but at the very least our example illustrates clearly what the latter isn't.

More precisely: thinking, however you wish to define it, involves primarily, as a necessity, evaluating an object and drawing observations from it; while computation involves primarily, as a necessity, applying an algorithm to whatever one is faced with. In other words, for the former the object, the problem, is a first-class citizen, while the latter reduces said object to merely an input to be passed through the rigid process of the actual first-class citizen, the algorithm.
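To make the contrast concrete, here is a minimal sketch in Python of the two routes through the triangle problem, assuming the classic construction in which the smaller triangle's edge is half of the bigger one's; the function names are mine, for illustration only:

    import math

    def area_equilateral(side: float) -> float:
        """Area of an equilateral triangle of the given side."""
        return math.sqrt(3) / 4 * side ** 2

    # The "computation" route: mechanically derive everything from the
    # perimeter, step by rigid step.
    def area_small_procedural(perimeter_big: float) -> float:
        side_big = perimeter_big / 3   # equilateral: three equal sides
        side_small = side_big / 2      # given by the construction
        return area_equilateral(side_small)

    # The "thinking" route: one observation collapses the procedure.
    # If the small side is half the big one, the small area is a
    # quarter of the big area; nothing else needs deriving.
    def area_small_observed(perimeter_big: float) -> float:
        return area_equilateral(perimeter_big / 3) / 4

    assert math.isclose(area_small_procedural(18), area_small_observed(18))

Of course, once written down, both routes are computation; the difference I'm pointing at lies in how the shortcut was found, not in the executing of it.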

I see Frank Herbert is in fashion again nowadays[3], and it seems he got it way before it was cool to discuss such things: the Duke, or the Baron if you will, does not expect his mentat to think, which is why and how mentats are described as literal organic computing machines, albeit with no power to e.g. make any decisions. This alone makes "automated decision-making" a contradiction in terms: applying an algorithm may enable one to emulate making a decision through some hard-coded program logic, but at most that; and the emulation will inevitably fail when some new variable becomes observable in the set of inputs, or otherwise when some inputs suddenly become unavailable.
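To see that failure mode in miniature, here is a toy sketch, with made-up rules of my own choosing:

    # A hard-coded "decision procedure": it can only dispatch over the
    # inputs its author anticipated when writing it down.
    RULES = {
        "assassin": "double the house guard",
        "poison": "employ a food taster",
        "sabotage": "inspect the shield generators",
    }

    def mentat_advice(threat: str) -> str:
        if threat not in RULES:
            # A new variable becomes observable in the set of inputs,
            # and the emulation of decision-making breaks down.
            raise ValueError(f"no rule for {threat!r}; an actual decision is needed")
        return RULES[threat]

    print(mentat_advice("poison"))    # works: the case was foreseen
    # mentat_advice("sandworm")       # fails: nothing was hard-coded for it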

Of course, this doesn't exclude the possibility that some piece of artificial[4] machinery exists that is indeed able to make the sort of inferences otherwise attributed to thinking. Also, the fact that I haven't been able to see this sort of machinery in my more than twenty years of computing may not mean much. Still, the evidence thus far points more in the direction of humans being reduced to mindless machines than the opposite. So until and unless I see new evidence, I for one remain skeptical on one hand, and on the other quite convinced that there is first and foremost a difference in quality between thinking and computation.


  1. Story of her life, especially since she ended up spending the vast majority of her time with me. 

  2. Tudor Vianu National College; a high school in Bucharest bearing the name of a Romanian philosopher, famous for the excellent preparation of its students for maths and (especially) informatics olympiads. Although given recent events, I suppose fame is all it's left with nowadays. 

  3. Unfortunately I can't really write down a review of the latest Dune, at least not other than in relation to the version started by David Lynch. I guess this says something, both about Lynch and about the ability of people in this day and age to... well, to think.

    By the way, this is the silver lining in that otherwise bleak prediction about art being completely overtaken by computers: if a tree falls in the forest and there's no one there to hear it, in the end it doesn't matter one iota whether the sound was in fact generated by a computer or by an actual falling tree. There needs to be someone there to hear it in the first place, otherwise falsity itself becomes a premise. Conversely, if I deem a musical piece to be beautiful, then in the end it matters quite little to me whether it was generated by an algorithm, a human strumming a guitar or a blackbird. 

  4. Turing-based, although... why not some other type, kind or sort! 


8 Responses to “The difference between thinking and computation”

  1. #1:
    Alex says:

    Something about this line of reasoning rubs me the wrong way. Let me see if I can actually put it into words:

    thinking .. involves .. evaluating an object and drawing observations from it; while computation involves applying an algorithm to whatever one is faced with

    To my mind, computation is a (large-ish) *subset* of thinking; or, to put it the other way around, thinking is a superset of computation, exhibiting a lot of its traits/qualities.

    What you call "evaluating an object" - where does it start? With taking in relevant information about said object, via hardware/wetware sensors? And then following various logic pathways, categorizing said information you acquired, and so on?

    In your example re: geometry, the only salient difference is that the lady took the time to examine the objects of the problem, and that led her down a simpler computational path. Examining in what ways a problem can be solved is also an algorithmic kind of thinking, on a meta level; one takes note of the kind of problem presented, then chooses a suitable solution from a range of options.

    What I will grant is that meta-analysis is computationally expensive and often goes unused. Few species possess the ability, and only a subset of the individuals possess the inclination. Nevertheless, since such analysis exists as an option, and can be approached programmatically, I posit that it largely falls in the "computing" category.
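    A toy sketch of what I mean, with hypothetical problem kinds and solvers of my own naming: take note of the kind of problem, then dispatch to a suitable solver.

        import math
        from typing import Callable

        def solve_by_observation(data: dict) -> float:
            # the shortcut: small area = big area * (side ratio)^2
            return data["big_area"] * data["side_ratio"] ** 2

        def solve_by_procedure(data: dict) -> float:
            # the rigorous path: derive the small side, then its area
            side_small = (data["perimeter_big"] / 3) * data["side_ratio"]
            return math.sqrt(3) / 4 * side_small ** 2

        def classify(data: dict) -> str:
            # the meta level: noting what kind of problem we were given
            return "observation" if "big_area" in data else "procedure"

        SOLVERS: dict[str, Callable[[dict], float]] = {
            "observation": solve_by_observation,
            "procedure": solve_by_procedure,
        }

        def solve(data: dict) -> float:
            return SOLVERS[classify(data)](data)

    The choosing itself runs as a program, which is exactly why I file it under "computing".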

    What I would actually mark as the difference between thinking and (naked, raw) computing is the ability to form new associations out of existing data and structures, new symbols and templates. A processor has a predefined set of instructions, a predefined set of symbols; on the next layer, software, by and large, has a predefined set of instructions. Wetware is remarkable in its ability to react to new stimuli, form a novel representation of those, and then immediately use this new thinking-template to inform new decisions.

    So here's the kicker: most of life on Earth may actually be described as running simply on computation, with some episodic thinking happening every now and then. Therefore, the very line between these two concepts is... somewhat blurry.

    n.b. Speaking of mentats, do not forget Paul Atreides exhibited such traits, and he was going to undergo the training, becoming a Duke-Mentat, combining both political power and vast processing power. Even "regular" mentats such as Thufir Hawat were shouldering a lot of decision-making, starting with House security. Again, I find the line between thinking and computing to be quite blurry in a mentat.

  2. #2:
    spyked says:

    Let me attempt to disentangle things, hoping that I won't get them more entangled along the way.

    > Something about this line of reasoning rubs me the wrong way.

    I guess much of the problem comes straight from the title: what is the nature of this "difference" that the author is speaking about? This sounds to me precisely like (and it is indeed a mockery of) "people being equal to one another", or in this particular case, of the fallacious fundaments of "artificial intelligence". Or otherwise, to put it in the words of old Dijkstra: "[...] whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim".

    In other words, the article is challenging that very link between thinking and computation, yet it indeed brings very little to back the challenge up. I for one don't feel compelled to demonstrate that things stand one way or the other; I do *strongly believe* however that the "computational agent" is naught but a metaphor, same as the linguistic one. I'm not saying it's not a useful metaphor either, just that its use is limited, perhaps more limited than the so-called scientists in the field are willing to admit. Sure, machines have sensors and actuators and everything in between, I just don't think that attempting to attribute properties such as "thinking" or "intelligence" (or even any sort of individuality) to them is anything but ye olde anthropomorphism.

    Moving on, take

    > To my mind, computation is a (large-ish) *subset* of thinking

    versus

    > So here's the kicker: most of life on Earth may actually be described as running simply on computation, with some episodic thinking happening every now and then.

    So which one is a subset of which? also keeping in mind that, as you mentioned, life, like wetware (actually: precisely the other way around), "learns" from the environment, while silicon can at most emulate this adaptation process.

    I suspect that the reasons behind this limitation have much to do with the limitations of the so-called "universal" Turing machine: much like there are mathematical functions that a Turing machine cannot compute (i.e. they are not effectively computable), the biochemistry and genetics of life are based upon processes which silicon computers cannot reproduce. Now, using the DNA itself as a computational device, that is indeed interesting, although I've not yet heard of a computational class to describe such processes. The field is still young, but who knows, maybe in the '30s or '40s, if we live that long...

    Another problem with the computing we're familiar with is that it is but a model of how (certain) things work, and like all models, it is reductionist. Feynman attempted to apply the notion of computation in physics itself and that came to nothing, which yet again makes me skeptical of the attempts to apply the same notion to life, or just parts of it for that matter.

    So to conclude, assuming that both computation and thinking are actual things in and of themselves, what I mean by "qualitative difference" above is that they belong to entirely distinct ontological categories and that it doesn't make much sense to try to find a relation between them in the first place.

    > Speaking of mentats, do not forget Paul Atreides exhibited such traits, and he was going to undergo the training, becoming a Duke-Mentat

    Indeed! what I'm proposing is that it's his "dukeness" (his... "will to power", let's say, I don't have any closer philosophical proxy to reference) which allows him to think and make decisions; and his "mentatness" combined with drugs which allows him to see.

  3. #3:
    spyked says:

    I guess this could be an article standing on its own feet, but I will also leave here a few thoughts on the following idea:

    > Examining in what ways a problem can be solved is also an algorithmic kind of thinking, on a meta level; one takes note of the kind of problem presented, then chooses a suitable solution from a range of options.

    I for one will refrain from representing the processes involved in thinking this way, as in my experience many such solutions were found before becoming fully conscious of the problem, or otherwise they didn't involve any sort of algorithmic searching at all. I don't pretend to fully understand what went on there and I don't want to start throwing words around (in my opinion the postmodern West tries to make too much of e.g. the so-called mindfulness); I'm just saying that science has a poor understanding of what *causes* thoughts (or whether the cause-effect framework applies here at all), and "choosing a suitable solution" might be the convenient answer, not necessarily the correct one.

  4. #4:
    Verisimilitude says:

    I agree with the view of intelligence being pattern-recognition. In this view, the woman clearly just recognized and applied a simpler pattern than the other two. This is still computation, or something like it.

    How many of our best thoughts aren't the result of constantly thinking about topics, and simply having some recognizer for nice ideas? I've mulled over just a few things for years, but then I extend them, let them hit reality, and all the while I'm solving tiny problems therein with smaller ideas, and doing this without end leads me to recognize other patterns and ideas, which can grow beyond their purposes, all for the better.

    The only reason I'd claim a brain isn't a computer is because I refuse to believe all brains have limitations, and formal systems have limitations.

  5. #5:
    spyked says:

    If intelligence is indeed a form of pattern-recognition, then whatever equations are laid down on paper shall have to model the underlying process of "learning guided by evolution", as explained by Chomsky in his rebuttals of the current statistical approaches to the field of "machine learning". After all, "machine learning" is but a glorified simplification of signal processing, control systems and so on and so forth, as is its discrete version, "algorithmic pattern matching". There is much more to learning than mere back-propagation and transfer functions, and the way the current practitioners in the field struggle with the results of their work (either through nonsensical "deep learning" or through manual feature crafting) is nothing short of ridiculous.
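    For reference, this is roughly all there is to "back-propagation and transfer functions", reduced to a single toy neuron; a sketch of mine, not anyone's production code:

        import math

        # One neuron, one weight: "learning" here is a transfer
        # (activation) function plus back-propagation of the error
        # gradient, and nothing more.
        def sigmoid(x: float) -> float:
            return 1.0 / (1.0 + math.exp(-x))

        w, b, rate = 0.0, 0.0, 0.5
        samples = [(0.0, 0.0), (1.0, 1.0)]    # toy input/target pairs

        for _ in range(5000):
            for x, target in samples:
                y = sigmoid(w * x + b)              # forward pass
                grad = (y - target) * y * (1 - y)   # chain rule, squared loss
                w -= rate * grad * x                # back-propagate into the weight
                b -= rate * grad                    # ...and into the bias

        print(round(sigmoid(w * 1.0 + b), 2))       # tends towards 1.0

    The fashionable systems are this, scaled up and stacked; whether scale buys anything qualitatively new is precisely what's in dispute.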

    > The only reason I'd claim a brain isn't a computer is because I refuse to believe all brains have limitations, and formal systems have limitations.

    It's trivial to demonstrate that all brains have limitations, by looking for example at their... physical size. The brain doesn't even concern me per se so much as the relationship between the mind and the underlying reality, and the possibility of modelling this relationship as a computational process. As clearly stated, I think of computation in its formal sense of "effective computation" (otherwise, whoever claims that computation and thinking are equivalent should clearly specify their computational model of choice), and as shown in fundamental computer science courses, assuming they still teach those in universities, there are plenty of examples of mathematical functions that the human mind can process, yet a computer cannot, and provably so.
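    The standard such example being the halting problem. A sketch of Turing's diagonal argument, with a deliberately unimplementable stub standing in for the impossible decider:

        # Suppose a total function `halts` existed, deciding whether
        # program(arg) ever terminates:
        def halts(program, arg) -> bool:
            raise NotImplementedError("provably, no such total implementation exists")

        def diagonal(program):
            # Do the opposite of whatever `halts` predicts about
            # running `program` on itself.
            if halts(program, program):
                while True:
                    pass    # loop forever if predicted to halt
            return          # halt if predicted to loop

        # diagonal(diagonal) halts if and only if it doesn't: a
        # contradiction, hence `halts` cannot be written; the halting
        # function is not effectively computable.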

  6. [...] is almost over and I haven't written anything on this here Tar Pit of mine since February's latest intellectual wankery. After all, even the greatest wanker can only wank so much until he's spent; one's gotta spend time [...]

  7. [...] reviewing this1 and I must say, it makes for quite an intriguing read! especially in light of my recent ruminations on the matter. Only this time around I won't bother the reader with a fully annotated read, instead [...]

  8. [...] Despite the fashion, not quite everything is reducible to programming. ↩ [...]
