<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: The difference between thinking and computation</title>
	<atom:link href="http://thetarpit.org/2022/the-difference-between-thinking-and-computation/feed" rel="self" type="application/rss+xml" />
	<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation</link>
	<description>"Now I feel like I know less about what that blog is about than I did before."</description>
	<pubDate>Tue, 07 Apr 2026 22:39:34 +0000</pubDate>
	<generator>http://thetarpit.org</generator>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
		<item>
		<title>By: Artificial intelligence &#171; The Tar Pit</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-5102</link>
		<dc:creator>Artificial intelligence &#171; The Tar Pit</dc:creator>
		<pubDate>Sun, 26 May 2024 21:32:48 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-5102</guid>
		<description>[...] For what it's worth, I'm not naïve enough to believe that the current generative AI machinery is in any way "intelligent", that is, unless we attempt to twist the definition of intelligence to fit whatever ideosophical frameworks are fashionable today. In principle I don't reject the idea of machinery that is capable of expressing something which may be characterized as (intelligent) thought -- if anything humans, evolved, not built, as they are, provide an accurate example of precisely such machinery. But let's be clear on the fact that the so-called Large Language Models aren't it, for obvious reasons. [...]</description>
		<content:encoded><![CDATA[<p>[...] For what it's worth, I'm not naïve enough to believe that the current generative AI machinery is in any way "intelligent", that is, unless we attempt to twist the definition of intelligence to fit whatever ideosophical frameworks are fashionable today. In principle I don't reject the idea of machinery that is capable of expressing something which may be characterized as (intelligent) thought -- if anything humans, evolved, not built, as they are, provide an accurate example of precisely such machinery. But let's be clear on the fact that the so-called Large Language Models aren't it, for obvious reasons. [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: The Lost Tools of Learning, annotated &#171; The Tar Pit</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-4148</link>
		<dc:creator>The Lost Tools of Learning, annotated &#171; The Tar Pit</dc:creator>
		<pubDate>Sun, 07 May 2023 14:20:02 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-4148</guid>
		<description>[...] Despite the fashion, not quite everything is reducible to programming.&#160;&#8617; [...]</description>
		<content:encoded><![CDATA[<p>[...] Despite the fashion, not quite everything is reducible to programming.&#160;&#8617; [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Notes on Hofstadter's Coffeehouse Conversation &#171; The Tar Pit</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-2474</link>
		<dc:creator>Notes on Hofstadter's Coffeehouse Conversation &#171; The Tar Pit</dc:creator>
		<pubDate>Sat, 09 Jul 2022 13:22:26 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-2474</guid>
		<description>[...] reviewing this1 and I must say, it makes for quite an intriguing read! especially in light of my recent ruminations on the matter. Only this time around I won't bother the reader with a fully annotated read, instead [...]</description>
		<content:encoded><![CDATA[<p>[...] reviewing this1 and I must say, it makes for quite an intriguing read! especially in light of my recent ruminations on the matter. Only this time around I won't bother the reader with a fully annotated read, instead [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Spring cleaning; or: the less said the better, but still better than nothing &#171; The Tar Pit</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-1909</link>
		<dc:creator>Spring cleaning; or: the less said the better, but still better than nothing &#171; The Tar Pit</dc:creator>
		<pubDate>Thu, 31 Mar 2022 18:54:44 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-1909</guid>
		<description>[...] is almost over and I haven't written anything on this here Tar Pit of mine since February's latest intellectual wankery. After all, even the greatest wanker can only wank so much until he's spent; one's gotta spend time [...]</description>
		<content:encoded><![CDATA[<p>[...] is almost over and I haven't written anything on this here Tar Pit of mine since February's latest intellectual wankery. After all, even the greatest wanker can only wank so much until he's spent; one's gotta spend time [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: spyked</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-1763</link>
		<dc:creator>spyked</dc:creator>
		<pubDate>Sun, 06 Mar 2022 14:52:55 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-1763</guid>
		<description>If intelligence is indeed a form of pattern-recognition, then whatever equations are laid down on the paper shall have to model the underlying process of "learning guided by evolution", as explained by Chomsky in his rebuttals of the current statistical approaches to the field of "machine learning". After all, "machine learning" is but a glorified simplification of signal processing, control systems and so on and so forth, as is the discrete version of "algorithmic pattern matching". There is much more to learning than mere back-propagation and transfer functions, and the way the current practitioners in the field struggle with the results of their work (either through nonsensical "deep learning" or through manual feature crafting) is nothing short of ridiculous.

&lt;blockquote&gt;The only reason I'd claim a brain isn't a computer is because I refuse to believe all brains have limitations, and formal systems have limitations.&lt;/blockquote&gt;

It's trivial to demonstrate that all brains have limitations, by looking for example at their... physical size. The brain doesn't even concern me per se so much as the relationship between the mind and the underlying reality, and the possibility of modelling this relationship as a computational process. As clearly stated, I do think of computation in its formal sense of "effective computation" (otherwise, whoever claims that computation and thinking are equivalent should clearly specify their computational model of choice), and as shown in fundamental computer science courses, assuming they teach those in universities anymore, there are plenty of examples of mathematical functions that the human mind can process, yet a computer cannot, and provably so.</description>
		<content:encoded><![CDATA[<p>If intelligence is indeed a form of pattern-recognition, then whatever equations are laid down on the paper shall have to model the underlying process of "learning guided by evolution", as explained by Chomsky in his rebuttals of the current statistical approaches to the field of "machine learning". After all, "machine learning" is but a glorified simplification of signal processing, control systems and so on and so forth, as is the discrete version of "algorithmic pattern matching". There is much more to learning than mere back-propagation and transfer functions, and the way the current practitioners in the field struggle with the results of their work (either through nonsensical "deep learning" or through manual feature crafting) is nothing short of ridiculous.</p>
<blockquote><p>The only reason I'd claim a brain isn't a computer is because I refuse to believe all brains have limitations, and formal systems have limitations.</p></blockquote>
<p>It's trivial to demonstrate that all brains have limitations, by looking for example at their... physical size. The brain doesn't even concern me per se so much as the relationship between the mind and the underlying reality, and the possibility of modelling this relationship as a computational process. As clearly stated, I do think of computation in its formal sense of "effective computation" (otherwise, whoever claims that computation and thinking are equivalent should clearly specify their computational model of choice), and as shown in fundamental computer science courses, assuming they teach those in universities anymore, there are plenty of examples of mathematical functions that the human mind can process, yet a computer cannot, and provably so.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Verisimilitude</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-1760</link>
		<dc:creator>Verisimilitude</dc:creator>
		<pubDate>Sun, 06 Mar 2022 09:41:13 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-1760</guid>
		<description>I agree with the view of intelligence being pattern-recognition.  In this view, the woman clearly just recognized and applied a simpler pattern than the other two.  This is still computation, or something like it.

How many of our best thoughts aren't the result of constantly thinking about topics, and simply having some recognizer for nice ideas?  I've mulled over just a few things for years, but then I extend them, let them hit reality, and all the while I'm solving tiny problems therein with smaller ideas, and doing this without end leads me to recognize other patterns and ideas, which can grow beyond their purposes, all for the better.

The only reason I'd claim a brain isn't a computer is because I refuse to believe all brains have limitations, and formal systems have limitations.</description>
		<content:encoded><![CDATA[<p>I agree with the view of intelligence being pattern-recognition.  In this view, the woman clearly just recognized and applied a simpler pattern than the other two.  This is still computation, or something like it.</p>
<p>How many of our best thoughts aren't the result of constantly thinking about topics, and simply having some recognizer for nice ideas?  I've mulled over just a few things for years, but then I extend them, let them hit reality, and all the while I'm solving tiny problems therein with smaller ideas, and doing this without end leads me to recognize other patterns and ideas, which can grow beyond their purposes, all for the better.</p>
<p>The only reason I'd claim a brain isn't a computer is because I refuse to believe all brains have limitations, and formal systems have limitations.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: spyked</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-1690</link>
		<dc:creator>spyked</dc:creator>
		<pubDate>Mon, 21 Feb 2022 21:16:05 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-1690</guid>
		<description>I guess this could be an article standing on its own feet, but I will also leave here a few thoughts on the following idea:

&lt;blockquote&gt;Examining in what ways a problem can be solved is also an algorithmic kind of thinking, on a meta level; one takes note of the kind of problem presented, then chooses a suitable solution from a range of options.&lt;/blockquote&gt;

I for one will refrain from representing the processes involved in thinking this way, as in my experience many such solutions were found before becoming fully conscious of the problem, or otherwise they didn't involve any sort of algorithmic searching at all. I don't pretend to fully understand what went on there and I don't want to start throwing words around (in my opinion the postmodern West tries to make too much of e.g. the so-called mindfulness), I'm just saying that science has a poor understanding of what *causes* thoughts (or whether the cause-effect framework applies here at all) and "choosing a suitable solution" might be the convenient answer, not necessarily the correct one.</description>
		<content:encoded><![CDATA[<p>I guess this could be an article standing on its own feet, but I will also leave here a few thoughts on the following idea:</p>
<blockquote><p>Examining in what ways a problem can be solved is also an algorithmic kind of thinking, on a meta level; one takes note of the kind of problem presented, then chooses a suitable solution from a range of options.</p></blockquote>
<p>I for one will refrain from representing the processes involved in thinking this way, as in my experience many such solutions were found before becoming fully conscious of the problem, or otherwise they didn't involve any sort of algorithmic searching at all. I don't pretend to fully understand what went on there and I don't want to start throwing words around (in my opinion the postmodern West tries to make too much of e.g. the so-called mindfulness), I'm just saying that science has a poor understanding of what *causes* thoughts (or whether the cause-effect framework applies here at all) and "choosing a suitable solution" might be the convenient answer, not necessarily the correct one.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: spyked</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-1689</link>
		<dc:creator>spyked</dc:creator>
		<pubDate>Mon, 21 Feb 2022 20:56:29 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-1689</guid>
		<description>Let me attempt to disentangle things, hoping that I won't get them more entangled along the way.

&gt; Something about this line of reasoning rubs me the wrong way.

I guess much of the problem comes straight from the title: what is the nature of this "difference" that the author is speaking about? This sounds to me precisely like (and it is indeed a mockery of) "people being equal to one another", or in this particular case, of the fallacious fundaments of "artificial intelligence". Or otherwise, to put it in the words of &lt;a href="https://archive.ph/hPOKr#selection-95.1-98.0" rel="nofollow"&gt;old Dijkstra&lt;/a&gt;: "[...] whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim".

In other words, the article is challenging that very link between thinking and computation, yet it indeed brings very little to back the challenge up. I for one don't feel compelled to demonstrate that things stand one way or the other, I do *strongly believe* however that the "computational agent" is naught but a metaphor, same as the &lt;a href="http://thetarpit.org/2021/re-ivanovna-et-al-2020" rel="nofollow"&gt;linguistic one&lt;/a&gt;. I'm not saying it's not a useful metaphor either, just that its use is limited, perhaps more limited than the so-called scientists in the field are willing to admit. Sure, machines have sensors and actuators and everything in between, I just don't think that attempting to attribute properties such as "thinking" or "intelligence" (or even any sort of individuality) to them is anything but an old example of ye olde anthropomorphism.

Moving on, take

&gt; To my mind, computation is a (large-ish) *subset* of thinking

versus

&gt; So here's the kicker: most of life on Earth may actually be described as running simply on computation, with some episodic thinking that happens every now and then.

So which one is a subset of which? also keeping in mind that as you mentioned, life, like wetware (actually: precisely the other way around) "learns" from the environment, while silicon can at most emulate this adaptation process.

I suspect that the reasons behind this limitation have much to do with the limitations of the so-called "universal" Turing machine: much like there are mathematical functions that a Turing machine cannot compute (i.e. they are not effectively computable), the biochemistry and genetics in life are based upon processes which silicon computers cannot reproduce. Now, using the DNA itself as a computational device, that is indeed interesting, although I've not yet heard of a computational class to describe such processes. The field is still young, but who knows, maybe in the '30s or '40s, if we live by then...

Another problem with the computing we're familiar with is that it is but a model of how (certain) things work, and like all models, it is reductionist. Feynman attempted to apply the notion of computation in physics itself and that came to nothing, which yet again makes me skeptical of the attempts to apply the same notion when it comes to life, or just parts of it for that matter.

So to conclude, assuming that both computation and thinking are actual things in and of themselves, what I mean by "qualitative difference" above is that they belong to entirely distinct ontological categories and that it doesn't make much sense to try to find a relation between them in the first place.

&gt; Speaking of mentats, do not forget Paul Atreides exhibited such traits, and he was going to undergo the training, becoming a Duke-Mentat

Indeed! what I'm proposing is that it's his "dukeness" (his... "will to power", let's say, I don't have any closer philosophical proxy to reference) which allows him to think and make decisions; and his "mentatness" combined with drugs which allows him to see.</description>
		<content:encoded><![CDATA[<p>Let me attempt to disentangle things, hoping that I won't get them more entangled along the way.</p>
<p>> Something about this line of reasoning rubs me the wrong way.</p>
<p>I guess much of the problem comes straight from the title: what is the nature of this "difference" that the author is speaking about? This sounds to me precisely like (and it is indeed a mockery of) "people being equal to one another", or in this particular case, of the fallacious fundaments of "artificial intelligence". Or otherwise, to put it in the words of <a href="https://archive.ph/hPOKr#selection-95.1-98.0" rel="nofollow">old Dijkstra</a>: "[...] whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim".</p>
<p>In other words, the article is challenging that very link between thinking and computation, yet it indeed brings very little to back the challenge up. I for one don't feel compelled to demonstrate that things stand one way or the other, I do *strongly believe* however that the "computational agent" is naught but a metaphor, same as the <a href="http://thetarpit.org/2021/re-ivanovna-et-al-2020" rel="nofollow">linguistic one</a>. I'm not saying it's not a useful metaphor either, just that its use is limited, perhaps more limited than the so-called scientists in the field are willing to admit. Sure, machines have sensors and actuators and everything in between, I just don't think that attempting to attribute properties such as "thinking" or "intelligence" (or even any sort of individuality) to them is anything but an old example of ye olde anthropomorphism.</p>
<p>Moving on, take</p>
<p>> To my mind, computation is a (large-ish) *subset* of thinking</p>
<p>versus</p>
<p>> So here's the kicker: most of life on Earth may actually be described as running simply on computation, with some episodic thinking that happens every now and then.</p>
<p>So which one is a subset of which? also keeping in mind that as you mentioned, life, like wetware (actually: precisely the other way around) "learns" from the environment, while silicon can at most emulate this adaptation process.</p>
<p>I suspect that the reasons behind this limitation have much to do with the limitations of the so-called "universal" Turing machine: much like there are mathematical functions that a Turing machine cannot compute (i.e. they are not effectively computable), the biochemistry and genetics in life are based upon processes which silicon computers cannot reproduce. Now, using the DNA itself as a computational device, that is indeed interesting, although I've not yet heard of a computational class to describe such processes. The field is still young, but who knows, maybe in the '30s or '40s, if we live by then...</p>
<p>Another problem with the computing we're familiar with is that it is but a model of how (certain) things work, and like all models, it is reductionist. Feynman attempted to apply the notion of computation in physics itself and that came to nothing, which yet again makes me skeptical of the attempts to apply the same notion when it comes to life, or just parts of it for that matter.</p>
<p>So to conclude, assuming that both computation and thinking are actual things in and of themselves, what I mean by "qualitative difference" above is that they belong to entirely distinct ontological categories and that it doesn't make much sense to try to find a relation between them in the first place.</p>
<p>> Speaking of mentats, do not forget Paul Atreides exhibited such traits, and he was going to undergo the training, becoming a Duke-Mentat</p>
<p>Indeed! what I'm proposing is that it's his "dukeness" (his... "will to power", let's say, I don't have any closer philosophical proxy to reference) which allows him to think and make decisions; and his "mentatness" combined with drugs which allows him to see.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alex</title>
		<link>http://thetarpit.org/2022/the-difference-between-thinking-and-computation#comment-1688</link>
		<dc:creator>Alex</dc:creator>
		<pubDate>Mon, 21 Feb 2022 10:27:48 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=443#comment-1688</guid>
		<description>Something about this line of reasoning rubs me the wrong way. Let me see if I can actually put it into words:

&lt;blockquote&gt;thinking .. involves .. evaluating an object and drawing observations upon it; while computation involves applying an algorithm upon whatever one is faced with&lt;/blockquote&gt;

To my mind, computation is a (large-ish) *subset* of thinking; or, put it the other way, thinking is a superset of computation, exhibiting a lot of its traits/qualities.

What you call "evaluating an object" - where does it start? With taking in relevant information about said object, via hardware/wetware sensors? And then following various logic pathways, categorizing said information you acquired, und so weiter?

In your example re: geometry, the only salient difference is that the lady took the time to examine the objects of the problem, and that led her down a simpler computational path. Examining in what ways a problem can be solved is also an algorithmic kind of thinking, on a meta level; one takes note of the kind of problem presented, then chooses a suitable solution from a range of options.

What I will grant is that meta-analysis is computationally expensive and often goes unused. Few species possess the ability, and just a subset of the individuals possess the inclination. Nevertheless, since such analysis exists as an option, and can be approached programmatically, I posit that it largely falls in the "computing" category.

What I would actually mark as the difference between thinking and (nude, raw) computing is the ability to form new associations out of existing data and structures, new symbols and templates. A processor has a predefined set of instructions, a predefined set of symbols; on the next layer, software, by and large, has a predefined set of instructions. Wetware is remarkable in its ability to react to new stimuli, form a novel representation of those, and then immediately use this new thinking-template to inform new decisions.

So here's the kicker: most of life on Earth may actually be described as running simply on computation, with some episodic thinking that happens every now and then. Therefore, the very line between these two concepts is .. somewhat blurry.

n.b. Speaking of mentats, do not forget Paul Atreides exhibited such traits, and he was going to undergo the training, becoming a Duke-Mentat, combining both political power and vast processing power. Even "regular" mentats such as Thufir Hawat were shouldering a lot of decision-making, starting with House security. Again, I find the line between thinking and computing to be quite blurry in a mentat.</description>
		<content:encoded><![CDATA[<p>Something about this line of reasoning rubs me the wrong way. Let me see if I can actually put it into words:</p>
<blockquote><p>thinking .. involves .. evaluating an object and drawing observations upon it; while computation involves applying an algorithm upon whatever one is faced with</p></blockquote>
<p>To my mind, computation is a (large-ish) *subset* of thinking; or, put it the other way, thinking is a superset of computation, exhibiting a lot of its traits/qualities.</p>
<p>What you call "evaluating an object" - where does it start? With taking in relevant information about said object, via hardware/wetware sensors? And then following various logic pathways, categorizing said information you acquired, und so weiter?</p>
<p>In your example re: geometry, the only salient difference is that the lady took the time to examine the objects of the problem, and that led her down a simpler computational path. Examining in what ways a problem can be solved is also an algorithmic kind of thinking, on a meta level; one takes note of the kind of problem presented, then chooses a suitable solution from a range of options.</p>
<p>What I will grant is that meta-analysis is computationally expensive and often goes unused. Few species possess the ability, and just a subset of the individuals possess the inclination. Nevertheless, since such analysis exists as an option, and can be approached programmatically, I posit that it largely falls in the "computing" category.</p>
<p>What I would actually mark as the difference between thinking and (nude, raw) computing is the ability to form new associations out of existing data and structures, new symbols and templates. A processor has a predefined set of instructions, a predefined set of symbols; on the next layer, software, by and large, has a predefined set of instructions. Wetware is remarkable in its ability to react to new stimuli, form a novel representation of those, and then immediately use this new thinking-template to inform new decisions.</p>
<p>So here's the kicker: most of life on Earth may actually be described as running simply on computation, with some episodic thinking that happens every now and then. Therefore, the very line between these two concepts is .. somewhat blurry.</p>
<p>n.b. Speaking of mentats, do not forget Paul Atreides exhibited such traits, and he was going to undergo the training, becoming a Duke-Mentat, combining both political power and vast processing power. Even "regular" mentats such as Thufir Hawat were shouldering a lot of decision-making, starting with House security. Again, I find the line between thinking and computing to be quite blurry in a mentat.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
