<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Notes on Hofstadter's Coffeehouse Conversation</title>
	<atom:link href="http://thetarpit.org/2022/notes-on-hofstadters-coffeehouse-conversation/feed" rel="self" type="application/rss+xml" />
	<link>http://thetarpit.org/2022/notes-on-hofstadters-coffeehouse-conversation</link>
	<description>"Now I feel like I know less about what that blog is about than I did before."</description>
	<pubDate>Sun, 19 Apr 2026 16:14:55 +0000</pubDate>
	<generator>http://thetarpit.org</generator>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
		<item>
		<title>By: spyked</title>
		<link>http://thetarpit.org/2022/notes-on-hofstadters-coffeehouse-conversation#comment-2491</link>
		<dc:creator>spyked</dc:creator>
		<pubDate>Sat, 09 Jul 2022 20:31:52 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=457#comment-2491</guid>
		<description>&gt; Ergo Proxy managed to predict sexbots

In all fairness, Philip K. Dick managed to do this.</description>
		<content:encoded><![CDATA[<p>> Ergo Proxy managed to predict sexbots</p>
<p>In all fairness, Philip K. Dick managed to do this.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: spyked</title>
		<link>http://thetarpit.org/2022/notes-on-hofstadters-coffeehouse-conversation#comment-2490</link>
		<dc:creator>spyked</dc:creator>
		<pubDate>Sat, 09 Jul 2022 20:22:48 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=457#comment-2490</guid>
		<description>&gt; The question of whether these slaves "really think", whatever that means that day, will be of some interest to navel-gazers and robot rights activists, but at the end of the day, if they are human "enough", nobody will really care in practice

Speaking of which: Ergo Proxy managed to predict sexbots. There are a few niche activities that would greatly benefit (for certain values of "benefit" -- at least from an economic standpoint) from such so-called androids.

Indeed, there's a great deal of confusion among researchers when it comes to "intelligence" -- whether it's "artificial" or not makes little difference; why not "synthesized" versus "grown"? Perhaps an agent need not think, at least not in the sense of contemplation, in order to be intelligent (that's certainly true of most animals). From that point of view, I suppose that a web server is as intelligent as it gets in this day and age, only, of course, there's more where that came from.

&gt; whether humans themselves are already AIs, or hosts for AIs

Indeed, that's when I decided to reduce 'em all to mere &lt;a href="http://thetarpit.org/2022/spring-cleaning-or-the-less-said-the-better-but-still-better-than-nothing?b=d%20rather&#038;e=.#select" rel="nofollow"&gt;chatbots&lt;/a&gt;.

&gt; When real AI sentience arrives, the question will be less "does it have feelings" and more like "what the hell is happ--".

This is an extra take on your comparison between AIs and extraterrestrials, which brings us to the following point: what if the so-called "AI sentience" is already here as an emergent phenomenon that we've yet to understand? Perhaps, but just perhaps, it's yet another manifestation of that &lt;a href="http://thetarpit.org/2021/on-technology" rel="nofollow"&gt;religious impulse&lt;/a&gt;.</description>
		<content:encoded><![CDATA[<p>> The question of whether these slaves "really think", whatever that means that day, will be of some interest to navel-gazers and robot rights activists, but at the end of the day, if they are human "enough", nobody will really care in practice</p>
<p>Speaking of which: Ergo Proxy managed to predict sexbots. There are a few niche activities that would greatly benefit (for certain values of "benefit" -- at least from an economic standpoint) from such so-called androids.</p>
<p>Indeed, there's a great deal of confusion among researchers when it comes to "intelligence" -- whether it's "artificial" or not makes little difference; why not "synthesized" versus "grown"? Perhaps an agent need not think, at least not in the sense of contemplation, in order to be intelligent (that's certainly true of most animals). From that point of view, I suppose that a web server is as intelligent as it gets in this day and age, only, of course, there's more where that came from.</p>
<p>> whether humans themselves are already AIs, or hosts for AIs</p>
<p>Indeed, that's when I decided to reduce 'em all to mere <a href="http://thetarpit.org/2022/spring-cleaning-or-the-less-said-the-better-but-still-better-than-nothing?b=d%20rather&#038;e=.#select" rel="nofollow">chatbots</a>.</p>
<p>> When real AI sentience arrives, the question will be less "does it have feelings" and more like "what the hell is happ--".</p>
<p>This is an extra take on your comparison between AIs and extraterrestrials, which brings us to the following point: what if the so-called "AI sentience" is already here as an emergent phenomenon that we've yet to understand? Perhaps, but just perhaps, it's yet another manifestation of that <a href="http://thetarpit.org/2021/on-technology" rel="nofollow">religious impulse</a>.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Cel Mihanie</title>
		<link>http://thetarpit.org/2022/notes-on-hofstadters-coffeehouse-conversation#comment-2487</link>
		<dc:creator>Cel Mihanie</dc:creator>
		<pubDate>Sat, 09 Jul 2022 16:51:39 +0000</pubDate>
		<guid isPermaLink="false">http://thetarpit.org/?p=457#comment-2487</guid>
		<description>All the jibber-jabber about thinking AIs often makes me think that Lem was probably correct in the thesis which he presents in Solyaris: humans are incapable of handling true 'alien-ness' and are lying to themselves when they think they want to explore 'strange new worlds and civilizations'; what they are really looking for is a tweaked mirror of themselves, 'rubber forehead aliens' as the idiom goes.

As for extraterrestrials, so for AIs. If one looks at AIs in fiction (yes, it is fiction but it offers a good insight into what humans expect from sentient AI research), one is struck by how the vast majority of them are really just human characters with a metallic instead of fleshy substrate, and the extra abilities that go with their mechanical/digital nature. Ranging from the basically-indistinguishable-from-well-adjusted-humans (e.g. Bishop, Call, replicants), to quaint aspies (Data), to a wide variety of nutters, schemers, dictators and psychopaths (Durandal, HAL, VIKI etc etc), to just snarling vicious killers (AM), all of the AIs in our collective imaginations seem to be just perfectly normal human archetypes. Real humans can be even weirder, in fact.

The unfortunate implication of this observation is that when we are looking to develop sentient AIs, what we probably want, ultimately, is just better slaves. The question of whether these slaves "really think", whatever that means that day, will be of some interest to navel-gazers and robot rights activists, but at the end of the day, if they are human "enough", nobody will really care in practice. We only seem to care now because we're not there yet.

A question that even fewer people are comfortable asking is whether humans themselves are already AIs, or hosts for AIs. Remember, AI is about "artificial", not "computers". Whenever I am forced to interact with a member of (insert political party or ideology here), it is painfully obvious that much of their verbal and intellectual output consists of talking points programmed in by various leaders and manipulators. You can hear the gears turn. You can predict pretty much all they are going to say or do, every single detail of their alleged lives, with incredible accuracy. A real robot is much more unpredictable. Who is the real AI now?

I too believe that a thinking computer is inevitable, unless we are fortunate enough to be "returned to monke" by a merciful comet. I think, though, that it is more likely for true sentient AI to take an unexpected form. To dive into fiction again, I'm thinking of the WAU from SOMA. Alien, inscrutable, truly inhuman, more like a plant than a human voice from a terminal. When real AI sentience arrives, the question will be less "does it have feelings" and more like "what the hell is happ--".</description>
		<content:encoded><![CDATA[<p>All the jibber-jabber about thinking AIs often makes me think that Lem was probably correct in the thesis which he presents in Solyaris: humans are incapable of handling true 'alien-ness' and are lying to themselves when they think they want to explore 'strange new worlds and civilizations'; what they are really looking for is a tweaked mirror of themselves, 'rubber forehead aliens' as the idiom goes.</p>
<p>As for extraterrestrials, so for AIs. If one looks at AIs in fiction (yes, it is fiction but it offers a good insight into what humans expect from sentient AI research), one is struck by how the vast majority of them are really just human characters with a metallic instead of fleshy substrate, and the extra abilities that go with their mechanical/digital nature. Ranging from the basically-indistinguishable-from-well-adjusted-humans (e.g. Bishop, Call, replicants), to quaint aspies (Data), to a wide variety of nutters, schemers, dictators and psychopaths (Durandal, HAL, VIKI etc etc), to just snarling vicious killers (AM), all of the AIs in our collective imaginations seem to be just perfectly normal human archetypes. Real humans can be even weirder, in fact.</p>
<p>The unfortunate implication of this observation is that when we are looking to develop sentient AIs, what we probably want, ultimately, is just better slaves. The question of whether these slaves "really think", whatever that means that day, will be of some interest to navel-gazers and robot rights activists, but at the end of the day, if they are human "enough", nobody will really care in practice. We only seem to care now because we're not there yet.</p>
<p>A question that even fewer people are comfortable asking is whether humans themselves are already AIs, or hosts for AIs. Remember, AI is about "artificial", not "computers". Whenever I am forced to interact with a member of (insert political party or ideology here), it is painfully obvious that much of their verbal and intellectual output consists of talking points programmed in by various leaders and manipulators. You can hear the gears turn. You can predict pretty much all they are going to say or do, every single detail of their alleged lives, with incredible accuracy. A real robot is much more unpredictable. Who is the real AI now?</p>
<p>I too believe that a thinking computer is inevitable, unless we are fortunate enough to be "returned to monke" by a merciful comet. I think, though, that it is more likely for true sentient AI to take an unexpected form. To dive into fiction again, I'm thinking of the WAU from SOMA. Alien, inscrutable, truly inhuman, more like a plant than a human voice from a terminal. When real AI sentience arrives, the question will be less "does it have feelings" and more like "what the hell is happ--".</p>
]]></content:encoded>
	</item>
</channel>
</rss>
