After numerous discussions on the subject of AI as it is understood today, it occurs to me that so far no one has really sat down to seriously talk about the so-called "human-computer interaction" aspect of this whole deal, and, of course, how this particular aspect influences changes in the sociopolitical landscape of what was once known as the "civilized" world. Now, "human-computer interaction" is a funky term from the (so-called) computing sciences, but what I'm really interested in is the psychosocial implications of the ubiquitous inclusion of such a thing as AI in people's lives.
Although I may not be very knowledgeable in the field, I am quite aware of past "intellectual" forms of expression on the subject. I was there when the "Google is making you stupid" derpage came out; it's on the same level of reasoning as "weed is getting you baked", as if there's no freedom of will on the part of the subject who does these things. I was also there when Jonze's Her, or the Wachowskis' Matrix, for that matter, came out, and I am thoroughly unimpressed with the notion that AI can manipulate people; yeah, and so can reality TV, and... so what? Sure, the common man lives in a constructed society, that's a fair premise; sure, he interacts with technology all day long, that's also fair. But... what of it? No one seriously cares about the dramas of incels who fall in love with AI.
I will further simplify this by intentionally conflating the terms "AI" and "technology", because for all practical intents and purposes they mean the same thing in this context. Say, have you ever interacted with an ATM or a self-checkout machine? On the one hand, you do notice how horrible these are, right? But on the other, how could you tell whether they are built using an LLM or not? And how is this relevant? It's all a piece of machinery that you're trying to use to achieve some result. Do you really think it would make any difference if the self-checkout machine were to talk sass at you instead of throwing some dumb message and plainly "refusing" to work?
I don't know whether anyone's ever made this observation -- I'm supposing that someone has -- but did you ever notice how various quirks in these automated machines tend to make you build... automatisms? After repeated interactions with the damned self-checkout machine, you'll immediately know that the pressure sensor of the scale is off, so you'll find some means to make it work so that it stays out of your way as much as possible. And you'll develop that automatism just as, say, you develop muscle memory when you're typing, or just as you learn to "remember" to hold a crooked door while trying to lock it. And granted, some of these automatisms are useful, in the sense that they make you work through some particular tasks, such as the ones mentioned above, more efficiently, that is, faster. As Kahneman would say, as time goes by, the interaction tends to involve more System 1 thinking than System 2.
Now, the problem with tech, as I've said two paragraphs ago, is that most technology has meanwhile been reduced to computers, and nowadays most computing tends to be reduced to a fucking chatbox. So I guess this process of human automatization[1] works the same with LLMs, only here's the really perverse thing: AI strives really hard to emulate human language. Let's understand each other: yes, ChatGPT is able to spit out sequences of characters which are virtually indistinguishable from talking; and moreover, you are able to derive some meaning from said sequences of characters, by virtue of the whole thing being based on humongous amounts of training data. However, those sequences are very pointedly not talking. Sure, you may not know whether some spambot on ye olde Facebook holds any actual flesh behind it or not, but this really isn't the point; the point is that the output of that spambot is not talking. And for that matter, neither is parroting articles from the press; and for that matter, neither is the press itself.
To take but one example: say you asked ChatGPT how to cook pasta al dente, and after a few back-and-forths you got a recipe that fit your needs. That's great, and I bet ChatGPT can even crack some jokes along the way; but in its very essence, how is this "conversation" of yours any different from looking up the same thing ("how to cook pasta al dente") on Google and reading an article that describes the same process? Sure, ChatGPT can whip out the same thing in a custom, distilled form; but how are the two processes distinguishable in essence[2]? I'll wager that they aren't; and that at most you're fooling yourself that you're talking to a "human-like" entity, and that at the extreme you're the same dude from Jonze's movie, fooling yourself to the very end[3]. This point may be lost on the more autistic among my readers, but: there's really very little semblance of humanity in interacting with an LLM, and even that semblance is fake.
So here's how I think this is playing out -- and my observation is derived from direct experience with the parties involved: the human mind, in its race towards efficiency (particularly through unification), and fooled as it stands by LLMs, will ultimately tend not only to confuse them with humans, but also to end up confusing humans with ChatGPTs. This brings Heidegger's observation on the "standing-reserve" to its final conclusion: i.e. that, in the words of one great fictional character, humans are a resource, and thereby they shall be used for all that they can deliver as responses to demands -- yes, demands, not questions -- and inasmuch as they can't... well, that's how you get slavery at best and genocide at worst. Isn't that what those poisonous pop-influencers at Davos keep telling you?
As both a summary and a conclusion to the whole HCI angle: technology is raising the bar in terms of economic efficiency, and as it does so, the more mediocre among us will end up violently drowned in it.
-

[1] I'm fairly sure Derrida has a name for this, only... what can I do, I'm a peasant! "Omu' cât trăiește învață" -- one learns for as long as one lives, like they say over here in the colonies.

[2] "But Lucian", you'll ask, "how is the same conversation with my mom distinguishable from a ChatGPT chat?" That's a great question, and one obvious answer is that your mom is a distinct person, while ChatGPT isn't. But the more interesting part is that your mom may engage the subject in ways that you haven't expected, such as by asking how come you've started taking an interest in cooking all of a sudden. That's the thing: human conversations aren't "rational" in the sense contemplated by LLM engineering.

Let me ask you this: ChatGPT may be able to generate the previous paragraph, but would it be able to turn it into a "your mom" joke without your "asking" it explicitly? Well... precisely!

[3] Furthermore, you're fooled by the shamans of transhumanism into believing that there is anything authentically human in that interaction.