For what it's worth, I'm not naïve enough to believe that the current generative AI machinery is in any way "intelligent" -- that is, unless we twist the definition of intelligence to fit whatever ideosophical frameworks are fashionable today. In principle I don't reject the idea of machinery capable of expressing something which may be characterized as (intelligent) thought -- if anything, humans, evolved rather than built as they are, provide an accurate example of precisely such machinery. But let's be clear on the fact that the so-called Large Language Models aren't it, for obvious reasons.
Still, throughout history we're often stuck for a long while in metaphorical swamps, so since everyone and their dog calls the current thing "AI", I'm left with no option but to adopt the same, if only for the sake of brevity. So whenever you read the word AI in the following paragraphs, note that I'm referring to LLMs or whatever the fuck technologies they're using nowadays for knowledge representation...
... especially since a major problem of the field of artificial intelligence since its inception has consisted of precisely this question: how can one model (using computers) and (thus) mechanize knowledge building and retrieval? That's how you got symbolic programming and the ELIZA experiment on one hand, while on the other you got all the wads of system software and "apps" to make your interaction with machines somewhat bearable. In other words: once we all got freed from the shackles of physical labour, we started thinking about how in the world we could fit intellectual labour into the same framework.
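To make that first branch concrete: the whole ELIZA trick was symbolic pattern matching, knowledge hardcoded as rules. A toy sketch in Python -- the rules below are made up for illustration, not Weizenbaum's actual script:

```python
# a toy ELIZA-style responder: the "knowledge" lives entirely in
# hand-written symbolic rules; these rules are made up for illustration
import re

RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i want (.*)", "What would {0} change, really?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*m.groups())
    return "Please go on."  # the default, when no rule matches

print(respond("I am tired of search engines"))
# -> Why do you say you are tired of search engines?
```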
So then I will admit that AI, while far from being intelligent, is (gasp!) somewhat useful. It is useful first and foremost as a very imprecise tool for searching for links -- links in the more general sense of relations between words, not necessarily URLs. If this sounds familiar, it's because it is. This is for example how you get stuff such as this piece:
If you've ever typed "air purifier reviews" into Google, you were probably looking for the kind of content you'll find on HouseFresh.com. The site was started in 2020 by Gisele Navarro and her husband, based on a decade of experience writing about indoor air quality products. They filled their basement with purifiers, running rigorous science-based tests and writing articles to help consumers sort through marketing hype.
HouseFresh is an example of what has been a flourishing industry of independent publishers producing exactly the sort of original content Google says it wants to promote. And indeed, soon after the website's launch, the tech giant started showing HouseFresh at the top of search results. The website grew into a thriving business with 15 full-time employees. Navarro had big plans for the future.
Then, in September 2023, Google made one in a series of major updates to the algorithm that runs its search engine.
"It decimated us," Navarro says. "Suddenly the search terms that used to bring up HouseFresh were sending people to big lifestyle magazines that clearly don't even test the products. The articles are full of information that I know is wrong."
And so on and so forth. In short: Google, while thoroughly lacking in technological prowess [1], went all-in strategically by integrating their AI into their search engine. For what it's worth, I'm not sure what else they could have done to remain on the market, since that's what ChatGPT does too: it's trained using some public or private repositories and the result of this training is a graph that acts as a database for retrieving... data. Just like Google's "search engine" [2], LLMs will provide structured answers to queries, in a format that a human can understand. They use up a hell of a lot of processing power just to tackle the "a human can understand" angle of the problem; but whatever, it works for whatever arbitrary values of "works" you may deem fit.

For example, I know a shitload of things but sometimes I'm too lazy to look up the details myself, so I let some LLM do it for me. I can easily verify that information, but this act of summarization saves me a lot of time spent searching for specifics, even when the program gets it wrong -- especially then, because it gives me an insight into its limitations, which I can overcome the usual way, i.e. by rephrasing the question.
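For illustration, here's roughly what such a lazy lookup amounts to -- a minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name and the question are mine, purely for the example:

```python
# a minimal sketch of letting an LLM dig up the specifics for you;
# assumes the openai package and OPENAI_API_KEY set in the environment
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever they sell this week
    messages=[{
        "role": "user",
        "content": "When did Google ship its 'helpful content' ranking "
                   "update, and what did it change?",
    }],
)
print(resp.choices[0].message.content)
# ...then go verify the dates and claims yourself, as per the above.
```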
Sure, sometimes this won't bring me anywhere -- which brings me to the second part of this article.

I suspect that the author of the BBC article hints at a different issue: what do you do about the information that you can't verify? That is indeed a tricky question: how does one defend against ideology and nonsensical propaganda in the age of decay through AI? A straight answer would be that you can't, since the whole thing was designed for propagating nonsense from the beginning. But I'm sorry to break it to you: your ability to reliably retrieve information was impaired sometime around 2020 and it will likely remain thusly impaired until the current empire crumbles to pieces, and maybe for some time after.
The more complicated answer is that if you want to use this kind of thing, then you're stuck getting the data, curating it yourself and training your own model. That last part is, in my opinion, the easiest of the three; the whole point is that had you had the raw data to curate, you wouldn't need all the AI mumbo-jumbo in the first place.
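As for the shape of that last part, a minimal sketch, assuming the Hugging Face transformers and datasets libraries and a hypothetical directory of curated text files (corpus/*.txt is made up for the example); note how everything hard lives in producing that directory:

```python
# a minimal "train your own model" sketch over a curated text corpus;
# assumes transformers + datasets installed and corpus/*.txt (hypothetical)
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 ships without a padding token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# the part nobody can do for you: your own curated data
ds = load_dataset("text", data_files={"train": "corpus/*.txt"})

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=512)

ds = ds.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # the "easy" part, given the data
```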
To sum this up, AI is indeed useful, but it's only useful after the fact. For example I asked some GPT to generate a Romanian trap song about ancient Dacia, all the while employing the words "varză", "barză", "viezure" and "mânz" [3]. The results were naïvely nonsensical, yet the resulting "poetry" denoted some historical background on the matter, e.g. the association between Dacians and wolves. This ain't much, it's merely a weight on the edge of a graph, but I wouldn't expect e.g. a 4th grader to get this association nowadays. So if ChatGPT is at the level of a 7th grader, it's definitely somewhere, and if not through technological advancement, then merely by virtue of the failure of education it'll soon have more PhDs than you could get in a lifetime.
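Lest "a weight on the edge of a graph" sound like hand-waving, here's the toy version of the mechanism -- bigram counts over a made-up one-line corpus, where an association is literally nothing more than a heavier outgoing edge:

```python
# word association as edge weights in a graph: a toy bigram model;
# the "corpus" is made up for the sake of the example
from collections import defaultdict

corpus = "dacians revered the wolf and the wolf led dacians into battle".split()

edges = defaultdict(int)  # (word, next_word) -> weight
for a, b in zip(corpus, corpus[1:]):
    edges[(a, b)] += 1

def strongest(word):
    """The heaviest outgoing edge of a node, i.e. the 'association'."""
    out = {b: w for (a, b), w in edges.items() if a == word}
    return max(out, key=out.get) if out else None

print(strongest("the"))  # -> 'wolf', weight 2: that's the whole 'insight'
```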
The problem with AI is that it's totalitarian in a much deeper sense than previously anticipated. The end of AI lies in its very nature: once there's no more data to consume, it will dreadfully choke on its own vomit.
[1] Nowadays they sell an operating system along with some "applications", all written by their developers, but none of them working as they oughta. For example, try uploading photos to Google Drive from an Android phone and you'll find yourself in a world of pain: supposedly the upload process runs in the background, so you can do your thing while Google does its thing; only the damned upload service provides no sort of progress notification, nor any other notification when their thing fails at doing their thing.

Anyways, last time I ran into this, I had attempted to gather the Boga photo series taken by a group of five people. So instead of bothering with Google shitapps, I had everyone download the photos on their PCs -- yet another ordeal, but by now what can I even say -- and scp them somewhere.

Seriously, how are you still using smartphones for any serious work? Just throw them all out the window already.
[2] Remember when search engines were a thing? And when Zuck came and fucked up Google's game by moving a large part of the content inside a walled garden? Oh, no question about it, Google has been through this before.
[3] Something similar was previously attempted in the realm of black metal music by a band called Negură Bunget. To be honest I haven't listened to it, mainly because black metal lyrics are utterly incomprehensible. If I'm to read their lyrics from a booklet, then I might as well skip the music, which I don't enjoy all that much anyway.
So today I was watching some random internet dood discussing arbitrary "features" of the AI sort, such as: the user draws a doodle of a cat and AI makes it into a cubist "artwork"... anyway, I was watching that and it struck me: AI is really the religious pantsuit dream, because it can spawn arbitrary artifacts out of anything that you can possibly conceive!
For better or worse, I for one think that this particular kind of magic is going to stay with us for a while.
The problem with AI is that while it's very efficient at giving solutions to problems that have been solved before, it's absolutely terrible at tackling any sort of new problem.
Caveat emptor.
[...] at the heart of something called "inductive" reasoning. It is through no coincidence that this is what the AI folks are trying to achieve: build a program that is a sort of tabula rasa, then feed it with a [...]