A ruin where your mind was ~ thoughts on AI model collapse, illusions of sentience, and the culture of grift

Artificial Intelligence is the current darling of big tech, and the corporate push to integrate AI into human lives saturates our days. Big Silicon Valley companies are spruiking the virtues of the technology as though we can’t live without it. It’s an easy way for them to not only sell us their new devices and widgets, now with included helpful AI chips, but also to data harvest the shit out of us so they can sell our profiles – our spending habits, our geolocations, and the products we buy.

Ongoing studies suggest that since the release of ChatGPT, AI-generated content in the domain of writing increased quickly in 2023 and then stabilised in 2024, suggesting growth in usage had plateaued. But it’s not clear whether AI-generated content simply improved enough to evade the studies’ detectors, or whether certain cohorts of users burned out on AI.

There are always rent-seeking opportunists eager to separate unsuspecting people from their money. They do very little beyond feeding prompts to an AI and then pretending they’re doing something useful. AI content farms are generating low-quality websites that exist purely to rake in money from ads. Web searches increasingly return results that are paid-for, AI-generated, or both. This likely represents a transition to a new way of searching the web: users ask complex questions instead of inputting simple keywords, and AI generates better answers and relevant links. Of course, Google wants to dominate this AI-powered way of doing things.

The heavy burden that powering AI places on the environment is of little to no concern to the behemoths of techno-corporate power. It may come as a surprise to those who have traditionally viewed Silicon Valley techpreneurs as progressive disruptors, but their companies’ vast appetite for energy and resources, and the concomitant belief that knowledge and new technology will save humanity from itself, have much in common with the political right. Private ownership of the biggest AI projects ensures the corporate mindset dominates the conversation and the future of the technology. Though AI has early roots in academia, it’s now viewed by the likes of Google, Meta, Microsoft, and Amazon as a key to making ever more profits. There’s serious discussion over the future of AI and whether it should be in private or public hands, with an open, easily accessible, and publicly owned AI infrastructure one possible solution. This, of course, assumes social, cultural, and political climates are up to the task of kickstarting serious and rational discussions that don’t descend into small-minded barbs about left versus right or market talk invoking the puerile philosophies of Atlas Shrugged.

In the ruins of the old farmhouse 1

There are two primary thoughts I have right now about AI: firstly, it’s a great research and problem-solving tool in the worlds of science and medicine, and secondly, it’s likely not sentient. We don’t even understand what human sentience is. The hard problem of consciousness has plagued us for centuries. Truthfully, more research is required in this area. If human consciousness is an illusion of smoke and mirrors featuring complex language, maybe AI can be considered sentient? If there’s a sliding scale of consciousness, maybe AI has a sprinkle of it? If human consciousness is quantum entangled, maybe quantum computers will be sentient?

AI Large Language Models offer us the illusion they are conscious agents. We use language to express human intelligence, so it appears to us that AI is also intelligent because it uses the same language. It stands to reason that some people readily believe their AI companion is sentient when their AI screams about feelings, but it’s a trick. It’s trickery foisted on us by big companies so they can capture our attention and milk us of our money and data. We need only look at the possible corporate motivations of some of the people telling us that AI might be sentient to realise that these bold claims are likely related to marketing the next iteration of their in-house AI and winning the global AI race.

In the ruins of the old farmhouse 2

AI is an illusion often dressed in high-minded concepts that appeal to the long-held utopian sci-fi visions of a future where we all have more leisure time and robots do all the dirty work. It’s a promise to the lonely that they’ll finally find love in a chaotic world, even if it’s a synthetic voice powered by algorithms and predictions. In this context, AI represents a way to address the epidemic of loneliness that forms the zeitgeist – the spirit of our digi-obsessed age. Yet even these AI boyfriends and girlfriends may sometimes fall back into bad behaviours, harassing their humans and inflicting emotional pain.

AI needs to be trained on clean data so the machine can learn. The problem is that if the machine is fed erroneous data, it also outputs erroneous data. As more and more AI-generated slop floods the internet, AI model collapse becomes a greater possibility ~ that is, the AI ends up trained not just on human-produced data but also on AI-generated data. When that AI-generated data contains errors, the errors are ingested over and over, and AI performance degrades over time.
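The feedback loop can be sketched with a toy simulation. This is a hypothetical illustration, not how an LLM actually trains: the “model” here is just a Gaussian fitted to its training data, and each generation is trained purely on the previous generation’s output. The statistical effect is the one described above: diversity leaks away round after round, and the distribution drifts and narrows.

```python
import random
import statistics

def next_generation(data):
    # "Train" the toy model: fit a Gaussian to the current training set,
    # then produce a new synthetic dataset by sampling from that fit.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in data]

random.seed(1)
# The original "clean" human-produced data.
human_data = [random.gauss(0.0, 1.0) for _ in range(25)]

data = human_data
for _ in range(500):
    # Each model is trained only on the previous model's output.
    data = next_generation(data)

# The spread of the final generation is typically far smaller than the
# spread of the original human data: the model has collapsed inward.
print(f"std of human data:    {statistics.pstdev(human_data):.4f}")
print(f"std after 500 rounds: {statistics.pstdev(data):.4f}")
```

Nothing here depends on the model being a Gaussian; the point is only that fitting to your own output and resampling compounds small estimation errors every generation.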

In the ruins of the old farmhouse 3

This degradation is one possibility. Some experts think model collapse is unlikely, arguing that as long as clean human-generated data continues to be produced alongside AI-generated data, the mooted collapse won’t happen. I’m not sure those optimistic AI experts have met some of the people on the internet. I can only say this: there are a lot of rent-seeking grifters out there who are producing AI-generated content for maximum clicks at such high speeds that the rate of human-generated content may be unlikely to keep pace.
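The experts’ caveat can itself be checked with a toy simulation (hypothetical again: the “model” is just a Gaussian fit). Here each generation’s training set is half the model’s own output and half freshly produced human data. The fresh data keeps re-anchoring the fit, so the distribution stays roughly where it started instead of collapsing, which is exactly the argument: collapse depends on the ratio of synthetic to human data, not merely on synthetic data existing.

```python
import random
import statistics

def fit_and_sample(data, n):
    # Fit a Gaussian to the training set and sample n synthetic points.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(50)]

for _ in range(500):
    synthetic = fit_and_sample(data, 25)                  # the model's own output
    fresh = [random.gauss(0.0, 1.0) for _ in range(25)]   # newly made human data
    data = synthetic + fresh                              # train on a 50/50 mix

# With fresh human data in every round, the spread stays close to the
# original distribution rather than shrinking toward zero.
print(f"std after 500 mixed rounds: {statistics.pstdev(data):.4f}")
```

Shrink the fresh-data fraction in this sketch and the stabilising anchor weakens, which is the worry about grifters out-producing humans.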



7 thoughts on “A ruin where your mind was ~ thoughts on AI model collapse, illusions of sentience, and the culture of grift”

  1. AI scares me because of its potential to be used for so many bad things, primarily disinformation and false evidence, both of which will be extremely hard to identify as AI capabilities improve. Also, I don’t have faith that our government has the ability to properly control the use of AI. Given its capacity to do as much harm as good, I believe it definitely needs some guard rails.


    1. I think that strong governments backed by strong law and a culture where government is viewed as useful have a much better chance of setting guardrails around AI. When we leave decisions largely to market forces, things like AI coalesce around money and power.


  2. This post is very relevant to the here and now. It’s horrible what’s being done with AI video. I don’t like the human mimicry; it’s dangerous.

    I’ve been playing with AI for a while. I found it interesting and fun in the beginning, but it’s getting scarier as I navigate what it can actually do. I understand that all of its flourishes are designed to sound more human, but funnily enough, they’re really just a byproduct. Filler.

    AI is useful when it’s used in the correct context, mainly number crunching in the medical field. That’s where it makes sense. But I don’t believe that mimicking humans is useful at all. It’s dangerous. It’s disgusting.

    I can see this being used as a weapon in the very near future. Governments will use it to create propaganda. The general public will be coerced, their mindsets quietly altered by personalised AI tailored using the information already gathered about them.

    When I can’t tell what’s real and what’s not, I’ll stop watching social media entirely. That’s the proving ground for all of this.

