The Rose Burrow

I cannot stop thinking about this stupid video, so I'm going to share my thoughts on it.

So, you may remember, a few weeks ago, a youtuber released a video talking about AI and how, actually, it's not that bad and if you hate it, you're just a rube.

That's not actually his thesis; it's more complex than that, to be sure. But I think many of the things he wants to argue, whether he argues them in good faith or not, take this tone. Particularly when he discusses the problems of the discourse around art production and AI image generation. He starts this section with a fairly well-known independent artist aghast at a Stable Diffusion model being used to make a comic that looks like hers. He then bloviates a great deal about how an artist seeking recompense for this is serving as little more than a patsy, making big companies like Disney look more reasonable in their efforts to curb AI image generation as it relates to copyright.

Now, I don't think he's saying this in bad faith, but the sinister music and the way he frames some of these things--for example, a sarcastic faux-shock aside about art being commodified--really don't endear him to his rhetorical opponents and make him seem like a jerk, to put it one way. At best it's a "you just don't understand what you're doing," and I think that's unfair. I think it's unfair to say that someone looking to protect their image and brand, someone who sees their means of making ends meet threatened in the capitalist hellscape he acknowledges we live in, is too greedy or stupid to realize these things, especially because I don't think he really attends to that reality. Ultimately, he speaks as though we're just on the edge of the communist utopia, and if we could only let things like copyright go, we'd be there. He's not making that argument, but holy shit, that's the tone. He also engages, again perhaps unwittingly, in a sort of "well, there's no ethical consumption under capitalism, so is AI really that bad?" argument (props to Revie from the friend discord server for laying this out so perfectly). It's all sort of condescending, to be honest.

But there's also the first part of his video, where he takes up the argument that "AI doesn't do what it promises" and then proceeds to spend an hour talking about reactionary humanist philosophy and Derrida. He is at pains to say, at least twice, that this is not philosophical wankery, but he is wrong. Put most simply, whether or not we count an LLM as human (it's not, but whatever) is immaterial to the fact that it is frequently WRONG. Errors in logic and reasoning from Large Language Models are super common. As a result, AI also doesn't make tasks more efficient--it makes them less so. If I use the AI, I have to waste my fucking time going over every line to make sure it didn't say something like "Robin Hood was a known pervert in the year 1492." No, he wasn't, and any human can figure that out without much work or thinking. But if I use AI and it spits out some bullshit like that, it is wasting my god damn time. It would've been more efficient and useful to just write the paper myself, which I do. Similarly, as a teacher, I have to spend time combing over every word of my students' essays, trying to determine if they actually did the work or if they took the lazy way out. Literally three or four years ago, I didn't have to do that shit. I could read an essay and know with certainty whether they'd written it themselves; now I can't. Now, I have to worry that they've tried to use ChatGPT or what the fuck ever. I'd love to believe my students are honest, but I've been teaching for ten years, so I know better.

He also talks about things that AI could do, but with the exception of things like protein folding--highly specialized, purpose-built machine-learning systems, not chatbots--none of them are convincing. Yes, these narrow, highly specific uses are super cool and can allow us to do impressive things, but that's not what the fuck an LLM is, and that's not what anyone is mad at. Simply put, nothing he comes up with as a use case for LLMs is beyond the scope of human ability. Nor do I believe that these things would actually help anyone learn anything. In fact, they tend to do the opposite: because you are not doing the thinking yourself, your critical thinking skills weaken. If you don't use it, you lose it. Ask anybody who's lifted weights and they'll tell you that. It is the same here. Additionally, LLMs are governed by the people creating them, and most of those people are enormous corporations, which means the information you can learn from a given LLM is deeply biased. Elon Musk's idiot AI is currently at war with itself because he wants it to confirm the "white genocide in South Africa" conspiracy theory, something at odds with the facts of reality it supposedly wants to give you. Nothing like this is addressed at all. Hopefully it will be in part 2.

And these are real issues I've brought up without having to talk about whether or not the technology counts as human. It being or not being human is immaterial to the fact that we don't know how it works, how much power it consumes, or how to make the ridiculous errors stop happening, something that Sam shithead Altman says is just "part of the magic." Nor do we have adequate guardrails in place to stop many of these tools from producing porn of people that freaks on the internet want to see naked. None of which this youtuber addresses, because he just sort of handwaves it with "you can't blame Midjourney for bad actors." Idk man, I kind of can. This is a complicated issue, but simply put: if I gave a guy a hammer and he then killed another guy with it, I'm at least a little responsible for handing over the hammer, especially if I had some idea of what he planned to do with it. In fact, I'm pretty sure you could argue I'm an accessory to murder!

Now, I will finish by addressing a few things. First of all, this is, as he says, the first in a two-part series, so perhaps we're being hasty in our criticisms of him. Why he would release the first video as a response to AI skepticism, knowing that it is likely to make many people in and out of his audience angry with him and will undoubtedly spark controversy around him, is a question only he can answer. I have my thoughts, but it does seem weird, generously speaking. Regardless, I don't think his second part will end up exonerating him in the public eye. But hey, sometimes it's nice to be wrong.

I will also say his discussion of the environmental impact is actually something we need to talk about. The frustrating truth is that we don't really know how much these servers cost to run, environmentally speaking. OpenAI, Anthropic--none of them release that data. And that sucks. So best estimates are just that: estimates. It's also worth noting that the oft-cited initial estimate, which puts ChatGPT at something like 3 Wh per query, is based on math using a much older, less efficient model. That said, as the recent release of DeepSeek made clear, these jokers apparently aren't even that interested in being efficient, which seems, uh, bad. So the truth is, we have no real idea of the environmental impacts, which makes me furious, tbh. Additionally, energy companies make money by estimating future usage, so it is in the best interest of power companies, and of those invested in them, to overestimate how much electricity AI will need. That also fucking sucks. These are symptoms of larger problems, which he rightly mentions.
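
Just to show why that per-query number matters, here's the napkin math. To be clear: the query volume below is a number I made up purely for illustration, and the per-query figure is that same dated estimate, so treat this as a sketch of how the estimate scales, not as real data.

```python
# Napkin math: how a per-query energy estimate scales up.
# Both inputs are assumptions, not reported figures.
wh_per_query = 3                   # the dated ~3 Wh/query estimate mentioned above
queries_per_day = 1_000_000_000    # hypothetical daily query volume

daily_mwh = wh_per_query * queries_per_day / 1_000_000   # Wh -> MWh per day
yearly_gwh = daily_mwh * 365 / 1_000                      # MWh/day -> GWh per year
print(f"~{daily_mwh:,.0f} MWh/day, ~{yearly_gwh:,.0f} GWh/year")
```

Swap in a different per-query figure or query volume and the totals swing wildly, which is exactly the problem: the headline numbers depend entirely on estimates the companies won't confirm.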

Idk, this video has lowkey haunted me since it came out, and I needed to exorcise it by putting these thoughts out there for anyone who isn't interested in three hours of questionable argument about AI skepticism--argument which frames that skepticism as either being a rube, kindly, or having your head in the sand, more rudely. I am neither of those things. I am tired of seeing LLMs shoved into everything. I am tired of people defending it. If you are one of those people, see yourself out; I will not engage with you.