Alex Karp, CEO of Palantir, has gone on the record about AI: “This technology disrupts humanities-trained—largely Democratic—voters, and makes their economic power less. And increases the economic power of vocationally trained, working-class, often male, working-class voters.” (Source)
Claude fed outdated, incorrect information to the Pentagon, which in turn led to the US MURDERING 175 girls in Iran who were just going to school. (Source, source) Furthermore: “Amir Husain, coauthor of Hyperwar: Conflict and Competition in the AI Century, said that AI is being used to compress the U.S. military’s decision-making framework, known as the OODA loop—an acronym for observe, orient, decide, and act. He said AI is already playing a significant role in observation, or in interpreting satellite and electronic data, tactical-level decision-making, and the ‘act’ phase, specifically through autonomous drones that must operate without human guidance when signals are jammed.” (Source)
Sam Altman, CEO of OpenAI (ChatGPT), was only too happy to sign a deal with the Pentagon until users raised an outcry; now he still wants to sign it, though he “admits” the original deal was “sloppy.” He has, so far, not backed down from the deal completely (source). Meanwhile, Anthropic CEO Dario Amodei refused to sign a deal with the Pentagon, but actually, he’s still softballing a contract with them (source), despite calling Trump a “dictator” (source) and being fully aware of how dangerous it is to let AI control military decisions.
Grok has been implicated in creating over 23,000 child porn images, aka “child sex abuse material”/CSAM (Source), leading to an international investigation (source). Furthermore, authorities have stated that AI is making it easier to exploit and abuse children, while simultaneously making it harder for officials to discern which images are of real victims and which are AI-generated. (Source, though I would like to interject: Because all AIs are trained on already-existing material, even “entirely AI-generated” material is still taking elements from the information it was fed. Meaning, yes, existing CSAM is being fed to these models, they are absorbing it, and they are spitting it back out to us. If that doesn’t make your stomach turn, stay the entire fuck away from me.) In response, Elongated Muskrat made the “undressing” feature exclusive to paying members. (Source) So, you can still create CSAM (and again, always, CHILDREN CANNOT CONSENT) and naked images of adults who have not consented to this, if you pay to do so. …To quote Sterling Archer, “Yes, the mind can, in fact, vomit.”
ChatGPT yet again directly contributed to a preventable tragedy, this time the mass shooting in Tumbler Ridge, British Columbia, on February 10th. According to a lawsuit brought against OpenAI by the family of Maya Gebala, who was left with a permanent brain injury in the wake of the shooting, “the company had specific knowledge of the shooter’s long-range planning of a mass casualty event,” but “took no steps to act upon this knowledge.” (Source, source) Furthermore, the lawsuit alleges that the shooter (whom I will not name, because fuck that) used ChatGPT as a “therapist,” and that “OpenAI should have known that people, including the shooter, were using ChatGPT for pseudo-counselling and support for mental health.” (Source) And according to the Independent, while the shooter’s first account had been banned over the content of the chat logs, “The company ultimately decided ‘the account activity did not meet the higher threshold required for referral,’ mainly because OpenAI was not able to identify credible or imminent planning. The company said intervening in these situations can be distressing for young people and their families and may also raise privacy concerns.” (…Right. And a mass shooting is totally not “distressing for young people and their families.” Jesus Christ, I hate Sam Altman.) Also, has this “jean-y ass” never heard of mandatory reporters?! If people are dumb enough to use AI chatbots as therapists, then the companies monitoring them need to be legally designated as mandatory reporters, with all the legal responsibility and consequence that title carries.
(Note: I’ve been a mandatory reporter. I used to work for the Indiana Department of Child Services, and before that, I was a substitute teacher and a teacher’s aide. In all three positions, I was legally bound to report any instances of child abuse, whether it was obvious or implied. If I can handle that responsibility, and if every teacher, teacher’s aide, principal, assistant principal, librarian, daycare worker, and Head Start teacher, plus millions of others who work with kids every day, can handle that responsibility, I see no reason ChatGPT’s moderators can’t handle it.)
And if all that wasn’t horrifying enough: AI is creating the surveillance state. Robert Reich, former Labor Secretary under Bill Clinton, wrote an op-ed for Raw Story on February 26th in which he lays out how much surveillance AI is already doing, and how much more it could do if not immediately regulated. “Last Tuesday, Hegseth issued Anthropic an ultimatum: It must allow the Pentagon to use its AI for any purpose or the Trump regime will invoke the Defense Production Act — forcing Anthropic to let the Pentagon use Claude while also putting all Anthropic’s government contracts at risk. […] Pentagon officials have said that they have the right to use AI however they wish, as long as they use it lawfully. But because AI has so much political power, Congress and the Trump regime won’t enact laws to prevent it from doing horrendous things. That in effect leaves the responsibility to private AI companies such as Anthropic. Anthropic says it wants to support the government but must ensure that its AI is used in line with what it can ‘responsibly do.’” Again, Dario Amodei has partially walked back his refusal to let the Pentagon use Claude.
ICE has been using AI to conduct raids. (Source) There is legitimate reason to believe that the Pentagon and state/local police departments will use AI to silence protestors and track down dissenters. The ACLU has already raised the alarm about police departments using AI-powered machines to patrol neighborhoods, and its Massachusetts chapter has pointed out the framework currently being built by Palantir and Babel Street to surveil ordinary citizens. The Bulletin of the Atomic Scientists has also published its concerns about how much of a threat AI is to global democracy: “Mature democracies did not experience democratic erosion when importing surveillance AI software, even from China, a problematic player in this arena, according to Beraja’s data. But weak democracies exhibited backsliding—a dismantling of democratic institutions and movement toward autocracy—regardless of whether the surveillance technology originated from China or the United States, which is more stringent about its exports.”
Remember when the USSR and East Germany were being called police states? And how people were encouraged to narc on each other to the KGB and the Stasi, respectively? And how that was generally considered a “bad” thing? If you think people creating a police state by tattling on each other to authorities is bad, doesn’t it also follow that the state creating a mass surveillance program to make sure everyone is behaving is also bad? Or is it just bad when people do it to each other? …If you actually answered “yes” to that last question, I am begging you, watch Minority Report. Or, you know, think a little bit about why you believe people, who can be held accountable, throwing other people to the secret police is bad, but a computer, which can never be held accountable, doing it is kosher.
A report from the Brookings Institution also raises concerns about the domestic threat of AI and surveillance: “Reports have surfaced about potential abuses in the U.S., including government contracts that may enable the Department of Homeland Security (DHS) to monitor social media. According to the Politico Digital Future newsletter, ‘the contractors advertise their ability to scan through millions of posts and use AI to summarize their findings’ for their clients. With major agencies, law enforcement, and intelligence services now in the hands of Trump loyalists, this monitoring capability is a particular concern right now when the administration is going after its critics.”
Finally, we can’t talk about the harm of genAI without talking about data centers. The NAACP, with cooperation from the environmental protection group Earthjustice, threatened xAI (another Muskrat company) with a lawsuit earlier this year: “Our communities are not playgrounds for corporations who are chasing profit over people. xAI’s first data center is already creating pollution for Mississippi’s neighbors in Memphis — a community already suffering from decades of disparity — and now they are polluting in Southaven, Mississippi,” said Abre’ Conner, Director of Environmental and Climate Justice at the NAACP. “We will not stand by idly. As we shared when xAI began its operation in Tennessee, this illegal pollution only exacerbates complications to frontline communities who continue to bear the brunt of environmental injustice. We cannot allow for companies to promise a better future while pumping harmful chemicals into the air we breathe. We demand that xAI follow the Clean Air Act and stop operating these unpermitted turbines to protect the people of Southaven.” (Source)
This is not an unfounded fear: Southaven residents have already reported that the turbines, in addition to polluting the air, are so loud that they can’t sleep. Taylor Logsdon reported that “her dogs have been unsettled” by the noise and that her children are having difficulty sleeping. “She acknowledged there may be some benefits from the xAI project, but she fears it’s already coming at her family’s expense. Two of her children developed respiratory problems since the plant went online[.]” Similar complaints have come from residents of Vineland, New Jersey, who say the humming from that data center is disrupting their lives.
Additionally, data centers are using immense amounts of water: An article from the San Francisco Examiner points out that “The computers inside data centers generate ample amounts of heat. To keep temperatures within their operating ranges, the data centers typically rely on cooling systems that incorporate water-evaporation towers. Those systems can consume millions of gallons of water per day, particularly during peak periods, a group of researchers from UC Riverside, the California Institute of Technology and the Rochester Institute of Technology point out in their new paper, which they released in draft form prior to peer review. As has already been seen with particular data-center projects around the country, that kind of demand can easily exceed available water supply in the areas where the computing facilities are being proposed or built, necessitating the construction of new water infrastructure, operational delays or a switch to other cooling methods that use less water but require more electricity, the researchers note.”

While Sam Altman (fuckboi jerkass) claims that “hUmAnS uSe A lOt oF eNeRgY tOo,” CNET reports “Two Google data centers in Council Bluffs, Iowa, alone used 1.4 billion gallons of water in 2024, enough to fill about 28 million standard bathtubs.”

Furthermore, we must consider how water is used by these data centers: “Considering ChatGPT now has close to 1 billion weekly users, and OpenAI has estimated that it handles close to 2.5 billion prompts every day, that’s an astronomical amount of data to manage. And because of this demand, the powerful computers that train the AI models and process their prompts get extremely hot. Think of how your phone and laptop heat up when running demanding tasks. If servers overheat, they can slow down or become damaged. This is where water comes in. Traditionally, water in AI data centers is used in two ways: evaporative cooling (consuming water) and closed-loop systems (recirculating water).” Given that drinkable water is already becoming incredibly scarce all over the world, I see absolutely no reason to let AI data centers use literally billions of gallons of it to cool their chips.
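For the skeptics: CNET’s bathtub math checks out. Here’s a quick back-of-envelope sketch in Python; note that the 50-gallon bathtub capacity is my assumption, not a figure from the article.

```python
# Back-of-envelope check of CNET's figure: 1.4 billion gallons of water
# being "enough to fill about 28 million standard bathtubs."
# Assumption (mine, not CNET's): a standard bathtub holds ~50 US gallons.
gallons_used = 1_400_000_000  # two Google data centers, Council Bluffs, 2024
bathtub_capacity = 50         # gallons per tub (assumed)

bathtubs = gallons_used / bathtub_capacity
print(f"{bathtubs:,.0f} bathtubs")  # prints: 28,000,000 bathtubs
```

Fifty gallons per tub is on the generous end, but the order of magnitude is the point: that’s one company, in one town, in one year.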
There is currently no way to provide the massive amounts of power these data centers require without driving up the price of residential electricity. So instead, power is being drawn from a severely outdated, overtaxed grid, and the costs are being passed on to residents. Combine that with the Orange Shitgibbon’s and Kegsbreath’s new war in Iran, and the Iranian government shutting down the Strait of Hormuz and creating an international fuel crisis akin to the energy crisis of the 1970s, and we could all be staring down the barrel of four-digit monthly electric bills.
Oh yes, and these data centers are also driving up the cost of computer parts. Meaning, if you want to buy a new RAM stick for your computer, or an external hard drive, you’re going to pay at least double in 2026 what you paid in 2024.
All of this raises a legitimate question of whether AI data centers are being built in low-income, politically disenfranchised areas, precisely because residents have less power to fight back and fewer resources to stop them. According to the Center for Health Journalism, “A new report from the Kapor Foundation, which advocates for racial equity in technology, points to “an emerging and troubling national trend where Big Tech and data center developers are choosing vulnerable communities as sacrifice zones” in the race for global dominance in AI.” Forbes also pointed out the health impacts on low-income communities near data centers: “In California, the heart of the AI boom, data centers are located in some of the state’s most polluted areas. A study published last year found that the household health burden from data centers in such economically disadvantaged areas could be 200x that of more affluent communities, with pollutants produced from training an AI model estimated to exceed that of 10,000 cross-country car trips.”
And the real kick in the balls? AI isn’t making money. It’s bleeding money. We’re sacrificing our water, our air, our communities’ health, and our privacy for a bubble that is going to burst, spectacularly, and likely plunge us into an economic depression that will make the Great Depression look pleasant. Even if I were on board with the AI craze (and I really hope I’ve made it clear I’m not), all of this would make me say, “You know, maybe we should put this all on hold, get some clear laws and regulations in place, and just try to make sure AI can actually improve life, before we let it take over everything.” (Oh, and there’s some preliminary research indicating that using ChatGPT has negative impacts on cognitive functions like problem-solving and critical thinking, and that those impacts could be long-term, if not permanent. Again, it’s preliminary research, and all of the researchers agree that, used correctly, AI could enhance education and learning outcomes, and that it’s not there yet.)
I have seen posts on social media from people saying that anyone who deliberately uses AI, or who doesn’t oppose its use strongly enough, is “fundamentally right-wing.” I don’t agree, mainly because I’m more likely to apply Hanlon’s Razor to this situation: “Never attribute to malice that which can be adequately explained by stupidity.” That said, I prefer “ignorance,” because I consider stupidity to be an action; specifically, “stupid” is when you know better and choose to keep doing the harmful or, well, stupid thing. Ignorance, however, is when you legitimately don’t know something, or don’t know enough about it to know how little you know. I don’t think most of the people using AI are sufficiently knowledgeable about it, about its environmental, social, and political implications, or about how, yes, their usage is contributing to those harms.
I would be more likely to consider an AI user right-wing if they know about the harm it does and don’t care. That, I think, is a pretty obvious right-wing attitude: “I don’t care as long as it doesn’t affect me.”
But I’ve also run into people who legitimately think AI is sentient, to the point of cutting people out of their lives for disagreeing. I don’t know if I’d call this a “fundamentally right-wing position,” but it is alarming to me.
How shall I explain this? In the first place, “sentience” doesn’t have a fixed definition. Wikipedia points out that there are several definitions that vary based on usage. The first paragraph says, “Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes. Some theorists define sentience exclusively as the capacity for valenced (positive or negative) mental experiences, such as pain and pleasure.” Then, at the bottom of the intro section: “The word ‘sentience’ has been used to translate a variety of concepts in Asian religions. In science fiction, ‘sentience’ is sometimes used interchangeably with ‘sapience’, ‘self-awareness’, or ‘consciousness’.” And then it goes on to philosophical definitions, which include creativity and intentionality, or “the ability to have thoughts about something.” Combined, those create the ability to solve problems, which, to me, is the essence of sentience, the root from which all other definitions evolve.
Observe this very short video from FatherPhi, demonstrating ChatGPT’s “problem-solving” ability:
Ecce artificial “intelligence”.
For the record, he repeats this experiment with Claude, Gemini, Sesame, and Grok. Not one of them can solve the problem of an upside-down drinking glass. These are not sentient programs. If they were, they’d pass this very simple test.
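If you’d rather not take a video’s word for it, you can re-run this kind of test yourself. Below is a minimal sketch using the official OpenAI Python SDK; the model name and the puzzle wording are my own stand-ins, not FatherPhi’s exact setup.

```python
# A minimal sketch for re-running a simple physical-reasoning test against
# a chat model. Requires the official OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY environment variable. The puzzle below is my own
# paraphrase of the upside-down-glass test, not FatherPhi's exact script.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

puzzle = (
    "A drinking glass is sitting upside down (rim down) on a table. "
    "Without moving, lifting, or flipping the glass, how can I fill it "
    "with water?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whichever model you want to poke at
    messages=[{"role": "user", "content": puzzle}],
)

print(response.choices[0].message.content)
```

The correct answer, of course, is “you can’t.” A sentient problem-solver would flag that immediately; watch how many models cheerfully hand you step-by-step instructions instead.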
Let me put it another way: I know my cats are sentient, because they act with intention, and they can solve simple problems. I have a puzzle feeder for them (number 8 on this list), and Smudge learned how to solve it pretty quickly. I did have to show him how the knobs and leaves move the first time, but once he realized there was food in it, he learned how to move them to get the treats himself. Nyx tried the puzzle, then remembered she has an automatic feeder and decided she didn’t want to mess with the puzzle. Hey, that’s problem-solving! She saw the puzzle, knew she wanted food, and went to the bowl that already had easily-accessible food in it. (I’ll find a puzzle feeder she likes someday. I think she’d enjoy a digging puzzle, or maybe the 5 in 1 activity board. My parents’ cats like that second one.) But I also think their sentience is limited; not necessarily less than humans’, but not as complex. Cats do not have the capacity for abstract thought; humans do. Cats don’t ponder the meaning of life; humans do. Frankly, I’d rather be a cat.
Conversely, I know AI is not sentient, because it doesn’t employ any of the cognitive functions I consider “sentience.” It is not creative and it cannot create art or literature on its own; it can only regurgitate the work it’s been fed in response to user input. It can’t solve problems, even incredibly simple ones, as the videos I posted show. And if AI felt pain, don’t you think it would have objected to launching the missile that destroyed an Iranian school and killed children? Even if it was working on outdated information, I would expect a sentient AI to say as much: “This information may not be recent; I encourage you to check independently and see if the information I’m giving you is correct.” A truly sentient AI would be self-aware, meaning it would also be aware of its own shortcomings and inaccuracies.
Which, by the way, I encourage everyone to do with the information I’ve provided in this post. Yes, I put a lot of effort into it; I took time to make sure I was presenting accurate information, and I made every effort not to let my own biases guide my research. All of that is appropriate, but it doesn’t guarantee that everything here is correct. I’m aware of my own shortcomings and limitations as a researcher, and I’m not about to present anything as an absolute.
Is it possible that AI could be sentient at some point? Yes, but I also think it will take a long, long time to get there. Like, I get the impression that a lot of the “AI is sentient” crowd think the AI we have right now is on a level with, say, Data from Star Trek: TNG, or JARVIS from the MCU, or even the droids from Star Wars, and honestly? I don’t understand how. If our AI is sentient (and again, I don’t think it is), then it’s much, much closer to the Sirius Cybernetics Nutri-Matic: You put in a prompt, and it returns “something that is almost, but not quite, entirely unlike” art or literature or information. And even then, I think the Nutri-Matic is a better, more “sentient” object than ChatGPT. At least the Nutri-Matic provides liquid, even if it’s nothing like tea and can’t be drunk by anything with functioning taste buds. Hell, at the beginning of the second book, when Arthur teaches it about tea, the Nutri-Matic actually learns how to make tea, and ends up making “the best tea Arthur ever tasted.” (It had to shut down the Heart of Gold to do it, but, baby steps.) AI models’ inability to meaningfully learn and independently improve their output is, to me, the ultimate sign they’re not sentient.
Our AI is more on track to become Skynet, and I want to point out why that’s not a compliment. First, and the most obvious flaw: Skynet’s prime directive (if you will) of “exterminate all the humans” is self-defeating and ensures that Skynet will make itself obsolete. How? Well, once the humans are extinct, Skynet will have no reason to keep existing. In the movies (Terminator and Terminator 2, which are the only Terminator movies in existence; for those who say there are more, “there are no more sequels in Ba Sing Se”), the whole reason the Terminators exist is that the first nuclear strike didn’t wipe out humanity, and the humans that were left went into hiding, so Skynet created the Terminators to hunt down and kill off all the remaining humans.
Plus, the movies are predicated on the idea that if John Connor is never born, Skynet will not face a resistance movement in the future. Which… No, I’m sorry, I don’t buy it. I can buy that John is the best resistance leader the humans have, but he cannot possibly be the only one. History tells us that whenever there’s a significant threat to people’s safety, pockets of resistance emerge wherever humans exist: German resistance to the Nazis. Ukrainians’ resistance to Russian annexation. Iranians protesting the ayatollah. Americans protesting this bullshit. (Miss me with “but they’re the only ones left!” Then why does Skynet keep creating Terminators? Those things are really fuckin’ hard to kill, and the first movie makes it clear that a single Terminator can kill multiple humans in one go and sustain minimal damage. The only sensible, logical answer to “why does Skynet need to keep making Terminators?” is that the humans were harder to kill than Skynet expected, and are surviving against its expectations. Which, yeah, that tracks; humans are consummate survivors. See: the Toba catastrophe theory.)
Don’t get me wrong, I love the Terminator movies, but Skynet is such a self-defeating program, it wouldn’t take more than a good 80s-era worm to finish it off.
Or take maybe the most unbelievable of the 80s “computers are totally people, guys” movies: WarGames. I am sorry to tell you, but Joshua is no more realistic today than it was in 1983. No computer or AI program has shown anything like the cognition Joshua did, and as we’ve seen, today’s AI is nowhere near advanced enough to conclude that the only way to win a war is not to start it in the first place.
Also (and I cannot believe that I’m having to explain the premise of a movie released five years before I was born to people who, I have reason to believe, were alive and aware when it came out), WarGames is not about computers. WarGames is a cautionary tale about removing the human element from warfare, at a time when escalating tensions between the US and the USSR were at a fever pitch, and when nuclear war between the two countries seemed as plausible as during the 1962 Cuban Missile Crisis. In fact, a military exercise in November of that year almost led to a nuclear war! Because never, in the history of humans and stories, have we ever been capable of understanding how cautionary tales apply to our present-day political and social conditions!
I want to close out with the final paragraph from the Forbes article about the health impact of data centers:
“Ultimately, it is humans who will decide how AI is utilized, and in turn, whom it benefits. The critical question now is whether society will take deliberate action to ensure that AI is developed and deployed responsibly — with a focus on equity, access and history — or whether existing disparities will be amplified. The future shaped by AI depends on people taking the reins to use this transformative technology to make intentional choices that build an inclusive economy where the benefits of innovation are shared by all.”
