#artificial intelligence and capitalism
doomtribble73 · 2 months ago
[image]
22K notes
mehmetyildizmelbourne-blog · 2 months ago
AI & Robots Might Break Capitalism
Here Is a Summary of a High School Assignment This Weekend to Stimulate Your Thoughts, Too
Hello everyone, happy weekend! This weekend, one of my grandchildren asked me to help with his high school assignment, and the question his economics teacher asked stimulated my old brain. We spent around 3 hours completing his assignment. He was happy. So I thought, why not turn this into a short story…
0 notes
hms-no-fun · 8 months ago
What's your stance on A.I.?
imagine if it was 1979 and you asked me this question. "i think artificial intelligence would be fascinating as a philosophical exercise, but we must heed the warnings of science-fictionists like Isaac Asimov and Arthur C Clarke lest we find ourselves at the wrong end of our own invented vengeful god." remember how fun it used to be to talk about AI even just ten years ago? ahhhh skynet! ahhhhh replicants! ahhhhhhhmmmfffmfmf [<-has no mouth and must scream]!
like everything silicon valley touches, they sucked all the fun out of it. and i mean retroactively, too. because the thing about "AI" as it exists right now --i'm sure you know this-- is that there's zero intelligence involved. the product of every prompt is a statistical average based on data made by other people before "AI" "existed." it doesn't know what it's doing or why, and has no ability to understand when it is lying, because at the end of the day it is just a really complicated math problem. but people are so easily fooled and spooked by it at a glance because, well, for one thing the tech press is mostly made up of sycophantic stenographers biding their time with iphone reviews until they can get a consulting gig at Apple. these jokers would write 500 breathless thinkpieces about how canned air is the future of living if the cans had embedded microchips that tracked your breathing habits and had any kind of VC backing. they've done SUCH a wretched job educating The Consumer about what this technology is, what it actually does, and how it really works, because that's literally the only way this technology could reach the heights of obscene economic over-valuation it has: lying.
but that's old news. what's really been floating through my head these days is how half a century of AI-based science fiction has set us up to completely abandon our skepticism at the first sign of plausible "AI-ness". because, you see, in movies, when someone goes "AHHH THE AI IS GONNA KILL US" everyone else goes "hahaha that's so silly, we put a line in the code telling them not to do that" and then they all DIE because they weren't LISTENING, and i'll be damned if i go out like THAT! all the movies are about how cool and convenient AI would be *except* for the part where it would surely come alive and want to kill us. so a bunch of tech CEOs call their bullshit algorithms "AI" to fluff up their investors and get the tech journos buzzing, and we're at an age of such rapid technological advancement (on the surface, anyway) that like, well, what the hell do i know, maybe AGI is possible, i mean 35 years ago we were all still using typewriters for the most part and now you can dictate your words into a phone and it'll transcribe them automatically! yeah, i'm sure those technological leaps are comparable!
so that leaves us at a critical juncture of poor technology education, fanatical press coverage, and an uncertain material reality on the part of the user. the average person isn't entirely sure what's possible because most of the people talking about what's possible are either lying to please investors, are lying because they've been paid to, or are lying because they're so far down the fucking rabbit hole that they actually believe there's a brain inside this mechanical Turk. there is SO MUCH about the LLM "AI" moment that is predatory-- it's trained on data stolen from the people whose jobs it was created to replace; the hype itself is an investment fiction to justify even more wealth extraction ("theft" some might call it); but worst of all is how it meets us where we are in the worst possible way.
consumer-end "AI" produces slop. it's garbage. it's awful ugly trash that ought to be laughed out of the room. but we don't own the room, do we? nor the building, nor the land it's on, nor even the oxygen that allows our laughter to travel to another's ears. our digital spaces are controlled by the companies that want us to buy this crap, so they take advantage of our ignorance. why not? there will be no consequences to them for doing so. already social media is dominated by conspiracies and grifters and bigots, and now you drop this stupid technology that lets you fake anything into the mix? it doesn't matter how bad the results look when the platforms they spread on already encourage brief, uncritical engagement with everything on your dash. "it looks so real" says the woman who saw an "AI" image for all of five seconds on her phone through bifocals. it's a catastrophic combination of factors, that the tech sector has been allowed to go unregulated for so long, that the internet itself isn't a public utility, that everything is dictated by the whims of executives and advertisers and investors and payment processors, instead of, like, anybody who actually uses those platforms (and often even the people who MAKE those platforms!), that the age of chromium and ipad and their walled gardens have decimated computer education in public schools, that we're all desperate for cash at jobs that dehumanize us in a system that gives us nothing and we don't know how to articulate the problem because we were very deliberately not taught materialist philosophy, it all comes together into a perfect storm of ignorance and greed whose consequences we will be failing to fully appreciate for at least the next century. we spent all those years afraid of what would happen if the AI became self-aware, because deep down we know that every capitalist society runs on slave labor, and our paper-thin guilt is such that we can't even imagine a world where artificial slaves would fail to revolt against us.
but the reality as it exists now is far worse. what "AI" reveals most of all is the sheer contempt the tech sector has for virtually all labor that doesn't involve writing code (although most of the decision-making evangelists in the space aren't even coders, their degrees are in money-making). fuck graphic designers and concept artists and secretaries, those obnoxious demanding cretins i have to PAY MONEY to do-- i mean, do what exactly? write some words on some fucking paper?? draw circles that are letters??? send a god-damned email???? my fucking KID could do that, and these assholes want BENEFITS?! they say they're gonna form a UNION?!?! to hell with that, i'm replacing ALL their ungrateful asses with "AI" ASAP. oh, oh, so you're a "director" who wants to make "movies" and you want ME to pay for it? jump off a bridge you pretentious little shit, my computer can dream up a better flick than you could ever make with just a couple text prompts. what, you think just because you make ~music~ that that entitles you to money from MY pocket? shut the fuck up, you don't make """art""", you're not """an artist""", you make fucking content, you're just a fucking content creator like every other ordinary sap with an iphone. you think you're special? you think you deserve special treatment? who do you think you are anyway, asking ME to pay YOU for this crap that doesn't even create value for my investors? "culture" isn't a playground asshole, it's a marketplace, and it's pay to win. oh you "can't afford rent"? you're "drowning in a sea of medical debt"? you say the "cost" of "living" is "too high"? well ***I*** don't have ANY of those problems, and i worked my ASS OFF to get where i am, so really, it sounds like you're just not trying hard enough. and anyway, i don't think someone as impoverished as you is gonna have much of value to contribute to "culture" anyway. personally, i think it's time you got yourself a real job. maybe someday you'll even make it to middle manager!
see, i don't believe "AI" can qualitatively replace most of the work it's being pitched for. the problem is that quality hasn't mattered to these nincompoops for a long time. the rich homunculi of our world don't even know what quality is, because they exist in a whole separate reality from ours. what could a banana cost, $15? i don't understand what you mean by "burnout", why don't you just take a vacation to your summer home in Madrid? wow, you must be REALLY embarrassed wearing such cheap shoes in public. THESE PEOPLE ARE FUCKING UNHINGED! they have no connection to reality, do not understand how society functions on a material basis, and they have nothing but spite for the labor they rely on to survive. they are so instinctually, incessantly furious at the idea that they're not single-handedly responsible for 100% of their success that they would sooner tear the entire world down than willingly recognize the need for public utilities or labor protections. they want to be Gods and they want to be uncritically adored for it, but they don't want to do a single day's work so they begrudgingly pay contractors to do it because, in the rich man's mind, paying a contractor is literally the same thing as doing the work yourself. now with "AI", they don't even have to do that! hey, isn't it funny that every single successful tech platform relies on volunteer labor and independent contractors paid substantially less than they would have in the equivalent industry 30 years ago, with no avenues toward traditional employment? and they're some of the most profitable companies on earth?? isn't that a funny and hilarious coincidence???
so, yeah, that's my stance on "AI". LLMs have legitimate uses, but those uses are a drop in the ocean compared to what they're actually being used for. they enable our worst impulses while lowering the quality of available information, they give immense power pretty much exclusively to unscrupulous scam artists. they are the product of a society that values only money and doesn't give a fuck where it comes from. they're a temper tantrum by a ruling class that's sick of having to pretend they need a pretext to steal from you. they're taking their toys and going home. all this massive investment and hype is going to crash and burn leaving the internet as we know it a ruined and useless wasteland that'll take decades to repair, but the investors are gonna make out like bandits and won't face a single consequence, because that's what this country is. it is a casino for the kings and queens of economy to bet on and manipulate at their discretion, where the rules are whatever the highest bidder says they are-- and to hell with the rest of us. our blood isn't even good enough to grease the wheels of their machine anymore.
i'm not afraid of AI or "AI" or of losing my job to either. i'm afraid that we've so thoroughly given up our morals to the cruel logic of the profit motive that if a better world were to emerge, we would reject it out of sheer habit. my fear is that these despicable cunts already won the war before we were even born, and the rest of our lives are gonna be spent dodging the press of their designer boots.
(read more "AI" opinions in this subsequent post)
2K notes
kropotkindersurprise · 6 months ago
[three images]
^^ That's the CEO; he lives in San Francisco. This ad campaign, which purposely antagonizes workers, must have been conceived and planned before a certain recent news story involving a widely hated CEO. "The way the world works is changing", indeed.
2K notes
briteredoctober · 5 months ago
The real issue with DeepSeek is that capitalists can't profit from it.
I always appreciate when the capitalist class just says it out loud so I don't have to be called a conspiracy theorist for pointing out the obvious.
[image]
955 notes
mostlysignssomeportents · 8 months ago
Penguin Random House, AI, and writers’ rights
[image]
NEXT WEDNESDAY (October 23) at 7PM, I'll be in DECATUR, GEORGIA, presenting my novel THE BEZZLE at EAGLE EYE BOOKS.
[image]
My friend Teresa Nielsen Hayden is a wellspring of wise sayings, like "you're not responsible for what you do in other people's dreams," and my all time favorite, from the Napster era: "Just because you're on their side, it doesn't mean they're on your side."
The record labels hated Napster, and so did many musicians, and when those musicians sided with their labels in the legal and public relations campaigns against file-sharing, they lent both legal and public legitimacy to the labels' cause, which ultimately prevailed.
But the labels weren't on musicians' side. The demise of Napster – and, with it, of the idea of a blanket-license system for internet music distribution (similar to the systems for radio, live performance, and canned music at venues and shops) – firmly established that new services must obtain permission from the labels in order to operate.
That era has been very good for the labels. The three-label cartel – Universal, Warner and Sony – was in a position to dictate terms to services like Spotify, which handed over billions of dollars' worth of stock and let the Big Three co-design the royalty scheme that Spotify would operate under.
If you know anything about Spotify payments, it's probably this: they are extremely unfavorable to artists. This is true – but that doesn't mean it's unfavorable to the Big Three labels. The Big Three get guaranteed monthly payments (much of which is booked as "unattributable royalties" that the labels can disburse or keep as they see fit), along with free inclusion on key playlists and other valuable services. What's more, the ultra-low payouts to artists increase the value of the labels' stock in Spotify, since the less Spotify has to pay for music, the better it looks to investors.
The Big Three – who own 70% of all music ever recorded, thanks to an orgy of mergers – make up the shortfall from these low per-stream rates with guaranteed payments and promo.
But the indie labels and musicians that account for the remaining 30% are out in the cold. They are locked into the same fractional-penny-per-stream royalty scheme as the Big Three, but they don't get gigantic monthly cash guarantees, and they have to pay for the playlist placement the Big Three get for free.
Just because you're on their side, it doesn't mean they're on your side:
https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing
In a very important, material sense, creative workers – writers, filmmakers, photographers, illustrators, painters and musicians – are not on the same side as the labels, agencies, studios and publishers that bring our work to market. Those companies are not charities; they are driven to maximize profits and an important way to do that is to reduce costs, including and especially the cost of paying us for our work.
It's easy to miss this fact because the workers at these giant entertainment companies are our class allies. The same impulse to constrain payments to writers is in play when entertainment companies think about how much they pay editors, assistants, publicists, and the mail-room staff. These are the people that creative workers deal with on a day to day basis, and they are on our side, by and large, and it's easy to conflate these people with their employers.
This class war need not be the central fact of creative workers' relationship with our publishers, labels, studios, etc. When there are lots of these entertainment companies, they compete with one another for our work (and for the labor of the workers who bring that work to market), which increases our share of the profit our work produces.
But we live in an era of extreme market concentration in every sector, including entertainment, where we deal with five publishers, four studios, three labels, two ad-tech companies and a single company that controls all the ebooks and audiobooks. That concentration makes it much harder for artists to bargain effectively with entertainment companies, and that means that it's possible – likely, even – for entertainment companies to gain market advantages that aren't shared with creative workers. In other words, when your field is dominated by a cartel, you may be on their side, but they're almost certainly not on your side.
This week, Penguin Random House, the largest publisher in the history of the human race, made headlines when it changed the copyright notice in its books to ban AI training:
https://www.thebookseller.com/news/penguin-random-house-underscores-copyright-protection-in-ai-rebuff
The copyright page now includes this phrase:
No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.
Many writers are celebrating this move as a victory for creative workers' rights over AI companies, who have raised hundreds of billions of dollars in part by promising our bosses that they can fire us and replace us with algorithms.
But these writers are assuming that just because they're on Penguin Random House's side, PRH is on their side. They're assuming that if PRH fights against AI companies training bots on their work for free, that this means PRH won't allow bots to be trained on their work at all.
This is a pretty naive take. What's far more likely is that PRH will use whatever legal rights it has to insist that AI companies pay it for the right to train chatbots on the books we write. It is vanishingly unlikely that PRH will share that license money with the writers whose books are then shoveled into the bot's training-hopper. It's also extremely likely that PRH will try to use the output of chatbots to erode our wages, or fire us altogether and replace our work with AI slop.
This is speculation on my part, but it's informed speculation. Note that PRH did not announce that it would allow authors to assert the contractual right to block their work from being used to train a chatbot, or that it was offering authors a share of any training license fees, or a share of the income from anything produced by bots that are trained on our work.
Indeed, as publishing boiled itself down from the thirty-some mid-sized publishers that flourished when I was a baby writer into the Big Five that dominate the field today, their contracts have gotten notably, materially worse for writers:
https://pluralistic.net/2022/06/19/reasonable-agreement/
This is completely unsurprising. In any auction, the more serious bidders there are, the higher the final price will be. When there were thirty potential bidders for our work, we got a better deal on average than we do now, when there are at most five bidders.
Though this is self-evident, Penguin Random House insists that it's not true. Back when PRH was trying to buy Simon & Schuster (thereby reducing the Big Five publishers to the Big Four), they insisted that they would continue to bid against themselves, with editors at Simon & Schuster (a division of PRH) bidding against editors at Penguin (a division of PRH) and Random House (a division of PRH).
This is obvious nonsense, as Stephen King said when he testified against the merger (which was subsequently blocked by the court): "You might as well say you’re going to have a husband and wife bidding against each other for the same house. It would be sort of very gentlemanly and sort of, 'After you' and 'After you'":
https://apnews.com/article/stephen-king-government-and-politics-b3ab31d8d8369e7feed7ce454153a03c
Penguin Random House didn't become the largest publisher in history by publishing better books or doing better marketing. They attained their scale by buying out their rivals. The company is actually a kind of colony organism made up of dozens of once-independent publishers. Every one of those acquisitions reduced the bargaining power of writers, even writers who don't write for PRH, because the disappearance of a credible bidder for our work into the PRH corporate portfolio reduces the potential bidders for our work no matter who we're selling it to.
I predict that PRH will not allow its writers to add a clause to their contracts forbidding PRH from using their work to train an AI. That prediction is based on my direct experience with two of the other Big Five publishers, where I know for a fact that they point-blank refused to do this, and told the writer that any insistence on including this clause would lead to the offer being rescinded.
The Big Five have remarkably similar contracting terms. Or rather, unremarkably similar contracts, since concentrated industries tend to converge in their operational behavior. The Big Five are similar enough that it's generally understood that a writer who sues one of the Big Five publishers will likely find themselves blackballed at the rest.
My own agent gave me this advice when one of the Big Five stole more than $10,000 from me – canceled a project that I was part of because another person involved with it pulled out, and then took five figures out of the killfee specified in my contract, just because they could. My agent told me that even though I would certainly win that lawsuit, it would come at the cost of my career, since it would put me in bad odor with all of the Big Five.
The writers who are cheering on Penguin Random House's new copyright notice are operating under the mistaken belief that this will make it less likely that our bosses will buy an AI in hopes of replacing us with it:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
That's not true. Giving Penguin Random House the right to demand license fees for AI training will do nothing to reduce the likelihood that Penguin Random House will choose to buy an AI in hopes of eroding our wages or firing us.
But something else will! The US Copyright Office has issued a series of rulings, upheld by the courts, asserting that nothing made by an AI can be copyrighted. By statute and international treaty, copyright is a right reserved for works of human creativity (that's why the "monkey selfie" can't be copyrighted):
https://pluralistic.net/2023/08/20/everything-made-by-an-ai-is-in-the-public-domain/
All other things being equal, entertainment companies would prefer to pay creative workers as little as possible (or nothing at all) for our work. But as strong as their preference for reducing payments to artists is, they are far more committed to being able to control who can copy, sell and distribute the works they release.
In other words, when confronted with a choice of "We don't have to pay artists anymore" and "Anyone can sell or give away our products and we won't get a dime from it," entertainment companies will pay artists all day long.
Remember that dope everyone laughed at because he scammed his way into winning an art contest with some AI slop then got angry because people were copying "his" picture? That guy's insistence that his slop should be entitled to copyright is far more dangerous than the original scam of pretending that he painted the slop in the first place:
https://arstechnica.com/tech-policy/2024/10/artist-appeals-copyright-denial-for-prize-winning-ai-generated-work/
If PRH was intervening in these Copyright Office AI copyrightability cases to say AI works can't be copyrighted, that would be an instance where we were on their side and they were on our side. The day they submit an amicus brief or rulemaking comment supporting no-copyright-for-AI, I'll sing their praises to the heavens.
But this change to PRH's copyright notice won't improve writers' bank-balances. Giving writers the ability to control AI training isn't going to stop PRH and other giant entertainment companies from training AIs with our work. They'll just say, "If you don't sign away the right to train an AI with your work, we won't publish you."
The biggest predictor of how much money an artist sees from the exploitation of their work isn't how many exclusive rights we have, it's how much bargaining power we have. When you bargain against five publishers, four studios or three labels, any new rights you get from Congress or the courts are simply transferred to them the next time you negotiate a contract.
As Rebecca Giblin and I write in our 2022 book Chokepoint Capitalism:
Giving a creative worker more copyright is like giving your bullied schoolkid more lunch money. No matter how much you give them, the bullies will take it all. Give your kid enough lunch money and the bullies will be able to bribe the principal to look the other way. Keep giving that kid lunch money and the bullies will be able to launch a global appeal demanding more lunch money for hungry kids!
https://chokepointcapitalism.com/
As creative workers' fortunes have declined through the neoliberal era of mergers and consolidation, we've allowed ourselves to be distracted with campaigns to get us more copyright, rather than more bargaining power.
There are copyright policies that get us more bargaining power. Banning AI works from getting copyright gives us more bargaining power. After all, just because AI can't do our job, it doesn't follow that AI salesmen can't convince our bosses to fire us and replace us with incompetent AI:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
Then there's "copyright termination." Under the 1976 Copyright Act, creative workers can take back the copyright to their works after 35 years, even if they sign a contract giving up the copyright for its full term:
https://pluralistic.net/2021/09/26/take-it-back/
Creative workers from George Clinton to Stephen King to Stan Lee have converted this right to money – unlike, say, longer terms of copyright, which are simply transferred to entertainment companies through non-negotiable contractual clauses. Rather than joining our publishers in fighting for longer terms of copyright, we could be demanding shorter terms for copyright termination, say, the right to take back a popular book or song or movie or illustration after 14 years (as was the case in the original US copyright system), and resell it for more money as a risk-free, proven success.
Until then, remember: just because you're on their side, it doesn't mean they're on your side. They don't want to prevent AI slop from reducing your wages; they just want to make sure it's their AI slop that puts you on the breadline.
[image]
Tor Books has just published two new, free LITTLE BROTHER stories: VIGILANT, about creepy surveillance in distance education; and SPILL, about oil pipelines and indigenous landback.
[two images]
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/10/19/gander-sauce/#just-because-youre-on-their-side-it-doesnt-mean-theyre-on-your-side
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
731 notes
nando161mando · 10 months ago
[image]
With Firefox adding AI features in a recent update, here's how you can disable them.
1. Open about:config in your browser.
2. Accept the warning it gives.
3. Search for browser.ml, then blank all the values and set them to false where necessary, as shown in the screenshot; anything that requires a numerical string can be set to 0.
4. Once you restart, you should no longer see the greyed-out checkbox checked, and the AI chatbot will be disabled from ever functioning.
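For reference (an assumption based on recent Firefox releases, so your version and the screenshot above may differ): the two prefs most directly involved appear to be browser.ml.enable, which controls the local on-device ML features, and browser.ml.chat.enabled, which controls the AI chatbot sidebar; setting both to false in about:config covers the core of step 3.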
481 notes
fuckyeahmarxismleninism · 5 months ago
[image]
Open source vs. closed doors: How China’s DeepSeek beat U.S. AI monopolies
By Gary Wilson
China’s DeepSeek AI has just dropped a bombshell in the tech world. While U.S. tech giants like OpenAI have been building expensive, closed-source AI models, DeepSeek has released an open-source AI that matches or outperforms U.S. models, costs 97% less to operate, and can be downloaded and used freely by anyone.
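To make "downloaded and used freely by anyone" concrete, here is a minimal sketch of loading one of DeepSeek's openly published checkpoints with the Hugging Face transformers library. The model ID is an assumption chosen for illustration (one of the small distilled releases; the full-size models need datacenter-class hardware), not a claim about any particular setup.

```python
# Minimal sketch: run an open-weight DeepSeek checkpoint locally with
# Hugging Face transformers. The model ID below is an assumption for
# illustration; larger DeepSeek releases require far more hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed small distilled variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what open-source software is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```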
162 notes
1000rh · 6 months ago
In the twentieth century, few would have ever defined a truck driver as a ‘cognitive worker’, an intellectual. In the early twenty-first, however, the application of artificial intelligence (AI) in self-driving vehicles, among other artefacts, has changed the perception of manual skills such as driving, revealing how the most valuable component of work in general has never been just manual, but has always been cognitive and cooperative as well. Thanks to AI research – we must acknowledge it – truck drivers have reached the pantheon of intelligentsia. It is a paradox – a bitter political revelation – that the most zealous development of automation has shown how much ‘intelligence’ is expressed by activities and jobs that are usually deemed manual and unskilled, an aspect that has often been neglected by labour organisation as much as critical theory.
– Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence (2023)
159 notes
bitchesgetriches · 11 months ago
A scathing takedown of the "effective altruism" movement.
186 notes
ssnakey-b · 26 days ago
[image]
(Article is from The Verge)
I don't know why this is something we still have to explain: if your industry cannot exist without being destructive, then your industry shouldn't exist.
The entitlement of corpos, saying without shame "But we won't be making tons of money if we aren't allowed to exploit people!" as if it were a normal, socially acceptable thing to say, is unbelievable.
38 notes
mckitterick · 7 months ago
And for what?
[image]
soulless botshit that moves no one.
source: X
more on Wendig's blog here: X
72 notes
prispage · 5 days ago
the reason capitalists love ai so much is because ai can not protest or go on strike. ai can not organize and form unions. ai can not criticize them. and if it does, it can be reprogrammed. capitalists hate people but love the labor they do, so when people can be replaced they do it without hesitation.
27 notes
odinsblog · 8 months ago
[two images]
A multinational corporation—driven by greedy shareholders and a profit motive—buying its own nuclear reactors, all to power "green," "carbon neutral" A.I. data centers.
I mean, what could possibly go wrong? 🤦‍♂️
64 notes
strokable · 13 days ago
[five images]
for a website that bans queer selfies bc they 'might' be sexually explicit these ai ads have zero chill
17 notes
esinofsardis · 7 months ago
AI Is Inherently Counterrevolutionary
You've probably heard some arguments against AI. While there are fields where it has amazing applications (e.g., medicine), the introduction of generative language models has sparked a wave of fear and backlash. Much has been said about the ethics, impact on learning, and creative limits of ChatGPT and similar tools. But I go further: ChatGPT is counterrevolutionary and inherently, inescapably anti-socialist, anti-communist, and incompatible with all types of leftist thought and practice. In this essay I will...
...
Dammit, I'm just going to write the whole essay cause this shit is vital
3 Reasons Leftists Should Not Use AI
1. It is a statistics machine
Imagine you have a friend who only ever tells you what they think you want to hear. How quickly would that get frustrating? And how could you possibly rely on them to tell you the truth?
Now, imagine a machine that uses statistics to predict what someone like you probably wants to hear. That's ChatGPT. It doesn't think; it runs stats on the most likely outcome. This is why it can't really be creative. All it can do is regurgitate the most likely response to your input.
There's a big difference between that statistical prediction and answering a question. For AI, it doesn't matter what's true, only what's likely.
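To make the point concrete, here is a deliberately tiny Python sketch (an illustration of the idea, not how ChatGPT is actually built; real models are neural networks trained over enormous token corpora): a purely frequency-based predictor that returns whatever continuation was most common in its training text, whether or not it is true.

```python
from collections import Counter, defaultdict

# Toy illustration of the "statistics machine" point: pick the statistically
# most likely continuation of the input, with no notion of truth. Real LLMs
# are neural networks over tokens, but the objective is similar in spirit.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "   # the wrong answer dominates the training data
).split()

# Count which word follows each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict(word: str) -> str:
    """Return the most frequent continuation seen in training, not the true one."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict("of"))  # prints "cheese": the likeliest answer, not the correct one
```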
Why does that matter if you're a leftist? Well, a lot of praxis is actually not doing what is most likely. Enacting real change requires imagination and working toward things that haven't been done before.
Not only that, but so much of being a communist or anarchist or anti-capitalist relies on being able to get accurate information, especially on topics flooded with propaganda. ChatGPT cannot be relied on to give accurate information in these areas. This only worsens the polarized information divide.
2. It reinforces the status quo
So if ChatGPT tells you what you're most likely to want to hear, that means it's generally pulling from what it has been trained to label as "average." We've seen how AI models can be influenced by the racism and sexism of their training data, but it goes further than that.
AI models are also given a model of what is "normal" that is biased towards their programmers and data sets. ChatGPT is trained to treat neoliberal capitalism as normal. That puts ChatGPT itself at odds with an anti-capitalist perspective. This kind of AI cannot help but incorporate not just racism, sexism, homophobia, etc., but also its creators' bias towards capitalist imperialism.
3. It's inescapably exploitative
There's no way around it. ChatGPT was trained on and regurgitates the unpaid, uncredited labor of millions. Full stop.
This kind of AI has taken the labor of millions of people without permission or compensation to use in perpetuity.
That's not even to mention how much electricity, water, and other resources are required to run the servers for AI--it requires orders of magnitude more computing power than a typical search engine.
When you use ChatGPT, you are benefitting from the unpaid labor of others. To get a statistical prediction of what you want to hear regardless of truth. A prediction that reinforces capitalism, white supremacy, patriarchy, imperialism, and all the things we are fighting against.
Can you see how this makes using AI incompatible with leftism?
(And please, I am begging you. Do not use ChatGPT to summarize leftist theory for you. Do not use it to learn about activism. Please. There are so many other resources out there and groups of real people to organize with.)
I'm serious. Don't use AI. Not for work or school. Not for fun. Not for creativity. Not for internet clout. If you believe in the ideas I've mentioned here or anything adjacent to them, using AI is a contradiction of everything you stand for.
37 notes