#stochastic parrots
Text
Resource List: Problems with AI/GenAI
The following list leads to a variety of reports and resources related to challenges and harms caused by the AI hype machine. It is by no means exhaustive and was originally just meant for myself to help me keep track of things, but maybe some of you will find it useful as well.
If you have people in your circles who have fallen for the hype, or if you just want to dive deeper into some aspects of this whole mess yourself, these articles, papers, books, and podcasts can serve as good starting points. Many of them include links to additional resources, and if you follow some of these researchers/authors on social media, your feeds will soon be filled with even more insightful stuff.
For a collection of news items on “AI being shitty”, also see this “Questioning AI Resource List” compiled by Michelle Note.
~~~
General Primers
“What is AI? Everyone thinks they know, but no one can agree. And that’s a problem.” (Will Douglas Heaven, MIT Technology Review, 2024-07-10) Deep dive into the history of AI, the origins of the terminology, the rich techbro fanatics behind the cult-like hype, the researchers/scientists calling for a saner approach, and the implications for politics and society that should concern us all. (Link to original MIT TR page with paywall | Archived version)
“The WIRED Guide to Artificial Intelligence” (Tom Simonite, WIRED, 2023-02-08) General overview and timeline of the beginnings of AI as well as a summary of the current state of AI, the controversies surrounding GenAI, and the challenges for society due to all the hype. (WIRED.com link)
“The debate over understanding in AI’s large language models” (Melanie Mitchell & David C. Krakauer, PNAS, 2022-10-12) Detailed account of the major sides currently debating whether LLMs are capable of understanding language in any humanlike sense. Includes extensive list of references with links to related papers and research. (PNAS.org link)
“AI History Timeline” (interactive chart) (AI Watch / European Commission) Visual overview of the history of AI with selected important AI breakthroughs from 1950 to the present. (AI Watch link)
Focus: Environmental Impact
“The real cost of AI is being paid in deserts far from Silicon Valley” (book extract) (Karen Hao, Rest of World, 2025-05-26) Extract from Hao’s book, Empire of AI, focusing on the devastating impact that OpenAI’s reckless ventures have on Chile's mineral reserves, its water resources, and its indigenous communities. (Rest of World link)
“AI is draining water from areas that need it most” (Leonardo Nicoletti, Michelle Ma and Dina Bass, Bloomberg Technology, 2025-05-08) Facts and figures related to the immense water consumption of data centers, roughly two thirds of which are now in places with high to extremely high levels of water stress. (Link to original Bloomberg page with paywall | Archived version | LinkedIn post by author)
“We Went to the Town Elon Musk Is Poisoning” (video) (More Perfect Union, 2025-05-30) Short documentary about how Musk’s massive xAI data center is poisoning Memphis and its predominantly Black neighborhoods by burning enough gas to power a small city, with no permits and no pollution controls. (YouTube video link)
“The Unpaid Toll: Quantifying the Public Health Impact of AI” (Yuelin Han, Zhifeng Wu et al., UC Riverside, 2024-12-09) Research paper about the potential public health burden, specifically due to the degradation of air quality caused by AI’s lifecycle operations, which is valued at more than $20 billion per year for US data centers in 2030 and unevenly impacts economically disadvantaged communities. (Arxiv.org link)
“Power Hungry: AI and our energy future” (Mat Honan (ed.), MIT Technology Review, 2025-05) Deep dive into AI’s energy requirements and its carbon debt, with detailed math on energy usage down to the prompt level. (Link to original MIT TR page with paywall | Archived version | LinkedIn post by editor)
Focus: Exploitation of Workers and the General Public
“The Exploited Labor Behind Artificial Intelligence” (Adrienne Williams, Milagros Miceli and Timnit Gebru, Noema Magazine, 2022-10-13) Detailed account (including various references to related pieces) of how AI systems are fueled by millions of underpaid gig workers, data labelers, content moderators etc., especially in the Global South, who are performing repetitive tasks under precarious labor conditions while the tech companies that have branded themselves “AI first” are making millions on the backs of those exploited workers. (Noema Magazine link)
“How AI companies exploit data workers in Kenya” (video) (Janosch Delcker & Mariel Müller, DW, 2024-12-11) Video report about the invisible workers behind the “AI revolution” who painstakingly tag the data needed to power the artificial intelligence systems many of us use. (DW.com link)
“Where Cloud Meets Cement – A Case Study Analysis of Data Center Development” (Hanna Barakat, Chris Cameron, Alix Dunn, Prathm Juneja and Emma Prest, The Maybe, 2025-04) Investigative reporting on five planned data centers around the world that are often framed as “economic opportunities” but in reality cause much harm to local communities through strain on the electrical grid, toxic emissions, and high water/energy consumption. (The Maybe link | LinkedIn post by author)
“Artificial Power: 2025 Landscape Report” (AI Now Institute, 2025-06-03) Detailed report on the state of play in the AI market and the stakes for the public, with the primary diagnosis being that the push to integrate AI everywhere grants AI companies and tech oligarchs power that goes far beyond their deep pockets, so we need to ask not how AI is being used by us but how it is being used on us. (AI Now Institute link | LinkedIn post by authors)
Focus: Criminal Justice
“AI + criminal legal system = bad” (Josie Duffy Rice & Hannah Riley, The Jump Line, 2025-06-11) Newsletter issue that zooms in on the increasing use of AI in policing and incarceration; includes various links to further reports as well as an interview with Matthew Guariglia of the Electronic Frontier Foundation. (The Jump Line on Substack link)
“Artificial Intelligence Is Putting Innocent People at Risk of Being Incarcerated” (Alyxaundria Sanford, Innocence Project, 2024-02-14) Report about how the increased use of AI by law enforcement is yet another example for the misapplication of forensic science that disproportionately affects marginalized/Black communities and has already led to several confirmed cases of misidentification due to facial recognition software. (Innocence Project link)
“AI Generated Police Reports Raise Concerns Around Transparency, Bias” (Jay Stanley, ACLU, 2024-12-10) Quick primer on why AI-generated police reports threaten to exacerbate existing problems and create new ones in law enforcement. (ACLU.org link)
Focus: Society/Education
“Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” (Nataliya Kosmyna et al., MIT Media Lab, 2025-06-10) Study focusing on neural and behavioral consequences for people relying on LLM assistance for essay writing tasks, with the results showing that users had lower cognitive activity, struggled to accurately quote their own work, and consistently underperformed at neural, linguistic, and behavioral levels compared to the other study participants who did not rely on LLMs – thus raising concerns about the long-term educational implications of LLM reliance. (Arxiv.org link)
“AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” (Michael Gerlich) Study investigating the relationship and significant negative correlation between frequent AI usage and critical thinking skills, with a focus on cognitive offloading as a mediating factor and highlighting the potential cognitive costs of AI tool reliance. (MDPI.com link | LinkedIn post by author)
“Don’t believe the hype. AI myths and the need for a critical approach in higher education.” (Jürgen Rudolph, Fadhil Ismail, Shannon Tan and Pauline Seah, JALT, 2025-02-18) Editorial focusing on the pervasive AI/GenAI hype in higher education and eight myths that shape current discourse, making it clear that AI is not an autonomous, intelligent entity but a mere product that depends on often exploitative labour and data extraction practices and tends to exacerbate existing inequalities. (JALT link | LinkedIn post by author)
“Teachers Are Not OK” (Jason Koebler, 404 Media, 2025-06-02) Collection of quotes and first-hand accounts of teachers related to how schools are not prepared for ChatGPT and describing the negative impact GenAI is having on teaching and the educational sector. (404 Media link)
“Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good.” (Ben Williamson, Alex Molnar and Faith Boninger, NEPC, 2024-03-05) Report on the need for stronger regulation and why AI in education is a public problem because it reinforces issues like bureaucratic opacity, threatens student privacy, furthers school commercialization, worsens inequalities, erodes teacher autonomy, and drives dangerous faith in magical technosolutions. (NEPC link | LinkedIn post by author)
“Against the Commodification of Education—if harms then not AI” (Dagmar Monett & Gilbert Paquet, JODDE, 2025-05-11) Paper calling for a change in direction with regard to the unbridled integration of AI/GenAI in educational systems so we can first deal with key concerns such as preserving academic integrity, ensuring the quality of information provided by GenAI systems, respecting IP rights, and limiting the influence of tech corporations, as well as answer critical questions about the future of education, the tools’ impact on students, and the implications for the teaching profession. (JODDE link)
“They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.” (Kashmir Hill, The New York Times, 2025-06-13) Disturbing report on how GenAI chatbots can lead vulnerable people down conspiratorial rabbit holes and encourage distorted perceptions of reality and worse. (Link to original NYT article | Gift Article | Archived version)
“What AI thinks a beautiful woman looks like” (Nitasha Tiku & Szu Yu Chen, Washington Post, 2024-05-31) Illustrated report on the biases and stereotypes of GenAI systems that they inherited from the flawed data they were fed during their training. (Washington Post link without paywall)
Books
“The AI Con” (Emily M. Bender & Alex Hanna, 2025) Blurb: A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls of technology sold under this banner, and why it’s crucial to recognize the many ways in which AI hype covers for a small set of power-hungry actors at work and in the world. https://thecon.ai/
“Empire of AI” (Karen Hao, 2025) Blurb: From a brilliant longtime AI insider with intimate access to the world of Sam Altman’s OpenAI from the beginning, an eye-opening account of arguably the most fateful tech arms race in history, reshaping the planet in real time, from the cockpit of the company that is driving the frenzy. https://karendhao.com/
“Data Grab: The New Colonialism of Big Tech and How to Fight Back” (Ulises A. Mejias & Nick Couldry, 2024) Blurb: A compelling argument that the extractive practices of today’s tech giants are the continuation of colonialism—and a crucial guide to collective resistance. https://press.uchicago.edu/ucp/books/book/chicago/D/bo216184200.html
“Feeding the Machine: The Hidden Human Labour Powering AI” (James Muldoon, Mark Graham and Callum Cant, 2024) Blurb: A myth-dissolving exposé of how artificial intelligence exploits human labor, and a resounding argument for a more equitable digital future. https://www.bloomsbury.com/us/feeding-the-machine-9781639734979/
Newsletters/Podcasts
“Tech Won’t Save Us” About: Weekly conversations with experts to dissect the tech industry and the powerful people at its helm with the goal to provide insights that will shine a different light on the industry, make us reconsider our relationship to technology, and question the narratives we’ve been fed about it for decades. https://techwontsave.us/about
“Mystery AI Hype Theater 3000: The Newsletter” About: AI has too much hype. In this companion newsletter, linguist Prof. Emily M. Bender and sociologist Dr. Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They talk about everything "AI", from machine consciousness to science fiction, to political economy to art made by machines. https://buttondown.com/maiht3k/archive/
“Charting Gen AI” About: Key developments in GenAI and the impacts these are having on human-made media, as well as the ethics and behaviour of the AIs and calls for regulatory intervention to protect the rights of artists, performers, and creators. https://grahamlovelace.substack.com
“Where's Your Ed At” / “Better Offline” Newsletter and podcast by Ed Zitron, focusing on current developments related to AI/GenAI, the rot economy built by Big Tech, and the worrisome future that tech’s elite wants to build. https://www.wheresyoured.at / https://linktr.ee/betteroffline
Last update: 2025-06-20
Text
I get really annoyed by the way that people talk about technology. Unfortunately, as a teacher, I can't voice my full annoyance. As such, I wrote it down here.
Technology is not an inevitable progression. You don't learn advanced mathematics just because you figured out the alphabet and bricks; there are many roads to each discovery, and there are choices encoded in each piece of technology. Acting like something is inevitable is claiming that you have no responsibility for how it turns out. It's saying that "this is going to happen anyway, so I'm just going to go ahead and make it happen."
You're not supposed to want to be Robert Oppenheimer. That wasn't the point of that movie. It was a cop-out when he said it, and it's a cop-out now.
Future posts marked with "contraslop" will be me figuring out the logic for whatever AI policy I institute next semester.
Text
3 years ago, the stochastic parrots paper was published.
When things go bad this time, at least don’t buy into the narrative that nobody saw it coming. They did, they yelled about it, they got fired for it.
Text
What are LLMs? Large language models explained
Large Language Models (LLMs), exemplified by GPT-4, are artificial intelligence systems that use deep learning for natural language processing. They have reshaped how people interact with technology, powering applications like text prediction, content creation, language translation, and voice assistants.
Despite their fluency, LLMs have been dubbed "stochastic parrots," sparking debate over whether they merely recombine memorized content without genuine understanding. Having evolved from autocomplete technology, they now drive innovations from search engines to creative writing. Challenges persist, however: hallucinations that generate false information, limitations in reasoning and critical thinking, and an inability to abstract beyond specific examples. As LLMs advance they hold considerable potential, but these hurdles stand between them and anything resembling comprehensive human intelligence.
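The "stochastic parrot" label points at this autocomplete lineage: at bottom, a language model predicts a plausible next token given the previous ones, then samples. Here is a toy word-level bigram sketch of that loop; it is an illustration of the idea only, not how any production LLM is actually built or trained:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which words follow which in a training corpus."""
    counts = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def parrot(counts, start, length=8, seed=0):
    """Generate text by repeatedly sampling an observed continuation."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the parrot repeats the phrase and the parrot repeats the pattern"
model = train_bigrams(corpus)
print(parrot(model, "the"))
```

Real LLMs replace the raw counts with neural networks over billions of parameters, but the generation loop (sample a likely continuation, append it, repeat) has the same shape, which is exactly what the "parrot" critique is pointing at.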
For more information, please visit the AIBrilliance blog page.
Text
I am absolutely in favour of finding a better term than "AI" as they are not. Putting 'generative' at the front is just adding another fib.
The issue with "stochastic parrots" (and I'm prepared to be proven wrong) is that the general public doesn't understand it any more clearly.
Text
And also somehow thinks symbolism is some specific list of things you memorize by rote.
Which, there are some symbols and memes (old, pre-internet meaning) and imagery that will show up repeatedly or which reference older works and which will be apparent to people familiar with or learnéd in art history. How could there not be, after (tens of) thousands of years of human art!? Lots of religious paintings, for example, have loads of this kind of baggage that they're carrying — or responding to by specifically rejecting!
And sometimes having knowledge of the artist or their life will allow you to see things that are likely or even obviously symbolic to them, too. I think of Frida Kahlo's work, especially, when I think of this.
But these things are rarely specifically stated or noted by the artist. They're almost always up to some degree of interpretation or based on inference and study.
And a piece can move you even if you know nothing about any of that! Not only that, but people can also take their own meaning from art and see interpretations and find meaning the artist didn't intend. That's totally valid, too (as long as you use that personal frame when talking about these sorts of feelings about work).
The idea that there's one rigidly defined way to look at any piece of art — poetry, prose, painting, sculpture, etc. — is so strange.
AI people: we're just as much artists as you are, you gotta be so observant and go through so many correcting phases for the picture to look good uwu also AI people:
Text
AI hype is just going to turn out to be the modern-day Mechanical Turk scam.
The Washington Post, "Google’s weird AI answers hint at a fundamental problem" (analysis by Will Oremus, May 29, 2024):
Narayanan said the company’s “easiest way out of this mess” might be to pay human fact-checkers for millions of the most common search queries. “Essentially, Google would become a content farm masquerading as a search engine, laundering low-wage human labor with the imprimatur of AI.”
"I had a dream of soylent chatbots made out of people. Maybe the chatbots aren't made out of human bodies or even people toiling away in some scam center, but the way this sausage is made, and served, is nevertheless going to sour everyone eventually." (Chloe Humbert, Mar 16, 2024)
"Lying AI should not be doing the people's business or science. Lives are at stake and the U.S. government and scientific scholars are buying into tech hype boondoggles. Is it corruption, incompetence, or sabotage?" (Chloe Humbert, Mar 22, 2024)
#ai hype#lying ai#google#tech hype#tech won't save us#ai doesn't exist#chatbots#llms#stochastic parrots
Video
revenge of the stochastic parrots by Davivid Rose Via Flickr: I typed "stochastic parrots" in the Google search bar. This is what appeared. (AI humor?) Please click here to read my "autobiography": thewordsofjdyf333.blogspot.com/ And my Flickr "profile" page may be viewed by clicking on this link: www.flickr.com/people/jdyf333/ My telephone number is: 510-260-9695
Text
The Brave Little Toaster
Picks and Shovels is a new, standalone technothriller starring Marty Hench, my two-fisted, hard-fighting, tech-scam-busting forensic accountant. You can pre-order it on my latest Kickstarter, which features a brilliant audiobook read by Wil Wheaton.
The AI bubble is the new crypto bubble: you can tell because the same people are behind it, and they're doing the same thing with AI as they did with crypto – trying desperately to find a use case to cram it into, despite the yawning indifference and outright hostility of the users:
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
This week on the excellent Trashfuture podcast, the regulars – joined by 404 Media's Jason Koebler – have a hilarious – as in, I was wheezing with laughter! – riff on this year's CES, where companies are demoing home appliances with LLMs built in:
https://www.podbean.com/media/share/pb-hgi6c-179b908
Why would you need a chatbot in your dishwasher? As it turns out, there's a credulous, Poe's-law-grade Forbes article that lays out the (incredibly stupid) case for this (incredibly stupid) idea:
https://www.forbes.com/sites/bernardmarr/2024/03/29/generative-ai-is-coming-to-your-home-appliances/
As the Trashfuturians mapped out this new apex of the AI hype cycle, I found myself thinking of a short story I wrote 15 years ago, satirizing the "Internet of Things" hype we were mired in. It's called "The Brave Little Toaster", and it was published in MIT Tech Review's TRSF anthology in 2011:
http://bestsf.net/trsf-the-best-new-science-fiction-technology-review-2011/
The story was meant to poke fun at the preposterous IoT hype of the day, and I recall thinking that creating a world of talking appliances was the height of Philip K Dickist absurdism. Little did I dream that a decade and a half later, the story would be even more relevant, thanks to AI pump-and-dumpers who sweatily jammed chatbots into kitchen appliances.
So I figured I'd republish The Brave Little Toaster; it's been reprinted here and there since (there's a high school English textbook that included it, along with a bunch of pretty fun exercises for students), and I podcasted it back in the day:
https://ia803103.us.archive.org/35/items/Cory_Doctorow_Podcast_212/Cory_Doctorow_Podcast_212_Brave_Little_Toaster.mp3
A word about the title of this story. It should sound familiar – I nicked it from a brilliant story by Tom Disch that was made into a very weird cartoon:
https://www.youtube.com/watch?v=I8C_JaT8Lvg
My story is one of several I wrote by stealing the titles of other stories and riffing on them; they were very successful, winning several awards, getting widely translated and reprinted, and so on:
https://locusmag.com/2012/05/cory-doctorow-a-prose-by-any-other-name/
All right, on to the story!
One day, Mister Toussaint came home to find an extra 300 euros' worth of groceries on his doorstep. So he called up Miz Rousseau, the grocer, and said, "Why have you sent me all this food? My fridge is already full of delicious things. I don't need this stuff and besides, I can't pay for it."
But Miz Rousseau told him that he had ordered the food. His refrigerator had sent in the list, and she had the signed order to prove it.
Furious, Mister Toussaint confronted his refrigerator. It was mysteriously empty, even though it had been full that morning. Or rather, it was almost empty: there was a single pouch of energy drink sitting on a shelf in the back. He'd gotten it from an enthusiastically smiling young woman on the metro platform the day before. She'd been giving them to everyone.
"Why did you throw away all my food?" he demanded. The refrigerator hummed smugly at him.
"It was spoiled," it said.
#
But the food hadn't been spoiled. Mister Toussaint pored over his refrigerator's diagnostics and logfiles, and soon enough, he had the answer. It was the energy beverage, of course.
"Row, row, row your boat," it sang. "Gently down the stream. Merrily, merrily, merrily, merrily, I'm offgassing ethylene." Mister Toussaint sniffed the pouch suspiciously.
"No you're not," he said. The label said that the drink was called LOONY GOONY and it promised ONE TRILLION TIMES MORE POWERFUL THAN ESPRESSO!!!!!ONE11! Mister Toussaint began to suspect that the pouch was some kind of stupid Internet of Things prank. He hated those.
He chucked the pouch in the rubbish can and put his new groceries away.
#
The next day, Mister Toussaint came home and discovered that the overflowing rubbish was still sitting in its little bag under the sink. The can had not cycled it through the trapdoor to the chute that ran to the big collection-point at ground level, 104 storeys below.
"Why haven't you emptied yourself?" he demanded. The trashcan told him that toxic substances had to be manually sorted. "What toxic substances?"
So he took out everything in the bin, one piece at a time. You've probably guessed what the trouble was.
"Excuse me if I'm chattery, I do not mean to nattery, but I'm a mercury battery!" LOONY GOONY's singing voice really got on Mister Toussaint's nerves.
"No you're not," Mister Toussaint said.
#
Mister Toussaint tried the microwave. Even the cleverest squeezy-pouch couldn't survive a good nuking. But the microwave wouldn't switch on. "I'm no drink and I'm no meal," LOONY GOONY sang. "I'm a ferrous lump of steel!"
The dishwasher wouldn't wash it ("I don't mean to annoy or chafe, but I'm simply not dishwasher safe!"). The toilet wouldn't flush it ("I don't belong in the bog, because down there I'm sure to clog!"). The windows wouldn't retract their safety screen to let it drop, but that wasn't much of a surprise.
"I hate you," Mister Toussaint said to LOONY GOONY, and he stuck it in his coat pocket. He'd throw it out in a trash-can on the way to work.
#
They arrested Mister Toussaint at the 678th Street station. They were waiting for him on the platform, and they cuffed him just as soon as he stepped off the train. The entire station had been evacuated and the police wore full biohazard containment gear. They'd even shrinkwrapped their machine-guns.
"You'd better wear a breather and you'd better wear a hat, I'm a vial of terrible deadly hazmat," LOONY GOONY sang.
When they released Mister Toussaint the next day, they made him take LOONY GOONY home with him. There were lots more people with LOONY GOONYs to process.
#
Mister Toussaint paid the rush-rush fee that the storage depot charged to send over his container. They forklifted it out of the giant warehouse under the desert and zipped it straight to the cargo-bay in Mister Toussaint's building. He put on old, stupid clothes and clipped some lights to his glasses and started sorting.
Most of the things in the container were stupid. He'd been throwing away stupid stuff all his life, because the smart stuff was just so much easier. But then his grandpa had died and they'd cleaned out his little room at the pensioner's ward and he'd just shoved it all in the container and sent it out to the desert.
From time to time, he'd thought of the eight cubic meters of stupidity he'd inherited and sighed a put-upon sigh. He'd loved Grandpa, but he wished the old man had used some of the ample spare time from the tail end of his life to replace his junk with stuff that could more gracefully reintegrate with the materials stream.
How inconsiderate!
#
The house chattered enthusiastically at the toaster when he plugged it in, but the toaster said nothing back. It couldn't. It was stupid. Its bread-slots were crusted over with carbon residue and it dribbled crumbs from the ill-fitting tray beneath it. It had been designed and built by cavemen who hadn't ever considered the advantages of networked environments.
It was stupid, but it was brave. It would do anything Mister Toussaint asked it to do.
"It's getting hot and sticky and I'm not playing any games, you'd better get me out before I burst into flames!" LOONY GOONY sang loudly, but the toaster ignored it.
"I don't mean to endanger your abode, but if you don't let me out, I'm going to explode!" The smart appliances chattered nervously at one another, but the brave little toaster said nothing as Mister Toussaint depressed its lever again.
"You'd better get out and save your ass, before I start leaking poison gas!" LOONY GOONY's voice was panicky. Mister Toussaint smiled and depressed the lever.
Just as he did, he thought to check in with the flat's diagnostics. Just in time, too! Its quorum-sensors were redlining as it listened in on the appliances' consternation. Mister Toussaint unplugged the fridge and the microwave and the dishwasher.
The cooker and trash-can were hard-wired, but they didn't represent a quorum.
#
The fire department took away the melted toaster and used their axes to knock huge, vindictive holes in Mister Toussaint's walls. "Just looking for embers," they claimed. But he knew that they were pissed off because there was simply no good excuse for sticking a pouch of independently powered computation and sensors and transmitters into an antique toaster and pushing down the lever until oily, toxic smoke filled the whole 104th floor.
Mister Toussaint's neighbors weren't happy about it either.
But Mister Toussaint didn't mind. It had all been worth it, just to hear LOONY GOONY beg and weep for its life as its edges curled up and blackened.
He argued mightily, but the firefighters refused to let him keep the toaster.
#
If you enjoyed that and would like to read more of my fiction, may I suggest that you pre-order my next novel as a print book, ebook or audiobook, via the Kickstarter I launched yesterday?
https://www.kickstarter.com/projects/doctorow/picks-and-shovels-marty-hench-at-the-dawn-of-enshittification?ref=created_projects
Check out my Kickstarter to pre-order copies of my next novel, Picks and Shovels!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/01/08/sirius-cybernetics-corporation/#chatterbox
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#brave little toaster#iot#internet of things#internet of shit#fiction#short fiction#short stories#thomas m disch#science fiction#sf#gen ai#ai#generative ai#llms#chatbots#stochastic parrots#mit tech review#tech review#trashfuture#forbes#ces#torment nexus#pluralistic
Text
stepping outta my trash can and pulling up my internet clown pants:
let's talk about how u can tell executives are using AI to generate statements and memos
sweetie, that em dash you've been making fun of ppl for? that's for the Twilight proles not the AI-holes.
let's talk about the brand exonerative tense and brand marketing in layoffs statements.
writing crisis comms is a highly specialized field that requires years of experience, expertise and the ability to draft on the fly about some of the most difficult topics in industry. and of course... you gotta know the right outlets.
but when you use AI to modify or "refine" copy, you get statements that ring hollow because AI is only good at linguistic flattening. think of using AI to re-draft or revise statements as a whoopie cushion. honey, you know that's not a real fart.
let's take a look✨
what does it mean to "position Gaming for enduring success... to focus on strategic growth... to increase agility and effectiveness?"
here it doesn't just mean they're simply "following Microsoft's lead." now you're used to reading the exonerative tense in statements about war and policing. these often deliberately get rid of all human subjects and prefer passive voice, which is narrowly avoided here. but note that while a subject such as I, we or our does appear at multiple points, it's only to simulate the appearance of unanimous consent (3rd person authoritative: see monarchs and CEOs).
except for where we're talking about "employees who are affected." they are affected. but the cause slips through our linguistic fingers like wet sand. there is no actor, no sandcastle, simply an inevitable wave of change.
and well, except for all the headlines that in fact use that same passive linguistic structure to elide a subject.🙃
did the layoffs begin? omg how did that happen- someone had to rubber stamp it right? and someone is impacted. but there's not really an indication of anything beyond an overarching brand subject.
let's turn back to the Xbox CEO's statement and the phrase "our platform, hardware and game roadmap have never looked stronger."
AI generated corporate copy often contains brand reassurance statements like this one. these are marketing, even when featured in internal crisis comms, with the expectation of distribution by whatever means. with literally thousands of layoffs statements at their disposal, AI can be used to generate statements like this one either in whole or in this case, likely in part.
so let's tldr this.
does it sound flat? is it using words like "agile", "deliver + exceptional" multiple times in close proximity, and "thrive"?
it's not just CEO-speak. it's burying the digital hatchet with AI trained on publicly available and previous executive communications and LinkedIn posts.
optimized for brand resiliency.
workshopped for maximum shareholder satisfaction.
publicized on outlets voted "most likely to exclude the noun from the verb in ways that maximize reach"
welcome to the era of the Brand Exonerative Tense.
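if you want to play along at home, that tl;dr checklist reduces to something you can script. a rough sketch only: the buzzword list and the passive-voice regex below are my own illustrative guesses, not a validated detector of AI-generated copy:

```python
import re

# Illustrative buzzword list only -- assumed for the sketch, not a real corpus.
BUZZWORDS = {"agility", "agile", "deliver", "exceptional", "thrive",
             "strategic", "enduring", "effectiveness"}

def flatness_score(statement):
    """Crude heuristic: count buzzwords and agentless passive constructions."""
    words = re.findall(r"[a-z]+", statement.lower())
    buzz = sum(w in BUZZWORDS for w in words)
    # Passive voice with no actor attached: "were eliminated", "are affected"
    passives = len(re.findall(r"\b(?:is|are|was|were|been|being)\s+\w+ed\b",
                              statement.lower()))
    return {"buzzwords": buzz, "agentless_passives": passives}

# Hypothetical memo in the style quoted above, not the actual Xbox statement.
memo = ("Roles were eliminated to increase agility and effectiveness "
        "as we position Gaming for enduring success. Employees who are "
        "affected will be supported as we continue to deliver exceptional "
        "experiences and thrive.")
print(flatness_score(memo))
```

on the sample memo it counts six buzzwords and two agentless passives. a real classifier would need far more than a word list, but the point stands: the flatness is measurable.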
#microsoft#xbox#xbox layoffs#9100 people are losing their jobs#that's 4% of the staff#ceos#because guys in $300 crew necks and jeans are overleveraged in stochastic parrots
Text
thanks go to Meryl Streep who made me appreciate Abba.
welp, if you ask it the right questions… but if you’re turning your nose up it’s only gonna hear maga bullshit.
remember, sensitive dependence on initial conditions!
@jstor
Text
I WON'T SURRENDER TO "DAUGHTERS OF THE ANTHROPOCENE". WON'T SURRENDER TO SCIONS OF IT ALL. WHEN THEY LACE UP THEIR SHOES AND GO DANCING FOR ME WON'T SURRENDER EVEN WHEN I'M GONE.
Text
Surely there are no possible drawbacks to this. Especially given that we're seeing, right now and in real time, how a malicious power can use all this information against us.
#what the fuck people WHAT THE FUCK?!#now the stochastic parrot will not only make your art and do your job it’ll spend your money too
Note
my first and only therapist recommended i take dog milk pills to cure my autism. puppygirl hrt and i declined it
this is the kind of expertise and institutional authority you're missing out on if you ask the stochastic parrot machine to help you with your problems instead of a guy with a psych degree...
Text
Idk I think if you aren't going to do the work of becoming a technical observer and trying to understand the nuances of how these models work (and I sure as hell am not gonna bother yet) it's best to avoid idle philosophizing about "bullshit engines" or "stochastic parrots" or "world models"
Both because you are probably making some assumptions that are completely wrong which will make you look like a fool and also because it doesn't really matter - the ultimate success of these models rests on the reliability of their outputs, not on whether they are "truly intelligent" or whatever.
And if you want to have an uninformed take anyway... can I interest you in registering a prediction? Here are a few of mine:
- No fully self-driving cars sold to individual consumers before 2030
- AI bubble initially deflates after a couple more years without slam-dunk profitable projects, but research and iterative improvement continues
- Almost all white collar jobs incorporate some form of AI that meaningfully boosts productivity by mid 2030s
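Registering a prediction only means something if you score it later. Brier scoring is the standard way to do that; in this sketch the probabilities and outcomes are invented placeholders, not anyone's actual track record:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes (0 or 1).
    Lower is better; always hedging at 0.5 scores a flat 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical examples: (probability you assigned, what actually happened)
forecasts = [
    (0.9, 1),  # "no consumer self-driving cars before 2030" -- came true
    (0.7, 1),  # "AI bubble initially deflates" -- came true
    (0.6, 0),  # "AI boosts most white-collar jobs by mid-2030s" -- did not
]
print(round(brier_score(forecasts), 3))  # -> 0.153
```

Beating the coin-flip baseline of 0.25 means your stated confidence carried real information, which is the whole point of writing predictions down in advance.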