Felt uncool without my own tumblr account. Have an account now. Still don't feel cool. Darn it. // Rarely active here anymore but if you need to reach me, drop me a DM or use fenway03tumblr[at]gmail.com.
Resource List: Problems with AI/GenAI
The following list leads to a variety of reports and resources related to challenges and harms caused by the AI hype machine. It is by no means exhaustive and was originally just meant to help me keep track of things, but maybe some of you will find it useful as well.
If you have people in your circles who have fallen for the hype, or if you just want to dive deeper into some aspects of this whole mess yourself, these articles, papers, books, and podcasts can serve as good starting points. Many of them include links to additional resources, and if you follow some of these researchers/authors on social media, your feeds will soon be filled with even more insightful stuff.
For a collection of news items on “AI being shitty”, also see this “Questioning AI Resource List” compiled by Michelle Note.
~~~
General Primers
“What is AI? Everyone thinks they know, but no one can agree. And that’s a problem.” (Will Douglas Heaven, MIT Technology Review, 2024-07-10) Deep dive into the history of AI, the origins of the terminology, the rich techbro fanatics behind the cult-like hype, the researchers/scientists calling for a saner approach, and the implications for politics and society that should concern us all. (Link to original MIT TR page with paywall | Archived version)
“The WIRED Guide to Artificial Intelligence” (Tom Simonite, WIRED, 2023-02-08) General overview and timeline of the beginnings of AI as well as a summary of the current state of AI, the controversies surrounding GenAI, and the challenges for society due to all the hype. (WIRED.com link)
“The debate over understanding in AI’s large language models” (Melanie Mitchell & David C. Krakauer, PNAS, 2022-10-12) Detailed account of the major sides currently debating whether LLMs are capable of understanding language in any humanlike sense. Includes extensive list of references with links to related papers and research. (PNAS.org link)
“AI History Timeline” (interactive chart) (AI Watch / European Commission) Visual overview of the history of AI with selected important AI breakthroughs from 1950 to the present. (AI Watch link)
Focus: Environmental Impact
“The real cost of AI is being paid in deserts far from Silicon Valley” (book extract) (Karen Hao, Rest of World, 2025-05-26) Extract from Hao’s book, Empire of AI, focusing on the devastating impact that OpenAI’s reckless ventures have on Chile's mineral reserves, its water resources, and its indigenous communities. (Rest of World link)
“AI is draining water from areas that need it most” (Leonardo Nicoletti, Michelle Ma and Dina Bass, Bloomberg Technology, 2025-05-08) Facts and figures related to the immense water consumption of data centers, roughly two thirds of which are now in places with high to extremely high levels of water stress. (Link to original Bloomberg page with paywall | Archived version | LinkedIn post by author)
“We Went to the Town Elon Musk Is Poisoning” (video) (More Perfect Union, 2025-05-30) Short documentary about how Musk’s massive xAI data center is poisoning Memphis and its predominantly Black neighborhoods by burning enough gas to power a small city, with no permits and no pollution controls. (YouTube video link)
“The Unpaid Toll: Quantifying the Public Health Impact of AI” (Yuelin Han, Zhifeng Wu et al., UC Riverside, 2024-12-09) Research paper about the potential public health burden, specifically due to the degradation of air quality caused by AI’s lifecycle operations, which is valued at more than $20 billion per year for US data centers in 2030 and unevenly impacts economically disadvantaged communities. (Arxiv.org link)
“Power Hungry: AI and our energy future” (Mat Honan (ed.), MIT Technology Review, 2025-05) Deep dive into AI’s energy requirements and its carbon debt, with detailed math on energy usage down to the prompt level. (Link to original MIT TR page with paywall | Archived version | LinkedIn post by editor)
Focus: Exploitation of Workers and the General Public
“The Exploited Labor Behind Artificial Intelligence” (Adrienne Williams, Milagros Miceli and Timnit Gebru, Noema Magazine, 2022-10-13) Detailed account (including various references to related pieces) of how AI systems are fueled by millions of underpaid gig workers, data labelers, content moderators etc., especially in the Global South, who are performing repetitive tasks under precarious labor conditions while the tech companies that have branded themselves “AI first” are making millions on the backs of those exploited workers. (Noema Magazine link)
“How AI companies exploit data workers in Kenya” (video) (Janosch Delcker & Mariel Müller, DW, 2024-12-11) Video report about the invisible workers behind the “AI revolution” who painstakingly tag the data needed to power the artificial intelligence systems many of us use. (DW.com link)
“Where Cloud Meets Cement – A Case Study Analysis of Data Center Development” (Hanna Barakat, Chris Cameron, Alix Dunn, Prathm Juneja and Emma Prest, The Maybe, 2025-04) Investigative reporting on five planned data centers around the world that are often framed as “economic opportunities” but in reality cause much harm to local communities through strain on the electrical grid, toxic emissions, and high water/energy consumption. (The Maybe link | LinkedIn post by author)
“Artificial Power: 2025 Landscape Report” (AI Now Institute, 2025-06-03) Detailed report on the state of play in the AI market and the stakes for the public, with the primary diagnosis being that the push to integrate AI everywhere grants AI companies and tech oligarchs power that goes far beyond their deep pockets, so we need to ask not how AI is being used by us but how it is being used on us. (AI Now Institute link | LinkedIn post by authors)
Focus: Criminal Justice
“AI + criminal legal system = bad” (Josie Duffy Rice & Hannah Riley, The Jump Line, 2025-06-11) Newsletter issue that zooms in on the increasing use of AI in policing and incarceration; includes various links to further reports as well as an interview with Matthew Guariglia of the Electronic Frontier Foundation. (The Jump Line on Substack link)
“Artificial Intelligence Is Putting Innocent People at Risk of Being Incarcerated” (Alyxaundria Sanford, Innocence Project, 2024-02-14) Report about how the increased use of AI by law enforcement is yet another example of the misapplication of forensic science that disproportionately affects marginalized/Black communities and has already led to several confirmed cases of misidentification due to facial recognition software. (Innocence Project link)
“AI Generated Police Reports Raise Concerns Around Transparency, Bias” (Jay Stanley, ACLU, 2024-12-10) Quick primer on why AI-generated police reports threaten to exacerbate existing problems and create new ones in law enforcement. (ACLU.org link)
Focus: Society/Education
“Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” (Nataliya Kosmyna et al., MIT Media Lab, 2025-06-10) Study focusing on neural and behavioral consequences for people relying on LLM assistance for essay writing tasks, with the results showing that users had lower cognitive activity, struggled to accurately quote their own work, and consistently underperformed at neural, linguistic, and behavioral levels compared to the other study participants who did not rely on LLMs – thus raising concerns about the long-term educational implications of LLM reliance. (Arxiv.org link)
“AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” (Michael Gerlich) Study investigating the relationship and significant negative correlation between frequent AI usage and critical thinking skills, with a focus on cognitive offloading as a mediating factor and highlighting the potential cognitive costs of AI tool reliance. (MDPI.com link | LinkedIn post by author)
“Don’t believe the hype. AI myths and the need for a critical approach in higher education.” (Jürgen Rudolph, Fadhil Ismail, Shannon Tan and Pauline Seah, JALT, 2025-02-18) Editorial focusing on the pervasive AI/GenAI hype in higher education and eight myths that shape current discourse, making it clear that AI is not an autonomous, intelligent entity but a mere product that depends on often exploitative labour and data extraction practices and tends to exacerbate existing inequalities. (JALT link | LinkedIn post by author)
“Teachers Are Not OK” (Jason Koebler, 404 Media, 2025-06-02) Collection of quotes and first-hand accounts of teachers related to how schools are not prepared for ChatGPT and describing the negative impact GenAI is having on teaching and the educational sector. (404 Media link)
“Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good.” (Ben Williamson, Alex Molnar and Faith Boninger, NEPC, 2024-03-05) Report on the need for stronger regulation and why AI in education is a public problem because it reinforces issues like bureaucratic opacity, threatens student privacy, furthers school commercialization, worsens inequalities, erodes teacher autonomy, and drives dangerous faith in magical technosolutions. (NEPC link | LinkedIn post by author)
“Against the Commodification of Education—if harms then not AI” (Dagmar Monett & Gilbert Paquet, JODDE, 2025-05-11) Paper calling for a change in direction with regard to the unbridled integration of AI/GenAI in educational systems so we can first deal with key concerns such as preserving academic integrity, ensuring the quality of information provided by GenAI systems, respecting IP rights, and limiting the influence of tech corporations, as well as answer critical questions about the future of education, the tools’ impact on students, and the implications for the teaching profession. (JODDE link)
“They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.” (Kashmir Hill, The New York Times, 2025-06-13) Disturbing report on how GenAI chatbots can lead vulnerable people down conspiratorial rabbit holes and encourage distorted perceptions of reality and worse. (Link to original NYT article | Gift Article | Archived version)
“What AI thinks a beautiful woman looks like” (Nitasha Tiku & Szu Yu Chen, Washington Post, 2024-05-31) Illustrated report on the biases and stereotypes of GenAI systems that they inherited from the flawed data they were fed during their training. (Washington Post link without paywall)
Books
“The AI Con” (Emily M. Bender & Alex Hanna, 2025) Blurb: A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls of technology sold under this banner, and why it’s crucial to recognize the many ways in which AI hype covers for a small set of power-hungry actors at work and in the world. https://thecon.ai/
“Empire of AI” (Karen Hao, 2025) Blurb: From a brilliant longtime AI insider with intimate access to the world of Sam Altman’s OpenAI from the beginning, an eye-opening account of arguably the most fateful tech arms race in history, reshaping the planet in real time, from the cockpit of the company that is driving the frenzy. https://karendhao.com/
“Data Grab: The New Colonialism of Big Tech and How to Fight Back” (Ulises A. Mejias & Nick Couldry, 2024) Blurb: A compelling argument that the extractive practices of today’s tech giants are the continuation of colonialism—and a crucial guide to collective resistance. https://press.uchicago.edu/ucp/books/book/chicago/D/bo216184200.html
“Feeding the Machine: The Hidden Human Labour Powering AI” (James Muldoon, Mark Graham and Callum Cant, 2024) Blurb: A myth-dissolving exposé of how artificial intelligence exploits human labor, and a resounding argument for a more equitable digital future. https://www.bloomsbury.com/us/feeding-the-machine-9781639734979/
Newsletters/Podcasts
“Tech Won’t Save Us” About: Weekly conversations with experts to dissect the tech industry and the powerful people at its helm with the goal to provide insights that will shine a different light on the industry, make us reconsider our relationship to technology, and question the narratives we’ve been fed about it for decades. https://techwontsave.us/about
“Mystery AI Hype Theater 3000: The Newsletter” About: AI has too much hype. In this companion newsletter, linguist Prof. Emily M. Bender and sociologist Dr. Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They talk about everything "AI", from machine consciousness to science fiction, to political economy to art made by machines. https://buttondown.com/maiht3k/archive/
“Charting Gen AI” About: Key developments in GenAI and the impacts these are having on human-made media, as well as the ethics and behaviour of the AIs and calls for regulatory intervention to protect the rights of artists, performers, and creators. https://grahamlovelace.substack.com
“Where's Your Ed At” / “Better Offline” Newsletter and podcast by Ed Zitron, focusing on current developments related to AI/GenAI, the rot economy built by Big Tech, and the worrisome future that tech’s elite wants to build. https://www.wheresyoured.at / https://linktr.ee/betteroffline
Last update: 2025-06-20
Or you could just get the Vivaldi browser and enjoy its various tab-related features plus numerous other browser customization options without having to install any additional extensions at all. ;-)

ONLY MURDERS IN THE BUILDING + obligatory elevator scene 1.01 'True Crime' 2.01 'Persons of Interest' 3.01 'The Show Must . . .' 4.01 'Once Upon a Time in the West'
Call me Christopher Robin, gonna solve all of my problems
With imaginary friends who are there when I need them
And I've been seeing backsons, and I've been hearing demons
And they say that they are here, because I'm pushing down my feelings
Wrote a letter to my future self, what the hell
I think you're gonna get better with a little self esteem
And there's a little kid inside me trying to remind me
That I am a Force Field
I am a Force Field
Cloud Cult – "I Am A Force Field" (2024)
Listen to their full new album, "Alchemy Creek", on their website or YouTube channel. And if you can, consider buying a few tracks to support them.
Generative AI isn't creating new jobs, it isn't creating new ways to do your job, and it isn't making anybody any money — and the path to boosting revenues is unclear.
For a nice summary of the current state of the GenAI con, read Edward Zitron's latest post, "Pop-Culture" – well worth your time.
A few more key points from the post:
[...] A week and a half ago, Goldman Sachs put out a 31-page report that includes some of the most damning literature on generative AI I've ever seen. [...] For Goldman to suddenly turn on the AI movement suggests that it’s extremely anxious about the future of generative AI, with almost everybody agreeing on one core point: that the longer this tech takes to make people money, the more money it's going to need to make.
[...] How does GPT – a transformer-based model that generates answers probabilistically based entirely on training data – do anything more than generate paragraphs of occasionally-accurate text? How do any of these models even differentiate when most of them are trained on the same training data that they're already running out of? The training data crisis is one that doesn’t get enough attention, but it’s sufficiently dire that it has the potential to halt (or dramatically slow) any AI development in the near future.
[...] Using generative AI and too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides. In essence, replacing humans with AI might break everything if you're one of those bosses that doesn't actually know what the fuck it is they're talking about.
[...] Any advantages that generative AI gives you can be "arbitraged away" because the tech can be used everywhere, and thus you can't, as a company, raise prices. In plain English: generative AI isn't making any money for anybody because it doesn't actually make companies that use it any extra money.
[...] Generative AI at best processes information when it trains on data, but at no point does it "learn" or "understand," because everything it's doing is based on ingesting training data and developing answers based on a mathematical sense or probability rather than any appreciation or comprehension of the material itself. LLMs are entirely different pieces of technology to that of "an artificial intelligence" in the sense that the AI bubble is hyping, and it's disgraceful that the AI industry has taken so much money and attention with such a flagrant, offensive lie.
#AI#GenAI#don't fall for this con#call this shit out when colleagues or bosses or 'influencers' promote 'AI' tools
On "Rizzoli & Isles" we had some women [directors], mostly men. Lot of times the women didn't come back. They were asked once, did not return. Why? It was a weird thing. Like, here you have this pretty female-centric show, and it was really hard… I just didn't understand it. And then I started to see reasons for it. Some of it was that you have an all-male crew, and they don't wanna be bossed around by a woman. Or a woman comes in and the way [she bosses] them around just doesn't sit right. It was true. I mean, this was a boys club.
– Sasha Alexander on Off Duty: An NCIS Rewatch video podcast (2024-06-04)
...
Today's entry in our ongoing series called Things That Could've Been Better With Fewer Men Involved...
#still miffed#so much potential wasted#so many testosterone clowns on that show#Rizzoli and Isles#Sasha Alexander
UPDATE, March 2024: Lengoo just filed for bankruptcy. [Source in German]
Awwwww.
With AI translation service that rivals professionals, Lengoo attracts new $20M round https://ift.tt/3aO25tn
#AI#machine translation#XL8#me so saaaaad#their poorly organized workflow and low rates looked so promising!#(but genuinely sad for all those who were struggling and forced to accept jobs from them)#(hope you'll still get the money you're owed)
Ah, the irony of you ending on this fake anti-capitalism note – right after spouting the most capitalist bullshit that could be spouted in this context.
You see, this quote of yours, ...
Artists have always copied each other, and now programmers copy artists.
… this is you being a capitalism bootlicker par excellence. And also, this is you being rather clueless about both art in general and the technology behind GenAI models in particular.
The process of one human artist "copying" or getting inspired by another involves (among other things) two key components: feeling and time.
The "feeling" component fuels this whole process, whereas the "time" component adds a sense of urgency and value.
An artist doesn't just randomly pick the color palette of another artist, the brushwork of a second, and the preferred motif of a third, then mash it all together and ta-daah: new art!
Instead, when we draw inspiration from others while creating our own art, we "copy" those parts and styles of existing works that specifically speak to us, the parts that make us feel joy, sorrow, hope, despair, and everything in between. And the reason why we create our own art is because we want to share with the world how we feel. We're looking for a connection, for people who feel like us, for someone who gets us so we'll feel less alone.
A machine doesn't have any of those feelings. It doesn't understand the joy of devouring a strawberry sundae on a hot summer day, and it doesn't know the pain of seeing the light leave a loved one's eyes on a foggy November night. It doesn't need the company of another machine to feel whole. It simply doesn't feel.
And an AI art machine isn't constrained by time either. You could "freeze" its code at any particular moment, copy it to another set of hardware, and let it continue its work as if nothing happened. You could also copy this code to a hundred additional sets of hardware, and then you have a hundred machines performing the same work.
You can't do that with a human artist.
It takes time to learn a craft. It takes time to perfect it. It takes time to teach everything you know to those following in your footsteps so they can continue your work. And when you run out of time, you can't just take a snapshot of your brain, implant it into another body, and then live and create for another decade. That's why an original Warhol sells for $195 million, while a poster of the same piece printed in bulk sells for $19.50.
Feeling and time – that's what defines human art.
When "programmers copy artists", as you so ignorantly put it, none of the above process happens. In fact, those "programmers" don't really copy any artists at all. And they shouldn't even be called "programmers", because a key differentiator of GenAI is that it does not need a program in the traditional sense anymore. This is not how GenAI works.
In very, very simple terms, GenAI is based on a bunch of math and a shitton of hardware and data. When a model is being trained, it doesn't look at a Picasso and say "oh, that's an oddly interesting way to draw human faces!" Instead, the artwork gets analyzed pixel by pixel: first to detect very simple shapes and edges, then certain combinations of shapes (e.g. those that look like an eye), then even more complex shapes (e.g. two eyes, a nose, and a mouth that make up a face), and then ever more complex shapes and relationships – until the model has learned a specific combination of features that is often found in Picasso's art. Then you can let the model analyze an image it hasn't seen before, and it will tell you with a certain degree of confidence whether or not that is a Picasso, too (i.e., whether the image contains the patterns typically found in a Picasso). And that's also how you can let AI generate a picture that looks like a Picasso.
This whole process is just a lot of math (and computing power). There is no creativity. No feeling. No artist getting inspired by other artists while trying to express something meaningful about the world we live in.
Once the model has learned the typical patterns found in a Picasso, it can share this knowledge with another model – essentially in an instant – and then the second model can identify and generate fake Picassos for you with the same degree of confidence. It doesn't even need to learn anything itself anymore. [Again, this is a very simplified explanation.]
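To make the "it's just math" point concrete, here's a deliberately tiny toy sketch (nothing like a production model, and the numbers are hand-picked for illustration): the "detector" for a very simple shape is just a small grid of numbers slid across the pixels, and "sharing knowledge" between models is nothing more than copying those numbers over.

```python
import numpy as np

# A hand-crafted 3x3 filter that responds strongly to vertical edges --
# one of the "very simple shapes" detected in the earliest layer.
# (In a real model these numbers would be learned, not hand-picked.)
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]])

def convolve(image, kernel):
    """Slide the 3x3 kernel over the image and record how strongly each
    patch matches it -- a crude 'feature map'."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(image[y:y+3, x:x+3] * kernel)
    return out

# A tiny 6x6 "image": bright left half, dark right half -> one vertical edge.
image = np.zeros((6, 6))
image[:, :3] = 1.0

features = convolve(image, vertical_edge)
# The feature map peaks where the edge sits; everywhere else it's flat.

# "Sharing knowledge" with a second model is just copying numbers --
# no feeling, no practice, no time. The copy is instantly as capable.
second_model_kernel = vertical_edge.copy()
```

No creativity is involved at any step: the "knowledge" is a grid of numbers, and transferring it is a copy operation.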
And this is what makes every capitalist's pulse soar: a "worker" that doesn't require costly training or years of experience, doesn't need sleep or breaks, won't get less productive with age, and could be cloned quite easily at low cost when the business needs to be scaled to meet growing demand.
"Art" generated by AI is a highly commodified business process. The only thing this kind of AI really creates is more money in a small number of capitalists' bank accounts. Because, as mentioned, it takes a shitton of hardware and infrastructure to train the foundational models. That's why the usual suspects (Google/Alphabet, Microsoft, etc.) dominate the field and can generate revenue every time some downstream developer uses the respective model for whatever AI application they want to build (from chatbots to image/video generators and so much more).
This also means (for now) that we should have a good laugh and walk away whenever someone argues along the lines of "but these new AI tools make art accessible to everyone; they remove the gatekeepers; now everybody can create whatever and whenever they want!" It could be argued that the opposite is true.
Now there are even more powerful gatekeepers. Filthy-rich millionaires and billionaires whose mass production tools make it even harder for artists to earn a living with their craft. And they have an army of bootlickers who run around spouting bullshit like "artists have always copied each other, and now programmers copy artists." Bootlickers who engage in victim-blaming and casually recommend that "you should unionize and demand that your labor is compensated fairly". (Gee, thanks, this has never been tried before and will surely be an easy solution.) Bootlickers who put the onus on those who can barely pay their bills instead of being mad at the rich assholes who exploit them. Bootlickers who pseudo-intelligently proclaim that "this is not a new phenomenon" while acting in a way that upholds the status quo.
The true grift here is not Glaze (regardless of its usefulness or lack thereof) – it's the "GenAI art is art" con that you have fallen for.
Since you're such a fan of AI, let's ask ChatGPT for advice:
You've already ticked all the boxes for sentence #1 and #2. Wake up before you'll become a case study for sentence #3.
the darling Glaze “anti-ai” watermarking system is a grift that stole code/violated GPL license (that the creator admits to). It uses the same exact technology as Stable Diffusion. It’s not going to protect you from LORAs (smaller models that imitate a certain style, character, or concept)
An invisible watermark is never going to work. “De-glazing” training images is as easy as running it through a denoising upscaler. If someone really wanted to make a LORA of your art, Glaze and Nightshade are not going to stop them.
If you really want to protect your art from being used as positive training data, use a proper, obnoxious watermark, with your username/website, with “do not use” plastered everywhere. Then, at the very least, it’ll be used as a negative training image instead (telling the model “don’t imitate this”).
There is never a guarantee your art hasn’t been scraped and used to train a model. Training sets aren’t commonly public. Once you share your art online, you don’t know every person who has seen it, saved it, or drawn inspiration from it. Similarly, you can’t name every influence and inspiration that has affected your art.
I suggest that anti-AI art people get used to the fact that sharing art means letting go of the fear of being copied. Nothing is truly original. Artists have always copied each other, and now programmers copy artists.
Capitalists, meanwhile, are excited that they can pay less for “less labor”. Automation and technology is an excuse to undermine and cheapen human labor—if you work in the entertainment industry, it’s adopt AI, quicken your workflow, or lose your job because you’re less productive. This is not a new phenomenon.
You should be mad at management. You should unionize and demand that your labor is compensated fairly.
While we're at it…
Duolingo has never been a platform that paid translators fairly. In fact, right from the start, they have shown zero respect for the translation industry and for the skills required to be a good translator. Their initial business model involved a crowdsourced translation service where they'd let language learners translate texts and then sell those translations to their clients (e.g., see this 2015 TechCrunch article).
Furthermore, the growth of Duolingo as a language-learning platform was only possible due to lots of naive language nerds who worked for free and helped create all those exercises through the company's Volunteer Contribution Program. That was a classic techbro asshole move on Duolingo's part: appeal to the good intentions of people with a passion for the subject/product, keep them motivated with lots of shallow talk about how their work will contribute to a brighter future where people have free access to knowledge and education, blablabla – and while all those well-meaning dummies worked for free, the people behind Duolingo were cashing in the big checks, getting investors involved, and planning the further commercialization of their product.
And make no mistake: The grand gestures they're now making, such as honoring those volunteers with fancy awards and VIP access to special events, and even the promised belated financial compensation to be distributed among all volunteers, are just tiny drops in the bucket that won't hurt the company at all. And they won't undo all the exploitation that has been going on there for years. The partial switch to AI is just another unsurprising move in a long tradition of similarly profit-driven moves.
But they're not the only jerks out there doing this. Two more examples: Facebook (of course, eyeroll) and TED Talks. The latter's subtitles are also created by volunteers. And not only that: Many of those volunteers are actual translators (often with proper training and all) at the beginning of their career who unfortunately think that's a great way to build their portfolio and get some of that awesome exposure. But it is not. It's just a shitty way of helping all those rich tech companies get richer and further devaluing the translation profession. (If you happen to be a newbie translator reading this and looking to build your portfolio, do pro bono translations for people and organizations who really need your help!)
Obviously, this scheme is found in other fields as well, with people in creative industries being particularly vulnerable and gullible. Whenever someone promises your work will serve a greater good or provide you with career-boosting exposure, take a deep breath and then a close look at what kind of business or product you're about to support with your free labor. 9 times out of 10 you should be asking for real compensation.
And if you're a user of such products, 9 times out of 10 you should stop using them (and you should definitely stop paying for them).
But of course, life is complicated. Even if you should stop doing something, it's not always possible. Or at least not right away. (For example, I still use Facebook because of some non-public groups only found there.) But there's something you certainly can and must do: Pay more attention. Find out where your money is going, or, in the case of free services: Who will get your data, and how can they profit from it? Who's getting paid, and who's not getting paid? What's the history of the company or product? And do you really need it?
At least in some cases you will realize that you don't need the service or product at all.
AI in the form of large language models (LLMs) and generative algorithms used for tools like ChatGPT, Midjourney etc. increases this dilemma because it makes it even more difficult to find and use products whose creation didn't involve a lot of people getting exploited. And this will be the case for at least a while until the techbro hype has died down and people will learn to appreciate the value (well-paid) humans bring to a product or service. Lots of companies are currently trying to cut costs and corners by integrating these new AI models into their workflows. For some, it has already backfired (just ask the law firm Levidow, Levidow & Oberman about their little ChatGPT whoopsie); others will still learn this lesson the hard way. And of course, things will get reaaally fun when there's so much AI-generated content that the models will start ingesting too much of it, thereby poisoning themselves. Grab your popcorn, folks!
On a more serious note though: AI itself isn't the problem. It's an umbrella term that comprises a multitude of different methods and strategies, some of which are extremely useful (for example, in early-stage cancer detection). And there are many people, companies, and organizations that try to integrate AI into their workflows in a careful, cautious manner. You're already using lots of things in your daily life that wouldn't be possible without AI. Even your use of Tumblr is likely enabled by AI because fast internet requires smart routing of all that data traffic.
So from now on, when you look behind the scenes of how a tool or service gets provided, the mere fact that some AI is involved shouldn't be a disqualifier. You need to dig deeper. What kind of AI? What purpose is it used for? Does it actually help humans work smarter or does it force them to work harder?
There are problematic people on both ends of the spectrum: techbro bootlickers praising our AI overlords on one end, and uninformed Luddites waving "boycott AI" signs on the other. But a solution and a way out of this mess can only be found somewhere in the middle.
(Much more could be said about the use of AI/MT in the translation industry, how it's currently evolving, and how often people (unknowingly) support the exploitation of translators... Maybe in a future post...)
Heads up: don't use Duolingo, or stop using it if you already do
In December 2023 they laid off a huge percentage of their translation teams, replacing them with AI and having the remaining members review the AI translations to make sure they are "acceptable" (note how they use the word acceptable and not accurate).
Link to the tweet that informed me of this:
https://x.com/rahll/status/1744234385891594380?s=46&t=a5vK0RLlkgqk-CTqc0Gvvw
If you’re a current user, prepare for an uptick in translation errors; I’ve already seen people in the comments saying they’ve noticed.
#Duolingo#AI#XL8#special shoutout to efka-m#an unfinished email to you has been sitting in my drafts for ages#I promise I'll finish it some day#(maybe I should ask ChatGPT to help me? :-D)
44K notes
Link
In case some of you Rizzles folks are stuck at home bored with nothing else to do... I gave in to an itch, and the result is a little fic about Rizzoli & Isles in the times of the coronavirus:
R&I - 14 Days, 14 Nights, and 42 Rolls of TP
When one of her colleagues tests positive for the coronavirus and Jane has to self-quarantine as a precaution, Maura's house seems like the perfect place to hole up for a while. But as "social distancing" has never been part of her family's vocabulary, Jane needs to find a way to keep not just the virus but also the other Rizzolis at bay.
[Read complete fic at fanfiction.net...]
#apparently it only took a little pandemic to make me write fanfic again#why did no one stop me?#Rizzoli and Isles#Rizzles#shameless self plug#fanfic
28 notes
Text
“Who knows what would've happened here. I probably would've worked in a factory. Managed a factory. You might've--... Maybe we would've met. On a bus...”
#there's no excuse for not watching this show#(except maybe if you're a Russian spy and have other priorities...)#The Americans#Elizabeth Jennings#Philip Jennings#Keri Russell#Matthew Rhys#TV recs
97 notes
Photo
Liz: You seem calm. Why is that?
Diane: I have no choice.
Liz: That’s not true. You can panic. You can scream. You can throw something.
Diane: Yeah. To what end?
Liz: Breaking something.
Diane: The world has gone insane, Liz. The news is satire—it’s not real. The people blowing up grizzly bears have been put in charge of grizzly bears. So I’ve decided the only way to stay sane is to focus on my little corner of it.… If I make my little corner of the world sane, then I won’t let the insanity win. That’s what I’ve learned.
Liz: It’s one thing to know it. It’s another thing to do it.
Diane: But I have to start somewhere. Why not today?
#if you're not watching this show we won't be friends#The Good Fight#TV recs#Diane Lockhart#Liz Lawrence#Christine Baranski#Audra McDonald
19 notes
Photo
The Leftovers :: Pilot
152 notes
Photo
#yup#with a special nod to all those selfish idiots who constantly get a new smartphone just because their contract offers that option#you deserve to live at a landfill for a week#environmentalism
70K notes
Photo
- The Handmaid’s Tale, Margaret Atwood
#finally got around to reading the book#I rarely highlight things in fiction books#but I wanted to color that whole page in bright warning red#The Handmaid's Tale#Margaret Atwood#prophetic stuff#book recs
2K notes
Photo
Yup. And ironically, all that bitching and moaning keeps those allegedly awful shows/people at the center of attention, while other shows/people that deliver quality work get buried and don’t find the audience they deserve. Hate-watchers are quite skilled at shooting themselves (and everybody else) in the foot.
#me when i see people talk about how much they hate a character, celeb, show, ship or whatever everyfreakingday?!? #instead of just blacklisting or staying away from it #and just focusing on what they actually enjoy
853 notes
Photo
GET TO KNOW ME MEME - Opening Credits
↳ The Americans, 2013-Present [5/7]
#if you're not watching this show they might as well put you into a Siberian labor camp for all I care#The Americans#Joseph Weisberg#Joel Fields#Keri Russell#Matthew Rhys#TV recs
510 notes