#detect AI-written content
Text
Uncover the Truth with Desklib’s AI Content Detector
In the rapidly evolving digital landscape, ensuring the authenticity of written content is more important than ever. Whether you're a student, educator, content creator, or professional, being able to distinguish between human-written and AI-generated content has become a necessity. That's where Desklib’s advanced AI Content Detector comes into play.
Why Choose Desklib’s AI Content Detection Tool?
Desklib’s AI Content Detector is designed to help you verify originality and maintain credibility in any form of written material. Here’s why it stands out:
1. Accurate AI Content Detection
Our tool uses cutting-edge technology to identify AI-written content with precision. It can detect text generated by popular AI writing tools, ensuring that your work remains authentic and free from potential plagiarism.
2. Comprehensive Plagiarism Checker
Desklib’s AI-powered plagiarism checker goes beyond traditional methods. It not only checks for duplicate content across the web but also detects AI-generated text patterns, providing you with an in-depth originality report.
3. User-Friendly Interface
With an intuitive and easy-to-navigate interface, Desklib’s AI detection tool is accessible to everyone. Upload your document, and within seconds, get a detailed analysis of your content.
4. Trusted by Professionals
Whether you’re a teacher evaluating student assignments or a content creator ensuring the originality of your blog posts, Desklib’s tool is trusted by professionals worldwide to maintain the highest standards of authenticity.
5. AI Detection for All Content Types
From academic papers and essays to blogs, articles, and marketing copy, Desklib’s tool caters to a wide range of content formats. No matter your industry, we’ve got you covered.
Features That Set Us Apart
Real-Time Analysis: Get instant results with actionable insights.
Multi-Language Support: Detect AI content in multiple languages, expanding your global reach.
Secure and Confidential: Your data’s privacy is our priority. We never store or share your content.
Detailed Originality Reports: Receive comprehensive reports that highlight areas of concern, making it easier for you to make necessary edits.
How Does the AI Detection Tool Work?
Using Desklib’s AI Content Detector is simple:
Upload Your Document: Drag and drop your file or paste the text directly into our tool.
Run the Analysis: Click on the "Analyze" button to begin the detection process.
Review the Report: Within moments, view a detailed breakdown of AI-generated content, plagiarism matches, and originality scores.
Who Can Benefit from Desklib’s AI Content Detector?
Students and Educators: Ensure academic integrity by identifying AI-generated essays and assignments.
Writers and Editors: Maintain originality in blogs, articles, and other creative works.
Businesses and Marketers: Verify that your promotional content is unique and trustworthy.
Researchers: Ensure that academic papers meet originality standards for publication.
Enhance Your Content Authenticity Today
Don’t let AI-generated text compromise your credibility. Desklib’s AI Content Detector empowers you to identify and eliminate AI-written content, ensuring that your work stands out for its authenticity and originality.
Visit Desklib AI Content Detector now and experience the power of next-gen AI detection technology.

#AI content detection#detect AI-written content#AI plagiarism checker#AI-generated content checker#AI text detector
Text
We are focused 100% on creating stunning content with ChatGPT and then ensuring that it looks 100% human-written to the search engines.
#how to make ai content undetectable#how to make chatgpt content undetectable#how to make chatgpt text undetectable#how to make chatgpt look human written#how to bypass ai detection
Text
Let's Talk About Ir Abelas, Da'ean
As some of you may know, I am vehemently against the dishonest use of AI in fandom and creative spaces. It has been brought to my attention by many, many people (and something I myself have thought on many times) that there is a DreadRook fic that is super popular and confirmed to be written at least partially with AI. I have the texts to prove it was written (at least) with the help of the Grammarly Rewrite generative feature.
Before I go any further, let it be known that I was friends with this author; their use of rewrite features is something they told me and have told many other people with whom they have shared their fic. It is not, however, at the time of posting this, tagged or mentioned on their fic on AO3 in any capacity. I did in fact reach out to the author before making this post. They made absolutely no attempt to agree to state the use of Rewrite AI on their fic, nor to be honest or upfront (in my opinion) about the possibility of their fic being completely generative AI. They denied the use of generative AI as a whole, though they did confirm (once again) use of the rewrite feature on Grammarly.
That all said: I do not feel comfortable letting this lie; since I have been asked by many people to make this, this post is simply for awareness.
You can form your own opinion, if you wish to. In fact, I encourage you to do such.
Aside from the, once again, high-volume word output of around 352K words in less than 3 months from November until now (the author says they had 10 chapters pre-written over "about a month" before they began posting; they are also on record saying they can write 5K-10K daily), I have also said that if you are familiar with AI services or peruse AI sites like ChatGPT, C.AI, J.AI, or others similar to these, AI writing is very easy to pick out.
After some intense digging, research, and what I believe to be full confirmation via AI detection software used by professional publishers, there is a large and staggering possibility that the fic is almost entirely AI generated, bar some excerpts and paragraphs here and there. I will post links below to the highly resourced detection software that a few paragraphs and an entire chapter from this fic were plugged into; you are more than welcome to do with this information what you please.
I implore you to use critical thinking skills, and understand that when this many pieces in a work come back with such a high percentage of AI detected, that there is something going on. (There was a plethora of other AI detection softwares used that also corroborate these findings; I only find it useful to attach the most reputable source.)
Excerpts:
82% Likely Written by AI, 4% Plagiarism Match
98% Likely Written by AI, 2% Plagiarism Match
100% Likely Written by AI, 4% Plagiarism Match
Some excerpts do in fact come back as 100% likely written by human; however, this does not mean that the author was not using the Grammarly Paraphrase/Rewrite feature for these excerpts.
The Grammarly Paraphrase/Rewrite feature does not typically register as AI-generated text. Alongside the example below, many excerpts from other fics were taken, put through this feature, and then fed back into the AI detection software. Every single one came back looking like this, within 2% of these results:


So, in my opinion, and many others, this goes beyond the use of the simple paraphrase/rewrite feature on Grammarly.
Entire Chapter (Most Recent):
67% Likely Written by AI
As well, just for some variety, another detection software that also clocked plagiarism in the text:
15% Plagiarism Match
To make it clear that I am not simply 'jealous' of this author or 'angry' at their work simply for being popular in the fandom, here are some excerpts from other fanfics in this fandom and in others that were run through the exact same detection software, all coming back as 100% human written. (If you would like to run my fic through this software or any others, you are more than welcome to. I do not want to run the risk of OP post manipulation, so I did not include my own.)
The Wolf's Mantle
100% Likely Human Written, 2% Plagiarism Match
A Memory Called Desire
99% Likely Human Written
Brand Loyalty
100% Likely Human Written
Heart of The Sun
98% Likely Human Written
Whether you choose to use AI in your own fandom works is entirely at your own discretion. However, it is important to be transparent about such usage.
AI has many negative impacts for creatives across many mediums, including writers, artists, and voice actors.
If you use AI, it should be tagged as such, so that people who do not want to engage in AI works can avoid engaging with it if they wish to.
ALL LINKS AND PICTURES COURTESY OF: @spiritroses
#ai critical#ai#fandom critical#dreadrook#solrook#rooklas#solas x rook#rook x solas#ir abelas da'ean#ao3#ancient arlathan au#grammarly#chatgpt#originality ai#solas#solas dragon age#rook#da veilguard#veilguard#dragon age veilguard#dragon age#dav#da#dragon age fanfiction#fanfiction#as a full disclaimer: I WILL BE WILLING TO TAKE DOWN THIS POST SO LONG AS THE FIC ENDS UP TAGGED PROPERLY AS AN AI WORK#i tried to do exactly as y'all asked last time#so if y'all have a problem w this one idk what to tell you atp#and see????? we do know how to call out our own fandom#durgeapologist
Text
We Need to Talk About AI Detectors
Over the past few weeks, two posts in particular have come out against two authors in the community, resulting in a near witch hunt for one and essentially driving the other off the internet from all the bullying and hate they received. These posts related to the potential use of AI in fanfiction, and used supposed "AI Detectors" to support their claims. With the help of friends, we have been able to look into the AI claims that were made against both The Silence and The Song and Ir Abelas, Da'ean.
We were curious about how and why these posts were being flagged with high levels of "AI Probability" when the authors have been adamant (either in chats or in public) that they have never used generative AI for their work. So we did the most logical thing: put on our detective caps and rolled up our sleeves. We would like to note that we do not wish to have philosophical discussions; we wish to have transparency and honesty.
Spoilers: We found inaccuracies almost IMMEDIATELY.
Firstly, we looked into the weaknesses of AI detectors and read through online posts where people voiced their frustrations with them. The common denominator we noticed was that well-written articles were being flagged as "Likely AI", particularly by Originality, and that the suggested solution was either to "dumb them down" or to remove punctuation such as commas, which immediately improved the score, tipping the scale to "Likely Original".
For the second step, we ran some of our own works through the Originality AI checker, works written prior to the advent of generative AI. Some came back with around a 50% likelihood of AI; after punctuation was removed, this magically changed to 100% original work. Again, these works predate generative AI and therefore could not have been created by it. For fun, we even ran the first chapter of Harry Potter through it, a novel that is objectively without AI, and it still did not come out as 100% original work. We then removed almost all of the punctuation, and the originality score actually improved by 3% (from 95% to 98%).
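The punctuation-removal step used in these tests is trivial to reproduce. A minimal sketch in Python (the detector itself is a black box, and the sample sentence here is made up for illustration):

```python
import string

# Strip all ASCII punctuation from a passage so it can be re-run
# through a detector and the before/after scores compared.
def strip_punctuation(text: str) -> str:
    """Remove all ASCII punctuation characters."""
    return text.translate(str.maketrans("", "", string.punctuation))

sample = "It was a dark, stormy night; the detector, somehow, did not care."
cleaned = strip_punctuation(sample)
print(cleaned)  # It was a dark stormy night the detector somehow did not care
```

That a change this superficial flips a score from "Likely AI" to "Likely Original" is itself evidence of how shallow the underlying signal is.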
Personal fic, before and after:
Next, we ran our own scans through Originality and Quillbot. This includes full chapters of Ir Abelas, Da'ean, both with and without punctuation, in Originality, and excerpts in Quillbot. Interestingly, the excerpts in Quillbot pinged as "0% likely AI", and that is without any removal of punctuation. Across the board, removing punctuation from the chapters caused an immediate and dramatic swing in the score, from "100% likely AI" to "96% likely Original". We found that the more grammatically correct a work was, the more likely it was to be flagged as "AI", much like the freelance writers were complaining about.
Chapter 1 of Ir Abelas, before and after:
Chapter 1 through Quillbot:

Chapter 45 of Ir Abelas, before and after:
Chapter 45 without commas or double hyphens:
Chapter 45 through Quillbot:

Even the Ai detection websites caution against this:

To Durgeapologist, Fangbanger3000, and friends: If you do actually read this, I hope you realize that your posts have done more harm than good to the community. You are correct that AI is a potential threat to creative spaces, but you have gone about addressing it in the worst possible way.
By creating multiple posts across platforms with the intent of creating a negative perspective toward certain authors and their fictions rather than toward the use of AI as a whole—not to mention the counter-accusations with personal attacks rather than focusing on the issue at hand—you are creating an environment that fosters negativity, bullying, and division, none of which are directions to take a sustainable and healthy community.
AI detection is the Wild West right now. There is no way to determine whether something was written by AI through algorithms alone; it takes human intervention and careful comparison to an author's previous works to reach any real level of certainty. Our hope is that in the future, you will take time, step back, and consider all possible sides before causing a stir in the community like this.
#dragon age#solavellan#solrook#dreadrook#dragon age the veilguard#ai#ai critical#dragon age fanfiction#originality ai detector#quillbot#solas#lavellan#dragon age rook#organized bullying is not okay#be kind to one another
Text
I know I’m screaming into the void here but do not witch hunt people with AI accusations
As someone whose job for the last two years involved reading and rereading essays and creative fiction written by my students (a group of writers notorious for using AI despite being told not to, because they worry about their grades more than their skills), let me tell you straight up that detecting AI in any written work isn't straightforward
AI detection software is bullshit. Even Turnitin, which is supposedly the best, has an error rate that is slowly increasing over time. These tools are not reliable. The free ones online are even worse, trust me
“Oh but it’s so obvious!” Sure. If you’re trained to notice patterns and predictive repetitions in the language, sure. I can spot a ChatGPT student essay from a mile away. But only if they haven’t edited it themselves, or used a bunch of methods (Grammarly, other AIs, their friends, a “humanizer” software, etc) to obscure the ChatGPT patterns. And it’s easier with formulaic essays—with creative fiction it’s much harder.
Why?
Well because good creative fiction is a) difficult to write well and b) extremely subjective. ChatGPT does have notable patterns for creative writing. But it’s been trained on the writing that is immensely popular, writing that has been produced by humans. Purple prose, odd descriptions, sixteen paragraphs of setting where one or two could be fine, all of that is stylistic choices that people have intentionally made in their writing that ChatGPT is capable of predicting and producing.
What I’m saying is, people just write like that normally. There are stylistic things I do in my writing that other people swear up and down are AI indicators. But it’s just me writing words from my head
So can we, should we, start witch hunts over AI use in fanfic when we notice these patterns? My answer is no because that’s dangerous.
Listen. I hate AI. I hate the idea of someone stealing my work and feeding it into a machine that will then “improve itself” based on work I put my heart and soul into. If I notice what I think is AI in a work I’ve casually encountered online, I make a face and I stop reading. It’s as simple as that. I don’t drag their name out into the public to start a tomato throwing session because I don’t know their story (hell they might even be a bot) and because one accusation can suddenly become a deluge
Or a witch hunt, if you will
Because accusing one person of AI and starting a whole ass witch hunt is just begging people to start badly analyzing the content they’re reading out of fear that they’ve been duped. People don’t want to feel the sting or embarrassment of having been tricked. So they’ll start reading more closely. Too closely. They’ll start finding evidence that isn’t really evidence. “This phrase has been used three times in the last ten paragraphs. It must be AI.”
Or, it could be that I just don’t have enough words in my brain that day and didn’t notice the repetition when I was editing.
There’s a term you may be familiar with called a “false positive.” In science or medicine, it’s when something seems to have met the conditions you’re looking for, but in reality isn’t true or real or accurate. Like when you test for the flu and get a positive result when you didn’t have the flu. Or, in this case, when you notice someone writing sentences that appear suspiciously like a ChatGPT constructed sentence and go “oh, yes that must mean it’s ChatGPT then”
(This type of argumentation/conclusion also just uses a whole series of logical fallacies I won’t get into here except to say that if you want to have a civil conversation about AI use in fandom you cannot devolve into hasty generalizations based on bits and parts)
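The false-positive problem gets worse when genuine AI use is rare, which is the base-rate fallacy in action. A back-of-the-envelope calculation (every number below is an illustrative assumption, not a measured detector statistic) shows how a seemingly accurate detector still produces mostly false accusations:

```python
# Illustrative base-rate calculation. All figures are assumptions
# chosen for the example, not real measurements of any detector.
fics_total = 10_000
ai_share = 0.05             # assume 5% of fics actually involve AI
sensitivity = 0.90          # detector catches 90% of real AI text
false_positive_rate = 0.10  # and wrongly flags 10% of human text

ai_fics = fics_total * ai_share           # 500 genuinely AI-assisted
human_fics = fics_total - ai_fics         # 9,500 human-written

true_flags = ai_fics * sensitivity              # 450 correct flags
false_flags = human_fics * false_positive_rate  # 950 false accusations

share_wrong = false_flags / (true_flags + false_flags)
print(f"{share_wrong:.0%} of flagged fics are human-written")  # 68%
```

Under these assumptions, roughly two out of every three "AI detected" verdicts would land on a human author.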
I’m not saying this to protect the people using AI. In an ideal world, people would stop using it and return back to the hard work of making art and literature and so on. But we don’t live in that world right now, and AI is prevalent everywhere. Which means we have to be careful with our accusations and any “evidence” we think we see.
And if we do find AI in fandom spaces, we must be careful with how we handle or approach that, otherwise we will start accusing writers who have never touched AI a day in their life of having used it. We will create a culture of fear around writing and creating that stops creatives from making anything at all. People will become too scared to share their work out of fear they’ll be accused of AI and run off.
I don’t have solutions except to say that in my experience, outright accusing people of AI tends to create an environment of mistrust that isn’t productive for creatives or fans/readers. If you start looking for AI evidence everywhere, you will find it everywhere. Next thing you know, you’re miserable because you feel like you can’t read or enjoy anything.
If you notice what you think is AI in a work, clock it, maybe start a discussion about it, but keep that conversation open to multiple answers or outcomes. You’re not going to stop people from using AI by pointing fingers at them. But you might be able to inspire them to try writing or creating for themselves if you keep the conversation open, friendly, and encourage them to try creating for themselves, without the help of AI
Text
Hello, everyone!
First off, I’m sorry for even having to post this, and I’m usually nice to everyone I come into contact with, but I received a startling comment on my newest fic, Paint-Stained Hands and Paper Hearts, where I was accused of pumping out the entire chapter solely using AI.
I am thirty-two years old and have been attending university since I was 18 YEARS OLD. I am currently working on obtaining my PhD in English Literature as well as a Master's in Creative Writing. So, there's that.
There is an increasing trend of online witch hunts targeting writers on all platforms (FanFiction.net, AO3, Wattpad, etc.), where people accuse them of utilizing AI tools like ChatGPT based solely on their writing style or prose. These accusations often come without concrete evidence and rely on AI detection tools, which are known to be HELLA unreliable. This has led to false accusations against authors who have developed a particular writing style that AI models may emulate due to the vast fucking amount of human-written literature that they've literally had dumped into them. Some of these people are friends of mine, some of whom are well-known in the AO3 writing community, and I received my first such comment this morning, and I'm pissed.
AI detection tools work by analyzing text for patterns, probabilities, and structures that resemble AI-generated outputs. HOWEVER, because AI models like ChatGPT are trained on extensive datasets that include CENTURIES of literature, modern writing guides, and user-generated content, they inevitably produce text that can mimic various styles — both contemporary and historical. Followin’ me?
To dumb this down a bit, it means that AI detection tools are often UNABLE TO DISTINGUISH between human and AI writing with absolute certainty.
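To see what "analyzing text for patterns and probabilities" means in practice, here is a deliberately toy sketch. Real detectors score text with large language models, but the core idea is the same: text whose words are highly predictable under the model scores as "AI-like". The reference corpus, smoothing value, and sample sentences below are all made up for illustration:

```python
import math
from collections import Counter

# Toy "predictability" scorer: a stand-in for the probability
# models real detectors use. Corpus and samples are invented.
reference = ("the cat sat on the mat the dog sat on the rug "
             "the cat and the dog sat together").split()
counts = Counter(reference)
total = sum(counts.values())

def avg_surprise(text: str) -> float:
    """Mean negative log-probability per word; lower = more predictable.
    Unseen words get a small smoothed count of 0.5."""
    words = text.lower().split()
    return sum(-math.log(counts.get(w, 0.5) / total) for w in words) / len(words)

predictable = "the cat sat on the mat"
unusual = "quantum marmalade negotiates downhill"
print(avg_surprise(predictable) < avg_surprise(unusual))  # True
```

The catch, as the post above argues, is that polished human prose is also highly predictable to a model trained on human prose, which is exactly why the scores cannot cleanly separate the two.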
Furthermore, tests have shown that classic literary works, like those written by Mary Shelley, Jane Austen, William Shakespeare, and Charles Dickens, frequently trigger AI detectors as being 100% AI generated or plagiarized. For example:
Mary Shelley’s Frankenstein has been flagged as AI-generated because its formal, structured prose aligns with common AI patterns.
Jane Austen’s novels, particularly Pride and Prejudice, often receive high AI probability scores due to their precise grammar, rhythmic sentence structures, and commonly used words in large language models.
Shakespeare’s works sometimes trigger AI detectors given that his poetic and structured style aligns with common AI-generated poetic forms.
Gabriel García Márquez’s Love in the Time of Cholera and One Hundred Years of Solitude trigger 100% AI-generated results due to their flowing sentences, rich descriptions, and poetic prose, which AI models often mimic when generating literary or philosophical text.
Fritz Leiber’s Fafhrd and the Gray Mouser stories, with their sharp, structured, rhythmic prose, imaginative worldbuilding, literary elegance, and dialogue-driven narratives, often trigger 100% on AI detectors.
The Gettysburg fucking Address by Abraham Lincoln has ALSO been misclassified as AI, demonstrating how formal, structured language confuses these detectors.
These false positives reveal a critical flaw in AI detection: because AI has been trained on so much human writing, it is nearly impossible for these tools to completely separate original human work from AI-generated text. This becomes more problematic when accusations are directed at contemporary authors simply because their writing ‘feels’ like AI despite being fully human.
The rise in these accusations poses a significant threat to both emerging and established writers. Many writers have unique styles that might align with AI-generated patterns, especially if they follow conventional grammar, use structured prose, or have an academic or polished writing approach. Additionally, certain genres, such as sci-fi, fantasy, or philosophical essays, often produce high AI probability scores due to their abstract and complex language.
For many writers, their work is a reflection of years—often decades—of dedication, practice, and personal growth. To have their efforts invalidated or questioned simply because their writing is mistaken for AI-generated text is fucking disgusting.
This kind of shit makes people afraid of writing, especially those who are just starting their careers / navigating the early stages of publication. The fear of being accused of plagiarism, or of relying on AI for their creativity is anxiety-inducing and can tank someone’s self esteem. It can even stop some from continuing to write altogether, as the pressure to prove their authenticity becomes overwhelming.
For writers who have poured their hearts into their work, the idea that their prose could be mistaken for something that came from a machine is fucking frustrating. Second-guessing your own style, wondering if you need to change how you write or dumb it down in order to avoid being falsely flagged—this fear of being seen as inauthentic can stifle their creative process, leaving them hesitant to share their work or even finish projects they've started. This makes ME want to stop, and I’m just trying to live my life, and write about things I enjoy. So, fuck you very much for that.
Writing is often a deeply personal endeavor, and for many, it's a way to express thoughts, emotions, and experiences that are difficult to put into words. When those expressions are wrongly branded as artificial, it undermines not just the quality of their work but the value of their creative expression.
Consider writing habits, drafts, and personal writing history rather than immediate and unfounded accusations before you decide to piss in someone’s coffee.
So, whatever. Read my fics, don’t read my fics. I just write for FUN, and to SHARE with all of you.
Sorry that my writing is too clinical for you, ig.
I put different literary works as well as my own into an AI Detector. Here you go.
#arcane#ao3 fanfic#arcane fanfic#ao3#ao3 writer#writers on tumblr#writing#wattpad#fanfiction#arcane fanfiction
Text
Reposting this for the anon who is clearly too obsessed and doesn't have a life outside of Tumblr. + Added a new statement too.
I deleted this post because I was under the impression the anon had already seen it—since they love to stalk my blog very in-depth. Luckily, I had written it on Google Docs, like I do with everything I post here, before posting it the first time. And now I’m posting it again because apparently, they didn’t get the memo and love to create fake accounts:
For the anon who’s too cowardly to use their real account and clearly doesn’t have a life:
I was going to ignore the first ask, but then you had the time, energy, and weird obsession to create a fake account just to send me another ask—and then a private message. So let me be clear:
This is the first and last time I address this. Any further messages or asks about this will be deleted and blocked immediately. Tumblr is my safe space—stress and drama free—and I will block anyone who disturbs that for me. You really came onto my blog and did what—threatened me? You ran my writing through an unreliable AI checker and then had the audacity to message me about it? Do you really feel like it's your place to question how people write fanfiction? Why do you feel so entitled to an explanation from someone you don’t even know? To quote you: “DM me and explain why” — WHO are you? And where is this entitlement coming from?



Let me ask you this: Do you not have a life outside of Tumblr? Who takes time out of their day to check if what a stranger posted is “AI” or not? I saw another account getting the same kind of asks recently—was that you too? Are you going blog to blog checking F1 fics like a fanfic detective? If so: get a life, get a job, get a hobby, or better yet—touch grass.
And the audacity to make a fake account just to send another message? Coward behavior. I’ve blocked the first anon ask and now your little fake blog too. I’ll keep blocking every single one if you continue harassing me.
Don’t like what I post? Scroll past it. Block me. Ignore me. I truly do not care. I use Google Docs for all my fics—outline ideas, drafts, and request order. Since that seems hard to believe, here’s one example straight from my docs.
And since you clearly have free time, here are actual credible sources that prove AI checkers are not reliable and should never be used as evidence of anything:
Source



Source
Source
Source
This is especially relevant to me personally, because English is not my native language. I've studied it for over 15 years, I'm currently studying English at university, and I don't live in an English-speaking country. I didn't grow up in an English-speaking country, and I've worked hard to develop my vocabulary, grammar, and writing style. So if my writing sounds "too repetitive" or "too perfect to be written by a human" and gets flagged by some AI detector—that's not proof I used AI. It means I've worked hard to get to this level, even though my English might not always be perfect.
Source
AI that claims to create undetectable AI content or "human AI"
Or maybe you want to read more on Google Scholar:
There are so many sources to inform yourself—you just need to know how to use them.
And this is what really gets me: someone could use AI, lightly edit the output, or run it through one of those "humanize AI" generators and pass every detector with flying colors. Meanwhile, people like me get flagged and questioned for no reason.
Also, if I were actually using AI, I would've used one of those humanizing tools too—so people like you wouldn't harass me over what I post.
These days, it seems you don't even need facts—just a fake account and a superiority complex.
That's all I had to say. Goodbye, and good luck finding a personality.
April 7
A few days after I posted the above post, you went on someone’s blog — someone who had sent me an ask without using the anon option — and sent them an ask about me, as if I had committed a crime. Less than 24 hours ago, you created yet another fake account just to message me (as seen below) and tell me about one of your other accounts (also fake), despite my explicit statement that I would no longer entertain this obsessive behavior.

Let me be extremely clear: I do not owe strangers on the internet an explanation for my writing process — especially not those who appoint themselves as investigators and issue condescending ultimatums. I will not “contact you privately.” I will not “own up” to a false narrative you've built around flawed tools and obsessive pattern-tracking. You do not get to demand private confessions like you're running a tribunal.
I already said everything I had to say when I made that original post, but clearly it didn’t register, and you continue to target me. I looked at the account you mentioned in your message. To quote: “Some members of the group of us working on this project have gone through PhD programs or work in education and understand the inaccuracies and limitations of AI detection tools.”
So you're adults — or so you claim — with PhDs, yet you seem to be unemployed based on the amount of free time you have to analyze what strangers are posting on the internet. Especially posts that are over 2k words long.
Seriously, who has time to do this much? Because I highly doubt someone with an actual job and a life has this much time on their hands.
And as I said in my first post: block me if you don’t like my blog or what I post. It is really that simple.
LEAVE. ME. ALONE.
Text
I saw a post the other day calling criticism of generative AI a moral panic. While I do think many proprietary AI technologies are being used in deeply unethical ways, I think there is a substantial body of reporting and research on the real-world impacts of the AI boom that would trouble the comparison to a moral panic: while there *are* older cultural fears tied to negative reactions to the perceived newness of AI, many of those warnings are Luddite with a capital L - that is, they're part of a tradition of materialist critique focused on the way the technology is being deployed in the political economy.
So: (1) starting with the acknowledgement that a variety of machine-learning technologies were being used by researchers before the current "AI" hype cycle, and that there's evidence for the benefit of targeted use of AI techs in settings where they can be used by trained readers - say, spotting patterns in radiology scans; (2) setting aside the fact that current proprietary LLMs in particular are largely bullshit machines, in that they confidently generate errors, incorrect citations, and falsehoods in ways humans may be less likely to detect than conventional disinformation; and (3) setting aside as well the potential impact of frequent offloading on human cognition and of widespread AI slop on our understanding of human creativity...
What are some of the material effects of the "AI" boom?
Guzzling water and electricity
The data centers needed to support AI technologies require large quantities of water to cool the processors. A forthcoming paper from the University of California, Riverside and the University of Texas at Arlington finds, for example, that "ChatGPT needs to 'drink' [the equivalent of] a 500 ml bottle of water for a simple conversation of roughly 20-50 questions and answers." Many of these data centers pull water from already water-stressed areas, and the processing needs of big tech companies are expanding rapidly. Microsoft alone increased its water consumption from 4,196,461 cubic meters in 2020 to 7,843,744 cubic meters in 2023. AI applications are also 100 to 1,000 times more computationally intensive than regular search functions, and as a result the electricity needs of data centers are overwhelming local power grids, and many tech giants are abandoning or delaying their plans to become carbon neutral. Google’s greenhouse gas emissions alone have increased at least 48% since 2019. And a recent analysis from The Guardian suggests the actual AI-related increase in resource use by big tech companies may be up to 662%, or 7.62 times, higher than they've officially reported.
Exploiting labor to create its datasets
Like so many other forms of "automation," generative AI technologies actually require loads of human labor to do things like tag millions of images to train computer vision for ImageNet and to filter the texts used to train LLMs to make them less racist, sexist, and homophobic. This work is deeply casualized, underpaid, and often psychologically harmful. It profits from and re-entrenches a stratified global labor market: many of the data workers used to maintain training sets are from the Global South, and one of the platforms used to buy their work is literally called the Mechanical Turk, owned by Amazon.
From an open letter written by content moderators and AI workers in Kenya to Biden: "US Big Tech companies are systemically abusing and exploiting African workers. In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern day slavery."
Deskilling labor and demoralizing workers
The companies, hospitals, production studios, and academic institutions that have signed contracts with providers of proprietary AI have used those technologies to erode labor protections and worsen working conditions for their employees. Even when AI is not used directly to replace human workers, it is deployed as a tool for disciplining labor by deskilling the work humans perform: in other words, employers use AI tech to reduce the value of human labor (labor like grading student papers, providing customer service, consulting with patients, etc.) in order to enable the automation of previously skilled tasks. Deskilling makes it easier for companies and institutions to casualize and gigify what were previously more secure positions. It reduces pay and bargaining power for workers, forcing them into new gigs as adjuncts to its own technologies.
I can't say anything better than Tressie McMillan Cottom, so let me quote her recent piece at length: "A.I. may be a mid technology with limited use cases to justify its financial and environmental costs. But it is a stellar tool for demoralizing workers who can, in the blink of a digital eye, be categorized as waste. Whatever A.I. has the potential to become, in this political environment it is most powerful when it is aimed at demoralizing workers. This sort of mid tech would, in a perfect world, go the way of classroom TVs and MOOCs. It would find its niche, mildly reshape the way white-collar workers work and Americans would mostly forget about its promise to transform our lives. But we now live in a world where political might makes right. DOGE’s monthslong infomercial for A.I. reveals the difference that power can make to a mid technology. It does not have to be transformative to change how we live and work. In the wrong hands, mid tech is an antilabor hammer."
Enclosing knowledge production and destroying open access
OpenAI started as a non-profit, but it has now become one of the most aggressive for-profit companies in Silicon Valley. Alongside the new proprietary AIs developed by Google, Microsoft, Amazon, Meta, X, etc., OpenAI is extracting personal data and scraping copyrighted works to amass the data it needs to train its bots - even offering one-time payouts to authors to buy the rights to frack their work for AI grist - and then (or so it tells investors) it plans to sell the products back at a profit. As many critics have pointed out, proprietary AI thus works on a model of political economy similar to the 15th-19th-century capitalist project of enclosing what was formerly "the commons," or public land, to turn it into private property for the bourgeois class, who then owned the means of agricultural and industrial production. "Open"AI is built on and requires access to collective knowledge and public archives to run, but its promise to investors (the one it uses to attract capital) is that it will enclose the profits generated from that knowledge for private gain.
AI companies hungry for good data to train their Large Language Models (LLMs) have also unleashed a new wave of bots that are stretching the digital infrastructure of open-access sites like Wikipedia, Project Gutenberg, and Internet Archive past capacity. As Eric Hellman writes in a recent blog post, these bots "use as many connections as you have room for. If you add capacity, they just ramp up their requests." In the process of scraping the intellectual commons, they're also trampling and trashing its benefits for truly public use.
Enriching tech oligarchs and fueling military imperialism
The names of many of the people and groups who get richer by generating speculative buzz for generative AI - Elon Musk, Mark Zuckerberg, Sam Altman, Larry Ellison - are familiar to the public because those people are currently using their wealth to purchase political influence and to win access to public resources. And it's looking increasingly likely that this political interference is motivated by the probability that the AI hype is a bubble - that the tech can never be made profitable or useful - and that tech oligarchs are hoping to keep it afloat as a speculation scheme through an infusion of public money - a.k.a. an AIG-style bailout.
In the meantime, these companies have found a growing interest from military buyers for their tech, as AI becomes a new front for "national security" imperialist growth wars. From an email written by Microsoft employee Ibtihal Aboussad, who interrupted Microsoft AI CEO Mustafa Suleyman at a live event to call him a war profiteer: "When I moved to AI Platform, I was excited to contribute to cutting-edge AI technology and its applications for the good of humanity: accessibility products, translation services, and tools to 'empower every human and organization to achieve more.' I was not informed that Microsoft would sell my work to the Israeli military and government, with the purpose of spying on and murdering journalists, doctors, aid workers, and entire civilian families. If I knew my work on transcription scenarios would help spy on and transcribe phone calls to better target Palestinians, I would not have joined this organization and contributed to genocide. I did not sign up to write code that violates human rights."
So here's a brief, non-exhaustive digest of some vectors for a critique of proprietary AI's role in the political economy. tl;dr: the first questions of material analysis are "who labors?" and "who profits/to whom does the value of that labor accrue?"
For further (and longer) reading, check out Justin Joque's Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism and Karen Hao's forthcoming Empire of AI.
22 notes
Text
a personal post/reflection on ai use
some of you reading this may have seen the message i received accusing me of using ai to write my work. i wanted to take a moment to talk about how it made me feel and, more importantly, the impact that accusations like this can have on writers in general.
I won’t lie: seeing that message in my inbox, being told that the stories i spend hours writing aren't “real”, that my effort and creativity don't belong to me, was really disheartening. then i had to defend my own writing against these accusations and that wasn’t exactly fun. and while i know i shouldn’t let it get to me, the truth is that it does, because I'm a real person!
It’s made me overthink everything i write. I already reread my fics multiple times before posting, checking for flow, consistency, and coherence, but now, i find myself second-guessing every sentence. Does this sound too robotic? Is my phrasing too formal or too stiff? Or maybe it’s not polished enough? Maybe it's too polished. What if i accidentally repeat a word or structure a sentence in a way that someone deems “ai-like”? Will i be accused of this again?
I want to be clear also that this isn’t about seeking sympathy. I just feel it's important to remind people that fanfic writers are real people with real emotions. We write because we love it, because we want to share stories for others to enjoy for free. And yet, there are people out there who treat “spotting ai” like some kind of witch hunt, who feel entitled to send accusations to complete strangers without any basis for it.
And I don't say this to be elitist, but for some context, I have a master’s degree in computer science. I work in tech every day. I specialize in machine learning. When I say there is no reliable way to tell whether a passage of text was written by ai or a human, i'm not just making shit up. ai detection tools are completely unreliable. they give false positives all the time, and they are, quite frankly, complete bullshit.
And I get that there are legitimate concerns about ai-generated work in creative spaces, especially when it comes to art, writing, and other forms of expression that people put their hearts into. I have taken ethics courses in ai for this reason. I understand why people are wary, and i’m not saying that those concerns aren’t valid. But this is exactly why we should be mindful of how we engage with content. If you don’t like something, if you suspect it was ai-generated and that bothers you, the best thing you can do is simply not engage. don’t read it, don’t share it, don’t support it.
But going out of your way to harass people, to send accusations without evidence, to act like you’re some kind of ai-detecting authority is not just absurd, but it’s harmful to real people because you will inevitably get it wrong!!!
At the end of the day, this is fan fiction. no one is paying for this. no one is being scammed. so why do people act like they need to police something that’s supposed to be fun, creative, and freely shared? if you love stories written by real people, support those writers. but please, stop making this space even more stressful for the people who are already here, giving their time and creativity to share something they love.
And if you still think making accusations about people using AI for their writing is the correct and virtuous thing to do, I invite you to read this online thread of freelance writers discussing the legitimate harm that has come to their livelihood due to the false positives of ai detection tools and false accusations.
22 notes
Text
I asked ai to tell me how to identify ai writing. This is what it gave me.
Identifying whether a story or piece of writing was generated by AI can be challenging, but there are certain signs and patterns to look for. Here are some methods that can help you determine if a story might be AI-generated:
1. Repetitive Phrasing or Ideas
AI models sometimes repeat phrases or concepts throughout the text, sometimes without any real need to. Look for:
Repeated sentences or ideas that don’t add to the progression of the story.
Redundant descriptions or overly detailed explanations of things that don’t move the plot forward.
2. Lack of Deep Characterization
AI-generated writing can struggle with developing rich, deep characters. While the story might be well-structured, characters often feel:
One-dimensional or lacking in emotional depth.
Motivations or actions that don’t feel organic, like they are just moving through the plot without real personal stakes.
3. Inconsistent Plot or Logic
AI can sometimes create stories that have plot holes or logic inconsistencies. For example:
Characters acting in ways that don’t align with their personalities or the established world.
Unexplained shifts in the narrative, like events happening without cause or consequence.
Abrupt changes in tone or setting without any clear transitions.
4. Too Perfect or Polished
AI writing can sometimes be unusually smooth or sterile. Look for:
Lack of imperfections like awkward phrasing, typos, or quirks that human writers often incorporate (unless the piece has been heavily edited).
Overly neutral or formal tone, especially if the writing is supposed to be casual or emotional.
5. Unusual Sentence Structure or Phrasing
AI sometimes produces strange sentence constructions or uses words in a way that doesn’t quite feel right. For instance:
Long, convoluted sentences with too many clauses.
Phrasing that sounds technically correct but doesn’t flow naturally for a human writer.
6. Overuse of Clichés or Generic Ideas
AI often relies on well-established patterns and may use clichés, especially in storytelling. Watch for:
Predictable or generic storylines with little originality.
Overuse of familiar tropes or archetypes without fresh or personal takes.
7. Lack of Emotional or Sensory Depth
AI-generated writing may struggle with conveying complex emotions or sensory experiences. For example:
Descriptions that feel flat or don't evoke strong imagery or feeling.
Emotional moments that seem shallow or don’t resonate, as AI often lacks the depth of human experience.
8. Over-Explanations
AI might over-explain things because it doesn’t always understand the nuances of human experience. Look for:
Excessive clarification of simple concepts.
Repeated explanations of emotions, actions, or events that should already be clear from the context.
9. Strange Dialogue
AI-generated dialogue can sound unnatural or stilted. Common signs include:
Conversations that feel too formal, disconnected, or robotic.
Characters talking in ways that don’t align with their age, background, or setting.
Repetitive dialogue patterns where characters say similar things too often.
10. Check for Metadata or Author Information
If you’re reading an online story, some platforms may indicate whether AI was used to generate it. Alternatively, some AI writers might leave identifiable traces in the metadata of the text.
If the text was published on a site with AI-generated content, that can also be a clue.
11. Look for Inconsistencies in Style or Tone
AI can struggle to maintain a consistent style or tone over a longer piece. For example:
Sudden shifts in style, such as a formal tone turning into a casual one without context.
Inconsistent use of language or vocabulary that feels out of place in the context of the story.
12. Run the Text Through AI Detection Tools
Several AI detection tools can analyze a piece of writing and determine if it was likely written by an AI. Some popular tools include:
GPTZero: Detects AI writing by analyzing patterns like perplexity and burstiness.
Turnitin: Known for plagiarism detection but also works with AI writing.
AI Writing Check: Another specialized tool to check for AI-generated content.
Conclusion
Identifying AI-generated writing is often about spotting patterns that differ from the natural flow of human creativity. If the writing feels off in terms of tone, style, depth, or logic, it might be worth investigating further.
Would you like to check a specific piece of writing for AI characteristics?
#thoughts?#books#writing#booklr#booklover#reading#writers#writing problems#writers on tumblr#writeblr#writers and poets#ai#anti ai#real writers#writerscommunity#books and reading
16 notes
Note
get rid of all AI content and have a filler image (“this image has been removed; we are working on a new image”) and a link to an official statement about the art and AI thing. Have an artist live stream or Q&A or something to highlight your artists and introduce your community to who will be working on the replacement images. Have this set of artists work as a sort of anti-AI art team and reinforce your stance on AI. Tell your community you will not be utilizing AI for art or writing and have an official anti-AI initiative. Everything written should be run through AI detection software and all art should be done by trusted artists who do not use AI. It should be the mission of the site and its team to convince us as the paying customer that AI will never be used again. There should not be events going on until all AI-generated material is replaced. A public apology should be issued and a timeframe of when all art will be done should be given to us. It has been several months and only a few characters have been replaced so far. If this were a pressing issue to CJ, it WOULD be getting done proper, but he doesn’t actually believe it’s as big of a deal as his paying customers do
☁️
12 notes
Note
think i found another ai fic... one chapter was 26% ai another 21% one chapter was "probably human written" but still... and it's written on anon...
i appreciate you so much samantha and all the work and effort and time you put into your amazing writing, you're amazing💖💖💖
I learned recently that some folks use chatgpt or the like to edit their fics. This is a terrible idea, but I do think it might contribute to some of the results we're seeing. It's both difficult and inaccurate to confirm ai generation when the ai detection result isn't paired with other factors like frequently posting high word counts, or dull monotone writing, or absolutely perfect grammar, etc. So people should definitely stop using chatgpt for spell checks.
Something else that might trigger a positive ai result is the use of tools like Grammarly and so on, which, as I've mentioned before, I have been using for years for spag. But they recently (?) introduced a generative ai element that rewrites content for you or generates a new sentence on the spot. This does, however, result in a positive ai detection because, well, the ai did it.
Do we stop using these types of tools now? I don't think that's necessary, and there are probably minimal checkers left that have no integrated ai at all. Most spag checkers, including Word, use some kind of non-generative ai to alert you to errors in a more evolved way than before. (Google Docs' spag checker just got stupider as it "evolved" btw. What an absolute dumpster fire.)
BUT be careful how you use it: don't let it reconstruct your work, don't let it automagically write or fix a sentence for you, and don't rely on it to produce flawless content; there is no such thing. Use your brain, ask a beta reader to assist you, research the things you don't know. Teach yourself to write better. Use the tool for its initial purpose--to check your spelling and grammar. The ai features can usually be switched off in settings. That being said, basic spag checks using these tools shouldn't equate to ai generation, but it will probably depend on the tool used to detect it.
I want to add that we definitely should not check every fic we're interested in reading for ai. I think that will make the fandom experience terrible and unenjoyable for everyone. Read it in good faith but keep an eye out for stuff like posting large amounts of words on a schedule that is not humanly possible, the writing style, the tone, other use of ai by the person etc. We've been reading fanfic for years, we know when something is off. Block if you suspect it's ai generated.
People who use ai to 'write' fics have no place in fandom spaces.
It's going to become increasingly difficult to detect these things though, since there is also a feature to "humanize" the ai slop 🤢 and I don't know what the way forward is but I do know it's not running every fic through an ai detector. They're not entirely accurate either. The only reason I resorted to an ai detector with that person I initially caught out, was because the tag was clogged with their constant posting and I knew there was no fucking way they were posting that much naturally. The detector just confirmed what I suspected anyway.
I read a fic recently by an Anon author and I thought it was so good and sexy. I really hope it's not the same person you're talking about. I'm not going back to check because my kudos and comment are already on there. I also doubt an ai can write such filthy, steaming smut 😂
And thank you, Anon, for your kind words. Truly appreciate it. 💕
12 notes
Text
Several of the most prominent alt-weekly newspapers in the United States are running search-engine-optimized listicles about porn performers, which appear to be AI-generated, alongside their editorial content.
If you pull up the homepage for the Village Voice on your phone, for example, you’ll see reporting from freelancers—longtime columnist Michael Musto still files occasionally—as well as archival work from big-name former writers such as Greg Tate, the Pulitzer Prize–winning music critic. You’ll also see a tab on its drop-down menu labeled “OnlyFans.” Clicking on it pulls up a catalog of listicles ranking different types of pornographic performers by demographic, from “Turkish” to “incest” to “granny.” These blog posts link out to hundreds of different OnlyFans accounts and are presented as editorial work, without labels indicating they are advertisements or sponsored.
Similar content appears on the websites of LA Weekly, which is owned by Street Media, the same parent company as the Village Voice, as well as the St. Louis–based alt-weekly the Riverfront Times. Although there is a chance some of these posts could be written by human freelancers, the writing bears markers of AI slop.
According to AI detection startup Reality Defender, which scanned a sampling of these posts, the content in the articles registers as having a “high probability” of containing AI-generated text. One scanned example, a Riverfront Times story titled “19 Best Free Asian OnlyFans Featuring OnlyFans Asian Free in 2024,” concludes with the following sentence, exemplary in its generic horny platitudes: “You explore, savor, and discover your next favorite addiction, and we’ll be back with more insane talent in the future!”
“We’re seeing an ever-increasing part of old media be reborn as AI-generated new media,” says Reality Defender cofounder and CTO Ali Shahriyari. “Unfortunately, this means way less informational and newsworthy content and more SEO-focused ‘slop’ that really just wastes people’s time and attention. Tracking these kinds of publications isn’t even part of our day to day, yet we’re seeing them pop up more and more.”
LA Weekly laid off or offered buyouts to the majority of its staff in March 2024, while the Riverfront Times laid off its entire staff in May 2024 after it was sold by parent company Big Lou Media to an unnamed buyer.
The Village Voice’s sole remaining editorial staffer, R.C. Baker, says he is not involved with the OnlyFans posts, although it appears on the site as editorial content. “I handle only news and cultural reporting out of New York City. I have nothing to do with OnlyFans. That content is handled by a separate team that is based, I believe, in LA,” he told WIRED.
Likewise, former LA Weekly editor in chief Darrick Rainey says he, too, had nothing to do with the OnlyFans listicles when he worked there. Neither did his colleagues in editorial. “We weren’t happy about it at all, and we were absolutely not involved in putting it up,” he says.
Former employees are disturbed to see their archival work comingling with SEO porn slop. “It’s wrenching in so many ways,” says former Riverfront Times writer Danny Wicentowski. “Like watching a loved home get devoured by vines, or left to rot.”
This is a new twist in the grim growing world of AI slop. WIRED has reported on a variety of defunct news and media outlets that have been resurrected by new owners and stuffed with AI-generated clickbait, from a small-town Iowa newspaper to the beloved feminist blog the Hairpin. In the case of the alt-weeklies and OnlyFans listicles, the clickbait is appearing alongside actual editorial content, both archival and new.
It is unclear how this effort has been coordinated between the sites, or whether there are several parallel efforts ongoing to produce OnlyFans-centric listicles. LA Weekly and the Village Voice are both owned by the same parent company, Street Media, and some of their OnlyFans content is identical. Meanwhile, the Riverfront Times publishes its OnlyFans blogs under the byline “RFT staff.”
Street Media owner Brian Calle did not respond to WIRED’s requests for comment. Chris Keating, the Riverfront Times’ former owner, says he is bound by a confidentiality agreement and cannot name the new buyer, but that he “does not believe” Calle is part of the purchasing company controlling the new Riverfront Times.
Daniela LaFave, an Austin-based SEO expert who is bylined on the majority of the Village Voice OnlyFans blog posts as well as some of the LA Weekly posts, confirmed to WIRED that she is the same person named as the author. She declined to answer whether she used AI tools to create the posts.
Another frequent byline on the Village Voice and LA Weekly posts, “Jasmine Ramer,” has published 910 articles primarily for these two outlets in the past year, according to the public relations platform Muck Rack. (Sample headlines: “Top OnlyFans Sluts 2024” and “Top 10 Finnish OnlyFans & Hottest Finnish OnlyFans 2023.”) There is a profile on LinkedIn listed as a senior staff writer at LA Weekly for an Austria-based woman named Jasmine Ramer, but there is little other digital footprint for the writer. When Reality Defender analyzed the profile photo on Ramer’s LinkedIn account, it found it was likely AI-generated. There is also at least one other account using the same photo claiming to be a digital marketing executive in the UK. (WIRED did not receive a response when it asked Ramer for comment via LinkedIn.)
OnlyFans is an online porn behemoth, one which has spawned numerous cottage industries, like professional proxy chatters who impersonate the platform’s stars. There are marketing agencies devoted to promoting OnlyFans creators, and many social platforms from Reddit to X are swarmed with bots trying to entice potential customers. These efforts are known as “OnlyFans funnels.”
Risqué sex ads have played a major role in the rise and fall of some alt-weeklies. The founders of Village Voice Media, which once owned the Village Voice, LA Weekly, and the Riverfront Times as well as other US-based alt-weeklies, created the classified website Backpage.com in 2004 to compete with Craigslist. It created a lucrative revenue stream, buoying many titles for years, but ginned up major controversy for hosting sex ads.
Vice President Kamala Harris, serving as California attorney general at the time, dubbed the company “the world’s top online brothel” in 2016 and arrested its founders and CEO for facilitating prostitution. With this recent history in mind, the decision to lean into sexual advertorial is especially brash.
It may be that these alt-weeklies are creating these blog posts in an effort to drum up web traffic to their sites, which could in turn help boost digital ad sales. They may also be accepting money from the accounts or from representatives of the accounts promoted, which would mean the posts were unlabeled advertorial. “Online ads, print ads, they all dried up,” Rainey says. “But this OnlyFans stuff is there.”
“OnlyFans has no financial arrangement with these outlets,” an OnlyFans spokesperson who identified herself only as “Brixie” told WIRED via email.
“I think the creators are paying,” says Luka Sek, SEO manager for an OnlyFans promotion company called SocialRise. “An agency that handles multiple models, or someone doing the marketing for such agencies.”
Whatever the reason, it marks a grim new pit stop for declining media publications, one in which blatant SEO bait sits side by side with culturally valuable archival journalistic work and, in the case of the Village Voice, ongoing contemporary reportage.
Tricia Romano, a former Village Voice writer who recently published an oral history of the newspaper, The Freaks Came Out to Write, sees the arrival of AI slop as keeping with the recent deterioration of alt-weeklies. “This is the logical dystopian conclusion,” she says. “But who’s reading it?”
30 notes
Text
Much of the internet is bots.
There are more bots every day. Every site that's in any way bot-trainable has bots scraping posts and chats.
We have AI. It can auto-recognise voices. It can mine text for information which it can categorise and file, but cannot process it or use to further its own ideas. It can compile and compartmentalise and label everything ever written, everything being spoken, all words in the world if we let it. But it can't think.
Neither can the bots on Reddit, Facebook, Twitter. They can write arguments. They can repeat arguments flawlessly. They can argue with themselves and you wouldn't know it. They are as advanced as ChatGPT.
It's 2010. Much of the internet is bots.
Reddit is a cesspit. I don't use reddit, except to find useful information. Many users are bots. There are more bots joining every day.
We have AI. It can auto-recognise voices. It can create AI-generated photos that are indistinguishable from humans most of the time. A human trained to detect AI-generated faces who had spent ten+ hours learning this skill could identify an AI-generated photo 90% of the time. But no one does this, and most of the tiny profile pictures are of dead people anyway.
On Reddit there are people piloting bots to generate targeted discourse. The bots analyse arguments and reply as if they were users. The good answers attract upvotes and downvotes and bots iterate accordingly. A New Zealand journalist notices they are being used for election interference and writes a book on it. We say 'That's terrible!'.
We do not read the book.
We interact with bots every day. We ignore most of them. We are used to these low-quality accounts by now. Because of how many there are, the bots often interact with themselves. It is like a cluster theory of bots, like colliding molecules in the air.
It's 2015. Much of the internet is bots. There are more bots every day.
Reddit is a cesspit. I have a reddit account for hobbies and history and tv shows and local content. I make posts there, and they get karma. There is a lot of noise. Posting comments in the big subs gets you more karma. I spend hours writing answers to ethical social dilemmas in Am I The Asshole?. I get better at ethical social dilemmas. I get better at writing answers.
It's 2022. Much of the internet is bots. Except for Gen Z, we use the internet how we've been using it since 2010. We play stupid games that take more of your money and time. We use the same social media sites. We post memes and vent and chat. We comment. We talk the same politics. We have the same politics. We are the same as we were a decade ago.
So are the social media sites.
We have AI. It can auto-recognise voices. It can mine text for information which it can categorise and file, but cannot process or use to further its own ideas. It can compile and compartmentalise and label everything ever written, everything being spoken, all words in the world if we let it.
It can't think. But it doesn't need to.
We are 'using' AI via ChatGPT, feeding instructions and questions and conversation into it. It is frequently wrong about easy-to-google answers, even though its conversation is perfect. We laugh at it.
We are worried that students will use ChatGPT to write university essays. It can do this easily and mostly unidentifiably with a mere modicum of human editing. Some students still do not manage to evade human detection. But many do. Even when the humans are looking for it.
Reddit is a cesspit. I help start a NZ politics sub. Reddit has bots so good it's impossible to fully distinguish between bots and humans, even when looking closely at profiles or trends.
I am banned from Reddit.
It's 2024. Much of the internet is bots.
Prometheus Gave the Gift of Fire to Mankind. We Can't Give it Back, nor Should We.
AI. Artificial intelligence. Large Language Models. Learning Algorithms. Deep Learning. Generative Algorithms. Neural Networks. This technology has many names, and has been a polarizing topic in numerous communities online. By my observation, a lot of the discussion is solely focused on either A) how to profit off it or B) how to get rid of it and/or protect yourself from it. To me, both of these perspectives apply a very narrow usage lens to something that's more than a get-rich-quick scheme or an evil plague to wipe from the earth.
This is going to be long, because as someone with a degree in psych and computer science, who has been a teacher and a writing tutor for my younger brother, and whose fiancé works in freelance data model training... I have a lot to say about this.
I'm going to address the profit angle first, because I feel most people in my orbit (and in related orbits) on Tumblr are going to agree with this: flat out, the way AI is being utilized by large corporations and tech startups -- scraping mass amounts of visual and written works without consent or compensation, replacing human professionals in roles from concept art to storyboarding to screenwriting to customer service and more -- is unethical and damaging to the wellbeing of people, would-be hires and consumers alike. It wastes energy by having dedicated servers running nonstop generating content that serves no greater purpose, and it's even pressing on already overworked educators, because plagiarism just got a very new, harder-to-identify younger brother that's also infinitely easier to access.
In fact, ChatGPT is such an issue in the education world that plagiarism-detector subscription services, which take advantage of how overworked teachers are, have begun peddling supposed AI-detectors to schools and universities. Detectors that plainly DO NOT and CANNOT work, because "A Writer Who Writes Surprisingly Well For Their Age" is indistinguishable from "A Language Replicating Algorithm That Followed A Prompt Correctly", just as "A Writer Who Doesn't Know What They're Talking About Or Even How To Write Properly" is indistinguishable from "A Language Replicating Algorithm That Returned Bad Results". What's hilarious is that these "detectors" are themselves run by AI.
(To be clear, I say plagiarism detectors like TurnItIn.com and such are predatory because A) they cost money to access advanced features that B) often don't work properly or as intended, producing several false flags, and C) these companies are often super shady behind the scenes; TurnItIn, for instance, has been involved in numerous lawsuits over intellectual property violations, as their services scrape (or hopefully scraped, now) the papers submitted to the site without user consent (or under coerced consent, if an educator forces you to use it), which it can then use in its own databases as it pleases, such as for training the AI-detecting AI that rarely actually detects AI.)
The prevalence of visual and linguistic generative algorithms is having multiple, overlapping, and complex consequences on many facets of society, from art to music to writing to film and video game production, and even in the classroom before all that, so it's no wonder that many disgruntled artists and industry professionals are online wishing for it all to go away and never come back. The problem is... it can't. I understand that there's likely a large swath of people saying that who understand this, but for those who don't: AI, or as it should more properly be called, generative algorithms, didn't just show up now (they're not even that new), and they certainly weren't developed or invented by any of the tech bros peddling them to megacorps and the general public.
Long before ChatGPT and DALL-E came online, generative algorithms were being used by programmers to simulate natural processes in weather models, shed light on the mechanics of walking for roboticists and paleontologists alike, identify patterns in our DNA related to disease, aid in complex 2D and 3D animation visuals, and so on. Generative algorithms have been a part of the professional world for many years now, and up until recently they were a general force for good, or at the very least a force for the mundane. It's only recently that the technology involved in creating generative algorithms became so advanced AND so readily available that university grad students were able to make the publicly available projects that began this descent into madness.
Does anyone else remember that? That years ago, somewhere in the late 2010s to the beginning of the 2020s, these novelty sites that allowed you to generate vague images from prompts, or generate short stylistic writings from a short prompt, were popping up with University URLs? Oftentimes the queues on these programs were hours long, sometimes eventually days or weeks or months long, because of how unexpectedly popular this concept was to the general public. Suddenly overnight, all over social media, everyone and their grandma, and not just high level programming and arts students, knew this was possible, and of course, everyone wanted in. Automated art and writing, isn't that neat? And of course, investors saw dollar signs. Simply scale up the process, scrape the entire web for data to train the model without advertising that you're using ALL material, even copyrighted and personal materials, and sell the resulting algorithm for big money. As usual, startup investors ruin every new technology the moment they can access it.
To most people, it seemed like this magic tech popped up overnight, and before it became known that the art assets on later models were stolen, even I had fun with them. I knew how learning algorithms worked: if you're going to have a computer make images and text, it has to be shown what those are, and then try and fail to make its own until it's ready. I just, rather naively as I was still in my early 20s, assumed that everything was above board and the assets were either public domain or fairly licensed. But when the news did come out, and when corporations started unethically implementing "AI" in everything from chatbots to search algorithms to asking their tech staff to add AI to sliced bread, those who were impacted and didn't know and/or didn't care where generative algorithms came from wanted them GONE. And like, I can't blame them. But I also quietly acknowledged to myself that getting rid of a whole technology is neither possible nor advisable. The cat's already out of the bag, the genie has left its bottle, the Pandorica is OPEN. If we tried to blanket-ban what people call AI, numerous industries involved in making lives better would be impacted. Because unfortunately, the same tool that can edit selfies into revenge porn has also been used to identify cancer cells in patients and aided in decoding dead languages, among other things.
When, in Greek myth, Prometheus gave us the gift of fire, he gave us both a gift and a curse. Fire is so crucial to human society, it cooks our food, it lights our cities, it disposes of waste, and it protects us from unseen threats. But fire also destroys, and the same flame that can light your home can burn it down. Surely, there were people in this mythic past who hated fire and all it stood for, because without fire no forest would ever burn to the ground, and surely they would have called for fire to be given back, to be done away with entirely. Except, there was no going back. The nature of life is that no new element can ever be undone, it cannot be given back.
So what's the way forward, then? Like, surely if I can write a multi-paragraph think piece on Tumblr.com that next to nobody is going to read, because it's long as sin, about an unpopular topic, and I rarely post original content anyway, then surely I have an idea of how this cyberpunk dystopia can be a little less... dys. Well I do, actually, but it's a long shot. Thankfully, unlike business majors, I actually had to take a cyber ethics course in university, and I actually paid attention. I also passed preschool, where I learned that taking stuff you weren't given permission to have is stealing, which is bad. So the obvious solution is to make some fucking laws to limit the input on data model training for models used in public products and services. It's that simple. You either use public domain and licensed data only, or you get fined into hell and back and left liable to lawsuits from any entity you wronged, be they citizen or very wealthy mouse conglomerate (suing AI bros is the only time Mickey isn't the bigger enemy). And I'm going to be honest, tech companies are NOT going to like this, because not only will it make doing business more expensive (boo fucking hoo), they'd very likely need to throw out their current trained datasets because of the illegal components mixed in there. To my memory, you can't simply prune specific content from a completed algorithm; you actually have to redo the training from the ground up, because the bad data is mixed in there like gum in hair. And you know what, those companies deserve that. They deserve to suffer a punishment, and maybe fold if they're young enough, for what they've done to creators everywhere. Actually, laws moving forward aren't enough, this needs to be retroactive. These companies need to be sued into the ground, honestly.
So yeah, that's the mess of it. We can't unlearn and unpublicize any technology, even if it's currently being used as a tool of exploitation. What we can do, though, is demand ethical use laws and organize around the cause of the exclusive rights of individuals to the content they create. The screenwriters' guild, actors' guild, and so on have already been fighting against this misuse, but given upcoming administration changes in the US, things are going to get a lot worse before they get a little better. Even still, don't give up, have clear and educated goals, and focus on what you can do to effect change, even if right now that's just individual self-care through mental and physical health crises, like me.
#ai#artificial intelligence#generative algorithms#llm#large language model#chatgpt#ai art#ai writing#kanguin original
What to do abt the AI theft
I may not approve of it, but I do want people to keep easy access to reading my fic
So I’m debating making it user only, or just giving up and letting AI steal my work
I don’t care abt my own personal writing getting stolen bc I’ve only ever written this one fic and I don’t think I’ll be making several more, maybe one or two. So I don’t think there’ll be enough content for them to steal my specific creative writing style. They can’t steal my creativity; I wanted The Scarlet Witch in a Peter-in-Gotham fic, and AI can’t make the silly chaotic plot line that I made. They don’t have my brain, so they can take my measly writing.
But it does upset me that all my favourite moments I’ve already written will be recycled and butchered. So making it user only may be the better choice, because even if I believe otherwise, I can’t be 100% sure that AI won’t be detected in my future school works bc my writing habits were stolen.
I wish the world would stop revolving around AI and schools would learn that AI detection isn’t cutting it anymore. But whatever. So I’ll let yall decide for me. Should I keep it free for all, limit it to users only, or private it until it’s safe from getting stolen again?