#chatgpt is just fancy autofill
One time, out of desperation, I used chatgpt to try and figure out who the author of a really old short story was (I thought maybe something programmed to process full sentences would have better luck, because I was pretty sure the short title of the story was confusing the keyword search engines). After it gave me 4 wrong answers in a row, I asked why it thought the 4th author wrote the short story when there was no evidence linking the two together at all, and it basically replied: "you're right!! there is no evidence to suggest this author wrote this short story :)" And when I asked why it would tell me he had anyway, it gave me the auto-fill reply version of "hehe sorry :)"
So anyway, in case you still thought ai search engines had any kind of value, they don't. They are not capable of processing and analyzing a question. It's not even intelligent enough to say "sorry, insufficient data" when it doesn't have an answer, because it doesn't have an artificial intelligence capable of recognizing what real evidence or data is, or why it should matter in its response.
#jump on the Hate Ai train now but specifically the “hate autogenerated response” train#because ill say it once and ill say it again - ai doesnt exist. it hasnt been invented yet#chatgpt is just fancy autofill
62 notes
So I think the actual problem is that the coffee shop AU comparison is a false equivalence.
In fandom, when two or ten thousand people write a coffee shop AU, they're creating different stories. The authors might each use different characters as a protagonist, or have the same characters inhabiting different roles such as customer, barista, or manager.
Authors aren't stealing ideas any more than Star Trek is stealing from Star Wars just because they're both set in space.
An instance where an idea is actually stolen is something like plagiarism: somebody copying the entire text of a fic, or changing small details while keeping all of the story beats.
Most people would consider this wrong. Even if it's not common enough that people focus on combating it, generally people agree that it is dishonest to lift someone else's idea and to represent it as your own.
Now imagine somebody takes someone else's fic and swaps out the words for a bunch of synonyms. This would be plagiarism, and not only would it be wrong, it would be against the terms of service of AO3.
chatGPT-written fic is wrong because it is automated plagiarism.
Instead of one person plagiarizing one fic by substituting the words by hand, someone has written a program, fed it a huge amount of fic, and had it plagiarize from all of them at once.
Intuitively, this argument seems false; after all, the resulting text from chatGPT reads like it has more in common with the 10,000 coffee shop AUs than with a single thinly paraphrased plagiarism fic. Because chatGPT's writing feels more authentically like the work of an author, it emotionally feels correct to treat it like something written by an author.
Most current smartphones have a text predictor function. If you start a text or an email with "hello" and then hit the first option suggested by the autofill, you can often get a perfectly functional message.
That's what chatGPT is. It's a fancy text predictor. This fact is obscured because it has a vastly larger dataset and a more advanced algorithm.
We understand that google smartfill or whatever doesn't actually have an understanding of what a greeting or sentence structure is, because it often produces nonsense. It's a bit harder to realize that with chatGPT.
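To make the "fancy text predictor" point concrete, here's a minimal sketch of the idea: a toy word-frequency autofill in Python, using a made-up scrap of training text. This is nothing like chatGPT's actual scale or architecture (that's a neural network trained on vastly more data), it just shows what "predict the next word from the words so far" means.

```python
# Toy "autofill": for each word, remember which words followed it in the
# training text, then keep picking the most common next word.
from collections import Counter, defaultdict

# tiny made-up training data standing in for the real thing
training_text = (
    "hello how are you . hello how is the coffee shop . "
    "the coffee shop is busy . how are you doing today ."
)

# count which words tend to come right after each word
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def autofill(start_word, length=8):
    """Keep hitting the 'first suggestion': always take the most common next word."""
    output = [start_word]
    for _ in range(length):
        suggestions = next_word_counts.get(output[-1])
        if not suggestions:
            break  # nothing ever followed this word in the toy data
        output.append(suggestions.most_common(1)[0][0])
    return " ".join(output)

print(autofill("hello"))  # prints something like: hello how are you . hello how are you
```

Run on that toy data, it just loops back into "hello how are you," because that's the most common continuation it has ever seen. chatGPT's version of this is enormously better at picking plausible continuations, which is exactly why its output feels authored even though nothing in the process understands what it is saying.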
A human can read a fanfiction and be inspired and write their own story. Or they can create a plagiarized replication. But the fact that a person can do either depends on a human's capacity for conceptual understanding. Humans understand things like "characters" or "jobs," and so can interact with those building blocks instead of the mere words, rearranging them into something changed enough that it is its own work.
Because chatGPT is only a text predictor, it can only plagiarize.
There's a second component to your question, which is not just "Why is chatGPT different?" but also "Why is people's reaction to chatGPT different?"
Some of that reaction is simple: you pointed out that humans will write for "popularity/status/clout/money." If clicks, attention, and dollars are going towards chatGPT-produced work, then humans will see a decrease in their returns on what they write. Since humans don't start writing fic because a line of code tells us to, this decrease in rewards will result in people writing less fic.
And chatGPT is a plagiaristic text predictor. It cannot create new ideas, only rearrange its existing data into new patterns. If people stop writing fic, there's no new data, and no "new" types of fic.
chatGPT's mass proliferation would be the death of fandom.
Let's talk about why someone would respond more strongly to being partially ripped off by an AI than to being completely ripped off by a person.
I've been talking about the most clear-cut cases of plagiarism for the purposes of this discussion, but of course in reality, what counts as plagiarism can be very muddy.
Because humans have the ability both to create new works and to plagiarize, they also have the ability to do both at the same time. So the line for which fics are plagiarized enough to be banned by the terms of service can become fuzzy.
So I believe plagiarism is wrong, but I have to make separate decisions when it comes to whether any particular work counts as plagiarism, and whether that work deserves to be reported and its author punished.
When I'm dealing with other humans, there are social implications to think about. Having too broad a definition of plagiarism might result in a culture where authors are punished for genuine mistakes. So while I believe plagiarism is wrong, on a case-by-case basis there are reasons I might want a specific, arguably plagiarized work to remain on the archive.
chatGPT makes this both more and less simple. If there is a work that is labeled as being produced by an AI (and not edited by a person), then I know enough about the process to know it is inherently plagiarism.
However, if I'm reading a work I only suspect was made by chatGPT, then by reporting it I could be making sure a nonprofit isn't paying to host plagiarism. Or I could have misunderstood, and be targeting someone who simply has poor language or English skills.
I hope this answers your questions. It's quite late where I am and I'm a bit tired, so definitely feel free to ask a follow-up if I was unclear.
Okay can someone actually give me an answer to this question....
Why, if 10,000 other fans 'steal' your idea for a coffee shop AU and use it in their almost identical coffee shop AU, that's totally okay and just how fan culture works.
But if the AI spits out a coffee shop romance that reads like a traditional coffee shop AU, that's crossing a line?
Why, if a bunch of other fans steal your idea, is that just the process of human creativity at work, but if a machine generates something similar, that's a bridge too far?
I literally don't get how the one is different from the other (and frankly, if either deserved the outrage, I'd think it would be the people who are consciously deciding to rip you off, rather than the machine that is just reproducing large-scale linguistic patterns and not singling your work out, because it literally CANNOT single out pieces of work to rip off individually; that's not how AIs work)
#my post now#ai#chatGPT#thanks for asking this! i had a lot of fun teasing out this explanation#lots of ai discussion gets caught up on ill defined and emotionally immediate concepts#like sentience and authorial intention/consent#i hope i gave a response that stands up
293 notes