#generative ai
Explore tagged Tumblr posts
dicklessthewonderclown · 5 months ago
Text
ai has no fucking place in fandom. none whatsoever. even if you’re “just using it for feedback” or “just using it for prompts” or “just using it to help with writer’s block.” and if you’re using it for research? why on god’s green earth would you use the Lying Machine to do research???
all art, whether written work, paintings, drawings, anything, is a fundamentally human creation, and by using ai to create “art,” you are stripping the humanity from something inherently, intrinsically human
fanfiction and fanart and any kind of fan creation are borne out of a love of the original media and a passion for creation. there is no reason or need to supplement that passion and creation with gibberish from the Plagiarizing Machine
nobody expects perfection. it is the imperfections that make it beautiful, that make it raw and rich with humanity. this is a community that exists because of a shared love for something. it is built upon our own creations. ai generated content threatens its very foundations
9K notes · View notes
probablyasocialecologist · 10 months ago
Text
Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found. Amazon conducted the test earlier this year for Australia's corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved testing generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta's open source model Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations, references to more regulation, and to include the page references and context. Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts.

Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that the exercise involved AI at all. These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine's 47%.

Human summaries ran up the score by significantly outperforming on identifying references to ASIC documents in the long document, a type of task that the report notes is a "notoriously hard task" for this type of AI. But humans still beat the technology across the board. Reviewers told the report's authors that AI summaries often missed emphasis, nuance and context; included incorrect information or missed relevant information; and sometimes focused on auxiliary points or introduced irrelevant information. Three of the five reviewers said they guessed that they were reviewing AI content.

The reviewers' overall feedback was that they felt AI summaries may be counterproductive and create further work because of the need to fact-check and refer to original submissions, which communicated the message better and more concisely.
3 September 2024
5K notes · View notes
ghostieking · 1 day ago
Note
regarding AI art... I am pretty much 100% against but trying to figure some stuff out... where is the moral ground on generating AI art but not posting it? like, what about someone requesting a specific image so they can practice tracing while they learn to draw? cause that way they're not stealing anyone's art, right? especially if they don't post any of it and just use it for practice
okay I feel like this answer can vary but for me generative AI is hurtful in many ways:
- they learn by analyzing other people's work, be it artworks or photography
- the impact it has on nature ain't good and you can't deny that... I feel like we destroy this world enough as it is, we don't need a supercomputer to worsen that
generative AI for learning still ain't good in my eyes cause there are so many peeps out there working on literally nothing but photographing references for art! you could easily research references with no AI before this mess started and make fine art
and even before the internet: artists have been using their imagination to create their own references for hundreds and hundreds of years!!! we don't need a soulless version of what we can have traditionally... also I feel like it takes away from the creative process: researching is part of it!!
I could go on and on but it's late and I feel like other people do a better job explaining it than me lol
34 notes · View notes
animentality · 1 year ago
Text
[image]
26K notes · View notes
alex51324 · 3 days ago
Text
#technology #ai tech #if chatgpt can't be presented a list of rules and then follow them to accurately play a game #then it also can't do *a lot* of the things it's being actively marketed to do #synthesizing and summarizing information????? #it doesn't understand what the information is! #it will make shit up in the same way it teleports pieces across the board #and people want to give this shit your medical records?!? let it make business decisions??? link it to your *credit cards*?!?!?!
Peer review.
Following instructions is the basic thing that computers, as we have understood them for the entirety of their existence, do.
People tend to assume that, when an AI tool generates something, it's doing so in accordance with instructions that they've given it (or, sometimes, in accordance with instructions that they assume are in play, such as some underlying relationship to exterior reality).
But this is a very clear, unambiguous demonstration that generative AI tools are, in fact, not following any rules, implicit or explicit. It definitely is not a matter of the rules being poorly stated, or assumed without being stated, or any other type of human error, because we (as a society in general) absolutely know how to give a computer the instructions for how to play chess.
A direct-to-consumer chess-playing computer (Atari's) has been available to the public since at least 1979, and chess computers have been making news since the 1950s, as accessible-to-the-public examples of how smart computers are.
ChatGPT cannot follow the rules of chess, and yet will confidently state that it can.
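For contrast, here is a minimal sketch of what actual rule-following looks like, assuming the python-chess library; this is not anything ChatGPT runs, just a plain old program with the rules written in, where an illegal move gets rejected every single time instead of being confidently played anyway.

```python
# Minimal sketch, assuming the python-chess library (pip install chess).
import chess

board = chess.Board()      # standard starting position
board.push_san("e4")       # legal opening move: accepted
board.push_san("e5")       # legal reply: accepted

try:
    board.push_san("Ke3")  # the white king cannot jump to e3 from here
except ValueError:         # python-chess refuses the illegal move outright
    print("illegal move rejected, no teleporting pieces")
```

That difference is the whole point: a chess program cannot drift off the rules, because the rules are the program.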
Making a computer that can non-randomly generate stuff (images, text, whatever), based on patterns found in a large amount of training data, is I guess kind of cool, but if you can't connect that process to the "following instructions" bit, you've basically just made a Magic 8 Ball with more steps.
Deep Blue is nearly 30 years old and was capable of defeating the reigning world chess champion. It could be housed in a single cabinet.
ChatGPT spans untold data centers devouring massive amounts of electricity, and it got its ass whipped by an 8-bit gaming console from the 1970s.
33K notes · View notes
altamont498 · 6 days ago
Text
"I asked ChatGPT" OK well I ran it through the Bat Computer and it looks like Mr. Freeze is hiding out in the abandoned ice cream factory.
4K notes · View notes
Text
So I was recently thinking about a kind of Danger Days au, where BLI is actually AI
Like, a future world where Killjoys are the people who, 20 years prior, avoided using AI and all the brain-numbing kinds of algorithms, while the majority became a thoughtless, illiterate mass with no individual opinions or critical thinking skills, letting themselves be led by AI that makes their choices and their THINKING for them. All art is replaced; movies and books are AI generated; there are no artists or creators; everything is artificial and fake.
Global warming turned most places into a desert, where 'Killjoys' live, because the only alternatives are cities overtaken by AI and inhabited by brain-dead people
Killjoys are the only ones who still create art, who still ask questions, try to find answers, collect books written before the generative AI flood. And they play the only real music left in this world
Btw I was inspired by this very interesting video about the consequences of the rising popularity of artificial intelligence. It's honestly kind of scary, and then the hyperfixation obviously kicked in, so Danger Days got mixed somewhere in there
27 notes · View notes
anthropohedron · 5 days ago
Text
My first semester of college I had to write a paper for the final in a class. It was due at noon on a particular day, at a building I could walk to from my dorm in about 15 minutes. Over the course of a few weeks I read my source material (a broad selection from everything Robert Heinlein ever wrote, including short stories published in girls' magazines and from the perspective of a teenage girl) and put sticky notes on pages I planned to excerpt. And then I did neither jack nor shit about writing the damn thing for over a week.
At 8:30am on the due date I woke up, sat down at my computer, and hammered out the essay. Everything was there in my brain, just waiting to flow out, and it was glorious. At 11:30am I had the 10 or 12 or whatever pages I needed. I printed to a file, copied the file to a computer across campus that had a printer connected to it, and printed. I then threw on clothes, biked to pick up the printout then to where it was due. At 11:58am I turned in the paper that had not existed even as an outline when I woke up, and I got an A.
If I'd had generative AI do the paper I would never have learned how to do the research. I would never have learned that I was capable of putting together coherent thoughts literally in my sleep. I would not have learned to write cogent, supported arguments. Because the previous two years in high school my English teacher had been desperately trying to get me to learn these things and I had gotten Cs and Ds and been unendingly frustrated. Skills take practice.
Whenever I think about students using AI, I think about an essay I did in high school. Now see, we were reading The Grapes of Wrath, and I just couldn't do it. I got 25 pages in and my brain refused to read any more. I hated it. And it's not like I hate the classics, I loved English class and I loved reading. I had even enjoyed Of Mice and Men, which I had read for fun. For some reason though, I absolutely could NOT read The Grapes of Wrath.
And it turned out I also couldn't watch the movie. I fell asleep in class both days we were watching it.
This, of course, meant I had to cheat on my essay.
And I got an A.
The essay was to compare the book and the movie and discuss the changes and how that affected the story.
Well it turned out Sparknotes had an entire section devoted to comparing and contrasting the book and the movie. Using that, and flipping to pages mentioned in Sparknotes to read sections of the book, I was able to bullshit an A paper.
But see the thing is, that this kind of 'cheating' still takes skills, you still learn things.
I had to know how to find the information I needed, I needed to be able to comprehend what Sparknotes was saying and the analysis they did, I needed to know how to USE the information I read there to write an essay, and I needed to know how to make sure none of it was marked as plagiarized. I had to form an opinion on the Sparknotes analysis so I could express my own opinions in the essay.
Was it cheating? Yeah, I didn't read the book or watch the movie. I used Sparknotes. It was a lot less work than if I had read the book and watched the movie and done it all myself.
The thing is though, I still had to use my fucking brain. Being able to bullshit an essay like that is a skill in and of itself that is useful. I exercised important skills, and even if it wasn't the intended way, I still learned.
ChatGPT and other AI tools do not give that experience to people; people have to do nothing and gain nothing from it.
Using AI is absolutely different from other ways students have cheated in the past, and I stand by my opinion that it's making students dumber, more helpless, and less capable.
However you feel about higher education, I think it's undeniable that students using ChatGPT is to their detriment. And by extension a detriment to anyone they work with or anyone who has to rely on them for something.
19K notes · View notes
kittyhelpsstuff · 18 hours ago
Text
I think it's important to remember when discussing AI that it isn't just hurting people by stealing others' creative works. It is hurting people, especially black people and those in hotter countries, through its effects on the environment. That gets sort of lost in the discussion, since there is a large focus on its effects on the creative field, which, while important, isn't the only or even the largest problem with it.
So here are some sources to get more in the know about the environmental aspect of the issue.
21 notes · View notes
seaglasswrites · 4 months ago
Text
I see a lot of people advocate for the use of AI/AMs in writing as a tool for when writers are stuck. The main selling point for these people seems to be that, when facing writer's block, a writer can just plug their story into one of these tools and get "help".
It’s a similar idea to a lot of writing posts I’ve seen on here, complaining about the “in-between” - “I’ve got this wonderful beginning and this heart-wrenching ending, but no idea what to put in the middle! Writing sucks!”
These people don’t seem to realize, though, that without the author figuring that out for themselves, there is no story.
Sure, you can have a basic idea for a plot. Let's use 1984 as an example: A man lives in a hyper-surveillance society under an authoritarian dictatorship, and rebels against it by joining a secret society that turns out to have been the government all along.
That's a great plot idea - and it's sure to do great with publishers and readers alike! But it's not 1984. It's a plot summary of 1984.
If George Orwell had plugged that prompt into ChatGPT and asked it to do the rest for him, we would probably still have Winston Smith (or someone like him), but we might not have Julia, or O’Brien, or the scene with the rats, or the melancholy ending at the café, or a whole host of other important characters and plot points.
Why? Because here’s the thing - Orwell came up with those ideas because he actually thought about the premise he had imagined. What would people act like in such a society? What kind of torture methods would their government use?
Even the ending scene where Winston sits at the café can have a million different things said about it when it comes to Orwell's thought process when he wrote it. What would this government do with its victims once it was done torturing them? How would it make a public example of the power it had, without actively televising said torture? How would "normal" citizens treat these victims? What would their short remaining lives be like?
If you put the basic details into ChatGPT, though - "dystopian government, surveillance, torture, betrayal" - it wouldn't give you the same result.
Every decision you see in a book, movie, or other piece of media that you love is there because the author got stumped at one point and had to think their way out of it.
Ask any famous author about their writing process. Read or watch any interview. There will always be a point where they had no idea where to take the story next, and some of the parts about those stories that are the best are the ones that came about from writer’s block.
Writing is all about getting stumped, and confused, and not knowing where to go next. It’s okay to not always know what you’re doing. But you do actually have to think your way out of it. Otherwise, you’re not writing.
314 notes · View notes
bibliomorgan · 3 days ago
Text
Oh, are you saying an AI being fed on one of the most hateful, racist, LGBTphobic, ableist social media platforms is regurgitating all the M-usk speeches and points of view?
Wow.
What a full circle.
[two images]
I am SHOCKED that making Grok "unwoke" literally turned it into, in its own words, Mechahitler.
1K notes · View notes
overwhelmedfernfrond · 1 day ago
Text
"ai makes things accessible" actually i would rather every single person on this site that needs so much help writing an email that they'd use ai PERSONALLY message ME. i will write ten thousand emails for randoms for free before i condone the use of genai
(like obviously I’d rather everyone learn how to write an email themself but seriously I WILL write y’all’s emails lest you use ai for the same purpose)
22 notes · View notes
elexuscal · 2 months ago
Text
"what did students do before chatgpt?" well one time i forgot i had a history essay due at my 10am class the morning of so over the course of my 30 minute bus ride to school i awkwardly used by backpack as a desk, sped wrote the essay, and got an A on it.
six months later i re-read the essay prior to the final exam, went 'ohhhh yeah i remember this', got a question on that topic, and aced it.
point being that actually doing the work is how you learn the material and internalize it. ChatGPT can give you a shortcut, but it won't build you the muscles.
104K notes · View notes
victusinveritas · 10 months ago
Text
[image]
24K notes · View notes