#llm ai
monstersandmaw · 2 days ago
Text
If em dashes are apparently a sign of AI writing, then I want you to know that I use them. A lot. And none of my writing has ever been — nor ever will be — created with the 'assistance' of AI.
I do not support the use of AI in the Arts, and I do not consent to any of my work being fed into a large language model to regurgitate out some soulless drivel.
102 notes · View notes
revscarecrow · 2 months ago
Text
Just saw this article and figured it's worth a post, since I wasn't clear on the power-use damage angle when the block-evading anon was challenging it.
149 notes · View notes
macrolit · 2 months ago
Text
Can we start a dialogue here about AI? Just to start, I’ll put a few things out there.
As a student, did you ever use AI to do your work for you? If so, to what extent?
Are you worried about AI replacing your job?
What are your other concerns about AI?
54 notes · View notes
vergess · 2 months ago
Note
Do you consent to your writing being used to train an llm so that I can get more chapters of the things you haven't updated in years?
Lol sure, have fun, goodness knows my completion rate is abysmal! Personally, I've found the more mainstream models tend to do a bad job capturing my style and word choices, so I recommend using a model designed for fiction (e.g. character/roleplay models) if you have one.
Also just a tip: my corpus of fiction writing is not large enough to train a model on, as far as I know. You most likely mean using my writing as a context prompt and asking the model to "continue this work." That's also fine with me.
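(If you're not sure what that looks like in practice, here's a rough sketch of the context-prompt approach, assuming the openai Python package and an API key in your environment; the model name, file path, and prompt wording are placeholders I made up, not anything I actually use.)

from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Load the unfinished fic as plain text (placeholder filename).
with open("unfinished_fic.txt", encoding="utf-8") as f:
    excerpt = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; a fiction/roleplay-tuned model may match the style better
    messages=[
        {"role": "system",
         "content": "You continue unfinished fan fiction, matching the original author's style and word choices."},
        {"role": "user",
         "content": f"Here is the story so far:\n\n{excerpt}\n\nContinue the next scene in the same voice."},
    ],
)

print(response.choices[0].message.content)  # the generated continuation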
In fact, if you generate anything cool, please let me know, I'd love to take a look!
If there's a particular fic you wanted an ending to, I'm also happy to give you my plot point notes too, just drop me the title in an ask.
20 notes · View notes
wu-sisyphus-gang · 16 days ago
Text
Write anti LLM test questions!
For example:
State the Fredholm alternative for the following integral equation:
u(x) + \int_{0}^{1} (literally any polynomial in x and u(t)) dt = 1
LLMs can't handle this level of abstraction.
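(For reference, if you want to grade the model's answer yourself: the standard statement for a linear Fredholm equation of the second kind goes roughly like this. The kernel K, parameter \lambda, and the real-kernel assumption below are generic choices on my part, not part of the question above.)

u(x) - \lambda \int_{0}^{1} K(x, t)\, u(t)\, dt = f(x)

Either the homogeneous equation (f \equiv 0) has only the trivial solution u \equiv 0, in which case the inhomogeneous equation has exactly one solution for every f; or the homogeneous equation has nontrivial solutions, in which case the inhomogeneous equation is solvable precisely when \int_{0}^{1} f(x)\, v(x)\, dx = 0 for every solution v of the transposed homogeneous equation v(x) = \lambda \int_{0}^{1} K(t, x)\, v(t)\, dt.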
Your blood is weak! You're all weak!
12 notes · View notes
lectorel · 16 days ago
Text
[image]
Ah, yes. Fuck over sharing, feed your data directly into the privacy invasion machine instead.
11 notes · View notes
brokenhandsmedia · 21 days ago
Text
A lot of people are really invested in the idea that there will one day be artificial intelligence of godlike power. This leaves aside the fact that we don't even have real artificial intelligence.
Much like the push-button kitchen, the automobile, and the washing machine allowed middle-class people to experience a version of the upper class's freedom from cooking, taking care of horses, and washing clothes, the LLMs that we have today give you access to something previously reserved for the wealthy: you, too, can have a yes man, toadie, or enabler.
Unfortunately, this is the least useful thing in the world.
11 notes · View notes
cyborgized · 1 month ago
Text
[two images]
11 notes · View notes
zolabeano · 2 months ago
Text
two months clean off character ai. we rollin
17 notes · View notes
bug-the-chicken-nug · 2 months ago
Text
preliminary note: if this resonates with you at all. I would very much appreciate you reblogging it. a problem with so much anti ai sentiment is like. for such a self-professed pro-human movement. it sure does have a lot of misanthropy implicitly baked in.
which in turn comes off like it's basically kinda constantly running the risk of turning more and more artists into dogmatic elitists
because it essentially kinda constantly risks implying "artists are intrinsically pure angels who deserve everything and are always inherently superior to Icky Computer. but normal people are icky dumb irresponsible sheep who are always totally selfish and will blindly do whatever the rich tech companies say! surely there is no nuance! surely, no one can care about two things at once, or have a coherent reason to not be wholly and rigidly against something with flaws! if they say they do, they just have no integrity! they're making excuses and trying to normalize oppressing us, so it's secretly okay to give in to black and white us vs them thinking after all! won't anyone think of The Artist?! you know, even though a huge portion of 'anyone' is all the people I just shunted into my out group and am now completely blocking and ignoring anyway?"
"won't anyone stand up for the Little Guy, even though, in this instance, a ton of we Little Guys keep showing worrying early precursor signs of temporarily embarrassed millionaire syndrome and claiming humble goals, but lowkey wanting to be the new Big Guys over at least Some people in the process? so by our own logic, actually you should shun and ignore us and reverse the same black and white thinking on us for having these faults?"
oh, but you probably realize: that would be a bad faith thing to do, now that it's about *you*. this whole argument is unfair, reductive, and insensitive, now that it's about *you*.
Yeah. You're right. It is veering into hyperbolic overgeneralization, huh? It is very convenient and cherrypicked, isn't it?
And yet so many people in this crowd never try to consider if *they're* being unfair, reductive, and insensitive too.
but tbh it Does smack of like. Strong precursors to TERF logic, doesn't it? A proverbial "Little" group becoming so frothingly mad at the "Big" group that they become self-righteous, and shunned and reviled by many of those they said they wanted to protect, all while now coming to see those same people as enemies and continually becoming more bitter and insular.
it also smacks of often feeling like a fresh new iteration of the underlying stuff behind that issue i personally had in middle/high school, where I was always shy and lonely in a way that spiralled into a weird and lowkey self-absorbed inferiority/superiority complex
"I hate mostly everyone and I'm smarter than them, but also I still deep down REALLY want them to be my friends and validate me and prove themselves Worthy, all while I do barely any outward work in challenging my perceptions of them, meeting them halfway, or positively enticing them to behave that way, and instead just continue waiting for them to miraculously decide to do all the work themselves, because I'm Entitled to that, even though I'm also wildly insecure and think I suck"
with an insidious extra trick because this time you can go "ohrr but we keep telling them to roleplay with us instead of char.ai!"
as if you Don't immediately block them the moment they're honest about it or sharing anything positive or even just suspiciously neutral at all about it?
as if you can at all actually guarantee that across the board, your RP communities are always inclusive, welcoming, non-judgmental, and accessible?
as if you don't love to just blame the ai folk for not "trying hard enough" to find you or put themselves out there... as you then follow that sentiment up with a self-deprecating reblog of a hugely popular post about how a ton of you are soooo bad at replying to people?
like. i'm sorry. really. your mental illness is not your fault. this part especially is a low blow on my end. but this behavior is still one of the things that erodes accessibility and feelings of inclusiveness while also risking hurting the feelings of others. you still have to do your best to work on it instead of just apologizing or joking about it over and over. particularly because you kind of brought that responsibility upon yourself the moment you started tooting your side's own horn about how much more "fulfilling" and "positive" it is.
otherwise, you come off like you want way too much praise and credit for the bare minimum.
And some of you probably thought "but im NOT self-absorbed, you have me all wrong, i'm doing this in the name of All Artists!"...
as if there aren't like an assload of harmful dogmas that are self-evidently harmful even though they are, just like this, still motivated by genuine care for a large group of people, and coinciding hatred of a large group of people who seem to pose a threat to them. (racism, for one. much of it is in fact a loving desire to "protect" your entire group, and your perceived ideas of its best interests.)
and you'll notice that the vast majority of those dogmas are self-destructive, irrational, overly fixating on punishing and isolating the threat and all who are even vaguely too sympathetic (hence a major reason for their self-destructive tendency) and reliant on cherry-picking, even though the goal is pretty noble when you reduce it to sound similar to the prevalent anti-ai ethos. (does "protect the children!" ring a bell? does it sound all that dissimilar to "protect artists!" on the face of it?)
7 notes · View notes
wolveria · 2 months ago
Note
I heard you have very strong opinions about AI. Care to talk more about it? And have you met anyone who did like AI?
I’ve said and shared all I think I need to say about it. Instead of me trying to convince someone who probably doesn’t want to listen, why don’t you defend generative AI?
Why don’t you provide evidence we need AI and absolutely can’t live without it? Why don’t you explain to me why AI needs to be shoved in our faces every moment without our consent? Why don’t you justify the resource use and its cost against the future impacts of culture, society, and our humanity?
But, you’re right. Why would anyone have strong opinions about AI.
10 notes · View notes
thespacesay · 3 months ago
Text
i've been seeing that post going around about how university wifi should block gen AI sites. i get the frustration, but like. bans are a bandaid shoved into a gaping wound imho. i think universities should have more education about genAI and LLMs and why they are not a magical solution
10 notes · View notes
catvampire · 7 days ago
Text
As we reported earlier this month, many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.
The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness.
And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.
"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do." [...]
The situation is so novel, in fact, that even ChatGPT's maker OpenAI seems to be flummoxed: when we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response.
An extremely grim but necessary read; I recommend reading it in full. Trigger warnings for descriptions of various mental breakdowns and attempted suicide.
5 notes · View notes
evanderlfragoso · 1 month ago
Text
This is such a, "uh yeah, duh," thing, although it's nice to see researchers putting a magnifying lens on it, even though it won't affect policy because the people making the rules just don't care. The diminishing of critical thinking (at least in regard to the U.S.A.) has basically been an essential part of our consumerist, capitalist-centric society. The people at the top don't really care to have an intelligent populace; they mostly care about profits and about keeping people at the bottom right where they are, struggling to make a living while making them money. This is America.
5 notes · View notes
teledyn · 2 months ago
Text
We need a sci-fi series like Star Trek TNG, but instead of Data being super logical as were the computers of that era, our updated Data is an LLM and Starfleet foists these on starship commanders because, 52% of the time, they are brilliant.
Note: this must not be a comedy.
5 notes · View notes