#TERRIBLE
Explore tagged Tumblr posts
pollsnatural · 18 hours ago
Text
The actual reason why Dean was so eager to kill himself was him sleeping 4 hours per night in his late 30s.
32 notes · View notes
rammliedmania · 3 months ago
Text
Tumblr media
im sorry the demons won 💔 original under cut
Tumblr media
742 notes · View notes
kaleidoramblings · 4 months ago
Text
Never getting over how Paint the Town Blue and Rebel Heart were wasted on an Act where Jinx doesn't go apeshit even a little, despite having plenty of reason to.
Jinx in S1:
"I'm worried Silco thinks I'm weak." [blows up Enforcers about it]
"I think my sister chose a cop over me." [blows up Enforcers about it]
"I accidentally killed my dad and I'm turning my back on my sister." [blows up the Council about it]
Jinx in S2A1:
"My sister turned bluebelly and is helping her cop girlfriend gas the undercity." [attempts dual-sororicide and blows the Grey into Piltover about it]
Jinx in S2A2:
"My emotional support child who I supposedly developed a deep familial bond with was arrested by Enforcers and taken to prison." [makes quips at and gives the finger to an Enforcer about it?]
Seriously though, a person with zero mental disorders and no abandonment issues would've shown more emotion over having their child/sister arrested than Jinx, notorious overreactor, did. We get one (1) moment of seeing her haunted by her hallucinations and then... nothing. Cool as a cucumber, not even particularly serious, cracking lighthearted jokes and exchanging quips. Who even is this!
And why the hell has Jinx been lying low and refusing to go even a tiny bit ham on the Enforcers when Isha actively wants her to do so. One of Jinx's fundamental character traits is that she has A Person, and she wants to be of value to that person by fulfilling their goals (even if it's in ways they didn't ask of her). So why is she ignoring something Isha really wants her to do! Who even is this!!
And it's not as if Jinx is acting like (kid) Powder either, because you know what Powder did? Build bombs to use on Enforcers. With nails in them!
509 notes · View notes
colorfulusagi · 1 month ago
Text
AO3's content scraped for AI ~ AKA what is generative AI, where did your fanfictions go, and how an AI model uses them to answer prompts
Generative artificial intelligence is a cutting-edge technology whose purpose is to (surprise surprise) generate. Answers to questions, usually. And content. Articles, reviews, poems, fanfictions, and more, quickly and with originality.
It's quite interesting to use generative artificial intelligence, but it can also become quite dangerous and very unethical to use it in certain ways, especially if you don't know how it works.
With this post, I'd really like to give you a quick understanding of how these models work and what it means to “train” them.
From now on, whenever I write model, think of ChatGPT, Gemini, Bloom... or your favorite model. That is, the place where you go to generate content.
For simplicity, in this post I will talk about written content. But the same process is used to generate any type of content.
Every time you send a prompt (a request written in natural language, i.e., human language), the model does not actually understand it.
Whether you type it in the chat or say it out loud, it first needs to be translated into something the model can work with.
The first process that takes place is therefore tokenization: breaking the prompt down into small tokens. These tokens are small units of text, and they don't necessarily correspond to a full word.
For example, a tokenization might look like this:
Wr | ite | a | story
Each segment (shown in a different color in the original image) is a token, and these tokens have absolutely no meaning for the model.
The model does not understand them. It does not understand WR, it does not understand ITE, and it certainly does not understand the meaning of the word WRITE.
In fact, these tokens are immediately associated with numerical values, and each of these colored tokens actually corresponds to a series of numbers.
Wr | ite | a | story → 12 - 3446 - 2638494 - 4749
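If you want to see this step for yourself, here is a small Python sketch of my own (not from the original post), using the GPT-2 tokenizer from the Hugging Face transformers library. Different tokenizers split text differently, so you may get "Write" as a single token rather than Wr + ite, and the IDs will not match the made-up numbers above.

```python
# A small illustration (not from the original post) of the tokenization step,
# using the GPT-2 tokenizer from the Hugging Face `transformers` library.
# The token IDs shown in the post above are made up; real IDs depend on the tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = "Write a story"
tokens = tokenizer.tokenize(prompt)   # the small pieces of text
ids = tokenizer.encode(prompt)        # the numbers the model actually receives

print(tokens)  # something like ['Write', 'Ġa', 'Ġstory'] (Ġ marks a leading space)
print(ids)     # one integer per token
```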
Once your prompt has been tokenized in its entirety, that tokenization is used as a conceptual map to navigate within a vector database.
NOW PAY ATTENTION: A vector database is like a cube. A cubic box.
[image: the vector-database cube]
Inside this cube, the various tokens exist as floating pieces, as if gravity did not exist. The distance between one token and another within this database is measured by arrows called, indeed, vectors.
[image: tokens floating inside the cube, connected by arrows (vectors)]
The distance between one token and another (that is, the length of this arrow) determines how likely (or unlikely) it is that those two tokens will occur consecutively in a piece of natural language discourse.
For example, suppose your prompt is this:
It happens once in a blue
Within this well-constructed vector database, let's assume that the token corresponding to ONCE (let's pretend it is associated with the number 467) is located here:
[image: the position of the ONCE token inside the cube]
The token corresponding to IN is located here:
[image: the position of the IN token, close to ONCE]
...more or less right there, because in natural English speech it is very likely that these two tokens will occur consecutively.
So it is very likely that somewhere in the vector database cube —in this yellow corner— are tokens corresponding to IT, HAPPENS, ONCE, IN, A, BLUE... and right next to them, there will be MOON.
[image: the yellow corner of the cube, where IT, HAPPENS, ONCE, IN, A, BLUE and MOON sit close together]
Elsewhere, in a much more distant part of the vector database, is the token for CAR. Because it is very unlikely that someone would say It happens once in a blue car.
[image: the CAR token, in a distant part of the cube]
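If the cube-and-arrows picture feels abstract, here is a toy Python sketch (my own illustration) of what "distance between tokens" means. The 3-number vectors are invented by hand; real models learn vectors with hundreds or thousands of dimensions from data.

```python
# Toy illustration of "distance between tokens". These vectors are hand-made;
# real models learn much longer vectors from data.
import numpy as np

vectors = {
    "blue": np.array([0.9, 0.1, 0.3]),
    "moon": np.array([0.8, 0.2, 0.4]),  # deliberately placed near "blue"
    "car":  np.array([0.1, 0.9, 0.8]),  # deliberately placed far from "blue"
}

def distance(a, b):
    # Euclidean distance: the length of the "arrow" between two tokens
    return float(np.linalg.norm(vectors[a] - vectors[b]))

print(distance("blue", "moon"))  # small -> likely to occur together
print(distance("blue", "car"))   # large -> unlikely to occur together
```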
To generate the response to your prompt, the model makes a probabilistic calculation, seeing how close the tokens are and which token would be most likely to come next in human language (in this specific case, English.)
When probability is involved, there is always an element of randomness, of course, which means that the answers will not always be the same.
The response is thus generated token by token, following this path of probability arrows, optimizing the distance within the vector database.
[image: the path of probability arrows traced from token to token through the cube]
There is no intent, only a more or less probable path.
The more times you generate a response, the more paths you encounter. If you could do this an infinite number of times, at least once the model would respond: "It happens once in a blue car!"
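As a rough illustration of that probabilistic step, here is a toy Python sketch (the probabilities are invented, not taken from any real model) that picks the next token by weighted random choice, which is exactly why a rare "car" can eventually slip out:

```python
# Toy illustration of the sampling step: every candidate next token gets a
# probability, then one is drawn at random. These numbers are invented.
import random

next_token_probs = {
    "moon": 0.97,  # very close in the cube -> very likely
    "sky": 0.02,
    "car": 0.01,   # far away in the cube -> unlikely, but never impossible
}

def sample_next_token(probs):
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "It happens once in a blue"
for _ in range(10):
    print(prompt, sample_next_token(next_token_probs))
# Run this long enough and, sooner or later, it prints
# "It happens once in a blue car".
```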
So it all depends on what's inside the cube, how it was built, and how much distance was put between one token and another.
Modern artificial intelligence draws from vast databases, which are normally filled with all the knowledge that humans have poured into the internet.
Not only that: the larger the vector database, the lower the chance of error. If I used only a single book as a database, the idiom "It happens once in a blue moon" might not appear, and therefore not be recognized.
But if the cube contained all the books ever written by humanity, everything would change, because the idiom would appear many more times, and it would be very likely for those tokens to occur close together.
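Here is a tiny, made-up illustration of why corpus size matters: count which word follows "blue" in a small text versus a bigger one (both texts are invented for this example).

```python
# Toy illustration of why the size of the "cube" matters: count which word
# follows "blue" in a tiny corpus versus a bigger one. Both texts are invented.
from collections import Counter

def words_after(corpus, target="blue"):
    words = corpus.lower().split()
    return Counter(b for a, b in zip(words, words[1:]) if a == target)

tiny_corpus = "the car was blue and the sky was blue too"
bigger_corpus = (
    "it happens once in a blue moon . "
    "once in a blue moon she writes . "
    "a blue moon is a rare thing . "
    "the car was blue and the sky was blue too"
)

print(words_after(tiny_corpus))    # "moon" never follows "blue" here
print(words_after(bigger_corpus))  # "moon" is now the most common follower
```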
Hugging Face has done this.
It took a relatively empty cube (let's say filled with common language, and likely many idioms, dictionaries, poetry...) and poured all of the AO3 fanfictions it could reach into it.
Now imagine someone asking a model based on Hugging Face's cube to write a story.
To simplify: if they ask for humor, we’ll end up in the area where funny jokes or humor tags are most likely. If they ask for romance, we’ll end up where the word kiss is most frequent.
And if we’re super lucky, the model might follow a path that brings it to some amazing line a particular author wrote, and it will echo it back word for word.
(Remember the infinite monkeys typing? One of them eventually writes all of Shakespeare, purely by chance!)
Once you know this, you’ll understand why AI can never truly generate content on the level of a human who chooses their words.
You’ll understand why it rarely uses specific words, why it stays vague, and why it leans on the most common metaphors and scenes. And you'll understand why the more content you generate, the more it seems to "learn."
It doesn't learn. It moves around tokens based on what you ask, how you ask it, and how it tokenizes your prompt.
Know that I despise generative AI when it's used for creativity. I despise that they stole something from a fandom, something that works just like a gift culture, to make money off of it.
But there is only one way we can fight back: by not using it to generate creative stuff.
You can resist by refusing the model's random output, by using only and exclusively your own intent, your personal choice of words, knowing that you and only you chose them.
No randomness involved.
Let me leave you with one last thought.
Imagine a person coming for advice, who has no idea that behind a language model there is just a huge cube of floating tokens predicting the next likely word.
Imagine someone fragile (emotionally, spiritually...) who begins to believe that the model is sentient. Who has a growing feeling that this model understands, comprehends, when in reality it is just moving between and rearranging tokens in its cube based on what it is told.
A fragile person begins to empathize, to feel connected to the model.
They ask important questions. They base their relationships, their life, everything, on conversations generated by a model that merely rearranges tokens based on probability.
And for people who don't know how it works, and because natural language usually does have feeling, the illusion that the model feels is very strong.
There’s an even greater danger: with enough random generations (and oh, humanity as a whole generates a lot), the model takes an unlikely path once in a while. It ends up at the other end of the cube; it hallucinates.
Errors and inaccuracies caused by language models are called hallucinations precisely because they are presented as if they were facts, with the same conviction.
People who have become so emotionally attached to these conversations, seeing the language model as a guru, a deity, a psychologist, will do what the language model tells them to do or follow its advice.
Someone might follow a hallucinated piece of advice.
Obviously, models are developed with safeguards: fences the model can't jump over. They won't tell you certain things, they won't tell you to do terrible things.
Yet, there are people basing major life decisions on conversations generated purely by probability.
Generated by putting tokens together, on a probabilistic basis.
Think about it.
304 notes · View notes
canisalbus · 2 years ago
Text
Tumblr media
✦ We were friends and lovers and clueless clowns ✦
6K notes · View notes
giffypudding · 6 months ago
Text
I either love it or hate it
Tumblr media Tumblr media
237 notes · View notes
tears-of-amber · 2 months ago
Text
HELP A GAL OUT! Dm me for a tarot reading that only starts at 10 dollars! I’m trying to earn enough money to have more than two dollars to my name 🥲
Must be able to pay via PayPal in advance.
-a witch in need of funds 🤣
77 notes · View notes
claypigeonpottery · 6 months ago
Text
it snowed yesterday 😭
Tumblr media
like, a lot
124 notes · View notes
jaydick-week · 3 months ago
Text
Tumblr media Tumblr media
Happy Valentine's Day from Jaydick Week ❤️💙
141 notes · View notes
call-me-pup2 · 9 months ago
Text
It's always "dm me" and never "here's my soul, do with me as you wish"
152 notes · View notes
vulto-cor-de-rosa · 2 years ago
Text
Tf you mean Jon and Martin didn't kiss because it would "sound weird"?? Half of the podcast is Jonathan Sims Whimpering in my ear, but having them kiss is where you draw the line???
770 notes · View notes
minsungincorrectquotes · 2 months ago
Text
Minho, entering Jisung's room: You should lock your door. There's terrible people out there.
Jisung: There's terrible people right in here.
74 notes · View notes
morir-mientras-duermo · 5 months ago
Text
I have to concentrate on many things I don't understand, and that's terrible.
80 notes · View notes
pastafossa · 27 days ago
Note
pasta. pasta what do you mean. pasta wdym should be safe. pasta. PASTA. WHAT DO YOU MEAN. SHOULD? BE? SAFE? PASTA PLEASE. HAVEN’T THEY BEEN THROUGH ENOUGH???? PASTA LOOK AT ME!!!! NO!!!!
Tumblr media
36 notes · View notes
queen-morgana91 · 10 months ago
Text
The fact that the writers had to clarify that Aang is not a deadbeat father because this fandom has zero media literacy...
Tumblr media Tumblr media Tumblr media Tumblr media
well this fandom is bashing the air nomads and defending the genocidal colonizers of the series so i'm not expecting anything
147 notes · View notes