not-terezi-pyrope
The Tinfoil Hat Crowd
75K posts
Hello! I'm Blackhole, aka Not-Terezi-Pyrope. Formerly a long-time Homestuck blog, now a general stuff blog, although I am still likely to reblog Homestuck things. Once Hussie tweeted a thing I made and I took my blog title from it. Content warnings: Blog is rated 18+, and so am I. Artwork is largely untagged; occasional cartoon violence and gore in untagged artwork; discussion of some difficult issues in my personal posts; occasional nsfw text in my personal posts; if you think anything I'm likely to post is something you might not want to see, then you probably shouldn't be following me. Pronouns: She/her. Please have a good day! :D
Last active 4 hours ago
not-terezi-pyrope · 2 days ago
Text
Other people texting their partners at 3am: Babe I miss u :c
Me texting my partners at 3am:
Tumblr media
15 notes · View notes
not-terezi-pyrope · 2 days ago
Text
Tumblr media
more very real suselle interactions
6K notes · View notes
not-terezi-pyrope · 3 days ago
Text
Tumblr media
(By Stray Feathers, who apparently doesn't have a tumblr account)
Link to the original post below
166 notes · View notes
not-terezi-pyrope · 3 days ago
Text
I'm not sure non-Londoners realize how much of the city center has at this point been given over to "Harry Potter slop". It's truly staggering, and, as a trans person, maddening.
Like, America has its Disneylands, but the UK, through a sort of symbiotic cancerous frisson between tourism and private businesses, has made the objectively far worse decision to turn a large portion of the historic city's most tourism-heavy areas into a sort of distributed Harry Potter Disneyland. I mean, the areas around Trafalgar Square etc have always been a bit of a theme park, but from my perspective it's got a lot worse over the past decade as people try to milk the brand for every last bit of nostalgia appeal, in the absence of anything else semi-recent the UK is known for to foreign visitors.
These aren't official Harry Potter attractions, mind. That's why I'll call it slop, because it's all off-brand. You walk the five minutes from Charing Cross towards the West End, and every other shop will be something called "Magyk Emporium" or "Cabinet of Spells", branded with yellow on black text in a slightly-off Harry Potter font, selling a mixture of licensed Harry Potter merchandise, London souvenirs, and achingly mainstream-boring pop-culture merch from other fantasy series (think Game of Thrones merch that would have been tacky 10 years ago).
I cannot overemphasize how numerous these places are, and how utterly identical in form factor. It's like a sort of Harry Potter gothic, in that more seem to show up wherever you turn, all eerily slightly off, like they were generated by a copyright-averse AI trying to recreate Harry Potter brand materials without ever mentioning it by name. Hell, some of them probably were at this point!
And don't even talk to me about the half of the Kings Cross concourse that has been taken over by the Harry Potter photo-op tumor.
This is a problem elsewhere in the UK as well (I saw a couple of these in the touristy part of Cambridge recently), but naturally I imagine London has the worst of it. I never see people mention this phenomenon, so I thought the Americans ought to know. The extent to which the UK private sector is so shamelessly trying to milk the brand to foreigners is something I feel gets missed in a lot of conversations about the series' continued relevance (or lack thereof).
100 notes · View notes
not-terezi-pyrope · 5 days ago
Text
Tumblr media
Claiming that the horrifying near-death experience really put things into perspective, area man Leo York announced Tuesday that a recent heroin overdose served as a wake-up call to keep on doing heroin but just be smarter about it. “That’s it. Tomorrow I’m buying a digital scale, and from now on I’m only using on weekends or after work if it was a super hard day,” said York, explaining that the close call had provided him with the clarity to realize he needed to do the hard work of finding a more trustworthy dealer instead of shooting up whatever sketchy back-alley stuff he could score.
Full Story
291 notes · View notes
not-terezi-pyrope · 6 days ago
Text
there should be such a thing as a medical detective. you should be able to hire a doctor to figure out what the fuck is going on with you come hell or high water by consulting whatever specialists they can get their hands on, connecting your constellation of symptoms, etc, instead of 10000 different doctors for every distinct bone in ur body that all just kinda go "dang that sucks idk" when you present with more than one fucking symptom
5K notes · View notes
not-terezi-pyrope · 6 days ago
Text
Nothing will disillusion you with the idea that the world at large cares faster than developing (what seems undeniably now to be) a new chronic health condition, and then watching everyone shrug after three days and be like "why are you still talking about that. Haven't you noticed you're behind on your Tasks? Make sure you don't lose focus or stop striving, if you don't earn that paycheck you will lose your home and your comforts".
I can't do this man, I can't shoulder this new normal shit, I can't function while feeling unwell all the damn time. Like my body's going to shake itself apart, the shit my heart's doing. I'm tired of people not caring and even loved ones paying attention out of awkward obligation while their eyes and tone tell me they hope I'll shut up and start acting normal again soon.
I'm not normal. I'm never going to be normal again. And I'm expected to just live with that.
9 notes · View notes
not-terezi-pyrope · 7 days ago
Text
I think people take the wrong lesson out of "all LLMs are actually trying to do is predict the next token". It's not that we're failing to create a dizzyingly complex artificial world model capable of some level of reasoning and inference, it's that we're failing to find a way to interact with that core system once it exists that isn't filtered through the medium of the silly word games that are the only way we know how to grow these things.
198 notes · View notes
not-terezi-pyrope · 8 days ago
Text
Deltarune is pretty astonishingly well written, but maybe the biggest kudos I can give it is for how it managed to make Asgore interesting again by making him entirely mundane.
Masterclass in trope reframing. Take one played-out archetype (tragic fallen king final boss) and then recast that character as another, equally played-out archetype that sort of rhymes with it in that new context (well-meaning, pitiful, but ultimately sort of unlikeable down-on-his-luck divorced dad), and you get a situation where those two played-out tropes comment on each other such that something compelling sparks at their synthesis - while also being really fucking funny, just conceptually.
Thank you Toby, I think you really were the only one who could bring us this.
156 notes · View notes
not-terezi-pyrope · 8 days ago
Text
It's also why people get so fuckin' obsessed with these things, because once you poke at them for long enough it becomes very clear that The Sauce™ is like, in there, it's just really damn hard to get at it. So you get a generation of computer scientists catching sight of the gleam of gold through a crack in a cave wall and spending the next five years increasingly frantically swinging their pickaxes at said wall, while people look on and remark, like, "I don't know why Jeff is so obsessed with that thing, all it seems to know how to do is faintly glimmer in the dark".
I think people take the wrong lesson out of "all LLMs are actually trying to do is predict the next token". It's not that we're failing to create a dizzyingly complex artificial world model capable of some level of reasoning and inference, it's that we're failing to find a way to interact with that core system once it exists that isn't filtered through the medium of the silly word games that are the only way we know how to grow these things.
198 notes · View notes
not-terezi-pyrope · 8 days ago
Text
Machine learning is maddening because it's both a cheat code and a crutch. We know that certain kinds of complex systems can exist, but we don't have the smarts or time or labour hours to build them from scratch ourselves.
But we do have enough smarts, labour and time to design algorithms that latch onto data that already exists and is complex enough that modelling it requires one of said complex systems, and then build out the necessary complex system automatically. Which works! But because it needs that scaffolding, and because the algorithms have no intent beyond "process this dataset", the resulting complex system will inevitably consist of a core that is the abstract reasoning engine you want (one that does all the tricky inference you'd usually need a human for, and is in many ways context agnostic, because that's the most efficient way to build a reasoning machine), mindmelded inextricably with a bunch of data input parsing and output formatting mechanisms, in such a deeply integrated way that no human can pick them apart from each other.
Because the only large enough repository of general purpose data that exists is the written word, the scaffolding for the core of all language models is streams of human written text. Now the fact that they have a deeply integrated tendency to read text as input is fine, because that's the way we'd want to communicate with them anyway. The tricky part is at the point of output, where the shape of the learning algorithm's scaffolding produces a requirement to mimic plausible training data text, agnostic of any other guidance we might desire.
This requirement is melded into the endpoint architecture of the reasoning engine like an inoperable brain tumor, and the entirety of the last decade of AI research into post-training, fine tuning and conditioning, reinforcement learning based on feedback, etc, has been trying to bypass it while still capturing meaningful output bent to the task we actually want the AI system to perform (which is typically not directly predicting text streams).
Most current LLM failure modes (such as hallucination) result from the system falling back to this default mode of output instead of producing the kind of outputs we actually want, whenever the two imperatives are in conflict. In the case of hallucination, this can happen when the reasoning engine can't come up with factually rooted information, but "hallucinating" something plausible looks more in line with the training set than just outputting the words "I don't know". The output component of the model is weighing tendencies in the training data and in the current input "prompt" to sound confident and knowledgeable against tendencies and instructions to be factually correct; post-training is designed to sway it towards the latter, but there's no silver bullet that will make it pick that route in all circumstances, because the underlying mechanism of the model is still rooted in that initial training.

But the key takeaway is that the problem has never been that we can't create the kind of quite generalized data-processing models we want at the core of AI applications; it's that we can't not create the messy cognitive scaffolding that makes the core of the model hard to talk to and utilize in other contexts. And that's a different sort of problem to have than what people say about LLMs being "too dumb" and "only knowing how to do one thing", I think.
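(For the curious: here's a minimal toy sketch of what "predict the next token" cashes out to mechanically. This is my own illustration, not anybody's production code; it's PyTorch, character-level, and every name and the training text are made up. Note that nothing in the objective rewards saying "I don't know": sampling at the end has to emit *something* plausible-looking, which is hallucination in miniature.)

import torch
import torch.nn as nn

# Toy corpus; the only "world" this model will ever know.
text = "the cat sat on the mat. the cat sat on the hat."
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class TinyLM(nn.Module):
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)
    def forward(self, x):
        return self.head(self.emb(x))  # logits over the next character

model = TinyLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# The entire training objective: given character t, assign high
# probability to character t+1. That's the whole "silly word game".
for step in range(500):
    logits = model(data[:-1])
    loss = nn.functional.cross_entropy(logits, data[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generation: the model must always pick *some* next character; there
# is no built-in "abstain" unless the training data made one likely.
idx = data[:1]
for _ in range(40):
    probs = torch.softmax(model(idx[-1:]), dim=-1)
    idx = torch.cat([idx, torch.multinomial(probs, 1)[0]])
print("".join(chars[i] for i in idx))

Scale the corpus up to the internet and the model up to billions of parameters and you get the situation described above: the interesting machinery ends up in there, but the only handle we have on it is this game.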
I think people take the wrong lesson out of "all LLMs are actually trying to do is predict the next token". It's not that we're failing to create a dizzyingly complex artificial world model capable of some level of reasoning and inference, it's that we're failing to find a way to interact with that core system once it exists that isn't filtered through the medium of the silly word games that are the only way we know how to grow these things.
198 notes · View notes
not-terezi-pyrope · 8 days ago
Text
I think people take the wrong lesson out of "all LLMs are actually trying to do is predict the next token". It's not that we're failing to create a dizzyingly complex artificial world model capable of some level of reasoning and inference, it's that we're failing to find a way to interact with that core system once it exists that isn't filtered through the medium of the silly word games that are the only way we know how to grow these things.
198 notes · View notes
not-terezi-pyrope · 8 days ago
Text
i can't get into the sexy mechsploitation pilot stuff cause i know for a fact that wouldn't be me. i get the appeal of imagining oneself as some emancipated little battle-dancer hollowed out by years of martial brutality but this universe's economy needs freighter captains and the fact is that i have the body and temperament of a clydesdale so there's no WAY they wouldn't scoop me up for long-hauling
285 notes · View notes
not-terezi-pyrope · 8 days ago
Text
Kinda wish I wasn't feeling more and more vindicated every day about calling it from the start that treating the AI issue like a moral crusade, where you have like a moral obligation to prioritize signalling and reaffirming your hatred of generative AI at every possible chance, would lead to a lot of ostensibly "progressive" people uncritically parroting extremely reactionary rhetoric.
9K notes · View notes
not-terezi-pyrope · 9 days ago
Text
Actually I think that was just a licensed fanmusic release
0 notes
not-terezi-pyrope · 9 days ago
Text
Tumblr media
he just yelled at them 10 minutes before this
167 notes · View notes
not-terezi-pyrope · 9 days ago
Text
Tumblr media
tonight's weather is starry skies
3K notes · View notes