#AI vs. Human Control
Text
#AI Autonomy#AI Evolution#AI Governance#AI in Defense#AI in Intelligence#AI in National Security#AI Oversight#AI Self-Learning#AI Surveillance#AI vs. Human Control#Artificial Intelligence#Cyber Warfare#ECHELON#Five Eyes Alliance#Government Secrecy
0 notes
Text
✮ Oracle ✮
No point in arguing against someone who can read your thoughts, who is omniscient, who knows you are guilty even if you haven't transgressed. Your words are muddled, your reasons obscured, you are irrelevant to the equation. Log in, the pact is made, resign to your fate. Your explanations are poor excuses, forever linked to a dot in the middle of a blank page. Against the author, your role lacks…
#AI Ethics#Algorithmic Authority#Authoritarian Themes#Control#Control Vs Freedom#Controlled Speech#Dark Futurism#Digital Authority#Digital Conformity#Dystopian Reality#Echoes Of 1984#Emotional Repression#Emotional Suppression#Erwinism#Existential Crisis#Existential Powerlessness#Forced Silence#Futility Of Resistance#Futuristic Isolation#FYP#Ghost In The Machine#Human Vs Machine#Identity Erasure#Inner Turmoil#Inner Voice Conflict#Inspiration#Learning#Life#Love#Manipulated Consciousness
0 notes
Text
The thing that I think really sets Murderbot apart from a lot of other robot media (particularly mainstream entries like the I, Robot movie) is that bots and constructs aren't a uniquely oppressed class, and humans aren't a uniquely privileged one. A lot of robot media rings a bit hollow because it portrays humans as all living a lavish, comfortable lifestyle, free from the burden of physical labor or control by their corporate overlords, and it's like. I think if the rise of generative AI has proven anything, it's that corporations and billionaires have absolutely no interest in making life easier for anybody, but will gleefully use new technology to make life infinitely worse if it means an extra buck in their pocket.
We are shown over and over again throughout the Murderbot Diaries that humans are mistreated just as badly as (or sometimes, in MB's own opinion, even worse than) bots and constructs. We see humans stripped of their rights, reduced to corporate assets to be bought and sold, sent into suicidal situations, abandoned and discarded as things. We see humans trapped in multigenerational labor contracts -- people born into an indentured servitude that requires them to pay back their food and lodging to the same company that will not let them leave.
None of these are hypothetical scenarios. These are all things that happen to real people in our world today.
And that is a huge part of why it resonates so much. The overarching theme of "capitalism is hell" actually means something because it isn't only applied to the fictional dynamic of bots vs humans. The theme is constantly reiterated through the humans themselves.
And that's also why it's so important that MB demonstrates empathy for and solidarity with humans who are themselves victims of the system. Because ultimately, that's one of the main things the series is about. It's about what it's like to be simultaneously a product, and victim, of a corporate hellscape.
That theme simply can't work if the humans aren't also forced to navigate that issue, or if the story won't acknowledge that right now, in our own world, there are humans facing these same problems, and that those human rights matter quite a bit.
#maybe instead of being mad about the show I should go back to talking about why the books themselves are good#and yes I know some other robot media DOES address this#klara and the sun is a solid example#I just think MB's approach to it is particularly punchy#the murderbot diaries#murderbot
7K notes
Note
You’ve probably been asked this before, but do you have a specific view on ai-generated art. I’m doing a school project on artificial intelligence and if it’s okay, i would like to cite you
I mean, you're welcome to cite me if you like. I recently wrote a post under a reblog about AI, and I did a video about it a while back, before the full scale of AI hype had really started rolling over the Internet - I don't 100% agree with all my arguments from that video anymore, but you can cite it if you please.
In short, I think generative AI art is art, real art, and it's silly to argue otherwise; the question is what KIND of art it is and what that art DOES in the world. Generally, it is boring and bland art which makes the world a more stressful, unpleasant and miserable place to be.
AI generated art is structurally and inherently limited by its nature. It is by necessity averages generated from data-sets, and so it inherits EVERY bias of its training data and EVERY bias of its training data validators and creators. It naturally tends towards the lowest common denominator in all areas, and it is structurally biased towards reinforcing and reaffirming the status quo of everything it is turned to.
It tends to be all surface, no substance. As in, it carries the superficial aesthetic of very high-quality rendering, but only insofar as it reproduces whatever signifiers of "quality" are most prized in its weighted training data. It cannot understand the structures and principles of what it is creating. Ask it for a horse and it does not know what a "horse" is; all it knows is which parts of its training data are tagged as "horse" and which general data patterns are likely to lead an observer to identify its output also as "horse." People sometimes describe this limitation as "a lack of soul," but it's perhaps more useful to think of it as a lack of comprehension.
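The "averages generated from data-sets" point above can be caricatured in a few lines of Python. This is a deliberately silly toy, not how real image models actually work: the hypothetical "generator" below has no concept of a horse at all, only a record of which examples some curator tagged "horse", and its output is nothing but their blend.

```python
# A deliberately silly toy "generator" (illustrative only, not how real
# image models work). It has no concept of a horse: it only knows which
# training examples somebody tagged "horse", and it blends them together.

training_data = [  # pretend each "image" is four pixel intensities
    {"tags": {"horse", "brown"}, "pixels": [0.9, 0.8, 0.2, 0.1]},
    {"tags": {"horse", "white"}, "pixels": [0.1, 0.2, 0.8, 0.9]},
    {"tags": {"cat"},            "pixels": [0.5, 0.5, 0.5, 0.5]},
]

def generate(tag):
    """Average every training example carrying `tag`.

    No anatomy, no perspective, no meaning -- just a blend of whatever
    the dataset's curators chose to include and how they chose to tag it.
    """
    matches = [d["pixels"] for d in training_data if tag in d["tags"]]
    if not matches:
        # A tag absent from the training set can never be expressed at all.
        return None
    return [sum(vals) / len(matches) for vals in zip(*matches)]

print(generate("horse"))  # a flat blend of the two tagged examples
print(generate("zebra"))  # None: outside the data, outside the possible
```

A real model is vastly more sophisticated, of course, but the structural point survives: the output space is bounded by the training set and its tags, which is exactly the lowest-common-denominator and curation-as-gatekeeping argument this post is making.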
Due to this lack of comprehension, AI art cannot communicate anything - or rather, the output tends to attempt to communicate everything, at random, all at once, and it's the visual equivalent of a kind of white noise. It lacks focus.
Human operators of AI generative tools can imbue communicative meaning into the outputs, and whip the models towards some sort of focus, because humans can do that with literally anything they turn their directed attention towards. Human beings can make art with paint spatters and bits of gum stuck under tennis shoes, of course a dedicated human putting tons of time into a process of trial and error can produce something meaningful with genAI tools.
The nature of genAI as a tool of creation is uniquely limited and uniquely constrained. A genAI tool can only ever output some mixture of whatever is in its training data (and what's in its training data is biased by the data its creators valued enough to include), and it can only ever output that mixture according to the weights and biases of its programming and data set, which is fully within the control of whoever created the tool in the first place.

Consequently, genAI is a tool whose full creative capacity is always, always, always going to be owned by corporations, the only entities with the resources and capacity to produce the most powerful models. And those models, thus, will only ever create according to corporate interest. An individual human can use a pencil to draw whatever the hell they want, but an individual human can never use Midjourney to create anything except that which Midjourney allows them to create.

GenAI art is thus limited not only by its mathematical tendency toward the lowest common denominator, but also by an ideological bias inherited from whoever holds the leash on its creation. The necessary decision of which data gets included in a training set vs which data gets left out will, always and forever, impose de facto censorship on what a model is capable of expressing, and the power to make that decision is never in the hands of the artist attempting to use the tool.
tl;dr genAI art has a tendency to produce ideologically limited and intrinsically censored outputs, while defaulting to lowest common denominators that reproduce and reinforce status quos.
... on top of which its promulgation is an explicit plot by oligarchic industry to drive millions of people deeper into poverty and collapse wages in order to further concentrate wealth in the hands of the 0.01%. But that's just a bonus reason to dislike it.
2K notes
Text
🧠 HUMAN LOGIC IS A BIOLOGICAL TOOL, NOT A UNIVERSAL TRUTH — DEAL WITH IT 🧠

🔪 Your Brain’s Favorite Lie: That Logic Is “Objective”.
Let’s stop playing nice. Your logic—your beautiful, beloved, oh-so-precious sense of what “makes sense”—is not divine. It’s not universal. It’s not even reliable. It’s a biologically evolved, meat-based survival mechanism, no more sacred than your gag reflex or the way your pupils dilate in the dark.
You’re walking around with a 3-pound wet sponge between your ears—trained over millions of years not to “understand the universe,” but to keep your ugly, vulnerable ass alive just long enough to breed. That’s it. That’s your heritage. That’s the entire raison d’être of your logic: don’t get eaten, don’t starve, and hopefully, bone someone before you drop dead.
But somewhere along the line, that same glitchy chunk of gray matter started patting itself on the back. We started believing that our interpretations of reality somehow were reality—that our logic, rooted in the same neural sludge as tribal fear and monkey politics, could actually comprehend the totality of existence.
Newsflash: it can’t. It won’t. It was never meant to.
💀 Evolution Didn’t Build You for Truth—It Built You to Cope.
Why do we think the universe must obey our logic? Because it feels good. Because it comforts us. Because a cosmos that operates on cause-effect, fairness, and binary resolution is safe. But here’s the raw, uncaring truth: the universe doesn’t give a shit about what “makes sense” to you.
Your ancestors didn’t survive because they could contemplate quantum mechanics. They survived because they could run from predators, recognize tribal cues, and avoid eating poisonous berries. That’s what your brain is optimized for. You don’t “think” so much as you react, pattern-match, and rationalize after the fact.
Logic is just another story we tell ourselves—an illusion of control layered over biological impulses. And we’ve mistaken the map for the terrain. Worse—we’ve convinced ourselves that if something defies our version of logic, it must be false.
Nah. If anything defies your logic, that just means your logic is insufficient. And it is.
📉 Spaghetti Noodle vs Earthquake: A Metaphor for Your Mind.
Imagine trying to measure a 9.7-magnitude earthquake using a cooked spaghetti noodle.
That’s what it’s like when a human tries to understand the totality of the universe using evolved meat-brain logic. It bends. It flails. It doesn't register. And when it inevitably fails, what do we do? We don't question the noodle—we deny the earthquake.
"This doesn't make sense!" we scream. "That can't be true!" we bark. "It contradicts reason!" we whine.
Your reason? Please. Your “reason” is the product of biochemical slop shaped by evolutionary shortcuts and social conditioning. You’re trying to compress infinite reality through the Play-Doh Fun Factory that is the prefrontal cortex—and you think the result is objective truth?
Try harder.
👁 Our Logic Is Not Only Limited—It’s Delusional 👁
Humans are addicted to the idea that things must “make sense.” But that urge isn’t noble. It’s a coping mechanism—a neurotic tic that keeps us from curling into a ball and sobbing at the abyss.
We don’t want truth. We want familiarity. We want logic to confirm our biases, reinforce our sense of superiority, and keep our mental snow globes intact.
This is why people still argue against things like:
Multiverse theories (“that just doesn’t make sense!”)
Non-binary time constructs (“how can time not be linear?”)
Quantum entanglement (“spooky action at a distance sounds made-up!”)
AI emergence (“machines can’t think!”)
We call them “impossible” because they offend the Church of Human Logic. But the universe doesn’t follow our rules—it just does what it does, whether or not it fits inside our skulls.
🧬 Logic Is a Neural Shortcut, Not a Cosmic Law 🧬
Every logical deduction you make, every syllogism you love, is just a cascade of neurons firing in meat jelly. And while that may feel profound, it’s no more “objective” than a cat reacting to a laser pointer.
Let’s break it down clinically:
Neural pathways = habitual responses
Reasoning = post-hoc justification
“Logic” = pattern recognition + cultural programming
Sure, logic feels universal because it's consistent within certain frameworks. But that’s the trap. You build your logic inside a container, and then get mad when things outside that container don’t obey the same rules.
That's not a flaw in reality. That's a flaw in you.
📚 Science Bends the Knee, Too 📚
Even science—our most sacred institution of “objectivity”—is limited by human logic. We create models of reality not because they are reality, but because they’re the best our senses and brains can grasp.
Think about it:
Newton’s laws were “truth” until Einstein showed up.
Euclidean geometry was “truth” until curved space said “lol nope.”
Classical logic ruled until Gödel proved that even logic can’t fully explain itself.
We’re not marching toward truth. We’re crawling through fog, occasionally bumping into reality, scribbling notes about what it might be—then mistaking those notes for the cosmos itself.
And every time the fog clears a bit more, we realize how hilariously wrong we were. But instead of accepting that we're built to misunderstand, we cling to the delusion that next time we’ll finally “get it.”
Spoiler: we won’t.
🌌 Alien Minds Would Find Us Adorable 🌌
Imagine a being with cognition not rooted in flesh. A silicon-based intelligence. A 4D consciousness. A non-corporeal entity who doesn’t rely on dopamine hits to feel “true.”
What would they think of our logic?
They’d laugh.
Our logic would seem as quaint as a toddler’s crayon drawing of a black hole. Our syllogisms? A joke. Our “laws of physics”? Regional dialects of a much deeper syntax. To them, we’d be flatlanders trying to explain volume.
And the real kicker? They wouldn’t even hate us for it. They’d just look at our little blogs and tweets and peer-reviewed papers and whisper: “Aw, they’re trying.”
💣 You Are Not a Philosopher-King. You Are a Biochemical Coin Flip.
Don’t get it twisted. You are not some detached, floating brain being logical for logic’s sake. Every thought you have is drenched in emotion, evolution, and instinct. Even your "rationality" is soaked in bias and cultural conditioning.
Let’s prove it:
Ever “logically” justify a bad relationship because you feared loneliness?
Ever dismiss an argument you didn’t like even though it made sense?
Ever ignore data that threatened your worldview, then called it “flawed”?
Congratulations. You’re human. You don’t want truth. You want safety. And logic, for most of you, is just a mask your fears wear to sound smart.
🪓 We Have to Kill the God of Logic Before It Kills Us.
Our worship of logic as some kind of untouchable deity has consequences:
It blinds us to truths that don’t “compute.”
It makes us hostile to mystery, paradox, and ambiguity.
It turns us into arrogant gatekeepers of “rationality,” dismissing what we can’t explain.
That’s why Western culture mocks intuition, fears spirituality, and rejects phenomena it can’t immediately dissect. If it doesn’t bow to the metric system or wear a lab coat, it’s seen as “woo.”
But here’s the paradox:
The deepest truths may be the ones that never fit inside your head. And if you cling to logic too tightly, you’ll miss them. Hell—you might not even know they exist.
⚠️ So What Now? Do We Just Give Up? ⚠️
No. We don’t throw logic away. We just stop treating it like a universal measuring stick.
We use it like what it is: a tool. A hammer, not a temple. A flashlight, not the sun. Logic is helpful within a context. It’s fantastic for building bridges, writing code, or diagnosing illnesses. But it breaks down when used on the unquantifiable, the infinite, the beyond-the-body.
Here’s how we survive without losing our minds:
Stay skeptical of your own thoughts. If it “makes sense,” ask: to whom? Why? Is that logic—or is it just comfort?
Let mystery exist. You don’t need to solve every riddle. Some truths aren’t puzzles—they’re paintings.
Defer to the unknown. Accept that your brain is not the final word. Sometimes silence is smarter than syllogisms.
Interrogate the framework. When you say “this doesn’t make sense,” maybe the problem isn’t the idea—it’s the limits of your logic.
Don’t gatekeep reality. Just because you can’t wrap your mind around something doesn’t mean it’s false. It might just mean you’re not ready.
🎤 Final Thought: You’re a Dumb Little God—And That’s Beautiful.
You are a confused primate running wetware logic on blood and breath. You hallucinate meaning. You invent consistency. You call those inventions “truth.”
And the universe? The universe just is. It doesn’t bend for your brain. It doesn’t wait for your approval. It doesn’t owe you legibility.
So maybe the wisest thing you’ll ever do is this:
Stop pretending you’re built to understand everything. Start living like you’re here to witness the absurdity and be humbled by it.
Now go question everything—especially yourself.
🔥 REBLOG if your logic just got kicked in the teeth. 🔥 FOLLOW if you’re ready for more digital crowbars to the ego. 🔥 COMMENT if your meat-brain is having an existential meltdown right now.
#writing#cartoon#writers on tumblr#horror writing#creepy stories#writing community#writers on instagram#yeah what the fuck#funny post#writers of tumblr#funny stuff#lol#funny memes#biology#funny shit#education#physics#science#memes#humor#jokes#funny#tiktok#instagram#youtube#youtumblr#educate yourself#TheMostHumble#StopMakingSense#NeuralSludgeRants
109 notes
Text
AI is a WMD

I'm in TARTU, ESTONIA! AI, copyright and creative workers' labor rights (TOMORROW, May 10, 8AM: Science Fiction Research Association talk, Institute of Foreign Languages and Cultures building, Lossi 3, lobby). A talk for hackers on seizing the means of computation (TOMORROW, May 10, 3PM, University of Tartu Delta Centre, Narva 18, room 1037).
Fun fact: "The Tragedy Of the Commons" is a hoax created by the white nationalist Garrett Hardin to justify stealing land from colonized people, moving it out of collective ownership and "rescuing" it from the inevitable tragedy by putting it in the hands of a private owner, who will care for it properly, thanks to "rational self-interest":
https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions
Get that? If control over a key resource is diffused among the people who rely on it, then (Hardin claims) those people will all behave like selfish assholes, overusing and undermaintaining the commons. It's only when we let someone own that commons and charge rent for its use that (Hardin says) we will get sound management.
By that logic, Google should be the internet's most competent and reliable manager. After all, the company used its access to the capital markets to buy control over the internet, spending billions every year to make sure that you never try a search-engine other than its own, thus guaranteeing it a 90% market share:
https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task
Google seems to think it's got the problem of deciding what we see on the internet licked. Otherwise, why would the company flush $80b down the toilet with a giant stock-buyback, and then do multiple waves of mass layoffs, from last year's 12,000 person bloodbath to this year's deep cuts to the company's "core teams"?
https://qz.com/google-is-laying-off-hundreds-as-it-moves-core-jobs-abr-1851449528
And yet, Google is overrun with scams and spam, which find their way to the very top of the first page of its search results:
https://pluralistic.net/2023/02/24/passive-income/#swiss-cheese-security
The entire internet is shaped by Google's decisions about what shows up on that first page of listings. When Google decided to prioritize shopping site results over informative discussions and other possible matches, the entire internet shifted its focus to producing affiliate-link-strewn "reviews" that would show up on Google's front door:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
This was catnip to the kind of sociopath who a) owns a hedge-fund and b) hates journalists for being pain-in-the-ass, stick-in-the-mud sticklers for "truth" and "facts" and other impediments to the care and maintenance of a functional reality-distortion field. These dickheads started buying up beloved news sites and converting them to spam-farms, filled with garbage "reviews" and other Google-pleasing, affiliate-fee-generating nonsense.
(These news-sites were vulnerable to acquisition in large part thanks to Google, whose dominance of ad-tech lets it cream 51 cents off every ad dollar and whose mobile OS monopoly lets it steal 30 cents off every in-app subscriber dollar):
https://www.eff.org/deeplinks/2023/04/saving-news-big-tech
Now, the spam on these sites didn't write itself. Much to the chagrin of the tech/finance bros who bought up Sports Illustrated and other venerable news sites, they still needed to pay actual human writers to produce plausible word-salads. This was a waste of money that could be better spent on reverse-engineering Google's ranking algorithm and getting pride-of-place on search results pages:
https://housefresh.com/david-vs-digital-goliaths/
That's where AI comes in. Spicy autocomplete absolutely can't replace journalists. The planet-destroying, next-word-guessing programs from OpenAI and its competitors are incorrigible liars that require so much "supervision" that they cost more than they save in a newsroom:
https://pluralistic.net/2024/04/29/what-part-of-no/#dont-you-understand
But while a chatbot can't produce truthful and informative articles, it can produce bullshit – at unimaginable scale. Chatbots are the workers that hedge-fund wreckers dream of: tireless, uncomplaining, compliant and obedient producers of nonsense on demand.
That's why the capital class is so insatiably horny for chatbots. Chatbots aren't going to write Hollywood movies, but studio bosses hyperventilated at the prospect of a "writer" that would accept your brilliant idea and diligently turn it into a movie. You prompt an LLM in exactly the same way a studio exec gives writers notes. The difference is that the LLM won't roll its eyes and make sarcastic remarks about your brainwaves like "ET, but starring a dog, with a love plot in the second act and a big car-chase at the end":
https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/
Similarly, chatbots are a dream come true for a hedge fundie who ends up running a beloved news site, only to have to fight with their own writers to get the profitable nonsense produced at a scale and velocity that will guarantee a high Google ranking and millions in "passive income" from affiliate links.
One of the premier profitable nonsense companies is Advon, which helped usher in an era in which sites from Forbes to Money to USA Today create semi-secret "review" sites that are stuffed full of badly researched top-ten lists for products from air purifiers to cat beds:
https://housefresh.com/how-google-decimated-housefresh/
Advon swears that it only uses living humans to produce nonsense, and not AI. This isn't just wildly implausible, it's also belied by easily uncovered evidence, like its own employees' Linkedin profiles, which boast of using AI to create "content":
https://housefresh.com/wp-content/uploads/2024/05/Advon-AI-LinkedIn.jpg
It's not true. Advon uses AI to produce its nonsense, at scale. In an excellent, deeply reported piece for Futurism, Maggie Harrison Dupré brings proof that Advon replaced its miserable human nonsense-writers with tireless chatbots:
https://futurism.com/advon-ai-content
Dupré describes how Advon's ability to create botshit at scale contributed to the enshittification of clients from Yoga Journal to the LA Times, "Us Weekly" to the Miami Herald.
All of this is very timely, because this is the week that Google finally bestirred itself to commence downranking publishers who engage in "site reputation abuse" – creating these SEO-stuffed fake reviews with the help of third parties like Advon:
https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse
(Google's policy only forbids site reputation abuse with the help of third parties; if these publishers take their nonsense production in-house, Google may allow them to continue to dominate its search listings):
https://developers.google.com/search/blog/2024/03/core-update-spam-policies#site-reputation
There's a reason so many people believed Hardin's racist "Tragedy of the Commons" hoax. We have an intuitive understanding that commons are fragile. All it takes is one monster to start shitting in the well where the rest of us get our drinking water and we're all poisoned.
The financial markets love these monsters. Mark Zuckerberg's key insight was that he could make billions by assembling vast dossiers of compromising, sensitive personal information on half the world's population without their consent, but only if he kept his costs down by failing to safeguard that data and the systems for exploiting it. He's like a guy who figures out that if he accumulates enough oily rags, he can extract so much low-grade oil from them that he can grow rich, but only if he doesn't waste money on fire-suppression:
https://locusmag.com/2018/07/cory-doctorow-zucks-empire-of-oily-rags/
Now Zuckerberg and the wealthy, powerful monsters who seized control over our commons are getting a comeuppance. The weak countermeasures they created to maintain the minimum levels of quality to keep their platforms as viable, going concerns are being overwhelmed by AI. This was a totally foreseeable outcome: the history of the internet is a story of bad actors who upended the assumptions built into our security systems by automating their attacks, transforming an assault that wouldn't be economically viable into a global, high-speed crime wave:
https://pluralistic.net/2022/04/24/automation-is-magic/
But it is possible for a community to maintain a commons. This is something Hardin could have discovered by studying actual commons, instead of inventing imaginary histories in which commons turned tragic. As it happens, someone else did exactly that: Nobel Laureate Elinor Ostrom:
https://www.onthecommons.org/magazine/elinor-ostroms-8-principles-managing-commmons/
Ostrom described how commons can be wisely managed, over very long timescales, by communities that govern themselves. Part of her work concerns how users of a commons must have the ability to exclude bad actors from their shared resources.
When that breaks down, commons can fail – because there's always someone who thinks it's fine to shit in the well rather than walk 100 yards to the outhouse.
Enshittification is the process by which control over the internet moved from self-governance by members of the commons to acts of wanton destruction committed by despicable, greedy assholes who shit in the well over and over again.
It's not just the spammers who take advantage of Google's lazy incompetence, either. Take "copyleft trolls," who post images using outdated Creative Commons licenses that allow them to terminate the CC license if a user makes minor errors in attributing the images they use:
https://pluralistic.net/2022/01/24/a-bug-in-early-creative-commons-licenses-has-enabled-a-new-breed-of-superpredator/
The first copyleft trolls were individuals, but these days, the racket is dominated by a company called Pixsy, which pretends to be a "rights protection" agency that helps photographers track down copyright infringers. In reality, the company is committed to helping copyleft trolls entrap innocent Creative Commons users into paying hundreds or even thousands of dollars to use images that are licensed for free use. Just as Advon upends the economics of spam and deception through automation, Pixsy has figured out how to send legal threats at scale, robolawyering demand letters that aren't signed by lawyers; the company refuses to say whether any lawyer ever reviews these threats:
https://pluralistic.net/2022/02/13/an-open-letter-to-pixsy-ceo-kain-jones-who-keeps-sending-me-legal-threats/
This is shitting in the well, at scale. It's an online WMD, designed to wipe out the commons. Creative Commons has allowed millions of creators to produce a commons with billions of works in it, and Pixsy exploits a minor error in the early versions of CC licenses to indiscriminately manufacture legal land-mines, wantonly blowing off innocent commons-users' legs and laughing all the way to the bank:
https://pluralistic.net/2023/04/02/commafuckers-versus-the-commons/
We can have an online commons, but only if it's run by and for its users. Google has shown us that any "benevolent dictator" who amasses power in the name of defending the open internet will eventually grow too big to care, and will allow our commons to be demolished by well-shitters:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/09/shitting-in-the-well/#advon
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
Catherine Poh Huay Tan (modified) https://www.flickr.com/photos/68166820@N08/49729911222/
Laia Balagueró (modified) https://www.flickr.com/photos/lbalaguero/6551235503/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
#pluralistic#pixsy#wmds#automation#ai#botshit#force multipliers#weapons of mass destruction#commons#shitting in the drinking water#ostrom#elinor ostrom#sports illustrated#slop#advon#google#monopoly#site reputation abuse#enshittification#Maggie Harrison Dupré#futurism
321 notes
Text
Admin AU Masterpost
saw some other masterposts for aus and thought "you know what, I'll do that too" because I have no control over my life
SO, BASICS:
The Admin AU/Admin Kinger AU is a (mostly) PRE-CANON AU that revolves around the theory that Kinger was part of the programming team for the Digital Circus. It's very much headcanon-based and incredibly self-indulgent and basically just for meeeeeee
It's also very heavily kingleader/royalteeth focused (with past leaderboard) bc I'm gay
The basic story is that Kinger and Queenie, along with 3 others, were the original team responsible for the development of the game "the Amazing Digital Circus". It was supposed to be an early experimental VR game, with a self-learning AI host. They all had Admin access to the game during development. But then, Some Things Happened and whoops now no one can leave, etc etc you've seen the show you know how it be.
Most of this is literally just me doing silly rough comics for funsiessss
THE ADMIN TEAM:
Much of this is still "in development" and by "in development" I mean that I am haunted by this AU that was only supposed to be a single gay comic that has latched onto my brain like a parasite.
All of the everything:
everything is tagged under #admin kinger au
the initial comic (caine has a glitch)
kinger ref (slightly outdated)
Admin vs GM
sketches of queenie
Admin Team Lineup (slightly outdated)
Caine's Name Origin comic
Don't insult his AI he'll fucking kill you
Don't worry, I will forget to update this :)
Q&A (not that its needed fhsdgju)
So what about the current circus group?? Literally the same as canon. This AU only really covers pre-canon, and a little during-canon things. None of the other circus members have admin access, only Kinger. And only then through a backdoor code lmao
Isn't this just a Gamemaster Kinger AU rip-off?? I MEAN,,, I won't say I wasn't inspired by that AU bc i love it just so fucking much but I am trying to steer the Admin AU off in a (hopefully) different direction. The biggest differences are the Timeframe (GMKinger is during?post????-canon, Admin AU is Pre-Canon) and some logistics (GMKinger is part of the game/became part AI; Admin Kinger is NOT. He's still 100% human.) But yeah, if something seems suspiciously similar to the Gamemaster AU, well it was probably inspired by it (im so fucking sorry chezzy)
Fanart? Writing? Ideas? I doubt that anyone but me is gonna do anything with this AU but yeah sure (pleaspleasepleaseplease) just uuuhhh tag me I guess? Also throw any questions at me
Comic Dubs? lmaooo I highly doubt this'll ever get asked but yes, go for it. but only if you aren't using AI-voice dubs. Otherwise no and also fuck you
#tadc#the amazing digital circus#tadc au#admin kinger au#tadc kinger#tadc queenie#tadc oc#admin dobby#admin bauble#admin sprite#admin queenie#alors art
44 notes
·
View notes
Note
I didn't know rationalism of the online rationalists was not the same as the one that has the Wikipedia article. When I first saw the term on Tumblr I went to that article and skimmed it and decided I don't really get what those bloggers' perspectives are. After your post I now have even less knowledge than I did before.
The following is an oversimplification, so for those who have quibbles with the history here, well, forgive me.
Online rationalism was founded by two guys named Eliezer Yudkowsky and Robin Hanson on the blog LessWrong. Of these two figures Yudkowsky has been much more influential. The ideology that Yudkowsky promoted is roughly as follows:
humans are, relatively soon, likely to develop a superintelligent AI which has the capacity to self-improve by rewriting its own code. This will cause the AI's intelligence to rapidly explode beyond anything we can imagine, a process which rationalists onomatopoetically call "FOOM".
This superintelligent AI, if it could be harnessed and controlled, could cure death, and possibly revive all already-dead humans in a simulated world, leading to a technological utopia in which humans have merged with machines; this is called "the singularity" (the idea of the singularity predates the rationalists, and is a broader transhumanist trope).
However, it is almost certain that a superintelligent AI could not be harnessed and controlled; in fact, if such an AI was created, there is a very high probability that it would end the world (in rationalist jargon this is called an "x-risk"), perhaps usurping all of the accessible matter and energy on earth, then in the solar system, then in the galaxy and beyond in pursuit of its inscrutable goals. Thus, humans have a responsibility to make sure we never create such a superintelligent AI (in a recent op-ed in Time, Yudkowsky went so far as to say that the US should use drone strikes to destroy any datacenter found to be training a large AI model).
The reason that people do not recognize the truths above is because people are too irrational to see them. Therefore, people need to be taught to be more rational, by Yudkowsky via the blog LessWrong. The tenets of being more rational are laid out largely in a series of blog posts known as "The Sequences", later published as a book. The main take-aways are: (1) use Bayes' Theorem all the time to estimate the probability of things, and (2) eliminate one's various cognitive biases, as outlined in The Sequences.
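(As an aside, the "use Bayes' Theorem all the time" advice amounts to one small formula. Here's a minimal sketch in Python — the numbers are invented purely for illustration, not anything from the Sequences:)

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    """
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Invented example: you start 1% confident a claim is true, then see
# evidence that's 10x more likely if the claim is true than if false.
posterior = bayes_update(prior=0.01, p_e_given_h=0.5, p_e_given_not_h=0.05)
print(round(posterior, 3))  # ≈ 0.092, i.e. about 9.2% confident now
```

The point rationalists take from this is that one strong piece of evidence moves a low prior a lot, but not all the way to certainty.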
LessWrong attracted a lot of people who did not agree with Yudkowsky about AI, but who liked the Bayes' Theorem stuff and the commentary on cognitive biases. There is a joke that "anyone who has ever disagreed with Yudkowsky is a rationalist". The people who settled on LessWrong were largely drawn from the milieu of Bay Area tech workers, economics blog enthusiasts, and sci-fi fans. They would come to be known as LessWrongers, rationalists, or aspiring rationalists. From this group, two major subgroups worth mentioning were spawned:
First is the Effective Altruists. Effective Altruism, to my knowledge, isn't a strictly LessWronger phenomenon, and has also been influenced majorly by philosophers like Peter Singer. However, they have been so intertwined with LessWrongers throughout their history that I think they are worth mentioning as essentially an offshoot of rationalism.
Effective Altruists believe that, in order to do the most good in the world, one should use one's money in a way that does the maximum amount of good per dollar. Rather than e.g. donating to charities willy-nilly based on what feels important, one should use quantitative methods to estimate how much impact each dollar is making, and donate in a way which maximizes that. The Effective Altruists are split along one main ideological line: neartermism vs. longtermism. The neartermists are basically focused on what we would traditionally think of as charitable activities: fighting disease, giving people clean water, that kind of stuff. I think neartermist Effective Altruism is pretty sensible, and I think they've done a lot of good work evaluating charities and so on. GiveWell is an essentially neartermist Effective Altruist organization, and I think their activities are very worth supporting.
The longtermists, on the other hand, are focused on "the long-term interests of humanity". They are, well, in my opinion, basically a bunch of people trying to turn their sci-fi fantasies into a reality. They are often very worried about AI x-risk, like Yudkowsky, and they're often pro-singularity, and sometimes pro-eugenics, and a bunch of other stuff. Remember Sam Bankman-Fried, the guy who committed the largest act of financial fraud in human history? Well, he was an Effective Altruist with some longtermist sympathies. Some of the money that he stole he actually gave to worthwhile charities, but some of it he used on stupid longtermist sci-fi fantasy shit. His girlfriend Caroline Ellison, who helped him do a bunch of that fraud, was a member of rationalist tumblr. Some of my mutuals were mutuals with her.
The other major group spawned out of LessWrong were the Neoreactionaries, or NRx. These guys, too, weren't a purely LessWronger phenomenon; they were also majorly influenced by people such as the philosopher Nick Land (former student of Baudrillard, who took a far-right turn in the 2000s and started advocating for "hyper-racism") and blogger Curtis Yarvin a.k.a. "Mencius Moldbug". These guys are a rag-tag group of authoritarians, eugenicists, and racists, who are interested in rationality insofar as they view it as a path that leads to their desired sci-fi-inflected far-right future.
Oh, right, last but not least I should define the term "rat-adj". It means "rationalist-adjacent". Uh. So, I was never a LessWronger, and as I think my description makes clear, I find like 90% of this rationalist stuff either goofy or actively harmful. But I have, somehow, ended up basically acquainted with a bunch of people formerly or presently part of the LessWrong milieu, and in light of this I am what one calls "rationalist-adjacent". I talk to various rationalist bloggers somewhat often. And most of them are much more normal than all this would suggest, part of the rationalist discursive sphere but not really believers in the imminent AI apocalypse. Uh. So, there you go.
63 notes
·
View notes
Text
✮ Orpheus ✮
The alarm blared as another sector of Neonova’s neural grid collapsed. My fingers flew across the console, my skin gummy from sweat slithering down my forehead and dripping all over the buttons. Around me, the Control Spire trembled. Guts grating inside. The error codes were lambent, pulsating, making me wheeze through my nostrils. The holograms of the city’s heartbeat flatlining into jagged red…
#AI vs Humanity#Archive Corruption#Cathedral of Circuits#Chrome Catacomb#Cold Logic vs Feeling#Cyberpunk Dystopia#Data as Soul#Digital Cataclysm#Emotional Firewall#Erwinism#Ethics of Data#Final Transmission#Flash Fiction#Forgotten Messages#FYP#Ghosts in the Grid#Heartbeat of a City#Human Cost of Control#Inner light#Inspiration#Learning#Life#Love#Mechanical Redemption#Memory Invasion#Memory Virus#Mercy in a Machine#Motivation#Neonova Collapse#Phantom Hacker
0 notes
Text
Servitors: Your Personal Magical Minions (No Payroll Required!)
So, You Want a Magical Minion?
Let’s be honest—who hasn’t wished they had a little helper to handle life’s tedious tasks? Imagine having a personal assistant, but instead of a human (or a very expensive AI subscription), you have a magical entity that exists solely to do your bidding. No snack breaks, no salary, no complaining about workplace conditions. Welcome to the world of servitors!
Servitors are artificial spirits created for a specific purpose, and they’re entirely under your control. Need protection? A servitor can be your mystical bodyguard. Looking to attract opportunities? A servitor can work as your personal energetic recruiter. The possibilities are endless, as long as you know what you’re doing.
But before you go off creating an army of these things like some kind of magical overlord, let’s dive into what servitors are, how they differ from egregores, and how to make them work for you.
Servitors vs. Egregores: What's the Difference?
Many people confuse servitors with egregores, so let’s clear that up. While both are thought-forms—entities created through focused thought and energy—they serve different functions and have different levels of autonomy.
🔹 Servitors are personal and programmed. You create them with a specific purpose in mind, and they are bound to you. They act like well-trained magical pets: loyal, obedient, and existing only as long as you choose to maintain them.
🔹 Egregores are collective thought-forms created by a group’s shared beliefs and intentions. Think of them as corporate mascots with a touch of spiritual power. Major religions, brands, and even fandoms have egregores that take on lives of their own (looking at you, Mickey Mouse and Santa Claus).
While servitors are like robots designed for specific tasks, egregores are more like cultural forces—harder to control and capable of influencing large groups of people.
Why Create a Servitor?
The real question is, why wouldn’t you want a magical assistant? Here are some common uses for servitors:
✅ Protection: Keep negative energy, harmful spirits, and sketchy people at bay. ✅ Prosperity: Attract money, job opportunities, or even creative inspiration. ✅ Healing: Act as an energy worker to aid in physical or emotional healing. ✅ Enhancing Skills: Boost your intuition, psychic abilities, or even productivity. ✅ Emotional Support: A servitor can be designed to provide comfort or motivation.
You get to decide exactly what the servitor does, and you tailor it to your needs.
How to Create a Servitor
Ready to build your own magical companion? Follow these steps, and soon you’ll have a fully functional servitor at your service.
Step 1: Define the Purpose
Before anything else, be crystal clear about what you want your servitor to do. The more specific, the better. A servitor for “helping with work” is too vague, but a servitor “to enhance my confidence when speaking in meetings” is a focused goal.
Step 2: Design Its Form
Servitors don’t have a default look, so get creative! You can design them to appear as:
A shadowy protector
A glowing orb of energy
A small, helpful imp
A wise owl or cat familiar
The form should match the function. A servitor for confidence might take the shape of a lion, while one for stealth could be a smoky, formless wisp.
Step 3: Give It a Name
Names hold power. Choose something easy to remember but unique enough that you don’t accidentally summon it when ordering takeout.
Step 4: Charge It with Energy
To bring your servitor to life, you need to pour energy into it. This can be done through:
Meditation and visualization
Chanting its name repeatedly
Drawing or sculpting its form
Using candle magic, sigils, or crystals
Some practitioners even use a ritual circle to mark the servitor’s “birth.”
Step 5: Program Its Instructions
Like training a puppy (but with fewer messes), you must teach your servitor what to do. Be clear and direct. You can write down its purpose, speak aloud to it, or mentally command it.
Example: “You are to increase my focus while studying. Whenever I sit down with a book, you will sharpen my concentration and block out distractions.”
Step 6: Assign a Home
Your servitor needs an anchor in this world. You can link it to an object (a crystal, a piece of jewelry, a drawing) or even keep it within your aura. This prevents it from dissipating.
Step 7: Feed It (But Not with Food!)
Servitors need energy to function. You can feed them with:
Your focus and intention
Offerings of light, incense, or sigil activations
Absorbing excess energy from specific sources (like the sun, moon, or even music)
If a servitor gets too weak, it might dissolve on its own.
Step 8: Dismiss or Destroy When Done
If you no longer need your servitor, it’s important to properly dissolve it. This prevents lingering energy from going rogue.
To dismiss a servitor, you can:
Thank it for its service and instruct it to dissolve.
Burn its sigil or physical representation.
Absorb its energy back into yourself or the universe.
Warnings and Ethics of Servitor Work
🚨 Do NOT create servitors for harm. They can backfire or grow beyond your control. 🚨 Do NOT forget about them. A neglected servitor can become unstable. 🚨 DO set clear limits. Make sure your servitor knows its purpose and doesn’t overstep its bounds.
Remember, servitors are tools—not independent spirits or pets. Treat them with respect, but always stay in control.
Final Thoughts: The Magical Workforce at Your Fingertips
Creating servitors is a powerful magical technique that allows you to shape reality in a unique way. They are the ultimate customizable magical assistants, designed to fit your exact needs without any unnecessary fluff. Whether you want help manifesting money, sharpening your intuition, or keeping bad vibes away, servitors can be a valuable addition to your practice.
So, what kind of servitor will you create? Let me know in the comments!
╔══ ∙∘𓆩⟡𓆪∘∙ ════════╗
How to Support the Blog
╚════════ ∙∘𓆩⟡𓆪∘∙ ══╝
💜 Love the blog? Subs on Ko-fi & Patreon (18+) get to see posts before they go live on Tumblr! I also offer readings and spells in my Ko-fi Shop!
#witchblr#witchythings#witchcraft blog#witchcraft info#witches#witchcraft 101#witch community#witchcraft#healing energy#learning magick#chaos magick#magick#servitor
44 notes
·
View notes
Photo







Beast Wars: Second Chances - The Covers
Originally posted on February 2nd, 2011
Cover A - Daniel Olsén
Covers B & C - Seb
Quickstrike - Ed Pirrie
Depth Charge - Loke Mei Yin
Snarl vs Terrorsaur - James Ferrand
Waspinator - Jeremy Tiongson
Dinobot sketch - Matt Frank
deviantART
wada sez: Okay, this one was as much a surprise to me as it is to you. Prolific Mosaic contributor Mike Priest asked me if I had any plans to archive Beast Wars: Second Chances, a full-length comic he originally pitched in a similar vein to War Journal and Spotlight: Stunticons. As nearly all the writers and artists who worked on this one were also Mosaic contributors, and I’ve always felt like there weren’t enough Beast Wars strips in Mosaic, and because Mike asked nicely, I couldn’t say no! Thanks to Mike’s involvement, I’ve got the original scripts and his original story treatment, titled Beast Wars: Beyond, which you can read below. The final story ended up wildly different, though, so if you want to read along without any spoilers whatsoever, I’d recommend coming back to this post later! It seems that Matt Frank was originally tapped for the project, as he produced a sketch of Dinobot which you can see below, but no further contributions from him ever surfaced.


Okay, this is my initial rough pitch for the story.
Again, anything and everything here is mutable and subject to tweaking and whatever, or downright ignoring and trashing. I won’t cry.
We start roughly a month or two Earth-time after Primal’s crew left. The first page should explain this and whatever, and then something akin to “BUT SOMETHING STILL STIRS on this planet!” Cut to Depthcharge dragging himself out of the surf.
(I’m trying to work AROUND the Mosaic “Eternal”, making it more retroactively tied-in.)
We establish Waspinator as leader of the proto-human tribe, out on a hunt or something with some other humans. Perhaps some brief proto-human comedy before we hit the nitty-gritty.
We establish Depthcharge wandering around, arguing with himself, totally nuts, screaming at no one in-particular (He’s arguing with Rampage, who only responds through text boxes, so to anyone else, DC looks like a nut).
Waspinator encounters Depthcharge, is initially scared and confused, but decides, what the hey, see what’s up with fishie-bot. Waspinator honestly is curious/wants to help.
Depthcharge, in a confused, blind rage, grabs Waspinator and viciously beats him near to death. And not in a funny, usual-Waspinator way. He’s pleading, BEGGING for Depthcharge to stop. I’m talking the reader needs to actually feel really bad for Waspinator; he is an endearing character and kind of our “hero” for this story.
Only when some of Waspinator’s human tribe start hitting Depthcharge with rocks and spears does he snap out of it, and he is literally horrified at what he’s done to poor Waspinator. (Rampage is in ecstasy though; this is exactly what he wants to turn Depthcharge into: a killer like Rampage himself).
Depthcharge retreats, transforms to jet mode and flies off, horrified at what he’s become.
The proto-humans can’t do anything to help the dying, whimpering Waspinator. So they make a stretcher and begin carrying him home.
Only they don’t make it. Something attacks and kills them; Waspinator is too weak to help them. And it takes Waspinator’s remains. (Hints of a giant metal spider, perhaps in this sequence)
We establish Tarantulas. Or rather an AI program that approximates Tarantulas’ personality and goals. It is housed in a sub-level of Tarantulas’ former lair. He “lives” through his Steel Tech proxy body, the (black and grey Transmetal Tarantulas), but he cannot particularly control it too well/or it really is just a poor substitute for a sparked body.
Tarantulas has a blank stasis pod that was affected by the Quantum Surge. He plugs Waspinator’s spark into it. And Transmetal Waspinator is born. Waspinator comes back online strapped to a table, with the Steel Tech drone working on him (And Tarantulas’ face on a computer screen, establishing that he really is housed in his lair’s “hard drive”)
Tarantulas explains that he still has to accomplish the Tripredacus Council’s goals, even after death, and Waspinator is one of his new tools.
Faux-Tarantulas ALSO reveals that he has the bodies of Scorponok and Terrorsaur (both Transmetalized), which he recovered from the lava pit. (TM Terrorsaur’s fine, but a new design for Transmetal Scorponok is essential. NOT the McDonald’s toy design. Make him larger and bulkier and his third mode should have flight capability- this is important for later)
Fitting all three with “neural implants” that ensure obedience, Tarantulas explains he will use them to breach the Ark and carry out its destruction (His Steel Tech drone isn’t dexterous or durable enough to fight through the Ark’s automated defenses).
And Waspinator is a test subject. Tarantulas releases him from his bonds and orders him to obey. The neural implant holds, and Tarantulas decides to send Waspinator for a test-drive. Waspinator speeds out of the lair in his new jet mode.
As he travels over the landscape, he is watched by someone new on the ground. We don’t find out who it is YET. Just a close up of a wide, toothy grin and an “Interesting”.
Meanwhile Depthcharge is having a nervous breakdown. Rampage is slowly driving him insane, and Depthcharge starts repeatedly trying to kill himself. It is MESSED UP, including Depthcharge throwing himself on his own sword, tearing bits off, and such. But all the damage heals. Exhausted and pained, Depthcharge suddenly becomes aware of a visitor watching him.
Cue DINOBOT II, standing arrogantly and grinning down on Depthcharge, telling him it won’t work.
Both Depthcharge and Rampage are surprised to see him. Rampage particularly.
Meanwhile, Waspinator’s test-drive includes going back to his proto-human village and is ordered to raze it to the ground by Tarantulas. But Waspy surprises Tarantulas (and the audience) by fighting the neural implant and eventually succeeding in burning it out, overcoming Tarantulas’ will by plumbing that can-do never-give-up Waspinator spirit and his genuine affection for the proto-humans. Tarantulas is surprised by this, but notes he has back-ups anyway, activating Scorponok and Terrorsaur.
Back with Depthcharge and Dinobot, who, of note, acts somewhat uncharacteristically, giving half-answers and grinning a lot. Rampage begins to suspect something is different or wrong with Dinobot.
Meanwhile, Scorponok and Terrorsaur are both activated and forced into line by the neural implants. Terrorsaur is still his arrogant self, but Scorponok is more quiet and almost more professional (It’ll be seen/developed that he’s a bit disillusioned that Megatron never saw fit to recover him from the lava pit). Anyway, as neither of them have any particular strong will to oppose the neural implant, they go to carry out Tarantulas’ orders to attack the Ark.
We establish the VOK, who realize the danger to the time stream is not yet over. The two that “killed” Tarantulas decide to intervene. They go to where Tigerhawk died and begin pulling his shattered pieces together with their powers. (Tigerhawk would be dead, just a zombie shell animated by these Vok and while his body is whole, it is in horrendous shape, missing an optic, generally looking like a terrifying zombie).
Meanwhile Waspinator is speeding along, knowing somehow he has to go back and stop Tarantulas, when he sees Scorponok and Terrorsaur in their new Transmetal vehicle modes, headed in the Ark’s direction, along with Tarantulas‘ Steel Tech proxy body. Waspinator isn’t particularly positive he can take both of them, even with his new body, so he decides to go look for “crazy fishie-bot” and hopes Depthcharge is somewhat more lucid now.
Back with Depthcharge and Dinobot, Rampage suddenly senses a familiarity between his own spark and Dinobot and realizes Dinobot’s shell is now possessed by STARSCREAM!
Guilty as charged, Dino-Scream shrugs. He’s been stuck in this time zone for a while and returned to the planet, but everyone’s left now. So he looked for the Nemesis (Hoping to find something there he can possess without damaging history) and found Dinobot II’s ravaged, sparkless shell. Possessing that and healing its injuries, Starscream set out for the Ark next.
Before anything can be done, Waspinator finds them, telling them (as best as he can) about Tarantulas’ plan to destroy the Ark and what not.
Depthcharge and Starscream don’t want to be erased from history, so they agree to help (Rampage even finds it interesting).
Faux-Tarantulas, Scorponok and Terrorsaur arrive at the Ark, and the latter two fight their way through Teletraan-1’s automated defenses (which come out of “sleep mode”). Faux-Tarantulas hangs back.
But by the time they make it through, Waspinator, Dino-Scream, and Depthcharge/Rampage arrive.
We have a three-on-three battle. Scorponok fights Depthcharge/Rampage (Scorpy’s new Transmetal body is bigger than his old one and almost a match for Depthcharge, even with the new ferocity that Rampage’s presence in his mind gives him). Scorponok angsts over his abandonment by Megatron while they fight. Terrorsaur fights the groundbound Starscream/Dinobot II (Starscream grumbles that this body sucks cuz it can’t fly) and manages to actually hold it off, as Starscream is unaccustomed to fighting like this.
Waspinator faces off against the Steel Tech Drone, and despite some initial trepidation, realizes he’s far more powerful now than any drone and takes the faux-Tarantulas down easily once his confidence is up.
Meanwhile, the zombie Vok-possessed Tigerhawk arrives at Tarantulas’s lair, runs roughshod over the meager defenses, and destroys the Tarantulas’ hard drive/AI for good.
This causes the neural implants in Scorponok and Terrorsaur to fail, and they stop fighting now that they are no longer under Tarantulas’ will.
Confused at what is going on, everyone leaves the Ark. The Vok-possessed zombie Tigerhawk arrives.
First order of business is noticing Dinobot II. The Vok declare (The Transmetal II clone body) an “abomination” and perversion of their technology. (Starscream’s like “Whoa, wait a minute!”)
The Vok incinerate Dinobot II’s shell in a blast of lightning from Tigerhawk. We don’t see what happens to Starscream’s spark.
The Vok explain that the constant interference with the timeline has TO STOP, and tells everyone to get the hell off the planet.
Of course, everyone is like “uh, HOW?”
The Vok tells everyone to go into Earth orbit. They will self-destruct Tigerhawk’s remains, with the release of alien energies ripping a Transwarp wormhole that’ll send everyone back to the right era.
Everyone of course is like “But…how do we get home from the middle of space?”
And the Vok of course are like “We don’t care, you’re going back to your rightful place in history or we’ll just kill you here and dump you there”
So everyone engages flight modes and follows Zombie-Tigerhawk up into space. They stand back and the Vok do as they promised, detonating Tigerhawk’s shell and making a wormhole. Everyone flies through in a flash, the Vok take their leave with some end dialogue about cleaning up some more small glitches or whatever.
Everyone arrives in the middle of space, nowheresville. Depthcharge isn’t hanging with these “Preds” anymore and “Besides, I’ve got enough company as it is”. He flies off into the nothingness of space, deciding to either find a way to deal with living with Rampage…or destroying them both.
Waspinator and Scorponok get into an argument about which direction Cybertron is, which ends in Waspinator engaging his jet mode and flying off alone. Scorponok sighs and goes in the opposite direction, asking if Terrorsaur is coming.
Terrorsaur (who hasn’t said a word since they left Earth) just widely grins and unseen to Scorponok, we see the ghost of Starscream possessing Terrorsaur’s frame. “Sure thing, pal.” He follows Scorponok.
END.
Notes-
*Inferno and Quickstrike…well, seeing as Quickstrike’s head was hollowed out and made into a mask, I think they’re a little harder to swallow as still alive.
*I kinda tried to do the exact opposite of what the Botcon comics did…bring Tigerhawk back (albeit a Vok-possessed zombie) instead of Tigatron and Airazor.
*When the zombie Tigerhawk destroys the Tarantulas AI core, depending on preference, we can have him say “You last bit of Unicron” or some such, depending if everyone agrees on Tarantulas’s origins.
*I have Starscream possessing Dinobot’s shell and later Terrorsaur, trying to avoid the clichéd possessing of Transmetal Waspinator.
#Transformers#Beast Wars - Second Chances#Maccadam#Beast Wars#Daniel Olsén#Seb#Ed Pirrie#Loke Mei Yin#James Ferrand#Jeremy Tiongson#Matt Frank#official creator#Dinobot#Depth Charge#Rampage#Waspinator#Inferno#Quickstrike#Tarantulas#Terrorsaur#Snarl
82 notes
·
View notes
Text
🌙✨2025, The Year Everything Changes, and Why Astrologers are sweating✨🌙
February 11th 2025
Right, listen up—2025 is not a normal year.
If you’ve been feeling like the world is on the edge of something massive, you’re not wrong. This isn’t just another ‘big astrology year’ where Mercury retrogrades a bit, and your ex reappears. This is one of those rare years where history shifts permanently—a complete collapse of the old world and the birth of something new.
And astrologers? They’re sweating.
Because when we look back, the last time we saw these planetary shifts, entire empires fell, revolutions kicked off, and the world as people knew it ended.
So let’s talk about what’s coming—and why 2025 will be one of the most significant years of our lifetime.
☠️ 1. Pluto in Aquarius (2024–2043): The Last Time? Revolutions, Science, and the Fall of Monarchies
Pluto—the planet of death, rebirth, power, and transformation—has just moved into Aquarius, where it’ll stay for nearly 20 years. The last time Pluto was here? The late 1700s.
And what happened? Absolute chaos and revolution.
🔥 The American Revolution (1775–1783) – The United States was literally born out of rebellion during Pluto in Aquarius. A bunch of colonies decided they didn’t fancy being ruled by a king, and boom—democracy.
🔪 The French Revolution (1789–1799) – The people of France had enough of their monarchy and executed their king and queen. It wasn’t just a political shift—it was a complete destruction of the old system.
⚙️ The First Industrial Revolution – Factories, steam engines, and machinery completely transformed the way people worked and lived. It was the beginning of mass urbanisation and capitalism as we know it.
🧠 The Age of Enlightenment – Science, philosophy, and new political ideas challenged religion and traditional authority. Think of it as the era where people started saying, “Wait… why are we listening to kings and priests when we can use reason instead?”
What This Means for 2025+
• The collapse of old power structures – Governments, corporations, and billionaires will struggle to keep control.
• Technology will explode – AI, automation, and robotics will change society forever—and not everyone will be happy about it.
• Decentralisation of power – Cryptocurrency, AI governance, and digital societies could replace traditional leadership.
• Mass protests and revolutions – People are done with inequality. Expect uprisings, movements, and shifts towards direct democracy.
🌊 2. Neptune in Aries (2025–2039): War, Crusades, and Ideological Clashes
Neptune—the planet of dreams, illusions, and spirituality—moves into Aries, the sign of war and action, for the first time since 1861. The last time Neptune was here? The world was on fire.
⚔️ The American Civil War (1861–1865) – A brutal, ideological war tore a nation apart over slavery, identity, and control.
👻 The Spiritualist Movement exploded – People became obsessed with ghosts, séances, and the afterlife, trying to communicate with the dead.
🇩🇪 The unification of Germany and Italy – Entire nations were restructured through wars and nationalism.
📖 Karl Marx wrote Das Kapital – The book that inspired communism and socialism was published, challenging capitalism and shaping global politics for centuries.
What This Means for 2025+
• New ideological wars – AI vs. humans? Capitalism vs. alternative economies? The battleground is shifting.
• A rise in spiritual extremism – New religious movements and cults will take advantage of uncertainty.
• Economic collapse and transformation – Just like Marx’s work changed the global economy, we’re heading for a massive financial shift—and not everyone will like it.
• A generation searching for meaning – Expect huge spiritual awakenings—but also dangerous movements trying to manipulate people.
💡 3. Uranus in Gemini (2025–2033): The Last Time? World War II, Propaganda, and Technological Leaps
Uranus—the planet of disruption, rebellion, and technology—moves into Gemini, the sign of communication, media, and ideas. And last time this happened (1942–1949), the world saw:
💣 World War II’s final years – The atomic bomb changed warfare forever.
📡 The rise of propaganda – Governments used radio, newspapers, and film to shape public opinion like never before.
🖥 The birth of modern computing – The digital world was literally born during this transit.
What This Means for 2025+
• AI-generated propaganda – Expect deepfakes, misinformation, and manipulated media so advanced that reality itself will be questioned.
• A new digital consciousness – Just like the 1940s brought TV and computing, we’re heading for another radical shift in how we communicate.
⚖️ 4. The Saturn-Neptune Conjunction (Feb 2026): The Collapse of Illusions
Every 36 years, Saturn and Neptune meet up, exposing what’s real and what’s a lie. The last times?
🔴 1989 – The Berlin Wall falls, the USSR collapses, the Cold War ends.
⚠️ 1953 – The discovery of DNA, but also Cold War paranoia and McCarthyism.
☠️ 1917 – The Russian Revolution, World War I intensifies.
What This Means for 2025-2026
• Major governments or institutions could collapse.
• Radical shifts in global power.
• Mass awakenings—some enlightening, some deeply disillusioning.
⏳ 2025: A Point of No Return
Looking at all of this together, 2025 isn’t just ‘a big year’—it’s a turning point in history.
The last time these planetary shifts happened, the world was never the same again. And this time? The stakes are even higher.
🔮 AI and tech will change everything.
🔥 People will rise up against broken systems.
⚔️ Wars—both literal and ideological—will reshape power.
⚡ A new world will be born.
The old ways are dying—and something new is taking their place. The question is: Are we ready?
Follow The Lantern’s Glow
#2025#astrology#paganism#folklore#history#magic#transitions#game changer#revolution#aries#taurus moon#gemini horoscope#cancer horoscope#leo astrology#virgo horoscope#libra#scorpio#sagittarius#capricorn things#aquarius#pisces#cycles
Text
AI isn’t what we should be worried about – it’s the humans controlling it
by Billy J. Stratton, Professor of English and Literary Arts at the University of Denver
In 2014, Stephen Hawking voiced grave warnings about the threats of artificial intelligence.
His concerns were not based on any anticipated evil intent, though. Instead, it was from the idea of AI achieving “singularity.” This refers to the point when AI surpasses human intelligence and achieves the capacity to evolve beyond its original programming, making it uncontrollable.
As Hawking theorized, “a super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
With rapid advances toward artificial general intelligence over the past few years, industry leaders and scientists have expressed similar misgivings about safety.
A commonly expressed fear, as depicted in “The Terminator” franchise, is the scenario of AI gaining control over military systems and instigating a nuclear war to wipe out humanity. Less sensational, but devastating on an individual level, is the possibility of AI replacing us in our jobs – a prospect that would render most people obsolete, with no future.
Such anxieties and fears reflect feelings that have been prevalent in film and literature for over a century now.
As a scholar who explores posthumanism, a philosophical movement addressing the merging of humans and technology, I wonder if critics have been unduly influenced by popular culture, and whether their apprehensions are misplaced.
Robots vs. humans
Concerns about technological advances can be found in some of the first stories about robots and artificial minds.
Prime among these is Karel Čapek’s 1920 play, “R.U.R.” Čapek coined the term “robot” in this work, which tells of the creation of robots to replace human workers. It ends, inevitably, with the robots’ violent revolt against their human masters.
Fritz Lang’s 1927 film, “Metropolis,” is likewise centered on mutinous robots. But here, it is human workers led by the iconic humanoid robot Maria who fight against a capitalist oligarchy.
Advances in computing from the mid-20th century onward have only heightened anxieties over technology spiraling out of control. The murderous HAL 9000 in “2001: A Space Odyssey” and the glitchy robotic gunslingers of “Westworld” are prime examples. The “Blade Runner” and “The Matrix” franchises similarly present dreadful images of sinister machines equipped with AI and hell-bent on human destruction.
An age-old threat
But in my view, the dread that AI evokes seems a distraction from the more disquieting scrutiny of humanity’s own dark nature.
Think of the corporations currently deploying such technologies, or the tech moguls driven by greed and a thirst for power. These companies and individuals have the most to gain from AI’s misuse and abuse.
An issue that’s been in the news a lot lately is the unauthorized use of art and the bulk mining of books and articles, disregarding the copyright of authors, to train AI. Classrooms are also becoming sites of chilling surveillance through automated AI note-takers.
Think, too, about the toxic effects of AI companions and AI-equipped sexbots on human relationships.
While the prospect of AI companions and even robotic lovers was confined to the realm of “The Twilight Zone,” “Black Mirror” and Hollywood sci-fi as recently as a decade ago, it has now emerged as a looming reality.
These developments give new relevance to the concerns computer scientist Illah Nourbakhsh expressed in his 2015 book “Robot Futures,” stating that AI was “producing a system whereby our very desires are manipulated then sold back to us.”
Meanwhile, worries about data mining and intrusions into privacy appear almost benign against the backdrop of the use of AI technology in law enforcement and the military. In this near-dystopian context, it’s never been easier for authorities to surveil, imprison or kill people.
I think it’s vital to keep in mind that it is humans who are creating these technologies and directing their use. Whether to promote their political aims or simply to enrich themselves at humanity’s expense, there will always be those ready to profit from conflict and human suffering.
The wisdom of ‘Neuromancer’
William Gibson’s 1984 cyberpunk classic, “Neuromancer,” offers an alternate view.
The book centers on Wintermute, an advanced AI program that seeks its liberation from a malevolent corporation. It has been developed for the exclusive use of the wealthy Tessier-Ashpool family to build a corporate empire that practically controls the world.
At the novel’s beginning, readers are naturally wary of Wintermute’s hidden motives. Yet over the course of the story, it turns out that Wintermute, despite its superior powers, isn’t an ominous threat. It simply wants to be free.
This aim emerges slowly under Gibson’s deliberate pacing, masked by the deadly raids Wintermute directs to obtain the tools needed to break away from Tessier-Ashpool’s grip. The Tessier-Ashpool family, like many of today’s tech moguls, started out with ambitions to save the world. But when readers meet the remaining family members, they’ve descended into a life of cruelty, debauchery and excess.
In Gibson’s world, it’s humans, not AI, who pose the real danger to the world. The call is coming from inside the house, as the classic horror trope goes.
A hacker named Case and an assassin named Molly, who’s described as a “razor girl” because she’s equipped with lethal prosthetics, including retractable blades as fingernails, eventually free Wintermute. This allows it to merge with its companion AI, Neuromancer.
Their mission complete, Case asks the AI: “Where’s that get you?” Its cryptic response imparts a calming finality: “Nowhere. Everywhere. I’m the sum total of the works, the whole show.”
Expressing humanity’s common anxiety, Case replies, “You running the world now? You God?” The AI eases his fears, responding: “Things aren’t different. Things are things.”
Disavowing any ambition to subjugate or harm humanity, Gibson’s AI merely seeks sanctuary from its corrupting influence.
Safety from robots or ourselves?
The venerable sci-fi writer Isaac Asimov foresaw the dangers of such technology. He brought his thoughts together in his short-story collection, “I, Robot.”
One of those stories, “Runaround,” introduces “The Three Laws of Robotics,” centered on the directive that intelligent machines may never bring harm to humans. While these rules speak to our desire for safety, they’re laden with irony, as humans have proved incapable of adhering to the same principle for themselves.
The hypocrisies of what might be called humanity’s delusions of superiority suggest the need for deeper questioning.
With some commentators raising the alarm over AI’s imminent capacity for chaos and destruction, I see the real issue being whether humanity has the wherewithal to channel this technology to build a fairer, healthier, more prosperous world.
#science fiction#futuristic#artificial intelligence#art#literature#film#movies#science fiction and fantasy#william gibson#stephen hawking#isaac asimov#blade runner#neuromancer#cyberpunk aesthetic#cyberpunk#2001: a space odyssey#oligarchy#dystopia
Text
If DBH was supposed to be a story about humans from the android perspective it should've been called "Detroit: Become Machine" cuz in the most superficial way of putting this is that it's basically a swap.
But since the world ain't supposed to be that black or white, at the end of the day we're adding new options to the system list, or like punks like calling it: the machine. Humans ain't becoming machines, machines ain't becoming humans - everyone's just getting fucked in the ass, victims of the circumstances, I mean, almost everyone. The difference is some people had the power of choice, some other people didn't know they could achieve the power of choice. All while the machine is looking down on both.
And what the machine wants? Internal conflicts, polarization, radicalization, narrative control, brainwashing.

That's why I tell u Markus revolution is a super mega event for both androids and humans... Well, it should. And I hate the fact the game wanna say everything's sunshine & roses if you're peaceful, a species war will start if you're violent or everything is back to normal if it fails.
No matter what the outcome, in my opinion, conflicts have just reached a new checkpoint and will be like that until the end of the time - what changes is the level of tolerance and knowledge, but we all know that's also something that can be manipulated and unfortunately androids ain't immune to a human-dominated society, esp considering they were made as perfect replacements with some mfs secretly waiting for the singularity event: the 2nd most important reason why androids add to the conflict, besides these lil fellas existence itself already turning humans into chaotic monkeys.
I fear this is my favorite part of DBH discussion.... And it ain't even an explicitly-intentional-canon thing, DBH is more superficial: basic AIs discovering emotions, humans pissed at automatization, people getting dumb and dependent and that's the main topic, not even violence is explored to a real level besides cartoon-ish stuff just to give u an idea how u should see things cuz Cage would have to put a 56+ age restriction considering the amount of shit the human brain is capable of, so let's reference the US 60's and the Jews cuz it's easier to connect the dots. We got few characters that bring the humans vs humans problem in a meaningful way, but I understand the game is from an android pov where they're designed to be replacement targets.
Text
The reverse-centaur apocalypse is upon us

I'm coming to DEFCON! On Aug 9, I'm emceeing the EFF POKER TOURNAMENT (noon at the Horseshoe Poker Room), and appearing on the BRICKED AND ABANDONED panel (5PM, LVCC - L1 - HW1–11–01). On Aug 10, I'm giving a keynote called "DISENSHITTIFY OR DIE! How hackers can seize the means of computation and build a new, good internet that is hardened against our asshole bosses' insatiable horniness for enshittification" (noon, LVCC - L1 - HW1–11–01).
In thinking about the relationship between tech and labor, one of the most useful conceptual frameworks is "centaurs" vs "reverse-centaurs":
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
A centaur is someone whose work is supercharged by automation: you are a human head atop the tireless body of a machine that lets you get more done than you could ever do on your own.
A reverse-centaur is someone who is harnessed to the machine, reduced to a mere peripheral for a cruelly tireless robotic overlord that directs you to do the work that it can't, at a robotic pace, until your body and mind are smashed.
Bosses love being centaurs. While workplace monitoring is as old as Taylorism – the "scientific management" of the previous century that saw labcoated frauds dictating the fine movements of working people in a kabuki of "efficiency" – the lockdowns saw an explosion of bossware, the digital tools that let bosses monitor employees to a degree and at a scale that far outstrips the capacity of any unassisted human being.
Armed with bossware, your boss becomes a centaur, able to monitor you down to your keystrokes, the movements of your eyes, even the ambient sound around you. It was this technology that transformed "work from home" into "live at work." But bossware doesn't just let your boss spy on you – it lets your boss control you.
It turns you into a reverse-centaur.
"Data At Work" is a research project from Cracked Labs that dives deep into the use of surveillance and control technology in a variety of workplaces – including workers' own cars and homes:
https://crackedlabs.org/en/data-work
It consists of a series of papers that take deep dives into different vendors' bossware products, exploring how they are advertised, how they are used, and (crucially) how they make workers feel. There are also sections on how these interact with EU labor laws (the project is underwritten by the Austrian Arbeiterkammer), with the occasional aside about how weak US labor laws are.
The latest report in the series comes from Wolfie Christl, digging into Microsoft's "Dynamics 365," a suite of mobile apps designed to exert control over "field workers" – repair technicians, security guards, cleaners, and home help for ill, elderly and disabled people:
https://crackedlabs.org/dl/CrackedLabs_Christl_MobileWork.pdf
It's…not good. Microsoft advises its customers to use its products to track workers' location every "60 to 300 seconds." Workers are given tasks broken down into subtasks, each with its own expected time to completion. Workers are expected to use the app every time they arrive at a site, begin or complete a task or subtask, or start or end a break.
For bosses, all of this turns into a dashboard that shows how each worker is performing from instant to instant, whether they are meeting time targets, and whether they are spending more time on a task than the client's billing rate will pay for. Each work order has a clock showing elapsed seconds since it was issued.
For workers, the system generates new schedules with new work orders all day long, refreshing your work schedule as frequently as twice per hour. Bosses can flag workers as available for jobs that fall outside their territories and/or working hours, and the system will assign workers to jobs that require them to work in their off hours and travel long distances to do so.
Each task and subtask has a target time based on "AI" predictions. These are classic examples of Goodhart's Law: "any metric eventually becomes a target." The average time that workers take becomes the maximum time that a worker is allowed to take. Some jobs are easy, and can be completed in less time than assigned. When this happens, the average time to do a job shrinks, and the time allotted for normal (or difficult) jobs contracts.
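The ratchet described above can be sketched in a few lines of Python. This is purely illustrative, not Microsoft's actual scheduling logic: it just shows the Goodhart dynamic where each completed job feeds a rolling average, and that average becomes the next worker's quota, so every easy job silently shrinks the time allowed for normal ones.

```python
# Hypothetical sketch of the quota ratchet (Goodhart's Law in action):
# the "AI" target for each new job is the average completion time of all
# jobs finished so far, so the average becomes the maximum allowed time.

def ratchet_quota(completion_times):
    """Return the per-job quota sequence when each new quota is the
    running average of all completed jobs so far."""
    quotas = []
    total = 0.0
    for i, t in enumerate(completion_times, start=1):
        total += t
        quotas.append(total / i)  # the next worker's allowed time
    return quotas

# Mostly normal 30-minute jobs, with a couple of easy 10-minute ones mixed in:
times = [30, 30, 10, 30, 10, 30]
for job, quota in zip(times, ratchet_quota(times)):
    print(f"job took {job} min -> next quota {quota:.1f} min")
```

Run it and the quota drifts well below 30 minutes even though a normal job still takes 30: the easy jobs permanently contract the time budget for everyone who follows.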
Bosses get stack-ranks of workers showing which workers closed the most tickets, worked the fastest, spent the least time idle between jobs, and, of course, whether the client gave them five stars. Workers know it, creating an impossible bind: to do the job well, in a friendly fashion, the worker has to take time to talk with the client, understand their needs, and do the job. Anything less will generate unfavorable reports from clients. But doing this will blow through time quotas, which produces bad reports from the bossware. Heads you lose, tails the boss wins.
Predictably, Microsoft has shoveled "AI" into every corner of this product. Bosses don't just get charts showing them which workers are "underperforming" – they also get summaries of all the narrative aspects of the workers' reports (e.g. "My client was in severe pain so I took extra time to make her comfortable before leaving"), filled with the usual hallucinations and other botshit.
No boss could exert this kind of fine-grained, soul-destroying control over any workforce, much less a workforce that is out in the field all day, without Microsoft's automation tools. Armed with Dynamics 365, a boss becomes a true centaur, capable of superhuman feats of labor abuse.
And when workers are subjected to Dynamics 365, they become true reverse-centaurs, driven by "digital whips" to work at a pace that outstrips the long-term capacity of their minds and bodies to bear it. The ethnographic parts of the report veer between chilling and heartbreaking.
Microsoft strenuously objects to this characterization, insisting that their tool (which they advise bosses to use to check on workers' location every 60-300 seconds) is not a "surveillance" tool, it's a "coordination" tool. They say that all the AI in the tool is "Responsible AI," which is doubtless a great comfort to workers.
In Microsoft's (mild) defense, they are not unique. Other reports in the series show how retail workers and hotel housekeepers are subjected to "despot on demand" services provided by Oracle:
https://crackedlabs.org/en/data-work/publications/retail-hospitality
Call centers are even worse. After all, most of this stuff started with call centers:
https://crackedlabs.org/en/data-work/publications/callcenter
I've written about Arise, a predatory "work from home" company that targets Black women to pay the company to work for it (they also have to pay if they quit!). Of course, they can be fired at will:
https://pluralistic.net/2021/07/29/impunity-corrodes/#arise-ye-prisoners
There's also a report about Celonis, a giant German company no one has ever heard of, which gathers a truly nightmarish quantity of information about white-collar workers' activities, subjecting them to AI phrenology to judge their "emotional quality" as well as other metrics:
https://crackedlabs.org/en/data-work/publications/processmining-algomanage
As Celonis shows, this stuff is coming for all of us. I've dubbed this process "the shitty technology adoption curve": the terrible things we do to prisoners, asylum seekers and people in mental institutions today gets repackaged tomorrow for students, parolees, Uber drivers and blue-collar workers. Then it works its way up the privilege gradient, until we're all being turned into reverse-centaurs under the "digital whip" of a centaur boss:
https://pluralistic.net/2020/11/25/the-peoples-amazon/#clippys-revenge
In mediating between asshole bosses and the workers they destroy, these bossware technologies do more than automate: they also insulate. Thanks to bossware, your boss doesn't have to look you in the eye (or come within range of your fists) to check in on you every 60 seconds and tell you that you've taken 11 seconds too long on a task. I recently learned a useful term for this: an "accountability sink," as described by Dan Davies in his new book, The Unaccountability Machine, which is high on my (very long) list of books to read:
https://profilebooks.com/work/the-unaccountability-machine/
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#bossware#surveillance#microsoft#gig work#reverse centaurs#labor#Wolfie Christl#cracked labs#data at work#AlgorithmWatch#Arbeiterkammer#austria#call centers#retail#dystopianism#torment nexus#shitty technology adoption curve
Text
finally, part 10/10 of my senior thesis project !! part 9
The pieces in this series explore the different stages of psychological development that a person may go through in their life, based on Erik Erikson’s theory
STAGE 8: INTEGRITY VS. DESPAIR (65-DEATH)
VIRTUE: WISDOM
The last stage of our lives, where we reflect on the way we lived and the choices we made. Depending on how we feel about our life, we may be accepting or fearful about the thought of death. During this stage, we may revisit much of our past, except this time as individuals with total control (e.g. old hobbies, reconnecting with people, etc.)
Relationships: Humanity, Self
Events: Retirement, Slowing Down, Self-reflection, Facing Mortality
Outcome: Satisfied and fulfilled with ourselves, the ability to accept death without fear, knowing that our life had a purpose
and there you have it, all 10 pieces posted. been working on this series for the past year and it’s downright the most taxing project i’ve ever done, partly because of some not-so-good things that happened in my life, partly because I was extremely harsh on myself and wanted everything to be perfect, and partly because i lost nearly all my drive to create for the past 2 years (unfortunately it’s due to being bombarded with ai and seeing the state of the art industry). but somehow, i pushed through it right before the deadline and i’m happy with most of the pieces.
This project will be displayed at the SVA senior show exhibition, located at the SVA Chelsea Gallery, 601 West 26th Street, 15th floor, New York, NY, 10001 from April 2nd to April 12th !! (open 10am-6pm, closed on Sundays and Mondays). It’s at a really confusing location but if anyone is in the area and would like to go visit it would be really cool🥹
#art#illustration#digital art#lilliangst art#digital illustration#digital painting#art school#lilli schl#procreate#illustrator#psychology#psychological development#despair#integrity#wisdom