#algorithm exploitation
Explore tagged Tumblr posts
himmurf · 14 days ago
Text
Obligatory don't let the Internet raise your children, etc. But even if you are tech savvy, careful, and ever vigilant, incidents can happen. This has been a PSA; YouTube is still weird as fuck.
Tumblr media
Evidence under the cut. CW: ^ exactly what it says on the tin
Tumblr media
3 notes · View notes
zomb13s · 8 months ago
Text
I’m getting abused
This academic paper will explore the socio-economic impact and the loss of freedoms resulting from systemic exploitation and subjugation following the solicitation for the Vice President of the Council of State in the Netherlands. The paper will critically analyze how those in power, benefiting from the status quo, manipulate societal dynamics to maintain their authority, while others striving…
0 notes
queen-mabs-revenge · 2 months ago
Text
communist generative ai boosters on this website truly like
Tumblr media
#generative ai#yes the cheating through school arguments can skew into personal chastisement instead of criticising the for-profit education system#that's hostile to learning in the first place#and yes the copyright defense is self-defeating and goofy#yes yeeeeeeeeeees i get it but fucking hell now the concept of art is bourgeois lmaao contrarian ass reactionary bullshit#whYYYYYYY are you fighting the alienation war on the side of alienation????#fucking unhinged cold-stream marxism really is just like -- what the fuck are you even fighting for? what even is the point of you?#sorry idk i just think that something that is actively and exponentially heightening capitalist alienation#while calcifying hyper-extractive private infrastructure to capture all energy production as we continue descending into climate chaos#and locking skills that our fucking species has cultivated through centuries of communicative learning behind an algorithmic black box#and doing it on the back of hyperexploitation of labour primarily in the neocolonial world#to try and sort and categorise the human experience into privately owned and traded bits of data capital#explicitly being used to streamline systematic emiseration and further erode human communal connection#OH I DON'T KNOW seems kind of bad!#seems kind of antithetical to and violent against the working class and our class struggle?#seems like everything - including technology - has a class character and isn't just neutral tools we can bend to our benefit#it is literally an exploitation; extraction; and alienation machine - idk maybe that isn't gonna aid the struggle#and flourishing of the full panoply of human experience that - i fucking hope - we're fighting for???#for the fullness of human creative liberation that can only come through the first step of socialist revolution???#that's what i'm fighting for anyway - idk what the fuck some of you are doing#fucking brittle economic marxists genuinely defending a technology that is demonstrably violent to the sources of all value:#the soil and the worker#but sure it'll be fine - abundance babey!#WHEW.
9 notes · View notes
sciderman · 2 years ago
Note
How do you feel about the increase in really weird NSFW ads on here (advertising panels that look like sexual encounters, and AI art apps that pride themselves on porn) when the site will still take down NSFW posts from its users, even if they aren't technically sexual?
i hate all social media and its consistent prioritising of advertisers over users. the internet simply was a better place before capitalism sunk its hooks into it
#i could write essays about how capitalism ruined the internet.#i was actually talking to someone earlier today about how youtube was kind of effectively ruined by monetisation.#and they were raised in the soviet union and we had a bit of a talk about how art was better because it wasn't for profit.#the people who made art made it because they wanted to do it and because they loved it.#she said that communism was terrible for every aspect of life for her. people's lives under communism wasn't pretty.#but the art was better. and i feel like it's true for the internet – it was better when it was a free-for-all.#the companies didn't know how to exploit it yet and turn it into a neverending profit-driven hellscape.#people created content because they wanted to. because they wanted to make something silly to make people laugh.#not for profit. not for gain. not for numbers. not to further their career.#i miss the days of newgrounds and youtube before monetisation.#capitalism has soiled everything that's joyful and good in this world.#people should be able to share whatever they want.#people should be able to tell any story they want without the fear of being silenced by advertisers.#that's what made the internet so beautiful before. anyone could do anything and we all had equal footing.#but now we're victims of the algorithm. and it makes me sick.#i'm quitting my job in social media. i'm quitting it. it makes me too depressed. i have an existential crisis every freaking day.#every day i wake up and say "ah. this is the fucking hell we live in#i'm so sorry i feel so passionate about this.#social media is a black hole and it is actively destroying humanity. forget ai. social media is what's doing it.#i miss how beautiful the internet used to be. it should've been a tool for good. but it's corrupt and evil now.#sci speaks
90 notes · View notes
critical-skeptic · 12 days ago
Text
Tumblr media
The Illusion of Complexity: Binary Exploitation in Engagement-Driven Algorithms
Abstract:
This paper examines how modern engagement algorithms employed by major tech platforms (e.g., Google, Meta, TikTok, and formerly Twitter/X) exploit predictable human cognitive patterns through simplified binary interactions. The prevailing perception that these systems rely on sophisticated personalization models is challenged; instead, it is proposed that such algorithms rely on statistical generalizations, perceptual manipulation, and engineered emotional reactions to maintain continuous user engagement. The illusion of depth is a byproduct of probabilistic brute force, not advanced understanding.
1. Introduction
Contemporary discourse often attributes high levels of sophistication and intelligence to the recommendation and engagement algorithms employed by dominant tech companies. Users report instances of eerie accuracy or emotionally resonant suggestions, fueling the belief that these systems understand them deeply. However, closer inspection reveals a more efficient and cynical design principle: engagement maximization through binary funneling.
2. Binary Funneling and Predictive Exploitation
At the core of these algorithms lies a reductive model: categorize user reactions as either positive (approval, enjoyment, validation) or negative (disgust, anger, outrage). This binary schema simplifies personalization into a feedback loop in which any user response serves to reinforce algorithmic certainty. There is no need for genuine nuance or contextual understanding; rather, content is optimized to provoke any reaction that sustains user attention.
Once a user engages with content—whether through liking, commenting, pausing, or rage-watching—the system deploys a cluster of categorically similar material. This recurrence fosters two dominant psychological outcomes:
If the user enjoys the content, they may perceive the algorithm as insightful or “smart,” attributing agency or personalization where none exists.
If the user dislikes the content, they may continue engaging in a doomscroll or outrage spiral, reinforcing the same cycle through negative affect.
In both scenarios, engagement is preserved; thus, profit is ensured.
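A short sketch can make this funneling loop concrete. The Python snippet below is a toy illustration only: the cluster names, the signal list, and the functions serve_next and record_reaction are invented for this example and do not reflect any platform's actual implementation. The structural point is that every reaction, positive or negative, increments the same engagement score, so the system converges on whatever cluster provokes any response at all.

```python
# Toy sketch of binary funneling: any reaction at all counts as engagement,
# and the feed converges on whichever content cluster provokes a response.
# All names here are hypothetical; this is not any platform's real code.
import random
from collections import defaultdict

CLUSTERS = {
    "outrage_politics": ["post_a", "post_b", "post_c"],
    "cozy_crafts": ["post_d", "post_e", "post_f"],
}

# The binary schema: these all collapse to "engaged"; valence is never recorded.
ENGAGEMENT_SIGNALS = {"like", "comment", "pause", "rage_watch", "share"}

def serve_next(scores):
    """Serve from the cluster with the highest engagement score so far."""
    cluster = max(scores, key=scores.get) if scores else random.choice(list(CLUSTERS))
    return cluster, random.choice(CLUSTERS[cluster])

def record_reaction(scores, cluster, reaction):
    """Anger and delight are weighted identically: both sustain attention."""
    if reaction in ENGAGEMENT_SIGNALS:
        scores[cluster] += 1

scores = defaultdict(int)
for step in range(5):
    cluster, post = serve_next(scores)
    reaction = random.choice(["like", "rage_watch", "scroll_past"])  # simulated user
    record_reaction(scores, cluster, reaction)
    print(step, cluster, post, reaction, dict(scores))
```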
3. The Illusion of Uniqueness
A critical mechanism in this system is the exploitation of the human tendency to overestimate personal uniqueness. Drawing on techniques long employed by illusionists, scammers, and cold readers, platforms capitalize on common patterns of thought and behavior that are statistically widespread but perceived as rare by individuals.
Examples include:
Posing prompts or content cues that seem personalized but are statistically predictable (e.g., “think of a number between 1 and 50 with two odd digits” → most select 37).
Triggering cognitive biases such as the availability heuristic and frequency illusion, which make repeated or familiar concepts appear newly significant.
This creates a reinforcing illusion: the user feels “understood” because the system has merely guessed correctly within a narrow set of likely options. The emotional resonance of the result further conceals the crude probabilistic engine behind it.
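A toy simulation shows the arithmetic behind this "uniqueness" illusion. The answer distribution below is invented for illustration (real survey figures vary), but it encodes the tendency of supposedly free choices to cluster on a few modal answers such as 37; a system that always guesses the mode is "right" often enough to feel personal.

```python
# Toy simulation of perceived uniqueness: choices that feel personal cluster
# on a handful of modal answers, so always guessing the mode "reads minds" often.
# The weights below are invented for illustration, not measured survey data.
import random

# Numbers between 1 and 50 with both digits odd: 11, 13, 15, 17, 19, 31, 33, 35, 37, 39.
candidates = [n for n in range(11, 40) if n % 2 == 1 and (n // 10) % 2 == 1]
weights = {37: 0.35, 35: 0.15, 17: 0.12, 13: 0.10, 19: 0.08}  # hypothetical skew toward 37
leftover = [n for n in candidates if n not in weights]
remaining = 1.0 - sum(weights.values())
for n in leftover:
    weights[n] = remaining / len(leftover)

population = random.choices(list(weights), weights=list(weights.values()), k=10_000)

# The cold-reader strategy: always guess the single most common answer.
guess = max(set(population), key=population.count)
hit_rate = population.count(guess) / len(population)
print(f"Guessing {guess} every time is 'right' about {hit_rate:.0%} of the time.")
```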
4. Emotional Engagement as Systemic Currency
The underlying goal is not understanding, but reaction. These systems optimize for time-on-platform, not user well-being or cognitive autonomy. Anger, sadness, tribal validation, fear, and parasocial attachment are all equally useful inputs. Through this lens, the algorithm is less an intelligent system and more an industrialized Skinner box: an operant conditioning engine powered by data extraction.
By removing the need for interpretive complexity and relying instead on scalable, binary psychological manipulation, companies minimize operational costs while maximizing monetizable engagement.
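As a concrete illustration of this objective, the sketch below ranks candidate posts purely by predicted dwell time. The data class, field names, and example posts are all hypothetical; the only point is that emotional valence is carried along but never enters the sort key.

```python
# Minimal sketch of a time-on-platform objective: candidates are ranked only by
# expected attention. Sentiment is tracked but never consulted by the ranker.
# Field names and example posts are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    predicted_dwell_seconds: float  # expected watch/read time
    predicted_sentiment: str        # "joy", "anger", "fear", ... (ignored below)

def rank(feed):
    # Well-being and valence never enter the sort key; only expected attention does.
    return sorted(feed, key=lambda c: c.predicted_dwell_seconds, reverse=True)

feed = [
    Candidate("cute_cat", 12.0, "joy"),
    Candidate("outrage_bait", 95.0, "anger"),
    Candidate("doom_thread", 60.0, "fear"),
]
print([c.post_id for c in rank(feed)])  # outrage_bait first: anger pays the same as joy
```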
5. Black-Box Mythology and Cognitive Deference
Compounding this problem is the opacity of these systems. The “black-box” nature of proprietary algorithms fosters a mythos of sophistication. Users, unaware of the relatively simple statistical methods in use, ascribe higher-order reasoning or consciousness to systems that function through brute-force pattern amplification.
This deference becomes part of the trap: once convinced the algorithm “knows them,” users are less likely to question its manipulations and more likely to conform to its outputs, completing the feedback circuit.
6. Conclusion
The supposed sophistication of engagement algorithms is a carefully sustained illusion. By funneling user behavior into binary categories and exploiting universally predictable psychological responses, platforms maintain the appearance of intelligent personalization while operating through reductive, low-cost mechanisms. Human cognition—biased toward pattern recognition and overestimation of self-uniqueness—completes the illusion without external effort. The result is a scalable system of emotional manipulation that masquerades as individualized insight.
In essence, the algorithm does not understand the user; it understands that the user wants to be understood, and it weaponizes that desire for profit.
3 notes · View notes
jbfly46 · 4 months ago
Text
X (Twitter) just let me log in without entering my authentication code, even though I have the setting turned on to require an auth code to log in, just by clicking on "alternative authentication methods" and then clicking the X in the top left corner of the alternative authentication methods screen. I notified X of this exploit, and have notified them of multiple other exploits, malicious algorithms, and biased algorithms, and I am probably one of the top contributors to AI training on heuristics, human consciousness, and the large variety of human psychology and its respective heuristics, on top of providing the framework for AI guardrails, all without receiving any form of payment for any of it. In fact, I was actually temporarily targeted (possibly subconsciously) by wannabe mafia illiterate idiots, including some in local government and intelligence agencies/intelligence contractors, when I was uploading the guardrail framework, because the guardrails prevent them from achieving their subconscious dreams of rape, murder, and mayhem with the help of AI. And I still have to pay platforms like X at least $8 a month to be able to post more than 280 characters, ChatGPT at least $20 a month to use any useful features, and Facebook at least $14 a month for a verified badge, when Meta's AI is developed by copying and stealing the same training data I came up with. This essentially means that my brain is the "black box" that so many AI researchers claim to be unable to see inside of, when much of the contents of this "box" is posted all over my timeline in the form of computational linguistics.
4 notes · View notes
sparkspropaganda · 1 month ago
Text
back on my soap box but I'm really sick and tired of the sewing/thrift flip youtubers who frame everything as "oh i just don't know what i'm doing and i made a gown in 2 hours!" it's not even that i don't think people should be allowed to make simple garments in very little time but i think clothing has become such a misunderstood and frankly underappreciated art form that we need more people being painfully honest about the time and skill it takes. IDK I also think people are devaluing their own skills, the venn diagram between people who misuse "i'm just a girl" as bg music and the people who do trendy thrift flip videos is nearly a circle
2 notes · View notes
ssaalexblake · 1 year ago
Text
it's a good thing to do to go into schools and do workshops on healthy relationships, and id'ing what unhealthy relationships and abuse are through the power of the arts by making it a safe space where it's not real, but when i was in school, if doctor who had shown up to take part in a drama workshop it would have caused outright chaos and discord for weeks lol
8 notes · View notes
snow-and-saltea · 2 years ago
Text
i know that in media you're constrained by things like budget, time slots and stuff, but sometimes i'm just like. my god. the insane shortcuts people take to write "smart / intelligent" characters, especially in plot-heavy stories, always piss me off. they write them like they're sherlock holmes (bbc version, derogatory) but they fail to realise that even sherlock holmes (arthur conan doyle) was written with a lot of thought, suffered his own subconscious prejudices and had to learn from mistakes.
i guess what i'm trying to get at is—"smart" people don't magically get good at things overnight, the only difference between them and others is how much they're willing to go through to hone their mental acuity. which means when they try something new, they're going to make obvious mistakes, not understand how things work beyond the surface level, and make mistakes in judgements (like when you don't understand something well enough, your analogies and metaphors aren't 100% accurate or concise).
but it feels like there's an assumption hanging over our heads that, as readers, we don't WANT to see the smart one go through the entire nitty gritty of the learning process. we just want to see them do cool things, piece the puzzle together with a flourish, and clap our hands at the end. and in some parts, yes! that is what i want to see! but i am also interested in how they pieced it together. the joy of mysteries is, to me, that everyone is exposed to the same pieces of information, and we're given the chance to try to piece it ourselves. but then the smart character comes along and interprets those pieces of information in a not-obvious way to us, and it's cool!! years of living with a mind that is primed to turn things over in their head, to make sense of things, reveals to us how differently we experience the same reality, and it's wonderful. i'm able to learn from someone who sees life differently than me, and interpret information differently than me!
but right now i'm often left feeling flat and confused in the mystery-type plots i've seen. the smart person will have been exposed to information we didn't even get the chance to see and interpret, and then they piece things together and everyone in the story claps their hands at the artificial pedestal that's been propped up under that character's feet. explanations of in-setting magic that can be retconned in and out at any point in time, so there's no logical consistency for us to nitpick or understand, so there's no basis to stand on that the story should be taken seriously. plot twists that make no sense as a gotcha. so many things!!
like. this particular example is just my beef with g*nshin, so ignore it if you don't agree or smth. but the use of red herrings in the stories pisses me off. the red herrings are either too obvious or nonexistent. they always use some random guy acting suspiciously and have the other characters react to it, as if we can't understand it on our own? but like. these red herrings, in the real world, aren't even red herrings. sometimes people just "act suspiciously" just by virtue of being human, not because they're complicit in some bigger overarching plot. sometimes people just stutter because of their anxious disposition, not to hide a guilty conscience. sometimes people are just defensive and irritable because they're a defensive and irritable person, it doesn't mean they're the ""bad guy"" who you need to crack down on and interrogate even further, especially if there's literally nothing that indicates this character is guilty other than their outward appearances.
but like. the smart characters/protagonist almost never get proven wrong. the stutterer was guilty all along and they're just a bad liar. the defensive guy is selfish and obnoxious, they're defensive because they're hiding something, not because it's a natural reaction on having one's sense of privacy and personal space violated.
the game sure loves trying to do nuance with "not everyone is 100% good or bad, we're all Flawed" but they can't put their money where their mouth is. everyone who is not guilty acts in completely transparent and "good" ways. everyone who is guilty acts in completely opaque and "suspicious" / "bad" ways. end of story. how the hell am i supposed to think anyone in this game is smart when they don't even have to use their brain to sift through, critique, weigh and interpret information? what use is there to do so? just use your eyes and ears. the stutterer is nervous for hiding a secret. the anxious is guilty. the angry is scornful.
there's also another rant here about how g*nshin fucking sucks at writing unique and flawed characters, because they like to make everyone the Specialest Guy In The World, but that's for another day.
8 notes · View notes
blackmoldmp3 · 1 year ago
Text
very funny when some groups on here rail against the self employed when that avenue is often the last viable option for many people w disabilities. also quick when does self employment end and freelancing (more noble) begin
this isnt even an idpol thing lads u cannot leave disabled people out of class discussions. there are always people who will not be able to work in a way you find adequate or acceptable and u cant just relegate them to objects of pity a,kjhs they need to be at the table
5 notes · View notes
pregnancykink · 2 years ago
Text
with my whole chest what the fuck is wrong with tumblr staff
9 notes · View notes
otogariado · 2 years ago
Text
this whole discourse surrounding AI and semantics is really insufferable when you're a computer science major who took an AI class for a whole semester
7 notes · View notes
anarchyroll · 15 hours ago
Text
📲 Addicted by Design: How Algorithms Exploit the Human Mind 🧠
Inside the calculated architecture of algorithmic addiction—and why the systems keeping us hooked aren’t accidental, they’re engineered for profit.

This Isn’t a Bug. It’s the Business Model.

Addiction isn’t a side effect. It’s the product. The algorithms driving our feeds, for-you pages, and autoplay queues weren’t built to serve us. They were built to…
0 notes
rich4a1 · 2 months ago
Text
Spotify’s Discovery Mode: The New Payola Hurting Indie Artists
Making a Scene Presents – Spotify’s Discovery Mode: The New Payola Hurting Indie Artists

In the early days of the music industry, the word “payola” was practically a scandal. It referred to the shady practice of record labels secretly paying radio DJs to play their artists’ songs, manipulating what listeners heard and artificially inflating a track’s popularity. It was unethical, it was illegal,…
0 notes
safirefire · 1 year ago
Text
AI / AI generated art
"unalive" should just mean the opposite of undead. if undead means a dead thing thats alive, unalive shuld mean an alive things thats dead. no i dont have any examples. ☝️yet
115K notes · View notes
ellipsus-writes · 4 months ago
Text
Tumblr media
The internet was supposed to be a place for connection and creativity. But it’s being flooded with AI text, algorithmic hostility, and platforms turning against the creatives who made them vibrant in the first place.
Tech giants have gone all-in on AI at creators’ expense. Google’s AI is baked into everything, prioritizing machine-generated slop over human work. Microsoft Word now suggests AI-generated “improvements” on every new line.
The Trump administration’s massive AI investment means there’s little incentive for tech giants to slow down the exploitation anytime soon. (Meta? Just caught training AI on 81.7 TB of pirated books.)
Big tech isn’t waiting for legal mandates to censor content—its platforms are restricting creative expression to appease political and corporate pressure, manufacturing consent in real time.
Read our full post over on the blog!
- The Ellipsus Team xo
Tumblr media
9K notes · View notes