#data algorithms
Explore tagged Tumblr posts
Text
I’m getting abused
This academic paper will explore the socio-economic impact and the loss of freedoms resulting from systemic exploitation and subjugation following the solicitation for the Vice President of the Council of State in the Netherlands. The paper will critically analyze how those in power, benefiting from the status quo, manipulate societal dynamics to maintain their authority, while others striving…
#algorithm control#algorithm distortion#algorithm exploitation#algorithm manipulation#algorithmic control#algorithmic manipulation#capitalism#Collective Action#Council of State#creative capital#creative control#Creative Economy#creative exploitation#creative freedom#creative industry#Creative Innovation#creative oppression#creative output#creative rights#creative suppression#creative talent#Creativity#creativity exploitation#cultural capital#cultural exploitation#data algorithms#data capitalism#data control#data exploitation#data manipulation
0 notes
Note
As cameras become more normalized (Sarah Bernhardt encouraging it, grifters on the rise, young artists using it), I wanna express how I will never turn to it because it fundamentally bores me to my core. There is no reason for me to want to use cameras because I will never want to give up my autonomy in creating art. I never want to become reliant on an inhuman object for expression, least of all if that object is created and controlled by manufacturing companies. I paint not because I want a painting but because I love the process of painting. So even in a future where everyone’s accepted it, I’m never gonna sway on this.
if i have to explain to you that using a camera to take a picture is not the same as using generative ai to generate an image then you are a fucking moron.
#ask me#anon#no more patience for this#i've heard this for the past 2 years#“an object created and controlled by companies” anon the company cannot barge into your home and take your camera away#or randomly change how it works on a whim. you OWN the camera that's the whole POINT#the entire point of a camera is that i can control it and my body to produce art. photography is one of the most PHYSICAL forms of artmaking#you have to communicate with your space and subjects and be conscious of your position in a physical world.#that's what makes a camera a tool. generative ai (if used wholesale) is not a tool because it's not an implement that helps you#do a task. it just does the task for you. you wouldn't call a microwave a “tool”#but most importantly a camera captures a REPRESENTATION of reality. it captures a specific irreproducible moment and all its data#read Roland Barthes: Studium & Punctum#generative ai creates an algorithmic IMITATION of reality. it isn't truth. it's the average of truths.#while conceptually that's interesting (if we wanna get into media theory) but that alone should tell you why a camera and ai aren't the same#ai is incomparable to all previous mediums of art because no medium has ever solely relied on generative automation for its creation#no medium of art has also been so thoroughly constructed to be merged into online digital surveillance capitalism#so reliant on the collection and commodification of personal information for production#if you think using a camera is “automation” you have worms in your brain and you need to see a doctor#if you continue to deny that ai is an apparatus of tech capitalism and is being weaponized against you the consumer you're delusional#the fact that SO many tumblr leftists are ready to defend ai while talking about smashing the surveillance state is baffling to me#and their defense is always “well i don't engage in systems that would make me vulnerable to ai so if you own an apple phone that's on you”#you aren't a communist you're just self-centered
629 notes
·
View notes
Text
Reverse engineers bust sleazy gig work platform

If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/11/23/hack-the-class-war/#robo-boss
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
Supposedly, these lines were included in a 1979 internal presentation at IBM; screenshots of them routinely go viral:
https://twitter.com/SwiftOnSecurity/status/1385565737167724545?lang=en
The reason for their newfound popularity is obvious: the rise and rise of algorithmic management tools, in which your boss is an app. That IBM slide is right: turning an app into your boss allows your actual boss to create an "accountability sink" in which there is no obvious way to blame a human or even a company for your maltreatment:
https://profilebooks.com/work/the-unaccountability-machine/
App-based management-by-bossware turns the bug identified by the unknown author of that IBM slide into a feature. When an app is your boss, it can force you to scab:
https://pluralistic.net/2023/07/30/computer-says-scab/#instawork
Or it can steal your wages:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
But tech giveth and tech taketh away. Digital technology is infinitely flexible: the program that spies on you can be defeated by another program that defeats spying. Every time your algorithmic boss hacks you, you can hack your boss back:
https://pluralistic.net/2022/12/02/not-what-it-does/#who-it-does-it-to
Technologists and labor organizers need one another. Even the most precarious and abused workers can team up with hackers to disenshittify their robo-bosses:
https://pluralistic.net/2021/07/08/tuyul-apps/#gojek
For every abuse technology brings to the workplace, there is a liberating use of technology that workers unleash by seizing the means of computation:
https://pluralistic.net/2024/01/13/solidarity-forever/#tech-unions
One tech-savvy group on the cutting edge of dismantling the Torment Nexus is Algorithms Exposed, a tiny, scrappy group of EU hacker/academics who recruit volunteers to reverse engineer and modify the algorithms that rule our lives as workers and as customers:
https://pluralistic.net/2022/12/10/e2e/#the-censors-pen
Algorithms Exposed have an admirable supply of seemingly boundless energy. Every time I check in with them, I learn that they've spun out yet another special-purpose subgroup. Today, I learned about Reversing Works, a hacking team that reverse engineers gig work apps, revealing corporate wrongdoing that leads to multimillion euro fines for especially sleazy companies.
One such company is Foodinho, an Italian subsidiary of the Spanish food delivery company Glovo. Foodinho/Glovo has been in the crosshairs of Italian labor enforcers since before the pandemic, racking up millions in fines – first for failing to file the proper privacy paperwork disclosing the nature of the data processing in the app that Foodinho riders use to book jobs. Then, after the Italian data commission investigated Foodinho, the company attracted new, much larger fines for its out-of-control surveillance conduct.
As all of this was underway, Reversing Works was conducting its own research into Glovo/Foodinho's app, running it on a simulated Android handset inside a PC so they could peer into the app's data collection and processing. They discovered a nightmarish world of pervasive, illegal worker surveillance, and published their findings a year ago, in November 2023:
https://www.etui.org/sites/default/files/2023-10/Exercising%20workers%20rights%20in%20algorithmic%20management%20systems_Lessons%20learned%20from%20the%20Glovo-Foodinho%20digital%20labour%20platform%20case_2023.pdf
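(The report doesn't ship its tooling, but the general recipe – run the app on an emulated handset and route its traffic through an intercepting proxy – can be sketched with off-the-shelf tools. Here's a minimal mitmproxy addon along those lines, assuming you've already pointed the emulator at the proxy and trusted its certificate; the tracker domains listed are illustrative placeholders, not findings from the report:)

```python
# tracker_audit.py -- a minimal sketch of app-traffic auditing with mitmproxy,
# NOT Reversing Works' actual tooling. Run the app in an Android emulator whose
# traffic goes through the proxy, then: mitmdump -s tracker_audit.py
from collections import Counter
from mitmproxy import http

KNOWN_TRACKER_DOMAINS = {        # illustrative placeholders only
    "app-measurement.com",
    "graph.facebook.com",
    "api.segment.io",
}

class TrackerAudit:
    def __init__(self):
        self.hits = Counter()

    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        # Count every host the app talks to; flag known third-party trackers loudly.
        self.hits[host] += 1
        if any(host == d or host.endswith("." + d) for d in KNOWN_TRACKER_DOMAINS):
            print(f"[tracker] {host}{flow.request.path} ({self.hits[host]} requests so far)")

    def done(self) -> None:
        # Dump a summary of all observed hosts when mitmdump exits.
        for host, n in self.hits.most_common():
            print(f"{n:5d}  {host}")

addons = [TrackerAudit()]
```

Exercise the app for a while and the exit summary gives you a first-pass inventory of every third-party host it phones home to – the starting point for the kind of tracker census described below.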
That report reveals all kinds of extremely illegal behavior. Glovo/Foodinho makes its riders' data accessible across national borders, so Glovo managers outside of Italy can access fine-grained surveillance information and sensitive personal information – a major data protection no-no.
Worse, Glovo's app embeds trackers from a huge number of other tech platforms (for chat, analytics, and more), making it impossible for the company to account for all the ways that its riders' data is collected – again, a requirement under Italian and EU data protection law.
All this data collection continues even when riders have clocked out for the day – it's as though your boss followed you home after quitting time and spied on you.
The research also revealed evidence of a secretive worker scoring system that ranked workers based on undisclosed criteria and reserved the best jobs for workers with high scores. This kind of thing is pervasive in algorithmic management, from gig work to Youtube and Tiktok, where performers' videos are routinely suppressed because they crossed some undisclosed line. When an app is your boss, your every paycheck is docked because you violated a policy you're not allowed to know about, because if you knew why your boss was giving you shitty jobs, or refusing to show the video you spent thousands of dollars making to the subscribers who asked to see it, then maybe you could figure out how to keep your boss from detecting your rulebreaking next time.
All this data-collection and processing is bad enough, but what makes it all a thousand times worse is Glovo's data retention policy – they're storing this data on their workers for four years after the worker leaves their employ. That means that mountains of sensitive, potentially ruinous data on gig workers is just lying around, waiting to be stolen by the next hacker that breaks into the company's servers.
Reversing Works's report made quite a splash. A year after its publication, the Italian data protection agency fined Glovo another 5 million euros and ordered them to cut this shit out:
https://reversing.works/posts/2024/11/press-release-reversing.works-investigation-exposes-glovos-data-privacy-violations-marking-a-milestone-for-worker-rights-and-technology-accountability/
As the report points out, Italy is extremely well set up to defend workers' rights from this kind of bossware abuse. Not only do Italian enforcers have all the privacy tools created by the GDPR, the EU's flagship privacy regulation – they also have the benefit of Italy's 1970 Workers' Statute. The Workers' Statute is a visionary piece of legislation that protects workers from automated management practices. Combined with later privacy regulation, it gave Italy's data regulators sweeping powers to defend Italian workers, like Glovo's riders.
Italy is also a leader in recognizing gig workers as de facto employees, despite the tissue-thin pretense that adding an app to your employment means that you aren't entitled to any labor protections. In the case of Glovo, the fine-grained surveillance and reputation scoring were deemed proof that Glovo was employer to its riders.
Reversing Works' report is a fascinating read, especially the sections detailing how the researchers recruited a Glovo rider who allowed them to log in to Glovo's platform on their account.
As Reversing Works points out, this bottom-up approach – where apps are subjected to technical analysis – has real potential for labor organizations seeking to protect workers. Their report established multiple grounds on which a union could seek to hold an abusive employer to account.
But this bottom-up approach also holds out the potential for developing direct-action tools that let workers flex their power, by modifying apps, or coordinating their actions to wring concessions out of their bosses.
After all, the whole reason for the gig economy is to slash wage-bills, by transforming workers into contractors, and by eliminating managers in favor of algorithms. This leaves companies extremely vulnerable, because when workers come together to exercise power, their employer can't rely on middle managers to pressure workers, deal with irate customers, or step in to fill the gap themselves:
https://projects.itforchange.net/state-of-big-tech/changing-dynamics-of-labor-and-capital/
Only by seizing the means of computation can workers and organized labor turn the tables on bossware – both by directly altering the conditions of their employment, and by producing the evidence and tools that regulators can use to force employers to make those alterations permanent.
Image: EFF (modified) https://www.eff.org/files/issues/eu-flag-11_1.png
CC BY 3.0 http://creativecommons.org/licenses/by/3.0/us/
#pluralistic#etui#glovo#foodinho#algorithms exposed#reverse engineering#platform work directive#eu#data protection#algorithmic management#gdpr#privacy#labor#union busting#tracking exposed#reversing works#adversarial interoperability#comcom#bossware
352 notes
·
View notes
Text
I don’t have a posted DNI for a few reasons but in this case I’ll be crystal clear:
I do not want people who use AI in their whump writing (generating scenarios, generating story text, etc.) to follow me or interact with my posts. I also do not consent to any of my writing, posts, or reblogs being used as inputs or data for AI.
#not whump#whump community#ai writing#beans speaks#blog stuff#:/ stop using generative text machines that scrape data from writers to ‘make your dream scenarios’#go download some LANDSAT data and develop an AI to determine land use. use LiDAR to determine tree crown health by near infrared values.#thats a good use of AI (algorithms) that I know and respect.#using plagiarized predictive text machines is in poor taste and also damaging to the environment. be better.
293 notes
·
View notes
Text
thinking about how jon was literally set up to blunder, being thrown into a job position everyone knew he wasn’t qualified for, so he had to scramble to prove his worth to everyone including himself, being dragged into supernatural experiences where he takes the blame for any of it happening when it was all so far out of his control. he had a target on his back since he was a child. everyone he cares about either died or left or wants him dead. except martin. and that one thing, that one special relationship where jon is loved and he can love back, they are thrown into an apocalyptic hellscape that jon can only blame himself for, and rather than reveling in this newfound love, they have to use it as a crutch to even muster the willpower to keep going. i’m gonna vomit
#i’m just rambling because i don’t want to do homework#curse you algorithms & data#jmart#jonathan sims#head archivist of the magnus institute london#tma#the magnus archives#mag spoilers
329 notes
·
View notes
Text
mage viktor discourse again on twitter and all i can say in my little corner over here once again is, I don't know why the entire fandom takes it as canon that mage Viktor failed to save every world he manipulated.
Canon does not provide evidence of this. This is fanon speculation. It's a fine headcanon to have, but everyone talks about it like it's canon when it isn't. Canon is ambiguous about the outcome of the timelines mage Viktor altered. The little nods we are given point, in my opinion, towards the opposite conclusion, that he successfully averted destruction.
I've written meta on this before but in summary:
1) 'In all timelines, in all possibilities' is worded precisely, it's not 'out of all timelines'; the implication is that every time, Jayce brings Viktor back from the brink, not just in our timeline. 'Only you' doesn't refer to our timeline's Jayce, it refers to all Jayces. Jayce always brings him home. If Viktor continuously put the fate of each timeline in Jayce's hands and Jayce failed over and over, I don't think he'd say those words. And the way he says them matters. His words are tinged with wonder, not sorrow. As if over and over again, he is shown that Jayce saves him, and it continues to amaze him. He doesn't sound defeated, like this is the next in a long line of Jayces he's sending off to die. The feeling is that Viktor's faith in Jayce has not been misplaced.
2) If mage Viktor doomed every timeline, there would be hundreds (or more) mage Viktors. All running around manipulating timelines. I highly doubt the writers wanted to get into that kind of sticky situation. The tragedy of mage Viktor is that he is singular. Alone. Burdened with the responsibility of the multiverse. The emotional gut punch of his fate is ruined if other timelines led to the same outcome, and from a practical standpoint, having multiple reality-bending omniscient mages would rip apart the fabric of the arcane.
There are other points, such as there being only one corrupted Mercury Hammer and our Jayce is the only one to receive it, and the fact that if mage Viktor is as omniscient as he is implied to be, he could easily step back into other timelines and correct course, because it's highly unlikely he could sit still and watch things go down in flames. But these things can be argued elsewhere.
While I love conversations about mage Viktor's motives and selfishness vs altruism, the writers & artbook have expressed that Jayce and Viktor care greatly about Runeterra and want to fix their mistakes to save it, and that their reconciliation is symbolic of Piltover and Zaun coming together as well. Yes, they make disastrous decisions towards each other, making choices for the other or without the other, which has negative consequences for their relationship and for Runeterra - but I think fandom pushes their selfishness even past what's canon sometimes, as if their entire goal hadn't always been to selflessly help the world around them. Their final reconciliation is about bridging the gap that grew between them - the pain and grief and secrets, betraying themselves and each other - to mutually choose each other openly and honestly. Part of the beauty of their story, as expressed by the creators, is that in their final moments, they chose each other and took responsibility for their actions by sacrificing themselves to end what they started, together - and that choosing each other saved the world. TPTB have stated this - that Jayce and Viktor are the glue holding civilization together, and when they come back to each other, they can restore balance. It's when they're apart, when they hurt each other and miscommunicate, when they abandon their commitment to each other and their dream, that the greater world suffers. Their strife is mirrored in the story-world at large.
Mage Viktor is framed as a solitary penitent figure, damned to an eternity of atoning for his mistakes. He paid the ultimate price and now is forced to live his personal nightmare of exactly what he was trying to avoid for himself with the glorious evolution. The narrative clues we're given point more in the direction that he saves timelines rather than dooms them. If Viktor's actions kept killing Jayce, the very boy he couldn't bear to not save each time, it would undermine these narrative choices. Yes, Viktor couldn't stand to live in a world where he never meets Jayce, so he ensures it keeps happening. But in that same breath, he couldn't bear to see a world where his actions continue to destroy Jayce and destroy Runeterra. His entire arc in s2 is born of his selfless desire to help humanity, help individual people. He would not lightly destroy entire worlds. That's his original grief multiplied a thousandfold, and narratively it would lessen the impact of the one, true loss he did suffer, his own Jayce. It wouldn't make sense for him to be alright with damning other timelines to suffer the same catastrophic tragedy that created him. I mean, maybe I'm delusional here, but is that not the entire point? Because that's what I took away when I watched the show.
As I said, I love discussions about mage Viktor, as there's a lot to play with. All I wish is that the fandom at large would not just assume or accept the Mage Viktor Dooms Every Timeline idea as canon, when there is nothing in the actual canon that confirms this. Maybe people need to just, go back and rewatch the actual episode, to recall how mage Viktor is presented to us, and what it's implied we're supposed to take away from his scenes, and separate that from the layers of headcanon the fandom has constructed.
#arcane#mage viktor#jayvik#viktor arcane#meta#this is like. along the same vein as 'jayce knew all along viktor would go to the hexgates during the final battle'#like that is a headcanon. we don't know that!!#the actual scene could be read either way and i know when i watched it that's not how i interpreted it#and i doubt it's how most casual viewers interpreted it#fandom gets so deep into itself after a show ends that you really have to just. rewatch the show to recalibrate yourself lol#for all that people bicker about mage viktor yall dont include him in your fics v much lol#anyway i love mage viktor and he's probably my favorite version of viktor <3#i just wish fandom stopped insisting on a monolithic view of canon#and the idea that mage viktor fucked over hundreds of timelines to collect data points like a scientist is just#rubs me the wrong way as a scientist lol#you do realize that scientists don't treat everything in life like a science experiment right?#it's about inquisitiveness and curiosity. not 'i will approach this emotional thing from a cold and calculating standpoint'#viktor has never been cold and calculating. he's consistently driven by emotion in the show jfc please rewatch canon#i just think that people would benefit from a surface level reading once in a while lol#sometimes fandom digs so far into the minutiae that they forget the overarching takeaways that the story presents#assuming there must be some hidden meaning that sometimes (like this) is decided to be the literal opposite of what's presented#rewatch mage viktor's scenes and ask yourself if 'deranged destroyer of worlds' is really what the show was trying to have you take away#then again there seems to be a faction of this fandom that for some absurd reason thinks jayce was forced to stay and die with viktor#so i guess media illiteracy can't be helped for some lmao#i post these things on here because my twitter posts get literally 10 views thanks algorithm#so the chunk of the fandom i really want to see this will not#but i must speak my truth
129 notes
·
View notes
Text
Imagine: Rewind meets Optimus Prime at some point during the war and starts fanboying SUUUPER hard... NOT because that's fucking Optimus Prime or anything but because Orion Pax is a fucking LEGEND in the archivist community
#like who cares this guy has the matrix of leadership#he created the most efficient data storage algorithm since the golden age#he has the entire iaconian library memorized#he knows every change that's been made to the covenant of primus#thats why this guy is cool#transformers#maccadam#Optimus prime#rewind#orion pax
54 notes
·
View notes
Text
My fears regarding TikTok go beyond the blatant propaganda, narrative control, and censorship. I am concerned about the algorithm and the potential it has to be weaponized. It is so specific and accurate at finding what niches, beliefs, and subcultures people are a part of that it could be used to identify people/groups the government wants to target. The US didn’t want China to have the data because they wanted the data.
For instance, if they wanted to target crafty, left-handed, eldest siblings, who like [insert specific sports team here] all they would have to do is pull on the threads of the algorithm to have the accounts and information of people who were sorted into these specific boxes.
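Mechanically, that kind of targeting is nothing exotic – it's just a filter over interest-profile records. A purely illustrative sketch (invented field names and labels, no relation to any real platform's data model):

```python
# Purely illustrative: how interest-segment data could be queried to single out a group.
# Accounts, fields, and interest labels are all invented.
from dataclasses import dataclass, field

@dataclass
class Profile:
    account: str
    inferred_interests: set = field(default_factory=set)

profiles = [
    Profile("user_a", {"crafts", "left-handed-tools", "team-x"}),
    Profile("user_b", {"cooking", "team-y"}),
    Profile("user_c", {"crafts", "left-handed-tools", "team-x", "eldest-sibling-content"}),
]

target_segment = {"crafts", "left-handed-tools", "team-x"}

# Anyone whose inferred interests cover the whole segment gets pulled out.
matches = [p.account for p in profiles if target_segment <= p.inferred_interests]
print(matches)   # ['user_a', 'user_c']
```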
#that is why I’m only redownloading it to remove my data and delete the account#I fear they would use it to target the pro Palestine movements pro human rights movements#or to identify people of specific immigration statuses#and all this Trump ass-kissing has me concerned that they have already rolled over to placate him#and our data has already been compromised#because tiktok was a huge well of information and organization regarding narratives the government could not get ahead of#but with the algorithm and account info#they could target and punish dissidents#tiktok#tiktok ban#us politics
26 notes
·
View notes
Text
youtube
Hell Poison - Fire Deep Within (from their upcoming album, 'Pressure from the Debths', <---that's their spelling, it's not a typo from me)

The resurgence of thrash seems to be an ongoing process and, speaking for myself here, to those of us who lived through the heyday of thrash (circa 1985 to the mid 1990s) the newer bands feel like they're cosplaying 80s metal bands. The style, the riffs, the aesthetics, recycled and adorned to re-enact a period of metal history that they hold with the deepest of affections. I get it. I still love 80s thrash too. I just don't have any nostalgia for it.

The thing here is, I keep stumbling on bands who've got the 80s thrash shtick down and Hell Poison is one of those bands, only they've infused their particular brand of 80s thrash with an obvious love for Motorhead and black metal bands like Venom. Hell, they FEEL like they might have started as a Venom cover band who, possessing more talent and vigor than Venom, decided to move on to crafting their own songs. The lyrics are just as stupid though. The riffs I dig but I never understood the appeal of satanism or whatever the hell "subversive bands" sing about: burning churches, demons, incantations, killing priests, raping nuns, it's all idiocy* (and rape is a disgusting violation of another's autonomy and personhood kids so don't do it or write songs about doing it.)

Crafting a video to look like an aging VHS tape being watched on a TV (probably in a basement) is just stepping up that 80s metal cosplay wonderfully. I think I enjoyed Hell Poison's 2021 album, 'Breathing for the Filth'. To repeat myself, the lyrics are ignorable, it's the music you want to savor. If this is your kind of genre then you might enjoy this band.

*I don't necessarily mind trash talking Christians or Christianity, especially the fundamentalist, right-wing, "Christ is too liberal", we bless AR-15s at our churches, wealth is a virtue to be celebrated in the name of capital G God, kind of "Christian". You know the type. Let's hope they all get raptured.
#hell poison#thrash metal#brazilian thrash#ordered sound#I guess this is a kind of advertisement#is sharing any interest a kind of advertisement?#more data for the algorithm#Youtube
15 notes
·
View notes
Text
god I've seen what you've done for others (official character playlist)
#HICKEY WHEN#we get to his week and it's just a statement#“ec would never trust the state or an algorithm with his private data go listen to a babbling brook”
21 notes
·
View notes
Text
guys I don’t know what the fuck happened to the meta ad algorithm but every single ad it is showing me between stories is like . severe idf propaganda. anti vaxism. pro life conspiracy theories . so that’s fun that’s cool. like i have been trying to use less Instagram and they said okay we’ve got just the thing . here’s israeli soldiers holding up huge ass guns next to a newborn baby . here’s your friend posting a story about Israeli war crimes in Gaza but don’t worry we’re gonna follow it up with an ad about how Israel is a land of happy smiling people come visit. now here’s someone talking about how exciting it is to find an unvaccinated surrogate for crunchy family planning. now here’s a lady talking about how abortion is bad for women’s mental health ….. and it’s like ok u guys win I am clicking out of stories now :)
#it’s just SO sudden and I have to imagine targeted???#bc all my ads used to be like . hey here is a cool vintage velvet shirt. hi do you like this beautiful gold ring it looks like a locket#am I just caught in some cursed Jewish woman spot of the algorithm or is everyone seeing more insane right wing propaganda there#and like . is this coming from zuck/ a platform wide shift to promote more right wing shit#or are these advertisers just buying insane amounts of data all the sudden
24 notes
·
View notes
Text
truly my best brain time is in the middle of the night caffeine & sugar rush. I think I just understood math, like some part of the general pattern of math if that makes sense. something clicked somewhere in my brain and I felt it
#idk this might sound like a sleep deprived caffeine up god complexed person's rambles bc that's 100% what it is#(it's vector & matrices I'm at currently bc got some algorithms to figure out and explain how those would apply to deal with EEG-data#in practice and I think I got like the basic thing that idk how to put into words; like the connections etc.)#I think it helped to have that EEG-data there to think things through with bc some context is always nice#I suck at just purely theoretical math#anyways feeling great & maybe I'm not the dumbest person on this earth after all#april 2024#2024
74 notes
·
View notes
Text
No streaming platform can accurately predict taste; humans are too dynamic to be predicted consistently. Instead, Spotify builds models of users and makes predictions by recommending music that matches the models. Stuck in these feedback loops, musical styles start to converge as songs are recommended according to a pre-determined vocabulary of Echo Nest descriptors. Eventually, listeners may start to resemble the models streaming platforms have created. Over time, some may grow intolerant of anything other than an echo.

Before there were Echo Nest parameters, the 20th century music industry relied on other kinds of data to try to make hits. So-called “merchants of cool” hit the streets to hunt for the next big trend, conducting studies on teenage desire that generated tons of data, which was then consulted to market the next hit sensation. This kind of data collection is now built into the apparatus for listening itself. Once a user has listened to enough music through Spotify to establish a taste profile (which can be reduced to data like songs themselves, in terms of the same variables), the recommendation systems simply get to work. The more you use Spotify, the more Spotify can affirm or try to predict your interests. (Are you ready for some more acousticness?)

Breaking down both the products and consumers of culture into data has not only revealed an apparent underlying formula for virality; it has also contributed to new kinds of formulaic content and a canalizing of taste in the age of streaming. Reduced to component parts, culture can now be recombined and optimized to drive user engagement. This allows platforms to squeeze more value out of backlogs of content and shuffle pre-existing data points into series of new correlations, driving the creation of new content on terms that the platforms are best equipped to handle and profit from. (Listeners will get the most out of music optimized for Spotify on Spotify.) But although such reconfigured cultural artifacts might appear new, they are made from a depleted pantry of the same old ingredients. This threatens to starve culture of the resources to generate new ideas, new possibilities.
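Here's a toy version of the "model of the user" the excerpt describes: a taste profile built as the average of the feature vectors of played songs, with recommendations ranked by similarity to that profile. The feature names are Echo Nest-style descriptors, but the pipeline is a generic content-based-filtering sketch, not Spotify's actual system:

```python
# Toy content-based recommender: the user "model" is the mean feature vector of
# what they've played; recommendations are the nearest unplayed songs to it.
import numpy as np

# Feature order: [acousticness, danceability, energy, valence] -- made-up catalog values.
catalog = {
    "song_a": np.array([0.90, 0.30, 0.20, 0.40]),
    "song_b": np.array([0.85, 0.35, 0.25, 0.50]),
    "song_c": np.array([0.10, 0.90, 0.95, 0.80]),
    "song_d": np.array([0.15, 0.85, 0.90, 0.70]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(listened, k=2):
    profile = np.mean([catalog[s] for s in listened], axis=0)   # the "model" of the user
    candidates = [s for s in catalog if s not in listened]
    return sorted(candidates, key=lambda s: cosine(profile, catalog[s]), reverse=True)[:k]

print(recommend(["song_a"]))   # -> the songs closest to song_a's feature vector
```

Even this toy shows the loop: every recommendation is, by construction, the nearest available echo of what the profile already contains.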
118 notes
·
View notes
Text
The surprising truth about data-driven dictatorships

Here’s the “dictator’s dilemma”: they want to block their country’s frustrated elites from mobilizing against them, so they censor public communications; but they also want to know what their people truly believe, so they can head off simmering resentments before they boil over into regime-toppling revolutions.
These two strategies are in tension: the more you censor, the less you know about the true feelings of your citizens and the easier it will be to miss serious problems until they spill over into the streets (think: the fall of the Berlin Wall or Tunisia before the Arab Spring). Dictators try to square this circle with things like private opinion polling or petition systems, but these capture a small slice of the potentially destabilizing moods circulating in the body politic.
Enter AI: back in 2018, Yuval Harari proposed that AI would supercharge dictatorships by mining and summarizing the public mood — as captured on social media — allowing dictators to tack into serious discontent and defuse it before it erupted into unquenchable wildfire:
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Harari wrote that “the desire to concentrate all information and power in one place may become [dictators’] decisive advantage in the 21st century.” But other political scientists sharply disagreed. Last year, Henry Farrell, Jeremy Wallace and Abraham Newman published a thoroughgoing rebuttal to Harari in Foreign Affairs:
https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making
They argued that — like everyone who gets excited about AI, only to have their hopes dashed — dictators seeking to use AI to understand the public mood would run into serious training data bias problems. After all, people living under dictatorships know that spouting off about their discontent and desire for change is a risky business, so they will self-censor on social media. That’s true even if a person isn’t afraid of retaliation: if you know that using certain words or phrases in a post will get it autoblocked by a censorbot, what’s the point of trying to use those words?
The phrase “Garbage In, Garbage Out” dates back to 1957. That’s how long we’ve known that a computer that operates on bad data will barf up bad conclusions. But this is a very inconvenient truth for AI weirdos: having given up on manually assembling training data based on careful human judgment with multiple review steps, the AI industry “pivoted” to mass ingestion of scraped data from the whole internet.
But adding more unreliable data to an unreliable dataset doesn’t improve its reliability. GIGO is the iron law of computing, and you can’t repeal it by shoveling more garbage into the top of the training funnel:
https://memex.craphound.com/2018/05/29/garbage-in-garbage-out-machine-learning-has-not-repealed-the-iron-law-of-computer-science/
When it comes to “AI” that’s used for decision support — that is, when an algorithm tells humans what to do and they do it — then you get something worse than Garbage In, Garbage Out — you get Garbage In, Garbage Out, Garbage Back In Again. That’s when the AI spits out something wrong, and then another AI sucks up that wrong conclusion and uses it to generate more conclusions.
To see this in action, consider the deeply flawed predictive policing systems that cities around the world rely on. These systems suck up crime data from the cops, then predict where crime is going to be, and send cops to those “hotspots” to do things like throw Black kids up against a wall and make them turn out their pockets, or pull over drivers and search their cars after pretending to have smelled cannabis.
The problem here is that “crime the police detected” isn’t the same as “crime.” You only find crime where you look for it. For example, there are far more incidents of domestic abuse reported in apartment buildings than in fully detached homes. That’s not because apartment dwellers are more likely to be wife-beaters: it’s because domestic abuse is most often reported by a neighbor who hears it through the walls.
So if your cops practice racially biased policing (I know, this is hard to imagine, but stay with me /s), then the crime they detect will already be a function of bias. If you only ever throw Black kids up against a wall and turn out their pockets, then every knife and dime-bag you find in someone’s pockets will come from some Black kid the cops decided to harass.
That’s life without AI. But now let’s throw in predictive policing: feed your “knives found in pockets” data to an algorithm and ask it to predict where there are more knives in pockets, and it will send you back to that Black neighborhood and tell you to throw even more Black kids up against a wall and search their pockets. The more you do this, the more knives you’ll find, and the more you’ll go back and do it again.
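The loop is easy to simulate. A minimal sketch with made-up numbers: two neighborhoods with identical true offense rates, patrols allocated in proportion to previously recorded arrests — and the recorded gap never closes, because you only find what you look for:

```python
# Minimal simulation of the GIGOGBI loop: identical true offense rates, but patrols
# are allocated in proportion to previously *recorded* arrests. Numbers are made up.
import random

random.seed(0)
TRUE_RATE = 0.05                                        # same true offense rate everywhere
arrests = {"neighborhood_A": 12, "neighborhood_B": 8}   # slightly biased starting data

for year in range(5):
    total = sum(arrests.values())
    for hood in arrests:
        patrols = round(100 * arrests[hood] / total)    # "predictive" patrol allocation
        # You only find what you look for: arrests scale with patrols, not with crime.
        found = sum(random.random() < TRUE_RATE for _ in range(patrols * 10))
        arrests[hood] += found
    print(year, arrests)

# Both neighborhoods have the same true rate, yet the recorded data keeps "showing"
# that neighborhood_A has ~50% more crime: the initial bias is locked in, and the
# absolute gap between the two neighborhoods grows every single year.
```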
This is what Patrick Ball from the Human Rights Data Analysis Group calls “empiricism washing”: take a biased procedure and feed it to an algorithm, and then you get to go and do more biased procedures, and whenever anyone accuses you of bias, you can insist that you’re just following an empirical conclusion of a neutral algorithm, because “math can’t be racist.”
HRDAG has done excellent work on this, finding a natural experiment that makes the problem of GIGOGBI crystal clear. The National Survey On Drug Use and Health produces the gold standard snapshot of drug use in America. Kristian Lum and William Isaac took Oakland’s drug arrest data from 2010 and asked Predpol, a leading predictive policing product, to predict where Oakland’s 2011 drug use would take place.

[Image ID: (a) Number of drug arrests made by Oakland police department, 2010. (1) West Oakland, (2) International Boulevard. (b) Estimated number of drug users, based on 2011 National Survey on Drug Use and Health]
Then, they compared those predictions to the outcomes of the 2011 survey, which shows where actual drug use took place. The two maps couldn’t be more different:
https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x
Predpol told cops to go and look for drug use in a predominantly Black, working class neighborhood. Meanwhile the NSDUH survey showed the actual drug use took place all over Oakland, with a higher concentration in the Berkeley-neighboring student neighborhood.
What’s even more vivid is what happens when you simulate running Predpol on the new arrest data that would be generated by cops following its recommendations. If the cops went to that Black neighborhood and found more drugs there and told Predpol about it, the recommendation gets stronger and more confident.
In other words, GIGOGBI is a system for concentrating bias. Even trace amounts of bias in the original training data get refined and magnified when they are output through a decision support system that directs humans to go and act on that output. Algorithms are to bias what centrifuges are to radioactive ore: a way to turn minute amounts of bias into pluripotent, indestructible toxic waste.
There’s a great name for an AI that’s trained on an AI’s output, courtesy of Jathan Sadowski: “Habsburg AI.”
And that brings me back to the Dictator’s Dilemma. If your citizens are self-censoring in order to avoid retaliation or algorithmic shadowbanning, then the AI you train on their posts in order to find out what they’re really thinking will steer you in the opposite direction, so you make bad policies that make people angrier and destabilize things more.
Or at least, that was Farrell (et al)’s theory. And for many years, that’s where the debate over AI and dictatorship has stalled: theory vs theory. But now, there’s some empirical data on this, thanks to “The Digital Dictator’s Dilemma,” a new paper from UCSD PhD candidate Eddie Yang:
https://www.eddieyang.net/research/DDD.pdf
Yang figured out a way to test these dueling hypotheses. He got 10 million Chinese social media posts from the start of the pandemic, before companies like Weibo were required to censor certain pandemic-related posts as politically sensitive. Yang treats these posts as a robust snapshot of public opinion: because there was no censorship of pandemic-related chatter, Chinese users were free to post anything they wanted without having to self-censor for fear of retaliation or deletion.
Next, Yang acquired the censorship model used by a real Chinese social media company to decide which posts should be blocked. Using this, he was able to determine which of the posts in the original set would be censored today in China.
That means that Yang knows what the “real” sentiment in the Chinese social media snapshot is, and what Chinese authorities would believe it to be if Chinese users were self-censoring all the posts that would be flagged by censorware today.
From here, Yang was able to play with the knobs, and determine how “preference-falsification” (when users lie about their feelings) and self-censorship would give a dictatorship a misleading view of public sentiment. What he finds is that the more repressive a regime is — the more people are incentivized to falsify or censor their views — the worse the system gets at uncovering the true public mood.
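You can get a feel for that result with a toy simulation (my sketch of the dynamic the paper describes, not Yang's model): fix a true share of discontent, let discontented users go silent or post false praise with a probability that scales with repression, and watch the "observed" sentiment drift away from the truth:

```python
# Toy sketch of the dynamic (not Yang's model): the more repressive the regime,
# the more discontented users self-censor or falsify, and the further the
# observed sentiment drifts from the true one. All numbers are made up.
import random

random.seed(1)
TRUE_DISCONTENT = 0.40          # true share of discontented users
N = 100_000

def observed_discontent(repression: float) -> float:
    visible_neg = 0
    visible_total = 0
    for _ in range(N):
        discontented = random.random() < TRUE_DISCONTENT
        if discontented and random.random() < repression:
            # Self-censor (post nothing) or falsify (post praise): either way,
            # the discontent never reaches the training data.
            if random.random() < 0.5:
                continue                 # silence
            visible_total += 1           # preference falsification, counted as content
        else:
            visible_neg += discontented
            visible_total += 1
    return visible_neg / visible_total

for repression in (0.0, 0.3, 0.6, 0.9):
    print(f"repression={repression:.1f}  observed discontent ~ {observed_discontent(repression):.2f}")
# At repression=0 the estimate matches the 40% truth; by 0.9 it has collapsed to
# roughly 5% -- a rosy picture that would blindside the regime in a crisis.
```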
What’s more, adding additional (bad) data to the system doesn’t fix this “missing data” problem. GIGO remains an iron law of computing in this context, too.
But it gets better (or worse, I guess): Yang models a “crisis” scenario in which users stop self-censoring and start articulating their true views (because they’ve run out of fucks to give). This is the most dangerous moment for a dictator, and depending on how the dictatorship handles it, they either get another decade of rule, or they wake up with guillotines on their lawns.
But “crisis” is where AI performs the worst. Trained on the “status quo” data where users are continuously self-censoring and preference-falsifying, AI has no clue how to handle the unvarnished truth. Both its recommendations about what to censor and its summaries of public sentiment are the least accurate when crisis erupts.
But here’s an interesting wrinkle: Yang scraped a bunch of Chinese users’ posts from Twitter — which the Chinese government doesn’t get to censor (yet) or spy on (yet) — and fed them to the model. He hypothesized that when Chinese users post to American social media, they don’t self-censor or preference-falsify, so this data should help the model improve its accuracy.
He was right — the model got significantly better once it ingested data from Twitter than when it was working solely from Weibo posts. And Yang notes that dictatorships all over the world are widely understood to be scraping western/northern social media.
But even though Twitter data improved the model’s accuracy, it was still wildly inaccurate, compared to the same model trained on a full set of un-self-censored, un-falsified data. GIGO is not an option, it’s the law (of computing).
Writing about the study on Crooked Timber, Farrell notes that as the world fills up with “garbage and noise” (he invokes Philip K Dick’s delighted coinage “gubbish”), “approximately correct knowledge becomes the scarce and valuable resource.”
https://crookedtimber.org/2023/07/25/51610/
This “probably approximately correct knowledge” comes from humans, not LLMs or AI, and so “the social applications of machine learning in non-authoritarian societies are just as parasitic on these forms of human knowledge production as authoritarian governments.”
The Clarion Science Fiction and Fantasy Writers’ Workshop summer fundraiser is almost over! I am an alum, instructor and volunteer board member for this nonprofit workshop whose alums include Octavia Butler, Kim Stanley Robinson, Bruce Sterling, Nalo Hopkinson, Kameron Hurley, Nnedi Okorafor, Lucius Shepard, and Ted Chiang! Your donations will help us subsidize tuition for students, making Clarion — and sf/f — more accessible for all kinds of writers.
Libro.fm is the indie-bookstore-friendly, DRM-free audiobook alternative to Audible, the Amazon-owned monopolist that locks every book you buy to Amazon forever. When you buy a book on Libro, they share some of the purchase price with a local indie bookstore of your choosing (Libro is the best partner I have in selling my own DRM-free audiobooks!). As of today, Libro is even better, because it’s available in five new territories and currencies: Canada, the UK, the EU, Australia and New Zealand!
[Image ID: An altered image of the Nuremberg rally, with ranked lines of soldiers facing a towering figure in a many-ribboned soldier's coat. He wears a high-peaked cap with a microchip in place of insignia. His head has been replaced with the menacing red eye of HAL9000 from Stanley Kubrick's '2001: A Space Odyssey.' The sky behind him is filled with a 'code waterfall' from 'The Matrix.']
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
—
Raimond Spekking (modified) https://commons.wikimedia.org/wiki/File:Acer_Extensa_5220_-_Columbia_MB_06236-1N_-_Intel_Celeron_M_530_-_SLA2G_-_in_Socket_479-5029.jpg
CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en
—
Russian Airborne Troops (modified) https://commons.wikimedia.org/wiki/File:Vladislav_Achalov_at_the_Airborne_Troops_Day_in_Moscow_%E2%80%93_August_2,_2008.jpg
“Soldiers of Russia” Cultural Center (modified) https://commons.wikimedia.org/wiki/File:Col._Leonid_Khabarov_in_an_everyday_service_uniform.JPG
CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en
#pluralistic#habsburg ai#self censorship#henry farrell#digital dictatorships#machine learning#dictator's dilemma#eddie yang#preference falsification#political science#training bias#scholarship#spirals of delusion#algorithmic bias#ml#Fully automated data driven authoritarianism#authoritarianism#gigo#garbage in garbage out garbage back in#gigogbi#yuval noah harari#gubbish#pkd#philip k dick#phildickian
833 notes
·
View notes
Text
me clicking “not interested” on a post about a canon x canon ship involving my f/o knowing full well it doesn’t do anything
#tumblr is SLIGHTLY better for this bc of the filter but. sometimes people don’t tag their posts correctly which doesn’t help#sigh#self ship#self ship community#self shipping#selfship#f/o community#selfshipping#f/o#self insert#romantic f/o#fictional other#selfshipper#selfship problems#selfship meme#selfshipping meme#selfshipping community#but fr#i thought the algorithms were supposed to be smart smh#if you’re gonna take my data AT LEAST you should know what i don’t want to see!!#but nooooo it automatically goes- oh! you like this character!! well here’s art of them being in love with this other character!!#ugh
94 notes
·
View notes
Text
"the mainstream media isn't talking about this!"
is your perception of the mainstream media solely based on what your parents recount from fox or sky news? has enough time passed between the event and now for professional journalists to verify sources? are you reading beyond the headline and the lead? do you understand why journalists phrase things in the way that they do?
don't get me wrong, there are certainly cases where that statement is true, however, I routinely see it applied to events that I later read about in an honest-to-god physical newspaper
#side note: buy a newspaper!#having something that isn't manipulated specifically for you by an algorithm is a good experience#sure its curated#but the editor isn't mining your data
71 notes
·
View notes