#general purpose computing
Text
Autoenshittification
Tumblr media
Forget F1: the only car race that matters now is the race to turn your car into a digital extraction machine, a high-speed inkjet printer on wheels, stealing your private data as it picks your pocket. Your car’s digital infrastructure is a costly, dangerous nightmare — but for automakers in pursuit of postcapitalist utopia, it’s a dream they can’t give up on.
Your car is stuffed full of microchips, a fact the world came to appreciate after the pandemic struck and auto production ground to a halt due to chip shortages. Of course, that wasn’t the whole story: when the pandemic started, the automakers panicked and canceled their chip orders, only to immediately regret that decision and place new orders.
But it was too late: semiconductor production had taken a serious body-blow, and when Big Car placed its new chip orders, it went to the back of a long, slow-moving line. It was a catastrophic bungle: microchips are so integral to car production that a car is basically a computer network on wheels that you stick your fragile human body into and pray.
The car manufacturers got so desperate for chips that they started buying up washing machines for the microchips in them, extracting the chips and discarding the washing machines like some absurdo-dystopian cyberpunk walnut-shelling machine:
https://www.autoevolution.com/news/desperate-times-companies-buy-washing-machines-just-to-rip-out-the-chips-187033.html
These digital systems are a huge problem for the car companies. They are the underlying cause of a precipitous decline in car quality. From touch-based digital door-locks to networked sensors and cameras, every digital system in your car is a source of endless repair nightmares, costly recalls and cybersecurity vulnerabilities:
https://www.reuters.com/business/autos-transportation/quality-new-vehicles-us-declining-more-tech-use-study-shows-2023-06-22/
What’s more, drivers hate all the digital bullshit, from the janky touchscreens to the shitty, wildly insecure apps. Digital systems are drivers’ most significant point of dissatisfaction with the automakers’ products:
https://www.theverge.com/23801545/car-infotainment-customer-satisifaction-survey-jd-power
Even the automakers sorta-kinda admit that this is a problem. Back in 2020 when Massachusetts was having a Right-to-Repair ballot initiative, Big Car ran these unfuckingbelievable scare ads that basically said, “Your car spies on you so comprehensively that giving anyone else access to its systems will let murderers stalk you to your home and kill you”:
https://pluralistic.net/2020/09/03/rip-david-graeber/#rolling-surveillance-platforms
But even amid all the complaining about cars getting stuck in the Internet of Shit, there’s still not much discussion of why the car-makers are making their products less attractive, less reliable, less safe, and less resilient by stuffing them full of microchips. Are car execs just the latest generation of rubes who’ve been suckered by Silicon Valley bullshit and convinced that apps are a magic path to profitability?
Nope. Car execs are sophisticated businesspeople, and they’re surfing capitalism’s latest — and last — hot trend: dismantling capitalism itself.
Now, leftists have been predicting the death of capitalism since The Communist Manifesto, but even Marx and Engels warned us not to get too frisky: capitalism, they wrote, is endlessly creative, constantly reinventing itself, re-emerging from each crisis in a new form that is perfectly adapted to the post-crisis reality:
https://www.nytimes.com/2022/10/31/books/review/a-spectre-haunting-china-mieville.html
But capitalism has finally run out of gas. In his forthcoming book, Technofeudalism: What Killed Capitalism, Yanis Varoufakis proposes that capitalism has died — but it wasn’t replaced by socialism. Rather, capitalism has given way to feudalism:
https://www.penguin.co.uk/books/451795/technofeudalism-by-varoufakis-yanis/9781847927279
Under capitalism, capital is the prime mover. The people who own and mobilize capital — the capitalists — organize the economy and take the lion’s share of its returns. But it wasn’t always this way: for hundreds of years, European civilization was dominated by rents, not markets.
A “rent” is income that you get from owning something that other people need to produce value. Think of renting out a house you own: not only does the tenant pay you for the privilege of living there, you also get the benefit of rising property values, which are the result of the work that all the other homeowners, business owners, and residents do to make the neighborhood more valuable.
The first capitalists hated rent. They wanted to replace the “passive income” that landowners got from taxing their serfs’ harvest with active income from enclosing those lands and grazing sheep in order to get wool to feed to the new textile mills. They wanted active income — and lots of it.
Capitalist philosophers railed against rent. The “free market” of Adam Smith wasn’t a market that was free from regulation — it was a market free from rents. The reason Smith railed against monopolists is because he (correctly) understood that once a monopoly emerged, it would become a chokepoint through which a rentier could cream off the profits he considered the capitalist’s due:
https://locusmag.com/2021/03/cory-doctorow-free-markets/
Today, we live in a rentier’s paradise. People don’t aspire to create value — they aspire to capture it. In Survival of the Richest, Doug Rushkoff calls this “going meta”: don’t provide a service, just figure out a way to interpose yourself between the provider and the customer:
https://pluralistic.net/2022/09/13/collapse-porn/#collapse-porn
Don’t drive a cab, create Uber and extract value from every driver and rider. Better still: don’t found Uber, invest in Uber options and extract value from the people who invest in Uber. Even better, invest in derivatives of Uber options and extract value from people extracting value from people investing in Uber, who extract value from drivers and riders. Go meta.
This is your brain on the four-hour-work-week, passive income mind-virus. In Technofeudalism, Varoufakis deftly describes how the new “Cloud Capital” has created a new generation of rentiers, and how they have become the richest, most powerful people in human history.
Shopping at Amazon is like visiting a bustling city center full of stores — but each of those stores’ owners has to pay the majority of every sale to a feudal landlord, Emperor Jeff Bezos, who also decides which goods they can sell and where they must appear on the shelves. Amazon is full of capitalists, but it is not a capitalist enterprise. It’s a feudal one:
https://pluralistic.net/2022/11/28/enshittification/#relentless-payola
This is the reason that automakers are willing to enshittify their products so comprehensively: they were one of the first industries to decouple rents from profits. Recall that the reason that Big Car needed billions in bailouts in 2008 is that they’d reinvented themselves as loan-sharks who incidentally made cars, lending money to car-buyers and then “securitizing” the loans so they could be traded in the capital markets.
Even though this strategy brought the car companies to the brink of ruin, it paid off in the long run. The car makers got billions in public money, paid their execs massive bonuses, gave billions to shareholders in buybacks and dividends, smashed their unions, fucked their pensioned workers, and shipped jobs anywhere they could pollute and murder their workforce with impunity.
Car companies are on the forefront of postcapitalism, and they understand that digital is the key to rent-extraction. Remember when BMW announced that it was going to rent you the seatwarmer in your own fucking car?
https://pluralistic.net/2020/07/02/big-river/#beemers
Not to be outdone, Mercedes announced that they were going to rent you your car’s accelerator pedal, charging an extra $1200/year to unlock a fully functional acceleration curve:
https://www.theverge.com/2022/11/23/23474969/mercedes-car-subscription-faster-acceleration-feature-price
This is the urinary tract infection business model: without digitization, all your car’s value flowed in a healthy stream. But once the car-makers add semiconductors, each one of those features comes out in a painful, burning dribble, with every button on that fakakta touchscreen wired directly into your credit-card.
But it’s just for starters. Computers are malleable. The only computer we know how to make is the Turing Complete Von Neumann Machine, which can run every program we know how to write. Once they add networked computers to your car, the Car Lords can endlessly twiddle the knobs on the back end, finding new ways to extract value from you:
https://doctorow.medium.com/twiddler-1b5c9690cce6
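Here is a toy sketch of what that back-end twiddling looks like in practice: a feature the owner already paid for, gated behind an entitlement flag the vendor can flip at any time. Every name in it (the entitlement table, the VIN, the API shape) is invented for illustration, not taken from any real automaker's system.

```python
# A minimal sketch of back-end "twiddling": the vendor's servers, not the
# owner, decide which already-installed features work today. All names here
# are invented for illustration.

BACKEND_ENTITLEMENTS = {
    "VIN-1234567": {"seat_heater": True, "full_acceleration": False},
}

def feature_enabled(vin: str, feature: str) -> bool:
    """Ask the vendor's backend whether a paid-for component may be used."""
    return BACKEND_ENTITLEMENTS.get(vin, {}).get(feature, False)

def request_seat_heat(vin: str) -> str:
    if feature_enabled(vin, "seat_heater"):
        return "seat heater on"
    return "upsell: subscribe to unlock hardware you already own"

print(request_seat_heat("VIN-1234567"))                        # seat heater on
BACKEND_ENTITLEMENTS["VIN-1234567"]["seat_heater"] = False     # one knob-twiddle later
print(request_seat_heat("VIN-1234567"))                        # upsell message
```

The hardware never changes; only the vendor's database does, and that's the whole business model.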
That means that your car can track your every movement, and sell your location data to anyone and everyone, from marketers to bounty-hunters looking to collect fees for tracking down people who travel out of state for abortions to cops to foreign spies:
https://www.vice.com/en/article/n7enex/tool-shows-if-car-selling-data-privacy4cars-vehicle-privacy-report
Digitization supercharges financialization. It lets car-makers offer subprime auto-loans to desperate, poor people and then killswitch their cars if they miss a payment:
https://www.youtube.com/watch?v=4U2eDJnwz_s
Subprime lending for cars would be a terrible business without computers, but digitization makes it a great source of feudal rents. Car dealers can originate loans to people with teaser rates that quickly blow up into payments the dealer knows their customer can’t afford. Then they repo the car and sell it to another desperate person, and another, and another:
https://pluralistic.net/2022/07/27/boricua/#looking-for-the-joke-with-a-microscope
Digitization also opens up more exotic options. Some subprime cars have secondary control systems wired into their entertainment system: miss a payment and your car radio flips to full volume and bellows an unstoppable, unmutable stream of threats. Tesla does one better: your car will lock and immobilize itself, then blare its horn and back out of its parking spot when the repo man arrives:
https://tiremeetsroad.com/2021/03/18/tesla-allegedly-remotely-unlocks-model-3-owners-car-uses-smart-summon-to-help-repo-agent/
Digital feudalism hasn’t stopped innovating — it’s just stopped innovating good things. The digital device is an endless source of sadistic novelties, like the cellphones that disable your most-used app the first day you’re late on a payment, then work their way down the other apps you rely on for every day you’re late:
https://restofworld.org/2021/loans-that-hijack-your-phone-are-coming-to-india/
Usurers have always relied on this kind of imaginative intimidation. The loan-shark’s arm-breaker knows you’re never going to get off the hook; his goal is to intimidate you into paying his boss first, liquidating your house and your kid’s college fund and your wedding ring before you default and he throws you off a building.
Thanks to the malleability of computerized systems, digital arm-breakers have an endless array of options they can deploy to motivate you into paying them first, no matter what it costs you:
https://pluralistic.net/2021/04/02/innovation-unlocks-markets/#digital-arm-breakers
Car-makers are trailblazers in imaginative rent-extraction. Take VIN-locking: this is the practice of adding cheap microchips to engine components that communicate with the car’s overall network. After a new part is installed in your car, your car’s computer does a complex cryptographic handshake with the part that requires an unlock code provided by an authorized technician. If the code isn’t entered, the car refuses to use that part.
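The details of these handshakes are proprietary, but the shape is simple enough to sketch. In this toy version, a secret held only by the manufacturer's tooling derives the unlock code, so only an authorized technician can pair a part. Every key and identifier below is an assumption, and a real system might do the check on the manufacturer's servers or with asymmetric signatures instead.

```python
import hmac
import hashlib

# Toy VIN-lock pairing check (illustrative only). The "dealer secret" lives
# only in the manufacturer's tooling, so only an authorized technician can
# derive the unlock code the car demands before it will use a new part.
DEALER_SECRET = b"manufacturer-held pairing key"   # assumption

def unlock_code(vin: str, part_serial: str) -> str:
    """What the authorized technician's service tool computes."""
    msg = f"{vin}:{part_serial}".encode()
    return hmac.new(DEALER_SECRET, msg, hashlib.sha256).hexdigest()[:8]

def car_accepts_part(vin: str, part_serial: str, entered_code: str) -> bool:
    """What the car's computer checks before it will use the new part."""
    expected = unlock_code(vin, part_serial)
    return hmac.compare_digest(expected, entered_code)

vin, part = "1FTSW21P34ED12345", "INJ-998877"
print(car_accepts_part(vin, part, unlock_code(vin, part)))   # True: dealer install
print(car_accepts_part(vin, part, "00000000"))               # False: independent shop
```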
VIN-locking has exploded in popularity. It’s in your iPhone, preventing you from using refurb or third-party replacement parts:
https://doctorow.medium.com/apples-cement-overshoes-329856288d13
It’s in fuckin’ ventilators, which was a nightmare during lockdown as hospital techs nursed their precious ventilators along by swapping parts from dead systems into serviceable ones:
https://www.vice.com/en/article/3azv9b/why-repair-techs-are-hacking-ventilators-with-diy-dongles-from-poland
And of course, it’s in tractors, along with other forms of remote killswitch. Remember that feelgood story about John Deere bricking the looted Ukrainian tractors whose snitch-chips showed they’d been relocated to Russia?
https://doctorow.medium.com/about-those-kill-switched-ukrainian-tractors-bc93f471b9c8
That wasn’t a happy story — it was a cautionary tale. After all, John Deere now controls the majority of the world’s agricultural future, and they’ve boobytrapped those ubiquitous tractors with killswitches that can be activated by anyone who hacks, takes over, or suborns Deere or its dealerships.
Control over repair isn’t limited to gouging customers on parts and service. When a company gets to decide whether your device can be fixed, it can fuck you over in all kinds of ways. Back in 2019, Tim Apple told his shareholders to expect lower revenues because people were opting to fix their phones rather than replace them:
https://www.apple.com/newsroom/2019/01/letter-from-tim-cook-to-apple-investors/
By usurping your right to decide who fixes your phone, Apple gets to decide whether you can fix it, or whether you must replace it. Problem solved — and not just for Apple, but for car makers, tractor makers, ventilator makers and more. Apple leads on this, even ahead of Big Car, pioneering a “recycling” program that sees trade-in phones shredded so they can’t possibly be diverted from an e-waste dump and mined for parts:
https://www.vice.com/en/article/yp73jw/apple-recycling-iphones-macbooks
John Deere isn’t sleeping on this. They’ve figured out the valuable treasure they get to extract when they defeat Right-to-Repair: Deere singles out farmers who complain about its policies and refuses to repair their tractors, stranding them with six-figure, two-ton paperweights:
https://pluralistic.net/2022/05/31/dealers-choice/#be-a-shame-if-something-were-to-happen-to-it
The repair wars are just a skirmish in a vast, invisible fight that’s been waged for decades: the War On General-Purpose Computing, where tech companies use the law to make it illegal for you to reconfigure your devices so they serve you, rather than their shareholders:
https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/
The force behind this army is vast and grows larger every day. General purpose computers are antithetical to technofeudalism — all the rents extracted by technofeudalists would go away if others (tinkerers, co-ops, even capitalists!) were allowed to reconfigure our devices so they serve us.
You’ve probably noticed the skirmishes with inkjet printer makers, who can only force you to buy their ink at 20,000% markups if they can stop you from deciding how your printer is configured:
https://pluralistic.net/2022/08/07/inky-wretches/#epson-salty
But we’re also fighting against insulin pump makers, who want to turn people with diabetes into walking inkjet printers:
https://pluralistic.net/2022/06/10/loopers/#hp-ification
And companies that make powered wheelchairs:
https://pluralistic.net/2022/06/08/chair-ish/#r2r
These companies start with people who have the least agency and social power and wreck their lives, then work their way up the privilege gradient, coming for everyone else. It’s called the “shitty technology adoption curve”:
https://pluralistic.net/2022/08/21/great-taylors-ghost/#solidarity-or-bust
Technofeudalism is the public-private partnership from hell, emerging from a combination of state and private action. On the one hand, bailing out bankers and big business (rather than workers) after the 2008 crash and the covid lockdown decoupled income from profits. Companies that spent billions more than they earned were still wildly profitable, thanks to those public funds.
But there’s also a policy dimension here. Some of those rentiers’ billions were mobilized to both deconstruct antitrust law (allowing bigger and bigger companies and cartels) and to expand “IP” law, turning “IP” into a toolsuite for controlling the conduct of a firm’s competitors, critics and customers:
https://locusmag.com/2020/09/cory-doctorow-ip/
IP is key to understanding the rise of technofeudalism. The same malleability that allows companies to “twiddle” the knobs on their services and keep us on the hook as they reel us in would hypothetically allow us to countertwiddle, seizing the means of computation:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
The thing that stands between you and an alternative app store, an interoperable social media network that you can escape to while continuing to message the friends you left behind, or a car that anyone can fix or unlock features for is IP, not technology. Under capitalism, that technology would already exist, because capitalists have no loyalty to one another and view each other’s margins as their own opportunities.
But under technofeudalism, control comes from rents (owning things), not profits (selling things). The capitalist who wants to participate in your iPhone’s “ecosystem” has to make apps and submit them to Apple, along with 30% of their lifetime revenues — they don’t get to sell you a jailbreaking kit that lets you choose their app store.
Rent-seeking technology has a holy grail: control over “ring zero” — the ability to compel you to configure your computer to a feudalist’s specifications, and to verify that you haven’t altered your computer after it came into your possession:
https://pluralistic.net/2022/01/30/ring-minus-one/#drm-political-economy
For more than two decades, various would-be feudal lords and their court sorcerers have been pitching ways of doing this, of varying degrees of outlandishness.
At core, here’s what they envision: inside your computer, they will nest another computer, one that is designed to run a very simple set of programs, none of which can be altered once it leaves the factory. This computer — either a whole separate chip called a “Trusted Platform Module” or a region of your main processor called a secure enclave — can tally observations about your computer: which operating system, modules and programs it’s running.
Then it can cryptographically “sign” these observations, proving that they were made by a secure chip and not by something you could have modified. Then you can send this signed “attestation” to someone else, who can use it to determine how your computer is configured and thus whether to trust it. This is called “remote attestation.”
There are some cool things you can do with remote attestation: for example, two strangers playing a networked video game together can use attestations to make sure neither is running any cheat modules. Or you could require your cloud computing provider to produce attestations showing that it isn’t stealing your data from the server you’re renting. Or if you suspect that your computer has been infected with malware, you can connect to someone else and send them an attestation that they can use to figure out whether you should trust it.
Today, there’s a cool remote attestation technology called “PrivacyPass” that replaces CAPTCHAs by having you prove to your own device that you are a human. When a server wants to make sure you’re a person, it sends a random number to your device, which signs that number along with its promise that it is acting on behalf of a human being, and sends it back. CAPTCHAs are all kinds of bad — bad for accessibility and privacy — and this is really great.
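Stripped of the cryptographic niceties, that exchange looks something like the sketch below. It's a drastically simplified stand-in, not the real Privacy Pass protocol: deployed versions use blinded tokens precisely so the server can't link your visits together, and the shared attester key here is a pure assumption.

```python
import hashlib
import hmac
import secrets

# Drastically simplified CAPTCHA-replacement flow. The server issues a random
# challenge; the device's attester signs a claim that a human is present,
# bound to that challenge; the server verifies instead of showing a CAPTCHA.
ATTESTER_KEY = b"attester key (assumed known to the verifier)"

def server_issue_challenge() -> str:
    return secrets.token_hex(16)

def device_respond(challenge: str) -> dict:
    claim = f"{challenge}|human=true"
    tag = hmac.new(ATTESTER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def server_check(challenge: str, response: dict) -> bool:
    expected = hmac.new(ATTESTER_KEY, response["claim"].encode(),
                        hashlib.sha256).hexdigest()
    bound_to_this_request = response["claim"].startswith(challenge + "|")
    return bound_to_this_request and hmac.compare_digest(expected, response["tag"])

challenge = server_issue_challenge()
print(server_check(challenge, device_respond(challenge)))   # True: no CAPTCHA shown
print(server_check(server_issue_challenge(),
                   device_respond(challenge)))              # False: stale response
```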
But the billions that have been thrown at remote attestation over the decades are only incidentally about solving CAPTCHAs or verifying your cloud server. The holy grail here is being able to make sure that you’re not running an ad-blocker. It’s being able to remotely verify that you haven’t disabled the bossware your employer requires. It’s the power to block someone from opening an Office365 doc with LibreOffice. It’s your boss’s ability to ensure that you haven’t modified your messaging client to disable disappearing messages before he sends you an auto-destructing memo ordering you to break the law.
And there’s a new remote attestation technology making the rounds: Google’s Web Environment Integrity, which will leverage Google’s dominance over browsers to allow websites to block users who run ad-blockers:
https://github.com/RupertBenWiser/Web-Environment-Integrity
There’s plenty else WEI can do (it would make detecting ad-fraud much easier), but for every legitimate use, there are a hundred ways this could be abused. It’s a technology purpose-built to allow rent extraction by stripping us of our right to technological self-determination.
Releasing a technology like this into a world where companies are willing to make their products less reliable, less attractive, less safe and less resilient in pursuit of rents is incredibly reckless and shortsighted. You want unauthorized bread? This is how you get Unauthorized Bread:
https://arstechnica.com/gaming/2020/01/unauthorized-bread-a-near-future-tale-of-refugees-and-sinister-iot-appliances/amp/
Tumblr media
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon
Tumblr media
[Image ID: The interior of a luxury car. There is a dagger protruding from the steering wheel. The entertainment console has been replaced by the text 'You wouldn't download a car,' in MPAA scare-ad font. Outside of the windscreen looms the Matrix waterfall effect. Visible in the rear- and side-view mirror is the driver: the figure from Munch's 'Scream.' The screen behind the steering-wheel has been replaced by the menacing red eye of HAL9000 from Stanley Kubrick's '2001: A Space Odyssey.']
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
4K notes · View notes
elminsters · 1 month
Text
i love coming on here and finding out everyone lost their damn mind while i slept
20 notes · View notes
doctorwhoisadhd · 15 days
Text
there's a certain quality the harmonies of like... early to mid 2000s alt rock has. which i am obsessed with... like i wanna do that. i NEED to figure out how to write harmonies that sound like that
#ari opinion hour#i sort of understand it but not necessarily well enough to do it on command#i think i sort of achieved the sound of it with my blaseball winter exchange song i did for snow but specifically only in the very last bit#like only with the 'im not alive anymore' part#(which sidenote i wish id had the second half faster + w more drive but its not like that was like a full recording which i could do)#i think i just need my music to have more teeth in general cause it scratches an itch that i think i must have developed due to some aspect#of music school. its probably my dissatisfaction with the attitudes in the classical world#<- which understand i say that in the same way that like my jazz prof does. the classical world doesnt have enough teeth nor enough#understanding of the way in which music is like. another art. and art needs to be able to have teeth and use elements normally regarded as#''undesirable'' on purpose because art is there to make you feel emotions and not just the positive ones and not just sadness or anger in#terms of the negative ones#art is there to make u feel ALL extant emotions and that includes boredom disgust fear jealousy pity cowardice apathy overwhelmedness etc#also the classical world i find often forgets what the word ''play'' means#i am of the opinion that perfection is a waste of time if i wanted perfect i'd ask a computer to do it for me. i want real#anyway. i forgot what this post was even about lol point is i need to figure out how to write harmonies that have that soaring quality that#like. you can hear it in like helena by mcr and wake me up by evanescence and stuff. and frankly most of the songs on three cheers for swee#revenge which i am listening to now for the first time. i need to learn more about this stuff maybe ill listen to the evanescence album tha#song is from next.#or something i should really be working on my essay but theres no way i wont have it done in time which is good i think i just mostly have#to worry about sources and stuff but even that should be relatively easy i think
3 notes · View notes
cyberspacebear · 9 months
Text
sometimes it hurts so bad knowing the internet will never be young again
that it will never feel new and bright and technological again
9 notes · View notes
aurosoul · 1 year
Text
I had a dream that I got my Magic Leap headset and then took it to a forest to create magical crystals and rainbows in the streams there and then I took it to the beach to decorate the sand with glowing shells and stars!!!!!!! and then woke up to realize I’ll soon be able to do these things FOR REAL!!!!!!!
13 notes · View notes
taupewolfy · 4 months
Text
amazed at how terribly i seem to understand coding. literally picked up several different languages over the course of many years and as soon as i walk away from the pc and come back i've forgotten it all.
2 notes · View notes
literaryhistories · 2 years
Text
am i crazy or does “since this is your expertise I’d expect a more mature literary discussion” sound unnecessarily mean? they could’ve just said “you should expand this part of analysis” and it would serve the same purpose?
25 notes · View notes
riverofrainbows · 1 year
Text
Ok so i finally watched part two of the library episode with River and the vashta nerada and all that (spoilers ahead?)
And oh boy Moffat that was foul. Rivers ending. Just absolutely pathetic. He put her. In the white dress, the dead wife white summer dress. And then put her on a green field and had some rando team members there too (also in white clothes??). And then. And then. He made her a mum. Can you get more heteronormative, more misogynistic.
He made her a mum to the little girl (and i did find the idea that she could tell her some of her stories to her very cute), but then also to the two made up little kids Donna had while in there, the apparently made up little kids because every actual person was saved you know. He made her a mum to three kids (two of those made up) for the rest of her existence.
And her whole character was all about adventure, independence, research. But no, she could not have a proper ending without being put into traditional gender norms. He probably only let her have as much fun as she did later because he already had her ending secured in the bag.
That is a fucking disgrace to her character, and frankly embarrassing for Moffat. Like he should be embarrassed, that the only peaceful ending he could imagine was rounding up any children lying about and stuffing them and her in a traditional nuclear family for eternity, nevermind that two of those children were actually imaginary and one was a few hundred years old, and she was never really the traditional mum type (confined to her home and offspring).
She also never stopped smiling in the epilogue, implying that she was truly happy then? As if that would fulfill her? She just died, she can show some bitter sweet emotion at least.
15 notes · View notes
crystalis · 7 months
Text
i still wish sylvando's stats were distributed differently in dq11 or that he learned some fire spells .. he has like 330 magical might at level 99 or something which is pointless because the strongest spell he learns is Swoosh and the output for that caps at like 200 magical might..
itd have made more sense for him to have less magical might and to put more into his hp, so it'd at least be closer to Jade's or something rather than having the same hp as Erik at lvl 99. oorrrr he coulddve had some fire spells like frizz/frizzle/kafrizz which would make sense because he can breathe fire..
#i feel like frizz line would make more sense than the sizz line because he already learns woosh/swoosh so sizz line would be redundant#i wish i was like savvy with computers and game modding so i could mod dq11 just for fun and do random things u___u#that would scratch an itch in my brain#i would love to make jasper a party member and give him all the moves i think he would have.....#and rework rab's skill tree so that claws arent so weak .. or just buff claws in general#bc why the heck do they have the same attack as like wands#oh yeah another move that would love to change is Party Pooper. the useless spear skill that serena and jade get#it costs 16 skill points to learn and its weaker than Helichopter because it deals 90% damage okay#first of all serena learns swoosh/kaswoosh so like its already useless for her and thennn jade learns vacuum smash and like 20 other moves#that also hit groups of enemies so like. it would make sense if Party Pooper was a very early game skill so you learn it immediately but it#like takes a while to learn it before you have enough skill points.. i feel like that couldve just been handled differently#like either buff Party Pooper to be stronger or make it like the very first spear skill you can learn or something so it has some purpose#not that every attack needs an optimal purpose but you know.. its fun to think about#oh i would also let Serena learn Crushed Ice in Act 2. u____u spears are so freakin useless on serena in Act 2 and it would at least give#them some use vs Tatsunaga because it's weak to ice and that'd be cool#instead of Be Like Water#or does she have Counter Wait in Act 2 i dont remember#why must her strongest spear move be Thunder Thrust until Act 3
2 notes · View notes
Text
Forcing your computer to rat you out
Tumblr media
Powerful people imprisoned by the cluelessness of their own isolation, locked up with their own motivated reasoning: “It’s impossible to get a CEO to understand something when his quarterly earnings call depends on him not understanding it.”
Take Mark Zuckerberg. Zuckerberg insists that anyone who wants to use a pseudonym online is “two-faced,” engaged in dishonest social behavior. The Zuckerberg Doctrine claims that forcing people to use their own names is a way to ensure civility. This is an idea so radioactively wrong, it can be spotted from orbit.
From the very beginning, social scientists (both inside and outside Facebook) told Zuckerberg that he was wrong. People have lots of reasons to hide their identities online, both good and bad, but a Real Names Policy affects different people differently:
https://memex.craphound.com/2018/01/22/social-scientists-have-warned-zuck-all-along-that-the-facebook-theory-of-interaction-would-make-people-angry-and-miserable/
For marginalized and at-risk people, there are plenty of reasons to want to have more than one online identity — say, because you are a #MeToo whistleblower hoping that Harvey Weinstein won’t sic his ex-Mossad mercenaries on you:
https://www.newyorker.com/news/news-desk/harvey-weinsteins-army-of-spies
Or maybe you’re a Rohingya Muslim hoping to avoid the genocidal attentions of the troll army that used Facebook to organize — under their real, legal names — to rape and murder you and everyone you love:
https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/
But even if no one is looking to destroy your life or kill you and your family, there are plenty of good reasons to present different facets of your identity to different people. No one talks to their lover, their boss and their toddler in exactly the same way, or reveals the same facts about their lives to those people. Maintaining different facets to your identity is normal and healthy — and the opposite, presenting the same face to everyone in your life, is a wildly terrible way to live.
None of this is controversial among social scientists, nor is it hard to grasp. But Zuckerberg stubbornly stuck to this anonymity-breeds-incivility doctrine, even as dictators used the fact that Facebook forced dissidents to use their real names to retain power through the threat (and reality) of arrest and torture:
https://pluralistic.net/2023/01/25/nationalize-moderna/#hun-sen
Why did Zuck cling to this dangerous and obvious fallacy? Because the more he could collapse your identity into one unitary whole, the better he could target you with ads. Truly, it is impossible to get a billionaire to understand something when his mega-yacht depends on his not understanding it.
This motivated reasoning ripples through all of Silicon Valley’s top brass, producing what Anil Dash calls “VC QAnon,” the collection of conspiratorial, debunked and absurd beliefs embraced by powerful people who hold the digital lives of billions of us in their quivering grasp:
https://www.anildash.com/2023/07/07/vc-qanon/
These fallacy-ridden autocrats like to disguise their demands as observations, as though wanting something to be true was the same as making it true. Think of when Eric Schmidt — then the CEO of Google — dismissed online privacy concerns, stating “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place”:
https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-dismisses-privacy
Schmidt was echoing the sentiments of his old co-conspirator, Sun Microsystems CEO Scott McNealy: “You have zero privacy anyway. Get over it”:
https://www.wired.com/1999/01/sun-on-privacy-get-over-it/
Both men knew better. Schmidt, in particular, is very jealous of his own privacy. When Cnet reporters used Google to uncover and publish public (but intimate and personal) facts about Schmidt, Schmidt ordered Google PR to ignore all future requests for comment from Cnet reporters:
https://www.cnet.com/tech/tech-industry/how-cnet-got-banned-by-google/
(Like everything else he does, Elon Musk’s policy of responding to media questions about Twitter with a poop emoji is just him copying things other people thought up, making them worse, and taking credit for them:)
https://www.theverge.com/23815634/tesla-elon-musk-origin-founder-twitter-land-of-the-giants
Schmidt’s actions do not reflect an attitude of “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” Rather, they are the normal response that we all have to getting doxed.
When Schmidt and McNealy and Zuck tell us that we don’t have privacy, or we don’t want privacy, or that privacy is bad for us, they’re disguising a demand as an observation. “Privacy is dead” actually means, “When privacy is dead, I will be richer than you can imagine, so stop trying to save it, goddamnit.”
We are all prone to believing our own bullshit, but when a tech baron gets high on his own supply, his mental contortions have broad implications for all of us. A couple years after Schmidt’s anti-privacy manifesto, Google launched Google Plus, a social network where everyone was required to use their “real name.”
This decision — justified as a means of ensuring civility, but really a transparent ruse to improve ad targeting — kicked off the Nym Wars:
https://epeus.blogspot.com/2011/08/google-plus-must-stop-this-identity.html
One of the best documents to come out of that ugly conflict is “Falsehoods Programmers Believe About Names,” a profound and surprising enumeration of all the ways that the experiences of tech bros in Silicon Valley are the real edge-cases, unreflective of the reality of billions of their users:
https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/
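Here's a tiny worked example of the genre, with invented rules: a "name validation" function that quietly encodes several of those falsehoods, plus a few perfectly ordinary names it rejects.

```python
import re

# A "validate the user's name" function that bakes in several falsehoods from
# that catalog. Every rule below looks reasonable to someone whose own social
# circle happens to fit it.
def validate_name(name: str) -> bool:
    parts = name.split()
    return (
        len(parts) == 2                                            # exactly first + last
        and all(re.fullmatch(r"[A-Za-z'-]+", p) for p in parts)    # ASCII letters only
        and all(p[0].isupper() for p in parts)                     # always capitalized
        and len(name) <= 40                                        # names are short
    )

for name in ["Ada Lovelace", "Teller", "María José Carreño Quiñones", "毛泽东"]:
    print(name, validate_name(name))
# Only "Ada Lovelace" passes; the other three perfectly ordinary names fail.
```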
This, in turn, spawned a whole genre of programmer-fallacy catalogs, falsehoods programmers believe about time, currency, birthdays, timezones, email addresses, national borders, nations, biometrics, gender, language, alphabets, phone numbers, addresses, systems of measurement, and, of course, families:
https://github.com/kdeldycke/awesome-falsehood
But humility is in short supply in tech. It’s impossible to get a programmer to understand something when their boss requires them not to understand it. A programmer will happily insist that ordering you to remove your “mask” is for your own good — and not even notice that they’re taking your skin off with it.
There are so many ways that tech executives could improve their profits if only we would abandon our stubborn attachment to being so goddamned complicated. Think of Netflix and its anti-password-sharing holy war, which is really a demand that we redefine “family” to be legible and profitable for Netflix:
https://pluralistic.net/2023/02/02/nonbinary-families/#red-envelopes
But despite the entreaties of tech companies to collapse our identities, our families, and our online lives into streamlined, computably hard-edged shapes that fit neatly into their database structures, we continue to live fuzzy, complicated lives that only glancingly resemble those of the executives seeking to shape them.
Now, the rich, powerful people making these demands don’t plan on being constrained by them. They are conservatives, in the tradition of #FrankWilhoit, believers in a system of “in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect”:
https://crookedtimber.org/2018/03/21/liberals-against-progressives/#comment-729288
As with Schmidt’s desire to spy on you from asshole to appetite for his own personal gain, and his violent aversion to having his own personal life made public, the tech millionaires and billionaires who made their fortune from the flexibility of general purpose computers would like to end that flexibility. They insist that the time for general purpose computers has passed, and that today, “consumers” crave the simplicity of appliances:
https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/
It is in the War On General Purpose Computing that we find the cheapest and flimsiest rhetoric. Companies like Apple — and their apologists — insist that no one wants to use third-party app stores, or seek out independent repair depots — and then spend millions to make sure that it’s illegal to jailbreak your phone or get it fixed outside of their own official channel:
https://doctorow.medium.com/apples-cement-overshoes-329856288d13
The cognitive dissonance of “no one wants this,” and “we must make it illegal to get this” is powerful, but the motivated reasoning is more powerful still. It is impossible to get Tim Cook to understand something when his $49 million paycheck depends on him not understanding it.
The War on General Purpose Computing has been underway for decades. Computers, like the people who use them, stubbornly insist on being reality-based, and the reality of computers is that they are general purpose. Every computer is a Turing complete, universal Von Neumann machine, which means that it can run every valid program. There is no way to get a computer to be almost Turing Complete, only capable of running programs that don’t upset your shareholders’ fragile emotional state.
There is no such thing as a printer that will only run the “reject third-party ink” program. There is no such thing as a phone that will only run the “reject third-party apps” program. There are only laws, like Section 1201 of the Digital Millennium Copyright Act, that make writing and distributing those programs a felony punishable by a five-year prison sentence and a $500,000 fine (for a first offense).
That is to say, the War On General Purpose Computing is only incidentally a technical fight: it is primarily a legal fight. When Apple says, “You can’t install a third party app store on your phone,” what they mean is, “it’s illegal to install that third party app store.” It’s not a technical countermeasure that stands between you and technological self-determination, it’s a legal doctrine we can call “felony contempt of business model”:
https://locusmag.com/2020/09/cory-doctorow-ip/
But the mighty US government will not step in to protect a company’s business model unless it at least gestures towards the technical. To invoke DMCA 1201, a company must first add the thinnest skin of digital rights management to their product. Since 1201 makes removing DRM illegal, a company can use this molecule-thick scrim of DRM to felonize any activity that the DRM prevents.
More than 20 years ago, technologists started to tinker with ways to combine the legal and technical to tame the wild general purpose computer. Starting with Microsoft’s Palladium project, they theorized a new “Secure Computing” model for allowing companies to reach into your computer long after you had paid for it and brought it home, in order to discipline you for using it in ways that undermined its shareholders’ interest.
Secure Computing began with the idea of shipping every computer with two CPUs. The first one was the normal CPU, the one you interacted with when you booted it up, loaded your OS, and ran programs. The second CPU would be a Trusted Platform Module, a brute-simple system-on-a-chip designed to be off-limits to modification, even by its owner (that is, you).
The TPM would ship with a limited suite of simple programs it could run, each thoroughly audited for bugs, as well as secret cryptographic signing keys that you were not permitted to extract. The original plan called for some truly exotic physical security measures for that TPM, like an acid-filled cavity that would melt the chip if you tried to decap it or run it through an electron-tunneling microscope:
https://pluralistic.net/2020/12/05/trusting-trust/#thompsons-devil
This second computer represented a crack in the otherwise perfectly smooth wall of a computer’s general purposeness; and Trusted Computing proposed to hammer a piton into that crack and use it to anchor a whole superstructure that could observe — and limit — the activity of your computer.
This would start with observation: the TPM would observe every step of your computer’s boot sequence, creating cryptographic hashes of each block of code as it loaded and executed. Each stage of the boot-up could be compared to “known good” versions of those programs. If your computer did something unexpected, the TPM could halt it in its tracks, blocking the boot cycle.
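In code, that observation step is just a running hash, compared against a value recorded when the known-good image was blessed. The sketch below uses invented stage names and a bare SHA-256 chain in place of a real TPM's PCR registers, but the logic is the same: a change anywhere in the chain changes the final measurement.

```python
import hashlib

# Measured boot as a running hash: each stage is folded into a register (like
# a TPM PCR extend), and the final value is compared against a "golden"
# measurement recorded when the approved image was built. Stage names and the
# golden value are invented for illustration.

def extend(register: bytes, stage_code: bytes) -> bytes:
    """PCR-style extend: new = H(old || H(stage))."""
    return hashlib.sha256(register + hashlib.sha256(stage_code).digest()).digest()

def measure_boot(stages: list) -> bytes:
    register = b"\x00" * 32
    for code in stages:
        register = extend(register, code)
    return register

CLEAN_BOOT = [b"firmware v3", b"bootloader v7", b"vendor kernel 6.1"]
GOLDEN = measure_boot(CLEAN_BOOT)   # recorded when the image was approved

def enforce(stages: list) -> str:
    if measure_boot(stages) == GOLDEN:
        return "boot continues"
    return "halt: measurement does not match the approved image"

print(enforce(CLEAN_BOOT))                                          # boot continues
print(enforce([b"firmware v3", b"bootkit", b"vendor kernel 6.1"]))  # halt
```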
What kind of unexpected things do computers do during their boot cycle? Well, if your computer is infected with malware, it might load poisoned versions of its operating system. Once your OS is poisoned, it’s very hard to detect its malicious conduct, since normal antivirus programs rely on the OS to faithfully report what your computer is doing. When the AV program asks the OS to tell it which programs are running, or which files are on the drive, it has no choice but to trust the OS’s response. When the OS is compromised, it can feed a stream of lies to users’ programs, assuring these apps that everything is fine.
That’s a very beneficial use for a TPM, but there’s a sinister flipside: the TPM can also watch your boot sequence to make sure that there aren’t beneficial modifications present in your operating system. If you modify your OS to let you do things the manufacturer wants to prevent — like loading apps from a third-party app-store — the TPM can spot this and block it.
Now, these beneficial and sinister uses can be teased apart. When the Palladium team first presented its research, my colleague Seth Schoen proposed an “owner override”: a modification of Trusted Computing that would let the computer’s owner override the TPM:
https://web.archive.org/web/20021004125515/http://vitanuova.loyalty.org/2002-07-05.html
This override would introduce its own risks, of course. A user who was tricked into overriding the TPM might expose themselves to malicious software, which could harm that user, as well as attacking other computers on the user’s network and the other users whose data were on the compromised computer’s drive.
But an override would also provide serious benefits: it would rule out the monopolistic abuse of a TPM to force users to run malicious code that the manufacturer insisted on — code that prevented the user from doing things that benefited the user, even if it harmed the manufacturer’s shareholders. For example, with owner override, Microsoft couldn’t force you to use its official MS Office programs rather than third-party compatible programs like Apple’s iWork or Google Docs or LibreOffice.
Owner override also completely changed the calculus for another, even more dangerous part of Trusted Computing: remote attestation.
Remote Attestation is a way for third parties to request reliable, cryptographically secured assurances about which operating system and programs your computer is running. In Remote Attestation, the TPM in your computer observes every stage of your computer’s boot, gathers information about all the programs you’re running, and cryptographically signs them, using the signing keys the manufacturer installed during fabrication.
You can send this “attestation” to other people on the internet. If they trust that your computer’s TPM is truly secure, then they know that you have sent them a true picture of your computer’s workings (the actual protocol is a little more complicated and involves the remote party sending you a random number to cryptographically hash with the attestation, to prevent out-of-date attestations).
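That parenthetical is worth spelling out, because the random number does real work. Here's a minimal sketch, using an HMAC with a shared key as a stand-in for the TPM's manufacturer-certified signing keys (real attestations use asymmetric keys and certificate chains), and an invented "state" string in place of real measurements:

```python
import hashlib
import hmac
import secrets

# The nonce spelled out: the verifier supplies a fresh random number, the
# device folds it into the signed attestation, and an old attestation can't
# be replayed. The key and "state" string are assumptions for illustration.
TPM_KEY = b"manufacturer-installed signing key"
STATE = "bootloader v7 | vendor OS 14.1 | no debugger attached"

def device_attest(nonce: str, state: str = STATE) -> dict:
    body = hashlib.sha256(f"{nonce}|{state}".encode()).hexdigest()
    sig = hmac.new(TPM_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"nonce": nonce, "state": state, "sig": sig}

def verifier_check(sent_nonce: str, att: dict) -> bool:
    body = hashlib.sha256(f"{att['nonce']}|{att['state']}".encode()).hexdigest()
    expected = hmac.new(TPM_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"]) and att["nonce"] == sent_nonce

nonce = secrets.token_hex(16)
print(verifier_check(nonce, device_attest(nonce)))   # True: fresh, correctly signed
old = device_attest(secrets.token_hex(16))           # signed against an older nonce
print(verifier_check(nonce, old))                    # False: replay is rejected
```

Without the nonce check, a compromised machine could save one clean attestation from before its infection and replay it forever.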
Now, this is also potentially beneficial. If you want to make sure that your technologically unsophisticated friend is running an uncompromised computer before you transmit sensitive data to it, you can ask them for an attestation that will tell you whether they’ve been infected with malware.
But it’s also potentially very sinister. Your government can require all the computers in its borders to send a daily attestation to confirm that you’re still running the mandatory spyware. Your abusive spouse — or abusive boss — can do the same for their own disciplinary technologies. Such a tool could prevent you from connecting to a service using a VPN, and make it impossible to use Tor Browser to protect your privacy when interacting with someone who wishes you harm.
The thing is, it’s completely normal and good for computers to lie to other computers on behalf of their owners. Like, if your IoT ebike’s manufacturer goes out of business and all their bikes get bricked because they can no longer talk to their servers, you can run an app that tricks the bike into thinking that it’s still talking to the mothership:
https://nltimes.nl/2023/07/15/alternative-app-can-unlock-vanmoof-bikes-popular-amid-bankruptcy-fears
Or if you’re connecting to a webserver that tries to track you by fingerprinting you based on your computer’s RAM, screen size, fonts, etc, you can order your browser to send random data about this stuff:
https://jshelter.org/fingerprinting/
Or if you’re connecting to a site that wants to track you and nonconsensually cram ads into your eyeballs, you can run an adblocker that doesn’t show you the ads, but tells the site that it did:
https://www.eff.org/deeplinks/2019/07/adblocking-how-about-nah
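All three of those are the same maneuver: the computer answers on its owner's terms, not the asker's. Here's the fingerprinting countermeasure reduced to a toy sketch (the real thing lives inside a browser, and these attribute names and value pools are invented): give plausible answers, just never the same stable set twice.

```python
import random

# Toy fingerprint scrambler. Every session gets plausible answers, but never
# the same stable, trackable set. Attribute names and value pools are invented.
PLAUSIBLE = {
    "screen": ["1920x1080", "2560x1440", "1366x768", "1440x900"],
    "timezone": ["UTC", "Europe/Berlin", "America/New_York"],
    "installed_fonts": [str(n) for n in range(40, 200, 7)],
    "hardware_concurrency": ["4", "8", "12", "16"],
}

def fingerprint_answers() -> dict:
    """What gets reported this session: consistent-looking, but randomized."""
    rng = random.SystemRandom()
    return {attribute: rng.choice(values) for attribute, values in PLAUSIBLE.items()}

print(fingerprint_answers())   # different every run, so visits can't be
print(fingerprint_answers())   # stitched together into one profile
```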
Owner override leaves some of the beneficial uses of remote attestation intact. If you’re asking a friend to remotely confirm that your computer is secure, you’re not going to use an override to send them bad data about your computer’s configuration.
And owner override also sweeps all of the malicious uses of remote attestation off the board. With owner override, you can tell any lie about your computer to a webserver, a site, your boss, your abusive spouse, or your government, and they can’t spot the lie.
But owner override also eliminates some beneficial uses of remote attestation. For example, owner override rules out remote attestation as a way for strangers to play multiplayer video games while confirming that none of them are using cheat programs (like aimhack). It also means that you can’t use remote attestation to verify the configuration of a cloud server you’re renting in order to assure yourself that it’s not stealing your data or serving malware to your users.
This is a tradeoff, and it’s a tradeoff that’s similar to lots of other tradeoffs we make online, between the freedom to do something good and the freedom to do something bad. Participating anonymously, contributing to free software, distributing penetration testing tools, or providing a speech platform that’s open to the public all represent the same tradeoff.
We have lots of experience with making the tradeoff in favor of restrictions rather than freedom: powerful bad actors are happy to attach their names to their cruel speech and incitement to violence. Their victims are silenced for fear of that retaliation.
When we tell security researchers they can’t disclose defects in software without the manufacturer’s permission, the manufacturers use this as a club to silence their critics, not as a way to ensure orderly updates.
When we let corporations decide who is allowed to speak, they act with a mixture of carelessness and self-interest, becoming off-the-books deputies of authoritarian regimes and corrupt, powerful elites.
Alas, we made the wrong tradeoff with Trusted Computing. For the past twenty years, Trusted Computing has been creeping into our devices, albeit in somewhat denatured form. The original vision of acid-filled secondary processors has been replaced with less exotic (and expensive) alternatives, like “secure enclaves.” With a secure enclave, the manufacturer saves on the expense of installing a whole second computer, and instead, they draw a notional rectangle around a region of your computer’s main chip and try really hard to make sure that it can only perform a very constrained set of tasks.
This gives us the worst of all worlds. When secure enclaves are compromised, we not only lose the benefit of cryptographic certainty, knowing for sure that our computers are only booting up trusted, unaltered versions of the OS, but those compromised enclaves run malicious software that is essentially impossible to detect or remove:
https://pluralistic.net/2022/07/28/descartes-was-an-optimist/#uh-oh
But while Trusted Computing has wormed its way into boot-restrictions — preventing you from jailbreaking your computer so it will run the OS and apps of your choosing — there’s been very little work on remote attestation…until now.
Web Environment Integrity is Google’s proposal to integrate remote attestation into everyday web-browsing. The idea is to allow web-servers to verify what OS, extensions, browser, and add-ons your computer is using before the server will communicate with you:
https://github.com/RupertBenWiser/Web-Environment-Integrity/blob/main/explainer.md
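The explainer is deliberately vague about mechanism, but the shape of the thing is easy to sketch from the website's side: demand a token signed by an attester you trust, inspect the environment it describes, and refuse service if you don't like the answer. Everything below (the token format, the attester key, the policy) is an assumption made for illustration, not Google's actual design.

```python
import hashlib
import hmac
import json

# WEI-style gatekeeping seen from the website's side: require a token signed
# by a trusted "attester," inspect the environment it describes, and refuse
# to serve anyone whose environment is "unapproved." All details are invented.
ATTESTER_KEY = b"key of an attester the website trusts"

def attester_token(environment: dict) -> dict:
    payload = json.dumps(environment, sort_keys=True)
    sig = hmac.new(ATTESTER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def serve(token: dict) -> str:
    expected = hmac.new(ATTESTER_KEY, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return "403: environment not attested"
    env = json.loads(token["payload"])
    if env.get("extensions_modify_pages"):   # i.e., an ad blocker is present
        return "403: unapproved browser environment"
    return "200: here is the page, ads and all"

stock = attester_token({"browser": "BigCo Browser 126", "extensions_modify_pages": False})
blocked = attester_token({"browser": "BigCo Browser 126", "extensions_modify_pages": True})
print(serve(stock))     # 200: here is the page, ads and all
print(serve(blocked))   # 403: unapproved browser environment
```

Note that the site never has to explain why an environment is "unapproved": the policy lives entirely on the server, and your only recourse is to run the blessed configuration.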
Even by the thin standards of the remote attestation imaginaries, there are precious few beneficial uses for this. The googlers behind the proposal have a couple of laughable suggestions, like, maybe if ad-supported sites can comprehensively refuse to serve ad-blocking browsers, they will invest the extra profits in making things you like. Or: letting websites block scriptable browsers will make it harder for bad people to auto-post fake reviews and comments, giving users more assurances about the products they buy.
But foundationally, WEI is about compelling you to disclose true facts about yourself to people who you want to keep those facts from. It is a Real Names Policy for your browser. Google wants to add a new capability to the internet: the ability of people who have the power to force you to tell them things to know for sure that you’re not lying.
The fact that the authors assume this will be beneficial is just another “falsehood programmers believe”: there is no good reason to hide the truth from other people. Squint a little and we’re back to McNealy’s “Privacy is dead, get over it.” Or Schmidt’s “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”
And like those men, the programmers behind this harebrained scheme don’t imagine that it will ever apply to them. As Chris Palmer — who worked on Chromium — points out, this is not compatible with normal developer tools or debuggers, which are “incalculably valuable and not really negotiable”:
https://groups.google.com/a/chromium.org/g/blink-dev/c/Ux5h_kGO22g/m/5Lt5cnkLCwAJ
This proposal is still obscure in the mainstream, but in tech circles, it has precipitated a flood of righteous fury:
https://arstechnica.com/gadgets/2023/07/googles-web-integrity-api-sounds-like-drm-for-the-web/
As I wrote last week, giving manufacturers the power to decide how your computer is configured, overriding your own choices, is a bad tradeoff — the worst tradeoff, a greased slide into terminal enshittification:
https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon
This is how you get Unauthorized Bread:
https://arstechnica.com/gaming/2020/01/unauthorized-bread-a-near-future-tale-of-refugees-and-sinister-iot-appliances/
All of which leads to the question: what now? What should be done about WEI and remote attestation?
Let me start by saying: I don’t think it should be illegal for programmers to design and release these tools. Code is speech, and we can’t understand how this stuff works if we can’t study it.
But programmers shouldn’t deploy it in production code, in the same way that programmers should be allowed to make pen-testing tools, but shouldn’t use them to attack production systems and harm their users. Programmers who do this should be criticized and excluded from the society of their ethical, user-respecting peers.
Corporations that use remote attestation should face legal restrictions: privacy law should prevent the use of remote attestation to compel the production of true facts about users or the exclusion of users who refuse to produce those facts. Unfair competition law should prevent companies from using remote attestation to block interoperability or tie their products to related products and services.
Finally, we must withdraw the laws that prevent users and programmers from overriding TPMs, secure enclaves and remote attestations. You should have the right to study and modify your computer to produce false attestations, or run any code of your choosing. Felony contempt of business model is an outrage. We should alter or strike down DMCA 1201, the Computer Fraud and Abuse Act, and other laws (like contract law’s “tortious interference”) that stand between you and “sole and despotic dominion” over your own computer. All of that applies not just to users who want to reconfigure their own computers, but also toolsmiths who want to help them do so, by offering information, code, products or services to jailbreak and alter your devices.
Tech giants will squeal at this, insisting that they serve your interests when they prevent rivals from opening up their products. After all, those rivals might be bad guys who want to hurt you. That’s 100% true. What is likewise true is that no tech giant will defend you from its own bad impulses, and if you can’t alter your device, you are powerless to stop them:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
Companies should be stopped from harming you, but the right place to decide whether a business is doing something nefarious isn’t in the boardroom of that company’s chief competitor: it’s in the halls of democratically accountable governments:
https://www.eff.org/wp/interoperability-and-privacy
So how do we get there? Well, that’s another matter. In my next book, The Internet Con: How to Seize the Means of Computation (Verso Books, Sept 5), I lay out a detailed program, describing which policies will disenshittify the internet, and how to get those policies:
https://www.versobooks.com/products/3035-the-internet-con
Predictably, there are challenges getting this kind of book out into the world via our concentrated tech sector. Amazon refuses to carry the audio edition on its monopoly audiobook platform, Audible, unless it is locked to Amazon forever with mandatory DRM. That’s left me self-financing my own DRM-free audio edition, which is currently available for pre-order via this Kickstarter:
http://seizethemeansofcomputation.org
Tumblr media
I’m kickstarting the audiobook for “The Internet Con: How To Seize the Means of Computation,” a Big Tech disassembly manual to disenshittify the web and bring back the old, good internet. It’s a DRM-free book, which means Audible won’t carry it, so this crowdfunder is essential. Back now to get the audio, Verso hardcover and ebook:
https://www.kickstarter.com/projects/doctorow/the-internet-con-how-to-seize-the-means-of-computation
Tumblr media
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/08/02/self-incrimination/#wei-bai-bai
Tumblr media
[Image ID: An anatomical drawing of a flayed human head; it has been altered to give it a wide-stretched mouth revealing a gadget nestled in the back of the figure's throat, connected by a probe whose two coiled wires stretch to an old fashioned electronic box. The head's eyes have been replaced by the red, menacing eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.' Behind the head is a code waterfall effect as seen in the credits of the Wachowskis' 'The Matrix.']
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
2K notes · View notes
infirmux · 1 year
Text
now perhaps it is finally time to watch that last hellraiser i think i was only putting it off so long so as to trick myself into thinking i have enjoyed the time we spent together but there will still be some lamentation when it is gone....
5 notes · View notes
felikatze · 1 year
Text
i put on roro melodies for one car ride and now the laix brainrot is back so bad <3
2 notes · View notes
misskamelie · 2 years
Text
I believe that children should be taught/learnt how technology works, but doing so by giving them a phone or ipad is literally the worst thing one could do, methinks
2 notes · View notes
ploridafanthers · 12 days
Text
you guys... know you aren't supposed to be chasing after actors as an attainable aesthetic ideal, right?
1 note · View note
shlimon · 7 months
Text
Empowering Digital Responsibility: The Role of Monitoring and Filtering Software
Tumblr media
In today's digital age, computer monitoring, content filtering, and time management software have become indispensable tools for various purposes. Whether you're a concerned parent, a vigilant employer, or an educational institution, understanding and utilizing these solutions is crucial for maintaining productivity, security, and safety in the digital realm.
Computer monitoring, at its core, involves tracking and recording computer activities, serving diverse objectives. In the corporate world, it optimizes employee productivity and enhances security by identifying potential threats and compliance issues. For parents, it offers peace of mind by ensuring a safe online environment for their children. In educational settings, computer monitoring safeguards students' digital learning experiences while promoting focus and productivity.
Computer monitoring software designed for schools is especially adept at addressing the unique challenges faced by educational institutions. It incorporates content filtering to block inappropriate or distracting websites, offers remote monitoring for efficient supervision, and enables time management to help strike a balance between learning and leisure. In addition, it produces usage reports and aids in identifying and preventing cyberbullying, which is of paramount concern for schools.
Choosing the right computer monitoring software can be daunting, considering the multitude of options available. Some of the top choices include Veriato, Netsanity, Net Nanny, Monitask, Lightspeed Systems, Qustodio, and Teramind, each tailored to specific needs.
To delve deeper into the world of computer monitoring, content filtering, and time management software, we invite you to read our full article on www.saasandgadget.com. Discover how these tools can transform your digital experience, whether at home, in the workplace, or in an educational setting. Stay informed, stay secure, and harness the power of the digital world with the right software solutions.
Visit www.saasandgadget.com now to read the full article and explore the best computer monitoring software options for your unique needs. Elevate productivity, enhance security, and ensure a safe online environment with the right tools at your disposal.
0 notes
Text
my cousin is talking about the lyrics he relates to (not from one song just lyrics in general) and why he does and like I would love to join in absolutely so!! but it feels like the lyrics I relate to aren't good enough and what if I'm getting everything wrong about said lyrics. so nevermind
1 note · View note