#computer attack
Text
Has your heart ever skipped a beat reading a headline about a ransomware attack? Have you ever wondered if your precious photos, crucial work documents, or even your entire system could be held hostage by malicious software? You're not alone. Ransomware attacks are a growing threat, targeting individuals and organizations alike, and the consequences can be devastating. But here's the good news: preventing ransomware attacks is largely within your control.
In this blog post, we’re going to dive deep into practical, actionable strategies that will help you secure your computer from ransomware, safeguarding your digital life. Get ready to empower yourself with knowledge and build a robust defense!
0 notes
Text
attacks for upyoursbrute, shystercrook and bigfishnolungman on art fight!
#my art#artfight#art fight#artfight2025#furry#anthro#dragon#starting to worry i won't get to 200 attacks total but i'm gonna do my best. lol.#actually idk i just checked i think i can do 22 attacks in uhh 3 weeks or whatever...gotta LOCK IN#anyway i have been destroyed by art again thank you anyone who's attacked me...starting revenge season early to get to as many people#as i can#ALSO being nerfed by the weather it's so hot....which I LOVE but i don't love hunching in front of a computer while it's so nice#so maybe linocuts will see a dramatic return against all odds this year idk#ANYWAY#much love
757 notes
Text
Are the means of computation even seizable?

I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me in PITTSBURGH TOMORROW (May 15) at WHITE WHALE BOOKS, and in PDX on Jun 20 at BARNES AND NOBLE with BUNNIE HUANG. More tour dates (London, Manchester) here.
Something's very different in tech. Once upon a time, every bad choice by tech companies – taking away features, locking out mods or plugins, nerfing the API – was countered, nearly instantaneously, by someone writing a program that overrode that choice.
Bad clients would be muscled aside by third-party clients. Locked bootloaders would be hacked and replaced. Code that confirmed you were using OEM parts, consumables or adapters would be found and nuked from orbit. Weak APIs would be replaced with muscular, unofficial APIs built out of unstoppable scrapers running on headless machines in some data-center. Every time some tech company erected a 10-foot enshittifying fence, someone would show up with an 11-foot disenshittifying ladder.
Those 11-foot ladders represented the power of interoperability, the inescapable bounty of the Turing-complete, universal von Neumann machine, which, by definition, is capable of running every valid program. Specifically, they represented the power of adversarial interoperability – when someone modifies a technology against its manufacturer's wishes. Adversarial interoperability is the origin story of today's tech giants, from Microsoft to Apple to Google:
https://www.eff.org/deeplinks/2019/10/adversarial-interoperability
But adversarial interop has been in steady decline for the past quarter-century. These big companies moved fast and broke things, but no one is returning the favor. If you ask the companies what changed, they'll just smirk and say that they're better at security than the incumbents they disrupted. The reason no one's hacked up a third-party iOS App Store is that Apple's security team is just so fucking 1337 that no one can break their shit.
I think this is nonsense. I think that what's really going on is that we've made it possible for companies to design their technologies in such a way that any attempt at adversarial interop is illegal.
"Anticircumvention" laws like Section 1201 of the 1998 Digital Millennium Copyright Act make bypassing any kind of digital lock (AKA "Digital Rights Management" or "DRM") very illegal. Under DMCA, just talking about how to remove a digital lock can land you in prison for 5 years. I tell the story of this law's passage in "Understood: Who Broke the Internet," my new podcast series for the CBC:
https://pluralistic.net/2025/05/08/who-broke-the-internet/#bruce-lehman
For a quarter century, tech companies have aggressively lobbied and litigated to expand the scope of anticircumvention laws. At the same time, companies have come up with a million ways to wrap their products in digital locks that are a crime to break.
Digital locks let Chamberlain, a garage-door opener monopolist, block all third-party garage-door apps. Then, Chamberlain stuck ads in its app, so you have to watch an ad to open your garage door:
https://pluralistic.net/2023/11/09/lead-me-not-into-temptation/#chamberlain
Digital locks let John Deere block third-party repair of its tractors:
https://pluralistic.net/2022/05/08/about-those-kill-switched-ukrainian-tractors/
And they let Apple block third-party repair of iPhones:
https://pluralistic.net/2022/05/22/apples-cement-overshoes/
These companies built 11-foot ladders to get over their competitors' 10-foot walls, and then they kicked the ladder away. Once they were secure atop their walls, they committed enshittifying sins their fallen adversaries could only dream of.
I've been campaigning to abolish anticircumvention laws for the past quarter-century, and I've noticed a curious pattern. Whenever these companies stand to lose their legal protections, they freak out and spend vast fortunes to keep those protections intact. That's weird, because it strongly implies that their locks don't work. A lock that works works, whether or not it's illegal to break that lock. The reason Signal encryption works is that it's working encryption. The legal status of breaking Signal's encryption has nothing to do with whether it works. If Signal's encryption was full of technical flaws but it was illegal to point those flaws out, you'd be crazy to trust Signal.
Signal does get involved in legal fights, of course, but the fights it gets into are ones that require Signal to introduce defects in its encryption – not fights over whether it is legal to disclose flaws in Signal or exploit them:
https://pluralistic.net/2023/03/05/theyre-still-trying-to-ban-cryptography/
But tech companies that rely on digital locks manifestly act like their locks don't work and they know it. When the tech and content giants bullied the W3C into building DRM into 2 billion users' browsers, they categorically rejected any proposal to limit their ability to destroy the lives of people who broke that DRM, even if it was only to add accessibility or privacy to video:
https://www.eff.org/deeplinks/2017/09/open-letter-w3c-director-ceo-team-and-membership
The thing is, if the lock works, you don't need the legal right to destroy the lives of people who find its flaws, because it works.
Do digital locks work? Can they work? I think the answer to both questions is a resounding no. The design theory of a digital lock is that I can provide you with an encrypted file that your computer has the keys to. Your computer will access those keys to decrypt or sign a file, but only under the circumstances that I have specified. Like, you can install an app when it comes from my app store, but not when it comes from a third party. Or you can play back a video in one kind of browser window, but not in another one. For this to work, your computer has to hide a cryptographic key from you, inside a device you own and control. As I pointed out more than a decade ago, this is a fool's errand:
https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/
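The fool's errand can be sketched in a few lines. Here's a toy "digital lock," with every name hypothetical: the vendor ships encrypted content, but the player on your device must carry the decryption key in order to show you anything, and the "policy" is just an if-statement running on hardware the adversary owns:

```python
# Toy digital lock: the key ships inside the player on the user's device.
# Hypothetical names throughout; a sketch of the design theory, not any real DRM.
from hashlib import sha256

SECRET_KEY = b"hidden-inside-your-own-device"  # embedded in the player binary

def keystream(key: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from the key (toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(SECRET_KEY, len(plaintext))))

def play(ciphertext: bytes, policy_ok: bool) -> bytes:
    """Decrypt 'only under the circumstances I have specified'."""
    if not policy_ok:  # the vendor's policy is just a branch in code you possess
        raise PermissionError("playback not authorized")
    return bytes(c ^ k for c, k in zip(ciphertext, keystream(SECRET_KEY, len(ciphertext))))

movie = b"the movie"
locked = encrypt(movie)
print(play(locked, policy_ok=True))  # b'the movie'
```

Anyone who pulls SECRET_KEY out of the device can run the same math with `policy_ok=True` forever; nothing but law makes that hard.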
After all, you or I might not have the knowledge and resources to uncover the keys' hiding place, but someone does. Maybe that someone is a person looking to go into business selling your customers the disenshittifying plugin that unfucks the thing you deliberately broke. Maybe it's a hacker-tinkerer, pursuing an intellectual challenge. Maybe it's a bored grad student with a free weekend, an electron-tunneling microscope, and a seminar full of undergrads looking for a project.
The point is that hiding secrets in devices that belong to your adversaries is very bad security practice. No matter how good a bank safe is, the bank keeps it in its vault – not in the bank-robber's basement workshop.
For a hiding-secrets-in-your-adversaries'-device plan to work, the manufacturer has to make zero mistakes. The adversary – a competitor, a tinkerer, a grad student – only has to find one mistake and exploit it. This is a bedrock of security theory: attackers have an inescapable advantage.
So I think that DRM doesn't work. I think DRM is a legal construct, not a technical one. I think DRM is a kind of magic Saran Wrap that manufacturers can wrap around their products, and, in so doing, make it a literal jailable offense to use those products in otherwise legal ways that their shareholders don't like. As Jay Freeman put it, using DRM creates a new law called "Felony Contempt of Business Model." It's a law that has never been passed by any legislature, but is nevertheless enforceable.
In the 25 years I've been fighting anticircumvention laws, I've spoken to many government officials from all over the world about the opportunity that repealing their anticircumvention laws represents. After all, Apple makes $100b/year by gouging app makers for 30 cents on every dollar. Allow your domestic tech sector to sell the tools to jailbreak iPhones and install third-party app stores, and you can convert Apple's $100b/year into a $100m/year business for one of your own companies, and the other $99,900,000,000 will be returned to the world's iPhone owners as a consumer surplus.
But every time I pitched this, I got the same answer: "The US Trade Representative forced us to pass this law, and threatened us with tariffs if we didn't pass it." Happy Liberation Day, people – every country in the world is now liberated from the only reason to keep this stupid-ass law on their books:
https://pluralistic.net/2025/01/15/beauty-eh/#its-the-only-war-the-yankees-lost-except-for-vietnam-and-also-the-alamo-and-the-bay-of-ham
In light of the Trump tariffs, I've been making the global rounds again, making the case for an anticircumvention repeal:
https://www.ft.com/content/b882f3a7-f8c9-4247-9662-3494eb37c30b
One of the questions I've been getting repeatedly from policy wonks, activists and officials is, "Is it even possible to jailbreak modern devices?" They want to know if companies like Apple, Tesla, Google, Microsoft, and John Deere have created unbreakable digital locks. Obviously, this is an important question, because if these locks are impregnable, then getting rid of the law won't deliver the promised benefits.
It's true that there aren't as many jailbreaks as we used to see. When a big project like Nextcloud – which is staffed up with extremely accomplished and skilled engineers – gets screwed over by Google's app store, they issue a press-release, not a patch:
https://arstechnica.com/gadgets/2025/05/nextcloud-accuses-google-of-big-tech-gatekeeping-over-android-app-permissions/
Perhaps that's because the tech staff at Nextcloud are no match for Google, not even with the attacker's advantage on their side.
But I don't think so. Here's why: we do still get jailbreaks and mods, but these almost exclusively come from anonymous tinkerers and hobbyists:
https://consumerrights.wiki/Mazda_DMCA_takedown_of_Open_Source_Home_Assistant_App
Or from pissed off teenagers:
https://www.theverge.com/2022/9/29/23378541/the-og-app-instagram-clone-pulled-from-app-store
These hacks are incredibly ambitious! How ambitious? How about a class break for every version of iOS as well as an unpatchable hardware attack on 8 years' worth of Apple bootloaders?
https://pluralistic.net/2020/05/25/mafia-logic/#sosumi
Now, maybe it's the case that all the world's best hackers are posting free code under pseudonyms. Maybe all the code wizards working for venture-backed tech companies that stand to make millions through clever reverse engineering are just not as mad skilled as teenagers who want an ad-free Insta, and that's why they've never replicated the feat.
Or maybe it's because teenagers and anonymous hackers are just about the only people willing to risk a $500,000 fine and 5-year prison sentence. In other words, maybe the thing that protects DRM is law, not code. After all, when Polish security researchers revealed the existence of secret digital locks that the train manufacturer Newag used to rip off train operators for millions of euros, Newag dragged them into court:
https://fsfe.org/news/2025/news-20250407-01.en.html
Tech companies are the most self-mythologizing industry on the planet, beating out even the pharma sector in boasting about their prowess and good corporate citizenship. They swear that they've made a functional digital lock…but they sure act like the only thing those locks do is let them sue people who reveal their workings.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/05/14/pregnable/#checkm8
#pluralistic#apple#drm#og app#instagram#meta#dmca 1201#comcom#competitive compatibility#interop#interoperability#adversarial interoperability#who broke the internet#self-mythologizing#infosec#schneiers law#red team advantage#attackers advantage#luddism#seize the means of computation
433 notes
Text
[HORRIFIC_DIAL_UP_NOISES.MP3]
artfight attack for @r0yalstar ! come fight me <3
#retro#1990s#artfight#artists on tumblr#artfight 2024#art#team stardust#i'm shooting for at LEAST 15 attacks this year. i'm at 11#computers#object head#r0yalstar
2K notes
Text
I’ve started putting squirrel tv on for them when I want to use my computer. So far it seems to be working to keep Bug and Bianca from “assisting” me on the computer
#in this scenario assisting me means they are attacking the computer screen when the mouse moves#my post#Bianca#Bug#Arthur#Petunia#Arasha
540 notes
Text
I’ve noticed that I end up focusing on a different character every time I get back into Don’t Starve. So now I’m playing the funny robot for the first time.
#dst#dont starve#dont starve together#wx 78#ghost draws#omg finally#I was fighting the tags so much#now I can add my thoughts back in hopefully#but I finally got my computer fix so I’ve started playing#I’m playing them for the first time and they’re already rubbing off on me I think#I’ve been so much more aggressive than usual#ive been fighting way more. tho having that mod that shows the attack radius also helps. it helps to learn kiting#I’m scared for spring I have no plan for it. I was playing a lot of wurt before this is like the opposite#I already missed deerclops cause I was sailing. no idea how I’m gonna do this
248 notes
Text
lil fake advertisement for my portfolio
#would u buy energy boba from cute computer cat girl#shit will probably give u a heart attack but yk.. cat girl#ghastpost#art#illustration#artists on tumblr#drawing#sketch#oc#orignal character#cat girl#anime#robot#character design#furry
163 notes
Text

"Eric Noonan, CEO of cybersecurity provider CyberSheath, told CNN it’s likely too early to tell if an attack caused the issues.
“One of the things that should always be taken with a grain of salt is any statements made in the short period of time, immediately after, or even in this case during an attack,” Noonan said.
Musk has a history of attributing technical snafus to cyberattacks. When his conversation with Donald Trump on X started 42 minutes late in August 2024, he said there was a “probability” of an attack."
CNN
Sky News
POLITICO
#destiel meme news#destiel meme#news#united states#us news#world news#elon musk#fuck elon#elongated muskrat#x twitter#twitter#x#cyber attack#cyberattack#cyber security#tech industry#computer technology#computer security#i hope it's just the gutted hollowed out shell of twitter's code finally giving out
191 notes
Text
I took a three hour break.... and then felt absurdly motivated to make this attack lmao.
@jansuta's 01/02!!!
#art#digital art#csp#clip studio paint#clip studio pro#clip studio art#robot#computer#not my oc!#artfight#art fight#artfight2025#art fight 2025#artfight 2025#artfight attack#art fight attack#team crystals#animation#this guy is REALLY cool concept and design wise... hopefully you like it#(especially since it's friendly fire) (lol)
68 notes
Text
ARMIN’s PROFILE
#armin arlert#armin aot#nerdmin#compsci student armin#computer science student armin#aot smau#aot college au#aot modern au#aot#attack on titan modern au#attack on titan au#attack on titan#snk smau#snk#shingeki no kyojin#instagram au#anime and manga#anime#smau#social media au#socmed au#eren x armin#eremin#arumika#armin x mikasa
73 notes
Text

#British Military Personnel#Computer Attack#Defense Minister#Europe#Grant Shapps#Personal Data#Thousands
0 notes
Text
#alanwakeedit#alan wake#alan wake 2#*#**#mk.op#mk.edit#mk.gifs#otis_inf cam shots#reshade tools used#maybe another reason i'm having such terrible panic attacks is that i'm not going to have my computer next week#so in that way i almost wish i was dogsitting this week instead#cause i do want to spend my time creating even though i'm currently crying so not sure how that's going to go#even though yes i just made this gif through my tears
64 notes
Text
Forcing your computer to rat you out

Powerful people imprisoned by the cluelessness of their own isolation, locked up with their own motivated reasoning: “It’s impossible to get a CEO to understand something when his quarterly earnings call depends on him not understanding it.”
Take Mark Zuckerberg. Zuckerberg insists that anyone who wants to use a pseudonym online is "two-faced," engaged in dishonest social behavior. The Zuckerberg Doctrine claims that forcing people to use their own names is a way to ensure civility. This is an idea so radioactively wrong, it can be spotted from orbit.
From the very beginning, social scientists (both inside and outside Facebook) told Zuckerberg that he was wrong. People have lots of reasons to hide their identities online, both good and bad, but a Real Names Policy affects different people differently:
https://memex.craphound.com/2018/01/22/social-scientists-have-warned-zuck-all-along-that-the-facebook-theory-of-interaction-would-make-people-angry-and-miserable/
For marginalized and at-risk people, there are plenty of reasons to want to have more than one online identity — say, because you are a #MeToo whistleblower hoping that Harvey Weinstein won’t sic his ex-Mossad mercenaries on you:
https://www.newyorker.com/news/news-desk/harvey-weinsteins-army-of-spies
Or maybe you’re a Rohingya Muslim hoping to avoid the genocidal attentions of the troll army that used Facebook to organize — under their real, legal names — to rape and murder you and everyone you love:
https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/
But even if no one is looking to destroy your life or kill you and your family, there are plenty of good reasons to present different facets of your identity to different people. No one talks to their lover, their boss and their toddler in exactly the same way, or reveals the same facts about their lives to those people. Maintaining different facets to your identity is normal and healthy — and the opposite, presenting the same face to everyone in your life, is a wildly terrible way to live.
None of this is controversial among social scientists, nor is it hard to grasp. But Zuckerberg stubbornly stuck to this anonymity-breeds-incivility doctrine, even as dictators used the fact that Facebook forced dissidents to use their real names to retain power through the threat (and reality) of arrest and torture:
https://pluralistic.net/2023/01/25/nationalize-moderna/#hun-sen
Why did Zuck cling to this dangerous and obvious fallacy? Because the more he could collapse your identity into one unitary whole, the better he could target you with ads. Truly, it is impossible to get a billionaire to understand something when his mega-yacht depends on his not understanding it.
This motivated reasoning ripples through all of Silicon Valley’s top brass, producing what Anil Dash calls “VC QAnon,” the collection of conspiratorial, debunked and absurd beliefs embraced by powerful people who hold the digital lives of billions of us in their quivering grasp:
https://www.anildash.com/2023/07/07/vc-qanon/
These fallacy-ridden autocrats like to disguise their demands as observations, as though wanting something to be true was the same as making it true. Think of when Eric Schmidt — then the CEO of Google — dismissed online privacy concerns, stating “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place”:
https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-dismisses-privacy
Schmidt was echoing the sentiments of his old co-conspirator, Sun Microsystems CEO Scott McNealy: “You have zero privacy anyway. Get over it”:
https://www.wired.com/1999/01/sun-on-privacy-get-over-it/
Both men knew better. Schmidt, in particular, is very jealous of his own privacy. When Cnet reporters used Google to uncover and publish public (but intimate and personal) facts about Schmidt, Schmidt ordered Google PR to ignore all future requests for comment from Cnet reporters:
https://www.cnet.com/tech/tech-industry/how-cnet-got-banned-by-google/
(Like everything else he does, Elon Musk’s policy of responding to media questions about Twitter with a poop emoji is just him copying things other people thought up, making them worse, and taking credit for them:)
https://www.theverge.com/23815634/tesla-elon-musk-origin-founder-twitter-land-of-the-giants
Schmidt’s actions do not reflect an attitude of “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” Rather, they are the normal response that we all have to getting doxed.
When Schmidt and McNealy and Zuck tell us that we don’t have privacy, or we don’t want privacy, or that privacy is bad for us, they’re disguising a demand as an observation. “Privacy is dead” actually means, “When privacy is dead, I will be richer than you can imagine, so stop trying to save it, goddamnit.”
We are all prone to believing our own bullshit, but when a tech baron gets high on his own supply, his mental contortions have broad implications for all of us. A couple years after Schmidt’s anti-privacy manifesto, Google launched Google Plus, a social network where everyone was required to use their “real name.”
This decision — justified as a means of ensuring civility, but in fact a transparent ruse to improve ad targeting — kicked off the Nym Wars:
https://epeus.blogspot.com/2011/08/google-plus-must-stop-this-identity.html
One of the best documents to come out of that ugly conflict is “Falsehoods Programmers Believe About Names,” a profound and surprising enumeration of all the ways that the experiences of tech bros in Silicon Valley are the real edge-cases, unreflective of the reality of billions of their users:
https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/
This, in turn, spawned a whole genre of programmer-fallacy catalogs, falsehoods programmers believe about time, currency, birthdays, timezones, email addresses, national borders, nations, biometrics, gender, language, alphabets, phone numbers, addresses, systems of measurement, and, of course, families:
https://github.com/kdeldycke/awesome-falsehood
But humility is in short supply in tech. It’s impossible to get a programmer to understand something when their boss requires them not to understand it. A programmer will happily insist that ordering you to remove your “mask” is for your own good — and not even notice that they’re taking your skin off with it.
There are so many ways that tech executives could improve their profits if only we would abandon our stubborn attachment to being so goddamned complicated. Think of Netflix and its anti-password-sharing holy war, which is really a demand that we redefine "family" to be legible and profitable for Netflix:
https://pluralistic.net/2023/02/02/nonbinary-families/#red-envelopes
But despite the entreaties of tech companies to collapse our identities, our families, and our online lives into streamlined, computably hard-edged shapes that fit neatly into their database structures, we continue to live fuzzy, complicated lives that only glancingly resemble those of the executives seeking to shape them.
Now, the rich, powerful people making these demands don’t plan on being constrained by them. They are conservatives, in the tradition of #FrankWilhoit, believers in a system of “in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect”:
https://crookedtimber.org/2018/03/21/liberals-against-progressives/#comment-729288
As with Schmidt’s desire to spy on you from asshole to appetite for his own personal gain, and his violent aversion to having his own personal life made public, the tech millionaires and billionaires who made their fortune from the flexibility of general purpose computers would like to end that flexibility. They insist that the time for general purpose computers has passed, and that today, “consumers” crave the simplicity of appliances:
https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/
It is in the War On General Purpose Computing that we find the cheapest and flimsiest rhetoric. Companies like Apple — and their apologists — insist that no one wants to use third-party app stores, or seek out independent repair depots — and then spend millions to make sure that it’s illegal to jailbreak your phone or get it fixed outside of their own official channel:
https://doctorow.medium.com/apples-cement-overshoes-329856288d13
The cognitive dissonance of “no one wants this,” and “we must make it illegal to get this” is powerful, but the motivated reasoning is more powerful still. It is impossible to get Tim Cook to understand something when his $49 million paycheck depends on him not understanding it.
The War on General Purpose Computing has been underway for decades. Computers, like the people who use them, stubbornly insist on being reality-based, and the reality of computers is that they are general purpose. Every computer is a Turing complete, universal Von Neumann machine, which means that it can run every valid program. There is no way to get a computer to be almost Turing Complete, only capable of running programs that don’t upset your shareholders’ fragile emotional state.
There is no such thing as a printer that will only run the "reject third-party ink" program. There is no such thing as a phone that will only run the "reject third-party apps" program. There are only laws, like Section 1201 of the Digital Millennium Copyright Act, that make writing and distributing those programs a felony punishable by a five-year prison sentence and a $500,000 fine (for a first offense).
That is to say, the War On General Purpose Computing is only incidentally a technical fight: it is primarily a legal fight. When Apple says, "You can't install a third party app store on your phone," what they mean is, "it's illegal to install that third party app store." It's not a technical countermeasure that stands between you and technological self-determination, it's a legal doctrine we can call "felony contempt of business model":
https://locusmag.com/2020/09/cory-doctorow-ip/
But the mighty US government will not step in to protect a company’s business model unless it at least gestures towards the technical. To invoke DMCA 1201, a company must first add the thinnest skin of digital rights management to their product. Since 1201 makes removing DRM illegal, a company can use this molecule-thick scrim of DRM to felonize any activity that the DRM prevents.
More than 20 years ago, technologists started to tinker with ways to combine the legal and technical to tame the wild general purpose computer. Starting with Microsoft’s Palladium project, they theorized a new “Secure Computing” model for allowing companies to reach into your computer long after you had paid for it and brought it home, in order to discipline you for using it in ways that undermined its shareholders’ interest.
Secure Computing began with the idea of shipping every computer with two CPUs. The first one was the normal CPU, the one you interacted with when you booted it up, loaded your OS, and ran programs. The second CPU would be a Trusted Platform Module, a brute-simple system-on-a-chip designed to be off-limits to modification, even by its owner (that is, you).
The TPM would ship with a limited suite of simple programs it could run, each thoroughly audited for bugs, as well as secret cryptographic signing keys that you were not permitted to extract. The original plan called for some truly exotic physical security measures for that TPM, like an acid-filled cavity that would melt the chip if you tried to decap it or run it through an electron-tunneling microscope:
https://pluralistic.net/2020/12/05/trusting-trust/#thompsons-devil
This second computer represented a crack in the otherwise perfectly smooth wall of a computer's general purposeness; and Trusted Computing proposed to hammer a piton into that crack and use it to anchor a whole superstructure that could observe — and limit — the activity of your computer.
This would start with observation: the TPM would observe every step of your computer’s boot sequence, creating cryptographic hashes of each block of code as it loaded and executed. Each stage of the boot-up could be compared to “known good” versions of those programs. If your computer did something unexpected, the TPM could halt it in its tracks, blocking the boot cycle.
What kind of unexpected things do computers do during their boot cycle? Well, if your computer is infected with malware, it might load poisoned versions of its operating system. Once your OS is poisoned, it’s very hard to detect its malicious conduct, since normal antivirus programs rely on the OS to faithfully report what your computer is doing. When the AV program asks the OS to tell it which programs are running, or which files are on the drive, it has no choice but to trust the OS’s response. When the OS is compromised, it can feed a stream of lies to users’ programs, assuring these apps that everything is fine.
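The measurement step can be sketched in a few lines. This is a simplified, hypothetical model of PCR-style "extend" hashing — not any real TPM's interface — where each boot stage is folded into a running register and the final value is compared to a known-good one:

```python
# Sketch of TPM-style measured boot, with hypothetical boot stages.
# Each stage's code is hashed into a running register (like a PCR extend);
# any tampering anywhere in the chain changes the final measurement.
from hashlib import sha256

def extend(register: bytes, stage_code: bytes) -> bytes:
    # new_register = H(old_register || H(stage)) -- order-sensitive by design
    return sha256(register + sha256(stage_code).digest()).digest()

def measure_boot(stages) -> bytes:
    register = b"\x00" * 32  # registers start zeroed at power-on
    for code in stages:
        register = extend(register, code)
    return register

good_boot = [b"bootloader v1", b"kernel v5", b"init system"]
KNOWN_GOOD = measure_boot(good_boot)

# A poisoned kernel produces a different measurement, however it lies later:
infected_boot = [b"bootloader v1", b"kernel v5 + rootkit", b"init system"]
assert measure_boot(infected_boot) != KNOWN_GOOD  # TPM can halt the boot here
```

The point of measuring *before* the OS runs is that the rootkit never gets a chance to lie: the hashes were taken while it was still inert data on disk.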
That’s a very beneficial use for a TPM, but there’s a sinister flipside: the TPM can also watch your boot sequence to make sure that there aren’t beneficial modifications present in your operating system. If you modify your OS to let you do things the manufacturer wants to prevent — like loading apps from a third-party app-store — the TPM can spot this and block it.
Now, these beneficial and sinister uses can be teased apart. When the Palladium team first presented its research, my colleague Seth Schoen proposed an “owner override”: a modification of Trusted Computing that would let the computer’s owner override the TPM:
https://web.archive.org/web/20021004125515/http://vitanuova.loyalty.org/2002-07-05.html
This override would introduce its own risks, of course. A user who was tricked into overriding the TPM might expose themselves to malicious software, which could harm that user, as well as attacking other computers on the user’s network and the other users whose data were on the compromised computer’s drive.
But an override would also provide serious benefits: it would rule out the monopolistic abuse of a TPM to force users to run malicious code that the manufacturer insisted on — code that prevented the user from doing things that benefited the user, even if it harmed the manufacturer’s shareholders. For example, with owner override, Microsoft couldn’t force you to use its official MS Office programs rather than third-party compatible programs like Apple’s iWork or Google Docs or LibreOffice.
Owner override also completely changed the calculus for another, even more dangerous part of Trusted Computing: remote attestation.
Remote Attestation is a way for third parties to request reliable, cryptographically secured assurances about which operating system and programs your computer is running. In Remote Attestation, the TPM in your computer observes every stage of your computer’s boot, gathers information about all the programs you’re running, and cryptographically signs them, using the signing keys the manufacturer installed during fabrication.
You can send this “attestation” to other people on the internet. If they trust that your computer’s TPM is truly secure, then they know that you have sent them a true picture of your computer’s working (the actual protocol is a little more complicated and involves the remote party sending you a random number to cryptographically hash with the attestation, to prevent out-of-date attestations).
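The challenge-response shape of that protocol can be sketched in a few lines. This uses HMAC as a stand-in for the TPM's manufacturer-installed signing key (real TPMs use asymmetric keys and PCR registers, so this is only a model of the freshness trick: the verifier's random number is bound into the signature, so a stale attestation can't be replayed):

```python
# Sketch of challenge-response attestation. The verifier sends a fresh
# nonce; the "TPM" signs its boot measurements together with that nonce,
# so an old attestation can't be replayed as a current one.
import hmac, hashlib, os

TPM_KEY = b"manufacturer-installed-secret"  # hypothetical key

def attest(boot_measurements: bytes, nonce: bytes) -> bytes:
    """The TPM signs (measurements || nonce), binding the reply to this challenge."""
    return hmac.new(TPM_KEY, boot_measurements + nonce, hashlib.sha256).digest()

def verify(boot_measurements: bytes, nonce: bytes, signature: bytes) -> bool:
    expected = hmac.new(TPM_KEY, boot_measurements + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

nonce = os.urandom(16)  # the verifier's fresh random challenge
measurements = b"bootloader=abc123;kernel=def456"
sig = attest(measurements, nonce)
assert verify(measurements, nonce, sig)                # fresh answer accepted
assert not verify(measurements, os.urandom(16), sig)   # replayed answer rejected
```

The key assumption doing all the work here is that the signing key never leaves the TPM; whoever holds a copy of it can forge any attestation.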
Now, this is also potentially beneficial. If you want to make sure that your technologically unsophisticated friend is running an uncompromised computer before you transmit sensitive data to it, you can ask them for an attestation that will tell you whether they’ve been infected with malware.
But it’s also potentially very sinister. Your government can require all the computers in its borders to send a daily attestation to confirm that you’re still running the mandatory spyware. Your abusive spouse — or abusive boss — can do the same for their own disciplinary technologies. Such a tool could prevent you from connecting to a service using a VPN, and make it impossible to use Tor Browser to protect your privacy when interacting with someone who wishes you harm.
The thing is, it’s completely normal and good for computers to lie to other computers on behalf of their owners. Like, if your IoT ebike’s manufacturer goes out of business and all their bikes get bricked because they can no longer talk to their servers, you can run an app that tricks the bike into thinking that it’s still talking to the mothership:
https://nltimes.nl/2023/07/15/alternative-app-can-unlock-vanmoof-bikes-popular-amid-bankruptcy-fears
Or if you’re connecting to a webserver that tries to track you by fingerprinting you based on your computer’s RAM, screen size, fonts, etc, you can order your browser to send random data about this stuff:
https://jshelter.org/fingerprinting/
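The idea behind anti-fingerprinting tools like JShelter can be sketched as follows (attribute names here are illustrative, not a real browser API): a stable set of reported attributes hashes to the same identifier on every visit, while randomized attributes break that linkability.

```python
# Sketch: a fingerprint is just a hash over the attributes a browser
# reports. Report stable values and you're trackable across visits;
# randomize them and two visits no longer match.
import random, hashlib

def report_fingerprint(randomize: bool) -> str:
    attrs = {"screen": "1920x1080", "fonts": "Arial,Times", "ram_gb": "16"}
    if randomize:
        attrs["ram_gb"] = str(random.choice([4, 8, 16, 32]))
        attrs["fonts"] += "," + random.choice(["Verdana", "Georgia", "Courier"])
    blob = ";".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(blob.encode()).hexdigest()

# Without randomization, every visit yields the identical identifier:
assert report_fingerprint(False) == report_fingerprint(False)
```

This is exactly the kind of benign lying that remote attestation, as deployed against users, is designed to make impossible.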
Or if you’re connecting to a site that wants to track you and nonconsensually cram ads into your eyeballs, you can run an adblocker that doesn’t show you the ads, but tells the site that it did:
https://www.eff.org/deeplinks/2019/07/adblocking-how-about-nah
Owner override leaves some of the beneficial uses of remote attestation intact. If you’re asking a friend to remotely confirm that your computer is secure, you’re not going to use an override to send them bad data about your computer’s configuration.
And owner override also sweeps all of the malicious uses of remote attestation off the board. With owner override, you can tell any lie about your computer to a webserver, a site, your boss, your abusive spouse, or your government, and they can’t spot the lie.
But owner override also eliminates some beneficial uses of remote attestation. For example, owner override rules out remote attestation as a way for strangers to play multiplayer video games while confirming that none of them are using cheat programs (like aimhack). It also means that you can’t use remote attestation to verify the configuration of a cloud server you’re renting in order to assure yourself that it’s not stealing your data or serving malware to your users.
This is a tradeoff, and it’s a tradeoff that’s similar to lots of other tradeoffs we make online, between the freedom to do something good and the freedom to do something bad. Participating anonymously, contributing to free software, distributing penetration testing tools, or providing a speech platform that’s open to the public all represent the same tradeoff.
We have lots of experience with making the tradeoff in favor of restrictions rather than freedom: powerful bad actors are happy to attach their names to their cruel speech and incitement to violence. Their victims are silenced for fear of that retaliation.
When we tell security researchers they can’t disclose defects in software without the manufacturer’s permission, the manufacturers use this as a club to silence their critics, not as a way to ensure orderly updates.
When we let corporations decide who is allowed to speak, they act with a mixture of carelessness and self-interest, becoming off-the-books deputies of authoritarian regimes and corrupt, powerful elites.
Alas, we made the wrong tradeoff with Trusted Computing. For the past twenty years, Trusted Computing has been creeping into our devices, albeit in somewhat denatured form. The original vision of acid-filled secondary processors has been replaced with less exotic (and expensive) alternatives, like “secure enclaves.” With a secure enclave, the manufacturer saves on the expense of installing a whole second computer, and instead, they draw a notional rectangle around a region of your computer’s main chip and try really hard to make sure that it can only perform a very constrained set of tasks.
This gives us the worst of all worlds. When secure enclaves are compromised, we not only lose the benefit of cryptographic certainty, knowing for sure that our computers are only booting up trusted, unaltered versions of the OS, but those compromised enclaves run malicious software that is essentially impossible to detect or remove:
https://pluralistic.net/2022/07/28/descartes-was-an-optimist/#uh-oh
But while Trusted Computing has wormed its way into boot-restrictions — preventing you from jailbreaking your computer so it will run the OS and apps of your choosing — there’s been very little work on remote attestation…until now.
Web Environment Integrity is Google’s proposal to integrate remote attestation into everyday web-browsing. The idea is to allow web-servers to verify what OS, extensions, browser, and add-ons your computer is using before the server will communicate with you:
https://github.com/RupertBenWiser/Web-Environment-Integrity/blob/main/explainer.md
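From the server's side, a WEI-style gate would look something like the sketch below. This is a hypothetical model, not the actual proposal (which defines a JavaScript API and third-party attesters): the point is simply that a server can refuse to talk to any browser that can't present a valid attestation of an approved environment.

```python
# Sketch of a WEI-style gate: serve the page only to clients that present
# an attestation token, from an attester the site trusts, vouching for an
# environment on the site's approved list. All names are hypothetical.
import hmac, hashlib

ATTESTER_KEY = b"trusted-attester-key"
APPROVED = {"ChromeOS/Chrome/no-extensions"}

def attestation_token(environment: str) -> bytes:
    """What a trusted attester would issue for a truthfully reported environment."""
    return hmac.new(ATTESTER_KEY, environment.encode(), hashlib.sha256).digest()

def serve(environment: str, token: bytes) -> str:
    genuine = hmac.compare_digest(token, attestation_token(environment))
    if genuine and environment in APPROVED:
        return "200 OK: here is the page"
    return "403: unattested or unapproved environment"

ok_env = "ChromeOS/Chrome/no-extensions"
assert serve(ok_env, attestation_token(ok_env)).startswith("200")
# A truthfully attested but unapproved browser is locked out:
assert serve("Firefox/uBlock", attestation_token("Firefox/uBlock")).startswith("403")
```

Note the design choice baked into the second assertion: the browser running an ad blocker isn't lying about anything, and it's still refused, because the gatekeeping is about conformance, not honesty.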
Even by the thin standards of the remote attestation imaginaries, there are precious few beneficial uses for this. The googlers behind the proposal have a couple of laughable suggestions, like, maybe if ad-supported sites can comprehensively refuse to serve ad-blocking browsers, they will invest the extra profits in making things you like. Or: letting websites block scriptable browsers will make it harder for bad people to auto-post fake reviews and comments, giving users more assurances about the products they buy.
But foundationally, WEI is about compelling you to disclose true facts about yourself to people who you want to keep those facts from. It is a Real Names Policy for your browser. Google wants to add a new capability to the internet: the ability of people who have the power to force you to tell them things to know for sure that you’re not lying.
The fact that the authors assume this will be beneficial is just another “falsehood programmers believe”: there is no good reason to hide the truth from other people. Squint a little and we’re back to McNealy’s “Privacy is dead, get over it.” Or Schmidt’s “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”
And like those men, the programmers behind this harebrained scheme don’t imagine that it will ever apply to them. As Chris Palmer — who worked on Chromium — points out, this is not compatible with normal developer tools or debuggers, which are “incalculably valuable and not really negotiable”:
https://groups.google.com/a/chromium.org/g/blink-dev/c/Ux5h_kGO22g/m/5Lt5cnkLCwAJ
This proposal is still obscure in the mainstream, but in tech circles, it has precipitated a flood of righteous fury:
https://arstechnica.com/gadgets/2023/07/googles-web-integrity-api-sounds-like-drm-for-the-web/
As I wrote last week, giving manufacturers the power to decide how your computer is configured, overriding your own choices, is a bad tradeoff — the worst tradeoff, a greased slide into terminal enshittification:
https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon
This is how you get Unauthorized Bread:
https://arstechnica.com/gaming/2020/01/unauthorized-bread-a-near-future-tale-of-refugees-and-sinister-iot-appliances/
All of which leads to the question: what now? What should be done about WEI and remote attestation?
Let me start by saying: I don’t think it should be illegal for programmers to design and release these tools. Code is speech, and we can’t understand how this stuff works if we can’t study it.
But programmers shouldn’t deploy it in production code, in the same way that programmers should be allowed to make pen-testing tools, but shouldn’t use them to attack production systems and harm their users. Programmers who do this should be criticized and excluded from the society of their ethical, user-respecting peers.
Corporations that use remote attestation should face legal restrictions: privacy law should prevent the use of remote attestation to compel the production of true facts about users or the exclusion of users who refuse to produce those facts. Unfair competition law should prevent companies from using remote attestation to block interoperability or tie their products to related products and services.
Finally, we must withdraw the laws that prevent users and programmers from overriding TPMs, secure enclaves and remote attestations. You should have the right to study and modify your computer to produce false attestations, or run any code of your choosing. Felony contempt of business model is an outrage. We should alter or strike down DMCA 1201, the Computer Fraud and Abuse Act, and other laws (like contract law’s “tortious interference”) that stand between you and “sole and despotic dominion” over your own computer. All of that applies not just to users who want to reconfigure their own computers, but also to toolsmiths who want to help them do so, by offering information, code, products or services to jailbreak and alter those devices.
Tech giants will squeal at this, insisting that they serve your interests when they prevent rivals from opening up their products. After all, those rivals might be bad guys who want to hurt you. That’s 100% true. What is likewise true is that no tech giant will defend you from its own bad impulses, and if you can’t alter your device, you are powerless to stop them:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
Companies should be stopped from harming you, but the right place to decide whether a business is doing something nefarious isn’t in the boardroom of that company’s chief competitor: it’s in the halls of democratically accountable governments:
https://www.eff.org/wp/interoperability-and-privacy
So how do we get there? Well, that’s another matter. In my next book, The Internet Con: How to Seize the Means of Computation (Verso Books, Sept 5), I lay out a detailed program, describing which policies will disenshittify the internet, and how to get those policies:
https://www.versobooks.com/products/3035-the-internet-con
Predictably, there are challenges getting this kind of book out into the world via our concentrated tech sector. Amazon refuses to carry the audio edition on its monopoly audiobook platform, Audible, unless it is locked to Amazon forever with mandatory DRM. That’s left me self-financing my own DRM-free audio edition, which is currently available for pre-order via this Kickstarter:
http://seizethemeansofcomputation.org

I’m kickstarting the audiobook for “The Internet Con: How To Seize the Means of Computation,” a Big Tech disassembly manual to disenshittify the web and bring back the old, good internet. It’s a DRM-free book, which means Audible won’t carry it, so this crowdfunder is essential. Back now to get the audio, Verso hardcover and ebook:
https://www.kickstarter.com/projects/doctorow/the-internet-con-how-to-seize-the-means-of-computation
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/08/02/self-incrimination/#wei-bai-bai
[Image ID: An anatomical drawing of a flayed human head; it has been altered to give it a wide-stretched mouth revealing a gadget nestled in the back of the figure's throat, connected by a probe whose two coiled wires stretch to an old fashioned electronic box. The head's eyes have been replaced by the red, menacing eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.' Behind the head is a code waterfall effect as seen in the credits of the Wachowskis' 'The Matrix.']
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#chaffing#spoofing#remote attestation#rene descartes#adversarial interoperability#war on general purpose computing#canvas attacks#vpns#compelled speech#onion routing#owner override#stalkerware#ngscb#palladium#trusted computing#secure enclaves#tor#interop#net neutrality#taking the fifth#right to remain silent#real names policy#the zuckerberg doctrine#none of your business#the right to lie#right to repair#bossware#spyware#wei web environment integrity
2K notes
Text
Ah-ah! Does a story die with its narrator? Ah-ah! Surely it's forgotten soon or later. Ah-ah! Hope they spell my name right in the paper. Ah-ah! In here lies t h e g r e a t i m p e r s o n a t o r .
#halsey#the great impersonator#panic attack#hometown#ego#lonely is the muse#I've been listening to and enjoying this album since it dropped but finally made some gifs today#loads more are on tenor I was waiting to upgrade my computer
93 notes
Text
Hello, guys! Just a quick life update - I was on vacation for 2 weeks, came back on Monday and... started building my new husband! I still can't believe I managed to do it myself and that everything works. Today I FINALLY installed Sims 3 and made it work. My old hubby served me well for 9 years and it was sad to say goodbye to him but at the same time I'm so excited for my new monster! I love him already, haha :D I hope we have a bright future ahead!


See you very soon! Love you all ♥
#text post#rollo's ramblings#non sims#i always had prebuilt computers#i'm so proud of myself#although there were heart attack moments#can't wait to be active on tumblr again
199 notes