#content moderation at scale
Text
Podcasting "Let the Platforms Burn"
This week on my podcast, I read “Let the Platforms Burn,” a recent Medium column making the case that we should focus more on making it easier for people to leave platforms, rather than making the platforms less terrible places to be:
https://doctorow.medium.com/let-the-platforms-burn-6fb3e6c0d980
The platforms used to be a source of online stability, and many argued that by consolidating the wide and wooly web into a few “curated” silos, the platforms were replacing chaos with good stewardship. If we wanted to make the internet hospitable to normies, we were told, we had to accept that Apple and Facebook’s tightly managed “simplicity” was the only way to get there.
But today, all the platforms are on fire, all the time. They are rocked by scandals every bit as awful as the failures of the smaller sites of yesteryear, but while the harms of a Geocities or Livejournal moderation failure were confined to a small group of specialized users, failures in the big silos reach hundreds of millions or even billions of people.
What should we do about the rolling crisis of the platforms? The default response — beloved of Big Tech’s boosters and critics alike — is to impose rules on the platforms to make them more hospitable places for the billions they’ve engulfed. But I think that will fail. Instead, I think we should make the platforms less important places by freeing those billions.
That’s the argument of the column.
Think of California’s wildfires. While climate change has increased the intensity and frequency of our fires, climate (and neglect by PG&E) is merely part of the story. The other part of the story is fire-debt.
For millennia, the original people of California practiced controlled burns of the forests they lived, hunted, and played in. These burns cleared out sick and dying trees, scoured the forest floor of tinder, and opened spaces in the canopy that gave rise to new growth. Forests need fire — literally: the California redwood can’t reproduce without it:
https://www.pbs.org/wnet/nature/giant-sequoia-needs-fire-grow/15094/
But this ended centuries ago, when settlers stole the land and declared an end to “cultural burning” by the indigenous people they expropriated, imprisoned, and killed. They established permanent settlements within the fire zone, and embarked on a journey of escalating measures to keep that smouldering fire zone from igniting.
These heroic measures continue today, and they’ve set up a vicious cycle: fire suppression creates the illusion that it’s safe to live at the wildland-urban interface. Taken in by this illusion, more people move to the fire zone — and their presence creates political pressure for even more heroic measures.
The thing is, fire suppression doesn’t mean no fires — it means wildfires. The fire debt mounts and mounts, and without an orderly bankruptcy — controlled burns — we get chaotic defaults, the kind of fire that wipes out whole towns.
Eventually, we will have to change tacks: rather than making it safe to stay in the fire zone, we’re going to have to make it easy to leave, so that we can return to those controlled burns and pay down those fire-debts.
And that’s what we need to do with the platforms.
For most of the history of consumer tech and digital networks, fire was the norm. New platforms — PC companies, operating systems, online services — would spring up and grow with incredible speed, only to collapse, seemingly without warning.
To get to the bottom of this phenomenon, you need to understand two concepts: network effects and switching costs.
Network effects: A service enjoys network effects if it increases in value as more people use it. AOL Instant Messenger grows in usefulness every time someone signs up for it, and so does Facebook. The more users, the more reasons to join. The more people who join, the more people will join.
Switching costs: The things you have to give up when you leave a product or service. When you quit Audible, you have to throw away all your audiobooks (they will only play on Audible-approved players). When you leave Facebook, you have to say goodbye to all the friends, family, communities and customers that brought you there.
Tech has historically enjoyed enormous network effects, which propelled explosive growth. But it also enjoyed low switching costs, which underpinned implosive contraction. Because digital systems are universal (all computers can run all programs; all nodes on the network can connect to one another), it was historically very easy to switch from one service to another.
Someone building a new messenger service or social media platform could import your list of contacts, or even use bots to fetch the messages left for you on the old service and put them in the inbox on the new one, and then push your replies back to the people you left behind. Likewise, when Apple made its iWork office suite, it could reverse-engineer the Microsoft Office file formats so you could take all your data with you if you quit Windows and switched to MacOS:
https://www.eff.org/deeplinks/2019/06/adversarial-interoperability-reviving-elegant-weapon-more-civilized-age-slay
This dynamic — growth through network effects and contraction through low switching costs — is why we think of tech as so dynamic. It’s why companies like DEC were able to turn out minicomputers that shattered the dominance of mainframes. But it’s also why DEC was brought so low that a PC company, Compaq, was able to buy it for pennies on the dollar. Compaq — a company that built an empire by making interoperable IBM PC clones — was itself “disrupted” a few years later, and HP bought it for spare change found in the sofa cushions.
But HP didn’t fall to Compaq’s fate. It survived — as did IBM, Microsoft, Apple, Google and Facebook. Somehow, the cycle of “good fire” that kept any company from growing too powerful was interrupted.
Today’s tech giants run “walled gardens” that are actually walled prisons that entrap their billions of users by imposing high switching costs on them. How did that happen? How did tech become “five giant websites filled with screenshots from the other four?”
https://twitter.com/tveastman/status/1069674780826071040
The answer lies in the fact that tech was born as antitrust was dying. Reagan hit the campaign trail the same year the Apple ][+ hit shelves. With every presidency since, tech has grown more powerful and antitrust has grown weaker (the Biden administration has halted this decay, but it must repair 40 years’ worth of sabotage).
This allowed tech to “merge to monopoly.” Google built a single successful product — a search engine — and then conquered the web by buying other people’s companies, even as its own internal product development process produced a nearly unbroken string of flops. Apple buys 90 companies a year — Tim Cook brings home a new company more often than you bring home a bag of groceries:
https://www.theverge.com/2019/5/6/18531570/apple-company-purchases-startups-tim-cook-buy-rate
When Facebook was threatened by an upstart called Instagram, Mark Zuckerberg sent a middle-of-the-night email to his CFO defending his plan to pay $1b for the then-tiny company, insisting that the only way to secure eternal dominance was to eliminate competitors — by buying them out, not by being better than them. As Zuckerberg says, “It is better to buy than compete”:
https://www.theverge.com/2020/7/29/21345723/facebook-instagram-documents-emails-mark-zuckerberg-kevin-systrom-hearing
As tech consolidated into a cozy oligopoly whose execs hopped from one company to another, they rigged the game. They colluded on a criminal “no-poach” deal to suppress their workers’ wages:
https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_Litigation
And they colluded to illegally rig the ad-market:
https://en.wikipedia.org/wiki/Jedi_Blue
This collusion is the inevitable result of market concentration. 100 squabbling tech companies will be at each other’s throats, unable to agree on catering for their annual meeting, much less a common lobbying agenda. But boil those companies down to a bare handful and they’ll quickly converge on a single hymn and twine their voices in eerie harmony:
https://pluralistic.net/2023/03/16/compulsive-cheaters/#rigged
Eliminating antitrust enforcement — letting companies buy and merge with competitors, permitting predatory pricing and other exclusionary tactics — was the first step towards unsustainable fire suppression. But, as on the California wildland-urban interface, this measure quickly gave way to ever-more-extreme ones as the fire debt mounted.
Tech’s oligarchs have spent decades both suppressing laws that would limit their extractive profits (there’s a reason there’s no US federal privacy law!), and, crucially, getting new laws made to stop anyone from “disrupting” them as they disrupted their forebears.
Today, a thicket of laws and rules — patent, copyright, anti-circumvention, tortious interference, trade secrecy, noncompete, etc. — has been fashioned into a legal superweapon that tech companies can use to control the conduct of their competitors, critics and customers, and prevent them from making or using interoperable tools to reduce their switching costs and leave their walled gardens:
https://locusmag.com/2020/09/cory-doctorow-ip/
Today, these laws are being bolstered with new ones that make it even more difficult for users to leave the platforms. These new laws purport to protect users from each other, but they leave them even more at the platforms’ mercy.
So we get rules requiring platforms to spy on their users in the name of preventing harassment, rather than laws requiring platforms to stand up APIs that let users leave the platform and seek out a new online home that values their wellbeing:
https://cyber.fsi.stanford.edu/publication/lawful-awful-control-over-legal-speech-platforms-governments-and-internet-users
We get laws requiring platforms to “balance” the ideology of their content moderation:
https://www.texastribune.org/2022/09/16/texas-social-media-law/
But not laws that require platforms to make it easy to seek out a new server whose moderation policies are more hospitable to your ideas:
https://www.eff.org/deeplinks/2021/07/right-or-left-you-should-be-worried-about-big-tech-censorship
The platforms insist — with some justification — that we can’t ask them to both control their users and give their users more freedom. If we want a platform to detect and block “bad content,” we can’t also require the platform to let third party interoperators plug into the system and exchange messages with it.
They’re right — but that doesn’t mean we should defend them. The problem with the platforms isn’t merely that they’re bad at defending their users’ interests. The problem is that they can’t defend those interests. Mark Zuckerberg isn’t merely monumentally, personally unsuited to serving as the unelected, unaccountable social media czar for billions of people in hundreds of countries, speaking thousands of languages. No one should have that job.
We don’t need a better Mark Zuckerberg. We need no Mark Zuckerbergs. We don’t need to perfect Zuck — we need to abolish Zuck.
Rather than pouring our resources into making life in the smoldering wildland-urban interface safe, we should help people leave that combustible zone, with policies that make migration easy.
This month, we got an example of just how easy that migration could be. Meta launched Threads, a social media platform that used your list of Instagram followers and followees to get you set up. Those low switching costs made it easy for Instagram users to become Threads users — and the network effects meant it happened fast, with 30m signups in the first morning:
https://www.techdirt.com/2023/07/06/meta-launches-threads-and-its-important-for-reasons-that-most-people-wont-care-about/
Meta says it was able to do this because it owns both Insta and Threads. But Meta doesn’t own the list of accounts that you trust and value enough to follow, or the people who feel the same way about you. That’s yours. We could and should force Meta to let you have it.
But that’s not enough. Meta claims that it will someday integrate Threads into the Fediverse, the collection of services based on the ActivityPub standard, whose most popular app is Mastodon. On Mastodon, you not only get to export your list of followers and followees with one click, but you can import those followers and followees to a new server with one click.
Threads looks incredibly stupid, a “Twitter alternative you would order from Brookstone,” but there are already tens of millions of people establishing relationships with each other there:
https://jogblog.substack.com/p/facebooks-threads-is-so-depressing
When they get tired of “brand-safe vaporposting,” they’ll have to either give up those relationships, or resign themselves to being trapped inside another walled-garden-cum-prison operated by a mediocre tech warlord:
https://www.garbageday.email/p/the-algorithmic-anti-culture-of-scale
But what if, instead of trying to force Zuck to be a better emperor-for-life, we passed rules requiring him to let his subjects flee his tyrannical reign? We could require Threads to stand up a Fediverse gateway that let users leave the service and set up on any other Fediverse servers (we could apply this rule to all Fediverse servers, preventing petty dictators from tormenting their users, too):
https://www.eff.org/deeplinks/2023/04/platforms-decay-lets-put-users-first
Zuck founded an empire of oily rags, and so of course it’s always on fire. We can’t make it safe to stay, but we can make it easy to leave:
https://locusmag.com/2018/07/cory-doctorow-zucks-empire-of-oily-rags/
This is the thing platforms fear the most. Network effects work in both directions: if your service grows quickly because people value one another, then it will shrink quickly when the people your users care about leave. As @zephoria-blog​ recounts, this is what happened when Myspace imploded:
http://www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
When I started seeing the disappearance of emotionally sticky nodes, I reached out to members of the MySpace team to share my concerns and they told me that their numbers looked fine. Active uniques were high, the amount of time people spent on the site was continuing to grow, and new accounts were being created at a rate faster than accounts were being closed. I shook my head; I didn’t think that was enough. A few months later, the site started to unravel.
Platforms collapse “slowly, then all at once.” The only way to prevent sudden platform collapse syndrome is to block interoperability so users can’t escape the harms of your walled garden without giving up the benefits they give to each other.
We should stop trying to make the platforms good. We should make them gone. We should restore the “good fire” that ended with the growth of financialized Big Tech empires. We should aim for soft landings for users, and stop pretending that there’s any safe way to live in the fire zone.
We should let the platforms burn.
Here’s the podcast:
https://craphound.com/news/2023/07/16/let-the-platforms-burn-the-opposite-of-good-fires-is-wildfires/
And here’s a direct link to the MP3 (hosting courtesy of the @internetarchive​; they’ll host your stuff for free, forever):
https://archive.org/download/Cory_Doctorow_Podcast_446/Cory_Doctorow_Podcast_446_-_Let_the_Platforms_Burn.mp3
And here’s my podcast feed:
https://feeds.feedburner.com/doctorow_podcast
Tonight (July 18), I’m hosting the first Clarion Summer Write-In Series, an hour-long, free drop-in group writing and discussion session. It’s in support of the Clarion SF/F writing workshop’s fundraiser to offer tuition support to students:
https://mailchi.mp/theclarionfoundation/clarion-write-ins
[Image ID: A forest wildfire. Peeking through the darks in the stark image are hints of the green Matrix "waterfall" effect.]
Image: Cameron Strandberg (modified) https://commons.wikimedia.org/wiki/File:Fire-Forest.jpg
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/deed.en
143 notes · View notes
socialjusticefail · 1 year
Text
Let's see what will happen once this posts from the queue.
0 notes
april · 1 year
Text
"i'm not paying tumblr because its moderation is bad!" -> site does not make money -> content moderators are not hired -> moderation continues to be understaffed -> repeat
783 notes · View notes
lisafication · 11 months
Text
There's a reason I don't work for OTW, y'know.
I'm seeing a lot of people be all 'then let the archive be destroyed!' in response to my ramble and... yeah, perfectly valid position tbh. Even aside from moderation concerns, I think that's a perfectly ideologically consistent position to hold and I can make some good arguments for it — for example, I don't think the Archive of Our Own acting as The One True Fandom Nexus is a particularly healthy thing!
I would personally not fall on that side of things, but like... if I thought AO3 was The Best Way To Do Things I would be volunteering for the OTW, rather than serving as an administrator on Sufficient Velocity (which is a site that does do content moderation, including removal of works we identify as discriminatory. Not perfectly, by any means, but nothing ever is).
10 notes · View notes
milfbro · 1 year
Text
btw Brazil might get a new law that penalizes large platforms that spread fake news and fail to moderate content, plus it guarantees people copyright over the content they produce
I don't know what this is gonna do for the internet but. Let's make this into an experiment, shall we
EDIT: This is only for large platforms btw. Twitter, Instagram, TikTok and the like. I'm kind of excited; if this goes through I wanna know what's gonna happen
6 notes · View notes
wytchcore · 10 months
Text
i dont actually want to talk about it but i think you can critique a/3 without saying a repository of art should be deleted and i think you can have that opinion without being a porn addict
1 note · View note
Text
Meta has engaged in a “systemic and global” censorship of pro-Palestinian content since the outbreak of the Israel-Gaza war on 7 October, according to a new report from Human Rights Watch (HRW).
In a scathing 51-page report, the organization documented and reviewed more than a thousand reported instances of Meta removing content and suspending or permanently banning accounts on Facebook and Instagram.
The company exhibited “six key patterns of undue censorship” of content in support of Palestine and Palestinians, including the taking down of posts, stories and comments; disabling accounts; restricting users’ ability to interact with others’ posts; and “shadow banning”, where the visibility and reach of a person’s material is significantly reduced, according to HRW.
Examples it cites include content originating from more than 60 countries, mostly in English, and all in “peaceful support of Palestine, expressed in diverse ways”. Even HRW’s own posts seeking examples of online censorship were flagged as spam, the report said.
“Censorship of content related to Palestine on Instagram and Facebook is systemic and global [and] Meta’s inconsistent enforcement of its own policies led to the erroneous removal of content about Palestine,” the group said in the report, citing “erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals” as the roots of the problem.
[...]
Users of Meta’s products have documented what they say is technological bias in favor of pro-Israel content and against pro-Palestinian posts. Instagram’s translation software rendered “Palestinian” followed by the Arabic phrase “Praise be to Allah” as “Palestinian terrorists” in English. WhatsApp’s AI, when asked to generate images of Palestinian boys and girls, created cartoon children with guns, whereas its images of Israeli children did not include firearms.
5K notes · View notes
garbageday · 1 year
Text
The best way to view what Elon Musk and his weird, sad friends are doing to Twitter right now is as a sort of large-scale libertarian dismantling of it. They want to bring back once-banned accounts and remove any sort of guardrails or moderation in the name of free speech. And, just like how this plays out irl, far-right extremists are taking this opportunity to target journalists and activists, hoping to kick them off the platform. Well-known antifascist Chad Loder was suspended already along with a handful of other antifascist accounts. And right-wing Telegram accounts have created a list of 5,000 other Twitter accounts targeted for a large-scale mass reporting campaign. Though, I don’t think this mass reporting project will work because 5,000 accounts is just way too many to try and focus on.
Like that town that elected a bunch of libertarians and was then overrun by bears, the foolish desire of having a completely unmoderated public space means that lots of average users will fall through the cracks. At this point, you should assume if you lose access to your account for whatever reason, that’s it. Game over. It’s not an accident that Judicial Watch president Tom Fitton is talking to Musk right now about enabling “freedom” by giving users the “tools” to moderate Twitter as they see fit. These guys want to turn Twitter into New Hampshire.
On Thanksgiving, Musk tweeted that users should tweet at him personally if they see exploitative content involving minors on the site. Which is an outrageously stupid idea and one that could only come from a man so desperate to dogwhistle right-wingers obsessed with anti-LGBTQ “groomer” rhetoric right now that he forgot that child sexual exploitation material is basically the number one issue for user generated content platforms and it is assuredly increasing on Twitter right now. And, making things worse, Twitter is really the only mainstream social app left on the iOS App Store that’s allowed to have NSFW content on it. So if Musk keeps going, I think the first thing to happen is they lose that privilege. But I think it’s just as possible Apple decides it’s not worth all the drama and pulls the plug on the app entirely. Which is basically how all large-scale libertarian experiments on the internet tend to end, with some other authority stepping in to shut it down because the whole thing is filling up with nazis and pedophiles.
[Read more at Garbage Day]
2K notes · View notes
canmom · 6 months
Text
Hypothetical Decentralised Social Media Protocol Stack
if we were to dream up the Next Social Media from first principles we face three problems. one is scaling hosting, the second is discovery/aggregation, the third is moderation.
hosting
hosting for millions of users is very very expensive. you have to have a network of datacentres around the world and mechanisms to sync the data between them. you probably use something like AWS, and they will charge you an eye-watering amount of money for it. since it's so expensive, there's no way to break even except by either charging users to access your service (which people generally hate to do) or selling ads, the ability to intrude on their attention to the highest bidder (which people also hate, and go out of their way to filter out). unless you have a lot of money to burn, this is a major barrier.
the traditional internet hosts everything on different servers, and you use addresses that point you to that server. the problem with this is that it responds poorly to sudden spikes in attention. if you self-host your blog, you can get DDOSed entirely by accident. you can use a service like cloudflare to protect you but that's $$$. you can host a blog on a service like wordpress, or a static site on a service like Github Pages or Neocities, often for free, but that broadly limits interaction to people leaving comments on your blog and doesn't have the off-the-cuff passing-thought sort of interaction that social media does.
the middle ground is forums, which used to be the primary form of social interaction before social media eclipsed them, typically running on one or a few servers with a database + frontend. these are viable enough, often they can be run with fairly minimal ads or by user subscriptions (the SomethingAwful model), but they can't scale indefinitely, and each one is a separate bubble. mastodon is a semi-return to this model, with the addition of a means to use your account on one bubble to interact with another ('federation').
the issue with everything so far is that it's an all-eggs-in-one-basket approach. you depend on the forum, instance, or service paying its bills to stay up. if it goes down, it's just gone. and database-backend models often interact poorly with the internet archive's scraping, so huge chunks won't be preserved.
scaling hosting could theoretically be solved by a model like torrents or IPFS, in which every user becomes a 'server' for all the posts they download, and you look up files using hashes of the content. if a post gets popular, it also gets better seeded! an issue with that design is archival: there is no guarantee that stuff will stay on the network, so if nobody is downloading a post, it is likely to get flushed out by newer stuff. it's like link rot, but it happens automatically.
IPFS solves this by 'pinning': you order an IPFS node (e.g. your server) not to flush a certain file so it will always be available from at least one source. they've sadly mixed this up in cryptocurrency, with 'pinning services' which will take payment in crypto to pin your data. my distaste for a technology designed around red queen races aside, I don't know how pinning costs compare to regular hosting costs.
theoretically you could build a social network on a backbone of content-based addressing. it would come with some drawbacks (posts would be immutable, unless you use some indirection to a traditional address-based hosting) but i think you could make it work (a mix of location-based addressing for low-bandwidth stuff like text, and content-based addressing for inline media). in fact, IPFS has the ability to mix in a bit of address-based lookup into its content-based approach, used for hosting blogs and the like.
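as a rough sketch of what that hybrid addressing could look like, here's a toy python version (illustrative only: real IPFS uses multihash CIDs and merkle DAGs rather than a bare hash, and the author URL below is made up):

```python
# content-based addressing in miniature: a blob's "address" is a hash of
# its bytes, so any peer holding the blob can serve it, and anyone holding
# the address can verify what they were given.
import hashlib
import json

def content_address(blob: bytes) -> str:
    # illustrative only: real systems chunk files and use multihash CIDs
    return "sha256-" + hashlib.sha256(blob).hexdigest()

image_bytes = b"<raw image data goes here>"

post = {
    # location-based address: cheap, mutable, fine for small text
    "author": "https://example.net/users/alice",   # made-up address
    "text": "controlled burns are good, actually",
    # content-based addresses: immutable, seedable by any node that has them
    "attachments": [content_address(image_bytes)],
}

# any peer can hold the blob; the address lets you check you got the real thing
blob_store = {content_address(image_bytes): image_bytes}

def fetch(addr: str, store: dict) -> bytes:
    data = store[addr]
    assert content_address(data) == addr, "content doesn't match its address"
    return data

print(json.dumps(post, indent=2))
print(len(fetch(post["attachments"][0], blob_store)), "bytes verified")
```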
as for videos - well, BitTorrent is great for distributing video files. though I don't know how well that scales to something like Youtube. you'd need a lot of hard drive space to handle the amount of Youtube that people typically watch and continue seeding it.
aggregation/discovery
the next problem is aggregation/discovery. social media sites approach this problem in various ways. early social media sites like LiveJournal had a somewhat newsgroup-like approach, you'd join a 'community' and people would post stuff to that community. this got replaced by the subscription model of sites like Twitter and Tumblr, where every user is simultaneously an author and a curator, and you subscribe to someone to see what posts they want to share.
this in turn got replaced by neural network-driven algorithms which attempt to guess what you'll want to see and show you stuff that's popular with whatever it thinks your demographic is. that's gotta go, or at least not be an intrinsic part of the social network anymore.
it would be easy enough to replicate the 'subscribe to see someone's recommended stuff' model, you just need a protocol for pointing people at stuff. (getting analytics such as like/reblog counts would be more difficult!) it would probably look similar to RSS feeds: you upload a list of suitably formatted data, and programs which speak that protocol can download it.
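a sketch of what one of those promotion feeds might look like, with every field name invented for illustration (this isn't an existing standard):

```python
# a made-up "promotion feed" format: just a list of entries pointing at
# posts by address, which a client can download and merge into a timeline.
import json

alices_feed = {
    "actor": "alice",   # stand-in for whatever identity scheme the network uses
    "entries": [
        {"published": "2023-07-18T10:00:00Z",
         "address": "sha256-9f2c0a17",               # a content-addressed post of her own
         "note": "my post about controlled burns"},
        {"published": "2023-07-17T21:30:00Z",
         "address": "https://example.net/bob/42"},   # a location-addressed post she's boosting
    ],
}

feed_document = json.dumps(alices_feed, indent=2)    # the thing you'd actually host or seed

def merge_feeds(feeds):
    # client-side aggregation: flatten every subscribed feed into one
    # reverse-chronological timeline, no server-side algorithm involved
    entries = [e for f in feeds for e in f["entries"]]
    return sorted(entries, key=lambda e: e["published"], reverse=True)

timeline = merge_feeds([alices_feed])
print(feed_document)
print(len(timeline), "entries in the timeline")
```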
the problem of discovery - ways to find strangers who are interested in the same stuff you are - is more tricky. if we're trying to design this as a fully decentralised, censorship-resistant network, we face the spam problem. any means you use to broadcast 'hi, i exist and i like to talk about this thing, come interact with me' can be subverted by spammers. either you restrict yourself entirely to spreading across a network of curated recommendations, or you have to have moderation.
moderation
moderation is one of the hardest problems of social networks as they currently exist. it's both a problem of spam (the posts that users want to see getting swamped by porn bots or whatever) and legality (they're obliged to remove child porn, beheading videos and the like). the usual solution is a combination of AI shit - does the robot think this looks like a naked person - and outsourcing it to poorly paid workers in (typically) African countries, whose job is to look at reports of the most traumatic shit humans can come up with all day and confirm whether it's bad or not.
for our purposes, the hypothetical decentralised network is a protocol to help computers find stuff, not a platform. we can't control how people use it, and if we're not hosting any of the bad shit, it's not on us. but spam moderation is a problem any time that people can insert content you did not request into your feed.
possibly this is where you could have something like Mastodon instances, with their own moderation rules, but crucially, which don't host the content they aggregate. so instead of having 'an account on an instance', you have a stable address on the network, and you submit it to various directories so people can find you. by keeping each one limited in scale, it makes moderation more feasible. this is basically Reddit's model: you have topic-based hubs which people can subscribe to, and submit stuff to.
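to make that concrete, a directory in this model is basically just a moderated list of pointers. something like this toy sketch, again entirely made up:

```python
# a topic directory that lists addresses submitted to it and applies its own
# local moderation rules, but never hosts the content those addresses point to.
class Directory:
    def __init__(self, topic, banned_submitters=()):
        self.topic = topic
        self.banned = set(banned_submitters)   # this hub's own moderation policy
        self.listings = []                     # pointers only, never content

    def submit(self, address, submitter):
        if submitter in self.banned:
            return False                       # rejected here, but the post still exists on the network
        self.listings.append({"address": address, "submitter": submitter})
        return True

    def delist(self, address):
        # local moderation: removing a listing here deletes nothing anywhere else
        self.listings = [l for l in self.listings if l["address"] != address]

fire_ecology = Directory("fire-ecology", banned_submitters={"spambot-1234"})
fire_ecology.submit("sha256-9f2c0a17", submitter="alice")
fire_ecology.submit("sha256-77aa55ee", submitter="spambot-1234")   # quietly dropped
print([l["address"] for l in fire_ecology.listings])
```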
the other moderation issue is that there is no mechanism in this design to protect from mass harassment. if someone put you on the K*w*f*rms List of Degenerate Trannies To Suicidebait, there'd be fuck all you can do except refuse to receive contact from strangers. though... that's kind of already true of the internet as it stands. nobody has solved this problem.
to sum up
primarily static sites 'hosted' partly or fully on IPFS and BitTorrent
a protocol for sharing content you want to promote, similar to RSS, that you can aggregate into a 'feed'
directories you can submit posts to which handle their own moderation
no ads, nobody makes money off this
honestly, the biggest problem with all this is mostly just... getting it going in the first place. because let's be real, who but tech nerds is going to use a system that requires you to understand fuckin IPFS? until it's already up and running, this idea's got about as much hope as getting people to sign each others' GPG keys. it would have to have the sharp edges sanded down, so it's as easy to get on the Hypothetical Decentralised Social Network Protocol Stack as it is to register an account on tumblr.
but running over it like this... I don't think it's actually impossible in principle. a lot of the technical hurdles have already been solved. and that's what I want the Next Place to look like.
247 notes · View notes
tailschannel · 11 months
Text
/r/Sonic goes dark in protest of Reddit's controversial API changes
Fans will not be able to view the unofficial Sonic the Hedgehog community on Reddit due to a massive site-wide protest.
/r/SonicTheHedgehog and thousands of Reddit communities went dark on Monday in protest against the platform's planned changes to their app programming interface, or API.
Why is this happening?
Reddit's interface allowed third-party services and the platform to communicate with each other. Protestors claimed that the site's plan to monetize access would threaten to shut down off-site applications, drastically affect moderator tools, limit accessibility-focused programs, and alter the ability to impartially access content.
One popular iOS client already announced plans to shut down, as it would cost around $20 million USD per year if it were to continue under the new agreement.
Reddit CEO Steve Huffman defended the changes in an ask-me-anything thread, and claimed that the platform "can no longer subsidize commercial entities that require large-scale data use."
The changes were similarly compared to Twitter's controversial plan to cut off third-party access to its interface in early 2023, which saw many long-standing third-party clients forced to shut down.
The response
As a result, more than 6,000 subreddits elected to go dark in protest of the changes. Users either saw subreddits in a "read-only" state, which meant that there was no ability to create new comments or posts; or in a "private" state, which meant they could not be viewed publicly at all.
The protests were generally expected to last a minimum of 48 hours; however, a seemingly defensive rebuke from Reddit prompted numerous communities to extend their outage to an indefinite period.
Affecting you
The Sonic the Hedgehog subreddit went dark just before midnight Eastern time on Monday.
Tails' Channel understands that the blackout will last for a minimum of 48 hours, but it could extend to an undetermined period of time "based on the general climate" of the situation, according to the subreddit's head moderator, @andtails.
"Reddit's entire business model is predicated on the use of free labor to run their communities, so the only way we send a powerful message to their higher ups that what they're doing isn't okay is to strike," said the moderator in a public post.
The statement continued, "even if you are not directly impacted by Reddit's corporate decision-making now, their decision to charge for API access may set a precedence for future corporate decisions down the line that will increasingly hurt the user experience on this platform."
Subreddit users were encouraged to "take a break from Reddit" and submit complaints during the indefinite outage, as the mod argued that the planned changes will fundamentally hurt the platform.
333 notes · View notes
transmutationisms · 2 months
Text
the fact that most content rules/guidelines online are de facto unenforceable at scale (because this would require a massive amount of human moderator labour, and would also kill a significant amount of actual website usage on most socmed platforms) means that when a site does something like delete or ban a user, and they want to justify that move post hoc, they can pretty much always just dig through that person's history and be guaranteed to find something that they can construe as dangerous/prohibited, even though it's invariably the kind of thing that lots of people post all the time. lazy fucking CYA
75 notes · View notes
Text
You know what the stupidest part about Musk wanting to reinstate Trump’s Twitter account is? It actually violates Musk's stated goal for Twitter: to make a place where there are transparent rules that are fairly applied.
The problem with Trump's social media bans is that they landed after he had repeatedly, flagrantly flouted the rules that the platforms used to kick off *lots* of other people, of all political persuasions.
Musk's whole (notional) deal is: "set some good rules up and apply them fairly." There are some hard problems in that seemingly simple proposition, but "I would let powerful people break the rules with impunity" is totally, utterly antithetical to that proposition.
Of course, Musk's idea of the simplicity of setting up good rules and applying them fairly is also stupid, not because these aren't noble goals but because attaining them at scale creates intractable, well-documented, well-understood problems.
I remember reading a book in elementary school (maybe "Mr Popper's Penguins"?) in which a person calls up the TV meteorologist demanding to know what the weather will be like in ten years. The meteorologist says that's impossible to determine.
This person is indignant. Meteorologists can predict tomorrow's weather! Just do whatever you do to get tomorrow's weather again, and you'll get the next day's weather. Do it ten times, you'll have the weather in 10 days. Do it 3,650 times and you'll have the weather in 10 years.
Musk - and other "good rules, fairly applied" people - think that all you need to do is take the rules you use to keep the conversation going at a dinner party and do them over and over again, and you can have a good, 100,000,000 person conversation.
There are lots of ways to improve the dumpster fire of content moderation at scale. Like, we could use interoperability and other competition remedies to devolve moderation to smaller communities - IOW, just stop trying to scale moderation.
https://www.eff.org/deeplinks/2021/07/right-or-left-you-should-be-worried-about-big-tech-censorship
And/or we could adopt the Santa Clara Principles, developed by human rights and free expression advocates as a consensus position on balancing speech, safety and due process:
https://santaclaraprinciples.org/
And if we *must* have systemwide content-moderation, we could take a *systemic* approach to it, rather than focusing on individual cases:
https://pluralistic.net/2022/03/12/move-slow-and-fix-things/#second-wave
I disagree with Musk about most things, but he is right about some things. Content moderation is terrible. End-to-end encryption for direct messages is good.
But this "I'd reinstate Trump" nonsense?
Not only do *I* disagree with that, but so does Musk.
Allegedly.
74 notes · View notes
mariacallous · 13 days
Text
In the hours after Iran announced its drone and missile attack on Israel on April 13, fake and misleading posts went viral almost immediately on X. The Institute for Strategic Dialogue (ISD), a nonprofit think tank, found a number of posts that claimed to reveal the strikes and their impact, but that instead used AI-generated videos, photos, and repurposed footage from other conflicts which showed rockets launching into the night, explosions, and even President Joe Biden in military fatigues.
Just 34 of these misleading posts received more than 37 million views, according to ISD. Many of the accounts posting the misinformation were also verified, meaning they have paid X $8 per month for the “blue tick” and that their content is amplified by the platform’s algorithm. ISD also found that several of the accounts claimed to be open source intelligence (OSINT) experts, which has, in recent years, become another way of lending legitimacy to their posts.
One X post claimed that “WW3 has officially started,” and included a video seeming to show rockets being shot into the night—except the video was actually from a YouTube video posted in 2021. Another post claimed to show the use of the Iron Dome, Israel's missile defense system, during the attack, but the video was actually from October 2023. Both these posts garnered hundreds of thousands of views in the hours after the strike was announced, and both originated from verified accounts. Iranian media also shared a video of the wildfires in Chile earlier this year, claiming it showed the aftermath of the attacks. This, too, began to circulate on X.
“The fact that so much mis- and disinformation is being spread by accounts looking for clout or financial benefit is giving cover to even more nefarious actors, including Iranian state media outlets who are passing off footage from the Chilean wildfires as damage from Iranian strikes on Israel to claim the operation as a military success,” says Isabelle Frances-Wright, director of technology and society at ISD. “The corrosion of the information landscape is undermining the ability of audiences to distinguish truth from falsehood on a terrible scale.”
X did not respond to a request for comment by time of publication.
Though misinformation around conflict and crises has long found a home on social media, X is often also used for vital real-time information. But under Elon Musk’s leadership, the company cut back on content moderation, and disinformation has thrived. In the days following the October 7 Hamas attack, X was flooded with disinformation, making it difficult for legitimate OSINT researchers to surface information. Under Musk, X has promoted a crowdsourced community notes function as a way to combat misinformation on the platform, with varying results. Some of the content identified by ISD has since received community notes, though only two posts had by the time the organization published its findings.
“During times of crisis it seems to be a repeating pattern on platforms such as X where premium accounts are inherently tainting the information ecosystem with half-truths as well as falsehoods, either through misidentified media or blatantly false imagery suggesting that an event has been caused by a certain actor or state,” says Moustafa Ayad, ISD executive director for Asia, the Middle East, and Africa. “This continues to happen and will continue to happen in the future, making it even more difficult to know what is real and what is not.”
And for those that are part of X’s subscription model and ad revenue sharing model, going viral could potentially mean making money.
Though it’s not clear that any of the users spreading fake or misleading information identified by ISD were monetizing their content, a separate report released by the Center for Countering Digital Hate (CCDH) earlier this month found that between October 7 and February 7, 10 influencers, including far-right influencer Jackson Hinkle, were able to grow their followings by posting antisemitic and Islamophobic content about the conflict. Six of the accounts CCDH examined were part of X’s subscription program, and all 10 were verified users. The high-profile influencers who are part of X’s ad revenue sharing program receive a cut of advertising revenue based on ”organic impressions of ads displayed in replies” to their content, according to the company.
39 notes · View notes
lisafication · 11 months
Text
For those who might happen across this, I'm an administrator for the forum 'Sufficient Velocity', a large old-school forum oriented around Creative Writing. I originally posted this on there (and any reference to 'here' will mean the forum), but I felt I might as well throw it up here, as well, even if I don't actually have any followers.
This week, I've been reading fanfiction on Archive of Our Own (AO3), a site run by the Organisation for Transformative Works (OTW), a non-profit. This isn't particularly exceptional, in and of itself — like many others on the site, I read a lot of fanfiction, both on Sufficient Velocity (SV) and elsewhere — however what was bizarre to me was encountering a new prefix on certain works, that of 'End OTW Racism'. While I'm sure a number of people were already familiar with this, I was not, so I looked into it.
What I found... wasn't great. And I don't think anyone involved realises that.
To summarise the details, the #EndOTWRacism campaign, whose manifesto you may find here, is a campaign oriented towards seeing hateful or discriminatory works removed from AO3 — and believe me, there is a lot of it. To wit, they want the OTW to moderate them. A laudable goal, on the face of it — certainly, we do something similar on Sufficient Velocity with Rule 2 and, to be clear, nothing I say here is a critique of Rule 2 (or, indeed, Rule 6) on SV.
But it's not that simple, not when you're the size of Archive of Our Own. So, let's talk about the vagaries and little-known pitfalls of content moderation, particularly as it applies to digital fiction and at scale. Let's dig into some of the details — as far as credentials go, I have, unfortunately, been in moderation and/or administration on SV for about six years and this is something we have to grapple with regularly, so I would like to say I can speak with some degree of expertise on the subject.
So, what are the problems with moderating bad works from a site? Let's start with discovery— that is to say, how you find rule-breaching works in the first place. There are more-or-less two different ways to approach manual content moderation of open submissions on a digital platform: review-based and report-based (you could also call them curation-based and flag-based), with various combinations of the two. Automated content moderation isn't something I'm going to cover here — I feel I can safely assume I'm preaching to the choir when I say it's a bad idea, and if I'm not, I'll just note that the least absurd outcome we had when simulating AI moderation (mostly for the sake of an academic exercise) on SV was banning all the staff.
In a review-based system, you check someone's work and approve it to the site upon verifying that it doesn't breach your content rules. Generally pretty simple, we used to do something like it on request. Unfortunately, if you do that, it can void your safe harbour protections in the US per Myeress vs. Buzzfeed Inc. This case, if you weren't aware, is why we stopped offering content review on SV. Suffice to say, it's not really a realistic option for anyone large enough for the courts to notice, and extremely clunky and unpleasant for the users, to boot.
Report-based systems, on the other hand, are something we use today — users find works they think are in breach and alert the moderation team to their presence with a report. On SV, this works pretty well — a user or users flag a work as potentially troublesome, moderation investigate it and either action it or reject the report. Unfortunately, AO3 is not SV. I'll get into the details of that dreadful beast known as scaling later, but thankfully we do have a much better comparison point — fanfiction.net (FFN).
FFN has had two great purges over the years, with a... mixed amount of content moderation applied in between: one in 2002 when the NC-17 rating was removed, and one in 2012. Both, ostensibly, were targeted at adult content. In practice, many fics that wouldn't raise an eye on Spacebattles today or Sufficient Velocity prior to 2018 were also removed; a number of reports suggest that something as simple as having a swearword in your title or summary was enough to get you hit, even if you were a 'T' rated work. Most disturbingly of all, there are a number of — impossible to substantiate — accounts of groups such as the infamous Critics United 'mass reporting' works to trigger a strike to get them removed. I would suggest reading further on places like Fanlore if you are unfamiliar and want to know more.
Despite its flaws however, report-based moderation is more-or-less the only option, and this segues neatly into the next piece of the puzzle that is content moderation, that is to say, the rubric. How do you decide what is, and what isn't against the rules of your site?
Anyone who's complained to the staff about how vague the rules are on SV may have had this explained to them, but as that is likely not many of you, I'll summarise: the more precise and clear-cut your chosen rubric is, the more it will inevitably need to resemble a legal document — and the less readable it is to the layman. We'll return to SV for an example here: many newer users will not be aware of this, but SV used to have a much more 'line by line, clearly delineated' set of rules and... people kind of hated it! An infraction would reference 'Community Compact III.15.5' rather than Rule 3, because it was more or less written in the same manner as the Terms of Service (sans the legal terms of art). While it was a more legible rubric from a certain perspective, from the perspective of communicating expectations to the users it was inferior to our current set of rules — even fewer of them read it, and we don't have great uptake right now.
And it still wasn't really an improvement over our current set-up when it comes to 'moderation consistency'. Even without getting into the nuts and bolts of "how do you define a racist work in a way that does not, at any point, say words to the effect of 'I know it when I see it'" — which is itself very, very difficult don't get me wrong I'm not dismissing this — you are stuck with finding an appropriate footing between a spectrum of 'the US penal code' and 'don't be a dick' as your rubric. Going for the penal code side doesn't help nearly as much as you might expect with moderation consistency, either — no matter what, you will never have a 100% correct call rate. You have the impossible task of writing a rubric that is easy for users to comprehend, extremely clear for moderation and capable of cleanly defining what is and what isn't racist without relying on moderator judgement, something which you cannot trust when operating at scale.
Speaking of scale, it's time to move on to the third prong — and the last covered in this ramble, which is more of a brief overview than anything truly in-depth — which is resources. Moderation is not a magic wand, you can't conjure it out of nowhere: you need to spend an enormous amount of time, effort and money on building, training and equipping a moderation staff, even a volunteer one, and it is far, far from an instant process. Our most recent tranche of moderators spent several months in training and it will likely be some months more before they're fully comfortable in the role — and that's with a relatively robust bureaucracy and a number of highly experienced mentors supporting them, something that is not going to be available to a new moderation branch with little to no experience. Beyond that, there's the matter of sheer numbers.
Combining both moderation and arbitration — because for volunteer staff, pure moderation is in actuality less efficient in my eyes, for a variety of reasons beyond the scope of this post, but we'll treat it as if they're both just 'moderators' — SV presently has 34 dedicated moderation volunteers. SV hosts ~785 million words of creative writing.
AO3 hosts ~32 billion.
These are some very rough and simplified figures, but if you completely ignore all the usual problems of scaling manpower in a business (or pseudo-business), such as (but not limited to) geometrically increasing bureaucratic complexity and administrative burden, along with all the particular issues of volunteer moderation... AO3 would still need well over one thousand volunteer moderators to be able to match SV's moderator-to-creative-wordcount ratio.
Paid moderation, of course, you can get away with less — my estimate is that you could fully moderate SV with, at best, ~8 full-time moderators, still ignoring administrative burden above the level of team leader. This leaves AO3 only needing a much more modest ~350 moderators. At the US minimum wage of ~$15k p.a. — which is, in my eyes, deeply unethical to pay moderators as full-time moderation is an intensely gruelling role with extremely high rates of PTSD and other stress-related conditions — that is approximately ~$5.25m p.a. costs on moderator wages. Their average annual budget is a bit over $500k.
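Laying that back-of-the-envelope arithmetic out explicitly, using the same rough figures as above (estimates only, nothing here is an audited number):

```python
# the scaling arithmetic from the paragraphs above, using the same rough figures
sv_words  = 785e6      # ~785 million words of creative writing hosted on SV
ao3_words = 32e9       # ~32 billion words hosted on AO3
scale = ao3_words / sv_words                     # ≈ 40.8x

volunteer_mods_sv = 34                           # SV's current moderation volunteers
print(round(scale * volunteer_mods_sv))          # ≈ 1386 volunteers to match SV's ratio

paid_mods_sv = 8                                 # my estimate for fully paid SV moderation
paid_mods_ao3 = round(scale * paid_mods_sv)      # ≈ 326, call it ~350 with overhead
annual_wage = 15_000                             # ~US minimum wage, full time
print(350 * annual_wage)                         # ≈ $5,250,000/year vs a ~$500k annual budget
```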
So, that's obviously not on the table, and we return to volunteer staffing. Which... let's examine that scenario and the questions it leaves us with, as our conclusion.
Let's say, through some miracle, AO3 succeeds in finding those hundreds and hundreds and hundreds of volunteer moderators. We'll even say none of them are malicious actors or sufficiently incompetent as to be indistinguishable, and that they manage to replicate something on the level of or superior to our moderation tooling near-instantly at no cost. We still have several questions to be answered:
How are you maintaining consistency? Have you managed to define racism to the point that moderator judgment no longer enters the equation? And to be clear, you cannot allow moderator judgment to be a significant decision maker at this scale, or you will end with absurd results.
How are you handling staff mental health? Some reading on the matter, to save me a lengthy and unrelated explanation of some of the steps involved in ensuring mental health for commercial-scale content moderators.
How are you handling your failures? No moderation in the world has ever succeeded in a 100% accuracy rate, what are you doing about that?
Using report-based discovery, how are you preventing 'report brigading', such as the theories surrounding Critics United mentioned above? It is a natural human response to take into account the amount and severity of feedback. While SV moderators are well trained on the matter, the rare times something is receiving enough reports to potentially be classified as a 'brigade' on that scale will nearly always be escalated to administration, something completely infeasible at (you're learning to hate this word, I'm sure) scale.
How are you communicating expectations to your user base? If you're relying on a flag-based system, your users' understanding of the rules is a critical facet of your moderation system — how have you managed to make them legible to a layman while still managing to somehow 'truly' define racism?
How are you managing over one thousand moderators? Like even beyond all the concerns with consistency, how are you keeping track of that many moving parts as a volunteer organisation without dozens or even hundreds of professional managers? I've ignored the scaling administrative burden up until now, but it has to be addressed in reality.
What are you doing to sweep through your archives? SV is more-or-less on-top of 'old' works as far as rule-breaking goes, with the occasional forgotten tidbit popping up every 18 months or so — and that's what we're extrapolating from. These thousand-plus moderators are mostly going to be addressing current or near-current content, are you going to spin up that many again to comb through the 32 billion words already posted?
I could go on for a fair bit here, but this has already stretched out to over two thousand words.
I think the people behind this movement have their hearts in the right place and the sentiment is laudable, but in practice it is simply 'won't someone think of the children' in a funny hat. It cannot be done.
Even if you could somehow meet the bare minimum thresholds, you are simply not going to manage a ruleset of sufficient clarity so as to prevent a much-worse repeat of the 2012 FF.net massacre, you are not going to be able to manage a moderation staff of that size and you are not going to be able to ensure a coherent understanding among all your users (we haven't managed that after nearly ten years and a much smaller and more engaged userbase). There's a serious number of other issues I haven't covered here as well, as this really is just an attempt at giving some insight into the sheer number of moving parts behind content moderation:  the movement wants off-site content to be policed which isn't so much its own barrel of fish as it is its own barrel of Cthulhu; AO3 is far from English-only and would in actuality need moderators for almost every language it supports — and most damning of all,  if Section 230 is wiped out by the Supreme Court  it is not unlikely that engaging in content moderation at all could simply see AO3 shut down.
As sucky as it seems, the current status quo really is the best situation possible. Sorry about that.
3K notes · View notes
hellyeahscarleteen · 9 months
Text
"The kids are all right: The bill is not a solution to the problems of social media, and in fact, will make the internet much worse for young people. If you’re a young person unsure of where to stand on the bill, here’s a short explainer. 
KOSA’s main goal—to limit access to harmful materials—is unworkable and will lead to censorship. The vague “duty of care” to prevent harms to minors will require overly broad content filtering. We know already that this sort of filtering fails, both at the platform and the user level. Platforms are notoriously bad at content moderation at scale, frequently allowing content that violates their terms of service while penalizing users who post benign content that’s misidentified as dangerous. 
Under KOSA, this sort of flawed moderation will come with legal force. Platforms will be pressured by state attorneys general seeking to make political points about what kind of information is appropriate for young people. So not only will the moderation be inaccurate, but it will sweep in a variety of content that is not harmful. Ultimately, this bill would cut off a vital avenue of access to information for vulnerable youth. Platforms will be required to block important educational content, often made by young people themselves, about how to deal with anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal thoughts. 
Lastly, KOSA would have the practical effect of enabling parental surveillance. The law would unreasonably bucket all people under seventeen into a single category, despite the widespread understanding that older minors should have greater autonomy, privacy, and access to information than younger children, and that not every parent-child dynamic is healthy or constructive. 
Since KOSA was first introduced, it’s become even clearer that online platforms impact young people of varying ages and backgrounds differently, and one-size-fits-all legislation is a bad approach to solving the ills of social media. In March, the American Psychological Association (APA) released a “health advisory” on social media use in adolescence that makes clear that “using social media is not inherently beneficial or harmful to young people.” Rather, the effects of social media depend on multiple factors—in particular, “teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.” 
KOSA has laudable goals, but it also presents significant unintended consequences that threaten the privacy, safety, and access to information rights of young people and adults alike. Teenagers already understand that this sweeping legislation is more about censorship than safety. Now we just need to make sure Congress does, as well."
112 notes · View notes
astraltrickster · 9 months
Text
I feel like we're dealing with a bit of a catch-22 here.
On the one hand, I don't want to be buying tumblr merch and premium options to REWARD the garbage decisions they're making right now, and I know enough about how upper management at tech companies operates to know that they WILL see an influx of money right now as basically saying either "ohhhh, so they LIKE these changes" - or, if they actually listen to the staff members fielding feedback, "ohhhh, so THREATENING to make the user experience worse gets us money!"
On top of which, I don't want to encourage an OVERLY friendly relationship between the company and its userbase. Tumblr may be...by FAR the best we've got at its scale, despite the fact that they literally seem to be trying to hide that fact where they're not threatening to change it outright, but they are still a company. They're still inclined to make shitty decisions and lose touch with the userbase in the interest of Company Bullshit.
On the other hand...if we DON'T try to get them to at least break even, we're going to lose the site eventually, and possibly have some REALLY heinous shit go down in its death throes. Definitely not today or tomorrow. Maybe not for many years; it's hobbled along on life support via changing hands for many years already. But it will happen. They can fake it for a significant time if there's enough demand, enough hope - tumblr's not the only one pulling it off - but a company CAN'T go on forever when it's hemorrhaging money. Money doesn't become a nonissue when it's not YOUR paycheck.
I'm sick of the illusion that the internet is an immaterial, intangible thing...except when we're criticizing mining and energy usage and basically implying it shouldn't EXIST. It's not just a fake thing that exists in our phones and computers and the LITERAL ATMOSPHERIC clouds. Servers cost money to buy or rent, even when the software running on them is a buggy mess. Staff and contractors cost money to pay, even when the skeleton crew your company has is laughably insufficient for the scope of its services - we want them to expand staff to respond to tickets and improve their moderation system faster, well, with what money?? You want these improvements made with whose man-hours?? I wholeheartedly agree with most of the userbase that this Twitter-knockoff layout and some of their other stupid ideas lately are a huge waste of the ones they're paying for, but that doesn't mean they can redirect 1,000 man-hours from an ill-advised project and magically get a 10,000 man-hour project done!
Consider the moderation system. It's bad! It's biased! We've proven this! It's also mostly automated. What are our potential solutions here?
Go back to fully manual: Puts real human people through a PTSD meat grinder. For this to be done even REMOTELY ethically demands hazard pay, short hours, and the best mental health care coverage money can buy. Where are these human moderators getting paid from, let alone if they're going to be paid fairly?
Modify the software: ...they're already trying; retraining a whole system is easier said than done, especially in the very likely event that innocuous posts taken down by report-brigading are feeding BACK into the system as "This Is What A Bad Post Looks Like." I'd love it if they could do it better and faster - but again, with what money?
Train their OWN software from the ground up: Requires EXPERT software engineers to build the framework AND a large human moderation crew in the short term to hit that "good post"/"bad post" button all day; refer to the problems with fully manual moderation. No one is quite sure how to bulletproof a moderation system against report-brigading in a way that won't ALSO deprioritize reports against content so heinous that everyone who sees it reports it. Once again - where is the money for all this labor coming from?
Every option is human labor that must be paid for. Every single possibility.
Anything else that needs doing? Fixing search? Human labor - money. Improving the bot filters to ban more bots and fewer real people? Humans have to do that - needs money!
So the money-seeking WILL continue until they're breaking even or better, or the site shuts down completely. Those are the two options. You cannot anti-capitalist Theory your way out of them. You can have your grand ideas for how things will work in a healthier, restructured economy, but that's not the point we're at. For now? Operating at a deficit = enshittification or shutdown. Those are the options. There is no third one. The level of hostility I see from some users against the very concept of tumblr BREAKING EVEN is absolutely absurd and completely detached from reality.
But what's the conclusion? Where do we go from here? Fuck, man, I have no fucking idea.
112 notes · View notes