For those who might happen across this, I'm an administrator for the forum 'Sufficient Velocity', a large old-school forum oriented around Creative Writing. I originally posted this on there (and any reference to 'here' will mean the forum), but I felt I might as well throw it up here, as well, even if I don't actually have any followers.
This week, I've been reading fanfiction on Archive of Our Own (AO3), a site run by the Organisation for Transformative Works (OTW), a non-profit. This isn't particularly exceptional, in and of itself — like many others on the site, I read a lot of fanfiction, both on Sufficient Velocity (SV) and elsewhere. What was bizarre to me, however, was encountering a new prefix on certain works: 'End OTW Racism'. While I'm sure a number of people were already familiar with this, I was not, so I looked into it.
What I found... wasn't great. And I don't think anyone involved realises that.
To summarise the details: the #EndOTWRacism campaign, whose manifesto you may find here, is a campaign oriented towards seeing hateful or discriminatory works removed from AO3 — and believe me, there is a lot of it. To wit, they want the OTW to moderate them. A laudable goal, on the face of it — certainly, we do something similar on Sufficient Velocity with Rule 2 and, to be clear, nothing I say here is a critique of Rule 2 (or, indeed, Rule 6) on SV.
But it's not that simple, not when you're the size of Archive of Our Own. So, let's talk about the vagaries and little-known pitfalls of content moderation, particularly as it applies to digital fiction and at scale. Let's dig into some of the details — as far as credentials go, I have, unfortunately, been in moderation and/or administration on SV for about six years and this is something we have to grapple with regularly, so I would like to say I can speak with some degree of expertise on the subject.
So, what are the problems with moderating bad works from a site? Let's start with discovery — that is to say, how you find rule-breaching works in the first place. There are more-or-less two different ways to approach manual content moderation of open submissions on a digital platform: review-based and report-based (you could also call them curation-based and flag-based), with various combinations of the two. Automated content moderation isn't something I'm going to cover here — I feel I can safely assume I'm preaching to the choir when I say it's a bad idea, and if I'm not, I'll just note that the least absurd outcome we had when simulating AI moderation (mostly for the sake of an academic exercise) on SV was banning all the staff.
In a review-based system, you check someone's work and approve it to the site upon verifying that it doesn't breach your content rules. Generally pretty simple; we used to do something like it on request. Unfortunately, if you do that, it can void your safe harbour protections in the US per Myeress v. BuzzFeed Inc. This case, if you weren't aware, is why we stopped offering content review on SV. Suffice to say, it's not really a realistic option for anyone large enough for the courts to notice, and extremely clunky and unpleasant for the users, to boot.
Report-based systems, on the other hand, are something we use today — users find works they think are in breach and alert the moderation team to their presence with a report. On SV, this works pretty well — a user or users flag a work as potentially troublesome, moderation investigate it and either action it or reject the report. Unfortunately, AO3 is not SV. I'll get into the details of that dreadful beast known as scaling later, but thankfully we do have a much better comparison point — fanfiction.net (FFN).
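For the sake of illustration, the report-based flow described above can be sketched as a tiny queue. All names and structure here are hypothetical, not SV's actual tooling: users flag a work, moderation investigates, and the report is either actioned or rejected.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Verdict(Enum):
    ACTIONED = auto()   # the work breaches the rules; staff take action
    REJECTED = auto()   # the report is rejected; the work stays up

@dataclass
class Report:
    work_id: int
    reporter: str
    reason: str

@dataclass
class ReportQueue:
    """Hypothetical flag-based moderation queue: users file reports,
    moderators investigate them one at a time."""
    pending: list[Report] = field(default_factory=list)

    def flag(self, report: Report) -> None:
        # A user flags a work as potentially troublesome.
        self.pending.append(report)

    def investigate(self, breaches_rules: bool) -> Verdict:
        # A moderator reviews the oldest report and rules on it.
        self.pending.pop(0)
        return Verdict.ACTIONED if breaches_rules else Verdict.REJECTED
```

The key property of this model is that moderator attention is only spent where users have already flagged something — which is exactly why it scales better than review-everything curation, and exactly why it inherits the failure modes discussed below.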
FFN has had two great purges over the years, with a... mixed amount of content moderation applied in between: one in 2002, when the NC-17 rating was removed, and one in 2012. Both, ostensibly, were targeted at adult content. In practice, many fics that wouldn't raise an eyebrow on Spacebattles today, or on Sufficient Velocity prior to 2018, were also removed; a number of reports suggest that something as simple as having a swearword in your title or summary was enough to get you hit, even on a 'T'-rated work. Most disturbingly of all, there are a number of — impossible to substantiate — accounts of groups such as the infamous Critics United 'mass reporting' works to trigger a strike and get them removed. I would suggest reading further on places like Fanlore if you are unfamiliar and want to know more.
Despite its flaws, however, report-based moderation is more or less the only option, and this segues neatly into the next piece of the puzzle that is content moderation: the rubric. How do you decide what is, and what isn't, against the rules of your site?
Anyone who's complained to the staff about how vague the rules are on SV may have had this explained to them, but as that is likely not many of you, I'll summarise: the more precise and clear-cut your chosen rubric is, the more it will inevitably need to resemble a legal document — and the less readable it is to the layman. We'll return to SV for an example here: many newer users will not be aware of this, but SV used to have a much more 'line by line, clearly delineated' set of rules and... people kind of hated it! An infraction would reference 'Community Compact III.15.5' rather than Rule 3, because it was more or less written in the same manner as the Terms of Service (sans the legal terms of art). While it was a more legible rubric from a certain perspective, from the perspective of communicating expectations to the users it was inferior to our current set of rules — even fewer of them read it, and we don't have great uptake right now.
And it still wasn't really an improvement over our current set-up when it comes to 'moderation consistency'. Even without getting into the nuts and bolts of "how do you define a racist work in a way that does not, at any point, say words to the effect of 'I know it when I see it'" — which is itself very, very difficult (don't get me wrong, I'm not dismissing this) — you are stuck with finding an appropriate footing on a spectrum between 'the US penal code' and 'don't be a dick' as your rubric. Going for the penal-code end doesn't help nearly as much as you might expect with moderation consistency, either — no matter what, you will never have a 100% correct call rate. You have the impossible task of writing a rubric that is easy for users to comprehend, extremely clear for moderation, and capable of cleanly defining what is and what isn't racist without relying on moderator judgement — something you cannot trust when operating at scale.
Speaking of scale, it's time to move on to the third prong — and the last covered in this ramble, which is more of a brief overview than anything truly in-depth — which is resources. Moderation is not a magic wand; you can't conjure it out of nowhere. You need to spend an enormous amount of time, effort and money on building, training and equipping a moderation staff, even a volunteer one, and it is far, far from an instant process. Our most recent tranche of moderators spent several months in training, and it will likely be some months more before they're fully comfortable in the role — and that's with a relatively robust bureaucracy and a number of highly experienced mentors supporting them, something that is not going to be available to a new moderation branch with little to no experience. Beyond that, there's the matter of sheer numbers.
Combining both moderation and arbitration — because for volunteer staff, pure moderation is in actuality less efficient in my eyes, for a variety of reasons beyond the scope of this post, but we'll treat it as if they're both just 'moderators' — SV presently has 34 dedicated moderation volunteers. SV hosts ~785 million words of creative writing.
AO3 hosts ~32 billion.
These are some very rough and simplified figures, but if you completely ignore all the usual problems of scaling manpower in a business (or pseudo-business), such as (but not limited to) geometrically increasing bureaucratic complexity and administrative burden, along with all the particular issues of volunteer moderation... AO3 would still need well over one thousand volunteer moderators to be able to match SV's moderator-to-creative-wordcount ratio.
With paid moderation, of course, you can get away with fewer — my estimate is that you could fully moderate SV with as few as ~8 full-time moderators, still ignoring administrative burden above the level of team leader. This leaves AO3 needing a much more modest ~350 moderators. At the US minimum wage of ~$15k p.a. — which is, in my eyes, a deeply unethical wage to pay moderators, as full-time moderation is an intensely gruelling role with extremely high rates of PTSD and other stress-related conditions — that is approximately ~$5.25m p.a. in moderator wages alone. The OTW's average annual budget is a bit over $500k.
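The back-of-envelope arithmetic in the last few paragraphs, using the post's own rough figures (these are the post's estimates, not official statistics), works out like so:

```python
# The post's rough inputs.
SV_WORDS = 785_000_000        # ~785 million words of creative writing on SV
AO3_WORDS = 32_000_000_000    # ~32 billion words hosted on AO3
SV_VOLUNTEERS = 34            # SV's dedicated moderation volunteers
SV_PAID = 8                   # estimated full-timers needed to cover SV
WAGE = 15_000                 # ~US minimum wage, $ p.a.

scale = AO3_WORDS / SV_WORDS  # AO3 is roughly 40x SV's wordcount

# Volunteers needed to match SV's moderator-to-wordcount ratio:
print(round(SV_VOLUNTEERS * scale))   # 1386 -- "well over one thousand"

# Paid moderators at the same ratio:
print(round(SV_PAID * scale))         # 326 -- rounded up to "~350" in the post

# Annual wage bill at the post's ~350 figure:
print(350 * WAGE)                     # 5250000 -- ~$5.25m p.a.
```

Against an average annual budget of a bit over $500k, the paid option is off by an order of magnitude before you even account for management, tooling, or training.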
So, that's obviously not on the table, and we return to volunteer staffing. Which... let's examine that scenario and the questions it leaves us with, as our conclusion.
Let's say, through some miracle, AO3 succeeds in finding those hundreds and hundreds and hundreds of volunteer moderators. We'll even say none of them are malicious actors or sufficiently incompetent as to be indistinguishable, and that they manage to replicate something on the level of or superior to our moderation tooling near-instantly at no cost. We still have several questions to be answered:
How are you maintaining consistency? Have you managed to define racism to the point that moderator judgment no longer enters the equation? And to be clear, you cannot allow moderator judgment to be a significant decision maker at this scale, or you will end with absurd results.
How are you handling staff mental health? Some reading on the matter, to save me a lengthy and unrelated explanation of some of the steps involved in ensuring mental health for commercial-scale content moderators.
How are you handling your failures? No moderation team in the world has ever achieved a 100% accuracy rate; what are you doing about that?
Using report-based discovery, how are you preventing 'report brigading', such as the theories surrounding Critics United mentioned above? It is a natural human response to take into account the amount and severity of feedback. While SV moderators are well trained on the matter, on the rare occasions something receives enough reports to potentially be classified as a 'brigade', it will nearly always be escalated to administration — something completely infeasible at (you're learning to hate this word, I'm sure) scale.
How are you communicating expectations to your user base? If you're relying on a flag-based system, your users' understanding of the rules is a critical facet of your moderation system — how have you managed to make them legible to a layman while still managing to somehow 'truly' define racism?
How are you managing over one thousand moderators? Even beyond all the concerns with consistency, how are you keeping track of that many moving parts as a volunteer organisation without dozens or even hundreds of professional managers? I've ignored the scaling administrative burden up until now, but it has to be addressed in reality.
What are you doing to sweep through your archives? SV is more or less on top of 'old' works as far as rule-breaking goes, with the occasional forgotten titbit popping up every 18 months or so — and that's what we're extrapolating from. These thousand-plus moderators are mostly going to be addressing current or near-current content; are you going to spin up that many again to comb through the 32 billion words already posted?
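On the brigading question above specifically, here's a minimal sketch of the kind of heuristic a flag-based system might use to decide when raw report volume should be escalated rather than trusted (the threshold, window, and names are illustrative assumptions, not any site's actual process):

```python
from collections import defaultdict

# Illustrative threshold: if this many distinct users report the same
# work within one review window, treat it as a possible brigade and
# escalate to administration rather than letting sheer report volume
# sway an individual moderator's judgement.
BRIGADE_THRESHOLD = 10

def needs_escalation(reports: list[tuple[int, str]]) -> set[int]:
    """reports: (work_id, reporter) pairs from a single review window.

    Returns the set of work IDs whose report pattern looks like a
    brigade. Duplicate reports from the same user count once, so a
    lone user spamming reports can't trigger an escalation alone.
    """
    reporters: defaultdict[int, set[str]] = defaultdict(set)
    for work_id, reporter in reports:
        reporters[work_id].add(reporter)
    return {work_id for work_id, users in reporters.items()
            if len(users) >= BRIGADE_THRESHOLD}
```

Note that this only *detects* a suspicious pattern; the hard part the post is pointing at is the human step afterwards, which does not exist at a thousand-moderator scale.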
I could go on for a fair bit here, but this has already stretched out to over two thousand words.
I think the people behind this movement have their hearts in the right place and the sentiment is laudable, but in practice it is simply 'won't someone think of the children' in a funny hat. It cannot be done.
Even if you could somehow meet the bare minimum thresholds, you are simply not going to manage a ruleset of sufficient clarity as to prevent a much worse repeat of the 2012 FF.net massacre, you are not going to be able to manage a moderation staff of that size, and you are not going to be able to ensure a coherent understanding among all your users (we haven't managed that after nearly ten years, with a much smaller and more engaged userbase). There are a serious number of other issues I haven't covered here as well, as this really is just an attempt at giving some insight into the sheer number of moving parts behind content moderation: the movement wants off-site content to be policed, which isn't so much its own barrel of fish as it is its own barrel of Cthulhu; AO3 is far from English-only and would in actuality need moderators for almost every language it supports; and, most damning of all, if Section 230 is wiped out by the Supreme Court, it is not unlikely that engaging in content moderation at all could simply see AO3 shut down.
As sucky as it seems, the current status quo really is the best situation possible. Sorry about that.
content: spoilers for JJK0! Satosugu if you squint, but it's left ambiguous enough for it to be either romantic or platonic. This is just me imagining what it'd be like if Suguru got a chance to do-over for his plan on how to save humanity from its bullshit
Reformed!Suguru who, instead of dying in the alley from his fight with Yuta Okkotsu, is saved by Satoru and teleported to Shoko for healing.
Reformed!Suguru who slips away as soon as he can, knowing that Satoru will turn a blind eye, now knowing the truth of his best friend's feelings. Satoru doesn't hate him, never did, and there are mixed emotions when he turns his back on his one and only for the second time.
Reformed!Suguru who returns to his cult, back at square one. He has no curses, so he has to absorb more. His allies have taken a hit, so he needs to find more. His funds are depleted from the Night Parade of a Hundred Demons, so he needs to make more. This should have been the plan that would've ended all suffering, eliminated the monkeys and given birth to a chosen people. It did not work.
Reformed!Suguru who spirals, again. He stays up even later at night, finds himself unable to play the kindly priest as convincingly to the monkeys who flock to him. The curses become hard to stomach again, no longer tempered by the ideals he had followed for the last decade. They were ideals that had proven themselves faulty in the last battle, and he has no use for them anymore.
Reformed!Suguru who thinks about Yuta's bond with Rika, and how they'd beaten him so entirely. He thinks about how he was defeated by love, and understanding, and trust. It's a disgustingly saccharine idea—but is there something worth pursuing in that thought?
Reformed!Suguru who begins to form a plan. He looks into his cult followers, finds those with connections to companies and schools and corporations. He does what he does best, plants thoughts into their heads, inspires undying loyalty to his cause, a new cause. They parrot his message back to their bosses and employees, none the wiser.
Reformed!Suguru who knows how to play the long game, and so he waits. He pulls strings now and then, as he watches the foundations of his idea start to take shape. There is an increase in counselors in schools, with mental health being taught as part of mandatory health education. Companies are being sued for unreasonable working hours, giving rise to a new wave of rules and regulations which set more realistic expectations. Corporations are investing heavily in endeavours and projects which contribute to the general public's satisfaction: parks and public spaces, consumption vouchers for the elderly and those in need, elaborate festivals for the holidays.
Reformed!Suguru who starts to receive fewer and fewer requests for his help. The curses that he does swallow from the people (he doesn't know when he stopped calling them monkeys) who come to him are minor at best. It gets more difficult to find strong curses to add to his repertoire. The curse users who would've balked at this development were dismissed long ago; those who have stayed know that all this means everything is heading in the right direction.
Reformed!Suguru who sees a 5% drop in Japan's overall cursed energy. This comes from a 15% drop of cursed energy in Tokyo, the place he'd chosen as the guinea pig for his plan.
Reformed!Suguru who decides it's time to find his best friend and ask for help. He shows up at Satoru's apartment, not wearing monk robes but casual clothing, a loose sweater and jeans. He's tied his hair back into a bun for old times' sake, hoping it'll score him some brownie points, make Satoru more amenable to the data in the stack of paperwork he's holding.
Reformed!Suguru who stands behind Satoru as the man fights to have Suguru's death sentence rescinded. Suguru is afraid that he's just walked into his own execution. But miraculously, the higher-ups deem the results of Suguru's implementations valuable, letting him live in exchange for his services. Satoru still pushes for Suguru's absolute freedom, threatens to Hollow Purple them and spark a mutiny. No one can defend themselves against Gojo Satoru, not really, and Suguru is a free man.
Reformed!Suguru who hears Satoru say to him, "we're the strongest," for the first time in a decade, and believes it.
Reformed!Suguru who takes up the position of counselor at Jujutsu High. The students are slow to warm up to him, both because he's a former enemy and because the thought of talking about their feelings makes them squirm. But Suguru is a patient man, and nothing if not persistent. He knows how isolating being a jujutsu sorcerer can be, went through it firsthand. He's determined to change the rhetoric around emotions within their line of work; he doesn't want this future generation to lose themselves or anyone else.
Reformed!Suguru who is the first person that Itadori Yuji meets after Satoru recruits him. He's the one who asks Yuji why he fights, picks apart the boy's mind with thoughtfulness and compassion. When Yuji tells Masamichi Yaga his reasons for being a sorcerer, the principal deems it satisfactory.
Reformed!Suguru who is convinced—yes, he thinks to himself, this is how we'll make things right.
Since I've seen several posts floating around that try to rebut the points of @end-otw-racism by saying that AO3 is too big to moderate, can we PLEASE take a moment and remember that AO3 IS ALREADY HEAVILY MODERATED.
We all know the obvious one - that if you post anything about monetizing fic, you will be moderated into oblivion. We also all know why! This is essential to AO3's survival as a non-profit! No one would propose changing this!
But a lot of people seem to forget that AO3 is moderated in other ways! For a personal example, several years ago I got a "change this or we're removing your work" e-mail. The work in question? A collection of pseudo-ficlets that were somewhere between outlines and sketchy fics. What I got moderated for? I said in the a/n that if anyone wanted to expand my kinda-outliney nonsense into fully fleshed out stories, they were welcome to do so! I was told this was a violation of the ToS, because fics that are collections of prompts aren't allowed, and that if I didn't change the a/n I'd have to delete the work. Of course I edited the a/n, and my work was allowed, and I moved on. I did, immediately, delete another work I'd posted, which WAS literally sets of prompts I'd written free to a good home. It was clearly in violation of those rules, and I just hadn't realized content like that wasn't allowed.
Numerous things are forbidden on AO3 and will get moderated and potentially deleted, including:
advertising
"find a fic" posts
spam
plagiarized works
virus spreading
doxxing
actual CSAM
lots of other stuff as listed in the tos
If the foundation of your argument is "AO3 is too big to moderate" your argument is bad. AO3 is already moderated.
So that means your ACTUAL argument is "I think AO3 is moderated enough as-is and I don't think AO3 should be moderated in a way that might better protect fans of color."
Having that attitude despite the many, many, MANY fans of color who say "hey, we don't feel safe here, can this be better moderated to protect us?" (when, by the way, the existing harassment policy SHOULD protect them but clearly isn't being implemented in ways that actually DO protect them!)
Well. That's sure a take.
(TO BE CLEAR: I am anti-censorship. I am pro-moderation. I am pro-AO3. I fucking love your dark fic, your underage works, your non-con stories, your kinky pwp with "no redeeming qualities," your dirty-wrong-bad ego fics. I would NEVER want a solution that removed that content. And I believe AO3 is the greatest thing to happen to fic in my lifetime and I want it to continue to be the wonderful thing it is. I think that what AO3 already is can be reconciled with the demands of fans of color that they be better protected. I believe fans of color when they say that the current policies don't protect them adequately, that they don't feel safe, that this drives them out of white western fandom. I've listened a lot, and I've seen a lot, and I've learned a lot, and I want AO3 to be better than it is in this regard. I'm honestly kinda depressed how many people seem comfortable with the status quo.)