#cda230
Explore tagged Tumblr posts
mostlysignssomeportents · 1 year ago
Text
Copyright takedowns are a cautionary tale that few are heeding
On July 14, I'm giving the closing keynote for the fifteenth HACKERS ON PLANET EARTH, in QUEENS, NY. Happy Bastille Day! On July 20, I'm appearing in CHICAGO at Exile in Bookville.
We're living through one of those moments when millions of people become suddenly and overwhelmingly interested in fair use, one of the subtlest and worst-understood aspects of copyright law. It's not a subject you can master by skimming a Wikipedia article!
I've been talking about fair use with laypeople for more than 20 years. I've met so many people who possess the unshakable, serene confidence of the truly wrong, like the people who think fair use means you can take x words from a book, or y seconds from a song and it will always be fair, while anything more will never be.
Or the people who think that if you violate any of the four factors, your use can't be fair – or the people who think that if you fail all of the four factors, you must be infringing (people, the Supreme Court is calling and they want to tell you about the Betamax!).
You might think that you can never quote a song lyric in a book without infringing copyright, or that you must clear every musical sample. You might be rock solid certain that scraping the web to train an AI is infringing. If you hold those beliefs, you do not understand the "fact intensive" nature of fair use.
But you can learn! It's actually a really cool and interesting and gnarly subject, and it's a favorite of copyright scholars, who have really fascinating disagreements and discussions about the subject. These discussions often key off of the controversies of the moment, but inevitably they implicate earlier fights about everything from the piano roll to 2 Live Crew to antiracist retellings of Gone With the Wind.
One of the most interesting discussions of fair use you can ask for took place in 2019, when the NYU Engelberg Center on Innovation Law & Policy held a symposium called "Proving IP." One of the panels featured dueling musicologists debating the merits of the Blurred Lines case. That case marked a turning point in music copyright, with the Marvin Gaye estate successfully suing Robin Thicke and Pharrell Williams for copying the "vibe" of Gaye's "Got to Give it Up."
Naturally, this discussion featured clips from both songs as the experts – joined by some of America's top copyright scholars – delved into the legal reasoning and future consequences of the case. It would be literally impossible to discuss this case without those clips.
And that's where the problems start: as soon as the symposium was uploaded to Youtube, it was flagged and removed by Content ID, Google's $100,000,000 copyright enforcement system. This initial takedown was fully automated, which is how Content ID works: rightsholders upload audio to claim it, and then Content ID removes other videos where that audio appears (rightsholders can also specify that videos with matching clips be demonetized, or that the ad revenue from those videos be diverted to the rightsholders).
But Content ID has a safety valve: an uploader whose video has been incorrectly flagged can challenge the takedown. The case is then punted to the rightsholder, who has to manually renew or drop their claim. In the case of this symposium, the rightsholder was Universal Music Group, the largest record company in the world. UMG's personnel reviewed the video and did not drop the claim.
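To make that claim-and-dispute pipeline easier to follow, here's a minimal sketch of how such a system can be modeled. This is purely illustrative – it is not Google's actual code or API, and the class names, claim states, and fingerprint-matching step are assumptions invented for the example – but it captures the structural point: matching and enforcement are automatic, and the first "appeal" goes back to the claimant rather than to any neutral party.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    BLOCK = "block"        # video removed or blocked
    MONETIZE = "monetize"  # ad revenue diverted to the claimant
    TRACK = "track"        # claimant merely monitors viewership

class ClaimState(Enum):
    ACTIVE = "active"
    DISPUTED = "disputed"
    RELEASED = "released"
    REINSTATED = "reinstated"

@dataclass
class Claim:
    video_id: str
    rightsholder: str
    action: Action
    state: ClaimState = ClaimState.ACTIVE

class ClaimSystem:
    def __init__(self):
        self.registry: dict[str, set[str]] = {}  # rightsholder -> registered fingerprints
        self.claims: list[Claim] = []

    def register_reference(self, rightsholder: str, fingerprint: str) -> None:
        self.registry.setdefault(rightsholder, set()).add(fingerprint)

    def scan_upload(self, video_id: str, fingerprints: set[str], action: Action) -> list[Claim]:
        """Fully automated: any overlap with a registered reference creates a claim."""
        hits = []
        for owner, refs in self.registry.items():
            if refs & fingerprints:
                claim = Claim(video_id, owner, action)
                self.claims.append(claim)
                hits.append(claim)
        return hits

    def dispute(self, claim: Claim) -> None:
        """The uploader's 'safety valve': the dispute is punted back to the claimant."""
        claim.state = ClaimState.DISPUTED

    def claimant_review(self, claim: Claim, uphold: bool) -> None:
        # Note who adjudicates: the rightsholder reviews its own claim.
        claim.state = ClaimState.REINSTATED if uphold else ClaimState.RELEASED
```

In this toy model, nothing stops `claimant_review` from upholding a meritless claim, and nothing in the flow ever reaches a court – which is exactly the dynamic the rest of this story illustrates.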
99.99% of the time, that's where the story would end, for many reasons. First of all, most people don't understand fair use well enough to contest the judgment of a cosmically vast, unimaginably rich monopolist who wants to censor their video. Just as important, though, is that Content ID is a Byzantine system nearly as complex as fair use – but it's an entirely private affair, created and adjudicated by another galactic-scale monopolist (Google).
Google's copyright enforcement system is a cod-legal regime with all the downsides of the law, and a few wrinkles of its own (for example, it's a system without lawyers – just corporate experts doing battle with laypeople). And a single misstep can result in your video being deleted or your account being permanently terminated, along with every video you've ever posted. For people who make their living on audiovisual content, losing your Youtube account is an extinction-level event:
https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online
So for the average Youtuber, Content ID is a kind of Kafka-as-a-Service system that is always avoided and never investigated. But the Engelberg Center isn't your average Youtuber: they boast some of the country's top copyright experts, specializing in exactly the questions Youtube's Content ID is supposed to be adjudicating.
So naturally, they challenged the takedown – only to have UMG double down. This is par for the course with UMG: they are infamous for refusing to consider fair use in takedown requests. Their stance is so unreasonable that a court actually found them guilty of violating the DMCA's provision against fraudulent takedowns:
https://www.eff.org/cases/lenz-v-universal
But the DMCA's takedown system is part of the real law, while Content ID is a fake law, created and overseen by a tech monopolist, not a court. So the fate of the Blurred Lines discussion turned on the Engelberg Center's ability to navigate both the law and the n-dimensional topology of Content ID's takedown flowchart.
It took more than a year, but eventually, Engelberg prevailed.
Until they didn't.
If Content ID was a person, it would be a baby – specifically, a baby under 18 months old, that is, before the development of "object permanence." Until our 18th month (or so), we lack the ability to reason about things we can't see – this is the period when small babies find peek-a-boo amazing. Object permanence is the ability to reason about things that aren't in your immediate field of vision.
Content ID has no object permanence. Despite the fact that the Engelberg Blurred Lines panel was the most involved fair use question the system was ever called upon to parse, it managed to repeatedly forget that it had decided that the panel could stay up. Over and over since that initial determination, Content ID has taken down the video of the panel, forcing Engelberg to go through the whole process again.
But that's just for starters, because Youtube isn't the only place where a copyright enforcement bot is making billions of unsupervised, unaccountable decisions about what audiovisual material you're allowed to access.
Spotify is yet another monopolist, with a justifiable reputation for being extremely hostile to artists' interests, thanks in large part to the role that UMG and the other major record labels played in designing its business rules:
https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing
Spotify has spent hundreds of millions of dollars trying to capture the podcasting market, in the hopes of converting one of the last truly open digital publishing systems into a product under its control:
https://pluralistic.net/2023/01/27/enshittification-resistance/#ummauerter-garten-nein
Thankfully, that campaign has failed – but millions of people have (unwisely) ditched their open podcatchers in favor of Spotify's pre-enshittified app, so everyone with a podcast now must target Spotify for distribution if they hope to reach those captive users.
Guess who has a podcast? The Engelberg Center.
Naturally, Engelberg's podcast includes the audio of that Blurred Lines panel, and that audio includes samples from both "Blurred Lines" and "Got To Give It Up."
So – naturally – UMG keeps taking down the podcast.
Spotify has its own answer to Content ID, and incredibly, it's even worse and harder to navigate than Google's pretend legal system. As Engelberg describes in its latest post, UMG and Spotify have colluded to ensure that this now-classic discussion of fair use will never be able to take advantage of fair use itself:
https://www.nyuengelberg.org/news/how-explaining-copyright-broke-the-spotify-copyright-system/
Remember, this is the best case scenario for arguing about fair use with a monopolist like UMG, Google, or Spotify. As Engelberg puts it:
The Engelberg Center had an extraordinarily high level of interest in pursuing this issue, and legal confidence in our position that would have cost an average podcaster tens of thousands of dollars to develop. That cannot be what is required to challenge the removal of a podcast episode.
Automated takedown systems are the tech industry's answer to the "notice-and-takedown" system that was invented to broker a peace between copyright law and the internet, starting with the US's 1998 Digital Millennium Copyright Act. The DMCA implements (and exceeds) a pair of 1996 UN treaties, the WIPO Copyright Treaty and the Performances and Phonograms Treaty, and most countries in the world have some version of notice-and-takedown.
Big corporate rightsholders claim that notice-and-takedown is a gift to the tech sector, one that allows tech companies to get away with copyright infringement. They want a "strict liability" regime, where any platform that allows a user to post something infringing is liable for that infringement, to the tune of $150,000 in statutory damages.
Of course, there's no way for a platform to know a priori whether something a user posts infringes on someone's copyright. There is no registry of everything that is copyrighted, and of course, fair use means that there are lots of ways to legally reproduce someone's work without their permission (or even when they object). Even if every person who ever has trained or ever will train as a copyright lawyer worked 24/7 for just one online platform to evaluate every tweet, video, audio clip and image for copyright infringement, they wouldn't be able to touch even 1% of what gets posted to that platform.
The "compromise" that the entertainment industry wants is automated takedown – a system like Content ID, where rightsholders register their copyrights and platforms block anything that matches the registry. This "filternet" proposal became law in the EU in 2019 with Article 17 of the Digital Single Market Directive:
https://www.eff.org/deeplinks/2018/09/today-europe-lost-internet-now-we-fight-back
This was the most controversial directive in EU history, and – as experts warned at the time – there is no way to implement it without violating the GDPR, Europe's privacy law, so now it's stuck in limbo:
https://www.eff.org/deeplinks/2022/05/eus-copyright-directive-still-about-filters-eus-top-court-limits-its-use
As critics pointed out during the EU debate, there are so many problems with filternets. For one thing, these copyright filters are very expensive: remember that Google has spent $100m on Content ID alone, and that only does a fraction of what filternet advocates demand. Building the filternet would cost so much that only the biggest tech monopolists could afford it, which is to say, filternets are a legal requirement to keep the tech monopolists in business and prevent smaller, better platforms from ever coming into existence.
Filternets are also incapable of telling the difference between similar files. This is especially problematic for classical musicians, who routinely find their work blocked or demonetized by Sony Music, which claims performances of all the most important classical music compositions:
https://pluralistic.net/2021/05/08/copyfraud/#beethoven-just-wrote-music
Content ID can't tell the difference between your performance of "The Goldberg Variations" and Glenn Gould's. For classical musicians, the best case scenario is to have their online wages stolen by Sony, who fraudulently claim copyright to their recordings. The worst case scenario is that their video is blocked, their channel deleted, and their names blacklisted from ever opening another account on one of the monopoly platforms.
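A deliberately simplified toy model shows why this failure mode is built in. Real fingerprinting systems (Content ID's internals aren't public) work on spectral features of a recording, but a claim keyed to a famous recording of a public-domain score will match other competent performances of the same notes. In the sketch below – the pitch values and hashing scheme are invented for illustration – the "fingerprint" captures only the composition, not the tempo or dynamics that distinguish one performance from another:

```python
from hashlib import blake2b

def composition_fingerprints(notes, n=5):
    """Hash overlapping n-grams of the pitch sequence; durations are ignored."""
    pitches = [pitch for pitch, _duration in notes]
    grams = (tuple(pitches[i:i + n]) for i in range(len(pitches) - n + 1))
    return {blake2b(repr(g).encode(), digest_size=8).hexdigest() for g in grams}

# Approximate opening pitches of the Goldberg Variations aria (MIDI note numbers).
ARIA = [79, 79, 81, 79, 83, 81, 79, 78, 76, 74, 76, 78, 79]

# Same score, very different performances: only the note durations differ here.
gould_1981   = [(p, 0.42) for p in ARIA]   # brisk
your_recital = [(p, 0.61) for p in ARIA]   # slower

fp_gould = composition_fingerprints(gould_1981)
fp_yours = composition_fingerprints(your_recital)

overlap = len(fp_gould & fp_yours) / len(fp_gould)
print(f"Fingerprint overlap: {overlap:.0%}")  # 100% -> your recital gets auto-claimed
```

Everything that makes your performance yours is precisely what a composition-keyed matcher throws away.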
But when it comes to free expression, the role that notice-and-takedown and filternets play in the creative industries is really a sideshow. In creating a system of no-evidence-required takedowns, with no real consequences for fraudulent claims, these systems are a huge gift to the world's worst criminals. For example, "reputation management" companies help convicted rapists, murderers, and even war criminals purge the internet of true accounts of their crimes by claiming copyright over them:
https://pluralistic.net/2021/04/23/reputation-laundry/#dark-ops
Remember how during the covid lockdowns, scumbags marketed junk devices by claiming that they'd protect you from the virus? Their products remained online, while the detailed scientific articles warning people about the fraud were speedily removed through false copyright claims:
https://pluralistic.net/2021/10/18/labor-shortage-discourse-time/#copyfraud
Copyfraud – making false copyright claims – is an extremely safe crime to commit, and it's not just quack covid remedy peddlers and war criminals who avail themselves of it. Tech giants like Adobe do not hesitate to abuse the takedown system, even when that means exposing millions of people to spyware:
https://pluralistic.net/2021/10/13/theres-an-app-for-that/#gnash
Dirty cops play loud, copyrighted music during confrontations with the public, in the hopes that this will trigger copyright filters on services like Youtube and Instagram and block videos of their misbehavior:
https://pluralistic.net/2021/02/10/duke-sucks/#bhpd
But even if you solved all these problems with filternets and takedown, this system would still choke on fair use and other copyright exceptions. These are "fact intensive" questions that the world's top experts struggle with (as anyone who watches the Blurred Lines panel can see). There's no way we can get software to accurately determine when a use is or isn't fair.
That's a question that the entertainment industry itself is increasingly conflicted about. The Blurred Lines judgment opened the floodgates to a new kind of copyright troll – grifters who sued the record labels and their biggest stars for taking the "vibe" of songs that no one ever heard of. Musicians like Ed Sheeran have been sued for millions of dollars over these alleged infringements. These suits caused the record industry to (ahem) change its tune on fair use, insisting that fair use should be broadly interpreted to protect people who made things that were similar to existing works. The labels understood that if "vibe rights" became accepted law, they'd end up in the kind of hell that the rest of us enter when we try to post things online – where anything they produce can trigger takedowns, long legal battles, and millions in liability:
https://pluralistic.net/2022/04/08/oh-why/#two-notes-and-running
But the music industry remains deeply conflicted over fair use. Take the curious case of Katy Perry's song "Dark Horse," which attracted a multimillion-dollar suit from an obscure Christian rapper who claimed that a brief phrase in "Dark Horse" was impermissibly similar to his song "A Joyful Noise."
Perry and her publisher, Warner Chappell, lost the suit and were ordered to pay $2.8m. While they subsequently won an appeal, this definitely put the cold grue up Warner Chappell's back. They could see a long future of similar suits launched by treasure hunters hoping for a quick settlement.
But here's where it gets unbelievably weird and darkly funny. A Youtuber named Adam Neely made a wildly successful viral video about the suit, taking Perry's side and defending her song. As part of that video, Neely included a few seconds' worth of "A Joyful Noise," the song that Perry was accused of copying.
In court, Warner Chappell had argued that "A Joyful Noise" was not similar to Perry's "Dark Horse." But when Warner had Google remove Neely's video, they claimed that the sample from "Joyful Noise" was actually taken from "Dark Horse." Incredibly, they maintained this position through multiple appeals through the Content ID system:
https://pluralistic.net/2020/03/05/warner-chappell-copyfraud/#warnerchappell
In other words, they maintained that the song that they'd told the court was totally dissimilar to their own was so indistinguishable from their own song that they couldn't tell the difference!
Now, this question of vibes, similarity and fair use has only gotten more intense since the takedown of Neely's video. Just this week, the RIAA sued several AI companies, claiming that the songs the AI shits out are infringingly similar to tracks in their catalog:
https://www.rollingstone.com/music/music-news/record-labels-sue-music-generators-suno-and-udio-1235042056/
Even before "Blurred Lines," this was a difficult fair use question to answer, with lots of chewy nuances. Just ask George Harrison:
https://en.wikipedia.org/wiki/My_Sweet_Lord
But as the Engelberg panel's cohort of dueling musicologists and renowned copyright experts proved, this question only gets harder as time goes by. If you listen to that panel (if you can listen to that panel), you'll be hard pressed to come away with any certainty about the questions in this latest lawsuit.
The notice-and-takedown system is what's known as an "intermediary liability" rule. Platforms are "intermediaries" in that they connect end users with each other and with businesses. Ebay and Etsy and Amazon connect buyers and sellers; Facebook and Google and Tiktok connect performers, advertisers and publishers with audiences and so on.
For copyright, notice-and-takedown gives platforms a "safe harbor." A platform doesn't have to remove material after an allegation of infringement, but if they don't, they're jointly liable for any future judgment. In other words, Youtube isn't required to take down the Engelberg Blurred Lines panel, but if UMG sues Engelberg and wins a judgment, Google will also have to pay out.
During the adoption of the 1996 WIPO treaties and the 1998 US DMCA, this safe harbor rule was characterized as a balance between the rights of the public to publish online and the interest of rightsholders whose material might be infringed upon. The idea was that things that were likely to be infringing would be immediately removed once the platform received a notification, but that platforms would ignore spurious or obviously fraudulent takedowns.
That's not how it worked out. Whether it's Sony Music claiming to own your performance of "Fur Elise" or a war criminal claiming authorship over a newspaper story about his crimes, platforms nuke first and ask questions never. Why not? If they ignore a takedown and get it wrong, they suffer dire consequences ($150,000 per claim). But if they take action on a dodgy claim, there are no consequences. Of course they're just going to delete anything they're asked to delete.
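The incentive arithmetic behind "nuke first, ask questions never" is easy to sketch. The numbers below are illustrative assumptions, not measured data from any platform – only the $150,000 statutory-damages ceiling comes from the law – but they show why over-removal is always the cheaper bet:

```python
# Illustrative expected-cost comparison; every probability here is an assumption.
STATUTORY_DAMAGES = 150_000    # statutory damages can reach $150,000 per work
P_CLAIM_IS_VALID = 0.05        # assume only 1 in 20 contested notices is legitimate
P_PLATFORM_LOSES = 0.50        # assume a coin-flip if it ever reaches a judgment
COST_OF_WRONGFUL_REMOVAL = 0   # no practical penalty for taking down lawful speech

cost_of_ignoring = P_CLAIM_IS_VALID * P_PLATFORM_LOSES * STATUTORY_DAMAGES
cost_of_complying = COST_OF_WRONGFUL_REMOVAL

print(f"Expected cost of ignoring a dodgy notice:     ${cost_of_ignoring:,.0f}")   # $3,750
print(f"Expected cost of honoring it, right or wrong: ${cost_of_complying:,.0f}")  # $0
```

Multiply that asymmetry across millions of notices and the platform's policy writes itself.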
This is how platforms always handle liability, and that's a lesson that we really should have internalized by now. After all, the DMCA is the second-most famous intermediary liability system for the internet – the most (in)famous is Section 230 of the Communications Decency Act.
This is a 26-word law that says that platforms are not liable for civil damages arising from their users' speech. Now, this is a US law, and in the US, there aren't many civil damages from speech to begin with. The First Amendment makes it very hard to get a libel judgment, and even when these judgments are secured, damages are typically limited to "actual damages" – generally a low sum. Most of the worst online speech is actually not illegal: hate speech, misinformation and disinformation are all covered by the First Amendment.
Notwithstanding the First Amendment, there are categories of speech that US law criminalizes: actual threats of violence, criminal harassment, and committing certain kinds of legal, medical, election or financial fraud. These are all exempted from Section 230, which only provides immunity for civil suits, not criminal acts.
What Section 230 really protects platforms from is being named in unwinnable nuisance suits by unscrupulous parties who are betting that the platforms would rather remove legal speech that they object to than go to court. A generation of copyfraudsters have proved that this is a very safe bet:
https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
In other words, if you made a #MeToo accusation, or if you were a gig worker using an online forum to organize a union, or if you were blowing the whistle on your employer's toxic waste leaks, or if you were any other under-resourced person being bullied by a wealthy, powerful person or organization, that organization could shut you up by threatening to sue the platform that hosted your speech. The platform would immediately cave. But those same rich and powerful people would have access to the lawyers and back-channels that would prevent you from doing the same to them – that's why Sony can get your Brahms recital taken down, but you can't turn around and do the same to them.
This is true of every intermediary liability system, and it's been true since the earliest days of the internet, and it keeps getting proven to be true. Six years ago, Trump signed SESTA/FOSTA, a law that allowed platforms to be held civilly liable by survivors of sex trafficking. At the time, advocates claimed that this would only affect "sexual slavery" and would not impact consensual sex-work.
But from the start, and ever since, SESTA/FOSTA has primarily targeted consensual sex-work, to the immediate, lasting, and profound detriment of sex workers:
https://hackinghustling.org/what-is-sesta-fosta/
SESTA/FOSTA killed the "bad date" forums where sex workers circulated the details of violent and unstable clients, killed the online booking sites that allowed sex workers to screen their clients, and killed the payment processors that let sex workers avoid holding unsafe amounts of cash:
https://www.eff.org/deeplinks/2022/09/fight-overturn-fosta-unconstitutional-internet-censorship-law-continues
SESTA/FOSTA made voluntary sex work more dangerous – and also made life harder for law enforcement efforts to target sex trafficking:
https://hackinghustling.org/erased-the-impact-of-fosta-sesta-2020/
Despite half a decade of SESTA/FOSTA, despite 15 years of filternets, despite a quarter century of notice-and-takedown, people continue to insist that getting rid of safe harbors will punish Big Tech and make life better for everyday internet users.
As of now, it seems likely that Section 230 will be dead by the end of 2025, even if there is nothing in place to replace it:
https://energycommerce.house.gov/posts/bipartisan-energy-and-commerce-leaders-announce-legislative-hearing-on-sunsetting-section-230
This isn't the win that some people think it is. By making platforms responsible for screening the content their users post, we create a system that only the largest tech monopolies can survive, and only then by removing or blocking anything that threatens or displeases the wealthy and powerful.
Filternets are not precision-guided takedown machines; they're indiscriminate cluster-bombs that destroy anything in the vicinity of illegal speech – including (and especially) the best-informed, most informative discussions of how these systems go wrong, and how that blocks the complaints of the powerless, the marginalized, and the abused.
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/27/nuke-first/#ask-questions-never
Image: EFF https://www.eff.org/files/banner_library/yt-fu-1b.png
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
thorn-amidst-roses · 5 years ago
Link
I don’t want to get political on main, but since $rump has been bellowing about “repealing 230” lately, I thought I should do a little recap of what “230” is.
tl;dr it’s the thing that prevents social media platforms from being liable for what their users post.
In other words, the thing that allows tumblr, twitter, youtube, etc to have user-uploaded content at all without getting sued.
If 230 were repealed, we’d likely have massive censoring at best, or a complete social media shutdown at worst.
SO basically, $rump is threatening to shut down social media platforms if Twitter doesn’t stop flagging his posts as inappropriate.
Chew on that for a minute.
louisianaprelawland-blog · 3 years ago
Text
Gonzalez V. Google And The Communications Decency Act
By Lauren Barrouquere,  University of Louisiana at Lafayette Class of 2024
January 14, 2023
On April 4th, 2022, a petition for a writ of certiorari was filed in the Supreme Court [3]. This petition officially marked the beginning of the litigation of Gonzalez v. Google in the highest court in the United States of America [3]. Gonzalez v. Google was preceded by the killing of Nohemi Gonzalez during the November 2015 terrorist attacks in Paris [1]. Nohemi's family is suing Google, and the case turns on Section 230 of the Communications Decency Act of 1996 [1].
Section 230 of the Communications Decency Act of 1996 provides parameters for the online moderation of social media websites such as Youtube or Twitter [5]. The exact text of the Section reads “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,”[5].
The petitioners in the case, Gonzalez's family, are arguing two things: first, that Google violated anti-terrorism laws in the United States by allowing ISIS messaging to remain on the Youtube platform, thus radicalizing new recruits, and second, that Google should be held liable for the radicalizing content promoted through its algorithm [4]. In its respondent brief, Google argued that Section 230 covers websites that merely host third-party content, and that Congress has therefore protected them from liability when they share information such as a video [2].
More broadly, Google argued that the petitioners failed to adequately link the specific attack to the ISIS video in question, further muddying the waters around Google's role in the death of Nohemi Gonzalez [6]. In its Brief in Opposition, Google pointed out that "Petitioners do not allege that Google had any role in encouraging or committing the Paris attack. Nor do petitioners allege that any of Ms. Gonzalez's attackers were recruited via YouTube or used YouTube to plan the Paris attack…" [6]. In short, the link between the untimely death of Nohemi Gonzalez and Google is extremely thin.
Even the Biden administration weighed in on the matter, demonstrating how influential the case is. In a legal brief filed in December 2022, the administration – despite generally advocating for tech companies to take more responsibility – argued that Section 230's protections do not extend to the recommendations made by platforms' algorithms [1].
At the core of Gonzalez v. Google is whether companies can be held liable for the recommendations made by AI and other algorithms. Given that cyber law is a relatively unexplored area, this case has major ramifications for the future of tech companies [1]. On one hand, protections for tech companies could expand under Section 230 if the Supreme Court rules in favor of Google; on the other, restrictions may be tightened [1]. Oral arguments are scheduled to begin in February 2023 [3].
______________________________________________________________
[1] https://www.cnn.com/2023/01/12/tech/google-supreme-court/index.html
[2] https://www.supremecourt.gov/DocketPDF/21/21-1333/252127/20230112144706745_Gonzalez%20v.%20Google%20Brief%20for%20Respondent%20-%20FINAL.pdf
[3] https://www.scotusblog.com/case-files/cases/gonzalez-v-google-llc/
[4] https://bipartisanpolicy.org/blog/gonzalez-v-google/
[5] https://www.eff.org/issues/cda230
[6] https://www.supremecourt.gov/DocketPDF/21/21-1333/229391/20220705140634781_Gonzalez%20Brief%20in%20Opposition.pdf
Text
Section 230: What To Do With The 26 Words That Created The Internet
By Francisco Rios, Georgetown University Class of 2020
February 2, 2021
 In the final months of his administration, former President Donald Trump announced he would veto the National Defense Authorization Act, a bipartisan defense bill that authorized a topline of $740 billion in military spending and outlined Pentagon policy. In his statement, he cited Congress' refusal to repeal Section 230 of the Communications Decency Act as the main reason for his veto. However, albeit for completely different reasons, President Joe Biden and House Speaker Nancy Pelosi have also expressed their concern regarding this clause. This article will evaluate the criticisms made by both sides of the aisle and discuss how Section 230 plays a role in the more extensive debate surrounding the constitutional right to freedom of speech. 
Section 230 of the Communications Decency Act states that:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (47 U.S.C. § 230(c)(1))
Simply put, Section 230 protects tech companies and users from facing lawsuits over the content on their platforms (with some exceptions, such as criminal activity, sex trafficking, and intellectual property). At the same time, the clause allows providers of an interactive computer service to take down anything they deem inappropriate or obscene. These two concessions have become the center of much criticism in Washington, D.C., though each side of the aisle differs on what it dislikes about the clause.
Republicans have often criticized the fact that Section 230 has given tech giants – particularly Facebook and Twitter – the power to "silence conservative voices." This claim gained even more traction after those two companies suspended former President Donald Trump's accounts, along with several of his allies and accounts related to the far-right QAnon conspiracy theory. [1] Now more than ever, conservatives claim that Silicon Valley promotes liberal ideology while censoring their views. A Pew Research Center survey showed that "nine-in-ten Republicans and independents who lean towards the Republican Party say it is at least somewhat likely that social media platforms censor political viewpoints they find objectionable." [2] The same poll found that "69% of Republicans and Republican leaners say major technology companies support the views of liberals over conservatives" [3] and that "71% of Republicans at least somewhat disapprove of [social media companies labeling posts on their platforms from elected officials as inaccurate or misleading]." [4] Many Republican elected officials in D.C. share these concerns. Senator Josh Hawley of Missouri, in particular, has relentlessly attacked Section 230, introducing multiple bills to eradicate the legal immunity the government grants to the tech giants. However, with Democrats now in control of the Senate and given the current political environment, those attempts have very slim chances of achieving their objective.
Fortunately for Senator Hawley, Democrats have also expressed concerns about Section 230. Unlike Republicans, however, "Democrats are most concerned about getting big social media companies to take down hate speech, harassment, disinformation, and terrorism-related content." [6] The need to create more restrictive legislation about what can be done on the internet and take away some of the power Section 230 has granted to Silicon Valley has increased in urgency over the past years. Citing foreign interference in the 2016 and 2020 United States Presidential Elections and the dissemination of conspiracy theories believed by those who stormed the Capitol, the left believes it is time to enact more restrictions on the content of social media platforms. In their opinion, the legal immunity granted to social media companies has allowed them to create a space where lies are misconstrued as facts, and hatred and division run rampant if left unregulated. This belief is also expressed in recent polling data. Surveys show that "77% of Democrats and Democratic leaners say social media companies have [a responsibility to regulate offensive content in their platforms]." [7] However, at the same time, only 63% of Democrats trust social media companies' abilities to determine what offensive content should be removed, as opposed to 76% of Republicans.[8] Hence, the situation is clear. Republicans and Democrats both agree that Section 230 should be replaced or at the very least reformed. However, while Democrats want more restrictions on what they deem inappropriate and dangerous, Republicans want less censorship from social media companies.
Nevertheless, even though both sides of the aisle have expressed interest in modifying or replacing Section 230, doing so has proven extremely difficult. The Electronic Frontier Foundation, one of the most ardent defenders of Section 230, claims that repealing it would cause more harm than good. In their view, the internet as we know it would not exist without legal protections for these social media companies. Given the sheer size of a platform such as Twitter or Facebook, it is unrealistic to think these companies could keep every piece of objectionable content off their platforms. Instead, the most likely scenario is that "rather than face liability for their users' actions, most [social media platforms] would likely not host any user content at all or would need to protect themselves by being actively engaged in censoring what we say, what we see, and what we do online." [9]
The CEOs of Facebook and Twitter, Mark Zuckerberg and Jack Dorsey, echoed this sentiment in their testimony before Congress, and they also cautioned about the unintended effects repeal might have on society in general. If social media companies can be sued over their platforms' content, many new and upcoming platforms that rely on user-generated content would likely go bankrupt, creating a market where only the biggest companies – Facebook, Twitter, and Google – can survive. [10] Additionally, if companies are vulnerable to lawsuits over the content on their platforms, discussions of important but controversial topics will not take place, and movements such as "Black Lives Matter" or "Defund the Police" might never come to fruition. Simply put, our ability to enact social change would be severely limited. [11]
Hence, both Big Tech companies and government officials have reasonable arguments in their favor. The attack on the Capitol, however, has made it clear that, while there may be no easy solution to this problem, Silicon Valley and Washington, D.C. will have to work together to preserve the democratic principles on which the United States was founded.
______________________________________________________________
[1] "Twitter Bans Michael Flynn, Sidney Powell and Other QAnon Accounts." CNBC. January 09, 2021. Accessed January 29, 2021. https://www.cnbc.com/2021/01/08/twitter-bans-michael-flynn-sidney-powell-and-other-qanon-accounts.html.
[2] Vogels, Emily A., Andrew Perrin, and Monica Anderson. "Most Americans Think Social Media Sites Censor Political Viewpoints." Pew Research Center: Internet, Science & Tech. September 18, 2020. Accessed January 29, 2021. https://www.pewresearch.org/internet/2020/08/19/most-americans-think-social-media-sites-censor-political-viewpoints/.
[3] Ibid
[4] Ibid
[5] Reardon, Marguerite. "What's Section 230? Everything You Need to Know about Free Speech on Social Media." CNET. Accessed January 29, 2021. https://www.cnet.com/news/whats-section-230-the-social-media-law-thats-clogging-up-the-stimulus-talks/.
[6] LaLoggia, John. "U.S. Public Has Little Confidence in Social Media Companies to Determine Offensive Content." Pew Research Center. August 14, 2020. Accessed January 29, 2021. https://www.pewresearch.org/fact-tank/2019/07/11/u-s-public-has-little-confidence-in-social-media-companies-to-determine-offensive-content/.
[7] Ibid
[8] Mullin, Joe, Elliot Harmon, Aaron Jue, David Greene, and Jason Kelley. "Section 230 of the Communications Decency Act." Electronic Frontier Foundation. Accessed January 29, 2021. https://www.eff.org/issues/cda230.
[9] Wagner, Kurt, and Bloomberg. "Zuckerberg, Dorsey to Defend Section 230 Protections in Congress." Fortune. October 27, 2020. Accessed January 29, 2021. https://fortune.com/2020/10/27/zuckerberg-dorsey-section-230-protections-congress/.
[10] Bambauer, Derek E. "What Does the Day after Section 230 Reform Look Like?" Brookings. January 22, 2021. Accessed January 29, 2021. https://www.brookings.edu/techstream/what-does-the-day-after-section-230-reform-look-like/.
ohioprelawland · 5 years ago
Text
Big Tech Hearings: The Question of Section 230
By Sophia Ballard, Cedarville University Class of 2023
November 11, 2020
The Senate, on October 28, convened to question the giants of the big tech industry, such as Mark Zuckerberg, CEO of Facebook; Jack Dorsey, CEO of Twitter; and Sundar Pichai, CEO of Google. Essentially, this hearing was conducted for the purpose of discussing Section 230 of the Communications Decency Act of 1996. The hearing was also due, in part, to President Trump's recent executive order on Preventing Online Censorship. President Trump and other Republicans fear that Section 230 has been allowing the social media giants to censor their tweets, posts, etc. However, the CEOs of such social media and technology companies would disagree. Aside from political issues, Section 230 has been analyzed and questioned with regard to the Free Speech Clause of the First Amendment to the United States Constitution. The Electronic Frontier Foundation claims that "Section 230 is one of the most valuable tools for protecting freedom of expression and innovation on the Internet." The Electronic Frontier Foundation also describes the section in the following manner: "Online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do." Hence, the main purpose of this section seems to be to protect the basic idea of free speech which has allowed liberty to thrive and flourish in the United States.
Likewise, Section 230 has many implications for internet users and for the owners of the companies that guide online discourse and interaction. This type of law is, in fact, unique to the United States, which is home to most of the big-name tech companies with international influence. While this section of law describes the extent of the government's role in internet policy, it also grants the technology industry freedoms and rights. For example, the companies are protected from civil liability for what someone might say on one of their platforms, and they are given authority to restrict access to material of a certain nature on those platforms. Section 230(c)(2)(A) states: "Any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." Thus, Section 230 stipulates that private companies have the ability and legal authority to suspend from their platforms information or social media posts that meet the standards described in the aforementioned passage.
Unfortunately, what is "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" can become rather subjective when personal opinions and politics are mixed into the equation. Senators from both the Republican and Democratic parties spent most of the October 28 hearing pointing fingers at one another instead of digging to the heart of these issues. Obviously, no American politician would advocate for the restriction of free speech and expect to be re-elected. However, politicians on both sides of the aisle are, at times, concerned that the voices of private, individual citizens can be silenced by the social media giants on a whim. The "good faith" in which these censoring actions are supposed to take place poses legal problems when two parties disagree on what should or should not be posted on the internet. There is also the question of whether Silicon Valley CEOs should have the authority to censor private individuals. Likewise, a lot of division and controversy clouds the seemingly good intentions of Section 230.
Additionally, the Department of Justice has recommended that the section be revised but not totally abolished. The law, in essence, has helped the internet grow since 1996, and the government would like this facet of the economy to continue expanding. To that end, the Department of Justice has identified four basic areas in which Section 230 is "ripe for reform": "Incentivizing Online Platforms to Address Illicit Content; Clarifying Federal Government Enforcement Capabilities to Address Unlawful Content; Promoting Competition; and Promoting Open Discourse and Greater Transparency." No matter what happens next, Congress should consider revising or clarifying Section 230 to ensure that freedom of speech online is indeed protected and is not overtaken by either the federal government or big tech companies.
________________________________________________________________
“Section 230 of the Communications Decency Act.” https://www.eff.org/issues/cda230.
“47 U.S. Code § 230 - Protection for private blocking and screening of offensive material.” https://www.law.cornell.edu/uscode/text/47/230.
“Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996.” https://www.justice.gov/ag/department-justice-s-review-section-230-communications-decency-act-1996.
wto518 · 5 years ago
Video
youtube
339: A bloodbath at Twitter, a bloodbath at Facebook. Does Trump have the power to break the CDA 230 law?
mostlysignssomeportents · 2 months ago
Text
Object permanence
HEY SEATTLE! I'm appearing at the Cascade PBS Ideas Festival TOMORROW (May 31) with the folks from NPR's On The Media!
#5yrsago Bus drivers refuse to take arrested protesters to jail https://pluralistic.net/2020/05/30/up-is-not-down/#solidarity
#5yrsago Why I haven't written about CDA 230 https://pluralistic.net/2020/05/30/up-is-not-down/#cda230
#5yrsago Australia caves on "robodebt" https://pluralistic.net/2020/05/30/up-is-not-down/#robodebt
#1yrago Real innovation vs Silicon Valley nonsense https://pluralistic.net/2024/05/30/posiwid/#social-cost-of-carbon
garudabluffs · 6 years ago
Video
youtube
Aldous Huxley interviewed by Mike Wallace : 1958 (Full)      
Aldous Huxley shares his visions and fears for this brave new world.
“Section 230 says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of "interactive computer service providers," including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.”
“CDA 230 also offers its legal shield to bloggers who act as intermediaries by hosting comments on their blogs. Under the law, bloggers are not liable for comments left by readers, the work of guest bloggers, tips sent via email, or information received through RSS feeds. This legal protection can still hold even if a blogger is aware of the objectionable content or makes editorial judgments.”
READ MORE https://www.eff.org/issues/cda230
“In practice, this executive order would mean that whichever political party is in power could dictate what speech is allowed on the Internet. If the government doesn’t like the way a private company is moderating content, they can shut their entire website down. The administration claims it’s trying to stop private companies from silencing speech—but this plan would create terrifying new censorship powers for the government to do just that. And the White House isn’t alone in promoting this misguided idea, some top Democrats have also called for weakening CDA 230.
The draft order has already been resoundingly condemned by First Amendment and free speech experts from across the political spectrum. Regardless of your politics, regardless of how you feel about the president, this is a terrible idea that will have the exact opposite impact of its stated purpose of protecting free speech.”
READ MORE https://www.activistpost.com/2019/08/leaked-documents-show-white-house-is-planning-executive-order-to-censor-the-internet.html
myillegalhotel · 6 years ago
Text
Illegal Villas in Thailand
Illegal Villas in Thailand and how they will affect your holiday vacation
With Thailand’s Prime Minister saying this week that he wants illegal hotels brought into line with the law – or, failing that, shut down and their owners and operators fined or even jailed – it comes as no surprise that there have been raids on some of these illegal hotels (luxury villas) on the holiday island of Koh Samui.
With a billion-baht luxury villa on Koh Samui being raided by police for not having a Hotel Licence, we would like to warn travellers and holidaymakers about booking illegal accommodation in Thailand, because none of the big OTAs will tell you whether a property is licensed or not. We believe this is morally wrong and publicly deceptive.
We have a list of licensed properties in Phuket and other destinations in Thailand, so you can check for yourself at https://mylegalhotel.com/. You'll see that most of the villas in the Dusit acquisition of Elite Havens luxury villas are also illegal. We advise everyone to check that their accommodation is legal, as holidays can be ruined. There are lots of luxury villas on Airbnb and similar sites that do not have Hotel Licences, run by companies which should know better, and you could not tell which ones are illegal without checking.
So how do the authorities find these illegal hotels in Thailand? One way is through the TM30 form, which has to be filled out for every foreigner staying in Thailand so the authorities can track your whereabouts – a handy security measure the government and police use to find and track wanted people. If you stay in a hotel, this form is filed by the hotel for every day you're there, so in theory the computer can find where any foreigner is staying in Thailand. But here is where the problems start for the illegal hotels, and especially illegal Airbnbs: the TM30s are not filled out. This creates voids in people's stays in Thailand, and the authorities don't like not knowing where foreigners are. The computer tells them you're in the country, as your passport gets checked at the airport; it should then know where you're staying, as a TM30 should be filed every day, and when you leave, the file is closed. An illegal Airbnb would not file the TM30, as this would be an admission that it is running a hotel/villa/accommodation, and it would also be liable to pay all the relevant government taxes. So you can see why the PM and government now want all hotels to be legal and registered, and why the illegal ones don't want to be.
Lots of hotels have been closed down already and plenty more will follow, so do not get caught out through no fault of your own. Lots of people have already been caught up in these hotel raids in Thailand and asked to find alternative accommodation at their own expense. The OTAs will not tell you this, as it would affect their profits – and if they did, it might make you consider another OTA, and they don't want to lose your custom and money.
The illegal villas in Samui are just the tip of the iceberg. Most of the online travel agents – Airbnb, Tripadvisor, Booking.com, Agoda and Expedia, to name but a few – advertise these illegal hotels and villas without the public being aware that most do not have a Hotel Licence to operate legally in Thailand.
We think these OTAs are deliberately deceiving the public for their own financial gain. They couldn't care less if the property where you book your holiday is raided or closed down before or even during your stay, and with most bookings made six months to a year in advance, this also traps the owners and proprietors of the unlicensed properties in their commitments to their OTA. It's a vicious circle, and when confronted about their promotion of these illegal hotels, the OTAs will frequently say 'we are just a platform, we do not know if the hotels are illegal or not, it's not our responsibility' – and even hide behind CDA 230, a computer law invented years ago to allow freedom of speech on the internet.
We believe this to be wrong and deceptive to the public, so in Thailand we have created a website especially so people can check whether their potential dream holiday property is legal and licensed: https://mylegalhotel.com/. This website is totally free to use and will hopefully save you the hassle of booking a potentially illegal hotel.
We have also created the website https://www.myillegalhotel.com/ for all you need to know about the illegal hotels in Thailand with the laws, exemptions and other handy facts about the Thailand Hotel Act including how to get a Hotel Licence and how you can list your property legally. It also includes information on relevant taxes, insurances and visas needed to run a business in Thailand legally.
At the following social media accounts you can see what trouble these giant OTAs are causing in Thailand and throughout the world, with plenty of news about the hotel raids in Thailand and the holidays ruined for innocent people:
https://twitter.com/myillegalhotel
https://www.facebook.com/ThailandHotelAct/
https://www.instagram.com/myillegalhotel/
https://www.facebook.com/myillegalhotel/
https://www.youtube.com/user/yaptonsam
https://www.facebook.com/ThailandHotelLicence
https://www.linkedin.com/groups/12126496/
https://www.facebook.com/Universal-Hotel-Act
https://www.facebook.com/groups/ThailandHotelAct/
We would like to help travellers to Thailand, so we have created this website for anyone to use for FREE. Airbnb, Booking.com, Tripadvisor, Agoda, Hotels.com and all the other online travel agents will not tell you if you're booking a legal or ILLEGAL accommodation, but they're happy to profit from these illegal hotels at your expense.
We have also created https://universalbnb.com/ for all your travel needs, specifically covering Australia, New Zealand, Hong Kong, Singapore, Italy, France, Germany, Portugal, Spain, Great Britain, China, Japan, Vietnam and Thailand. Again, it's all FREE to use for hosts and travellers.
https://universalbnb.com/ is also for hosts to list their properties if they are LEGAL and follow the country's laws. We will not entertain ILLEGAL hotels and would never promote one, unlike all the other online travel agents.
We also give you the choice of booking an illegal hotel but that choice is down to you to make.........
Below is a Hotel Licence from Thailand. All properties offering daily and weekly rentals in Thailand should have one – if not, it's an illegal hotel.
Tumblr media
goldievmb · 8 years ago
Text
They are rallying in dc to amend this today WATCH: http://bit.ly/2Erd6yW JOIN US: We need to urge Congress to amend #CDA230 on National Human Trafficking Awareness Day. http://p2a.co/1LTNEsS #ListenToSurvivors #IamJaneDoe #PassSESTA #TIMESUP @worldweus @sffny This PSA underscores the urgency to clarify and update Section 230 of the Communications Decency Act in order to better disrupt online sex trafficking of children… (RSS generated with FetchRss) from Jennifer Lawrence on Facebook http://ift.tt/2muJH0o
privacyeliberta · 5 years ago
Text
Twitter and Trump: the responsibilities of social media platforms
Twitter recently flagged some of Trump's tweets as misleading, immediately sparking a debate that puts a magnifying glass on the influence social networks can have on a country's democratic life and reopens the discussion about platform responsibility.
The main social networks. Photo from Pixabay.
The facts
Twitter had long since announced that it would commit to fighting fake news, as can be seen from this article in The Print from February 21, but only at the end of May did it begin applying that policy in the United States, flagging two of Trump's posts: one, which spoke of alleged mail-in ballot fraud, was marked as misleading, and another, about the protests, was hidden because it violated Twitter's policy on glorifying violence.
  ....These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!
Flagged tweet number one
There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone.....
The tweet flagged with a fact check
....Twitter is completely stifling FREE SPEECH, and I, as President, will not allow it to happen!
Trump lashes out at Twitter
The situation quickly flared up, with Trump threatening to shut Twitter down and many newspapers speaking of "censorship" of the president. Shortly afterwards, the White House released an Executive Order against online censorship that would modify the so-called provision 230 – but what is that?
The 26 words that created the internet
Section 230 of the Communications Decency Act of 1996 is a fundamental part of American internet law, and it states the following:
From Wikipedia: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
What does this mean? In practice, it asserts that online platforms are not to be considered publishers – that is, it is an exemption from liability for what other users say on the platform. It further adds:
“The statute in §230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the removal or moderation of third-party material they deem obscene or offensive, even of constitutionally protected speech, as long as it is done in good faith.”
This allows platforms to remove content that violates their terms of use and service, as well as content considered "indecent, obscene, or offensive – even constitutionally protected speech – as long as it is done in good faith."
provision 230
The contents of the executive order
The Executive Order pushed by Trump makes fundamental declarations that will probably curtail the content-removal freedoms that the various online platforms have enjoyed.
“Twitter, Facebook, Instagram, and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see.
As President, I have made clear my commitment to free and open debate on the internet. Such debate is just as important online as it is in our universities, our town halls, and our homes.  It is essential to sustaining our democracy.”
It begins by asserting that online platforms hold enormous power in shaping information about public events, deciding not only what gets removed but also adjusting the visibility of particular posts, and it stresses the platforms' responsibility to guarantee a free environment that safeguards debate and the health of democracies.
It points out that Twitter's flags and warnings have mainly hit politicians on the Republican side, while ignoring more than a few tweets by Democratic figures that could have fallen under the same violations that triggered the flags on Trump's tweets.
It also cites the results of the Tech Bias Reporting Tool, a tool the White House made available in May of the previous year to let users report cases in which they believed a platform had discriminated against them politically. The tool received tens of thousands of reports, mostly from right-leaning users, which the Order presents as evidence that certain platforms do in fact discriminate through their algorithms.
“Prominent among the ground rules governing that debate is the immunity from liability created by section 230(c) of the Communications Decency Act (section 230(c)).  47 U.S.C. 230(c).  It is the policy of the United States that the scope of that immunity should be clarified: the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”
This last passage, which I will comment on here, reveals the intention to amend Section 230 to clarify the cases in which censorship and reduced visibility are permissible, potentially opening the door to a new era on social networks: platforms will either scale back the moderation tools they already have, leaving people free to decide for themselves what is true and what is false (the path Facebook founder Mark Zuckerberg wants to take), or keep refining their algorithms and manual review to make content removal better and more efficient.
Blog post by Giulio Ciccolo
1 note · View note
instapicsil1 · 8 years ago
Photo
Tumblr media
JOIN US: We need to urge Congress to amend #CDA230 on National Human Trafficking Awareness Day. #ListenToSurvivors #IamJaneDoe #PassSESTA #TIMESUP @worldweus @sffny link in bio please regram or share http://ift.tt/2ml57fg
0 notes
violettemeier · 8 years ago
Video
instagram
Stop sex trafficking! Demand that Congress amend #cda230 to stop #sextrafficking in the USA via the internet.
1 note · View note
tqpannie · 8 years ago
Video
instagram
@Regranned from @kristenanniebell - WATCH full video: http://bit.ly/2Erd6yW JOIN US: We need to urge Congress to amend #CDA230 on National Human Trafficking Awareness Day. http://p2a.co/1LTNEsS #ListenToSurvivors #IamJaneDoe #PassSESTA #TIMESUP @worldweus @sffny - #regrann
0 notes
mostlysignssomeportents · 2 years ago
Text
Solving the Moderator's Trilemma with Federation
Tumblr media
The classic trilemma goes: “Fast, cheap or good, pick any two.” The Moderator’s Trilemma goes, “Large, diverse userbase; centralized platforms; don’t anger users — pick any two.” The Moderator’s Trilemma is introduced in “Moderating the Fediverse: Content Moderation on Distributed Social Media,” a superb paper from Alan Rozenshtein of U of Minnesota Law, forthcoming in the journal Free Speech Law, available as a prepub on SSRN:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4213674#maincontent
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/03/04/pick-all-three/#agonism
Rozenshtein proposes a solution (of sorts) to the Moderator’s Trilemma: federation. De-siloing social media, breaking it out of centralized walled gardens and recomposing it as a bunch of small servers run by a diversity of operators with a diversity of content moderation approaches. The Fediverse, in other words.
In Albert Hirschman’s classic treatise Exit, Voice, and Loyalty, stakeholders in an institution who are dissatisfied with its direction have two choices: voice (arguing for changes) or exit (going elsewhere). Rozenshtein argues that Fediverse users (especially users of Mastodon, the most popular part of the Fediverse) have more voice and more “freedom of exit”:
https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty
Large platforms — think Twitter, Facebook, etc — are very unresponsive to users. Most famously, Facebook polled its users on whether they wanted to be spied on. Faced with overwhelming opposition to commercial surveillance, Facebook ignored the poll result and cranked the surveillance dial up to a million:
https://www.nbcnews.com/tech/tech-news/facebook-ignores-minimal-user-vote-adopts-new-privacy-policy-flna1c7559683
A decade later, Musk performed the same stunt, asking users whether they wanted him to fuck all the way off from the company, then ignored the vox populi, which, in this instance, was not vox Dei:
https://apnews.com/article/elon-musk-twitter-inc-technology-business-8dac8ae023444ef9c37ca1d8fe1c14df
Facebook, Twitter and other walled gardens are designed to be sticky-traps, relying on high switching costs to keep users locked within their garden walls which are really prison walls. Internal memos from the companies reveal that this strategy is deliberate, designed to keep users from defecting even as the service degrades:
https://www.eff.org/deeplinks/2021/08/facebooks-secret-war-switching-costs
By contrast, the Fediverse is designed for ease of exit. With one click, users can export the list of the accounts they follow, block and mute, as well as the accounts that follow them. With one more click, users can import that data into any other Fediverse server and be back up and running with almost no cost or hassle:
https://pluralistic.net/2022/12/23/semipermeable-membranes/
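To make that "two clicks" concrete, here is a minimal sketch of what the migration looks like under the hood, assuming a Mastodon-compatible server that exports follows as a CSV and exposes the standard /api/v2/search and /api/v1/accounts/{id}/follow endpoints. The server URL, access token, CSV filename, and the "Account address" column name are all placeholders/assumptions, not a definitive implementation.

```python
# Sketch: re-follow an exported Mastodon following list on a new server.
# Assumes a Mastodon-compatible API; NEW_SERVER, ACCESS_TOKEN and the CSV
# path/column name are placeholders you would supply yourself.
import csv
import requests

NEW_SERVER = "https://new.example.social"   # hypothetical destination instance
ACCESS_TOKEN = "your-api-token-here"        # token created in the new account's settings
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def refollow(handle: str) -> None:
    """Resolve a remote account by its address, then follow it."""
    # Look up the account (resolve=true asks the server to fetch it if unknown).
    r = requests.get(
        f"{NEW_SERVER}/api/v2/search",
        params={"q": handle, "resolve": "true", "type": "accounts"},
        headers=HEADERS,
        timeout=30,
    )
    r.raise_for_status()
    accounts = r.json().get("accounts", [])
    if not accounts:
        print(f"could not resolve {handle}")
        return
    account_id = accounts[0]["id"]
    # Follow the resolved account from the new home server.
    requests.post(
        f"{NEW_SERVER}/api/v1/accounts/{account_id}/follow",
        headers=HEADERS,
        timeout=30,
    ).raise_for_status()
    print(f"now following {handle}")

with open("following_accounts.csv", newline="") as f:
    # The export lists one account address per row, e.g. user@example.social
    for row in csv.DictReader(f):
        refollow(row["Account address"])
```

The point of the sketch is that the portable artifact is just a list of account addresses: any server that speaks the same protocol can turn it back into a working social graph, which is what keeps switching costs low.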
Last month, “Nathan,” the volunteer operator of mastodon.lol, announced that he was pulling the plug on the server because he was sick of his users’ arguments about the new Harry Potter game. Many commentators pointed to this as a mark against federated social media, “You can’t rely on random, thin-skinned volunteer sysops for your online social life!”
https://mastodon.lol/@nathan/109836633022272265
But the mastodon.lol saga demonstrates the strength of federated social media, not its weakness. After all, 450 million Twitter users are also at the mercy of a thin-skinned sysop — but when he enshittifies his platform, they can't just export their data and re-establish their social lives elsewhere in two clicks.
Mastodon.lol shows us how, if you don’t like your host’s content moderation policies, you can exercise voice — even to the extent of making him so upset that he shuts off his server — and where voice fails, exit steps in to fill the gap, providing a soft landing for users who find the moderation policies untenable:
https://doctorow.medium.com/twiddler-1b5c9690cce6
Traditionally, centralization has been posed as beneficial to content moderation. As Rozenshtein writes, a company that can “enclose” its users and lock them in has an incentive to invest in better user experience, while companies whose users can easily migrate to rivals are less invested in those users.
And centralized platforms are more nimble. The operators of centralized systems can add hundreds of knobs and sliders to their back end and twiddle them at will. They act unilaterally, without having to convince other members of a federation to back their changes.
Centralized platforms claim that their most powerful benefit to users is extensive content moderation. As Tarleton Gillespie writes, “Moderation is central to what platforms do, not peripheral… [it] is, in many ways, the commodity that platforms offer”:
https://yalebooks.yale.edu/book/9780300261431/custodians-of-the-internet/
Centralized systems claim that their enclosure keeps users safe — from bad code and bad people. Though Rozenshtein doesn’t say so, it’s important to note that this claim is wildly oversold. Platforms routinely fail at preventing abuse:
https://www.nbcnews.com/nbc-out/out-news/sexual-assault-harassment-bullying-trans-students-say-targeted-school-rcna7803
And they also fail at blocking malicious code:
https://www.scmagazine.com/news/threats/apple-bugs-ios-macos_new_class
But even where platforms do act to “keep users safe,” they fail, thanks to the Moderator’s Trilemma. Setting speech standards for millions or even billions of users is an impossible task. Some users will always feel like speech is being underblocked — while others will feel it’s overblocked (and both will be right!):
https://www.eff.org/deeplinks/2021/07/right-or-left-you-should-be-worried-about-big-tech-censorship
And platforms play very fast and loose with their definition of “malicious code” — as when Apple blocked OG App, an Instagram ad-blocker that gave you a simple feed consisting of just the posts from the people you followed:
https://pluralistic.net/2023/02/05/battery-vampire/#drained
To resolve the Moderator’s Trilemma, we need to embrace subsidiarity: “decisions should be made at the lowest organizational level capable of making such decisions.”
https://pluralistic.net/2023/02/07/full-stack-luddites/#subsidiarity
For Rozenshtein, “content-moderation subsidiarity devolves decisions to the individual instances that make up the overall network.” The fact that users can leave a server and set up somewhere else means that when a user gets pissed off enough about a moderation policy, they don’t have to choose between leaving social media or tolerating the policy — they can simply choose another server that’s part of the same federation.
Rozenshtein asks whether Reddit is an example of this, because moderators of individual subreddits are given broad latitude to set their own policies and anyone can fork a subreddit into a competing community with different moderation norms. But Reddit’s devolution is a matter of policy, not architecture — subreddits exist at the sufferance of Reddit’s owners (and Reddit is poised to go public, meaning those owners will include activist investors and large institutions that might not care about your little community). You might be happy about Reddit banning /r_TheDonald, but if they can ban that subreddit, they can ban any subreddit. Policy works well, but fails badly.
By moving subsidiarity into technical architecture, rather than human policy, the fediverse can move from antagonism (the “zero-sum destructiveness” that dominates current online debate) to agonism, where your opponent isn’t an enemy — they are a “political adversary”:
https://www.yalelawjournal.org/article/the-administrative-agon
Here, Rozenshtein cites Aymeric Mansoux and Roel Roscam Abbing’s “Seven Theses On The Fediverse And The Becoming Of Floss”:
https://test.roelof.info/seven-theses.html
For this to happen, different ideologies must be allowed to materialize via different channels and platforms. An important prerequisite is that the goal of political consensus must be abandoned and replaced with conflictual consensus…
So your chosen Mastodon server “may have rules that are far more restrictive than those of the major social media platforms.” But the whole Fediverse “is substantially more speech protective than are any of the major social media platforms, since no user or content can be permanently banned from the network and anyone is free to start an instance that communicates both with the major Mastodon instances and the peripheral, shunned instances.”
A good case-study here is Gab, a Fediverse server by and for far-right cranks, conspiratorialists and white nationalists. Most Fediverse servers have defederated (that is, blocked) Gab, but Gab is still there, and Gab has actually defederated from many of the remaining servers, leaving its users to speak freely — but only to people who want to hear what they have to say.
This is the true meaning of “freedom of speech isn’t freedom of reach.” Willing listeners aren’t blocked from willing speakers — but you don’t have the right to be heard by people who don’t want to talk to you:
https://pluralistic.net/2022/12/10/e2e/#the-censors-pen
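Defederation is, mechanically, just a server or a user telling its software to stop exchanging posts with another domain. As a rough illustration, here is a sketch of a user-level domain block through a Mastodon-style /api/v1/domain_blocks endpoint; the server URL, token, and blocked domain are placeholders, and instance-wide defederation of the kind applied to Gab would instead be configured by an admin in the server's moderation settings.

```python
# Sketch: block an entire remote domain from one user's account,
# assuming a Mastodon-style /api/v1/domain_blocks endpoint.
import requests

SERVER = "https://my.example.social"     # hypothetical home instance
ACCESS_TOKEN = "your-api-token-here"     # placeholder credential

resp = requests.post(
    f"{SERVER}/api/v1/domain_blocks",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={"domain": "blocked.example"},  # placeholder for the domain being blocked
    timeout=30,
)
resp.raise_for_status()
print("domain blocked: posts, follows and notifications from it will no longer reach this account")
```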
Fediverse servers are (thus far) nonprofits or hobbyist sites, and don’t have the same incentives to drive “engagement” to maximize the opportunities to show advertisements. Fediverse applications are frequently designed to be antiviral — that is, to prevent spectacular spreads of information across the system.
It’s possible — likely, even — that future Fediverse servers will be operated by commercial operators seeking to maximize attention in order to maximize revenue — but the users of these servers will still have the freedom of exit that they enjoy on today’s Jeffersonian volunteer-run servers — and so commercial servers will have to either curb their worst impulses or lose their users to better systems.
I’ll note here that this is a progressive story of the benefits of competition — not the capitalist’s fetishization of competition for its own sake, but rather, competition as a means of disciplining capital. It can be readily complemented by discipline through regulation — for example, extending today’s burgeoning crop of data-protection laws to require servers to furnish users with exports of their follow/follower data so they can go elsewhere.
There’s another dimension to decentralized content moderation that exit and voice don’t address — moderating “harmful” content. Some kinds of harm can be mitigated through exit — if a server tolerates hate speech or harassment, you can go elsewhere, preferably somewhere that blocks your previous server.
But there are other kinds of speech that must not exist — either because they are illegal or because they enact harms that can’t be mitigated by going elsewhere (or both). The most spectacular version of this is Child Sex Abuse Material (CSAM), a modern term-of-art to replace the more familiar “child porn.”
Rozenshtein says there are “reasons for optimism” when it comes to the Fediverse’s ability to police this content, though as he unpacked this idea, I found it much weaker than his other material. Rozenshtein proposes that Fediverse hosts could avail themselves of PhotoDNA, Microsoft’s automated scanning tool, to block and purge themselves of CSAM, while noting that this is “hardly foolproof.”
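PhotoDNA itself is a proprietary perceptual-hashing service, so the following is only a generic sketch of the hash-and-match pattern such screening tools rely on, using an exact cryptographic hash checked against a hypothetical local blocklist file. A real deployment would use a robust perceptual hash, a vetted hash database supplied by an organization such as NCMEC, and a reporting workflow rather than a simple rejection.

```python
# Generic sketch of hash-based media screening (NOT PhotoDNA):
# compare the hash of an uploaded file against a blocklist of known-bad hashes.
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash a file in chunks so large uploads don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_blocked(path: Path, blocklist_path: Path = Path("known_bad_hashes.txt")) -> bool:
    """The blocklist file (hypothetical) holds one lowercase hex digest per line."""
    blocklist = {
        line.strip()
        for line in blocklist_path.read_text().splitlines()
        if line.strip()
    }
    return file_sha256(path) in blocklist

# A real system would quarantine the upload and file a report, not just refuse it.
if is_blocked(Path("upload.jpg")):
    print("upload rejected and flagged for review")
```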
If automated scanning fails, Rozenshtein allows that this could cause “greater consolidation” of Mastodon servers to create the economies of scale to pay for more active, human moderation, which he compares to the consolidation of email that arose as a result of the spam-wars. But the spam-wars have been catastrophic for email as a federated system and produced all kinds of opportunities for mischief by the big players:
https://doctorow.medium.com/dead-letters-73924aa19f9d
Rozenshtein: “There is a tradeoff between a vibrant and diverse communication system and the degree of centralized control that would be necessary to ensure 100% filtering of content. The question, as yet unknown, is how stark that tradeoff is.”
The situation is much simpler when it comes to servers hosted by moderators who are complicit in illegal conduct: “the Fediverse may live in the cloud, its servers, moderators, and users are physically located in nations whose governments are more than capable of enforcing local law.” That is, people who operate “rogue” servers dedicated to facilitating assassination, CSAM, or what-have-you will be arrested, and their servers will be seized.
Fair enough! But of course, this butts up against one of the Fediverse’s shortcomings: it isn’t particularly useful for protecting illegal speech that should be legal, like the communications of sex workers who were purged from the internet en masse following the passage of SESTA/FOSTA. When sex workers tried to establish a new home in the fediverse on a server called Switter, it was effectively crushed.
This simply reinforces the idea that code is no substitute for law, and while code can interpret bad law as damage and route around it, it can only do so for a short while. The best use of speech-enabling code isn’t to avoid the unjust suppression of speech — it’s to organize resistance to that injustice, including, if necessary, the replacement of the governments that enacted it:
https://onezero.medium.com/rubber-hoses-fd685385dcd4
Rozenshtein briefly addresses the question of “filter bubbles,” and notes that there is compelling research that filter bubbles don’t really exist, or at least, aren’t as important to our political lives as once thought:
https://sciendo.com/article/10.2478/nor-2021-0002
Rozenshtein closes by addressing the role policy can play in encouraging the Fediverse. First, he proposes that governments could host their own servers and use them for official communications, as the EU Commission did following Musk’s Twitter takeover:
https://social.network.europa.eu
He endorses interoperability mandates which would require dominant platforms to connect to the fediverse (facilitating their users’ departure), like the ones in the EU’s DSA and DMA, and proposed in US legislation like the ACCESS Act:
https://www.eff.org/deeplinks/2022/04/eu-digital-markets-acts-interoperability-rule-addresses-important-need-raises
To get a sense of how that would work, check out “Interoperable Facebook,” a video and essay I put together with EFF to act as a kind of “design fiction,” in the form of a user manual for a federated, interoperable Facebook:
https://www.eff.org/interoperablefacebook
He points out that this kind of mandatory interop is a preferable alternative to the unconstitutional (and unworkable!) speech bans proposed by Florida and Texas, which limit the ability of platforms to moderate speech. Indeed, this is an either-or proposition — under the terms proposed by Florida and Texas, the Fediverse couldn’t operate.
This is likewise true of proposals to eliminate Section 230, the law that immunizes platforms from civil liability for most speech acts committed by their users. While this law is incorrectly smeared as a gift to Big Tech, it is most needed by small services that can’t possibly afford to monitor everything their users say:
https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
One more recommendation from Rozenshtein: treat interop mandates as an alternative (or adjunct) to antitrust enforcement. Competition agencies could weigh interoperability with the Fediverse by big platforms to determine whether to enforce against them, and enforcement orders could include mandates to interoperate with the Fediverse. This is a much faster remedy than break-ups, which Rozenshtein is dubious of because they are “legally risky” and “controversial.”
To this, I’d add that even for people who would welcome break-ups (like me!) they are sloooow. The breakup of AT&T took 69 years. By contrast, interop remedies would give relief to users right now:
https://onezero.medium.com/jam-to-day-46b74d5b1da4
On Tue (Mar 7), I’m doing a remote talk for TU Wien.
On Mar 9, you can catch me in person in Austin at the UT School of Design and Creative Technologies, and remotely at U Manitoba’s Ethics of Emerging Tech Lecture.
On Mar 10, Rebecca Giblin and I kick off the SXSW reading series.
[Image ID: A trilemma Venn diagram, showing three ovoids in a triangular form, which intersect at their tips, but not in the middle. The ovoids are labeled 'Avoid angering users,' 'Diverse userbase,' 'Centralized platforms.' In the center of the ovoids is the Mastodon mascot. The background is composed of dead Twitter birds on their backs with exes for eyes.]
96 notes · View notes