#Content Moderation
neshamama · 3 months ago
Text
Tumblr media
mature content???
53 notes · View notes
saywhat-politics · 6 months ago
Text
Tumblr media
Meta rolled out a number of changes to its “Hateful Conduct” policy Tuesday as part of a sweeping overhaul of its approach toward content moderation.
Meta announced a series of major updates to its content moderation policies today, including ending its fact-checking partnerships and “getting rid” of restrictions on speech about “topics like immigration, gender identity and gender” that the company describes as frequent subjects of political discourse and debate. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” Meta’s newly appointed chief global affairs officer, Joel Kaplan, wrote in a blog post outlining the changes.
In an accompanying video, Meta CEO Mark Zuckerberg described the company’s current rules in these areas as “just out of touch with mainstream discourse.”
100 notes · View notes
mostlysignssomeportents · 1 year ago
Text
CDA 230 bans Facebook from blocking interoperable tools
Tumblr media
I'm touring my new, nationally bestselling novel The Bezzle! Catch me TONIGHT (May 2) in WINNIPEG, then TOMORROW (May 3) in CALGARY, then SATURDAY (May 4) in VANCOUVER, then onto Tartu, Estonia, and beyond!
Tumblr media
Section 230 of the Communications Decency Act is the most widely misunderstood technology law in the world, which is wild, given that it's only 26 words long!
https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
CDA 230 isn't a gift to big tech. It's literally the only reason that tech companies don't censor anything we write that might offend some litigious creep. Without CDA 230, there'd be no #MeToo. Hell, without CDA 230, just hosting a private message board where two friends get into serious beef could expose you to an avalanche of legal liability.
CDA 230 is the only part of a much broader, wildly unconstitutional law that survived the Supreme Court challenge decided in 1997. We don't spend a lot of time talking about all those other parts of the CDA, but there's actually some really cool stuff left in the bill that no one's really paid attention to:
https://www.aclu.org/legal-document/supreme-court-decision-striking-down-cda
One of those little-regarded sections of CDA 230 is part (c)(2)(b), which broadly immunizes anyone who makes a tool that helps internet users block content they don't want to see.
Enter the Knight First Amendment Institute at Columbia University and their client, Ethan Zuckerman, an internet pioneer turned academic at U Mass Amherst. Knight has filed a lawsuit on Zuckerman's behalf, seeking assurance that Zuckerman (and others) can use browser automation tools to block, unfollow, and otherwise modify the feeds Facebook delivers to its users:
https://knightcolumbia.org/documents/gu63ujqj8o
If Zuckerman is successful, he will set a precedent that allows toolsmiths to provide internet users with a wide variety of automation tools that customize the information they see online. That's something that Facebook bitterly opposes.
Facebook has a long history of attacking startups and individual developers who release tools that let users customize their feed. They shut down Friendly Browser, a third-party Facebook client that blocked trackers and customized your feed:
https://www.eff.org/deeplinks/2020/11/once-again-facebook-using-privacy-sword-kill-independent-innovation
Then, in 2021, Facebook's lawyers terrorized a software developer named Louis Barclay in retaliation for a tool called "Unfollow Everything," which autopiloted your browser to click through all the laborious steps needed to unfollow every account you were subscribed to. Facebook also permanently banned Barclay from its platforms:
https://slate.com/technology/2021/10/facebook-unfollow-everything-cease-desist.html
Now, Zuckerman is developing "Unfollow Everything 2.0," an even richer version of Barclay's tool.
This rich record of legal bullying gives Zuckerman and his lawyers at Knight something important: "standing" – the right to bring a case. They argue that a browser automation tool that helps you control your feeds is covered by CDA 230(c)(2)(b), and that Facebook can't legally threaten the developer of such a tool with liability for violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, or the other legal weapons it wields against this kind of "adversarial interoperability."
Wired's coverage of the case quotes a variety of experts – including my EFF colleague Sophia Cope – who broadly endorse the very clever legal tactic Zuckerman and Knight are bringing to court.
I'm very excited about this myself. "Adversarial interop" – modding a product or service without permission from its maker – is hugely important to disenshittifying the internet and forestalling future attempts to reenshittify it. From third-party ink cartridges to compatible replacement parts for mobile devices to alternative clients and firmware to ad- and tracker-blockers, adversarial interop is how internet users defend themselves against unilateral changes to services and products they rely on:
https://www.eff.org/deeplinks/2019/10/adversarial-interoperability
Now, all that said, a court victory here won't necessarily mean that Facebook can't block interoperability tools. Facebook still has the unilateral right to terminate its users' accounts. They could kick off Zuckerman. They could kick off his lawyers from the Knight Institute. They could permanently ban any user who uses Unfollow Everything 2.0.
Obviously, that kind of nuclear option could prove very unpopular for a company that is the very definition of "too big to care." But Unfollow Everything 2.0 and the lawsuit don't exist in a vacuum. The fight against Big Tech has a lot of tactical diversity: EU regulations, antitrust investigations, state laws, tinkerers and toolsmiths like Zuckerman, and impact litigation lawyers coming up with cool legal theories.
Together, they represent a multi-front war on the very idea that four billion people should have their digital lives controlled by an unaccountable billionaire man-child whose major technological achievement was making a website where he and his creepy friends could nonconsensually rate the fuckability of their fellow Harvard undergrads.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/02/kaiju-v-kaiju/#cda-230-c-2-b
Tumblr media
Image: D-Kuru (modified): https://commons.wikimedia.org/wiki/File:MSI_Bravo_17_(0017FK-007)-USB-C_port_large_PNr%C2%B00761.jpg
Minette Lontsie (modified): https://commons.wikimedia.org/wiki/File:Facebook_Headquarters.jpg
CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/deed.en
246 notes · View notes
tomorrowusa · 7 months ago
Text
Being a content moderator on Facebook can give you severe PTSD.
Let's take time from our holiday festivities to commiserate with those who have to moderate social media. They witness some of the absolute worst of humanity.
More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.
The moderators worked eight- to 10-hour days at a facility in Kenya for a company contracted by the social media firm and were found to have PTSD, generalised anxiety disorder (GAD) and major depressive disorder (MDD) by Dr Ian Kanyanya, the head of mental health services at Kenyatta National hospital in Nairobi.
The mass diagnoses have been made as part of a lawsuit being brought against Facebook’s parent company, Meta, and Samasource Kenya, an outsourcing company that carried out content moderation for Meta using workers from across Africa.
The images and videos, including necrophilia, bestiality and self-harm, caused some moderators to faint, vomit, scream and run away from their desks, the filings allege.
You can imagine what now gets circulated on Elon Musk's Twitter/X, which has ditched most of its moderation.
According to the filings in the Nairobi case, Kanyanya concluded that the primary cause of the mental health conditions among the 144 people was their work as Facebook content moderators as they “encountered extremely graphic content on a daily basis, which included videos of gruesome murders, self-harm, suicides, attempted suicides, sexual violence, explicit sexual content, child physical and sexual abuse, horrific violent actions just to name a few”. Four of the moderators suffered trypophobia, an aversion to or fear of repetitive patterns of small holes or bumps that can cause intense anxiety. For some, the condition developed from seeing holes on decomposing bodies while working on Facebook content.
Being a social media moderator may sound easy, but you will never be able to unsee the horrors which the dregs of society wish to share with others.
To make matters worse, the moderators in Kenya were paid just one-eighth what moderators in the US are paid.
Social media platform owners have vast wealth similar to the GDPs of some countries. They are among the greediest leeches in the history of money.
41 notes · View notes
political-us · 4 months ago
Text
Last Week Tonight with John Oliver - Facebook and Content Moderation
youtube
25 notes · View notes
mannyblacque · 5 months ago
Text
Tumblr media Tumblr media
7 notes · View notes
jcmarchi · 5 months ago
Text
Ganesh Shankar, CEO & Co-Founder of Responsive – Interview Series
New Post has been published on https://thedigitalinsider.com/ganesh-shankar-ceo-co-founder-of-responsive-interview-series/
Tumblr media Tumblr media
Ganesh Shankar, CEO and Co-Founder of Responsive, is an experienced product manager with a background in leading product development and software implementations for Fortune 500 enterprises. During his time in product management, he observed inefficiencies in the Request for Proposal (RFP) process. RFPs are formal documents organizations use to solicit bids from vendors, often requiring extensive, detailed responses. Managing RFPs traditionally involves multiple stakeholders and repetitive tasks, making the process time-consuming and complex.
Founded in 2015 as RFPIO, Responsive was created to streamline RFP management through more efficient software solutions. The company introduced an automated approach to enhance collaboration, reduce manual effort, and improve efficiency. Over time, its technology expanded to support other complex information requests, including Requests for Information (RFIs), Due Diligence Questionnaires (DDQs), and security questionnaires.
Today, as Responsive, the company provides solutions for strategic response management, helping organizations accelerate growth, mitigate risk, and optimize their proposal and information request processes.
What inspired you to start Responsive, and how did you identify the gap in the market for response management software?
My co-founders and I founded Responsive in 2015 after facing our own struggles with the RFP response process at the software company we were working for at the time. Although not central to our job functions, we dedicated considerable time assisting the sales team with requests for proposals (RFPs), often feeling underappreciated despite our vital role in securing deals. Frustrated with the lack of technology to make the RFP process more efficient, we decided to build a better solution.  Fast forward nine years, and we’ve grown to nearly 500 employees, serve over 2,000 customers—including 25 Fortune 100 companies—and support nearly 400,000 users worldwide.
How did your background in product management and your previous roles influence the creation of Responsive?
As a product manager, I was constantly pulled by the Sales team into the RFP response process, spending almost a third of my time supporting sales instead of focusing on my core product management responsibilities. My two co-founders experienced a similar issue in their technology and implementation roles. We recognized this was a widespread problem with no existing technology solution, so we leveraged our almost 50 years of combined experience to create Responsive. We saw an opportunity to fundamentally transform how organizations share information, starting with managing and responding to complex proposal requests.
Responsive has evolved significantly since its founding in 2015. How do you maintain the balance between staying true to your original vision and adapting to market changes?
First, we’re meticulous about finding and nurturing talent that embodies our passion – essentially cloning our founding spirit across the organization. As we’ve scaled, it’s become critical to hire managers and team members who can authentically represent our core cultural values and commitment.
At the same time, we remain laser-focused on customer feedback. We document every piece of input, regardless of its size, recognizing that these insights create patterns that help us navigate product development, market positioning, and any uncertainty in the industry. Our approach isn’t about acting on every suggestion, but creating a comprehensive understanding of emerging trends across a variety of sources.
We also push ourselves to think beyond our immediate industry and to stay curious about adjacent spaces. Whether in healthcare, technology, or other sectors, we continually find inspiration for innovation. This outside-in perspective allows us to continually raise the bar, inspiring ideas from unexpected places and keeping our product dynamic and forward-thinking.
What metrics or success indicators are most important to you when evaluating the platform’s impact on customers?
When evaluating Responsive’s impact, our primary metric is how we drive customer revenue. We focus on two key success indicators: top-line revenue generation and operational efficiency. On the efficiency front, we aim to significantly reduce RFP response time – for many, we reduce it by 40%. This efficiency enables our customers to pursue more opportunities, ultimately accelerating their revenue generation potential.
How does Responsive leverage AI and machine learning to provide a competitive edge in the response management software market?
We leverage AI and machine learning to streamline response management in three key ways. First, our generative AI creates comprehensive proposal drafts in minutes, saving time and effort. Second, our Ask solution provides instant access to vetted organizational knowledge, enabling faster, more accurate responses. Third, our Profile Center helps InfoSec teams quickly find and manage security content.
With over $600 billion in proposals managed through the Responsive platform and four million Q&A pairs processed, our AI delivers intelligent recommendations and deep insights into response patterns. By automating complex tasks while keeping humans in control, we help organizations grow revenue, reduce risk, and respond more efficiently.
What differentiates Responsive’s platform from other solutions in the industry, particularly in terms of AI capabilities and integrations?
Since 2015, AI has been at the core of Responsive, powering a platform trusted by over 2,000 global customers. Our solution supports a wide range of RFx use cases, enabling seamless collaboration, workflow automation, content management, and project management across teams and stakeholders.
With key AI capabilities—like smart recommendations, an AI assistant, grammar checks, language translation, and built-in prompts—teams can deliver high-quality RFPs quickly and accurately.
Responsive also offers unmatched native integrations with leading apps, including CRM, cloud storage, productivity tools, and sales enablement. Our customer value programs include APMP-certified consultants, Responsive Academy courses, and a vibrant community of 1,500+ customers sharing insights and best practices.
Can you share insights into the development process behind Responsive’s core features, such as the AI recommendation engine and automated RFP responses?
Responsive AI is built on the foundation of accurate, up-to-date content, which is critical to the effectiveness of our AI recommendation engine and automated RFP responses. AI alone cannot resolve conflicting or incomplete data, so we’ve prioritized tools like hierarchical tags and robust content management to help users organize and maintain their information. By combining generative AI with this reliable data, our platform empowers teams to generate fast, high-quality responses while preserving credibility. AI serves as an assistive tool, with human oversight ensuring accuracy and authenticity, while features like the Ask product enable seamless access to trusted knowledge for tackling complex projects.
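To make the pattern in this answer concrete, here is a generic, hypothetical sketch of a retrieve-then-draft-then-review flow: vetted, tagged content is pulled first, a placeholder generative step drafts from it, and the result is held for human review. The data structures, function names, and the `generate_draft` stub are illustrative assumptions, not Responsive's actual APIs or implementation.

```python
# Hypothetical sketch of retrieve-then-draft-then-review; not Responsive's real code.
from dataclasses import dataclass


@dataclass
class ContentItem:
    text: str
    tags: set[str]  # hierarchical/topic tags keep the vetted library organized


def retrieve(library: list[ContentItem], question_tags: set[str], k: int = 3) -> list[ContentItem]:
    """Rank vetted answers by tag overlap with the incoming question."""
    ranked = sorted(library, key=lambda item: len(item.tags & question_tags), reverse=True)
    return [item for item in ranked[:k] if item.tags & question_tags]


def generate_draft(question: str, sources: list[ContentItem]) -> str:
    """Stand-in for a generative-model call that drafts a response grounded in the sources."""
    grounding = " ".join(s.text for s in sources)
    return f"DRAFT answer to '{question}', grounded in: {grounding}"


def respond(question: str, question_tags: set[str], library: list[ContentItem]) -> str:
    sources = retrieve(library, question_tags)
    draft = generate_draft(question, sources)
    # Human oversight: nothing is submitted until a person reviews and edits the draft.
    return f"[PENDING HUMAN REVIEW] {draft}"


if __name__ == "__main__":
    library = [
        ContentItem("We encrypt data in transit and at rest.", {"security", "encryption"}),
        ContentItem("Our uptime SLA is 99.9%.", {"sla", "availability"}),
    ]
    print(respond("Describe your encryption practices.", {"security", "encryption"}, library))
```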
How have advancements in cloud computing and digitization influenced the way organizations approach RFPs and strategic response management?
Advancements in cloud computing have enabled greater efficiency, collaboration, and scalability. Cloud-based platforms allow teams to centralize content, streamline workflows, and collaborate in real time, regardless of location. This ensures faster turnaround times and more accurate, consistent responses.
Digitization has also enhanced how organizations manage and access their data, making it easier to leverage AI-powered tools like recommendation engines and automated responses. With these advancements, companies can focus more on strategy and personalization, responding to RFPs with greater speed and precision while driving better outcomes.
Responsive has been instrumental in helping companies like Microsoft and GEODIS streamline their RFP processes. Can you share a specific success story that highlights the impact of your platform?
Responsive has played a key role in supporting Microsoft’s sales staff by managing and curating 20,000 pieces of proposal content through its Proposal Resource Library, powered by Responsive AI. This technology enabled Microsoft’s proposal team to contribute $10.4 billion in revenue last fiscal year. Additionally, by implementing Responsive, Microsoft saved its sellers 93,000 hours—equivalent to over $17 million—that could be redirected toward fostering stronger customer relationships.
As another example of  Responsive providing measurable impact, our customer Netsmart significantly improved their response time and efficiency by implementing Responsive’s AI capabilities. They achieved a 10X faster response time, increased proposal submissions by 67%, and saw a 540% growth in user adoption. Key features such as AI Assistant, Requirements Analysis, and Auto Respond played crucial roles in these improvements. The integration with Salesforce and the establishment of a centralized Content Library further streamlined their processes, resulting in a 93% go-forward rate for RFPs and a 43% reduction in outdated content. Overall, Netsmart’s use of Responsive’s AI-driven platform led to substantial time savings, enhanced content accuracy, and increased productivity across their proposal management operations.
JAGGAER, another Responsive customer, achieved a double-digit win-rate increase and 15X ROI by using Responsive’s AI for content moderation, response creation, and Requirements Analysis, which improved decision-making and efficiency. User adoption tripled, and the platform streamlined collaboration and content management across multiple teams.
Where do you see the response management industry heading in the next five years, and how is Responsive positioned to lead in this space?
In the next five years, I see the response management industry being transformed by AI agents, with a focus on keeping humans in the loop. While we anticipate around 80 million jobs being replaced, we’ll simultaneously see 180 million new jobs created—a net positive for our industry.
Responsive is uniquely positioned to lead this transformation. We’ve processed over $600 billion in proposals and built a database of almost 4 million Q&A pairs. Our massive dataset allows us to understand complex patterns and develop AI solutions that go beyond simple automation.
Our approach is to embrace AI’s potential, finding opportunities for positive outcomes rather than fearing disruption. Companies with robust market intelligence, comprehensive data, and proven usage will emerge as leaders, and Responsive is at the forefront of that wave. The key is not just implementing AI, but doing so strategically with rich, contextual data that enables meaningful insights and efficiency.
Thank you for the great interview. Readers who wish to learn more should visit Responsive.
7 notes · View notes
reading-writing-revolution · 6 months ago
Text
Tumblr media Tumblr media Tumblr media
A thread on Zuckerberg's bullshit with Joe Rogan, America's douchebro.
10 notes · View notes
macmanx · 1 year ago
Text
I've seen things you people wouldn't believe! Attacks on moderators off the shoulder of Orion. I watched misinformation glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like Twitter after Elon.
Tumblr media
35 notes · View notes
lastweeksshirttonight · 5 months ago
Text
youtube
This week, John discusses Meta and Facebook's content moderation - both the company's historical efforts to put the rotting genie back in the bottle and its recent rollback of its safety features - along with the difficulties of content moderation in general, and how to deal with the coming flood of misinformation and slurs. Also, John talks about downloading porn over dial-up, which is obviously important enough for me to mention here.
If you feel you cannot delete your Meta apps, you can visit johnoliverwantsyourraterotica.com to follow steps to make you and your data much less valuable to them.
9 notes · View notes
gwydionmisha · 5 months ago
Text
youtube
Content Moderation: Last Week Tonight with John Oliver (HBO)
7 notes · View notes
tomorrowusa · 5 months ago
Text
youtube
John Oliver did a major piece on Mark Zuckerberg's Meta. The main but not singular focus is content moderation. Zuck's total capitulation to MAGA means that real content moderation is now a thing of the past there.
In his typical way, John does a funny takedown of Mark Zuckerberg. It's worth watching the vid just for those bits.
Near the end he tells people who are still on Facebook, Instagram, and other Meta platforms how to make it less valuable to MAGA Meta Mark. It's your way to "defund Meta".
For your convenience, here's the link...
How to change your settings to make yourself less valuable to Meta
Of course you should just leave Meta entirely. But if you're still Zuck-curious then that's a fair first step. Share that link with people still using Zuck platforms.
One of the things John Oliver also recommends at that link is using the Firefox browser. Firefox by Mozilla is the best general-use browser for privacy, and I use it myself. Chrome is just a vacuum cleaner for data for Google.
27 notes · View notes
lisafication · 1 year ago
Text
for the love of god if anyone reading this ever ends up in a position where they're running a social media site never try Posting Through It as a PR stratagem to address controversy
21 notes · View notes
justinspoliticalcorner · 4 months ago
Text
Sarah Jones at PoliticusUSA:
The power of the broligarchy grows as new reporting comes out that Elon Musk, who runs a government department called DOGE and thus can arguably be seen as the government when he speaks, pressured Reddit CEO Steve Huffman on content moderation.
“Elon Musk pressured Reddit’s CEO on content moderation.”
Not only has Musk been waging a months-plus-long campaign on his social media platform X against content posted on Reddit, but “he was also privately messaging Reddit CEO Steve Huffman,” The Verge reported today, based on sources they cite as familiar with the matter. Musk was angry that after his “Nazi” salute at Trump’s inauguration, some subreddits banned links to X (formerly known as Twitter). When that didn’t change anything, the Verge notes he “posted that Reddit users advocating for violence against Department of Government Efficiency (DOGE) employees had ‘broken the law.’”
In February of this year, the BBC reported that Reddit had “temporarily banned one of its communities - and removed another - after X owner Elon Musk claimed comments made by the site's users about his employees were breaking the law.” This happened after a subreddit that posts funny posts from Elon’s social media platform “posted comments calling for violence against members of the Musk-led Department of Government Efficiency (Doge)” while “responding to reports which suggested some Doge staff have been granted access to sensitive personal information of millions of Americans.”
But it turns out Reddit banned hundreds of comments that did not call for violence or doxxing, according to today’s reporting in The Verge: Shortly after the two CEOs exchanged text messages, Reddit enacted a 72-hour ban on the “WhitePeopleTwitter” subreddit that hosted the thread about DOGE employees, citing the “prevalence of violent content.” The specific thread Musk shared on X was also deleted, including hundreds of comments that didn’t call for violence or doxxing. (So far, Reddit doesn’t appear to have intervened in any moderator decisions to ban X links from the subreddits they oversee.)
In other words, Elon Musk illegally took our private information and then went crying to the CEO of Reddit because people were angry about his theft of their private information. Publicly calling for violence is not okay, and Reddit is a publicly traded business that can set that limit on “free speech” wherever it chooses because it is NOT the government, but it is notable that there are hate subreddits that appear to dox other public figures, and comments plotting against children, that Reddit seemingly does not take action on.
So the real issue here is not that some people called for violence, but that Reddit took action to silence hundreds of commenters who did not call for violence after the head of a government agency contacted their CEO to complain about people who were upset about him breaking into systems and stealing their private information.
[...] Musk also put on that big show of the “Twitter Files,” which got me and a few other journalists written about (shamed?) for calling out those who carried out stenography for a billionaire. One of the false narratives of the Twitter Files was that the government had “illegally coerced Twitter into censoring a 2020 New York Post article about Hunter Biden.” [...] Whatever his reasons, though, the bottom line is this is the Trump administration’s attacks on free speech. They do not respect the fundamental freedom of free speech. They do not respect what makes America great. They want to silence everyone who disagrees with them because they seek the ultimate, unlimited and unchecked power.
Petulant manchild co-President Elon Musk forced Reddit to censor comments critical of Musk and DOGE in the r/WhitePeopleTwitter subreddit. This shows that freedom of speech is eroding in America.
12 notes · View notes
soosxcial · 5 months ago
Text
ai moderation is a helpful tool for handling the massive amounts of content online and catching obvious violations quickly and efficiently. it's fast and can process thousands of posts in the blink of an eye.
but the downside? it often lacks the nuance, context, and understanding of human intent. that’s where human moderation steps in, bringing empathy, insight, and a deeper understanding of cultural differences. the tradeoff is that human review carries its own downside: the risk of traumatizing the moderators themselves.
this is why ai should definitely be used as a first-round filter, flagging content for review, and anything flagged should go through human moderation for a second, more thoughtful look. balancing both is key to keeping online spaces safe without losing the personal touch.
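as a rough illustration of that two-stage setup, here's a minimal python sketch of an ai first-pass filter feeding a human review queue. the scoring function, threshold, and keyword list are hypothetical stand-ins, not any real platform's moderation system.

```python
# minimal sketch of the "ai first-round filter, human second look" idea above.
# the scoring function and threshold are hypothetical stand-ins, not a real moderation model.
from dataclasses import dataclass
from queue import Queue


@dataclass
class Post:
    post_id: str
    text: str


def ai_score(post: Post) -> float:
    """hypothetical first-pass model: returns a violation probability in [0, 1].
    a real system would use a trained classifier, not keyword matching."""
    flagged_terms = {"violence", "abuse"}  # illustrative only
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)


def moderate(post: Post, review_queue: Queue, flag_threshold: float = 0.3) -> str:
    """ai clears the obvious cases quickly; anything flagged goes to a human reviewer."""
    if ai_score(post) >= flag_threshold:
        review_queue.put(post)  # humans bring context, intent, and cultural nuance
        return "needs_human_review"
    return "approved"


if __name__ == "__main__":
    humans: Queue = Queue()
    posts = [
        Post("1", "look at this cute alpaca"),
        Post("2", "a post threatening violence against moderators"),
    ]
    for p in posts:
        print(p.post_id, moderate(p, humans))
    print(f"{humans.qsize()} post(s) waiting for human review")
```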
5 notes · View notes
alpaca-clouds · 1 year ago
Text
Social Media is Nice in Theory, but...
Tumblr media
Something I cannot help but think about is how awesome social media can be - theoretically - and how much it sucks for the most part.
I am a twitter refugee. I came to tumblr after Elmo bought twitter and made the platform a right-wing haven. But, I mean... There is in general an issue with pretty much all social media, right? Like, most people will hate on one platform specifically and stuff, while upholding another platform. But let's be honest... They all suck. Just in different ways.
And the main reasons for them sucking are all the same, right? For one, there is advertising, and with that the need to make the platform advertiser-friendly. But then there is also just the impossibility of properly moderating a platform used by millions of people.
I mean, the advertising stuff is already a big issue. Because... Sure, big platforms need big money because they are hosting just so much stuff in videos, images and whatnot. Hence, duh, they do need to get the money somewhere. And right now the only way to really make enough money is advertising. Because we live under capitalism and it sucks.
And with this comes the need to make everything advertiser-friendly. On one hand this can be good, because it creates incentives for the platform to not host stuff like... I don't know. Holocaust denial and shit. The kinda stuff that makes most advertisers pull out. But on the other hand...
Well, we all know the issue: Porn bans. And not only porn bans, but also policing of anything connected to nude bodies. Especially nude bodies that are perceived to be female. Because society still holds onto those ideas that female bodies need to be regulated and controlled.
We only recently had a big crackdown on NSFW content even on sites that are not primarily advertiser-driven - like Gumroad and Patreon. Because... Well, folks are very interested in outlawing any form of porn. Often because they claim to want to protect children. The truth is of course that they often do quite the opposite. Because driving everyone away from properly vetted websites also means that, on one hand, kids are more likely to come across the real bad stuff. And on the other hand, well... The dingier the websites are that folks consume their porn on, the more likely it is to find stuff like CP and snuff on those sites. Which then gets more attention that way.
But there is also the less capitalist issue of moderating the content. Which is... kinda hard on a lot of websites. Of course, to save money, a lot of the big social media platforms are not really trying. Because they do not want to pay for proper moderators. But that makes it more likely for really bad stuff to happen. Like doxxing and whatnot.
I mean, like with everything: I do think that social media could be a much better place if only we did not have capitalism. But I also think that a lot of how social media is constructed (with the anonymity and stuff, but also things like this dopamine rush when people like your stuff) will not just change if you stop capitalism.
15 notes · View notes