Blog Post #10 - Week 13 (due 4/24)
Tweeting, Protesting, and Policing Free Speech
How did the Patriot Act institutionalize previously illegal surveillance practices, and what are the implications for civil liberties?
The Patriot Act formalized many surveillance practices previously conducted unlawfully, such as unauthorized wiretapping and email monitoring. By reducing the legal standards for obtaining FISA warrants, the Act essentially granted federal agencies near-automatic surveillance powers under vague justifications like “foreign intelligence.” This expansion eroded civil liberties, weakening the Fourth Amendment’s protections against unreasonable searches. As Parenti notes, “the cumulative overall effect of such measures is corrosive of popular democratic rights and traditions” (2003, p. 200), signifying a dangerous normalization of state overreach in the name of national security.
In what ways did government surveillance initiatives like TIA and SEVIS redefine the relationship between the state and ordinary citizens, particularly immigrants?
Programs like Total Information Awareness (TIA) and SEVIS dramatically reshaped the state-citizen relationship by transforming everyday behavior into data points for suspicion and control. SEVIS, for instance, turned universities into surveillance tools, requiring them to report foreign student activities. TIA sought to integrate all aspects of civilian life - from health records to library use - into a massive database. These efforts institutionalized distrust and racial profiling, particularly targeting immigrants and non-citizens. As Parenti argues, such surveillance “subordinate[s] the population” (2003, p. 203), creating a society where being watched becomes the norm, not the exception.
What does the failure of the TIPS program reveal about public attitudes toward mass surveillance, and how did it reflect broader concerns about state power?
The public backlash against the Terrorism Information and Prevention System (TIPS), which aimed to recruit citizens as informants, revealed deep unease about mass surveillance and state overreach. The program’s failure, despite high-level support, underscored Americans’ resistance to becoming instruments of state control, especially in their private lives. As Parenti notes, the program’s premise - a society “where citizens mistrust each other” and rely on unchecked state power - was fundamentally flawed (2003, p. 204). TIPS’ collapse showed that even post-9/11, the public maintained critical limits on how far they would tolerate surveillance, especially when it blurred into authoritarianism.
To what extent does the criminalization of Twitter-based protest coordination reflect broader concerns about the erosion of free speech in digital spaces?
The criminalization of Elliot Madison and Michael Wallschlaeger’s Twitter activity highlights a troubling trend where digital free speech intersects with law enforcement overreach. As Power notes, Madison’s lawyer called the charges “absolutely protected speech” (2010, p. 2). The incident reflects fears that tools used for public assembly are being reframed as criminal threats. The irony is stark when Power recalls that “the State Department asked Twitter to delay scheduled maintenance” during Iranian protests, supporting abroad the same conduct that was punished at home (2010, p. 2). This contradiction suggests a selective application of free speech, especially when dissent challenges domestic authority.
How does the vague language of federal anti-riot statutes allow for subjective and potentially abusive legal interpretations?
Federal anti-riot statutes, such as 18 USC §2101, are criticized for their vagueness and potential to criminalize lawful dissent. Power writes that the law enables prosecution for merely attempting to “organize, promote, encourage, participate in, or carry on a riot” (2010, p. 3), language so broad it risks encompassing peaceful protest coordination. Lawyer Martin Stolar warns this “starts to criminalize dissent, to conflate terrorism with demonstrations” (Power, 2010, p. 3). The undefined scope allows law enforcement to use these laws preemptively or punitively, especially against marginalized or politically unpopular groups, undermining constitutional protections for protest and free expression.
Word Count: 481
Parenti, C. (2003). Fear as Institution: 9/11 and Surveillance Triumphant. In The Soft Cage. Basic Books.
Power, M. (2010). How your Twitter account could land you in jail. Mother Jones. https://www.motherjones.com/politics/2010/03/police-twitter-riots-social-media-activists/
Blog Post #9 - Week 12 (due 4/17)
Power and Privacy Online
What role do private corporations play in expanding state surveillance according to Surveilled?
Surveilled emphasizes that private tech corporations are key players in enabling mass surveillance, often doing the government’s data collection for them. As one expert in the film states, “Private companies collect our data more efficiently than the government ever could” (O’Neill & Peltz, 2020). This cooperation creates a surveillance ecosystem where data from social media, phones, and apps is monetized and shared, sometimes without users’ knowledge or consent. The film critiques the lack of transparency and regulation, showing how surveillance capitalism incentivizes companies to exploit personal data. It raises urgent ethical concerns about privacy, civil liberties, and democratic accountability.
How does Ronan Farrow’s investigative approach in Surveilled enhance the audience’s understanding of surveillance’s personal consequences?
Ronan Farrow’s storytelling in Surveilled humanizes the abstract concept of surveillance by focusing on individuals affected by it. Through intimate interviews, he reveals the psychological toll it takes - especially on whistleblowers, journalists, and activists. One subject shares, “They knew where I went, who I talked to - it felt like I was being hunted” (O’Neill & Peltz, 2020). Farrow’s compassionate yet persistent style highlights how surveillance erodes trust, silences dissent, and breeds fear. By embedding himself in these stories, Farrow connects broader political systems to personal experiences, helping viewers grasp the emotional and societal stakes of living under constant digital scrutiny.
How do virtual hair blogs function as spaces of resistance and self-affirmation for Black women in the face of dominant institutional beauty standards?
Virtual hair blogs operate as a counter-space to institutionalized oppression by affirming the natural Black body and fostering community. As Lee explains, “hair blogs operating as a virtual homeplace have become a site of affirmation, a space to discuss issues of concern, provide support, elevate spirits and also resist hatred” (Lee, 2015, p. 93). These spaces challenge dominant aesthetics by “allow[ing] Black women to (re)formulate their own ideas of the body” (Lee, 2015, p. 92). In resisting the erasure and disciplining of Black hair, these blogs act as healing environments, empowering users to celebrate and normalize natural hairstyles despite external pressures to conform.
To what extent can ethnic online communities (EOCs) balance the tension between profit-making and cultural empowerment, and what risks emerge when profit is prioritized over purpose?
Ethnic online communities face a critical dilemma: commodifying identity for financial gain often compromises cultural authenticity and empowerment. As McLaine notes, “Profit and community make curious bedfellows” (2001, p. 234), and the push for market share can result in “homogenizing and ignoring differences” (McLaine, 2001, p. 236). Sites like BlackPlanet.com may generate ad revenue, but metrics like “page views” or “time on site” cannot fully capture meaningful engagement. Ignoring cultural specificity to appeal broadly may alienate the very users these platforms aim to serve. Thus, sustainable EOCs must prioritize culturally relevant content and user empowerment alongside financial viability.
How does the Gamergate movement reveal deeper issues of gender and privilege in online spaces, beyond its stated concern for ethics in game journalism?
The Gamergate movement, while outwardly about ethics in game journalism, was largely a reaction to shifting demographics and the challenge to white male dominance in gaming culture. Hathaway notes, “They ‘just want to play games,’ without complicating things by discussing how those games portray women and minorities” (Hathaway, 2014). This resistance to change reflects a discomfort with confronting privilege, as projects like Anita Sarkeesian’s critiques “pierced that bubble of privilege” (Hathaway, 2014). The movement’s core was not ethics, but a backlash against inclusivity and evolving gamer identities.
Word Count: 485
Hathaway, J. (2014, October 10). What is Gamergate, and why? An explainer for non-geeks. Gawker.
Lee, L. (2015). Virtual Homeplace: (Re)Constructing the Body through Social Media.
McLaine, S. (2001). Ethnic Online Communities: Between Profit and Purpose. In D. Gauntlett (Ed.), Web.Studies: Rewiring Media Studies for the Digital Age. Arnold Publishers.
O'Neill, M., & Peltz, P. (Directors). (2020). Surveilled [Film]. HBO Documentary Films.
Blog Post #8 - Week 11 (due 4/10)
Digital Resistance and Decolonial Power
How does Black Twitter function as a form of resistance against racial bias in mainstream media, and what does this reveal about the potential of digital spaces to challenge systemic oppression?
Black Twitter serves as a powerful site of resistance where users reframe narratives imposed by biased media coverage. By employing “textual poaching,” users subvert mainstream portrayals of Black individuals, as seen in the viral #APHeadlines hashtag. This movement exemplified “textual poaching as resistance,” using satire to call out implicit bias: “Through facetious comedy and jokes Twitter users were able to create a space that allowed them to voice their anger” (Lee, 2017). Digital platforms like Black Twitter demonstrate the potential for marginalized voices to reclaim narratives and influence public discourse, proving that virtual spaces can serve as modern battlegrounds for justice.
To what extent can online activism challenge traditional power structures, and how effective are Internet-based methods compared to Internet-enhanced ones in generating tangible political change?
Online activism has proven capable of bypassing traditional power structures by creating alternative networks of information and mobilization. While Internet-enhanced activism raises awareness efficiently, Internet-based activism often forces direct confrontation. For example, the Electronic Disturbance Theater’s “FloodNet” disrupted government websites, representing “electronic civil disobedience” aimed at drawing attention, not causing harm (Vegh, 2003). However, as Vegh notes, “the most successful online advocacy campaigns seem to be the ones that combine the different types of lobbying and mobilization” (Vegh, 2003), suggesting that hybrid approaches are more effective than purely digital actions in achieving long-term political change.
To what extent does Christian Fuchs challenge the idea of “Twitter revolutions” in his critique of Castells’ view on social media’s role in movements like the Arab Spring and Occupy, and what implications does this have for understanding the true drivers of collective action?
Fuchs critically challenges Castells’ techno-deterministic view by arguing that revolutions are not caused by the Internet itself but by people embedded in societal struggles. He states, “Castells’ model is simplistic: social media results in revolutions and rebellions” (Fuchs, 2014). Instead, Fuchs emphasizes that media effects are shaped by context, power relations, and strategies. This critique implies that understanding collective action requires attention to underlying political and economic structures, not just communication technologies. The notion of “Twitter revolutions” overlooks the human agency and systemic conditions that drive movements, thus oversimplifying the complexities of real-world social change.
How did social media transform the nature of Indigenous-led activism during the #NoDAPL movement, particularly in amplifying Indigenous voices and disrupting dominant narratives?
Social media transformed #NoDAPL from a local protest into a global movement by amplifying Indigenous voices that had long been marginalized. Nicholet A. Deschine Parkhurst (2021) notes, “social media activism of the #NoDAPL movement extended beyond recognition and consciousness raising to garnering solidarity in virtual and physical spaces.” Through livestreams, posts, and check-ins, Native people and allies created a counter-narrative to mainstream media’s erasure or distortion of their struggle. This use of digital platforms allowed for real-time mobilization and education, demonstrating that social media is not just a tool of visibility, but of resistance and Indigenous sovereignty.
How does the framing of online activism as disruption rather than mere connective action reshape our understanding of Indigenous resistance in the #NoDAPL movement?
Framing online activism as disruption rather than just connective action foregrounds its political potency in Indigenous resistance. Deschine Parkhurst (2021) critiques the reduction of Indigenous digital efforts to apolitical networking, stating, “describing these acts simply as connective actions makes them almost devoid of political significance.” Instead, these digital traces serve as tools of decolonization, challenging settler colonial structures. By disrupting mainstream narratives and state systems - from the courts to the Army Corps of Engineers - #NoDAPL reclaims Indigenous agency and redefines activism as rooted in sovereignty, not settler validation. This framing affirms that digital activism can enact real-world systemic change.
Word Count: 493
Deschine Parkhurst, N. A. (2021). From #Mniwiconi to #StandwithStandingRock. In J. B. Hurlbut & M. L. Gray (Eds.), Connected activism: Indigenous uses of social media for political change. University of Washington Press.
Fuchs, C. (2014). Social media: A critical introduction. SAGE Publications.
Lee, L. A. (2017). Black Twitter: A response to bias in mainstream media. Social Sciences, 6(1), 26. https://doi.org/10.3390/socsci6010026
Vegh, S. (2003). Classifying forms of online activism: The case of cyberprotests against the World Bank. In M. McCaughey & M. D. Ayers (Eds.), Cyberactivism: Online activism in theory and practice. Routledge.
#TCL25 #DigitalResistance #NoDAPL #DecolonizeTheInternet #BlackTwitterPower #IndigenousSovereignty #OnlineActivism
Blog Post #7 - Week 10 (due 3/20)
Race, Technology, and Online Abuse
How does Seeking Mavis Beacon explore the intersection of race, exploitation, and technology in the context of Mavis Beacon’s story, and what does this reveal about systemic inequality in the tech industry?
Seeking Mavis Beacon uncovers how Mavis Beacon’s image was exploited without her consent or compensation, reflecting broader systemic inequality in the tech industry. The documentary reveals that the game’s creator used her likeness to sell a widely successful product while denying her recognition or profits, highlighting the industry’s exploitative practices, especially toward women of color. As the film states, “Mavis was told it would be a photoshoot, not knowing her likeness would be used in a highly profitable game for years” (Jones, 2024). This raises questions about how marginalized groups are often erased or undervalued in spaces where they could otherwise thrive. The documentary emphasizes the ongoing racial and gender disparities within the tech world.
How does the internet’s accessibility and reliability, as described by Elin, contribute to both positive civic engagement and the spread of radical ideologies?
Elin highlights the internet’s positive role in fostering communication and networking, describing it as “cheap, fast, and dependable” (Elin, p. 13). This accessibility makes it an essential tool for civic engagement, enabling marginalized groups and activists to organize and advocate for social change. However, the same qualities that make the internet beneficial also facilitate the spread of radical ideologies, allowing fringe groups to connect and amplify their messages. This duality underscores the internet’s power to both unite communities for positive change and serve as a breeding ground for extremism.
How has the internet transformed white supremacist social movements from localized efforts to global networks, and what implications does this shift have for combating hate-based ideologies?
The internet has enabled white supremacist movements to transcend national boundaries and evolve into global networks. Daniels (2025) emphasizes that “the Internet facilitates the formation of a transnational, explicitly racist white identity” (p. 43). This transformation allows disparate groups to unify under shared ideologies, amplifying their influence. The global nature of these movements complicates efforts to counter hate, as tactics effective in one region may not apply universally. Addressing this challenge requires international collaboration, online monitoring, and community resilience to combat the global spread of supremacist ideologies.
How did Twitter’s response to Leslie Jones’s online abuse highlight the platform’s shortcomings in handling hate speech, and what does this incident reveal about the challenges of moderating harmful content on social media?
Twitter’s response to Leslie Jones’s online abuse revealed the platform’s inadequacies in addressing hate speech. Despite acknowledging the severity of the harassment and taking action against some accounts, Twitter admitted to falling short of effectively preventing and managing such behavior. As Twitter stated, “We know many people believe we have not done enough to curb this type of behavior on Twitter. We agree” (Silman, 2016). This incident underscored the challenges of moderating harmful content, as abusive tweets continued despite enforcement efforts. Jones’s experience demonstrated the gap between policy intentions and practical enforcement, raising questions about the efficacy of social media regulations in curbing targeted harassment.
How does Danielle Keats Citron differentiate between traditional stalking and cyber harassment, and what impact does the anonymity of the internet have on these behaviors?
Citron (2014) explains that while traditional stalking involves offline behaviors such as vandalism or physical assault, cyber harassment “involves threats of violence, privacy invasions, reputation-harming lies, calls for strangers to physically harm victims, and technological attacks” (p. 3). The internet exacerbates the impact by extending the lifespan of abusive content, as posts can be indexed and retrieved long after they are first published, causing enduring harm. The anonymity of the internet amplifies these behaviors, as perpetrators can act without fear of identification or direct consequences, making it easier to target victims repeatedly.
Word Count: 488
Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.
Daniels, J. (2025). White Supremacist Social Movements Online and In a Global Context. In Social Movements in the Information Age (Chapter 4).
Elin, L. (2025). The Radicalization of Zeke Spier: How the internet contributes to civic engagement and new forms of social capital. Journal of Digital Sociology.
Jones, J. (2024). Seeking Mavis Beacon: A documentary on exploitation and racial representation in technology. [Film].
Silman, A. (2016). A timeline of Leslie Jones's horrific online abuse.
Blog Post #6 - Week 8 (due 3/13)
Feminism, Technology, and Gender Dynamics in Digital Media and Beyond
How does Haraway’s concept of the cyborg challenge traditional boundaries between human and machine, and what implications does this have for feminist theory?
Haraway’s concept of the cyborg fundamentally challenges the binary distinctions between human and machine, nature and culture. She asserts that the cyborg is a “hybrid of machine and organism” that resists traditional dualisms (Haraway, 1991, p. 354). By rejecting essentialist notions of identity, Haraway envisions a post-gender world that transcends rigid categorizations. This perspective reshapes feminist theory by emphasizing fluidity and intersectionality, as opposed to fixed identities. Through this lens, feminist politics can become more inclusive, embracing diversity and multiplicity rather than reinforcing exclusionary practices rooted in biological determinism.
In what ways does Haraway use the metaphor of the cyborg to critique capitalism and patriarchy, and how does this critique relate to socialist-feminism?
Haraway employs the cyborg metaphor to critique capitalism and patriarchy by illustrating how technology and science are entangled with capitalist production and patriarchal control. As she writes, “It is oppositional, utopian, and completely without innocence” (Haraway, 1991, p. 354). She critiques the way science and technology serve capitalist interests, creating fragmented identities that perpetuate oppression. However, the cyborg also symbolizes resistance and transformation, embodying the potential to subvert these power structures. Socialist-feminism benefits from this critique by adopting the cyborg as a figure of coalition and solidarity, advocating for collective action that transcends identity boundaries. Haraway envisions a politics that is not about purity but about coalition and transformation.
How does the digital era transform the role and participation of women in white supremacist movements compared to the print-only era, and what does this shift indicate about gender dynamics within these movements?
The digital era has transformed women’s participation in white supremacist movements by providing them with more open and interactive platforms, like Stormfront.org, where they actively contribute to discourse. Unlike the print-only era, where men held dominant roles and women played more symbolic or supportive functions, women now have designated spaces to express their ideologies. Daniels (2009) notes that “white supremacy in the digital era… offers more openness and dissent within white supremacist discourse” (p. 62). This shift highlights an evolving gender dynamic that simultaneously maintains male dominance while giving women a more pronounced voice.
What are the implications of white supremacist websites creating “ladies-only” forums on the perception of gender roles within these movements?
The creation of “ladies-only” forums on white supremacist websites like Stormfront.org reveals a contradiction between promoting male dominance and acknowledging women’s increasing involvement. These spaces reflect both an attempt to preserve traditional gender roles and an acknowledgment of women as active participants. Daniels (2009) asserts that these forums “illustrate both the growing engagement of women in white supremacy and… the male dominance that is central to both” (p. 62). Thus, the forums both challenge and reinforce traditional gender norms, reflecting tension between maintaining patriarchal authority and integrating women’s participation.
How does the representation of femininity in digital media, like Ananova, reflect cultural and technological ideologies?
The representation of femininity in digital media, such as Ananova, reflects both cultural and technological ideologies. Ananova, designed as a friendly and attractive female figure, exemplifies how technology can be gendered, often reinforcing traditional gender roles. These simulations are often crafted to appeal to masculine ideals, positioning the female body as both desirable and “natural” within digital spaces. As O’Riordan notes, these digital personae “personalize the impersonal” technology, creating an emotional and psychological connection with users (O’Riordan, 2000, p. 245). Such figures signify the blending of femininity with technology in ways that reinforce societal norms and cultural expectations.
Word Count: 480
Daniels, J. (2009). Gender, white supremacy, and the internet. In Cyber Racism: White Supremacy Online and the New Attack on Civil Rights (pp. 61–90). Rowman & Littlefield Publishers.
Haraway, D. J. (1991). A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century. In Simians, Cyborgs, and Women: The Reinvention of Nature (pp. 354-359). Routledge.
O'Riordan, K. (2000). Gender, technology and visual cyberculture. In Cyberculture: The politics of the internet (pp. 243-254). Sage Publications.
#TCL25 #CyborgFeminism #VirtualFemininity #DigitalFeministRevolution #CulturalIdeologiesInTech #WomenInTheDigitalWorld
Blog Post #5 - Week 7 (due 3/6)
Exploring Bias, Racism, and Social Validation in the Digital Age
How do social media platforms perpetuate racial biases through their design and algorithms?
Social media platforms often perpetuate racial biases through their design and underlying algorithms, which reflect the biases of their creators and the broader society. Senft and Noble (2014) highlight that “racism is part of life on the internet… it is now a global reality” (p. 112). For instance, search algorithms may prioritize content that aligns with prevailing societal biases, resulting in the marginalization of minority voices. This perpetuation occurs because algorithms are not neutral; they are designed by individuals who may unconsciously embed their own biases into these systems. Consequently, users are often presented with information that reinforces existing racial prejudices, thereby maintaining systemic inequities within digital spaces.
How does Ruha Benjamin critique the notion that technology is naturally unbiased, and what examples does she use to support her argument?
Benjamin (2019) challenges the notion that technology is naturally unbiased, arguing that “coded inequity makes it easier and faster to produce racist outcomes” (p. 48), even when no one is explicitly racist. She illustrates this through examples like biased healthcare algorithms that allocate fewer resources to Black patients, reinforcing systemic disparities. Such technologies claim objectivity but reflect the social values of their creators. By embedding discrimination into automated systems, technology amplifies racial injustices under the guise of efficiency, making systemic biases less visible and harder to challenge. Addressing this requires ethical design and accountability.
How does the presence of white supremacy online challenge traditional understandings of race, racism, and civil rights in the digital era?
The presence of white supremacy online complicates common perceptions of race and racism by demonstrating how digital spaces are not neutral but rather sites of ideological struggle. Daniels (2009) highlights how white supremacists have used the Internet to advance their political goals, often through sophisticated and deceptive means such as cloaked websites. These sites “appear to be legitimate sources of civil rights information yet actually disguise - or cloak - white supremacist content several page-layers down” (p. 6). This demonstrates how digital media can be weaponized to undermine racial equality and distort historical truths. Consequently, critical digital literacy is essential for recognizing and challenging online white supremacy.
How does the social rating system in Nosedive reflect real-world anxieties about social media validation?
The episode highlights the dangers of a society obsessed with external validation, where individuals are judged solely by their social ratings. Lacie’s desperation to increase her score reflects modern anxieties about online approval. Her insistence that “I’m on my way up. I can feel it” (Brooker & Wright, 2016) shows that her worth is entirely dependent on others’ perceptions. This mirrors real-life issues where people curate their lives for likes and followers, leading to anxiety and inauthentic interactions. The episode serves as a warning about the psychological and societal consequences of excessive reliance on digital validation.
In what ways does Nosedive critique the illusion of a “perfect” life portrayed on social media?
The episode critiques the fabricated nature of online personas, as characters perform forced positivity to maintain high ratings. Lacie’s friend Naomi exemplifies this, saying, “I mean, 4.2 is still good, but I want to keep my circle… high” (Brooker & Wright, 2016), showing how friendships are based on status rather than authenticity. This reflects how social media encourages people to project unrealistic perfection while hiding struggles. Lacie’s breakdown demonstrates the unsustainable pressure of maintaining an idealized self. The episode ultimately reveals the emptiness behind digital perfectionism, urging viewers to embrace authenticity over superficial online approval.
Word Count: 493
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
Brooker, C. (Writer), & Wright, J. (Director). (2016). Nosedive (Season 3, Episode 1) [TV series episode]. In C. Brooker (Creator), Black Mirror. Netflix.
Daniels, J. (2009). Cyber Racism: White Supremacy Online and the New Attack on Civil Rights. Rowman & Littlefield Publishers.
Senft, T. M., & Noble, S. U. (2014). Race and social media. In J. Hunsinger & T. M. Senft (Eds.), The social media handbook (pp. 107-125). Routledge.
#TCL25 #DigitalBias #SocialMediaCritique #OnlineRacism #NosediveAnalysis #SocialMediaReality #MediaManipulation
Blog Post #4 - Week 6 (due 2/27)
Race and Power in the Digital Age
How does the invisibility of race in cyberspace impact online discussions about racial identity and discrimination?
The invisibility of race in cyberspace can lead to both the erasure of racial identity and the silencing of discussions about discrimination. Kolko, Nakamura, and Rodman (2000) explain that online interactions often treat race as a binary - either “an invisible concept because it’s simultaneously unmarked and undiscussed, or … a controversial flashpoint for angry debate and overheated rhetoric” (p. 1). Without visible racial markers, many assume a “race-neutral” space, which can reinforce dominant narratives that disregard systemic racism. However, as the authors argue, “whether we like it or not, in the real world, race does matter a great deal” (p. 4). This invisibility can allow for both progressive identity exploration and the perpetuation of exclusionary practices.
How does the game Shadow Warrior use humor and parody to justify racial stereotypes, and what are the consequences of this approach?
Shadow Warrior uses humor and parody to disguise its racial stereotypes, making them appear harmless or acceptable. The game’s creators claim it is a parody of kung-fu films, yet it “continued to promote its racist and sexist agendas” (Ow, 2000, p. 54). This justification allows players to engage with these stereotypes without critically questioning their impact. As Ow states, the game presents “a colonizing narrative where conquest and exploration, rather than upholding justice, become the primary goals” (p. 58). Ultimately, the game’s humor masks deeper issues of racism and exclusion in digital spaces.
How does Pokemon GO reflect real-world racial and social divisions?
Pokemon GO may seem like a fun, lighthearted game, but it unintentionally highlights racial and economic inequalities. The game requires players to move through different neighborhoods, yet that movement carries unequal risks depending on who the player is. As Omari Akil noted, “Let’s just go ahead and add Pokemon GO to the extremely long list of things white people can do without fear of being killed” (p. 1). Minority players, particularly Black and Asian Americans, often face suspicion or even violence in certain areas. This shows that digital games are not separate from real-world issues but actually reinforce them.
How does the Internet make white supremacist messages more accessible to the public, and why is this a concern?
The Internet makes white supremacist content easy to find and share, which increases its influence. As David Duke states, the Internet allows white supremacists to spread their message faster than ever before, giving "millions access to the truth that many didn't even know existed" (Daniels, 2009, p. 1). This is dangerous because anyone, including young people, can come across these ideas, sometimes without realizing they are reading racist content. Cloaked websites, which disguise hate speech as educational resources, "further obscure defining what constitutes white supremacy in the digital era" (Daniels, 2009, p. 6). Because search engines often present all websites as equally credible, people may struggle to tell accurate historical information from white supremacist propaganda.
Why is it difficult to track and limit white supremacist content online?
Tracking and limiting white supremacist content online is challenging because of the strategies these groups use to hide their messages and identities. Many white supremacists use "difficult-to-detect authorship and hidden agendas" to spread their ideology while appearing as legitimate sources (Daniels, 2009, p. 4). Additionally, estimating the number of hate sites is difficult because counts depend on the "ownership, residence, and server location of a domain name - all three of which can be different" (p. 6). The U.S. also protects many of these websites under free speech laws, making it harder to remove them even when their content promotes racism and misinformation.
Word Count: 515
Daniels, J. (2009). Cyber Racism: White Supremacy Online and the New Attack on Civil Rights. Rowman & Littlefield Publishers.
Kolko, B. E., Nakamura, L., & Rodman, G. B. (2000). Race in cyberspace: An introduction. Routledge.
Nakamura, L. (2016). The Race Card: Ludo-Orientalism and the Gamification of Race.
Ow, J. A. (2000). The rape of digital geishas and the colonization of cybercoolies in 3D Realms' Shadow Warrior. In B. E. Kolko, L. Nakamura, & G. B. Rodman (Eds.), Race in Cyberspace (pp. 51-72). Routledge.
Blog Post #3 - Week 4 (due 2/13)
Racial Bias and Technology
How does Anna Everett’s idea of “black technophilia” challenge common beliefs about the digital divide, and how does Black history with technology add complexity to this view?
Everett challenges the common idea of the "digital divide" by showing that African Americans have often been early users of new technology, rather than being left behind. She highlights Black "technolust," stating that "the swelled ranks of black people throughout the African diaspora connecting to the Internet, particularly to the World Wide Web, have forced a new reckoning with the rapidly changing configuration of the new electronic frontier" (Everett, 2002, p. 133). This shifts the view of Black digital participation from being behind to being actively involved in tech progress. She also compares this to past technologies like the printing press, radio, and film, showing that Black communities have embraced and shaped technological advancements despite facing barriers. This challenges the idea that technology gaps exist only because of lack of access and instead highlights the creativity and adaptability of Black communities in using new media.
How does Everett’s view of the digital public sphere challenge traditional ideas of the Habermasian public sphere, and how does the Internet create new spaces for Black communities to share their voices?
Everett challenges the Habermasian idea of the public sphere by showing how Black communities have been excluded from mainstream discussions but have still created their own spaces for expression. She references Houston A. Baker Jr., who says Black Americans “are drawn to the possibilities of structurally and affectively transforming the founding notion of the bourgeois public sphere into an expressive and empowering self-fashioning” (Everett, 2002, p. 141). The Internet helps Black counterpublics form, as seen with the Million Woman March, where “working and so-called ‘under-class’ black women made ingenious uses of the new technology to further their own community uplift agendas” (Everett, 2002, p. 131). This shows how the Internet can empower marginalized voices beyond traditional media limitations.
How does Ruha Benjamin define the “New Jim Code,” and how does it continue systemic racial inequalities in technology?
Benjamin defines the “New Jim Code” as “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era” (Benjamin, 2019, p. 23). This concept highlights how racism is not eradicated by technological advancement but is instead embedded within algorithms and automated systems. For example, predictive policing and biased AI hiring tools reflect historical inequalities under the guise of neutrality. These systems entrench racial hierarchies while appearing race-neutral, making discrimination harder to challenge. Thus, the New Jim Code is a continuation of systemic oppression through digital means.
How does Benjamin challenge the idea that technology is neutral, and what are the risks of ignoring its racial biases?
Benjamin challenges the assumption of technological neutrality by arguing that the New Jim Code "encompasses a range of discriminatory designs - some that explicitly work to amplify hierarchies, many that ignore and thus replicate social divisions" (Benjamin, 2019, p. 26). She emphasizes that technology does not simply reflect society but actively shapes and reinforces systemic biases. Ignoring the racialized design of technology allows injustices to be obscured under claims of efficiency and objectivity. For example, algorithmic hiring systems often replicate historical patterns of exclusion, disproportionately disadvantaging Black applicants. By assuming technology is impartial, society enables these systems to operate unchecked, making discrimination even more difficult to detect and challenge.
How does Safiya Umoja Noble show that search engines reinforce racial and gender biases, and why does this matter for society?
Noble argues that search engine algorithms reinforce racial and gender biases by privileging commercial interests over the concerns of marginalized communities. She critiques the notion that search engines are neutral, showing how Google's algorithm prioritizes content based on profit rather than ethical considerations. For example, early Google searches for "Black girls" returned hypersexualized content, revealing why "we need a full-on reevaluation of the implications of our information resources being governed by corporate-controlled advertising companies" (Noble, 2018, p. 34). This bias shapes public perception and access to knowledge, reinforcing systemic inequalities. Without accountability, these technologies continue to reproduce harmful stereotypes that disproportionately affect marginalized communities.
Word Count: 576
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.
Everett, A. (2002). The revolution will be digitized: Afrocentricity and the digital public sphere. Social Text, 71(20.2), 125-146.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Blog Post #2 - Week 3 (due 2/6)
Cyberfeminism, Technology, and Digital Inequality
Does cyberfeminism help fight gender and racial inequality online, or does it sometimes reinforce these inequalities?
Cyberfeminism aims to create online spaces for gender equality, but it can also overlook racial differences. Some cyberfeminist ideas assume a white, middle-class perspective, leaving out the voices of women of color. Fernandez and Wilding note that much of cyberfeminist writing is targeted toward an "educated, white, upper-middle-class, English-speaking" audience, which can unintentionally exclude others (Daniels, 2009, p. 104). This highlights the need for a more inclusive approach that considers race, class, and access to technology. Additionally, digital activism led by women of color often operates outside mainstream cyberfeminist discourse, reflecting a broader need for intersectionality. While some platforms provide opportunities for marginalized voices, others replicate offline hierarchies, limiting real progress. By expanding cyberfeminism to actively address these exclusions, the movement can become more effective in advocating for digital equity.
Can people truly escape gender and racial identity online, or do digital spaces still reflect real-world inequalities?
Some early cyberfeminists believed that the internet allowed people to leave behind gender and racial identities. However, research shows that digital spaces often reflect real-world inequalities. Daniels explains that instead of changing identities online, people "actively seek out online spaces that affirm and solidify social identities along axes of race, gender, and sexuality" (Daniels, 2009, p. 110). Additionally, many online platforms use algorithms that reinforce existing biases, making marginalized identities more visible and subject to scrutiny. While some individuals may feel a sense of anonymity, structural inequalities persist in the ways people interact, build networks, and gain access to digital resources.
How do cyberfeminist practices differ in the Global North and Global South, and what challenges do women in developing nations face when engaging with digital technologies?
Cyberfeminist practices vary significantly between the Global North and Global South due to differences in economic resources, access to technology, and sociopolitical contexts. In industrialized nations, cyberfeminism often focuses on online activism, digital art, and gender representation in media. In contrast, women in developing nations frequently use digital technology as a tool for survival, resistance, and economic empowerment. Daniels highlights that “while it is true that many affluent women in the global North have ‘depressingly familiar’ practices when it comes to the Internet, this sort of sweeping generalization suggest a lack of awareness about the innovative ways women are using digital technologies to re-engineer their lives” (Daniels, 2009, p. 103). However, barriers such as limited internet access, censorship, and economic inequality continue to restrict their engagement. Addressing these disparities requires cyberfeminist movements to integrate global perspectives and advocate for digital inclusivity on a broader scale.
How do race and technology intersect to perpetuate systemic biases in digital spaces, and what can be done to address these issues?
Nicole Brown discusses how racial biases are embedded in technology, from facial recognition software to algorithmic decision-making. These technologies often reinforce systemic inequalities rather than eliminate them. Brown highlights that "facial recognition software has been proven to misidentify Black and Brown individuals at significantly higher rates than white individuals, leading to real world consequences such as wrongful arrests and surveillance" (Brown, 2020). Addressing these issues requires greater accountability in tech development, including diverse representation in AI design, policy changes to regulate biased technologies, and increased advocacy for ethical digital practices. By critically examining the intersection of race and technology, we can work toward creating digital spaces that are equitable for all users.
How does automation in public services contribute to inequality, and what are its impacts on marginalized communities?
Virginia Eubanks argues that automation in public services disproportionately harms low-income and marginalized communities by making access to essential resources more difficult. Automated decision-making systems in welfare programs, housing assistance, and healthcare often reinforce pre-existing biases, leading to further exclusion. Eubanks notes that "they are shaped by our nation's fear of economic insecurity and hatred of the poor; they in turn shape the politics and experience of poverty" (Eubanks, 2018, p. 7). These technologies strip people of their autonomy and create barriers rather than solutions. To address this issue, we must push for transparency in algorithmic decision-making and ensure that automated systems are designed with fairness and social justice in mind.
Word Count: 603
Daniels, J. (2009). Rethinking cyberfeminism(s): Race, gender, and embodiment. WSQ: Women's Studies Quarterly, 37(1-2), 101-124.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
[Nicole Brown]. (2020, September 18). Race and Technology [Video]. YouTube. https://www.youtube.com/watch?v=d8uiAjigKy8
Blog Post #1 - Week 2 (due 1/31)
The Time My GPS Sent Me to Nowhere
Let me tell you about the day my GPS betrayed me in the scariest way possible.
It all started when my best friend invited me to her birthday party at a new restaurant downtown. She sent me the address, and like any normal person, I plugged it into my GPS, trusting it to guide me. After all, technology never fails, right? Or so I thought.
Everything seemed fine at first. The soothing robotic voice directed me through traffic, and I was feeling good - until I noticed the surroundings were getting sketchy. I was expecting a trendy restaurant with fairy lights and overpriced appetizers, but instead, I was driving past abandoned warehouses and empty parking lots.
Still, I trusted the GPS. Maybe the restaurant was in some cool, hidden spot? Maybe it was one of those “exclusive” places that didn’t even look like a restaurant from the outside? I convinced myself I was fine.
Then, my GPS proudly announced:
“You have arrived at your destination.”
I looked around.
I was in the middle of nowhere. No restaurant. No people. Just an empty lot with a single, flickering street light that looked like the beginning of a horror movie.
I checked the address again, and that’s when I realized: I had been sent to the completely wrong city. Not just the wrong street. Not just the wrong neighborhood. A whole different city - 30 minutes away from where I was supposed to be.
Panic set in. My phone was on 5%, and I still had no idea where the actual restaurant was. I called my friend, and of course, my phone decided to die mid-call. At this point, I had two choices: Accept that technology had failed me and go home, or attempt to find my way using my horrible sense of direction.
Long story short - I arrived at the party an hour late, sweaty, frustrated, and very close to throwing my GPS into the ocean. My friends laughed at me for the rest of the night, and I’m still convinced my GPS had a personal grudge against me that day.
Moral of the story? Never trust technology blindly. And always, always double-check the address before leaving.
Word Count: 378