dv230
10 posts
dv230 · 3 months ago
Blog Post #7 Week 10 due March 20
1.) How do the different types of online "trolls" reflect human behavior/interactions in online communities?
There are many different types of online "trolls" across digital spaces and social media. In "The Origins of Trolling," Phillips describes encountering several varieties while researching online, such as "Anti-trolls, also known simply as 'antis,' which publicly denounced other trolls, then proceeded to troll as many trolls as possible." There are also factions, which Phillips describes as follows: "rogue trolls (those not associated with a stable group of trolling friends) tended, at least in my experience, to be unpredictable and extremely misanthropic. Pack trolls were much more social, and were often quite eager to help with my research, either by answering questions or putting me in touch with other trolls." These categories demonstrate the different types of online interactions between trolls. They also reflect broader human behavior, since many people come into contact with trolls online: some choose to retaliate, like the anti-trolls, while others join in the trolling themselves. This mirrors real-life human interaction, because trolls are ordinary people with internet access who choose to act this way.
2.) How does Leslie Jones’s case reflect misogyny and structural neglect? Jones was attacked with racism, sexism, and leaked private photos after starring in Ghostbusters. Her case reveals how Black women face more violent, graphic abuse than their counterparts, especially when they are visible or successful. Despite her fame, platforms didn’t act until public outcry forced them to. It’s a clear example of how misogynistic values and behaviors are enabled by technology and reinforced by structure. If someone with such a large platform and fanbase can be the victim of online abuse, imagine those who suffer from it without fans or a platform to back them up.
3.) What does the Pew Research report tell us about who experiences the most harmful types of online harassment?
The Pew Research report provides great insight into online harassment. It states that about 40% of internet users report harassment, with young women facing the most dangerous kinds, like sexual threats and stalking. The report shows that online abuse is often targeted at communities who are already vulnerable offline, like women of color and LGBTQ+ people. Users who are part of these communities therefore experience worse treatment in digital spaces, and rather than putting rules or guidelines in place to protect them, digital platforms often fail to do so.
4.) What does Danielle Citron argue about the seriousness of online abuse? Citron argues that online harassment goes far beyond mere annoyance; she frames it as a blatant civil rights violation. Victims often lose jobs, mental stability, and their ability to speak online. She explains that coordinated abuse, for example doxxing and impersonation, often intentionally drives women and marginalized communities offline. The internet has become unsafe, a cesspool of hatred and abuse, because its rules have failed to keep pace with its growth. As more hatred and abuse spread online, better rules and regulations must be put in place to combat it.
5.) How can online communities better protect users from harm?
In order to truly change the landscape and stop the spread of online abuse, digital spaces and communities need to change. To begin, we must hold both the platforms and their users accountable for their actions. Real accountability requires platform responsibility, stronger anti-harassment laws, and cultural change. As Citron suggests, laws must recognize the seriousness of cyber abuse, and platforms must take proactive steps to protect vulnerable users. Community members also play a role in calling out hate and supporting those who speak out.
Works Cited
Bergstrom, Kelly. “Don’t Feed the Trolls.” First Monday, vol. 16, no. 8, 2011.
Citron, Danielle Keats. Hate Crimes in Cyberspace. Harvard UP, 2014.
Duggan, Maeve. “Online Harassment.” Pew Research Center, Oct. 22, 2014.
Silman, Anna. “A Timeline of Leslie Jones’s Horrific Online Abuse.” The Cut, 2016.
dv230 · 3 months ago
Blog Post #10 Week 13 due 4/24/25
1. What is the main goal of the Data Detox Kit, and how does it relate to digital resistance? 
The Data Detox Kit is designed to help users take back control over their digital lives. Created by the organization Tactical Tech, the kit offers a series of steps to help people reduce their digital footprint, upgrade their online privacy, and gain an understanding of how digital surveillance works. The Data Detox Kit promotes digital resistance, particularly for marginalized communities that are targeted by surveillance and data practices. As the kit mentions, your stored data could “reveal important details about you and your habits, like where you live, where you work, and where you like to hang out with your friends,” so it is important to be aware of how your data is stored and spread through the technology you use daily. The kit enables users to reclaim their digital autonomy while also engaging in digital practices that value safety, informed consent, and the right to exist online without exploitation.
2. How does Christian Parenti describe the post-9/11 surveillance landscape in 'Fear as Institution'? 
In “Fear as Institution,” Christian Parenti argues that the 9/11 attacks didn't create the surveillance culture we see today but rather accelerated its growth. He explores how existing surveillance technologies and programs were rapidly expanded under the guise of national security and protection. Parenti criticizes laws such as the USA Patriot Act, which normalized the slow erasure of privacy rights by reducing legal barriers for wiretaps, searches, and data collection. Parenti states, “In many ways the frightening thing about the postattack crackdown has been how much of everyday life was prefabricated to fit neatly into a new and larger project of intensified state observation and repression.” In other words, as a society we have grown comfortable with surveillance even as it erodes our rights.
3. In what ways does the criminalization of social media threaten protest and dissent?
In “How Your Twitter Account Could Land You in Jail,” Matthew Power writes about how the use of social media can be weaponized against activists. In reference to the 2009 G20 protests, Power writes, “Thousands of protesters had descended on the city, presenting demands ranging from curbs on carbon emissions to the outright abolition of capitalism.” Many of the activists using Twitter to share updates about police movements were arrested and charged with serious crimes. This criminalization of digital communication blurs the line between public speech and state surveillance. It also highlights how protesters were treated as criminals simply for spreading information, exposing how technology meant to empower us can also become a force of oppression. Now more than ever, using technology to express defiance or organize collectively can be interpreted as criminal action, especially for marginalized voices that are already heavily policed. This remains relevant today: as we have learned, DHS has begun to screen immigrants' activity on social media and use it as grounds for denying immigration requests.
4. What role does fear play in normalizing surveillance, and how is this especially harmful for marginalized communities? 
Fear is a critical mechanism used to normalize government surveillance under the veil of “protection for its citizens.” After 9/11, the government used the aftermath to cultivate a climate of fear and push through laws and technologies that increased its ability to surveil citizens. This approach disproportionately harms marginalized groups, especially immigrants, Muslims, and political activists, who become prime targets of these laws. Programs such as SEVIS created a surveillance infrastructure that tracked a specific group, foreign students. These systems don't simply track behavior; they also shape it, forcing submission through fear and scare tactics. For communities that already experience injustice, this normalization of fear leads to greater isolation, reduced political participation, and a chilling effect on their freedom of expression.
5. How can we reconcile the need for safety with the risk of over-surveillance? 
Reconciling safety and surveillance requires us to redefine what 'safety' truly means. While governments often argue that surveillance ensures public safety, they frequently overlook how such monitoring unevenly impacts marginalized communities. Tools like the Data Detox Kit mentioned above allow individuals to reclaim their digital lives and question the security of their information and privacy. I believe “true” safety has to be based on consent, transparency, and accountability across all digital spaces and technology. As we have seen over the years, surveillance compromises democratic values when it becomes normalized in society. Perhaps instead of relying on control, we need systems that value users' digital rights, provide resistance against data extraction, and build our own knowledge through technological literacy.
Works Cited 
Parenti, Christian. *The Soft Cage: Surveillance in America from Slavery to the War on Terror*. Basic Books, 2003. 
Power, Matthew. “How Your Twitter Account Could Land You in Jail.” *Mother Jones*, Mar./Apr. 2010.
“Data Detox Kit.” *Tactical Tech*, https://datadetoxkit.org/en/home/
dv230 · 4 months ago
Blog Post #9 Week 12
1. How does the concept of a “virtual homeplace” empower Black women in digital spaces?
Latoya Lee uses bell hooks’ concept of homeplace to explain how natural hair blogs serve as a digital refuge for Black women. Lee states in "Women of Color and Social Media Multitasking" that a virtual homeplace can be defined as a "(real or imagined) place that offers comfort and nurture, where one can seek safe harbor against the racial and sexual oppression they may face on a daily basis". Blogs such as CurlyNikki, Afrobella, and BlackGirlLongHair are more than spaces for beauty tips; they become communities where Black women reclaim autonomy over their bodies and identities. Lee suggests these blogs can serve as "counter-disciplinary spaces that get created to allow one to re-envision their body within restricting and dominating forces" (Lee, 2015, p. 92). The blogs allow users to share stories, validate each other's experiences, and resist mainstream expectations of beauty. These virtual homeplaces empower women by providing affirmation and healing, all while challenging systemic racism and gendered norms.
2. In what ways do platforms like BlackPlanet or MiGente blur the line between community and commodification?
Steven McLaine critically analyzes how ethnic online communities are often designed with profit in mind. Although these platforms are initially intended to unite marginalized communities, they often succumb to commodification. Platforms like BlackPlanet and MiGente gather detailed user data, such as income, education, and location, and use it to sell ad space. In “Ethnic Online Communities: Between Profit and Purpose,” McLaine states that "profit and community make curious bedfellows" (McLaine, 2003, p. 234). The promise of empowerment is often overshadowed by corporate interests that push products rather than genuine representation. Advertisements for credit cards and loans on BlackPlanet often ignore or exploit the community’s real financial challenges. While these platforms might provide a sense of belonging, their reliance on commodification ultimately limits their transformative potential.
3. What does the Gamergate controversy teach us about identity and power in online communities?
The Gamergate movement is a case study in how online communities can maintain exclusion under the appearance of shared values. Jay Hathaway shows that despite claims of concern for ethics in journalism, the movement primarily targeted women, such as Zoë Quinn and Anita Sarkeesian, with harassment. In “What Is Gamergate, and Why? An Explainer for Non-Geeks,” Hathaway claims "the movement was focused on destroying Zoë Quinn first, reforming games reporting second" (Hathaway, 2014). This reveals how identity policing in gaming spaces can reinforce misogyny and racism. Gamergate shows how online communities can become hostile and preserve the status quo rather than create inclusive spaces for all users.
4. How did Black Twitter respond to the backlash against Halle Bailey’s casting in The Little Mermaid, and why is this significant?
In her 2023 article “Are Y’all Ready for a Black Mermaid? How Black Twitter Challenges White Supremacist Imaginations,” Dr. Latoya Lee highlights how Black Twitter became a cultural battleground following Halle Bailey’s casting as Ariel. The platform quickly came to Bailey's defense, using humor, memes, and references to resist the racist backlash. Lee describes Black Twitter as a "space where Black Twitter users engage in the creation of hashtags, jokes, and memes, and has historically also served as a space for people to voice anger and frustrations with White supremacy and systemic racism" (Lee, 2023). Users challenged the white supremacist idea that mermaids must be white by creating the trending hashtag #MyArielIsBlack. Black Twitter’s intervention illustrates the power of digital platforms to foster community, celebrate representation, and rewrite cultural narratives.
5. Can online communities effectively resist dominant ideologies despite their limitations?
Yes, online communities can resist dominant ideologies, though not without tension. As Lee writes, "there are a few preliminary ways that these blogs/vlogs operate as virtual homeplaces, including as sites of support, affirmation, networking and healing, there are a few limitations also found in these spaces". Natural hair blogs and platforms like Black Twitter provide marginalized users with spaces to critique oppressive systems, share lived experiences, and build alternative worldviews. However, threats like data commodification, censorship, and trolling can undermine these efforts. The key lies in how communities are structured and maintained. When digital spaces embrace purpose over profit and empower users to shape content and discourse, they become vital tools for challenging inequality and reclaiming identity in technoculture.
Works Cited
Hathaway, Jay. “What Is Gamergate, and Why? An Explainer for Non-Geeks.” *Gawker*, 10 Oct. 2014.
Lee, Latoya. “Virtual Homeplace: (Re)Constructing the Body through Social Media.” In *Women of Color and Social Media Multitasking*, edited by Keisha Edwards Tassie and Sonja M. Brown Givens, Lexington Books, 2015.
Lee, Latoya. “Are Y’all Ready for a Black Mermaid? How Black Twitter Challenges White Supremacist Imaginations.” *Ms. Magazine*, 17 Apr. 2023.
McLaine, Steven. “Ethnic Online Communities: Between Profit and Purpose.” In *Cyberactivism: Online Activism in Theory and Practice*, edited by Martha McCaughey and Michael D. Ayers, Routledge, 2003.
dv230 · 4 months ago
Blog post #8 due April 10th
1. How does Ian Bogost argue that Animal Crossing reflects political ideologies?
Ian Bogost challenges the widespread belief that Animal Crossing is an innocent form of digital escapism. Instead, he claims that the game subtly reinforces values such as self-reliance, consumerism, and debt. In his Atlantic article "Animal Crossing Isn’t Escapist; It’s Political," Bogost states that “Even though it can function as escapism, Animal Crossing isn't a fantasy-world replacement from real life, absent all its burdens.” The game presents a vision of society where players willingly enter into debt with Tom Nook and then spend countless hours working to pay it off, reflecting a normalized cycle of economic obligation. This implies that even virtual worlds marketed as relaxing are reflections of real-world political and economic ideologies.
2. What role did social media play in the Arab Spring according to Christian Fuchs?
Christian Fuchs offers a critical analysis of the role social media played during the Arab Spring, rejecting simplistic narratives of a “Twitter Revolution.” He emphasizes that while platforms like Facebook and Twitter helped mobilize protests, they were part of a broader ecosystem of communication and resistance. In "Communication Power and the Arab Spring," Fuchs states, “The media are not the only factors that influence the conditions of protest - they stand in contradictory relations with politics and ideology/culture that also influence the conditions of protest.” He warns that these platforms are owned by corporations, which means they can not only enable resistance but also become the force that restricts movements. This is something we have seen over the past months with Elon Musk's purchase of Twitter, now known as X.
3. How does Sandor Vegh categorize online activism and what examples does he provide?
Vegh divides online activism into three key categories: awareness/advocacy, organization/mobilization, and action/reaction. These stages illustrate how digital tools may be used to enhance traditional activism. For example, during anti-globalization protests, activists used the internet to coordinate international demonstrations efficiently and affordably. In "Classifying Forms of Online Activism," from Cyberactivism: Online Activism in Theory and Practice, Vegh explains, “Only the Internet allows an activist to distribute a message to thousands of people all over the world at once.” He also explores more aggressive forms of protest, such as “virtual sit-ins” and “hacktivism,” like the Electronic Disturbance Theater’s FloodNet tool used to disrupt websites. These methods demonstrate how activists can repurpose digital tools for political protest.
4. How does Latoya Lee describe Black Twitter’s role in resisting racial bias in media?
In her article "Black Twitter: A Response to Bias in Mainstream Media," Latoya A. Lee argues that Black Twitter functions as a “digital homespace,” a space Black women and men use as a tool to (re)construct their bodies and identities, offering a collective, cultural response to racism and bias in mainstream media. Lee also states that through “textual poaching as resistance... the user produces content that challenges dominant (oppressive) cultural ideologies and norms, including racial bias.” Through hashtags like #IfTheyGunnedMeDown and #APHeadlines, users create counter-narratives to reclaim Black identity and critique injustice. These hashtags force institutions to recognize implicit bias; for example, the Associated Press reworded a problematic tweet about the killing of Renisha McBride after backlash on Twitter. Black Twitter, according to Lee, is not only reactive but also strategic and community-driven, using humor, satire, and solidarity to challenge dominant narratives.
Works Cited
Bogost, Ian. “Animal Crossing Isn’t Escapist; It’s Political.” The Atlantic, 2020.
Fuchs, Christian. “Communication Power and the Arab Spring.” 2013.
Vegh, Sandor. “Classifying Forms of Online Activism.” In Cyberactivism: Online Activism in Theory and Practice, 2003.
Lee, Latoya A. “Black Twitter: A Response to Bias in Mainstream Media.” Social Sciences, 2017.
dv230 · 5 months ago
Blog post #6 Week 8 due March 13th
1. How does Haraway's idea and explanation of the cyborg challenge the societal norms placed on gender and identity?
In “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century,” Donna J. Haraway introduces her concept of the cyborg, explaining that “A cyborg is a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction.” This presents the cyborg as a being that blends humanity and technology while rejecting traditional gender roles. Haraway continues, “Gender, race, or class consciousness is an achievement forced on us by the terrible historical experience of the contradictory social realities of patriarchy, colonialism, and capitalism,” reinforcing the idea of the cyborg as a way for individuals to reinvent themselves and their identities. Haraway's cyborg challenges social norms because it is adaptable and dissolves social categories, allowing for a more fluid identity.
2. How has the internet changed the way harmful harassment is spread and what are the real life consequences of this ? 
With the growing advancement of the internet, one growing concern is online and real-life harassment and cyberbullying across social media platforms. As Daniels mentions in “Gender and White Supremacy,” in what some “have termed cyberbullying, schoolyard bullies use various digital technologies, such as MySpace pages and text messages sent from mobile phones, to target other young people for online harassment. This type of harassment is often based on physical characteristics (like size, disability, or age) or social identities (such as gender, sexuality, race, or ethnicity).” Access to technology and communication with others allows people to spread hateful ideologies across the internet. Daniels mentions “a 2007 incident with some striking similarities to the Jouhari case, a white supremacist from Roanoke, Virginia, with the redundantly appropriate name William White published the home addresses and phone numbers of the families of six Louisiana high school students known as the Jena 6.” This case grew out of white youths with racist ideologies targeting Black youth; the retaliation that followed ended with only the Black youths receiving prison sentences while those who attacked them received none. These are the kinds of harmful ideologies and harassment that can now travel and spread faster than ever before because of the internet.
3. How do the development of virtual assistants like Ananova and Mya in earlier technology mirror societal norms/expectations of gender and human interaction in digital spaces?
The development of virtual assistants like Ananova and Mya mirrors societal gender norms in several ways. In "Gender, Technology, and Visual Cyberculture: Virtually Women," Kate O'Riordan notes that "these personae are young, attractive, female celebrities," reflecting the cultural idealization of femininity, youth, and beauty. The virtual assistants were based on real humans, as O'Riordan states: "Ananova, for example, is reported to be a combination of popular iconic figures Posh Spice, Kylie Minogue, and Carol Vorderman," reinforcing traditional gender roles of women as emotionally nurturing. O'Riordan further explains that the women were "single, and stereotypically attractive," aligning with the idea that women in technology should be warm and relatable, which mirrors the societal expectations placed on women as emotional caregivers.
4. What are the implications of the female personae and what does it imply in regards to the societal perceptions of gender?
The virtual female personas have dangerous implications for societal standards of gender, as the line between the real and the digital blurs. Kate O'Riordan states that this may "produce a new experience of having been (having material traces)" and "provide a way of looking at how the symbolic and psychic produce material realities and residue." These virtual bodies are rendered as "real" agents with which users form relationships, signifying "a successful rendering of the technology as 'friend'" and embodying "new normative bodies because of their visible signification as bodies." Despite their idealization, these bodies are not representations but "fantasies of the female" that further distort perceptions of gender.
Works Cited
Haraway, Donna J. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.”
Daniels, Jessie. “Gender and White Supremacy.”
O'Riordan, Kate. “Gender, Technology, and Visual Cyberculture: Virtually Women.”
dv230 · 5 months ago
Blog Post #5 Week 7 due March 6th
How do online videos across the internet contribute to the critique of stereotypes and social interactions?
Across the internet there are videos which highlight and depict the casual racism and stereotypes in social interactions. In "The Social Media Handbook," authors Jeremy Hunsinger and Theresa Senft mention viral videos such as "Sh*t Girls Say" and "Sh*t Black Girls Say" that spread across social media spaces such as Myspace. More specifically, the authors mention Francesca Ramsey’s video "Sh*t White Girls Say to Black Girls," which most critics pointed out was the "first to offer a popular and critical examination of race" (King, 2012). Ramsey's video received over 1.5 million views, highlighting how many people related to the microaggressions and phrases presented in it. Online videos can be used to critique the stereotypes involved in casual racism through people's lived experiences.
How does the advancement of technology depict the intersection of racism and capitalism?
The advancement of technology at companies such as Microsoft, Apple, and Google depicts the intersection of racism and capitalism, especially in the development of AI and speech recognition tools such as Siri and Amazon's Alexa. In "Race after Technology: Abolitionist Tools for the New Jim Code," author Ruha Benjamin mentions the development of Apple's Siri and a former "Apple employee who noted that he was 'not Black or Hispanic'" who described his experience on a team developing speech recognition for Siri, the virtual assistant program. As they worked on different English dialects - Australian, Singaporean, and Indian English - he asked his boss, "What about African American English?" To this his boss responded: "Well, Apple products are for the premium market." This comment highlights how big corporations choose to exclude marginalized minorities in the advancement of AI, depicting the intersection of racism and capitalism: big companies may deem certain demographics not profitable enough and therefore exclude them from important advancements.
How has the threat of white supremacy values spread in the digital era ?
The advancement of digital-era technologies such as social media and digital spaces allows groups to spread their ideologies to a wider community, for better or worse. As Daniels states in "White Supremacy in the Digital Era," the "least-recognized and, hence, most insidious— threat posed by white supremacy online is the epistemological menace to our accumulation and production of knowledge about race, racism, and civil rights in the digital era." This creates a bigger threat to the civil rights many have fought for over the years, since Daniels states that "as the lived experience of the civil rights movement fades with time, hard-won political truths about racial equality, secured at great cost, slide into mere personal opinion, open to multiple interpretations." This leaves users having to decide what is biased and what should count as a violation of civil rights. Since "the issue of deciding what is biased and what is not is inherently political. And on the Internet making this distinction is even more complicated," states Daniels, highlighting how the threat of white supremacy has spread across the digital era.
How does the Black Mirror episode "Nosedive" depict the idea of a meritocracy?
The Black Mirror episode titled "Nosedive" depicts an altered reality based on a meritocracy in which citizens earn privileges through credits and a ranking system from 1 to 5 stars. The protagonist, Lacie, is obsessed with her ranking, as it would provide her with better jobs, homes, and rent prices, among other incentives. This reality somewhat mirrors our society, as many countries have credit scores which determine the type of loan you can get or can lower your monthly house payment. It also demonstrates society’s growing obsession with social media and the superficial portrayal of our images on the internet. Ultimately, Lacie's own obsession leads to her downfall as her life spirals, leaving her ostracized from society.
Works Cited
Hunsinger, Jeremy, and Theresa M. Senft. The Social Media Handbook. Routledge, 2015. 
Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2020. 
Daniels, Jessie. “White Supremacy in the Digital Era.”
Black Mirror, season 3, episode 1, “Nosedive.” Netflix.
dv230 · 5 months ago
Blog Post #4 due 2/27/25
1.) How do the beliefs or ethics of video game developers reinforce stereotypes through their products/games?
As video games are the product of coding and the work of developers, it is important to note that their products are influenced by the developers' own personal ideologies and perceptions. This can be seen in modern video games such as Call of Duty, GTA, and even Street Fighter, where certain aspects of war and gang violence are glorified through the missions and objectives in the game. As Jeffrey A. Ow notes in “The Revenge of the YellowFaced Cyborg,” the video game Shadow Warrior by 3D Realms reinforces stereotypical beliefs: you play as an Asian character with exaggerated traits such as his accent, and the main character's name, “Lo Wang,” can be read as a stereotype/joke regarding male genitalia. According to Ow, to gain health in the game, “if injured the gamer searches for fortune cookies to regain health.” When confronted with the underlying stereotypes of the game, 3D Realms responded by saying, “We are having fun with the whole Asian culture, and we blatantly mixed up all the elements and cultures to make a fun game” (Miller and Broussard), meaning this was all done intentionally. 3D Realms' own beliefs affected the way the game was made and how the Asian characters were portrayed throughout the story.
2.) How do gender and race intersect with one's cyberself across virtual spaces?
One's own “cyberself” is the version of themselves a person chooses to present across virtual platforms. As stated in “Race in Cyberspace” by Beth Kolko, “because the self that exists in cyberspace is the result of purposeful choices, it is possible to trace those decisions back, from the avatar (or virtual projection) to the person who first chose to represent [themselves] in a particular way.” Since our language and gender are parts of our identity, there is a connection between our virtual and real lives. Our gender, race, and language intersect with our cyberself as they surface in our online interactions throughout virtual spaces. These intersections of race, gender, and language can both challenge and reinforce social constructs in society.
3.) What are the risks of dismissing problematic games as “just entertainment”? If society continues to treat games such as Shadow Warrior as harmless fun, it only further ignores their role in normalizing racist, sexist, and colonial fantasies. When gamers say “just play another game,” they dismiss real, harmful stereotypes and assumptions. As Fickle puts it, “Games are escapes not because they are more free, but because they are differently constrained: their rules provide a substitute for existing relations of power and systems of valorization, swapping out one set of rules for another,” meaning that while games may feel free, they simply substitute one set of values or rules for another. This is important to understand because games are not neutral ground; rather, they often employ or reinforce racist or sexist beliefs through gameplay and characters.
4.) Why is the idea of so-called "choice" in identity misleading in cyberspace and games? Kolko and Fickle both show that the supposed freedom to choose race or gender in digital spaces is often illusory. Identity is not just personal performance; it is also shaped by structural power. For example, choosing a female or Asian avatar doesn't protect users from harassment or stereotyping; it may actually increase it. Online spaces like multiplayer games may be treated as neutral, but they are built on existing hierarchical systems. So while customizing your character in an online video game may seem like a "choice," it is still subject to the invisible rules of that game.
5.). How do the authors complicate the idea that digital spaces are “free” from identity politics?
While all three works reject the idea that digital game spaces are "free" from identity politics, they each criticize different aspects. Kolko shows that race is often kept "off" in cyberspace to avoid causing public discomfort. Fickle, on the other hand, reveals how even "colorblind" games reinforce systemic bias through cultural appropriation and stereotypical assumptions. Finally, Ow's analysis of Shadow Warrior shows how racism is often masked and marketed as humor or nostalgia to gamers, making it generally harder to challenge.
Works Cited
Kolko, Beth E., et al., editors. Race in Cyberspace. Routledge, 2000.
Ow, Jeffrey A. "The Revenge of the Yellowfaced Cyborg." Race in Cyberspace, edited by Beth E. Kolko et al., Routledge, 2000.
Fickle, Tara. The Race Card: From Gaming Technologies to Model Minorities. New York University Press, 2019.
dv230 · 6 months ago
Blog Post #3 Week 4 due 2/13/25
How does the advancement of technology and algorithms reinforce social bias in society rather than overcome it?
The advancement of technology reinforces racial bias by targeting certain demographics through coding and algorithms. Companies across the workforce use these algorithms to evaluate candidates, and many times racial bias is embedded in the code. As Ruha Benjamin notes in "Race after Technology," rather than "challenging or overcoming the cycles of inequity, technical fixes too often reinforce and even deepen the status quo." This is because these systems often discriminate based on names or race, affecting the livelihoods of minorities by barring them from job opportunities and career advancement. As Benjamin also mentions, in a "classic study of how names impact people's experience on the job market, researchers show that, all other things being equal, job seekers with White-sounding first names received 50 percent more callbacks from employers than job seekers with Black-sounding names," demonstrating how algorithms can affect a person's ability to get a job based solely on their name. Rather than each job seeker being individually evaluated, some never even have the opportunity to present themselves because the algorithm flagged their name on the application. Society likes to think technology has solved these racial bias problems, but in actuality it is only reinforcing them by targeting certain demographics.
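The mechanism Benjamin describes can be illustrated with a toy simulation. Everything here is hypothetical (the names, the scoring rule, and the 0.15 "learned penalty" are invented for illustration); the point is only to show how a screener trained on biased past data reproduces a callback gap between otherwise identical applicants:

```python
import random

random.seed(0)

# Hypothetical automated résumé screener. Qualifications are identical;
# the only difference is a penalty the model "learned" from biased
# historical hiring data, keyed to the applicant's name.
WHITE_SOUNDING = ["Emily", "Greg"]
BLACK_SOUNDING = ["Lakisha", "Jamal"]

def screener_score(name):
    base = random.uniform(0.4, 0.9)                    # same skill distribution
    penalty = 0.15 if name in BLACK_SOUNDING else 0.0  # learned bias
    return base - penalty

def callback_rate(names, trials=10_000, threshold=0.6):
    """Fraction of applications scoring above the callback cutoff."""
    calls = sum(screener_score(random.choice(names)) > threshold
                for _ in range(trials))
    return calls / trials

print("White-sounding names:", callback_rate(WHITE_SOUNDING))
print("Black-sounding names:", callback_rate(BLACK_SOUNDING))
```

Even though every simulated applicant draws from the same skill distribution, the small learned penalty produces a large gap in callback rates, mirroring the disparity the study Benjamin cites found in real hiring.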
What is the concept of redlining in relation to technology and algorithms? How are certain groups oppressed by this?
Redlining is a discriminatory practice in which financial services are denied or limited to neighborhoods based on race and ethnicity; it has been illegal since 1968. In relation to technology and algorithms, digital redlining is the use of technology to discriminate against people based on race or ethnicity, whether through internet speeds, algorithms, or broadband infrastructure. Certain groups can be marginalized and oppressed by this, as they may not have access to the internet or high internet speeds, or the code may contain its creator's own racist ideology. Minorities and women are the most likely to suffer. As Safiya Noble mentions in "Algorithms of Oppression," everyday racism and "commentary on the web is an abhorrent thing in itself, which has been detailed by others; but it is entirely different with the corporate platform vis-à-vis an algorithmically crafted web search that offers up racism and sexism as the first results." When searching for certain topics, the first results yielded sexually explicit websites, specifically regarding women. Digital redlining further marginalizes and oppresses minority groups by presenting these results as the most relevant. It goes even further, as women are more likely to be targeted while those who created the algorithm face no repercussions. As technology grows more involved in our lives, it is important to learn about and combat these practices so that minority groups are not targeted and oppressed.
What is the digital divide? How does digital literacy play a role in the digital divide?
The digital divide is the divide between those who have access to the internet and technology and those who do not. Digital literacy plays a role in the divide because even when groups have access to the same technology, one group may understand it and be able to use it more efficiently. Digital literacy is an individual's ability to find, evaluate, and communicate information using technology. For example, two people may both have an iPhone, but the one with better digital literacy will know how to update iOS, use Siri, and FaceTime, while the other may only know how to make calls. Digital literacy also plays a role in bringing groups together for a cause and is vital in creating spaces for voices to be heard. As Anna Everett mentions in "The Revolution Will Be Digitized," with the growing "power and dominance of global media conglomerates, it is evident that the revolutionary digital public sphere developing in cyberspace represents the hope and promise for the ongoing survival of the independent black presses, established ones and upstarts alike." The creation of these spaces allows independent presses to remain vocal about ongoing issues and lets communities remain hopeful that their voices and opinions will be heard.
References
Everett, Anna. "The Revolution Will Be Digitized: Afrocentricity and the Digital Public Sphere." Duke University Press, 2002.
Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2020. 
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018. 
dv230 · 6 months ago
Blog Post #2 Week 3 due 2/6/2025
How does cyberfeminism relate to the relationship between gender and technology? How does the political economy play a role in the engagement of cyberfeminism?
Cyberfeminism focuses on challenging gender roles and stereotypes through the use of technology and social media. It relates to the relationship between gender and technology by enabling individuals to challenge the patriarchy through social media. As Jessie Daniels states in "Rethinking Cyberfeminism(s): Race, Gender, and Embodiment," it is "neither a single theory nor a feminist movement with a clearly articulated political agenda"; rather, it is a combination of various theories, debates, and practices concerning gender and digital technology. Along with these advancements in technology comes the political economy and its role in cyberfeminism. As Daniels mentions, research in the United States shows that "most of the apparent 'digital divide' in computer ownership and Internet access has been the effect of class (or socioeconomic status) more than of gender and race (Norris 2001)." It is important to evaluate the role of the political economy in cyberfeminism because we only have such technologies due to the exploitation of labor in other countries; the very people making this technology possible may not have access to it themselves. As Daniels states, digital technologies provide a "mechanism for resisting such gendered and racialized practices, at the same time that they reinforce established hierarchies of gender and race."
2. How is facial recognition software being shaped by political worldviews? How can the use of these programs be detrimental to minorities?
Facial recognition software is shaped by political worldviews, as it is used in certain instances by police to capture criminals. This can be detrimental to minorities because one major problem with facial recognition is its higher error rate on minority faces. According to "Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match" by Kashmir Hill, a 2019 "national study of over 100 facial recognition algorithms found that they did not work as well on Black and Asian faces." This has led to innocent people, particularly minorities, being falsely accused and imprisoned for crimes they did not commit. As Hill also mentions, Nijeer Parks spent "10 days in jail and paid around $5,000 to defend himself" after being falsely identified through facial recognition. Not everyone has the money to defend themselves against a charge like this. Continuing to allow facial recognition to be used this way could lead to more false imprisonments of innocent people.
How does the use of technology in healthcare insurance and government relate to the exploitation of the lower class?
Many insurance companies now rely on algorithms when evaluating insurance claims and policies. This means some people will be flagged for fraud by the algorithm, but which groups are most affected by this decision? According to "Automating Inequality" by Virginia Eubanks, "Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighborhoods, enter the health-care system, or cross national borders." This intense surveillance enables insurance companies to exploit the lower class, who do not have the resources or time to fight the claims. For example, social services using EBT cards as a tracking device is just one form of surveillance imposed on the lower class. This also extends to healthcare policies, as marginalized groups are the most likely to be flagged for insurance fraud. The use of technology only furthers the exploitation of the lower class; as Eubanks states, "We manage the individual poor in order to escape our shared responsibility for eradicating poverty."
4. What are the dialectical contradictions between technology and race? How do social media platforms play a role in enabling these contradictions in society?
Social media platforms enable dialectical contradictions between technology and race, as algorithms suggest certain videos and posts for your feed or timeline. For a country such as the United States that values free speech, this creates a contradiction: free speech may be hidden by the algorithm. Over the past week, protests took place all over California against President Donald Trump's immigration policies. Many groups marched in support of their friends and families, yet in order to find any videos of the protests, I had to search for them on my own. Coming from a Latino household, I was intrigued that the algorithm on my social media accounts was not suggesting these videos. This is a dialectical contradiction: a social media platform such as X claims to stand for free speech, yet it may actually be suppressing these ideas.
References 
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.
Daniels, J. (2009). "Rethinking Cyberfeminism(s): Race, Gender, and Embodiment."
Hill, K. (2020). "Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match." The New York Times.
dv230 · 6 months ago
Blog Post #1 Week 2 due 1/31
For being such a technologically advanced society, there are times when even technology ends up holding us back. For me, it was when I was about to graduate with my ADT from Citrus College and was working on my applications to UCs and CSUs for the upcoming spring. I wanted to transfer to a four-year university to earn my bachelor's degree. For something so important in my educational journey, I procrastinated until the last day to submit my UC applications. It was my first time looking at the applications, and I had no idea how time-consuming they would be. It also caught me by surprise that many questions required short essay answers.

I logged into my account and chose the three questions that related most to me. I typed my answers and made edits until I had the best possible response to each question, and I still had a few hours to spare before the application was due. I clicked save, then hit refresh on the page, and my heart dropped as all my short essay answers vanished. I thought it was some sort of mistake, so I logged out and logged back in, but my answers were gone. I had worked so hard on those responses, but technology had let me down. I now had only about four hours left to redo each short essay answer and turn in my application on time. This time I did not trust the save button on the UC application, so I wrote my answers in a Google Doc and copied and pasted them onto the application, and I was able to submit on time despite the technological mishap.

To make matters worse, the very next day I received an email saying the application deadline had been extended one more week, so I had rushed myself for absolutely no reason. Still, it taught me not to procrastinate, as technology will not always live up to expectations and could end up sabotaging me.
2 notes · View notes