Blog post 10
How can we resist harmful online narratives together?
In Virtual Homeplace: (Re)Constructing the Body through Social Media, Latoya A. Lee suggests that online spaces can empower marginalized individuals by providing opportunities to construct and reclaim their identities. By resisting narratives that misrepresent them, people can create a "virtual homeplace," a supportive environment that encourages authentic self-expression and representation. This virtual homeplace allows individuals to share their voices, build connections with like-minded people, and collectively resist imposed labels. Sharing personal stories and perspectives challenges stereotypes and helps create an inclusive community that values diverse experiences. Together, communities can reshape online etiquette, setting standards that promote inclusivity and empower individuals to take control of their narratives.
What tools can help ethnic online communities thrive and profit without alienating their members?
In Ethnic Online Communities Between Profit and Purpose, Steven McLaine emphasizes the importance of using key tools to ensure that ethnic online communities can thrive and profit without alienating their members. McLaine suggests that transparency is essential; openly sharing how profits are directed toward projects and outreach programs helps build trust, particularly within marginalized communities. Additionally, empowering members to participate in decision-making strengthens unity and allows the group to work toward a shared mission, creating a "community-led" environment. Regular feedback mechanisms, such as surveys or suggestion forums, ensure members feel their voices are heard and allow administrators to respond to concerns quickly. McLaine argues that these strategies together help to create a respectful, inclusive online space where cultural identity and shared purpose are prioritized, enabling the community to grow sustainably while retaining members’ trust and engagement.
How can women game journalists fight back against Gamergate?
In What Is Gamergate, and Why? An Explainer for Non-Geeks, Jay Hathaway outlines the challenges faced by women game journalists during the Gamergate controversy. One way women are fighting back is by using their voices and taking control of their own narratives through social media and independent platforms. By speaking out and sharing their experiences, women journalists can counteract the harassment they face and build support networks with allies in the gaming and journalism communities. Hathaway also suggests that promoting inclusive content in gaming journalism can help shift the conversation away from toxic behavior and challenge the culture of sexism. Strengthening online safety measures and implementing better moderation tools can also encourage platforms to take a stronger stance against harassment toward women.
Lee, Latoya A. “Virtual Homeplace: (Re)Constructing the Body through Social Media”.
McLaine, Steven. “Ethnic Online Communities Between Profit and Purpose”.
Hathaway, Jay. “What Is Gamergate, and Why? An Explainer for Non-Geeks”.
Blog post week 11
Who keeps mainstream media accountable for their bias?
Black Twitter plays a major role in holding mainstream media accountable by actively challenging biased narratives and advocating for greater representation of Black voices and the issues impacting their communities. Black Twitter users not only call out biased coverage but also spotlight stories that are often overlooked, pressuring media organizations to confront a system shaped by systemic bias. This accountability serves as a powerful tool, demanding fair and inclusive coverage of issues relevant to Black communities and pushing for greater representation in the media.
Can online activism cause physical harm?
While advocacy and organizing in online activism are typically nonviolent, S. Vegh argues that certain forms of action can lead to indirect physical harm. For example, disruptions to digital services or cyber-attacks targeting vital public infrastructure, such as healthcare, can impact critical emergency services in vulnerable areas. Although these actions might not cause harm directly, they can have real-world consequences by disrupting the public services people rely on for essential needs and security.
Are online social movements vulnerable to outside influence?
Fuchs argues that online social movements are not secure from outside influence. Digital platforms make it easy for activists to mobilize and communicate, but they also expose movements to external threats like government surveillance and corporate data control. For example, during the Arab Spring, governments in countries like Egypt used digital surveillance to monitor activists' online communications, which led to the arrest of organizers and the suppression of protests. Social media companies can also limit the reach of certain messages or censor content under government pressure, and in some cases governments have temporarily shut down social media platforms during protests to disrupt protesters' ability to organize and communicate. While activists may use counter-surveillance tools like encryption and VPNs to improve security, online movements remain vulnerable to external interference.
Fuchs, Christian. Communication Power and the Arab Spring.
Vegh, Sandor. Cyberactivism: Online Activism in Theory and Practice.
Lee, Latoya. Black Twitter: A Response to Bias in Mainstream Media.
Never Go Full Troll
When does trolling become a positive disruptor?
Even though trolling originated as humor among strangers, it has become a major issue for many people who share themselves online. While anonymity allows trolling to become hostile and sometimes harmful, in Defining Terms: The Origins and Evolution of Subcultural Trolling, Phillips argues that trolling can be a positive disruptor when it challenges power structures or highlights social injustices. Phillips describes how trolling, when used constructively, can break norms and encourage critical thinking, making marginalized viewpoints more visible. This type of positive trolling pushes boundaries constructively rather than destructively. However, Phillips cautions that trolling must avoid falling into harassment or personal attacks to remain a positive disruption.
Should mods automatically remove trolls?
When trolling escalates and creates a toxic environment, moderators should intervene to protect users. In Don’t Feed the Trolls: Shutting Down Debate about Community Expectations on Reddit.com, Kelly Bergstrom explains that automatic removal of trolls can be problematic, as trolls can easily create new profiles to continue disruptive behavior. While maintaining a healthy online space through moderation is important, Bergstrom cautions against over-relying on automatic removal, as this can lead to gatekeeping and potentially limit open discussion. Instead, balancing direct intervention with community-driven norms, like ignoring trolls, can often foster a safer and more inclusive environment.
Do we need federal protections from online hate?
Federal protections are needed to safeguard individuals from online hate and severe trolling, as current regulations are insufficient for ensuring safety in cyberspace. In Hate Crimes in Cyberspace, Citron argues that online hate and harassment have severe consequences, from psychological trauma to physical harm, and that marginalized groups are particularly vulnerable. She highlights that anonymity online, while important for free expression, often enables harmful behaviors that current laws cannot adequately address. Citron advocates for stronger legal frameworks to protect against these threats, suggesting that government intervention could balance free speech with the urgent need for protection against cyber harassment.
Should platforms be liable for severe trolling?
I believe that many platforms are fully liable for the extreme trolling that happens to people online, such as Leslie Jones. The online harassment and racist comments she received went far beyond what any platform should have allowed to continue for so long. Women have been sexually harassed, threatened, and bullied with revenge porn without any consequences for social media companies. These platforms exploit these toxic online behaviors, and they are protected by a broken system that cannot keep women safe, even when victims plead and beg for help.
Phillips, Whitney. “Defining Terms: The Origins and Evolution of Subcultural Trolling.”
Bergstrom, Kelly. “‘Don’t Feed the Trolls’: Shutting Down Debate about Community Expectations on Reddit.com.”
Silman, Anna. “A Timeline of Leslie Jones’s Horrific Online Abuse.”
Citron, Danielle K. Hate Crimes in Cyberspace.
Blog numero siete
Day-to-Day Avatar
This is the avatar I created on Bitmoji for my daily use to share with friends and family. For my choice of clothing, I wanted to represent the Lakers as they are my favorite NBA team. I added devil horns to showcase my Halloween costume for this year. I created my avatar to reflect my personality of being goofy with my family and friends. When you become a cyborg in the digital space, you have the power to portray who you want to be. I didn’t feel the need to hide my identity, interests, or race with my avatar because I feel safe sharing it with family and friends through our text conversations.
Dating Avatar
This is my attempt at creating a dating site avatar using Bitmoji. If I were looking for a date, I would want to portray myself as someone with a bit of style, while still keeping a casual personality. I would struggle to come up with a username, knowing that some people make decisions based on usernames alone. I would share my ethnicity, gender, and age to provide some details about myself. In class, we learned that even though we get to choose our preferences and settings, companies often exploit our data and manipulate algorithms to match us with what they think we want. This can be problematic, as it introduces biases into the system and takes away our trust and confidence in having control over our own lives.

Gaming Avatar
My gaming avatar comes from my Meta VR account. I wanted to showcase my military experience by wearing a military uniform, even though I know no one would know I served, and anyone can wear the outfit. I like to mess with my friends, talk smack, and when I win, I remind them that a guy in sandals beat them! When we can hide our faces on a platform, we gain the power to stand up for ourselves without fear of repercussions, which gives many people a voice. While we can freely interact and share our thoughts, we can also become victims of abuse. Platforms try to protect users, but many loopholes and methods are used to exploit people on open social platforms.

Blog numero seis
How can social interaction foster a safe space?
In the “Race and Social Media” chapter of The Social Media Handbook, edited by Hunsinger and Senft, the authors explore how race is represented on social media platforms. To foster a safe space for social interaction online, the chapter emphasizes the importance of intentional community-building and moderation. Social media users and platforms need to create an inclusive environment by actively moderating harmful content, encouraging positive dialogue, and promoting diversity in the representation of race. When social media spaces prioritize equity and inclusivity, they create a safer environment for all users.
How can we make the internet racially inclusive for the next generation?
Race After Technology by Ruha Benjamin examines how technology is seen as neutral and objective yet reinforces racial inequalities. Benjamin explains how algorithms, artificial intelligence, and other digital technologies are coded with biases that marginalize groups, especially people of color. In many cases, this racial discrimination is programmed and embedded into platforms. These supposedly impartial technologies need to be challenged to create a more just society.
Does social media help or harm victims of femicide?
The speaker, Professor Miriam Hernández, explained how social media has both helped and harmed victims of femicide in Latin America. On one hand, it amplifies feminist movements like #NiUnaMenos and #VivasNosQueremos; these hashtags raise awareness, organize protests, and demand justice for women who are victims of violence, pressuring governments for change. On the other hand, it can harm victims by exposing them to online harassment, revictimization, and the spread of misinformation, which can trivialize their experiences.
How can we change men's culture so that men stand up for women?
To change toxic masculinity and encourage men to stand up for women online, we need to use technology and social media to challenge misogyny and violence against women. These movements create awareness campaigns, encourage men to engage in conversations about toxic masculinity, and promote holding abusers accountable both online and offline. When social media is used as a tool for education and advocacy, men can recognize their role in combating gender violence and supporting women publicly and online.
Benjamin, R. (2020). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
Hunsinger, J., & Senft, T. M. (2015). The Social Media Handbook. Routledge.
Hernández, Miriam. Digital Defenders: Using Social Media to Challenge Violence Against Women. 9 Oct. 2024. Presentation.
Blog numero cinco!
Do Games Promote Game Characters Equally?
In the article "Race in Cyberspace," it is argued that virtual environments, including video games, can reinforce many racial stereotypes and inequalities seen in the real world. Most popular games are typically promoted with a white male character as the hero, while minorities are often portrayed as villains or depicted in poverty. The options for customizing an avatar are limited for many individuals in numerous games, reflecting a lack of awareness and biases among developers. Games and virtual reality have the power to connect people, but many times, they fall short in providing equal representation. We may not realize the inequality and discrimination emphasized in cyberspace, but our online interactions can negatively impact marginalized communities.
How Do Stereotypes in Video Games Become So Exaggerated?
Jeffrey A. Ow's article, "The Revenge of the Yellowfaced Cyborg Terminator: The Rape of Digital Geishas and the Colonization of Cyber-Coolies in 3D Realms' Shadow Warrior," critiques how Asian characters in games like Shadow Warrior are represented through exaggerated portrayals of Asian culture. Many times, these games are developed by individuals who do not represent the Asian community and who fail to seek input from Asian creators. Games like Shadow Warrior reflect Western stereotypes that arose from the historical context of colonization and racism perpetuated by Western countries. The media has a history of portraying Asians and other minorities through stereotypes, exploiting them in entertainment for the benefit of power and wealth.
Can Innocent Games Contribute to Stereotypes?
Tara Fickle's article "Ludo-Orientalism and the Gamification of Race" explores how video games and gaming technologies contribute to racial stereotypes. Characters from specific cultures or races may be designed in ways that align with long-standing, harmful stereotypes. These biases can reinforce negative perceptions of race and culture, shaping players' views of the world both inside and outside the game. For example, minorities playing Pokémon GO in predominantly white neighborhoods were often profiled and warned against entering areas deemed "dangerous," while white players had more freedom to explore without the same level of scrutiny. Innocent games can still contribute to real-world discrimination, revealing the complex ways in which digital interactions affect societal issues.
Kolko, B. E., Nakamura, L., & Rodman, G. B. (2000). Race in cyberspace. Routledge.
Fickle, T. (2019). The race card: From gaming technologies to model minorities. New York University Press.
Ow, Jeffrey A. “The Revenge of the Yellowfaced Cyborg Terminator: The Rape of Digital Geishas and the Colonization of Cyber-Coolies in 3D Realms’ Shadow Warrior.” Asian America.Net: Ethnicity, Nationalism, and Cyberspace.
Blog number 4
How to Challenge the Digital Divide and Poverty
In "The Revolution Will Be Digitized: Afrocentricity and the Digital Public Sphere," Anna Everett focuses on the intersection of African Americans and their efforts to bridge the digital divide. African Americans, referred to as "black technophiles," have been driven to break barriers in the digital workspace. They advocate for the use of social platforms and technology as tools to empower education, businesses, and political activism within African American communities.
Many technological platforms and corporations are designed with biases against poor communities, which often lack the resources for modern digital infrastructure necessary to empower the next generation. Black technophiles challenge a system that has neglected marginalized African American communities, demanding equal access to the opportunities of the digital age.
How to Break the Modern "Jim Code"
In the article "The New Jim Codes," Ruha Benjamin examines the historical parallels between past and present forms of discrimination, focusing on how technological inequality resembles the tactics used during the Jim Crow era. Benjamin argues that algorithms and artificial intelligence disproportionately discriminate against people of color, compared to white Americans, based on the biased data used to program these platforms.
The "New Jim Codes" are unethical in minority communities due to the lack of human oversight and the potential for flawed technology to perpetuate discrimination.
At What Age Should Digital Data Begin Being Collected?
In "Algorithms of Oppression," Safiya Noble provides critical insight into how the data collected from young people through surveillance can reinforce racial and gender biases, limiting opportunities. This data is often used to perpetuate systems of inequality, frequently without the knowledge or consent of the individuals involved. Innocent people, particularly children, are caught between biased algorithms and security risks, with little accountability from agencies meant to provide neutral oversight.
Are You Protected from Safe Searches?
In the introduction to "Algorithms of Oppression," Safiya Noble argues that while search engines are often perceived as neutral, they are, in reality, filled with racism and sexism. Google search algorithms, in particular, tend to prioritize content that often works against women of color, especially Black women. As a result, minority women and girls are frequently sexualized on the internet, with harmful platforms gaining access to innocent users without safeguards or accountability. Protecting users should be a priority for search engines, but these algorithms often serve to exploit and maintain power over the user instead.
Everett, A. (2002). The Revolution Will Be Digitized: Afrocentricity and the Digital Public Sphere. Duke University Press.
Benjamin, R. (2019). Race After Technology.
Noble, S. U. (2018). Algorithms of oppression. https://doi.org/10.2307/j.ctt1pwt9w5
Blog numero 3
How has cyberfeminism disrupted the barriers that keep women from adapting to a world of technology?
Technology has connected people globally, interlinking networks with economic wealth, but to this day women are marginalized in a system that favors men. Cyberfeminist empowerment in digital technology has begun to transform the framework of a system that has oppressed women. Donna Haraway's "cyborg" theory of becoming part human and part machine has given women the ability to challenge traditional gender roles and resist oppressive patriarchal structures. Cyberfeminism's influence in protecting women with technology is demonstrated by the website HollabackNYC, which encourages women to document and report unwanted harassment. The platform brought awareness to the unsafe environments women face and offered a tool to combat unwanted advances.
Why do we create biased algorithms?
Technology was supposed to help humanity share information more quickly and improve people's lives, but instead technology and its algorithms give corporations, healthcare systems, and banks the ability to target and marginalize groups of people. These specialized algorithms that monitor your data and retrieve your information are automated to deny benefits to people of color at a higher rate and can create economic hardship, as Eubanks shows in Automating Inequality. A system that is programmed to make money instead of helping people is a flawed tool, and it shows how technology is manipulated to create barriers, especially in underserved communities.
We need policies from government and social groups to monitor how healthcare is provided and administered to communities.
Why is technology leaving minority women behind?
In the United States, technology is more accessible to some communities than others, and women are the most affected, especially women of color. In Rethinking Cyberfeminism(s), Daniels shows that women in countries with developing infrastructure are being left behind by technology and are not being integrated into the economic system. Development and innovation are advancing at a high rate, but those in power continue to create barriers for women because of systemic issues around gender roles in many developing countries.
Is technology our new security system?
The video Race and Technology by Nicole Brown highlights how technology has been automated to police citizens and minority communities in many different sectors of daily life. These algorithms are built to monitor and target groups using biased data, which can amount to racial profiling. Surveillance has been automated with artificial intelligence, a flawed system known to make mistakes. How secure should we feel when the systems meant to keep us safe can end up targeting us because of how we look?
How do we combat facial recognition when it is wrong?
How do you prove your innocence when a system is programmed to be correct 99 percent of the time? Nijeer Parks was wrongly accused and jailed for a crime he did not commit because a faulty facial recognition match was used to identify a suspect. In "Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match," Kashmir Hill covers how this technology and police surveillance go unchecked by any outside safety nets. We need accountability and a system that protects people from being detained because of flawed data and manipulated algorithms.
Brown, N. (2020). Race and Technology.
Daniels, J. (2009). Rethinking Cyberfeminism(s): Race, Gender, and Embodiment. The Feminist Press.
Eubanks, V. Automating Inequality.
Blog numero dos!
Critical Thinking is not Enough in the Age of Influencers
Reading the article "Critical Introduction to Social Media" by Christian Fuchs, I have no doubt that social media is used primarily to influence individuals or groups as a system for maintaining power over the masses. This is possible because of our failure to think critically, since we are constantly being programmed through algorithms that influence our behavior. Fuchs finds that power is the motive for the owners of these platforms, and that it is critical for them to maintain control through activism, politics, and even users who help exploit others so they can stay in control: "Power asymmetries mean that there are groups of people who benefit in society at the expense of others…" (p. 13). Media has always been used as a tool to shape the political system, but some social media platforms run rampant with information that can be misleading or false without anyone holding them accountable.
We are giving up our power to these platforms and consenting to relinquish our rights and protections so we can scroll and navigate without questioning what we see, hear, and read. We see it when platforms allow politics to influence our opinions with false narratives from content creators, aided by foreign countries meddling in our elections. It became more apparent recently when the FBI discovered that politicians from both the Democratic and Republican parties were being targeted, and that influencers were being financed by countries that wish to suppress our vote. How can these companies have so much power and still allow outside influence? We have government officials giving their power, influence, and voice to these platforms. Who is protecting whom, when not even the government is thinking critically about itself?
Blog post numero uno
A time when technology failed me was when I was a young private in the Army and my alarm didn't go off. It was definitely a long day; I can still hear the "sleeping beauty" remarks from my platoon while I crawled through the mud. I was not happy, but I guess what happened was that my phone battery was low, and when it reached a certain point it started to update automatically. I forgot to mention that this particular morning was important because we were heading out for a couple of weeks of training in the field (the woods).