discussionswithgyetti
10 posts
Hello! Welcome to Gyetti's discussion corner!
discussionswithgyetti · 2 months ago
Blog Post #10 (4/24)
Is it morally acceptable to make opting out of data collection so inaccessible/inconvenient to the public?
As we discussed in class, a necessary protective measure that people really should opt into more is cleaning up their digital footprint. Although many may think of their “digital footprint” as just the things they post on social media, we are now in an age in which almost anything we do on our devices gets stored as data, with our names, locations, and personal information written all over it. One of the most shocking aspects of this discussion was the Apple AI/Siri feature on our phones: to disable it, someone has to turn it off once in its designated settings location and then again in every app where they do not want the feature active. This design choice, along with the deliberate choice to forcibly turn it back on with every Apple system update, is incredibly inaccessible for individuals who want to protect their data. It treads the line of not being truly opt-out-able, and it discourages people from looking into their digital footprint.
How can the label “terrorist” be used to surveil political opponents in the U.S.?
In the Parenti (2003) article we learn about the Patriot Act, which was originally created to surveil individuals deemed potential “terrorists” who threatened democracy in the U.S. This fearful terminology has since been used liberally as an excuse to surveil anyone that government officials consider dangerous. In recent years, protestors in the BLM movement were labeled terrorists, while individuals who staged an insurrection against the U.S. were labeled patriots. This undefined, fear-mongering word works to the detriment of civil activists in the U.S., and that only becomes more apparent as the current administration continues to wreak havoc. This terminology means that nobody ends up being truly safe.
Has the slow acceptance of surveillance caused younger generations to be less concerned with digital surveillance as AI has grown more prominent?
Parenti (2003) makes a good point that hits home for me as I watch what is going on within the current administration. We, now young adults, have lived our whole lives after the Patriot Act, in which our information, data, and privacy were often not considered things we were really allowed to have or own. From parents and teachers checking our phones and backpacks, to learning about the consequences of posting online and the lack of anonymity you truly have, I never once questioned who had a right to information about me; rather, I learned that my information was something anyone could find at any time, and that I had better work hard to make sure it was spotless. Now that AI is becoming more prominent within our devices, I think I truly feel the results of the slow-boil violation of privacy the article describes. Many people (including myself) didn’t blink an eye when AI was introduced to our phones. Of course, now that I am more aware of its implications I care a lot more, but my initial indifference was definitely a by-product of the desensitization to privacy violations I grew up with.
How does the political indictment of online activity put all people who engage online in danger?
In the Power (2010) article we learned how Madison and Wallschlaeger used social media (specifically Twitter) to aid protesters by relaying information to people at a protest. We see this type of social media use all the time in recent years, especially on platforms such as X (formerly Twitter), TikTok, Instagram Live, and Reddit. People act as independent journalists, information correspondents, and activists from everywhere across the world. And it is not only direct posting: indirect advocacy, such as comments stating “Writing this for the algorithm so that more people can see it,” can also count as part of “active protesting” in “riots” across the globe. Nobody is truly safe in posting, reposting, liking, or commenting on social media.
References
Parenti, C. (2003). Fear as Institution: 9/11 and Surveillance Triumphant. In The Soft Cage. Basic Books.
Power, M. (2010). How your Twitter account could land you in jail. Mother Jones. https://www.motherjones.com/politics/2010/03/police-twitter-riots-social-media-activists/
discussionswithgyetti · 2 months ago
Blog Post #9 (4/17)
How have social media blogs made people complicit in market pricing, as opposed to revolutionaries of social and economic change?
One topic addressed in Dr. Lee’s article (2015) is the use of blogs to promote Black-owned hair care businesses; however, one reported negative is that Black-owned hair care products tend to be a lot more expensive than commercial products. She mentions that Black entrepreneurs tend to be satisfied with market prices, adding money on top to cover the costs of resources and novelties. I think that operating within a capitalistic view of activism does end up causing issues for marginalized, low-income populations. Capitalism tends to be exploitative in general, so it would take a more reformative form of activism to better help these communities.
How has the declining popularity of blogging websites affected these “homeplaces” for communities?
Short-form content has taken over social media apps due to its high-interest nature. YouTube, Instagram, TikTok, and Twitter are filled with short-form content, while blogging websites like Tumblr are tending to lose steam. This might mean that these homeplaces are less accessible, or less frequented, for marginalized populations. Individuals belonging to marginalized groups may also be resorting to this short-form content (especially comment sections) to garner a sense of community. However, there is less activism that can be done in these spaces.
In what ways can dishonest or malicious use of diverse online social communities undermine the benefit that people are able to get from those communities?
In the McLaine (2001) article, the author discusses multiple ethnic online communities, how information is gathered and spread among those populations, and how social divides still occur in those spaces. What struck me the most was the type of information that tended to be most popular in those spaces, especially the sex, love, and money category. I feel as though one detriment to online communities is that trolls are able to get into them, that they are hard to moderate, and that misinformation can spread so rampantly. This makes me wonder whether these communities are more susceptible to trolling or malicious information spreading, and whether there are ways to limit or combat it.
How is #GamerGate an example of how online information can be used to silence activist movements?
We know from #GamerGate (2014) that people are able, very easily, to compile information about adversaries that can potentially ruin their lives. Although in this case Zoe Quinn was not necessarily a political activist, her status as a woman developer in the gaming industry was the catalyst for a whole onslaught of harassment. I wonder, then, as online spaces become more and more politically aligned, how this kind of information collection can impact other political adversaries. Is social media even a safe place to try to organize anymore?
References
Hathaway, J. (2014). What is Gamergate, and why? An explainer for non-geeks. Gawker.
Lee, L. (2015). Virtual Homeplace: (Re)Constructing the Body through Social Media.
McLaine, S. (2001). Ethnic Online Communities: Between Profit and Purpose. In D. Gauntlett (Ed.), Web.Studies: Rewiring Media Studies for the Digital Age. Arnold Publishers.
discussionswithgyetti · 3 months ago
Blog Post #8 (4/10)
How does social media encourage people from other ethnic groups to engage in resistance to common social narratives online?
In Dr. Lee’s (2017) article, one of the main findings was that hashtags were used to critically push back against narratives in traditional media claiming that the victims of police brutality deserved what happened to them because they were criminals. Not only was this found among African American userbases, but we also see activism among white participants in the #CrimingWhileWhite thread, which shed light on racial injustices in the prosecution of white people versus people of color. I think social media therefore has the potential to encourage and lift up outgroup commentary about social issues through its engaging nature. It makes it more accessible to listen to the experiences of marginalized racial groups and act accordingly as a group with more institutional and social privilege, unveiling unjust experiences.
How can the surveillance and organization on social media act as a double-edged sword? Are there ways to combat this, disburse information, and keep it away from people who can use it negatively?
In Dr. Lee’s (2017) article, we learn that social media, especially Black Twitter, was used during protests and boycotts to disburse information among protestors. Although this was very useful for people actively participating in these protests, we also know that it poses a potential threat to protestors because officers can collect their information. This can threaten online political organizing; if people are too afraid to speak up online, it limits the protest pool. To get the best out of social media information disbursement, announcements for group formation should be made on bigger platforms, with an active goal of organizing individuals into smaller, more thread- and text-based networks (such as Discord or any platform with encryption).
What are some issues of individual journalism online, especially when discussing prejudice and injustice for other populations?
As discussed by the speaker on Wednesday, we know that social media can play an excellent role in spreading awareness of social injustice and knowledge about social movements. However, the perspective on these social movements may be taken away from the main affected populations and replaced by interpretations from outside views. This is a similar issue to mainstream media. The positive of online political engagement, however, is that the voices of the main affected populations can be amplified.
How might more consumption-based and less participatory social media impact people’s ability to engage in political movements in real life?
In the Fuchs (2014) passage, we learn that platforms like Twitter and Facebook tend to create opportunities for people to choreograph movements, with people being led to different social movements and information online. As Castells argues, social media has been used for mass communication as a main catalyst for social movements. However, in recent politics, we see a lack of young people going out to protests in real life, even though social media consumption is high. It may be that as social media becomes less participatory and more consumption-based, people have been unable to utilize its organizational power.
Works Cited
Lee, L. (2017). Black twitter: A response to bias in mainstream media. Social Sciences, 6(1), 26. https://doi.org/10.3390/socsci6010026 
Fuchs, C. (2014). Social media: A critical introduction.
discussionswithgyetti · 3 months ago
Blog Post #7 (3/20)
How does cyber activism act as a more accessible way of political engagement? 
In the Elin (2013) chapter discussing political activism, we learn about Zeke Spier, a college student turned activist, and how social media was used within his activism process. One of the major themes of the chapter was online mobilization: the ability to find like-minded people from coast to coast and, therefore, to start the political protest process before finding actual in-person groups. This shows us that online spaces can be used as a starting point in political mobilization, especially for individuals who do not have a community readily available to them. Online communities can act as an entry point, a stepping stone to more political activism, or a more accessible form of political activism (e.g., for individuals who have chronic illness or disability, or who face socio-economic barriers that prevent them from protesting).
How do privacy laws affect people's ability to engage in cyberactivism? 
In the early 2000s, technology to track people down was not as developed as it is now. The right to organize online has been almost hijacked, as many “usual” social media websites actively collect user data that can be semi-easily traced back to real-life individuals. The lack of privacy, therefore, has made organizing online a lot riskier. However, people have already started to combat this, using encrypted software or VPNs to make it harder to be tracked.
Does “fact-checking” on sites such as Twitter (X) fix the issue of white supremacist misinformation?
In Daniel's chapter (2019) a point is made that participatory information disbursement leads to an increase in white supremacist rhetoric and info online, with the absence of a fact checker. In recent years, fact-checking has been deemed the responsibility of the social media website. For example, something like community notes on X is made so the community can try to fact-check false information. However, it is still a participatory data analysis, and if the environment is prejudiced, then fact-checking can be used for ulterior motives. The biggest danger, in my opinion, is the assumption that because something is “fact-checked” online, then it is seen as the truth. I feel we still have the concerns of the past, but with an assumption that things are safer now. 
What mechanisms within social media cause changes in attitudes toward racial justice to happen?
In the conclusion of Daniels’s chapter (2019), she suggests that mobilization isn’t truly the main issue regarding social media and political engagement; rather, it is changes in social attitudes toward social justice. I would like to know what mechanisms online promote this form of radicalization. We know that on forums, such as the one described in the text, a self-created echo chamber can occur, which fosters the development of problematic attitudes. However, when considering other social media sites, such as TikTok or YouTube, we know that algorithms greatly influence where you “end up” and what type of content you see. I wonder, then, if that has been a major reason why we see a shift toward right-wing extremism.
References: 
Daniels, J. (2019). Gender, White Supremacy, and the Internet.
Elin, L. (2013). The radicalization of Zeke Spier: How the Internet contributes to civic engagement and new forms of social capital. In Cyberactivism (pp. 97-114). Routledge.
discussionswithgyetti · 3 months ago
Blog Post #6 (3/13)
How do online bodies impact societal expectations of "femininity"?
We know that the female body has constantly been objectified and that women have always been expected to be sexy but friendly, "not too much and not too dreary." However, O’Riordan (2022) describes how the emphasized feminization of technology and the sexualization of women in games and technology reinforce the creation of women as helpers and tools to be used, expected to be functional and "hot." The real and the cyber get blurred as they pertain to attitudes toward women, and mixing these expectations with real women can cause real issues for the objectified and technologized.
How do the male domination of discourse in online spaces and the creation of women as online assistants interact?
Daniels (2019) discusses the white male domination of online content, and how, in her article, online white supremacist groups gave most of their attention to members who were men. Women, on the other hand, proceeded to create their own spaces but were considered less popular and were interacted with less. We also know that women are often made into online objects, assistants, mothers, and friends. That means gendered roles and positioning are still occurring in this expansive online world.
How do online discourse and discussion promote de-intellectualization?
In a study, Daniels (2019) investigates a female white supremacist online group. One of the most shocking findings is that although women within this group tended to reproduce feminist rhetoric (while also being racist in their arguments), there was a constant lack of critical analysis of the racial and gendered social hierarchy. I feel as though de-intellectualization has become more and more common lately. I wonder if there is something about online communication that deters this type of critical analysis or talk. Is it the context these messages come from? Is there a cultural shift that occurs when conversation moves online?
Does this de-intellectualization online cause issues when it comes to fighting white supremacy?
Gender relations, sexual orientation relations, and racial discussions all interact online as they pertain to understanding societal roles, and there is a lack of any attempt to deconstruct and address societal issues. Daniels (2019) discusses this through her analysis of an online group in which, despite the feminist rhetoric, extreme white supremacist rhetoric was occurring. Unlike when leaders talk to followers, individuals now have a space to think through and justify their positions (Daniels, 2019). This causes an interaction of ideals, with individuals reappraising their views to fit within a white supremacist narrative, which is a big issue because it keeps people within these communities without the need, or want, to re-evaluate their biases. It gives people a place to rationalize their cognitive dissonance.
Works Cited:
Daniels, J. (2019). Gender, White Supremacy, and the Internet.
O’Riordan, K. (2022). Gender, Technology, and Visual Cyberculture. Critical Cyberculture Studies, 243–254. https://doi.org/10.18574/nyu/9780814708903.003.0025
discussionswithgyetti · 4 months ago
Blog Post #5 (3/6)
How are colorblind ideology and the increased humor and stereotyping of different races both used to promote white supremacy in online spaces?
Colorblind ideology, in regard to digital media specifically, is the idea that online spaces do not concern themselves with race and therefore cannot be racist (Senft & Noble, 2015). Despite that ideology, we also know that race-based humor and racist rhetoric are still incredibly prominent online. That both are true is contradictory, as a truly colorblind social space would not continuously promote racially degrading humor. The colorblind ideology therefore promotes white supremacy by ignoring racial discrimination, acting as an excuse for individuals not to become upset at the speech they see online.
How is false marketing on online movie websites (e.g., Netflix) exploitative toward marginalized groups?
In the reading from Race After Technology (Benjamin, 2019), one of the first things we learn about is the use of advertising, and how websites like Netflix will promote advertising images for shows to suggest they “have diversity” even if the content of the show itself isn’t very diverse. This practice is exploitative in that it takes a population that already has a hard time garnering a significant amount of representation in traditional media and uses that demand for the company’s own financial gain. By the time someone watches the show, it is already too late, and their data is being used to add more money to Netflix’s corporate coffers.
According to the race and technology article, in what ways does “reasonableness” embed itself within the topic of racism online?
According to Benjamin’s article (2019), reasonableness is a presumption people hold about rhetoric on social media. Reasonableness, alongside colorblind racism, is used as an argument for why racism “doesn’t exist” in online spaces. The assumption that most people are reasonable and therefore can’t be racist may stem from a virtue-oriented currency on social media. Just like in this week’s episode of Black Mirror, individuals are constantly being assessed by their virtue and actions online, with online activity having real-world consequences. The idea that people are reasonable, then, may be a result of a heightened status quo online that demotes critical thinking.
In what ways does “basic” knowledge about being critical online actually backfire in terms of spreading white supremacist rhetoric?
In Daniels’s article on white supremacy in digital media (2009), we learn that websites are often treated as completely equal in terms of the quality of their information. I personally remember growing up learning in school how to critically evaluate information online, so this section in particular stuck out to me. The online popularization of playing devil’s advocate, of taking into consideration information that is defamatory because it “may be just as valid,” may be exacerbated by colorblind racist ideology. Essentially, believing everything you read online can stem from ignorance of racial prejudice online, thereby unknowingly spreading white supremacist ideology.
References: 
Benjamin, R. (2019). Race after technology: Abolitionist Tools for the New Jim Code. Polity.
Daniels, J. (2009). Combating Global White Supremacy in the Digital Era.
Senft, T., & Noble, S. (2015). Race and social media. In J. Hunsinger & T. Senft (Eds.), The Social Media Handbook (pp. 102–125). Routledge.
discussionswithgyetti · 4 months ago
Blog Post #4
In what ways does being critical of security with technology protect tech consumers from being exploited?
In the speaking event for this week, we discussed media literacy and, more importantly, the concept of surveillance and how we can limit the data that large technology companies get from our usage. Regarding exploitation, we know that our data is often used to sell things back to us as consumers! Our data is sold to tell companies what ads we like and what ideals we have, and, therefore, how we can be made to buy things. I question how being critical of where our data goes protects us from being exploited, both as consumers and as unknowing producers of funding for these websites.
How are “Cyberspace”, or online digital spaces, now even more so, designed to make it hard to contextualize race online? 
Kolko et al. (2000) discuss the implications of cyberspace for race and identity portrayal and how online spaces tend to promote ideological depictions of the cyborg, a non-gendered, non-raced entity portrayed as the member of online communities. The article describes the importance of understanding race in cyberspace and why the disconnection from race causes issues in grander discussions of race in society. However, as time has passed, many platforms have continued to disconnect race from their discourse. One example I can think of is Twitter, where often ungendered, unraced, and unnamed profiles describe and comment on race relations in the “real world” while not acknowledging the implications of race and the treatment of others online in a deeper, macro-level way. Topics of race and gender are romanticized rather than seen in a social light; it is interesting to see how those times have changed.
In what ways does digital disembodiment within video games encourage online discrimination?
In the TED talk 5 Types of Online Racism (TEDx Talks, 2011), we see that Rampage Jackson was actively discriminated against while playing Halo as a teen. We know that digital media tend to present themselves as disembodied spaces, detached from race and racism. Video game spaces are often seen as places where people can say and do whatever they want without concern or remorse for the players at the other end. Disembodiment, then, encourages racist and prejudicial behavior, as there is a lack of humanity and of actual consequences associated with prejudice.
How are racial stereotypes in video games used to spread disinformation within video games? 
Also from the TED talk (TEDx Talks, 2011), we learn that identity tourism is used to take characters and make them act out stereotypes. The video discusses how this can act as minstrelsy, a parody or imitation of a specific group or stereotype. But can individuals use these identity tourism attitudes to genuinely perpetuate negative stereotypes and false information? In what ways can racial stereotypes in video games be used for genuine social information gathering? I would argue that pretending to be another identity in order to make fun of or play on stereotypes may spread misinformation about those groups, acting as a form of disinformation.
References
Kolko, B., Nakamura, L., & Rodman, G. (2000). Race in Cyberspace. Routledge.
TEDx Talks. (2011, October 11). TEDxUIllinois - Dr. Lisa Nakamura - 5 Types of Online Racism and Why You Should Care [Video]. YouTube. https://www.youtube.com/watch?v=DT-G0FlOo7g&t=90s&ab_channel=TEDxTalks 
discussionswithgyetti · 4 months ago
Blog Post #3
Q1: Although social media is public, are there moral issues with the monopolization of spaces where marginalized groups may go to cry out and create change (especially taking into account the current state of government)?
In “The Revolution Will Be Digitized,” the author discusses the role of the internet as a public sphere for Black activism during a time in which there was a lack of safe public spheres for social change (Everett, 2011). This book was published in 2011, before the extreme monopolization of social media platforms. The use of unofficial forum websites has died down, and individuals now often use these new platforms to elicit social movements and create change. However, especially taking into account the current climate of politics and the digital-revenue-based oligarchy that appears to be forming within the United States, I would like to question the moral implications of these “public spheres” when the attention we provide, the ads we watch, and the data we give all seem to line the pockets of capitalist oppressors.
Q2: The New Jim Code states, “thus, even just deciding what problem needs solving requires a host of judgements; and yet we are expected to pay no attention to the man behind the screen”. In what ways do narratives and discussions around new technologies affirm the idea that new technologies are “unbiased”?
Algorithms and data-driven decision-making are often seen as “out of the hands” of individual technicians and social media programmers. As stated in the New Jim Code chapter of Race After Technology (Benjamin, 2020), a neoliberal, colorblind view of technology has taken precedence. I reflected on my own experiences prior to this class, as I too had lived under the assumption that algorithms were absolute. After taking into account my previous opinions on algorithms, and what this article states regarding neoliberalism and productivity, I realized that production in “logic” has been moralized as good without further thought; logic here being different from empirical evidence, meaning more a no-nonsense, individualistic approach to the world.
Q3: How do the exclusivity and gatekeeping of knowledge about algorithms contribute to their continued harm to marginalized communities?
In this week's Power of Algorithms chapter, the author states, “It is impossible to know when and what influences proprietary algorithmic design, … except as we engage in critique and protest” (Noble, 2018). This statement made me question: how has the privatization of these public spaces prevented marginalized individuals from being part of the conversation about their own algorithms and the information they see? If updates and changes alter the info people are exposed to, then why are consumers NOT more involved in the algorithm creation process?
Q4: How might issues regarding online algorithms worsen as artificial intelligence takes search engines by storm, now automatically generating simple, consumable answers?
This question stems from an ending remark in the Power of Algorithms chapter (Noble, 2018) stating that there is a lack of human context in some types of algorithmically driven decisions. Questions arise for me, such as: what sources are used in these AI generations? It can’t be all sources; are they the sources that pay money to be prioritized on Google? The further distilling of responsibility (with AI now being seen as absolute truth) may make it even harder for individuals to fight against algorithmic oppression, because it adds another “middle man”.
References:
Benjamin, R. (2020). Race After Technology: Abolitionist Tools for the New Jim Code. Polity. 
Everett, A. (2011). “The Revolution Will Be Digitized: Reimaging Africanity in Cyberspace.” Digital Diaspora: A Race for Cyberspace, State University of New York Press, pp. 147–82. 
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
discussionswithgyetti · 5 months ago
Blog Post #2
Q # 1: In what ways are intersectional groups of women and other cultures overlooked when analyzing the cybercommunities of women online? (Daniels, 2009)
Daniels (2009) states in her article that gender is often separated out when investigating online media use, while other marginalizing factors, such as race and socio-economic status, are usually not thought of. One of the most prominent examples she provides is an Asian participant from a previous study who looked at pro-ana content online, illustrating how “real life” social isolation and disconnect related to the embodiment and community she sought online.
Q # 2: Have the people who benefit from cyberfeminist communities changed over time?
Online cyberfeminist communities are cited as areas of information, exploration, civic engagement, and collectivity for individuals of varying gender identities and experiences. Whether through online representation and exploring outside of their physical bodies to find identity, or connecting with their bodies in a way that is not normalized in society, women and gender-minority individuals can use these spaces to explore and connect in ways that may not be socially available to them (Daniels, 2009). However, are they the only ones benefitting from these online spaces? If we think about today's technology, we do not often have separate websites for activism like we did in 2009. Instead, we use pre-established social media applications (X, Discord, Facebook) to create these spaces. Is it possible that these cyberfeminist communities now benefit not only women and gender-minority individuals, but also these companies?
Q #3: In what ways are marginalized and minoritized groups disproportionately affected by the automation of government and political processes? 
Minoritized and marginalized groups are often categorized in stereotype-fulfilling ways, usually harming individuals who are marginalized based on race or socioeconomic status. Eubanks (2018) discusses this in her chapter: automated eligibility systems often monitor and over-classify individuals of low economic status as committing “fraud,” as happened to her partner after they had to switch insurance because of a job change. Not only do the systems themselves cause this issue, but the verification and rectification processes also disproportionately affect low-income individuals, as the extensive time it takes to rectify these wrongs often involves a strenuous, drawn-out process for the wronged. Individuals who are low-income, working, etc., may not have the social support or the financial freedom to take the time to rectify these issues, causing them to be affected much worse than others with those resources.
Q #4: Why does there seem to be little push for reform for these automated technological issues?
As Brown (2020) alludes to in her video, the assumed prioritization of “fairness and safety” tends to override the blatant harm many of these automated systems inflict on marginalized members of society. Brown also points out that the United States spends more on healthcare than any other developed country, yet still has the worst outcomes of any developed nation. The lack of reform for these automated technological issues may stem from the fact that automation saves hospitals money on labor, and that although these harms occur, they tend to fall on populations far less likely to actively fight the system, for the reasons listed in the question above. 
Works Cited
Brown, N. (2020). Race and Technology [Video]. YouTube.
Daniels, J. (2009). Rethinking cyberfeminism(s): Race, gender, and embodiment. WSQ: Women’s Studies Quarterly, 37(1–2), 101–124. https://doi.org/10.1353/wsq.0.0158 
Eubanks, V. (2018). Red Flags. In Automating inequality: How high-tech tools profile, police, and punish the poor (pp. 9–28). Tantor Media.
discussionswithgyetti · 5 months ago
Blog Post #1: Storytime! (1/31)
The worst technological failure I have ever experienced involved texting the wrong person, specifically my mother. Like in many other families, my mother and I tend to bicker a lot. It is never overtly malicious; we love each other to the moon and back; however, we often have very different ideological beliefs that cause us to squabble. One particular fight occurred while my brother and sister were in the room with us. 
The fight happened when I was 18, so I have completely forgotten the topic of the argument. All I knew was that my older sister and I found my mother's take unreasonable, because she and I were texting back and forth about it. One of my texts to my sister essentially said, “Oh my god, I’m so upset with Mom; she’s driving me crazy!” I hit the send button and looked over at my sister to see her reaction, only to hear a ding not from her phone but from my mother's. I had accidentally sent my already upset mother a text complaining about her. 
Another key piece of context is that I am Middle Eastern. My mother is a proud Arab woman who raised me to always respect my parents, so I knew that if she saw this text, things would not end well for me. My mother's phone was facing up, so when the message arrived and the phone pinged, my sister (sitting right next to her) could see it. After giving me a look that could only be translated as “You’re dead; I will start preparing the funeral,” she had the idea to quietly take my mother's phone and delete the message.
I had the task of not freaking out and continuing to argue with my mom so her attention would stay on me. My mother was too focused on arguing with me to notice her phone ping, and my sister took her phone. My mother also never had a passcode on her phone, so my sister could easily open it, delete the mistaken text message, and put the phone back on the table without her ever noticing. That day, I learned two very important lessons: 1. Do not send messages complaining about someone in the middle of an argument, and 2. The power of family (or rather, sibling alliance) is stronger than the danger of any silly situation you can possibly get yourself into.  