lizbethgonzalez98 · 4 years
Week 13 Blog Post (due 11/20/2020)
1. Determine what kind of social media site you are creating. Then answer the following questions, explaining your decisions. (Remember, the key here is to create an ethical algorithm model). (133)
The social media site I would start is one for fitness motivation. I would call it FitForAll. There would be subsections for people taking similar approaches, such as trying the same diet or working toward the same goals. The site would let people share advice, progress, recipes, and exercises. The goal would be to build communities of people who want the same thing so they can motivate each other. Everyone would have to be respectful of one another; no form of bullying would be tolerated. Sharing progress pictures would be allowed, but showing your face would be discouraged, and images would have to follow specific guidelines. To sign up, the only things required would be a valid email address, your name, and your age. As a safety precaution, only people 18 or older would be allowed to sign up.
2. What will you measure? (ex: likes, hashtags, how long someone stays on a page, etc.) Be sure to explain all your decisions.
Likes on a post would be the primary measurement. The post with the most likes would rise to the top of its subsection, indicating that it was helpful to the community. We would also measure how long a person stays on a subsection; if the bodybuilding or keto subsection is the most popular, it will go to the top of the subsection list. The activity of individual users would also be monitored. If an account is inactive for too long (more than five months) or is disruptive to the community, it would be deleted. Disruptive accounts would also be flagged so they cannot create a new account with the same email address. The most helpful tips from each subsection (determined by the number of likes) would be featured on the homepage. Comments would also be monitored: if a particular topic is generating a lot of conversation, it would be pushed to the top.
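The account-lifecycle rules above (deletion after more than five months of inactivity, plus flagging disruptive accounts so the same email cannot re-register) could be sketched roughly as follows. This is a minimal illustration, not a real implementation: the 150-day cutoff, the class shape, and all names are assumptions made for the example.

```python
from datetime import datetime, timedelta

# Illustrative policy constant: "more than five months" approximated as 150 days.
INACTIVITY_LIMIT = timedelta(days=150)

class Account:
    def __init__(self, email, last_active, disruptive=False):
        self.email = email
        self.last_active = last_active
        self.disruptive = disruptive

def prune_accounts(accounts, blocked_emails, now=None):
    """Delete inactive or disruptive accounts; flag disruptive emails
    so they cannot be used to register again."""
    now = now or datetime.utcnow()
    kept = []
    for acct in accounts:
        if acct.disruptive:
            blocked_emails.add(acct.email)  # flag: block re-registration
        elif now - acct.last_active > INACTIVITY_LIMIT:
            pass                            # inactive too long: deleted
        else:
            kept.append(acct)
    return kept

def can_register(email, blocked_emails):
    """Sign-up check: flagged emails cannot create a new account."""
    return email not in blocked_emails
```

A disruptive account is removed and its email lands on the blocklist, while an account that is merely inactive is removed without being flagged.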
3. How will you weight these factors? (rank factors from most important to least important)
The most important factor would be whether people are following the guidelines. It would be essential to verify users' ages and monitor how they interact with other people. Next I would consider interaction with posts: likes would be the primary measure, followed by comments. Engagement matters on this site because the purpose is to learn from and motivate each other. Lastly, we would monitor how long people stay on the site to try to maximize engagement.
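One way to picture this ranking is a sketch where guideline compliance gates everything and the remaining factors are weighted likes > comments > time viewed. The specific weight values below are placeholder assumptions chosen only to show the ordering, not tuned numbers:

```python
# Illustrative weights: likes outweigh comments, which outweigh time-on-page.
WEIGHTS = {"likes": 3.0, "comments": 2.0, "minutes_viewed": 1.0}

def score_post(post):
    """Return a ranking score; guideline violations rank below everything."""
    if not post.get("follows_guidelines", True):
        return float("-inf")  # most important factor: compliance
    return sum(WEIGHTS[key] * post.get(key, 0) for key in WEIGHTS)

def rank_feed(posts):
    """Order a subsection's posts from highest to lowest score."""
    return sorted(posts, key=score_post, reverse=True)
```

With this scheme, a heavily commented post can outrank a post with more likes, but no amount of engagement rescues a post that breaks the guidelines.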
4. What factors will not be measured? (what will not be measured in your algorithm and why)
Things like ethnicity, social standing, occupation, and sexuality will not be measured. The site is supposed to offer support to everyone, no matter how you identify, which would hopefully help mitigate sexism and racism within the fitness community. No discrimination on any basis will be tolerated. We often see algorithms discriminating against minorities, but ethnicity, sexuality, and gender are unnecessary to track when someone is trying to reach a fitness goal.
5. How will the model learn or adapt to new information? (this can include changes in language, etc.)
Any new post will be fact-checked and screened against the harassment guidelines. The model will adapt to new information by analyzing each post before it is allowed up. Clear flags in a post, such as profanity or derogatory terms, will prevent it from being uploaded. If a post that violates the guidelines somehow gets through, a user can report it, and it will be taken down if deemed disruptive. If a person uploads a link to another site, the link will be checked to make sure it is safe and follows the guidelines.
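The pre-upload screening and report-driven takedown described above might look something like the sketch below. The flagged-term placeholders and the report threshold are invented for illustration; a real moderation pipeline would use a maintained lexicon and human review.

```python
# Placeholder stand-ins for a real lexicon of banned terms.
FLAGGED_TERMS = {"slur1", "slur2"}
REPORT_THRESHOLD = 3  # assumed number of user reports before takedown

def allow_upload(text):
    """Reject a post before publication if it contains a flagged term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return words.isdisjoint(FLAGGED_TERMS)

def review_reports(post):
    """Take an already-published post down once enough users report it."""
    if post["reports"] >= REPORT_THRESHOLD:
        post["visible"] = False
    return post
```

The first function models the "apparent flags" that block an upload outright; the second models the fallback where the community reports a post that slipped through.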
6. What biases will be built into the model? How will this change the algorithm’s outcome?
One bias built into the model is the ability to follow specific subsections: following automatically pushes that content to the top of a user's feed. Since the site measures likes, the most-liked content will also be pushed to people even if it isn't what they want to see. For example, if someone is looking for cardio ideas but the most-liked post in that subsection is a progress post, that post will not help them. This means the algorithm will praise people for their success, or the good parts of fitness, while overlooking people who are just starting.
7. How will the model live out the (un)ethical practices we have discussed?
The model will avoid unethical practices by monitoring discrimination closely. Racism, sexism, etc., will not be tolerated by any means. The site does not ask users to disclose their ethnicity, gender, or sexual preference, so they cannot be targeted on those grounds. Everyone is encouraged to grow and live a healthier lifestyle, regardless of who they are. If a person does experience discrimination, an open dialogue on the site will be encouraged so everyone can learn from it. This site is meant to be a place of growth, health, and healing for all. There might still be a divide for people who lack the financial means to access the site, but it will be easy to download content, print it out, and pass it along to others who might need it.
Week 12 Blog Post (due 11/11/2020)
1.       What do all three articles have in common?
All three articles deal with the issue of profit. In "Virtual Homeplace," we see a struggle between advocating for your community and taking advantage of it. In "Ethnic Online Communities," we see a common theme of advertising companies taking advantage of specific ethnic communities. Gamergate showed a group of people going after small independent developers rather than the huge, profitable game companies. Money is a common theme in all three articles, and it intertwines with the issues of racism and sexism. Racism and sexism have proven to be profitable, allowing companies to take advantage of a community's despair. The purpose of advocacy is lost when it is tainted by advertisements and sales pitches that only benefit white-male-owned companies.
2.       Can a virtual home place be the start of a movement?
In the article "Virtual Homeplace" by Lee, we see hair blogs become places of healing, comfort, and nurture. Blogs such as "CurlyNikki.com," "Afrobella, all shades are beautiful," and "Blackgirllonghair.com" have provided spaces for black women "to build a community around their body, but more specifically hair." There, the women discuss products, DIY hair remedies, and even social concerns about how natural hair is portrayed in the media. Many of these blogs call out media figures such as Stanger on Bravo for shedding a negative light on natural hair and glorifying Eurocentric standards of beauty. This commentary resulted in people boycotting the show and working "to teach and discuss the historical wounds of racial oppression." There are many examples of blog discussions resulting in social change, but there are limitations, such as blogs only being virtual homeplaces for tech-savvy people who have access to technology. With that being said, I still think virtual homeplaces can be the start of a movement because they bring people together and start discussions on important topics that aren't always covered in academia.
3.       How do you know when profit has taken over purpose?
When profit has taken over purpose, a website, blog, or advocacy group values profit more than the cause it stands for. As stated by McLaine, "Profit is easier to measure and benefit from, while purpose is more elusive and indirectly rewarding." When many of these EOCs started, their goal was to bring together people of the same ethnicity. The advertisements on these sites have proven that profit has taken over purpose: none of the advertisements featured minority-friendly companies, and the advertising money wasn't used to advance the community. As a visitor to these sites, it might be hard to tell when you are being sold something and when the company is genuinely concerned with the community's well-being. Paying attention to the advertisements used and how the company wields its power is essential. We should always strive to support one another in the most authentic way possible.
4.       How has ignoring issues of race and sexism online affected the growth of online spaces?
In many of the readings we have done throughout the semester, we have seen online spaces ignoring groups of people. Gamergate was started over a woman in gaming, and EOCs were created to fill a void on the internet. Excluding these groups of people limits "the growth of individual users, as well as the technological medium as a whole." These people are usually targeted as sources of profit, but they should be more than just dollar signs. Women and people of color have a lot of knowledge to offer. In Gamergate, we see people saying that they want to play games without discussing minorities or women. By doing this, we limit our knowledge and accept privilege. As stated by McLaine, the internet should not be seen as an equalizer. On the contrary, the internet has highlighted all the work we still have to do to become more inclusive.
Hathaway, J. (2014). What is gamergate and why? An explainer for non-geeks. Gawker.
Lee, L. (2015). Virtual homeplace: (re)constructing the body through social media. 91-111.
McLaine, S. (2003). Ethnic online communities: between profit and purpose. 233-254.
Week 11 Blog Post (due 11/4/2020)
1.       What are some problems that can arise through activism on social media?
Activism on social media is widely used to correct media portrayals of people of color, mobilize groups of people, advocate for causes, and call people to action. While we see many companies being corrected through social media, we also fall into the trap of "accepted truths." These accepted truths are things that are easily believed, so people rally behind the cause. This allows some causes to get more coverage than others. An example of this is Marissa Alexander, who didn't get nearly as much media coverage as Trayvon Martin. The public is correcting media companies, but we now have to be more conscious of who we are and aren't rallying for. It is essential for "black women, queer, transgender, and immigrant bodies" to receive the same attention as black men, whom society accepts as being under attack.
2.       Is activism on social media just as effective as activism in person?
Online activism wouldn't be possible without some activism being done in person. Fuchs states that "it is not the internet that creates sociality, but human actors who are embedded in antagonistic economic, political, and ideological structure of society." Social media does help amplify some voices and push forward specific agendas, but it wouldn't be possible without human interaction. Both forms of activism are now intertwined, and in several social movements, face-to-face communication was a more important means of getting information.  
3.       How has the history of surveillance impacted us today?
News bias in the United States can be traced back to the history of surveillance of enslaved people. Media representations of black bodies have always carried negative connotations, framing them as criminals and problem populations. This portrayal has impacted how "our institutions were constructed to respond to black bodies." While reading Lee's article, it was astounding to me how similar the death of Eric Garner was to the death of George Floyd. It is crucial to recognize how much power the media has over how people of color are treated. George Floyd is a prime example of how little has changed since Eric Garner, and media portrayal is still a huge issue.
4.       Can hacktivism be considered an essential role in online activism?
Online activism is "comprised of proactive actions to achieve a certain goal or reactive actions against controls and the authorities imposing them." Hacktivism is used to promote online advocacy and take direct action against a company perpetuating an agenda the advocates disagree with. I think hacktivism is necessary because it brings the issue to the company's attention so it can be taken more seriously. But I also believe it can be overlooked because companies have so many more resources than the average person. Hacktivism is composed of "Web site defacements, virtual sit-ins, and email campaigns." Big companies have third-party security systems in place to stop such attacks, and most of the time the company itself doesn't even know it was attacked, or how the attack was stopped, because it happens so often.
Fuchs, C. (2014). Social media and communication power. In Social media: A critical introduction (pp. 69-94). London: SAGE Publications Ltd doi: 10.4135/9781446270066.n4
Lee, L. (2017). Black Twitter: A Response to Bias in Mainstream Media. Social Sciences, 6(1), 26. doi:10.3390/socsci6010026
Vegh, S. (2003). Classifying Forms of Online Activism The Case of Cyberprotests against the World Bank.
Week 10 Blog Post (due 10/28/2020)
1.       If the trolling victim isn't hurt or doesn't acknowledge a troll, can we still consider him/her a troll?
In the readings, we are exposed to two different definitions of the word troll. The first comes from Norse mythology, in which a troll has malicious intent and wishes harm or discomfort on its victim. The second comes from fishing, where trolling means baiting for a response from an audience. Grandpa Wiggly, who was seen as a troll by many, didn't fit either definition; he stated that he was playing a role. Many people felt they were being trolled, but the Reddit user had no malicious intent, and no harm was done to others. Letting the victim decide whether a person is a troll takes away from the act itself. If a troll is trying to cause chaos and disrupt online spaces, then he or she should be considered a troll even if no one is hurt. In the case of Grandpa Wiggly, I do, to some extent, believe he was a troll because the revelation of his persona caused a lot of chaos. People's trust was broken, and they felt deceived, which led people to rethink internet guidelines.
2.       Is trolling an issue tech companies can fix?
I don't think trolling can ever be completely fixed because people will always find a way to deceive others. Trolling has been going on for many years, and as the internet grows, there are more platforms for trolls. While trolls didn't self-identify as trolls in the beginning, they were still doing things classified as troll behavior. In the case of Leslie Jones, we see trolling that went too far. Jones received threats and was mocked for being a woman, and a woman of color at that. It has been shown that "a significant percentage of this laughter is directed at people of color, especially African Americans, women, and gay, lesbian, bisexual, transgender, and queer people." With that being said, I think tech companies should have stricter rules against harassment. Behavior that targets a person shouldn't be allowed. If an account is purposely being used to deceive and harm, it should be reviewed and shut down if deemed disruptive.
3.       At what point can trolling be considered bullying?
Trolling should be considered bullying when it targets a person and makes fun of them based on their looks or beliefs. Lulz, a kind of "unsympathetic, ambiguous laughter" done at someone else's expense, should never be okay. If a troll targets a specific person, sends them threats, makes fun of them, or attacks them, they should be considered a bully. The internet is a public space, and making someone uncomfortable using it takes away their right to a public space that is just as much theirs as it is the troll's. The excuse that trolls "don't mean, or don't have to mean, the abusive things they say" isn't acceptable, because while a troll can choose to what extent their words match their beliefs, we don't know how the person on the receiving end will take it. We don't know what that person is going through, so the comments could be taken personally. As stated in the article, "trolls exercise what can be described as pure privilege – they refuse to treat others as they insist on being treated."
4.       Are trolls just people who are using the internet for disembodiment?
In previous articles, we have read about how different people use the internet for embodiment or disembodiment. Usually, the people who use it for disembodiment are trying to escape the realities of their real-life situations. They create a different character online that gives them more freedom. Trolls are people who aren't using their own identity; they are making a character. I don't think trolls can be categorized as using the internet for disembodiment because they aren't necessarily doing it to escape their reality. Trolls hide their identity because it allows them to engage in disruptive behavior without the consequences affecting their lives outside of the internet.
Bergstrom, K. (2011). “Don’t feed the troll”: Shutting down debate about community expectations on Reddit.com. First Monday, 16(8).
Phillips, W. (2015), "Defining Terms: The Origins and Evolution of Subcultural Trolling”. This Is Why We Can't Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. pp.55-57.
Silman, A. (2016). A Timeline of Leslie Jones's Horrific Online Abuse.
Week 8 Blog Post (due 10/14/2020)
1.       Why is there a border war between organisms and machines?
There is a border war between organism and machine because it is becoming increasingly ambiguous where one begins and the other ends. As stated by Haraway, "we are all chimeras, theorized and fabricated hybrids of machine and organism." Western traditions have been based on dualisms that allow for the domination of minority groups, but these dualisms blur when it comes to technology. It is unclear who is dominant because we don't know who the maker is and who was made. The machines have become part of us, part of our process; we are neither dominant over nor threatened by the machines: "we are they." These ambiguous lines make it difficult to stratify cyborgs, giving these miniature silicon chips more power.
2.       Are white women helping white supremacy by being a bit more liberal?
In "Gender, White Supremacy, and the Internet," we are introduced to Stormfront, a website that hosts many white supremacist discussions. The website is male-dominated, but many women contribute to the site and have women-only discussions. The article shows many excerpts from women who have written blog posts, and we see that they embrace certain aspects of liberal feminism. This allows more people to identify with white supremacy because it begins to feel like different views are supported. As stated by Daniels, it "allows for reinvigoration of this ideology by including participation by white women who espouse liberal feminist rhetoric about equality, by including new interpretations of 'interracial dating' and by offering limited support of 'prowhite homosexuals.'" Bringing in these new views can also be attributed to digital media's participatory culture, in contrast to the print era, when one person wrote everything.
3.       How is the portrayal of femininity online sexist?
The portrayal of femininity online uses hyper-femininity to depict cyborgs and make them seem friendlier. A common fear amongst many people is that cybercultural forms will become too intelligent; assigning them a female form "demote[s] these levels of perceived intelligence and rationality." That is why most voices used for GPS systems and other virtual assistants, such as Siri, Alexa, and Cortana, are female. O'Riordan states, "As feminized figures… computers can even be conceptualized as more emotional and less rational than their (hu)man users." We can also see sexism in the representation of women in cyberculture with a Victorian aesthetic. Virtual bodies are created through the male gaze in "attempts not to realize the fantastic but to idealize the real." While we tend to think of animation as insignificant, we need to realize that these virtual portrayals have power and affect real women.
4.       If machines and organisms are becoming one, will white males continue to be a dominating force?
In many aspects of life, we have seen white male dominance over minority groups. Women, people of color, poor people, and others have all been victims of white male dominance. As society continues to rely on digital technology and we become chimeras, we can see white supremacy evolving as well. The internet was never created for minority groups; it was developed "by white men as a communication technology for the military." Digital technology was initially used predominantly by white males, with only a small space carved out by minorities for resistance. Now technology is operated by a wide array of people, but we still see white male dominance in algorithms and other digital code. Technology is still mostly run by white males, making them the dominating force.
Daniels, J. (2009). Gender, White Supremacy, and the Internet. Cyber Racism: White Supremacy Online and the New Attack on Civil Rights (pp. 61-86). Rowman & Littlefield.
Haraway, D. (1991). A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century. Simians, Cyborgs, and Women: The Reinvention of Nature (pp. 149-181). Routledge.
O’Riordan, K. (2006). Gender, Technology, and Visual Cyberculture. Critical Cyberculture Studies (pp. 243-252). New York University Press
Week 7 Blog Post (due 10/7/2020)
1.     Can the tailoring of media be seen as a form of censorship?
Many companies have now started tailoring what we see in media outlets, so it is different for everyone depending on the data collected about each user. This is a problem because it hinders real progress by giving a false sense of progress. If Netflix is tailored to an African American person, they will see more African American actors on movie posters, making it seem like progress has been made. This also doesn't allow people to see that real change is needed not only in television but also in the institutions that perpetuate racist views using technology. As stated by Benjamin, "cosmetic diversity too easily stands in for substantive change."
2.     Minorities are fighting to end colorblindness, but does visibility ensure progress?
Colorblindness is when you choose not to see color because race doesn't shape your everyday experiences. Being colorblind neglects the fact that many minority groups are targeted because of their race, and it insinuates that race has nothing to do with the hardships minority groups face that white people do not have to go through. With that being said, visibility can also cause problems. Visibility doesn't ensure that the struggles minorities face will be truly seen; "for minority groups… visibility is a double-edged sword." Benjamin also notes that the hypervisibility of minorities in media can differ sharply from the "disenfranchisement of Black communities through de facto segregation and the punishment apparatus."
3.     How does the marginalization of minorities make tech companies money?
Technology reinforces racism by having loose rules when it comes to violent, hate-filled content. An example of this is Twitter, which takes a very hands-off approach to such content. That content generates retweets and comments, resulting in more clicks. As we see in Senft and Noble's article, Black Twitter uses the platform as a way to engage "with others who have similar concerns, experiences, tastes, and cultural practices." White supremacists love Twitter because it has hardly any rules, so they can spread hate quickly. In either case, Twitter profits because "more traffic equals more profit, even if that traffic involves violently crashing into other users." This shows how racism makes tech companies money, and they don't correct the issue because they value money more than ending discrimination.
4.     Is the deterministic worldview of technology not allowing us to move forward? (pg75)
The deterministic view of technology is "the mistaken view that society is affected by but does not affect technological development." If we continue to think that technology is something done to us, rather than something we control, can we fix the problem? We have to recognize that technology affects us because of the way it is developed. While "coded inequity makes discrimination easier, faster, and even harder to challenge," we have to understand that "social norms, ideologies, and practices are a constitutive part of technical design." If we cannot see the problem, then we won't be able to fix it.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Cambridge: Polity, 41-88. 
Senft, T. & Noble, S. U. (2014). Race and social media. The Social Media Handbook, 107-125.
Week 6 Blog Post (due 9/30/2020)
Should all media be regulated as if a kid were accessing it?
As I was doing the readings and listening to the TED talk, it was clear that media is widely unregulated in all respects. Companies don't care how media is being used as long as they can collect data and push out advertisements. There are so many forms of media that we tend to overlook all of them. In "White Supremacy in the Digital Era," we see how white supremacists use cloaked sites to spread their propaganda, often to kids researching civil rights movements. The TED talk highlights how the gaming world doesn't regulate the use of headsets, so hate is also spread verbally. The yellow-faced cyborg is a prime example of how playing video games can also perpetuate racism. As stated in "Race in Cyberspace," we have to put "race more squarely on the table when it comes to the study of cyberspace." Listening to, reading, and playing into racism and discrimination is a cause for concern when that type of content reaches a developing mind. Media is being used more and more by young children, so it is crucial to regulate it so racism and discrimination aren't perpetuated further.
Who is responsible for the gaming industry's insensitivity?
The gaming industry has proved to be racially insensitive time and time again, but there are no consequences. In "The Revenge of the Yellow-faced Cyborg Terminator," Haraway states "that cyborgs are not born but constructed by the hands of others, and as a result are neither entirely innocent nor guilty of their actions." If cyborgs aren't real humans but rather a face on the internet with someone behind the screen, is it fair to blame the person controlling it? Or do we blame the government for allowing these companies to go unregulated? I think both should be held responsible, but mostly the government, which enables ignorant behavior to continue. It should not be acceptable to make parodies of other cultures; as gaming becomes more lifelike, it gets harder to distinguish the real from the fake. Both the government and the gaming industry are to blame: the gaming industry is looking for profit, and the government isn't putting regulations in place that would hinder that profit when rules are broken.
Has the media made it more challenging to have meaningful conversations about racism?
In recent events, we have seen an influx of conversations about racism. People are starting to become more aware of institutionalized racism that has been going on for years. A big part of this conversation is happening on the internet, but I feel it hasn't been easy to cultivate meaningful conversations when most people are just reposting. Quotes and pictures filled with paragraphs are posted on Instagram stories or people's feeds, but the next post will be about the person's morning coffee or workout, and the topic of racism becomes an afterthought. Slacktivism is becoming more prominent, and it has made people forget the weight and importance of these conversations. Racism is still very evident, and while social media is a great way to spread awareness, it has also desensitized us to what is going on in the world. Since such content is widely shared, it is now "normal" to see acts of racism being committed, which makes it easier for people to accept them.
Does disembodiment give white supremacists more power?  
Disembodiment has given white supremacists more power as they no longer have to face the consequences of being openly racist. When speaking to people in person, you are more aware of what you say and how you say it because certain things aren't socially acceptable. With the internet, you can hide behind a screen and be openly racist without anyone knowing your real identity. While disembodiment can serve many people positively, it can also serve as a tool for popularizing white supremacy.
Daniels, J. (2009). White Supremacy in the Digital Era. Cyber Racism: White Supremacy Online and the New Attack on Civil Rights (pp. 3-16). Rowman & Littlefield.
Kolko, B. E., Nakamura, L., & Rodman, G. B. (2000). Race in Cyberspace: An Introduction. Race in Cyberspace (pp. 1-13). Routledge.
Nakamura, L. (Director). (n.d.). TEDxUIllinois - Dr. Lisa Nakamura - 5 Types of Online Racism and Why You Should Care [Video file]. Retrieved from https://www.youtube.com/watch?v=DT-G0FlOo7g&ab_channel=TEDxTalks
Ow, J. A. (2000). The Revenge of the Yellowfaced Cyborg Terminator: The Rape of Digital Geishas and the Colonization of Cyber-Coolies in 3D Realms’ Shadow Warrior. Race in Cyberspace (pp. 51-68). Routledge.
Week 4 Blog Post (due 9/16/2020)
How can we combat a "racist" algorithm?
While reading "Algorithms of Oppression," it became increasingly clear to me that tech companies need to be regulated. In the article, we see examples of how algorithms were racist and misogynistic, yet there are no repercussions for such behavior. We cannot continue to let these companies self-regulate issues that they didn't even recognize to begin with. As stated by Noble, "We need all the voices to come to the fore and impact public policy on the most unregulated social experiment of our times: the internet." It is crucial to demand change because these algorithms make decisions about our lives and will only continue to get worse if they are not correctly written. Instead of allowing Google to fix an issue it didn't see in the first place, I think outside parties should be able to review these algorithms. Algorithms need to be written with many different views in mind because they affect people from many different backgrounds.
Is the digital divide an excuse to overlook minority groups on the internet?
I think the digital divide theory does serve as a means of dismissing black technophiles. While the presence of black people online wasn't widely recognized until a few years later, they made their presence known and accomplished a lot with technology. As we saw in "The Revolution Will Be Digitized," the work of African American people in technology has for many years been dismissed by media reports. As stated by Everett, media has made African American people's efforts look "negligible at best, and impossible at worst." If we continue to hide behind the digital divide theory, we will continue to ignore minority groups who have made their voices heard through media.
Do some people have the luxury of not playing in the digital era while others don't?
As technology is more widely used, we are seeing the more privileged pull back from using it. Minority groups don't have the luxury of accessing the information needed to understand how tech companies work. Most people don't know that their data is being tracked when they use apps and websites, and if they are aware, they don't know to what extent it is used to harm them. Social elites are better versed in the tech industry, so they are showing acts of defiance, such as limiting their kids' screen time. Minorities are spending more and more time on devices that perpetuate algorithmic oppression. This only further embeds the racism we have seen for years, as it is now part of "the architecture and language of technology."
Are "glitches" in systems really glitches or just acts of racism and sexism that have been caught?
In this week's readings, we see how algorithms are made to tag people who are seen as "high risk," who are usually members of minority groups. As stated by Noble, "glitches found in systems do not suggest that the organizing logics of the web could be broken but, rather, that these are occasional one-off moments." We see in "Race After Technology" how a woman is denied a loan based on her medical record, how many infants were put into a gang database, and how people are deemed threats to public safety based on their names. I personally don't think these are glitches but rather racism that is encoded into technology. While sometimes it might be a genuine glitch, I think, more often than not, the "glitch" is produced by code that deemed the person ill-equipped or dangerous.
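One way to see why such outcomes aren't random "glitches" is to notice that a scoring rule can encode bias without ever mentioning race. A toy sketch (all zip codes and numbers here are entirely hypothetical) of how a proxy variable does the encoding:

```python
# Illustrative only: a "neutral" risk score that never looks at race, yet
# still produces skewed outcomes, because zip code acts as a proxy for it.

# Hypothetical historical arrest counts per zip code — numbers shaped by
# decades of uneven policing, not by actual differences in offending:
arrests_by_zip = {"90001": 120, "90210": 5}

def risk_score(zip_code):
    """Seemingly neutral rule: more past arrests in your zip code,
    higher risk. Score is capped at 1.0."""
    return min(arrests_by_zip.get(zip_code, 0) / 100, 1.0)

print(risk_score("90001"))  # 1.0  — flagged "high risk"
print(risk_score("90210"))  # 0.05 — flagged "low risk"
```

Because the biased history is baked into the input data, every run of this code produces the same skewed result, which is why calling it a one-off "glitch" misses the point.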
Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
Everett, A. (2002). The Revolution Will Be Digitized: Afrocentricity and the Digital Public Sphere. Social Text, 20(2), 125-146.
Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Week 3 Blog Post (due 9/9/2020)
1. Are algorithms facilitating life or causing more trouble?
In "Automating Inequality," by Eubanks, we see that an algorithm made a life-altering decision when her partner was in dire need of healthcare coverage. While this decision wasn't a massive deal to them, we see that this a common occurrence, and others might not be as lucky as they were. Algorithms have taken over many aspects of decision making and now control "which neighborhoods get policed, which families attain needed resources, who is short-listed for employment, and who is investigated for fraud," but with these automated systems come mistakes. The mistakes have to be fixed by humans and, most of the time, requires a long and tedious process. Most of the time, the people who are targeted are minority groups who don't have the time or sufficient knowledge to fight these battles. While algorithms facilitate the companies/organizations' job, it significantly troubles lower-income people when a mistake is made.  
2. Is how you use the internet connected to your personal experiences? If so, can cyberfeminism ever be the same for everyone?
I don't think cyberfeminism could ever be the same for everyone because we all have different standings in life. An upper-class white woman won't face the same struggles as an upper-class woman of color, so their needs in cyberfeminism would be different. While feminism aims to be intersectional, I think that intersectionality becomes harder to achieve in a space where everyone has a different relationship with digital technologies. Although race and gender aren't taken into account in the "digital divide," I think they are still important when talking about any kind of feminism. They affect our perspectives on life, therefore changing what we put out on the internet and what communities we seek.
3. Where does the responsibility lie? Is it the responsibility of the people being surveilled, the governments, or the public to be more aware of the effects data has on the lower class?
The responsibility should be on everyone to help lower-class and marginalized people escape the harmful effects of data collection. Eubanks states, "America's poor and working-class people have long been subject to invasive surveillance, midnight raids, and punitive public policy that increase the stigma and hardship of poverty." If data collection is being used to further harm a group of people who are already struggling, how do we know that data won't be used to harm others? I think everyone should care and take responsibility because this is technology that can readily be used to everyone's disadvantage.
4. Is the internet being used to escape your identity or to solidify it?
While reading "Rethinking Cyberfeminism," by Jessie Daniels, we come across people who u se the internet for embodiment and those who use it as a way of disembodiment. People who use it for disembodiment claim that the "the absence of physical body… [has] a liberating effect on repressed social identity," in contrast, people who use it for embodiment use it to become more of themselves. I think the internet is being used in both ways to escape your identity and to solidify it. They are oppositions of each other, but from the readings, we can see that they are both being used to empower the individual.
Daniels, J. (2009). Rethinking Cyberfeminism(s): Race, Gender, and Embodiment. Women's Studies Quarterly, 37, 101-124. doi:10.1353/wsq.0.0158
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.
Week 2 Blog Post (due 9/2/2020)
1) Media is constantly changing its form; how will that affect our ability to diversify it?
While reading “News for All the People” by Torres and Gonzalez, I realized that media starts out somewhat diverse and becomes less varied as time goes on and it centralizes. If this cycle keeps repeating, diversifying media becomes increasingly difficult. Media has never been in favor of diversifying; it has always been for white people and run by white people. Media constantly changing its form can provide people of color more opportunities to have their voices heard through new outlets, but centralization keeps setting them back.
2) Is it possible for the media to be neutral in any situation?
No, I don’t think media can ever be truly neutral when reporting anything. Media is a form of expression, and whoever writes a report does so through their own lens, subconsciously adding their beliefs and perceptions to the writing. In “News for All the People,” we also see that media outlets were started to push political agendas, and that is still very much the case today. Our data is being sold to advertising companies and used to feed us content catered to us. Media is now more biased than ever, and as long as we continue to blindly agree to media outlets’ terms, that will not change.
3) How do you wake up society to the realities of social media?
As stated by Fuchs, “the history of these technologies is deeply embedded into the history of capitalism, colonialism, warfare, exploitation, and inequality.” Media isn’t just a mindless post on Instagram, Facebook, Twitter, etc.; these posts are ways for big companies to gather information. Different forms of media have helped make our lives easier, but we don’t realize they have also been exploiting us. An example of this is Facebook: everyone loves it because they are connected to hundreds of different people, but all of our data is being sold to other companies. People need to be more aware of what they agree to when accepting terms-of-service agreements. I think if everyone became a little more informed, people would realize that social media isn’t all rainbows and butterflies; it’s a power dynamic, and the less informed we are, the more control we lose.
4) Does slacktivism have a real impact on social justice movements?
I think slacktivism isn’t ideal, but it is helpful to social justice movements. It would be ideal for people to post on social media and also do the physical work needed for social impact, but that isn’t always the case. With slacktivism, you raise awareness of the movement, and more people look at your cause. While no direct action is being taken, more people are now aware of the social movement, which can cause change to happen.
Fuchs, C. (2017). Social Media: A Critical Introduction (2nd ed.). Sage Publications.
Gonzalez, J., & Torres, J. (2012). News for All the People: The Epic Story of Race and the American Media. Verso.
When Technology Failed Me...
Technology fails me all the time, more times than I want it to. Recently I have been battling on and off with my wireless printer. I know everything is being done online this semester, but I am the kind of person that needs physical copies of things, or else I won’t focus. Last night I decided that instead of preparing for classes the day before and printing my syllabuses, I would do it in the morning. For a moment, I thought to myself, “that’s a bad idea; something is bound to go wrong,” but I procrastinated anyway. Lo and behold, this morning my printer would not work, and I couldn’t print anything. I started my first class stressed and wondering what went wrong, so I couldn’t focus. Moral of the story: do not procrastinate, because technology doesn’t have your back.