W2
This week's reading discusses how states employ legal, technological, and social regulations to control and filter internet access, and it reminded me of dark memories from my time studying in China. When COVID-19 was raging around the world, the Chinese government took very strict measures to control the epidemic. It even locked the gates of communities where cases were found and set up multiple seals and roadblocks around them to prevent residents from leaving.
However, a tragedy happened during this period. A fire broke out in a residential community in Xinjiang, and the government's excessive controls and roadblocks hindered firefighters from reaching the scene, resulting in ten deaths. The public began questioning on social media whether their country was overly controlling and depriving people of their rights, and I became one of them.
What shocked me was that whenever I posted something about the fire or questioned the government, it was removed within ten seconds, and sometimes it was flagged as violating government regulations even before I finished posting. I was simply not allowed to post what I wanted, and there was little to no sharing or discussion of the truth about the fire online. However, the public discovered that they could cleverly game the rules to evade detection: for example, users deliberately titled their articles with celebrity gossip that had nothing to do with the fire, hoping to slip past the monitoring system's alerts. Even looking back a year later, I still feel that China's internet filtering is far too totalitarian.
Reference list
Zittrain, J., & Palfrey, J. (2008). Internet filtering: The politics and mechanisms of control. In R. Deibert, J. Palfrey, R. Rohozinski, & J. Zittrain (Eds.), Access denied: The practice and policy of global internet filtering (pp. 103–122). MIT Press.
W3
"Big Other: Surveillance Capitalism and the Prospects of an Information Civilization" (Zuboff, 2015) discusses how digital media companies like Google exploit user data for financial profit, often of which the users themselves often know little or nothing.
When we use Google's search engine, Gmail, or YouTube, the company constantly collects our data: search history, email content, viewing preferences, and so on. This data is used to build our digital profiles, which are then utilized for personalized advertising and other commercial purposes. However, most users are likely unaware of how their data is being collected and used. Digital media companies often employ lengthy and complex terms of use and privacy policies, like Facebook's cookie policy, to obscure their data collection and usage practices, leaving most users with no idea of the economic value created from their data. We essentially become 'free laborers' in the digital space.
Additionally, terms of use often require users to accept the privacy policy before they can access the services, creating a 'take it or leave it' situation. Due to social and professional needs, users are effectively forced to accept these terms. We thus become a vulnerable group in the digital universe, at the mercy of digital companies' exploitation.
I doubt whether such 'consent' really qualifies as informed consent.
Reference list
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. Retrieved from https://doi.org/10.1057/jit.2015.5
W4
The article "The Economics of Big Data and Artificial Intelligence" (Mihet & Philippon, 2019) views technologies like big data and AI as intangible assets, and believes their significant impact on efficiency and productivity could exacerbate economic inequality.
The article analyzes how big data affects economic inequality from many perspectives, and I particularly want to highlight the aspect most relevant to ordinary workers: the rise of big data and related technologies has increased the demand for workers with specific technical skills. This has widened the wage gap between the tech industry and other sectors, intensifying inequality within the labor market. In other words, technologies originally intended to benefit the public have also brought externalities that harm the public interest. For instance, in my homeland, Taiwan, known for its high-tech industries, salaries in engineering fields can be as much as five times those in non-engineering fields. Such an economic structure affects family incomes, and in turn children's educational opportunities and future development, widening the gap between rich and poor families and creating an intergenerational cycle of wealth. The influence of the big data era is thus a chain of interrelated reactions: each change triggers another, seemingly without end, making it difficult to foresee its ultimate scope and consequences.
Reference list
Mihet, R., & Philippon, T. (2019). The economics of big data and artificial intelligence. Disruptive Innovation in Business and Finance in the Digital World (International Finance Review), 20, 29–43.
W5
This week's reading highlighted a key issue: the difficulty of achieving both interpretability and completeness in explanations of artificial intelligence. The most accurate explanations are often not easily understandable, while the most interpretable descriptions usually lack reliable predictive ability.
Take autonomous vehicles, which operate on complex 'black box' systems in which intricate algorithms make real-time driving decisions based on sensor data. When these sophisticated AI systems make errors, even the engineers who developed them find it hard to explain why. According to a 2023 report by the California Department of Motor Vehicles, there have already been 612 accidents involving autonomous vehicles in California alone, including several fatalities (Dordulian Law Group, 2023). This opacity and difficulty of interpretation not only increase the risk of accidents but also complicate the analysis and attribution of responsibility after accidents occur.
I used to believe that the companies developing these technologies (not limited to autonomous driving) completely understood their inner workings. Now that I realize even technical experts cannot fully grasp AI, I have begun to worry: will AI one day transcend human control and develop a mind of its own, as happens in the movies?
Reference list
Dordulian Law Group. (2023, June 19). Self-driving car accident statistics: 2023. Retrieved from https://www.dlawgroup.com/self-driving-car-accident-statistics-2023/
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In Proceedings of the IEEE DSAA. Retrieved from https://doi.org/10.1109/DSAA.2018.00018
W6
The reading "State Surveillance and Social Democracy: Lessons after the Investigatory Powers Act 2016" (Solove, 2007) discusses the plausibility of a common argument - if you have nothing to hide, you won't care about government surveillance infringing on your privacy rights.
After reading this paper, I began to think about an issue at the intersection of privacy rights and public safety: should the data privacy of individuals suspected of illegal activities receive the same protection as that of ordinary citizens?
To be honest, I favor government surveillance of individuals confirmed to have engaged in illegal activities. Since their illegal behavior has been established, it seems reasonable to permit the government to monitor their digital activity, as doing so could enhance the safety of millions or even billions of people. The question becomes more complicated for those who are merely 'suspected' of potential criminal behavior. These people may never have committed a crime and may never commit one, and depriving them of their human dignity, including the right to privacy, based solely on the government's suspicions does not seem entirely reasonable. On the other hand, governments have an obligation to protect the public from criminal acts, and monitoring those who 'might' engage in illegal activities could enhance societal safety. The debate between privacy rights and public safety is clearly not black and white.
Reference list
Solove, D. J. (2007). I've got nothing to hide and other misunderstandings of privacy. San Diego Law Review, 44, 745-772.
W8
The Investigatory Powers Act 2016, passed in the UK, grants the government significant powers of mass data surveillance, including collecting internet usage records and monitoring personal communications. Murphy (2019) criticizes the act for giving the government excessive power, threatening civil liberties such as legal professional privilege, the protection of journalists' sources, and the activities of trade unionists. This criticism raised a question for me: why could such a privacy-intrusive law pass in an era that values civil liberties?
The debate between security and freedom has persisted for centuries, and recent increases in public security threats, such as terrorism, might have influenced the passing of such laws. Personally, if it were guaranteed that the monitored data would not be misused, I would not mind government surveillance, especially given my sensitive nationality (Taiwanese). My opinion resembles the attitude of one of the interviewees in last week's reading, who said: "Do I care if the FBI monitors my phone calls? I have nothing to hide. Neither does 99.99 percent of the population. If the wiretapping stops one of these Sept. 11 incidents, thousands of lives are saved" (Joe Schneider, 2006, as cited in Solove, 2007, p. 749). I know there must be political spies hiding in Taiwan. If surveillance would make Taiwan safer, I am very willing to accept it, because I want Taiwan TO SURVIVE!! Just like that interviewee.
Reference list
Murphy, C. C. (2019). State surveillance and social democracy: Lessons after the Investigatory Powers Act 2016. Retrieved from https://ssrn.com/abstract=3494880.
Solove, D. J. (2007). I've got nothing to hide and other misunderstandings of privacy. San Diego Law Review, 44, 745-772.
W9
"The Declaration of the Independence of Cyberspace" (Barlow, 1996) is hailed as a landmark text and deeply explores the concepts of internet freedom and government non-interference. However, Grossman (2018) notes that governments were suspicious of this declaration from the start, hinting at the necessity of substantial intervention by powerful entities, such as governments, to address common problems in digital space governance.
There is a common belief that oversight in digital spaces can enhance internet security. This understanding of the importance of government involvement led me to wonder: what would happen if the internet were absolutely free from government influence, as Barlow (1996) envisioned?
I suspect almost everyone knows the positives of this scenario, so I would mainly like to discuss the negatives. Apart from direct issues like threats to personal privacy and security and the spread of misinformation and inappropriate content, I am going to discuss three additional points:
Legal and Ethical Complexity: Without a unified legal framework established by governments, addressing online crimes, copyright disputes, and other legal issues becomes more complex.
Market Domination Risks: Large digital media companies might gain greater control in an environment lacking government oversight, leading to market monopolization and exacerbating inequalities.
Deepening the Power Divide in Digital Space: The lack of government regulation in the digital space may expose the public to the policies of dominant companies, thereby deepening the power gap between companies and users.
Therefore, I think Barlow's (1996) vision of an untethered Internet—where everyone is free to express themselves without fear of repression or conformity—seems inappropriate for today's digital environment.
Reference list
Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. Retrieved from https://www.eff.org/cyberspace-independence
Grossman, W. M. (2018). Digital rights management. Pelican Crossing. Retrieved from https://www.pelicancrossing.net/netwars/2018/11/digital_rights_management.html
W10
"Misinformation and Its Correction: Continued Influence and Successful Debiasing" by Lewandowsky et al. (2012) is the paper which discusses why retraction cannot eliminate the belief in misinformation and the cognitive factors that make disinformation hard to correct, including repeated exposure to information, the construction of mental models, and the influence of social consensus.
Inspired by this article, I wondered why the academic community is so interested in studying online misinformation, its impacts, and its solutions. As a digital media student, I fully understand the immense power of the internet. However, in my opinion, even before the internet, misinformation and rumors spread through traditional means like word of mouth, print media, and broadcasting.
The paper itself acknowledges that disinformation and rumors existed and spread through traditional communication channels long before the advent of the internet. The internet has undoubtedly accelerated the speed and widened the range of dissemination, but it is not the only medium for spreading misinformation. Rumors and misinformation have been part of human interaction for centuries, driven by social dynamics, political motives, and human psychology. If the internet disappeared tomorrow, the spread of misinformation would not stop.
Therefore, as a student of communication studies, I am eager to better understand and address the basic human tendencies and psychological mechanisms behind misinformation, rather than just how to slow its spread online. This reading opened a door for me, stimulating a strong interest in this field.
Reference list
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. Retrieved from https://doi.org/10.1177/1529100612451018
W11
This reflective blog was inspired by Daniels' (2018) article, which delves into how the 'Alt-Right' has commandeered social media as an amplifier for its ideology and examines how algorithms have created echo chambers for racist ideologies.
The reading's account of Matthew Prince, Cloudflare's CEO, who hesitated to block the racist website The Daily Stormer, got me thinking. Prince's doubt about using personal or company values as the measure of access to online content is striking. Before reading this, I firmly believed that digital enterprises were responsible for ensuring only harmonious, equal, and respectful discourse on the internet, and that they were inclined to do so, since creating a non-discriminatory and friendlier online environment helps build a positive company image and reduces controversies and risks. However, I had completely ignored the point made by John Perry Barlow (1996) in the Declaration of the Independence of Cyberspace: the original intention of the internet was to provide a "place" for the exchange of ideas, free from capital and government monitoring, and without a strict dichotomy of right and wrong opinions.
This article prompted me to reflect: should personal and corporate ethics dominate the censorship of online discourse? Should individuals and businesses have the authority to decide which content is appropriate for display? In a contemporary age where free speech and racial equality are both highly valued, I find myself torn. On the one hand, freedom of speech encompasses all types of discourse, even unpopular or controversial ones, as they are an essential part of a pluralistic society and the public sphere. On the other hand, enterprises should practice their social responsibility to prevent the spread of hate speech and discriminatory information, safeguarding all users from harm.
This complex issue might never have a definitive answer, though as an Asian who often faces discrimination online, I believe there is a need to limit harmful speech. It is hard to summarize my reasons in a few words; only those who have experienced it can truly understand the sadness and quiet erosion of self-esteem that such remarks cause.
Reference list
Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. Retrieved from https://www.eff.org/cyberspace-independence
Daniels, J. (2018). The algorithmic rise of the "Alt-Right." Contexts, 17(1), 60–65. Retrieved from https://doi.org/10.1177/1536504218766547