thedigitalage · 3 years
Text
Facebook knows more about you than you think
How Facebook exploits their users’ personal data for advertising purposes and the issue of privacy
[Image: copyright The Guardian]
Since the rise of digital technology, advertising across the marketing, digital and social media sectors has transformed like never before (Lamberton & Stephen, 2016). Ever stumbled across adverts that are creepily specific and relevant to you? This is known as online behavioural advertising (OBA), whereby the advertising platform targets customised ads to individual users based on their digital preferences and interests, by tracking and monitoring their online actions (Aguirre et al., 2015). The debate over personal data tracking and privacy has grown over the years; tech companies have come under increasing public pressure regarding the ethics of monitoring, tracking and sharing users’ sensitive data with third-party companies in exchange for personalised advertisements. This is referred to as dataveillance: the monitoring and tracking of people using technologies that generate data (Lupton, 2016).
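To make the mechanics concrete, here is a minimal, hypothetical sketch of behavioural targeting: an interest profile is built from tracked browsing events, and candidate ads are ranked by how well they match it. The pages, topics and ad names are invented for illustration; real OBA systems are vastly more complex.

```python
from collections import Counter

def build_profile(browsing_events):
    """Aggregate tracked page visits into an interest profile (dataveillance in miniature)."""
    return Counter(topic for _page, topic in browsing_events)

def rank_ads(profile, ads):
    """Order candidate ads by how strongly their topics overlap the user's profile."""
    return sorted(ads, key=lambda ad: sum(profile[t] for t in ad["topics"]), reverse=True)

# Hypothetical tracked behaviour: two running-related visits, one travel visit.
events = [("page1", "running"), ("page2", "running"), ("page3", "travel")]
ads = [
    {"name": "trainers", "topics": ["running"]},
    {"name": "insurance", "topics": ["finance"]},
    {"name": "flights", "topics": ["travel", "running"]},
]

profile = build_profile(events)
print([ad["name"] for ad in rank_ads(profile, ads)])  # → ['flights', 'trainers', 'insurance']
```

The point of the sketch is the asymmetry it makes visible: the ranking is driven entirely by surveillance-derived data the user never explicitly handed over.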
[Video: copyright The New York Times on YouTube]
While OBA has been commended as the future of online advertising for optimising marketing profits through more relevant and systematic ads (Chen & Stallaert, 2014), the privacy concerns the practice provokes may be detrimental to social media companies’ success. In a study by Ham and Nelson (2016), participants admitted they were unaware of how targeted advertisements on Facebook function, including how they are delivered, which heightened suspicions about the platform’s data privacy. In April 2018, Facebook came under significant public scrutiny for breaching data protection laws by failing to keep users’ personal information secure (Isaak & Hanna, 2018). The company had inappropriately shared approximately 87 million users’ data with Cambridge Analytica, a political consultancy firm, without gaining their consent (Ingram, 2018).
In a follow-up study on the scandal, Milanesi (2018) found that 17% of respondents had deleted the app from their devices, with 9% permanently deactivating their accounts; a further 28% said they had never trusted Facebook in the first place. Various studies reached similar conclusions, the most prominent findings being people’s growing suspicion of Facebook’s handling of data tracking and feelings of disturbance associated with the invasion of privacy (Estrada-Jiménez et al., 2017; Rader et al., 2018). Following the scandal, restrictions around data protection tightened significantly: the European General Data Protection Regulation (GDPR) introduced tougher rules on the processing and exploitation of individual users’ personal data (Cabañas et al., 2018), owing to the obvious privacy risks that may derive from malicious use of such information (Rama et al., 2020).
[Image: copyright Kaspersky]
Nevertheless, much research has found that, for many individuals, the benefits of data sharing seem to outweigh its risks. In one survey, 65% of online buyers were dissatisfied when content deviated from their online and offline preferences and interests, and therefore favoured OBA (IPG Media Lab, 2017). Personalised advertising is perceived here as beneficial for both receiver and marketer: the user automatically receives relevant content, while the advertiser optimises revenue through a desirable pool of targeted people (Johnson, 2013). The OBA–privacy relationship is complex, involving many factors that influence users’ acceptance of their personal data being used; what is clear is that the phenomenon cuts both ways, positive and negative.
References
Aguirre, E., Mahr, D., Grewal, D., De Ruyter, K., & Wetzels, M. (2015). Unraveling the Personalization Paradox: The Effect of Information Collection and Trust-building Strategies on Online Advertisement Effectiveness. Journal of Retailing, 91(1), 34-49.
Cabañas, J. G., Cuevas, Á., & Cuevas, R. (2018). Facebook use of sensitive data for advertising in Europe. arXiv preprint arXiv:1802.05030.
Chen, J., & Stallaert, J. (2014). An Economic Analysis of Online Advertising Using Behavioural Targeting. MIS Quarterly, 38(2), 429-449.
Ham, C. D., & Nelson, M. R. (2016). The Role of Persuasion Knowledge, Assessment of Benefit and Harm, and Third-person Perception in Coping with Online Behavioral Advertising. Computers in Human Behavior, 62, 689-702.
Ingram, D. (2018, March 20). Factbox: Who is Cambridge Analytica and what did it do?. Reuters. https://www.reuters.com/article/us-facebook-cambridge-analytica-factbox/factbox-who-is-cambridge-analytica-and-what-did-it-do-idUSKBN1GW07F
IPG Media Lab. (2017, February). Turbocharging Your Skippable PreRoll Campaign. Magna Global. https://www.magnaglobal.com/wp-content/uploads/2017/02/Magna.IPGlab_Turbocharging-Your-Skippable-Pre-Roll-Campaign_external.pdf
Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56-59.
Johnson, J. P. (2013). Targeted Advertising and Advertising Avoidance. The RAND Journal of Economics, 44(1), 128-144.
Lamberton, C., & Stephen, A. T. (2016). A Thematic Exploration of Digital, Social Media, and Mobile Marketing: Research Evolution from 2000 to 2015 and an Agenda for Future Inquiry. Journal of Marketing, 80(6), 146-172.
Lupton, D. (2016). The Diverse Domains of Quantified Selves: Self-tracking Modes and Dataveillance. Economy and Society, 45(1), 101-122.
Milanesi, C. (2018, April 11). US Consumers Want More Transparency from Facebook. Techpinions. https://techpinions.com/us-consumers-want-more-transparency-from-facebook/52653
Rader, E., Cotter, K., & Cho, J. (2018, April). Explanations as Mechanisms for Supporting Algorithmic Transparency. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-13).
Freedom of speech vs. regulation
The impact of censoring and regulating free speech on social media driven by politics 
[Video: copyright DW News on YouTube]
The new media environment is dynamic and constantly evolving, sometimes in unprecedented ways that greatly impact democratic governments and politics (Aruguete, 2017). Social media acts as a safe haven, open to anyone with an opinion, where people can freely express their views and beliefs without the restraint of strict laws and regulations (Joyce, 2015). Maintaining a fair and healthy balance between freedom of expression and censorship remains challenging, so the question is: where is the line drawn between innocent freedom of expression and deliberate exploitation of free online services?
Recently, the Indian government under Prime Minister Narendra Modi faced tremendous backlash over its inadequate handling of the nation’s overwhelming second wave of the coronavirus, with many citizens at home and abroad voicing their criticism on Twitter and other online platforms. India is a democracy in which freedom of speech is formally protected, yet critical speech is often condemned and frequently met with removal, regulation and censorship. The government attempted to salvage the narrative by shutting down avenues of free speech online, notably on Twitter, as an intimidation tactic to control the growing criticism of its handling of the crisis (Erbschloe, 2017). One example of such regulation is the blocking of content deemed insulting or threatening to the government’s values: the video above highlights the Indian government requesting that Twitter block over 50 tweets criticising its handling of the second wave, on the grounds that they constituted ‘social media misuse’ (Singh, 2021). When did freely voicing opinions about injustices become an act of cybercrime? Such scare tactics, used by authorities to silence the majority, undermine social media’s original purpose and pose a serious threat to freedom of speech and expression.
[Video: copyright CBS This Morning on YouTube]
[Video: copyright America Uncovered on YouTube]
Similarly, Russia’s approach to social media has grown more sophisticated since the 2011 anti-government protests, whose scale and reliance on social media likely drove the Russian government to drastically increase its efforts to monitor, regulate and influence the internet and social media (Helmus et al., 2018). More recently, widespread anti-government protests were sparked by opposition leader Alexei Navalny’s wrongful imprisonment following his YouTube investigation exposing President Vladimir Putin’s alleged corruption (https://time.com/5934092/navalny-putin-palace-investigation/). This prompted the government to control online information by removing content and completely blocking mobile Internet access (Roache, 2021). Additionally, a new law was passed enabling Russian authorities to restrict or fully block websites that discriminate against Russian state media. Such examples raise the question of how the internet can reclaim its role as a domain for free expression against injustice without facing interference from third parties such as governments.
[Image: copyright Mumbai Mirror]
References
Aruguete, N. (2017). The Agenda Setting Hypothesis in the New Media Environment. Comunicación y sociedad, (28), 35-58.
Erbschloe, M. (2017). Social Media Warfare: Equal Weapons For All. CRC Press.
Helmus, T. C., Bodine-Baron, E., Radin, A., Magnuson, M., Mendelsohn, J., Marcellino, W., ... & Winkelman, Z. (2018). Russian Social Media Influence: Understanding Russian Propaganda in Eastern Europe. Rand Corporation.
Joyce, D. (2015). Internet Freedom and Human Rights. European Journal of International Law, 26(2).
The Anonymous movement: human rights and hacktivism
Can hacktivist groups be justified as an emerging form of political participation and action? 
[Image: copyright Britannica]
‘Hacktivism is a form of political activism in which computer hacking skills are heavily employed against powerful commercial institutions and governments, among other targets’ (Sorell, 2015, p. 1). Hackers have typically been stereotyped as malicious thrill-seekers or cyberterrorists intent purely on creating chaos (Bartlett, 2014). Increasingly, however, it is real-world political and social causes that motivate them to build tools providing freedom of speech in the digital sphere for those living in oppressive nations (Still, 2005).
[Images: copyright Twitter]
Anonymous is a group of hacktivists who organise mass cyber-attacks against online enemies they deem offensive or simply dislike. According to one self-description, ‘Anonymous is not a person, nor is it a group, movement or cause: Anonymous is a collective of people…left to its own devices, quickly builds its own society out of rage and hate… They have no leader, no pretentious douchebag president or group thereof…This makes them impossible to control or organize’ (Bodó, 2014), challenging the claim that there is no such thing as organising without organisation.
[Image: copyright BBC]
What started as an online group making mischief for entertainment has evolved into a ‘chaotic power of vigilante justice’ (Bodó, 2014, p. 2). The group has rallied against laws it deems unjust and turned on businesses and individuals it considers corrupt, often through illegal methods (Coleman, 2012) such as leaking top-secret government files and vandalising websites where injustice has been done. For example, following the murder of George Floyd, Anonymous re-emerged to show solidarity with the #BlackLivesMatter movement by temporarily shutting down the Minneapolis police department website (Molloy & Tidy, 2020), condemning the US’s systemic racism. Despite the group’s popularity, those who value centralisation, stable ideologies and authority remain opposed to Anonymous’ methods of cyberactivism, repeatedly invoking the theme of cyberterrorism. Yet public cyberactivism is evidently covered by traditional human rights protections (Jorgensen, 2013; Park, 2013), because forms of cyberactivity coincide with the street protests, democratic election processes and channels of political communication that traditional human rights measures already protect (Sorell, 2015, p. 395). The International Covenant on Civil and Political Rights (ICCPR) recognises and protects freedom of opinion and speech, including through technology. Article 19, paragraph 2 states:
Everyone shall have the right to freedom of expression; this right shall include the freedom to seek, receive and impart information and ideas of all kinds…through any other media of his choice.
Nevertheless, the fact that these groups are driven by change and the fight for justice, rather than thrill-seeking or financial gain, suggests that although hacktivism differs from traditional forms of activism, it is still activism and should be treated in the same manner. Ultimately, both share one primary goal: striving for political or social change for the greater good.
References
Bartlett, J. (2014). The Dark Net. London: Heinemann.
Bodó, B. (2014). Hacktivism 1-2-3: How Privacy Enhancing Technologies Change the Face of Anonymous Hacktivism. Internet Policy Review, 3(4), 1-13.
Coleman, G. (2012). Our Weirdness is Free: The Logic of Anonymous—Online Army, Agents of Chaos, and Seeker of Justice. Triple Canopy, 15.
Jorgensen, R. F. (2013). Framing the Net and Human Rights. Edward Elgar Publishing.
Molloy, D. & Tidy, J. (2020). George Floyd: Anonymous hackers re-emerge amid US unrest. BBC News. https://www.bbc.co.uk/news/technology-52879000
Park, S. (2013). The United Nations Human Rights Council’s Resolution on Protection of Freedom of Expression on the Internet as a First Step in Protecting Human Rights Online. North Carolina Journal of International Law and Commercial Regulation, 38(4), 1129.
Sorell, T. (2015). Human Rights and Hacktivism: The Cases of WikiLeaks and Anonymous. Journal of Human Rights Practice, 7(3), 391-410.
Still, B. (2005). Hacking for a cause. First Monday, 10(9). https://doi.org/10.5210/fm.v10i9.1274 
Bursting your (filter) bubble
The dangers behind YouTube’s baseline recommendations of conspiratorial content reinforcing extremist ideologies
[Image: copyright Mashable]
Pariser (2011) coined the phrase ‘filter bubble’ to capture how people are unaware of the individualised filtering conducted on their behalf, and hence may not know what important information they are missing because it doesn’t conform to their ‘bubbles’. YouTube’s algorithmic promotion of conspiracy videos has sparked debate about recommendation engines magnifying sensational content. 70% of content viewed on the platform comes from recommended videos (Solsman, 2018), which YouTube’s algorithm encourages because they boost engagement and view-time. Because conspiracy theories are by nature entertaining and provocative, they tend to generate higher user engagement (Hussain et al., 2018). In August 2019, the FBI characterised fringe conspiracy theories as motivators for domestic extremism, crime and violence, citing increasing incidents influenced by such beliefs. While YouTube is a convenient source of entertainment and information, its recommendation algorithms can pull users into a loop in which ever more conspiracy material is consumed, reinforcing radicalism rather than rational, factual resources (Bryant, 2020; Zhao et al., 2019). The videos below elaborate on how YouTube’s recommendation algorithm is programmed in a way that may lead to the promotion of conspiracy theories.
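The feedback loop described above can be caricatured in a few lines of code. In this deterministic toy model (the categories and engagement figures are invented assumptions, not measurements of YouTube’s actual system), each category’s recommendation weight grows in proportion to its engagement rate, so a modest engagement edge for provocative content compounds until it dominates.

```python
# Hypothetical engagement edge: conspiracy-style content is clicked slightly more often.
weights = {"factual": 1.0, "conspiracy": 1.0}       # initial recommendation weights
engagement = {"factual": 0.05, "conspiracy": 0.15}  # assumed per-view engagement rates

# Each round, a category's weight grows in proportion to its engagement rate,
# mimicking exposure-proportional, engagement-maximising recommendation.
for _ in range(50):
    weights = {c: w * (1 + engagement[c]) for c, w in weights.items()}

share = weights["conspiracy"] / sum(weights.values())
print(f"conspiracy share of recommendations: {share:.2f}")  # → 0.99
```

Nothing here needs to be malicious: pure engagement maximisation is enough to produce the narrowing spiral the paragraph describes.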
[Video: copyright VICE News on YouTube]
[Video: copyright Business Insider on YouTube]
[Image: copyright BBC]
Several global mass shootings have been traced to small but thriving online communities such as 8chan, an anonymous, moderation-free message board that champions absolute free speech, where users reinforce each other’s xenophobic views and declare violence the sole remedy (Bryant, 2020). The El Paso shooting followed the pattern of the Christchurch mosque shootings and the Poway synagogue shooting: in each attack the suspect posted an announcement to 8chan permeated with white nationalist ideology (Bryant, 2020; Hayden, 2019). Reed et al. (2019) found that YouTube’s recommender systems prioritise extremist right-wing content, and that after a few clicks users are quickly submerged in an ideological bubble that reinforces their pre-existing interests and beliefs (O’Callaghan et al., 2015). This suggests that engaging with extremist material on YouTube increases the likelihood of encountering similar extreme content in the future, pointing to the existence of a dangerous, radicalised filter bubble on the platform. A 2017 survey found that only 9% of Americans considered it acceptable to hold alt-right or neo-Nazi beliefs, whereas 50% deemed such opinions unacceptable (Langer, 2017). Unsurprisingly, it is this outspoken right-wing minority that generates substantial content online, with a large share of it dedicated to YouTube, further supporting Reed et al.’s (2019) findings.
However, YouTube was quick to dismiss these claims, highlighting that users always have the choice of whether to engage with recommended content, and that the algorithms simply respond to each user’s online behaviour and interests (Faddoul et al., 2020). Although the role of user choice is a debate for another time, it remains important that we strive to burst these so-called filter bubbles. When people come to believe that everyone’s search results and recommended content are identical, regardless of their level of extremism, a potentially dangerous ideology is born.
[Image: copyright Axios]
References
Bryant, L. V. (2020). The YouTube Algorithm and the Alt-Right Filter Bubble. Open Information Science, 4(1), 85-90.
Faddoul, M., Chaslot, G., & Farid, H. (2020). A Longitudinal Analysis of YouTube's Promotion of Conspiracy Videos. arXiv preprint arXiv:2003.03318.
Hayden, M. E. (2019, August 4). White nationalists praise El Paso attack and mock the dead. Southern Poverty Law Center. https://www.splcenter.org/hatewatch/2019/08/04/white-nationalists-praise-el-paso-attack-and-mock-
Hussain, M. N., Tokdemir, S., Agarwal, N., & Al-Khateeb, S. (2018, August). Analyzing disinformation and crowd manipulation tactics on YouTube. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (pp. 1092-1095). IEEE.
Langer, G. (2017, August 21). Trump approval is low but steady; On Charlottesville lower still. ABC News/Washington Post. Retrieved from https://www.langerresearch.com/wp-content/uploads/1190a1TrumpandCharlottesville.pdf
O’Callaghan, D., Greene, D., Conway, M., Carthy, J., & Cunningham, P. (2015). Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems. Social Science Computer Review, 33(4), 459-478.
Pariser, E. (2011). The Filter Bubble: What the Internet is Hiding from You. Penguin Press.
Reed, A., Whittaker, J., Votta, F., & Looney, S. (2019). Radical Filter Bubbles: Social Media Personalization Algorithms And Extremist Content. London: Global Research Network on Terrorism and Technology.
Resnick, P., Garrett, R. K., Kriplean, T., Munson, S. A., & Stroud, N. J. (2013, February). Bursting your (filter) bubble: strategies for promoting diverse exposure. In Proceedings of the 2013 Conference on Computer supported cooperative work companion (pp. 95-100).
To vaccinate or not to vaccinate?
The grave COVID-19 pandemic among the even graver infodemic: how anti-vaccine memes have contributed to the harmful spread of COVID-19 misinformation 
[Video: copyright BBC on YouTube]
The novel coronavirus has not only significantly affected countless lives, but has also increased people’s need to consume news and information about the pandemic (Newman et al., 2020). According to Ofcom’s 2020 news consumption report, 45% of adults get news from social networking sites, with Facebook the third most popular source. This is problematic, as social media sites such as Facebook have become the epitome of sensationalism, rumour, misinformation and disinformation regarding the pandemic (Brindha et al., 2020). The World Health Organisation labels this an ‘infodemic’: ‘deliberate attempts to disseminate wrong information to undermine the public health response and advance alternative agendas of groups or individuals’ (WHO, 2020).
Such falsehoods can take the form of Internet memes. While such memes are intended to be humorous, repeated exposure to COVID-19 memes about the negative consequences of vaccination reinforces inaccurate information and contributes to concerns about vaccine safety and efficacy (Basch et al., 2021). The image below is a popular meme many will have stumbled across on their social media newsfeeds, depicting snapshots from rapper Drake’s Hotline Bling music video to spread the false claim that vaccines can alter genetic DNA.
[Image: copyright BBC]
The meme states that the recovery rate from the disease is 99.97%, implying that contracting the virus is preferable to being vaccinated, an assumption that scientists and professionals continuously reject (O’Connor & Murphy, 2020). Additionally, the AstraZeneca vaccine, which caused blood clots and strokes in a very small number of patients, was taken out of context and used to generalise that all vaccinations are life-threatening (Basch et al., 2021), despite health officials maintaining that the benefits still outweigh the risks. As a result, memes about vaccines’ radical side effects came to dominate the internet. This is an instance of post-truth, defined as ‘not the abandonment of facts, but a corruption of the process by which facts are credibly gathered and reliably used to shape…beliefs about reality’ (McIntyre, 2018, p. 11).
[Image: copyright BBC]
Shared posts of deformed people or creatures (pictured above) claim that the deformities were caused by the vaccine. While these are obviously jokes meant to amuse, the message that these groups strongly oppose vaccination is clear (Goodman & Carmichael, 2020). Even though actual vaccination side effects are extremely mild, such as a sore arm, headache or fever for a couple of days, the internet spares nobody.
[Video: copyright CNBC Television on YouTube]
Valenzuela et al. (2019) revealed that seeking information on social media increases the likelihood of further spreading and sharing misinformation, owing to higher levels of exposure. Research has found that even 5–10 minutes’ exposure to anti-vaccine content increases perceptions of vaccination risk and hesitancy (Betsch et al., 2010; Kata, 2012). Such behaviours have contributed to an alarming decline in vaccination rates, preventing herd immunity and therefore posing a serious threat to public health, as addressed in the video above. While professionals strive to learn more about the current situation, the unprecedented flux of unverified information only deepens anxieties around the infodemic and risks thousands of lives worldwide (Gupta et al., 2020).
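The exposure effect Valenzuela et al. (2019) describe can be illustrated with a simple branching model (a back-of-the-envelope sketch; the contact and reshare figures are assumptions, not empirical estimates): each exposed user shows a post to a fixed number of contacts, a fraction of whom reshare it, so a higher reshare probability for sensational anti-vaccine content yields exponentially greater reach.

```python
def reach(reshare_prob, contacts=5, generations=6):
    """Expected cumulative audience when each exposed user shows the post
    to `contacts` others, each resharing with probability `reshare_prob`."""
    exposed, current = 1.0, 1.0
    for _ in range(generations):
        current *= contacts * reshare_prob   # expected new sharers this generation
        exposed += current
    return exposed

# Hypothetical reshare rates: provocative meme vs. sober correction.
print(round(reach(0.4)), "vs", round(reach(0.15)))  # → 127 vs 3
```

The tipping point in this model sits at a reshare probability of 1/contacts: above it the cascade grows each generation, below it it fizzles, which is why small shifts in how shareable a falsehood is can matter so much.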
References
Basch, C. H., Meleo-Erwin, Z., Fera, J., Jaime, C., & Basch, C. E. (2021). A Global Pandemic in the Time of Viral Memes: COVID-19 Vaccine Misinformation and Disinformation on TikTok. Human Vaccines & Immunotherapeutics, 1-5.
Betsch, C., Renkewitz, F., Betsch, T., & Ulshöfer, C. (2010). The Influence Of Vaccine-Critical Websites on Perceiving Vaccination Risks. Journal of Health Psychology, 15(3), 446-455.
Brindha, M. D., Jayaseelan, R., & Kadeswara, S. (2020). Social Media Reigned by Information or Misinformation about COVID-19: A Phenomenological Study. SSRN Electronic Journal April.
Goodman, J. & Carmichael, F. (2020). Covid-19: What’s the harm of ‘funny’ anti-vaccine memes?. BBC News. https://www.bbc.com/news/55101238
Gupta, L., Gasparyan, A. Y., Misra, D. P., Agarwal, V., Zimba, O., & Yessirkepov, M. (2020). Information and Misinformation on COVID-19: A Cross-sectional Survey Study. Journal of Korean Medical Science, 35(27).
Kata, A. (2012). Anti-Vaccine Activists, Web 2.0, and the Postmodern Paradigm–An Overview of Tactics and Tropes Used Online by the Anti-Vaccination Movement. Vaccine, 30(25), 3778-3789.
McIntyre, L. (2018). Post-truth. MIT Press.
Newman, N., Fletcher, R., Schulz, A., Andı, S., & Nielsen, R. K. (2020). Reuters Institute Digital News Report 2020. Reuters Institute. Retrieved from: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2020-06/DNR_2020_FINAL.pdf
O’Connor, C., & Murphy, M. (2020). Going Viral: Doctors Must Combat Fake News in the Fight against Covid-19. Irish Medical Journal, 113(5), 85-85.
Valenzuela, S., Halpern, D., Katz, J. E., & Miranda, J. P. (2019). The Paradox of Participation Versus Misinformation: Social Media, Political Engagement, and the Spread of Misinformation. Digital Journalism, 7(6), 802-823.