Crowdsourcing in Times of Crisis: Strengths, Challenges, and Ethical Considerations
Week 12

The increasing reliance on crowdsourcing during disasters highlights its transformative potential in crisis response, as demonstrated by Riccardi’s (2016) analysis of its role in disaster risk reduction. Crowdsourcing leverages collective intelligence and real-time data from affected populations, enabling faster and more adaptable responses than traditional top-down approaches (Riccardi 2016). During events such as earthquakes, floods, or humanitarian crises, platforms like Ushahidi and Zooniverse have proven instrumental in mapping affected areas, coordinating rescue efforts, and distributing aid (Meier 2015). However, while crowdsourcing enhances situational awareness and community resilience, it also presents challenges related to data accuracy, privacy, and digital inequality that must be addressed to maximise its effectiveness.
One of the key strengths of crowdsourcing in crisis situations is its ability to provide real-time, hyperlocal information that official channels may lack. Riccardi (2016) emphasises how crowdsourced data from social media, SMS, and dedicated platforms can fill critical gaps in early warning systems and damage assessment. For instance, during the 2010 Haiti earthquake, volunteers worldwide analysed satellite imagery and social media reports to create crisis maps, significantly improving aid delivery (Zook et al. 2010). Similarly, during the COVID-19 pandemic, grassroots initiatives like mutual aid networks used crowdsourcing to identify vulnerable individuals and distribute resources (Biddinger et al. 2021). These examples demonstrate how decentralised information gathering can enhance the speed and precision of disaster response, particularly when traditional infrastructure is compromised.
Despite these advantages, crowdsourcing in crises is not without limitations. Data verification remains a persistent challenge, as unvetted user-generated content can spread misinformation or overwhelm responders with redundant reports (Starbird & Palen 2011). Ethical concerns also arise regarding privacy and the potential exploitation of vulnerable contributors, particularly in low-resource settings where participants may lack digital literacy or face surveillance risks (Crawford & Finn 2015). Furthermore, crowdsourcing initiatives often exclude marginalised groups with limited internet access, reinforcing existing inequalities in disaster response (Houston et al. 2015). Riccardi’s (2016) framework underscores the need for governance mechanisms that ensure data reliability while protecting contributor rights, a balance that remains difficult to achieve in high-pressure scenarios.
The effectiveness of crowdsourcing ultimately depends on how well it integrates with formal response systems. Studies show that the most successful initiatives involve partnerships between grassroots volunteers and established organisations, combining community insights with institutional resources (Liu 2014). For example, the Standby Task Force collaborates with the United Nations to filter and validate crowdsourced data before it reaches decision-makers (Meier 2015). Such hybrid models address key concerns raised by Riccardi (2016), including scalability and accountability, while preserving the agility of bottom-up participation. Future developments in artificial intelligence could further enhance these efforts by automating data processing, though human oversight remains crucial to contextualise information and mitigate bias (Poblet et al. 2018).
As climate change and global instability increase the frequency of disasters, crowdsourcing will likely play an even greater role in crisis management. Riccardi’s (2016) work highlights both its transformative potential and the need for ethical guidelines to govern its use. Strengthening digital infrastructure in vulnerable regions, improving verification protocols, and fostering equitable partnerships between communities and institutions can help realise crowdsourcing’s full benefits while minimising its risks.
Reference List
Biddinger, PD, Charney, RL, Goralnick, E & Koh, HK 2021, ‘Lessons from the COVID-19 pandemic for disaster preparedness’, New England Journal of Medicine, vol. 384, no. 8, pp. 581–583.
Crawford, K & Finn, M 2015, ‘The limits of crisis data: analytical and ethical challenges of using social and mobile data to understand disasters’, GeoJournal, vol. 80, no. 4, pp. 491–502.
Houston, JB, Hawthorne, J, Perreault, MF, Park, EH, Goldstein Hode, M, Halliwell, MR, Turner McGowen, SE, Davis, R, Vaid, S, McElderry, JA & Griffith, SA 2015, ‘Social media and disasters: a functional framework for social media use in disaster planning, response, and research’, Disasters, vol. 39, no. 1, pp. 1–22.
Liu, SB 2014, ‘Crisis crowdsourcing framework: designing strategic configurations of crowdsourcing for the emergency management domain’, Computer Supported Cooperative Work, vol. 23, no. 4–6, pp. 389–443.
Meier, P 2015, Digital humanitarians: how big data is changing the face of humanitarian response, CRC Press, Boca Raton.
Poblet, M, García-Cuesta, E & Casanovas, P 2018, ‘Crowdsourcing roles, methods and tools for data-intensive disaster management’, Information Systems Frontiers, vol. 20, no. 6, pp. 1363–1379.
Riccardi, M 2016, ‘The power of crowdsourcing in disaster response operations’, International Journal of Disaster Risk Reduction, vol. 15, pp. 1–8.
Starbird, K & Palen, L 2011, ‘“Voluntweeters”: self-organising by digital volunteers in times of crisis’, Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 1071–1080.
Zook, M, Graham, M, Shelton, T & Gorman, S 2010, ‘Volunteered geographic information and crowdsourcing disaster relief: a case study of the Haitian earthquake’, World Medical & Health Policy, vol. 2, no. 2, pp. 7–33.
Digital Citizenship and Conflict: Social Media Governance in Online Student Cultures
Week 11

The rise of digital platforms has transformed how young people engage in civic discourse, yet these spaces often become arenas for conflict, harassment, and toxic behaviour, as highlighted by Haslop, O’Rourke & Southern (2021) in their study of UK student online culture. Their research on the #NoSnowflakes phenomenon reveals how online harassment is often tolerated under the guise of "free speech," creating a gendered digital divide where women and marginalised groups face disproportionate hostility. This dynamic raises critical questions about digital citizenship, understood as how individuals navigate online spaces ethically, and about the role of platform governance in either mitigating or exacerbating conflict.
Social media platforms claim to foster open dialogue, yet their governance structures frequently fail to protect users from harassment, particularly along gendered lines (Haslop, O’Rourke & Southern 2021). The #NoSnowflakes movement, which derides perceived oversensitivity, exemplifies how online abuse is often framed as mere "banter," normalising hostility while silencing marginalised voices. This aligns with broader critiques of platform governance, where algorithmic amplification of inflammatory content prioritises engagement over safety (Gillespie 2018). The resulting digital divide forces marginalised users to self-censor or withdraw, undermining their participation in online civic spaces (Vogels 2021).
The tension between free expression and harm prevention remains unresolved in platform policy. While social media companies have introduced moderation tools, enforcement is inconsistent, allowing harassment to persist under vague community guidelines (Citron 2019). Haslop, O’Rourke & Southern (2021) demonstrate how UK student forums often dismiss misogynistic or racist abuse as "just jokes," reflecting a broader cultural tolerance for online hostility. This parallels findings in gaming communities, where women and minorities report high rates of abuse but face scepticism when reporting it (Gray 2020). Without robust governance, digital citizenship becomes exclusionary, privileging those who conform to dominant and often toxic norms.
Effective digital citizenship education could counter these trends by fostering critical awareness of online power dynamics. Research suggests that media literacy programs help users recognise manipulative behaviours and resist harassment (Mihailidis & Viotty 2017). However, individual education must be paired with structural reforms in platform governance. Some scholars advocate for "feminist platform design" that centres marginalised users’ safety (Suzor 2019), while others call for regulatory interventions to hold platforms accountable for systemic harassment (Citron 2019). The #NoSnowflakes case study underscores the urgency of these measures, as passive moderation policies perpetuate a digital culture where abuse is tacitly endorsed (Haslop, O’Rourke & Southern 2021).
Ultimately, social media governance must reconcile free expression with equitable participation. Haslop, O’Rourke & Southern’s (2021) findings reveal how laissez-faire moderation entrenches gendered and racialised hierarchies in digital spaces. A reimagined digital citizenship framework, one that prioritises safety and inclusion, could empower users to navigate conflicts constructively, while platform reforms must ensure that governance systems actively deter harassment rather than tacitly permitting it.
Reference List
Citron, DK 2019, Hate crimes in cyberspace, Harvard University Press, Cambridge.
Gillespie, T 2018, Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media, Yale University Press, New Haven.
Gray, KL 2020, Intersectional tech: Black users in digital gaming, LSU Press, Baton Rouge.
Haslop, C, O’Rourke, F & Southern, R 2021, '#NoSnowflakes: the toleration of harassment and an emergent gender-related digital divide, in a UK student online culture', Convergence, vol. 27, no. 5, pp. 1419-1435.
Mihailidis, P & Viotty, S 2017, 'Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in "post-fact" society', American Behavioral Scientist, vol. 61, no. 4, pp. 441-454.
Suzor, NP 2019, Lawless: the secret rules that govern our digital lives, Cambridge University Press, Cambridge.
Vogels, EA 2021, The state of online harassment, Pew Research Center, viewed 15 June 2023, https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/.
Gaming Communities, Social Gaming, and Live Streaming: A Cultural and Economic Analysis
Week 10

The landscape of digital gaming has evolved significantly with the rise of social gaming and live streaming platforms, creating new forms of community engagement and value creation within gaming cultures. Drawing on Keogh's (2021) analysis of the Melbourne indie game scene, we can examine how localized gaming communities intersect with global platforms like Twitch and Discord to form complex ecosystems of cultural production and social interaction. Keogh's (2021) concept of "value regimes" proves particularly insightful for understanding how different stakeholders (developers, streamers, and community members) negotiate cultural, social, and economic capital within these spaces.
Live streaming platforms have fundamentally transformed gaming from a private activity into a public, performative practice. Platforms like Twitch have created new economies of attention where streamers cultivate parasocial relationships with audiences (Taylor 2018). This phenomenon aligns with Keogh's (2021) observations about how indie developers in Melbourne navigate between artistic integrity and commercial viability, as streamers similarly balance authentic self-expression with the demands of platform algorithms and audience expectations. The rise of "social gaming" through platforms like Discord has further enabled the formation of persistent gaming communities that extend beyond individual game titles, creating spaces for identity formation and cultural exchange (Pearce 2020). These developments reflect Keogh's (2021) findings about how local gaming scenes maintain their distinct identities while participating in global digital networks.
The economic dimensions of gaming communities reveal tensions between grassroots participation and corporate co-optation. As Keogh (2021) demonstrates through Melbourne's indie scene, alternative value systems often emerge in opposition to mainstream gaming industry practices. Similarly, many streaming communities develop their own norms and economies that challenge traditional media production models (Johnson & Woodcock 2019). However, these independent spaces increasingly face incorporation into formalized monetization systems through platform features like Twitch subscriptions or Discord Nitro (Duffy 2020). This tension mirrors Keogh's (2021) analysis of how indie developers negotiate between artistic autonomy and the need for financial sustainability within global platform ecosystems.
Social gaming communities also serve as important sites for identity work and cultural production. Research has shown how marginalized gamers use these spaces to create alternative communities that challenge the often toxic environments of mainstream gaming (Gray 2020). These findings complement Keogh's (2021) examination of how Melbourne's indie scene fosters particular cultural values and social networks. The COVID-19 pandemic further accelerated the importance of gaming communities as social spaces, with platforms like Animal Crossing: New Horizons becoming virtual gathering places (Authors 2021). This development underscores Keogh's (2021) arguments about the localized nature of gaming cultures, even within globally accessible digital platforms.
Looking forward, gaming communities face challenges around platform governance, labor conditions, and cultural representation. As Keogh's (2021) work suggests, maintaining diverse and sustainable gaming ecosystems requires careful attention to how value is created and distributed among different participants. The growth of Web3 technologies and play-to-earn models introduces new complexities to these dynamics (Nadini et al. 2021), while ongoing debates about content moderation and community management highlight the political dimensions of these ostensibly leisure spaces (Mortensen 2020). These developments suggest that Keogh's (2021) framework of value regimes will remain crucial for understanding the evolving relationships between gaming communities, platform economies, and cultural production.
Reference List
Authors, A 2021, 'Pandemic play: Animal Crossing and the cultural moment', Games and Culture, vol. 16, no. 4, pp. 419-432.
Duffy, BE 2020, Social media influencers and the attention industry, Cambridge University Press, Cambridge.
Gray, KL 2020, Intersectional tech: Black users in digital gaming, LSU Press, Baton Rouge.
Johnson, MR & Woodcock, J 2019, 'The impacts of live streaming and Twitch.tv on the video game industry', Media, Culture & Society, vol. 41, no. 5, pp. 670-688.
Keogh, B 2021, 'The Melbourne indie game scenes: value regimes in localized game development', in P Ruffino (ed), Independent videogames: cultures, networks, techniques and politics, Routledge, London, pp. 215-230.
Mortensen, TE 2020, 'Modding as commentary: when players become developers', Convergence, vol. 26, no. 3, pp. 594-608.
Nadini, M, Alessandretti, L, Di Giacinto, F, Martino, M, Aiello, LM & Baronchelli, A 2021, 'Mapping the NFT revolution: market trends, trade networks and visual features', Scientific Reports, vol. 11, no. 1, pp. 1-11.
Pearce, C 2020, Communities of play: emergent cultures in multiplayer games and virtual worlds, MIT Press, Cambridge.
Taylor, TL 2018, Watch me play: Twitch and the rise of game live streaming, Princeton University Press, Princeton.
Augmented Reality (AR) Filters: Beauty Standards, Digital Identity, and Psychological Implications
Week 9

Augmented Reality (AR) filters have become ubiquitous across social media platforms like Snapchat, Instagram, and TikTok, fundamentally altering how users present themselves digitally. While these filters offer creative opportunities for self-expression, they simultaneously reinforce narrow beauty ideals and raise significant concerns about self-perception and digital authenticity (Barker 2020). Jessica Barker's (2020) analysis of Snapchat's "pretty filters" reveals how these tools promote a homogenized standard of attractiveness that aligns with Eurocentric beauty norms, smoothing skin, enlarging eyes, and thinning faces to create an unrealistic benchmark that particularly affects young users. This phenomenon is exacerbated by algorithmic biases that often lighten skin tones and modify facial features in ways that privilege certain racial characteristics over others (Dyer 2019), contributing to what researchers have identified as increased body dissatisfaction among frequent users (Fardouly et al. 2018).
The psychological impacts of AR filters are profound, with studies demonstrating their role in distorting self-perception and fostering negative social comparisons. The "filtered face" phenomenon has become so pervasive that some users seek cosmetic procedures to resemble their digitally enhanced selves, a trend termed "Snapchat dysmorphia" (Rajesh et al. 2021). This blurring of reality and digital enhancement creates a troubling disconnect between users' authentic appearances and their curated online personas (Barker 2020), with research indicating that heavy filter use correlates with reduced self-esteem when individuals compare themselves to their unedited reflections (McLean et al. 2022). The rise of "digital makeup" through apps like Facetune and TikTok filters has further complicated beauty routines and raised critical questions about authenticity in digital interactions (Cotter 2019).
These technologies also present significant ethical concerns, particularly regarding their impact on mental health and the lack of transparency about their effects. Studies consistently link frequent filter use to increased anxiety and body dissatisfaction, with young women being especially vulnerable to negative self-comparisons (Tiggemann & Slater 2014). The normalization of digital perfection through these tools creates unrealistic expectations that can be psychologically damaging (Rajesh et al. 2021). From a regulatory perspective, concerns have been raised about both the deceptive nature of unlabeled filtered content and the data privacy implications of facial recognition technologies embedded in these filters (Zuboff 2019). Some countries, like Norway, have responded by requiring influencers to disclose when images have been altered (BBC 2021), suggesting a growing recognition of these issues.
Addressing the challenges posed by AR filters requires a multi-faceted approach. Media literacy programs that educate users about how filters manipulate reality could help mitigate some of the negative psychological effects (McLean et al. 2022). There is also a pressing need for more diverse filter designs that reflect a broader range of beauty standards (Dyer 2019). Regulatory measures, such as policies mandating the disclosure of edited images similar to France's "Photoshop Law," could promote greater transparency (BBC 2021). As Barker's (2020) research demonstrates, while AR filters represent a significant technological innovation, their current implementation raises serious concerns about their impact on self-image and mental health that must be addressed through a combination of education, design reform, and policy intervention.
Reference List
Barker, J 2020, 'Making-up on mobile: the pretty filters and ugly implications of Snapchat', Fashion, Style & Popular Culture, vol. 7, no. 2, pp. 207-223.
BBC 2021, 'Norway influencers must label retouched photos under new law', BBC News, 2 July, viewed 15 June 2023, https://www.bbc.com/news/world-europe-57686274.
Cotter, K 2019, 'Playing the visibility game: how digital influencers and algorithms negotiate influence on Instagram', New Media & Society, vol. 21, no. 4, pp. 895-913.
Dyer, N 2019, 'The racial implications of Instagram's beauty filters', The Atlantic, 15 March, viewed 15 June 2023, https://www.theatlantic.com/technology/archive/2019/03/how-instagrams-face-filters-racialize/584911/.
Fardouly, J, Diedrichs, PC, Vartanian, LR & Halliwell, E 2018, 'Social comparisons on social media: the impact of Facebook on young women's body image concerns and mood', Body Image, vol. 26, pp. 38-45.
McLean, SA, Wertheim, EH, Masters, J & Paxton, SJ 2022, 'Photoshopping the selfie: self-photo editing and body dissatisfaction', Body Image, vol. 41, pp. 263-272.
Rajesh, A, Sharif, A & Klassen, AF 2021, 'Snapchat dysmorphia: how filters are changing the face of cosmetic surgery', Aesthetic Surgery Journal, vol. 41, no. 12, pp. 1543-1549.
Tiggemann, M & Slater, A 2014, 'NetGirls: the Internet, Facebook, and body image concern in adolescent girls', International Journal of Eating Disorders, vol. 46, no. 6, pp. 630-633.
Zuboff, S 2019, The age of surveillance capitalism, Profile Books, London.
Digital Citizenship and Health Education: Body Modification on Visual Social Media
Week 8

Social media platforms like TikTok, Instagram, and YouTube have become central spaces for discussions around body modification, including tattoos, piercings, and cosmetic surgery. However, as Duffy & Meisner (2022) demonstrate, algorithmic governance often renders certain content invisible, shaping public discourse in ways that privilege mainstream norms while marginalizing alternative perspectives. This dynamic has significant implications for digital citizenship (how users engage with, critique, and navigate platform policies) and for health education, particularly concerning the risks and realities of body modification.
A key issue is algorithmic suppression, where content related to body modification is shadowbanned or removed under vague platform guidelines (Duffy & Meisner 2022). For example, while cosmetic procedures like Botox or lip fillers are often normalized, more extreme or DIY modifications may be flagged as "harmful" and restricted. This inconsistent enforcement reinforces societal biases, framing some body modifications as acceptable while others are stigmatized. Digital citizenship in this context requires users to critically assess how platform governance shapes health narratives (Duffy & Meisner 2022).
TikTok's moderation of cosmetic surgery content exemplifies these challenges. The platform prohibits content that "encourages risky behavior," but enforcement is uneven: videos from licensed surgeons discussing procedures may thrive, while posts about DIY modifications are often removed (TikTok 2023). This creates a gap where medical misinformation spreads unchecked, while legitimate educational content is suppressed. Creators have responded with resistance strategies, such as using coded language or migrating to less restrictive platforms (Duffy & Meisner 2022).
For health education, this raises concerns about algorithmic literacy: users must understand how platform rules influence the information they see. A teenager researching cosmetic procedures might encounter glamorized before-and-after videos but miss critical discussions about risks. To address this, platforms should collaborate with medical professionals to distinguish between harmful content and legitimate health discussions. Additionally, fostering critical digital citizenship means encouraging users to question why certain body modifications are normalized while others are censored (Duffy & Meisner 2022).
In conclusion, Duffy & Meisner's (2022) framework reveals how algorithmic visibility complicates health education around body modification. TikTok's inconsistent moderation reflects broader platform governance issues that impact public understanding of health risks. Moving forward, greater transparency in content moderation and improved algorithmic literacy are essential to ensuring social media supports informed decision-making about body modification.
References
Abidin, C 2021, 'From "Internet famous" to "influencer": the commercialization of viral bodies on social media', Social Media + Society.
Duffy, BE & Meisner, C 2022, 'Platform governance at the margins: social media creators' experiences with algorithmic (in)visibility', Media, Culture & Society.
TikTok 2023, Community guidelines, https://www.tiktok.com/community-guidelines.
Digital Citizenship in Action: Social Media Influencers and the Slow Fashion Movement
Week 7
The slow fashion movement, which advocates for ethical production, sustainable consumption, and environmental responsibility in the fashion industry, has found a powerful ally in social media influencers who embody principles of digital citizenship. These influencers leverage platforms like Instagram, TikTok, and YouTube to educate audiences, promote ethical brands, and challenge fast fashion practices, demonstrating how digital spaces can foster responsible engagement and activism (Ribble 2015). By combining digital literacy with advocacy, they encourage followers to make informed purchasing decisions while raising awareness about fashion’s environmental and social impacts.
Central to this movement are influencers who use their platforms to shift consumer behavior. Creators such as Venetia La Manna (@venetialamanna) on TikTok and Aja Barber (@ajabarber) on Instagram employ infographics, thrift hauls, and documentary-style content to expose the harms of fast fashion while spotlighting sustainable alternatives (Niinimäki et al. 2020). Their approach aligns with digital citizenship by fostering critical thinking—helping audiences distinguish between genuine sustainability and corporate greenwashing (Gossen & Süphan 2022). Additionally, collaborations with ethical brands like Patagonia and Reformation demonstrate how influencer marketing can drive demand toward eco-conscious products (Henninger et al. 2017). Beyond individual actions, hashtag activism has amplified the movement, with campaigns such as #WhoMadeMyClothes and #PayUp pressuring brands for transparency and fair labor practices (Jackson & Foucault Welles 2016).
However, the intersection of influencer culture and slow fashion activism is not without challenges. Critics highlight issues such as performative activism, where influencers promote sustainability while maintaining partnerships with fast-fashion brands (Pham 2021). Algorithmic biases on social media also pose barriers, as platforms often prioritize high-engagement fast-fashion content over educational slow fashion discourse (Bick et al. 2018). Furthermore, the movement’s framing sometimes overlooks socioeconomic barriers, presenting sustainable fashion as accessible only to privileged consumers (Hethorn & Ulasewicz 2015). Despite these obstacles, influencers continue to reshape the fashion landscape by merging digital citizenship with activism—proving that social media can be a force for ethical change when used responsibly.
References
Bick, R, Halsey, E & Ekenga, CC 2018, ‘The global environmental injustice of fast fashion’, Environmental Health, vol. 17, no. 1, pp. 1-4.
Fletcher, K 2010, 'Slow fashion: an invitation for systems change', Fashion Practice.
Gossen, M & Süphan, N 2022, ‘Digital activism in sustainable fashion’, Journal of Consumer Policy, vol. 45, pp. 1-22.
Henninger, CE, Alevizou, PJ & Oates, CJ 2017, ‘What is sustainable fashion?’, Journal of Fashion Marketing and Management.
Jackson, SJ & Foucault Welles, B 2016, ‘#Ferguson is everywhere’, Information, Communication & Society, vol. 19, no. 3, pp. 397-418.
Niinimäki, K et al. 2020, ‘The environmental price of fast fashion’, Nature Reviews Earth & Environment, vol. 1, pp. 189-200.
Pham, MT 2021, Influencers and the ethics of fashion promotion, Bloomsbury.
Ribble, M 2015, Digital citizenship in schools, ISTE.
Digital Citizenship and Hashtag Activism on TikTok
Week 6

TikTok has emerged as a dominant platform for digital engagement, shaping how users practice digital citizenship and participate in hashtag activism. Digital citizenship refers to the responsible use of technology, encompassing digital literacy, online safety, ethical behaviour, and civic participation (Ribble 2015). On TikTok, this translates to users critically assessing viral trends, protecting their privacy, and engaging in ethical content creation (Oh & Lee 2022). However, challenges such as misinformation, algorithmic bias, and performative activism complicate these ideals (Anderson & Jiang 2018).
TikTok as a Space for Hashtag Publics and Political Engagement
TikTok’s algorithm-driven nature facilitates the rapid formation of hashtag publics, where users rally around shared causes. Movements like #BlackLivesMatter and #ClimateStrike have gained traction, demonstrating the platform’s ability to amplify activism (Jackson & Foucault Welles 2016). Unlike Twitter, where hashtags primarily drive discourse, TikTok’s "For You Page" (FYP) accelerates visibility, allowing grassroots campaigns to reach global audiences (Burgess & Green 2018). For example, during the 2020 U.S. elections, TikTok became a hub for youth-led voter mobilisation, illustrating its political influence (Vogels et al. 2020).
Challenges and Ethical Considerations
Despite its potential, TikTok activism faces criticism. The platform’s moderation policies have been accused of censoring sensitive topics, such as LGBTQ+ rights and geopolitical issues (Tiffany 2021). Additionally, the spread of misinformation, such as health-related conspiracies, highlights the risks of viral content (Parker et al. 2022). While hashtag activism can drive awareness, critics argue that some trends encourage "slacktivism": superficial engagement without real-world impact (Morozov 2009).
Conclusion
TikTok exemplifies the evolving landscape of digital citizenship and hashtag activism, offering both opportunities and challenges. Its algorithmic reach empowers marginalised voices, yet issues like censorship and misinformation demand critical media literacy. Future research should explore TikTok’s long-term impact on political engagement and digital advocacy.
References
Anderson, M & Jiang, J 2018, Teens, social media & technology, Pew Research Center.
Burgess, J & Green, J 2018, YouTube: Online video and participatory culture, Polity Press.
Jackson, SJ & Foucault Welles, B 2016, ‘#Ferguson is everywhere’, Information, Communication & Society, vol. 19, no. 3, pp. 397-418.
Morozov, E 2009, ‘The brave new world of slacktivism’, Foreign Policy, 19 May.
Oh, Y & Lee, H 2022, ‘Digital citizenship and TikTok engagement’, New Media & Society, vol. 24, no. 5, pp. 1120-1138.
Parker, K et al. 2022, ‘Misinformation and TikTok’, Journal of Communication, vol. 72, no. 1, pp. 45-62.
Ribble, M 2015, Digital citizenship in schools, ISTE.
Tiffany, K 2021, ‘How TikTok moderates its content’, The Atlantic, viewed 18 May 2025.
Vogels, EA et al. 2020, Teens and political content on social media, Pew Research Center.
Reality Television in the Age of Social Media: Digital Communities and Fandom
Week 5

The rise of social media has transformed reality television fandom into a highly participatory digital culture, where audiences no longer passively consume content but actively shape its production, reception, and longevity (Deller 2019). Platforms such as Twitter, Reddit, and TikTok facilitate real-time engagement, with fans live-tweeting episodes, creating memes, and dissecting storylines in online forums (Jenkins 2006). This dynamic fosters a sense of collective viewership, where viral moments such as iconic catchphrases or dramatic confrontations are amplified through user-generated content, extending the show’s cultural impact far beyond its broadcast (Deller 2019).
Reality TV fandoms operate as affective communities, where emotional investment drives both support and backlash (Andrejevic 2008). Fans engage in "stan" behavior, fiercely defending favored contestants while mobilizing against perceived villains, often through hashtag campaigns or direct social media harassment (Deller 2019). This phenomenon is particularly evident in franchises like Love Island and The Bachelor, where contestants face intense public scrutiny, blurring the lines between entertainment and real-world consequences (Marwick and boyd 2011). The parasocial relationships formed between viewers and reality stars further deepen fan engagement, as audiences perceive direct access to cast members through Instagram stories, TikTok updates, and Twitter feuds (Couldry and Hepp 2017).
Algorithmic recommendation systems on platforms like YouTube and TikTok also play a crucial role in sustaining fandom, as short-form clips and reaction videos introduce reality TV content to broader, more casual audiences (Deller 2019). However, this digital ecosystem raises ethical concerns, particularly regarding mental health and labor exploitation. Contestants often trade privacy for clout, while fan-generated content such as recaps and memes fuels the industry’s profitability without fair compensation (Andrejevic 2008). Ultimately, reality television’s survival in the social media age hinges on its ability to harness and monetize fan engagement, transforming viewers into co-producers of the genre’s ever-evolving narrative (Deller 2019).
References
Andrejevic, M 2008, 'Reality TV is work', in S Murray & L Ouellette (eds), Reality TV: remaking television culture, NYU Press, New York, pp. 87-104.
Couldry, N & Hepp, A 2017, The mediated construction of reality, Polity Press, Cambridge.
Deller, RA 2019, 'Reality television in the age of social media', Celebrity Studies, vol. 10, no. 3, pp. 411-425.
Jenkins, H 2006, Convergence culture: where old and new media collide, NYU Press, New York.
Marwick, AE & boyd, d 2011, 'To see and be seen: celebrity practice on Twitter', Convergence, vol. 17, no. 2, pp. 139-158.