How We Can Protect The Truth In The Age of Misinformation; Insight Gained
by Wanda Leigh S. Santos
Fake News: A Critical Contemporary Concern
False information is news, stories, or hoaxes created to deliberately misinform or deceive readers. Fabricated information with no credible basis is potentially risky because it can create chaos and misunderstanding between people.
As the phenomenon known as Accelerating Change persists, each wave of technological advancement builds a stronger generation of technology. On the other hand, its disadvantages persevere, and the most timely, relevant, and dangerous among them is fake news. Media outlets, social critics, political organizations, and research groups have identified the problem of “fake news” as a critical contemporary concern.

In a 2018 TEDx talk in Geneva, Switzerland, IDE lead Sinan Aral discusses some of the threats and dangers we face from unchecked social media posts, citing his strategies and research on how we can untangle the web between true and false. Sinan and his team found that false news diffused further, faster, deeper, and more broadly than the truth in every category of information they studied, sometimes by an order of magnitude.
In fact, false political news was the most viral: it diffused further, faster, deeper, and more broadly than any other type of false news. During the 2016 US presidential election alone, the Internet Research Agency's efforts reached 126 million people on Facebook in the United States, and it issued three million individual tweets and 43 hours' worth of YouTube content, all of which was fake -- misinformation designed to sow discord in the election. A recent study by Oxford University showed that in the recent Swedish elections, one-third of all the information spreading on social media about the election was fake or misinformation. In addition, these types of social-media misinformation campaigns can spread what has been called "genocidal propaganda," for instance against the Rohingya in Burma, and have triggered mob killings in India.
Based on their study, false-news spreaders had fewer followers, followed fewer people, were less active, were less often "verified," and had been on Twitter for a shorter period. And yet false news was 70 percent more likely to be retweeted than the truth, controlling for all of these and many other factors.
Sinan's team came up with a different hypothesis, the "novelty hypothesis." To understand people's perceptions of false news, they looked at the information and the sentiment contained in the replies to true and false tweets. What they found was that, across a range of different measures of sentiment -- surprise, disgust, fear, sadness, anticipation, joy, and trust -- replies to false tweets exhibited significantly more surprise and disgust, while replies to true tweets exhibited significantly more anticipation, joy, and trust.
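To make the idea of measuring sentiment in replies concrete, here is a minimal sketch of how emotion words could be tallied across reply texts. The tiny lexicon below is entirely made up for illustration; the actual study relied on a much larger word-emotion association lexicon.

```python
from collections import Counter

# Tiny illustrative emotion lexicon (hypothetical -- a real analysis
# would use a large, validated word-emotion lexicon).
LEXICON = {
    "wow": "surprise", "unbelievable": "surprise", "shocking": "surprise",
    "gross": "disgust", "awful": "disgust",
    "hope": "anticipation", "great": "joy", "reliable": "trust",
}

def emotion_profile(replies):
    """Count emotion-bearing words across a list of reply texts."""
    counts = Counter()
    for reply in replies:
        for word in reply.lower().split():
            emotion = LEXICON.get(word.strip(".,!?"))
            if emotion:
                counts[emotion] += 1
    return counts

false_replies = ["Wow this is shocking", "unbelievable and gross"]
true_replies = ["great news, hope it holds", "seems reliable"]

print(emotion_profile(false_replies))  # dominated by surprise/disgust
print(emotion_profile(true_replies))   # dominated by joy/anticipation/trust
```

Comparing the two profiles mirrors the study's finding in miniature: replies to false tweets skew toward surprise and disgust, replies to true tweets toward anticipation, joy, and trust.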
The surprise corroborates their novelty hypothesis. They also studied potential false-news spreaders such as bots, and what they found was that, yes, bots were indeed accelerating the spread of false news online -- but they were accelerating the spread of true news at approximately the same rate. This means bots are not responsible for the differential diffusion of truth and falsity online; we humans, the users, are the ones responsible for the spread of any type of news or information, so we have to be extra careful and think many times before clicking something that goes out on the Internet. Because democratic satisfaction is key to sustaining citizen participation, the perceived legitimacy of electoral outcomes, and democratic commitment, high levels of presumed influence of misinformation threaten the fundamentals of every democratic governance.
As Aral discussed, on April 23, 2013, the Associated Press put out the following tweet on Twitter: "Breaking news: Two explosions at the White House and Barack Obama has been injured." This tweet was retweeted 4,000 times in less than five minutes, and it went viral thereafter. The news turned out to be fake, sending markets on a round-trip roller-coaster ride; it had been propagated by Syrian hackers who infiltrated the Associated Press Twitter handle. Their purpose was to disrupt society, but they disrupted much more, because automated trading algorithms immediately seized on the sentiment of this tweet and began trading on the possibility that the president of the United States had been injured or killed in the explosion. As the tweet spread, those algorithms sent the stock market crashing, wiping out 140 billion dollars in equity value in a single day.
Social media is an ever-changing and ever-evolving field. It is the new, but not so new, way of communicating and connecting, and it serves as a platform where we can share our thoughts and ideas. It originated as a way to interact and socialize with people, specifically family and friends, and over the past few years it has allowed the creation and exchange of content among 4.54 billion Internet users, 3.725 billion of whom are active social media users. Social media has also been a tool for spreading false or fake news, and this has always been an important and dangerous issue, especially in this time of pandemic, when people spend most of their time on social media platforms with constant access to information online.
This problem has been a challenge for as long as the Internet and social media have existed and been a part of human lives. Sinan addressed about five different paths forward. Each one of them has promise, but each one of them has its challenges. The first one is labeling.
Think about it this way: when you go to the grocery store to buy food, it is extensively labeled. You know how many calories it has and how much fat it contains, and yet when we consume information, we have no labels whatsoever. What is contained in this information? Is the source credible? Where was this information gathered from? We have none of that when we are consuming information. Labeling is a potential avenue, but it comes with its challenges. For instance, who gets to decide, in society, what's true and what's false? Is it the government? Is it Facebook? Is it an independent consortium of fact-checkers? And who's checking the fact-checkers?
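To picture what a "nutrition label" for information might contain, here is a small sketch of such a label as a data structure. The field names and statuses are purely hypothetical -- no platform uses this schema -- but they capture the questions raised above: who is the source, are they verified, and has anyone checked the claim.

```python
from dataclasses import dataclass

# Hypothetical "nutrition label" for a news item -- field names are
# illustrative, not any real platform's schema.
@dataclass
class InfoLabel:
    source: str
    source_verified: bool
    primary_sources_cited: int
    fact_check_status: str  # e.g. "unreviewed", "disputed", "confirmed"

    def summary(self):
        """One-line summary a reader could see next to the story."""
        mark = "verified" if self.source_verified else "unverified"
        return (f"{self.source} ({mark}), "
                f"{self.primary_sources_cited} primary sources, "
                f"status: {self.fact_check_status}")

label = InfoLabel("Example Wire", True, 3, "unreviewed")
print(label.summary())
# → Example Wire (verified), 3 primary sources, status: unreviewed
```

Even a sketch like this makes the governance problem visible: someone still has to populate the `fact_check_status` field, which is exactly the "who checks the fact-checkers" question.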
Another solution is to improve regulation. In the United States, Sinan and his team are currently exploring what might happen if Facebook and others are regulated. While we should consider things like regulating political speech, labeling it as political speech, and making sure foreign actors can't fund political speech, regulation also has its dangers. For instance, Malaysia just instituted a six-year prison sentence for anyone found spreading misinformation, and in authoritarian regimes these kinds of policies can be used to suppress minority opinions and to continue to extend repression.
The third possible option Sinan stated is transparency. We want to know how Facebook's algorithms work -- how the data combines with the algorithms to produce the outcomes that we see. If possible, Facebook should open up and show us exactly its inner workings. And if we want to know social media's effect on society, we need scientists, researchers, and others to have access to this kind of information. But at the same time, we are asking Facebook to lock everything down, to keep all of the data secure.
Other aspects that we could improve are algorithms and machine learning. This technology was devised to root out and understand fake news, how it spreads, and to try to dampen its flow. But humans have to stay in the loop of this technology, because we can never escape the fact that underlying any technological solution or approach is a fundamental ethical and philosophical question: how do we define truth and falsity, to whom do we give the power to define truth and falsity, which opinions are legitimate, which types of speech should be allowed, and so on. Technology is not a solution to that. Ethics and philosophy are.
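As a rough illustration of the machine-learning path described above, here is a minimal naive Bayes text classifier that scores headlines by how "fake-like" their wording is. The training examples and the idea of routing flags to a reviewer are invented for this sketch; real systems use far richer signals (source, propagation pattern) and, as the paragraph stresses, must keep humans in the loop rather than deciding truth automatically.

```python
import math
from collections import Counter

class NaiveBayesFlagger:
    """Tiny bag-of-words naive Bayes model for flagging suspect headlines.

    Illustrative only: a positive score should route the item to a human
    fact-checker, never trigger automatic removal.
    """

    def __init__(self):
        self.word_counts = {"fake": Counter(), "real": Counter()}
        self.label_counts = Counter()

    def train(self, headline, label):
        self.label_counts[label] += 1
        for word in headline.lower().split():
            self.word_counts[label][word] += 1

    def score(self, headline):
        """Log-odds that the headline is fake (higher = more suspect)."""
        totals = {c: sum(self.word_counts[c].values()) for c in ("fake", "real")}
        vocab = len(set(self.word_counts["fake"]) | set(self.word_counts["real"]))
        logodds = math.log((self.label_counts["fake"] + 1)
                           / (self.label_counts["real"] + 1))
        for word in headline.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            p_fake = (self.word_counts["fake"][word] + 1) / (totals["fake"] + vocab)
            p_real = (self.word_counts["real"][word] + 1) / (totals["real"] + vocab)
            logodds += math.log(p_fake / p_real)
        return logodds

flagger = NaiveBayesFlagger()
flagger.train("shocking secret cure doctors hate", "fake")
flagger.train("you won't believe this miracle", "fake")
flagger.train("city council approves new budget", "real")
flagger.train("study finds modest gains in test scores", "real")

# Positive log-odds = hand off to a human reviewer.
print(flagger.score("shocking miracle cure"))  # positive (suspect)
print(flagger.score("council budget vote"))    # negative (looks routine)
```

Note what the classifier cannot do: it only measures how a headline is worded, not whether it is true -- which is why the ethical question of who defines truth cannot be delegated to the model.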
Humans are the spreaders of fake news; social media platforms are just tools that we humans have to take responsibility for. Precautions, safe use of apps, and guidelines about the Internet are the major mechanisms that will protect the truth in the age of misinformation.
Users should be careful and selective about the posts they engage with and share. People are responsible for defending the truth. Doing so will create a new generation of technological advancement in which both the users and the digital economy are protected.