#process information and detect misinformation from media outlets
Explore tagged Tumblr posts
rroketa · 1 year ago
Text
'media literacy' has become such an annoying term lately because once someone sees people hating on some dumb edgy show or game they like, it's all suddenly the 'death of media literacy'.
128 notes · View notes
mariacallous · 9 months ago
Text
Washington-based rights organisation Freedom House’s latest report on internet freedoms worldwide, published on Wednesday, warned that in south-east Europe, censorship and misinformation are persistent problems in Turkey and Serbia.
Online liberties in Turkey remain threatened by the actions of the government, Freedom House’s Freedom on the Net 2024 report said.
“Internet freedom in Turkey remains under threat even though the country has observed a single point overall increase in this year’s analysis, due to the absence of a natural disaster that devastated the infrastructure, as in years before,” Gurkan Ozturan, Freedom House’s Turkey rapporteur, told BIRN.
Ozturan noted that many violations were reported during election periods, but this is not the only major issue.
“There were many cases of digital violations during the election process, including blocking access to news outlets. Regarding the elections in May 2023, other cases of violations were detected retrospectively during the reporting period,” Ozturan said, underlining the increase in access blocks of news articles and other content, as well as access blocks of digital platforms.
According to the report, Turkey stood out with negative examples in all five categories: arbitrary access blocking, blocking of social media and communication platforms, blocking of access to news, arrests of users, and physical attacks on users. Turkey “set a bad example for the rest of the world,” Ozturan said.
People in Serbia also suffered from a politically manipulated online space, particularly during elections, the report said.
“Ahead of Serbia’s December 2023 elections, pro-government tabloids published false and misleading information about the opposition and independent media, including a fake video purporting to show the political opposition buying votes. These campaigns disproportionately target women who play a prominent role in political processes,” it alleged.
The report highlighted that activists in Serbia were targeted with spyware, and that journalists continue to face strategic lawsuits against public participation, or SLAPPs.
Freedom House said that in general, freedoms on the net have been curbed around the world.
“A rapid series of consequential elections have reshaped the global information environment over the past year. Technical censorship curbed many opposition parties’ ability to reach supporters and suppressed access to independent reporting about the electoral process. False claims of voter fraud and a rise in harassment of election administrators threatened public confidence in the integrity of balloting procedures,” the report said.
Of the 41 countries covered by the Freedom House report that held or prepared for nationwide elections during the coverage period, voters in at least 25 contended with a censored information space, it added.
0 notes
thestoryofmymind99 · 1 year ago
Text
Fox Corp. and Polygon Labs Join Forces to Combat Deepfake Distrust
Revolutionizing Media Integrity: Fox Corp. and Polygon Labs Unite to Tackle the Deepfake Crisis
Deepfake technology has become a growing concern in recent years, with its ability to manipulate and fabricate audiovisual content raising questions about the authenticity of what we see and hear. In an effort to combat this growing distrust, Fox Corp. and Polygon Labs have announced a groundbreaking partnership that aims to develop advanced deepfake detection and verification tools. This collaboration brings together the media giant Fox Corp., known for its influential news and entertainment platforms, and Polygon Labs, a leading provider of augmented reality and virtual set solutions, to tackle the rising threat of deepfakes.
The partnership between Fox Corp. and Polygon Labs comes at a crucial time when deepfake technology is becoming increasingly sophisticated and accessible. Deepfakes, which use artificial intelligence to superimpose one person's face onto another's body or alter their voice, have the potential to deceive viewers and undermine public trust in media. This article will delve into the details of this collaboration, exploring the innovative technologies and strategies that Fox Corp. and Polygon Labs will employ to detect and combat deepfakes.
Additionally, it will highlight the significance of this partnership in the broader context of media integrity and the fight against disinformation.
Key Takeaways:
1. Fox Corp. and Polygon Labs have formed a partnership to address the growing concern of deepfake technology and its potential to spread misinformation and erode trust in media.
2. Deepfake technology uses artificial intelligence to create convincing and realistic fake videos and images, making it increasingly difficult to distinguish between what is real and what is manipulated.
3. The collaboration between Fox Corp. and Polygon Labs aims to develop advanced detection and verification tools that can identify deepfake content and prevent its dissemination on Fox's platforms.
4. This partnership is a significant step towards combating deepfake distrust, as it combines Fox Corp.'s expertise in media and content creation with Polygon Labs' cutting-edge technology in visual effects and augmented reality.
5. By proactively addressing the deepfake challenge, Fox Corp. is demonstrating its commitment to maintaining the integrity of its news and entertainment offerings, ensuring that viewers can trust the authenticity of the content they consume.
Emerging Trend: Enhanced Deepfake Detection and Prevention
As the use of deepfake technology becomes more prevalent, media companies like Fox Corp. are taking proactive measures to combat the potential misuse and spread of misinformation. Fox Corp. has recently partnered with Polygon Labs, a leading provider of augmented reality and virtual reality solutions, to develop advanced tools for deepfake detection and prevention.
This emerging trend is driven by the growing concern over the impact of deepfakes on public trust and the credibility of media outlets. Deepfakes, which are highly realistic manipulated videos or images, have the potential to deceive viewers and spread false information. By joining forces with Polygon Labs, Fox Corp. aims to stay ahead of the curve in identifying and mitigating deepfake threats.
The collaboration between Fox Corp. and Polygon Labs involves the development of cutting-edge algorithms and machine learning models that can analyze media content for signs of manipulation. These tools will enable Fox Corp. to quickly identify and flag potential deepfakes, allowing for a more accurate and reliable news reporting process.
With the rise of deepfake technology, it is crucial for media companies to invest in robust detection and prevention systems. By leveraging the expertise of Polygon Labs, Fox Corp. is taking a proactive stance in protecting its viewers from the potential harm caused by deepfakes.
Future Implications: Safeguarding Media Integrity and Trust
The collaboration between Fox Corp. and Polygon Labs marks a significant step forward in the fight against deepfake distrust. By implementing advanced detection and prevention tools, media companies can safeguard their integrity and maintain the trust of their audience.
One of the key future implications of this partnership is the potential to restore public confidence in the authenticity of media content. As deepfakes become more sophisticated and harder to detect, viewers may become increasingly skeptical of the information presented to them. However, by actively addressing this issue, Fox Corp. is demonstrating its commitment to providing accurate and reliable news.
Moreover, the development of enhanced deepfake detection and prevention tools could have broader implications beyond the media industry. As deepfake technology evolves, it poses a significant threat to various sectors, including politics, finance, and entertainment. By investing in advanced technologies, companies like Fox Corp. are not only protecting their own interests but also contributing to the overall fight against misinformation.
Another future implication of this collaboration is the potential for knowledge sharing and industry-wide collaboration. As Fox Corp. and Polygon Labs work together to combat deepfake distrust, they will likely develop valuable insights and best practices. These can be shared with other media companies and organizations, fostering a collaborative approach to tackling the deepfake challenge.
Furthermore, the partnership between Fox Corp. and Polygon Labs could inspire other media companies to prioritize deepfake detection and prevention. As the threat of deepfakes continues to grow, it is crucial for the entire industry to take a united stand against misinformation. By leading the way in this regard, Fox Corp. is setting a positive example for others to follow.
Key Insight 1: The Growing Threat of Deepfake Technology
Deepfake technology, which uses artificial intelligence to create realistic but fabricated videos, poses a significant threat to the media industry. With the ability to manipulate and distort reality, deepfakes have the potential to undermine trust in news and fuel misinformation campaigns. As the technology becomes more accessible and sophisticated, the need for effective countermeasures becomes paramount.
The collaboration between Fox Corp. and Polygon Labs to combat deepfake distrust is a proactive step towards addressing this pressing issue. By joining forces, these two industry leaders are sending a powerful message that they are committed to protecting the integrity of their content and restoring trust in the media.
Key Insight 2: The Power of Collaboration in Combating Deepfakes
Deepfake detection and mitigation require a multi-faceted approach that combines technological expertise, industry knowledge, and collaborative efforts. Fox Corp. and Polygon Labs recognize the importance of pooling their resources and expertise to develop effective solutions that can withstand the ever-evolving nature of deepfake technology.
This partnership highlights the power of collaboration in tackling complex challenges that threaten the media industry. By leveraging their respective strengths, Fox Corp. and Polygon Labs can develop innovative tools and strategies to detect and combat deepfakes, ultimately safeguarding the credibility of their content and the trust of their audience.
Key Insight 3: The Need for Continued Innovation and Adaptability
Deepfake technology is constantly evolving, with new techniques and algorithms emerging at a rapid pace. To stay ahead of the game, media organizations must demonstrate a commitment to continuous innovation and adaptability.
The collaboration between Fox Corp. and Polygon Labs signifies a proactive approach to addressing the deepfake threat. By investing in research and development, these companies can stay at the forefront of deepfake detection and mitigation, ensuring they are equipped to combat the latest advancements in this technology.
Moreover, this collaboration serves as a call to action for other media organizations to prioritize the development of robust countermeasures against deepfakes. As deepfake technology becomes more accessible, it is crucial for the industry to unite and invest in innovative solutions that can effectively combat this growing threat.
Fox Corp. and Polygon Labs: A Powerful Collaboration
Fox Corp. and Polygon Labs have recently joined forces to combat the growing threat of deepfake technology. This collaboration marks a significant step in the fight against deepfake distrust, as both companies bring their unique expertise and resources to the table. Fox Corp., a global media and entertainment company, and Polygon Labs, a leading provider of augmented reality (AR) and virtual reality (VR) solutions, have recognized the urgent need to address the potential harm caused by deepfakes.
By combining their strengths, they aim to develop innovative solutions that will help protect the integrity of media content and restore trust among audiences.
Understanding the Deepfake Threat
Deepfake technology has rapidly advanced in recent years, enabling the creation of highly realistic, AI-generated videos that manipulate or replace the faces and voices of individuals. While deepfakes can be used for entertainment purposes, they also pose a significant risk when used maliciously. Deepfake videos can be used to spread misinformation, defame individuals, manipulate public opinion, and even facilitate fraud.
The potential consequences of this technology are far-reaching, as it undermines the authenticity and credibility of visual media. Recognizing the gravity of this threat, Fox Corp. and Polygon Labs are determined to tackle the issue head-on.
The Role of Fox Corp. in Deepfake Mitigation
As a prominent media and entertainment company, Fox Corp. has a vested interest in protecting its brand reputation and ensuring the authenticity of its content. By partnering with Polygon Labs, Fox Corp. aims to leverage the latter's expertise in augmented reality and virtual reality to develop cutting-edge technologies that can detect and combat deepfakes. The collaboration will enable Fox Corp. to stay ahead of the deepfake curve, implementing robust measures to safeguard its media assets and maintain the trust of its audience.
Polygon Labs: Pioneering AR and VR Solutions
Polygon Labs has established itself as a leader in the field of augmented reality and virtual reality solutions. The company's innovative technologies have been widely adopted across various industries, including broadcasting, sports, and live events. By applying their expertise to the realm of deepfake detection and prevention, Polygon Labs aims to provide Fox Corp. with state-of-the-art tools that can identify and flag manipulated content.
These tools will not only benefit Fox Corp. but also have the potential to be utilized by other media organizations seeking to combat the deepfake threat.
Collaborative Research and Development
One of the key aspects of the Fox Corp. and Polygon Labs collaboration is their commitment to collaborative research and development. By pooling their resources and expertise, both companies will work together to develop advanced algorithms and machine learning models that can effectively detect deepfakes. This joint effort will involve extensive testing, analysis, and refinement of the technologies to ensure their accuracy and reliability.
By investing in research and development, Fox Corp. and Polygon Labs aim to stay at the forefront of deepfake mitigation strategies.
Building Public Awareness and Education
Recognizing the importance of public awareness and education, Fox Corp. and Polygon Labs are also committed to raising awareness about the dangers of deepfakes. Through targeted campaigns, educational initiatives, and partnerships with organizations focused on media literacy, the collaboration aims to empower individuals to identify and critically evaluate potentially manipulated content. By equipping the public with the necessary knowledge and tools, Fox Corp. and Polygon Labs hope to create a more discerning audience that can better navigate the digital landscape.
Industry Collaboration and Best Practices
The fight against deepfake distrust requires a collective effort from various stakeholders in the media industry. Fox Corp. and Polygon Labs recognize the importance of collaboration and are actively engaging with industry partners to share best practices and develop standardized protocols for deepfake detection and prevention. By fostering collaboration and knowledge exchange, the collaboration aims to establish a united front against deepfake threats, ensuring that the media industry as a whole remains resilient in the face of evolving technology.
Looking Ahead: Future Innovations and Challenges
The collaboration between Fox Corp. and Polygon Labs represents a significant step forward in combating deepfake distrust. However, as technology continues to evolve, new challenges are likely to emerge. Both companies are aware of the need to continuously innovate and adapt their strategies to stay ahead of malicious actors.
The collaboration serves as a foundation for future advancements in deepfake detection and prevention, with the ultimate goal of restoring trust in visual media and safeguarding the integrity of content.
The partnership between Fox Corp. and Polygon Labs brings together the expertise and resources of two industry leaders to address the growing threat of deepfake technology. By leveraging cutting-edge technologies, collaborative research, and public awareness initiatives, the collaboration aims to combat deepfake distrust and protect the integrity of media content. As the fight against deepfakes continues, the collaboration between Fox Corp. and Polygon Labs serves as a beacon of hope, demonstrating the power of collaboration in tackling emerging challenges in the digital age.
The Rise of Deepfake Technology
Deepfake technology, a term derived from "deep learning" and "fake," refers to the use of artificial intelligence (AI) to create manipulated or fabricated videos that appear incredibly realistic. The concept of deepfakes emerged in the late 2010s, primarily through online forums and social media platforms.
Initially, deepfakes were mostly used for harmless entertainment purposes, such as swapping faces in videos or creating amusing parodies. However, as the technology advanced, concerns grew regarding its potential for misuse and the spread of disinformation.
The Impact on Trust and Credibility
The rise of deepfake technology has had a significant impact on trust and credibility, particularly in the media industry. With the ability to create convincing videos of public figures saying or doing things they never actually did, deepfakes have the potential to undermine the public's trust in news and information.
In recent years, several high-profile incidents involving deepfakes have raised alarm bells. For example, a deepfake video of former President Barack Obama went viral, showing him delivering a speech that he never actually gave. This incident highlighted the potential for deepfakes to be used as a tool for spreading misinformation and manipulating public opinion.
The Role of Fox Corp. and Polygon Labs
Fox Corporation, a prominent media company, recognized the growing threat of deepfakes and the need to combat their negative effects on trust and credibility. In response, they joined forces with Polygon Labs, a leading provider of augmented reality and virtual set solutions, to develop innovative solutions to address this issue.
By combining their expertise, Fox Corp. and Polygon Labs aim to create advanced technologies that can detect and authenticate deepfake videos. Their collaboration seeks to develop tools that can help media organizations and individuals identify deepfakes and distinguish them from authentic content.
The Evolution of the Partnership
The partnership between Fox Corp. and Polygon Labs has evolved over time, reflecting the increasing sophistication of deepfake technology and the need for more robust countermeasures. Initially, the collaboration focused on research and development, exploring different techniques for detecting and analyzing deepfakes.
As deepfake technology continued to advance, the partnership expanded its efforts to include the development of practical tools and solutions. This involved the integration of AI algorithms and machine learning models into existing media production workflows, enabling real-time deepfake detection and authentication.
Current State and Future Prospects
Currently, Fox Corp. and Polygon Labs have made significant strides in their fight against deepfake distrust. They have developed cutting-edge technologies that can identify subtle visual cues and discrepancies in videos, helping to expose deepfakes with a high degree of accuracy.
Moreover, the partnership has actively engaged with other industry stakeholders, sharing their expertise and collaborating on best practices to combat deepfake proliferation. This collaborative approach has proven crucial in staying ahead of the constantly evolving tactics employed by those seeking to exploit deepfake technology for nefarious purposes.
Looking to the future, Fox Corp. and Polygon Labs remain committed to staying at the forefront of deepfake detection and authentication. They continue to invest in research and development, exploring new techniques and technologies to combat the ever-growing threat of deepfakes.
Ultimately, their efforts are aimed at preserving trust and credibility in the media landscape, ensuring that the public can rely on the authenticity of the content they consume. By proactively addressing the challenges posed by deepfakes, Fox Corp. and Polygon Labs are playing a vital role in safeguarding the integrity of information in the digital age.
Fox Sports: Enhancing Viewer Trust with Real-Time Graphics
One of the key success stories resulting from the collaboration between Fox Corp. and Polygon Labs is the implementation of real-time graphics to combat deepfake distrust in Fox Sports broadcasts. The use of deepfake technology has raised concerns about the authenticity of live sports events, as it can be used to manipulate footage and create false narratives.
To address these concerns and enhance viewer trust, Fox Sports partnered with Polygon Labs to develop a cutting-edge graphics system that provides real-time data and analysis during live broadcasts. This system allows viewers to access accurate and up-to-date information, reducing the risk of misinformation and deepfake manipulation.
For example, during a live NFL game, Fox Sports utilized the graphics system to display player statistics, team rankings, and real-time analysis. By incorporating these graphics into the broadcast, Fox Sports enabled viewers to rely on accurate information and gain a deeper understanding of the game. This transparency and real-time data presentation helped build trust between Fox Sports and its audience, mitigating the impact of deepfake distrust.
Fox News: Fact-Checking Deepfake Videos in Real-Time
In the realm of news broadcasting, deepfake videos have become a significant concern, as they can be used to spread false information and manipulate public opinion. Fox News, in collaboration with Polygon Labs, has taken a proactive approach to combat deepfake distrust by implementing a real-time fact-checking system.
During live news segments, Fox News utilizes advanced video analysis algorithms developed by Polygon Labs to detect potential deepfake videos. The system analyzes facial movements, voice patterns, and other visual cues to identify discrepancies that may indicate manipulated content. If a deepfake video is detected, the fact-checking system triggers an alert, and the news anchor can address the issue immediately, ensuring accurate reporting.
This real-time fact-checking system has proven to be highly effective in maintaining the credibility of Fox News broadcasts. By promptly addressing deepfake videos and providing accurate information, Fox News has built a reputation for reliable reporting, earning the trust of its audience.
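Neither company has published implementation details, so the Python sketch below is only a rough illustration of what such a live alerting loop could look like; the function names, the rolling 30-frame window, the 0.8 threshold, and the stand-in scorer are hypothetical assumptions, not a description of the actual Fox News/Polygon Labs system.

```python
from collections.abc import Callable, Iterable
from dataclasses import dataclass

@dataclass
class FrameResult:
    index: int
    score: float  # 0.0 = looks authentic, 1.0 = almost certainly manipulated

def monitor_stream(frames: Iterable[bytes],
                   score_frame: Callable[[bytes], float],
                   alert_threshold: float = 0.8,
                   window: int = 30) -> list[FrameResult]:
    """Score each incoming frame and raise an alert when the rolling
    average of manipulation scores crosses the threshold."""
    results: list[FrameResult] = []
    recent: list[float] = []
    for i, frame in enumerate(frames):
        score = score_frame(frame)  # in practice, a trained detector model
        results.append(FrameResult(i, score))
        recent.append(score)
        if len(recent) > window:
            recent.pop(0)
        if len(recent) == window and sum(recent) / window >= alert_threshold:
            print(f"ALERT: frames {i - window + 1}-{i} look manipulated")
    return results

# Toy usage with a stand-in scorer that flags frames containing the byte 0xFF.
suspect_frames = [bytes([0xFF] * 16)] * 40
monitor_stream(suspect_frames, score_frame=lambda f: 1.0 if 0xFF in f else 0.0)
```

In a real control-room workflow the scorer would be a trained model fed by the video pipeline, and the alert would be routed to the anchor's producers rather than printed.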
FX Networks: Safeguarding Original Content with Watermarking Technology
Another notable case study resulting from the collaboration between Fox Corp. and Polygon Labs involves FX Networks and their efforts to protect their original content from deepfake manipulation. Deepfake technology poses a significant threat to the entertainment industry, as it can be used to create counterfeit content that undermines the integrity of original productions.
To safeguard their content, FX Networks partnered with Polygon Labs to implement advanced watermarking technology. This technology embeds unique digital markers within the video footage, making it difficult for deepfake manipulators to replicate or alter the content without detection. These watermarks are invisible to the naked eye but can be easily identified using specialized software.
By utilizing watermarking technology, FX Networks ensures that their original content remains authentic and untampered. This proactive approach not only protects the integrity of their productions but also reinforces viewer trust in the network's commitment to delivering genuine and reliable entertainment.
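The specific watermarking scheme is not described, so the snippet below illustrates only the general idea with a deliberately simplified least-significant-bit embed; the key, the payload format, and the helper names are invented for this sketch, and production watermarks use far more robust embedding designed to survive compression and re-encoding.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-broadcaster-key"  # hypothetical signing key

def embed_watermark(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide the payload bits in the least-significant bit of each pixel byte;
    changing only the LSB leaves the image visually unchanged."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this frame")
    marked = bytearray(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit
    return marked

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read the hidden bits back out of the first length*8 pixel bytes."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(sum(bit << i for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
                 for b in range(length))

# The payload is a keyed tag over an episode identifier, so the extracted mark
# can be checked against the broadcaster's own records.
tag = hmac.new(SECRET_KEY, b"show:ep101", hashlib.sha256).digest()[:8]
frame = bytearray(range(256)) * 4  # stand-in for one frame of raw pixel data
marked = embed_watermark(frame, tag)
assert extract_watermark(bytes(marked), length=8) == tag
```

The check at the end shows the round trip: the hidden tag comes back out of the marked frame bytes unchanged.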
FAQs
1. What is Fox Corp. and Polygon Labs' partnership all about?
Fox Corp. and Polygon Labs have joined forces to combat deepfake distrust. Deepfakes are highly realistic manipulated videos or images that can deceive viewers into believing false information or events. This partnership aims to develop advanced technology to detect and authenticate deepfakes, ensuring that viewers can trust the authenticity of the content they consume.
2. Why is deepfake distrust a concern?
Deepfake distrust is a significant concern because it can undermine the credibility of information and media. With the increasing sophistication of deepfake technology, it becomes harder to distinguish between real and manipulated content. This can lead to misinformation, damage reputations, and even pose a threat to national security.
3. How will Fox Corp. and Polygon Labs combat deepfake distrust?
Fox Corp. and Polygon Labs will utilize their expertise in media and technology to develop advanced algorithms and tools that can detect and authenticate deepfakes. By leveraging machine learning and artificial intelligence, they aim to create a robust system that can identify manipulated content and ensure the authenticity of media.
4. What are the potential applications of this partnership?
The partnership between Fox Corp. and Polygon Labs has wide-ranging applications. It can be used to authenticate news footage, ensuring that viewers receive accurate information. It can also be applied to entertainment content, protecting the integrity of movies, TV shows, and other media.
Additionally, this technology can aid in preventing deepfake-based cybercrimes and fraud.
5. How will this partnership benefit the viewers?
This partnership will benefit viewers by promoting trust in the media they consume. With the ability to detect and authenticate deepfakes, viewers can have confidence in the authenticity of the content they see. This will help combat misinformation and ensure that viewers make informed decisions based on reliable information.
6. Will this technology be accessible to the general public?
While the specific details of how this technology will be implemented are not yet known, it is likely that the general public will benefit from its application. As deepfake technology becomes more prevalent, it is essential to make the tools to combat it widely accessible to ensure the integrity of information and media.
7. Can this technology be used to manipulate content in the opposite direction?
No, the primary goal of this partnership is to detect and authenticate deepfakes, not to manipulate content. The focus is on combating deepfake distrust and ensuring the authenticity of media. The technology developed will be used to identify and prevent the spread of manipulated content, rather than create it.
8. How long will it take for this technology to be fully developed?
The timeline for the development of this technology is not explicitly mentioned in the announcement. However, given the complexity of deepfake detection and authentication, it is likely to require significant research, development, and testing. It is important to prioritize accuracy and reliability over speed to ensure the effectiveness of the technology.
9. Will this partnership have any impact on journalism and media ethics?
Yes, the partnership between Fox Corp. and Polygon Labs can have a significant impact on journalism and media ethics. By enabling the detection and authentication of deepfakes, it helps uphold the principles of accuracy and truthfulness in journalism. It also highlights the importance of ethical media practices and the responsibility of media organizations to provide reliable information to the public.
10. Are there any concerns or potential challenges with this partnership?
While the partnership between Fox Corp. and Polygon Labs is a positive step towards combating deepfake distrust, there are potential challenges to consider. Deepfake technology is continually evolving, and new techniques may emerge that can bypass current detection methods. Therefore, ongoing research and development will be necessary to stay ahead of malicious actors.
Additionally, striking a balance between privacy and the need for deepfake detection may raise concerns, which will need to be addressed transparently and responsibly.
Concept 1: Deepfakes
Deepfakes are manipulated videos or images that use artificial intelligence (AI) to create highly realistic but fake content. These deepfakes can make it seem like someone said or did something they actually didn't. They are created by training AI algorithms on large datasets of real images and videos, and then using that knowledge to generate new content that looks convincing.
Deepfakes can be used for various purposes, including spreading misinformation, creating fake news, or even for entertainment.
Concept 2: Fox Corp. and Polygon Labs
Fox Corporation is a media company that owns and operates various television networks, including the Fox News Channel. They are known for their news and entertainment programming. Polygon Labs, on the other hand, is a technology company specializing in real-time graphics and visual effects for the broadcast industry.
They provide solutions to enhance the visual presentation of news and other media content.
Concept 3: Joining Forces to Combat Deepfake Distrust
Fox Corp. and Polygon Labs have decided to work together to combat the growing distrust caused by deepfake technology. They aim to develop tools and technologies that can detect and counter the spread of deepfakes. By leveraging their expertise in media and technology, they hope to restore trust in the authenticity and reliability of the content they produce and deliver to their audiences.
Common Misconception 1: Fox Corp. and Polygon Labs are creating deepfake technology
One common misconception surrounding the partnership between Fox Corp. and Polygon Labs is that they are joining forces to create deepfake technology. Deepfakes, which are manipulated videos or images created using artificial intelligence, have raised concerns about their potential to spread misinformation and deceive viewers.
However, it is important to clarify that Fox Corp. and Polygon Labs are not developing deepfake technology. In fact, their collaboration aims to combat the distrust caused by deepfakes by creating advanced graphics and visual effects tools.
Polygon Labs specializes in developing real-time 3D graphics and visual effects solutions for broadcasters and media companies. Their expertise lies in creating immersive and engaging visual content that enhances the viewer experience. By partnering with Fox Corp., Polygon Labs aims to leverage their technology to improve the production quality of Fox's broadcasts and digital content.
It is crucial to understand that Fox Corp. and Polygon Labs are focused on enhancing the authenticity and visual appeal of their content, rather than engaging in the creation or promotion of deepfake technology.
Common Misconception 2: The partnership aims to deceive viewers with realistic fake content
Another misconception that needs clarification is the belief that the collaboration between Fox Corp. and Polygon Labs intends to deceive viewers by creating realistic fake content. This misconception stems from the association of deepfakes with the manipulation of video and images to create false narratives.
However, the partnership between Fox Corp. and Polygon Labs has a different objective altogether. Their goal is to enhance the production quality of Fox's broadcasts and digital content by leveraging advanced graphics and visual effects tools. The emphasis is on creating visually appealing and engaging content that captivates the audience.
By utilizing Polygon Labs' expertise in real-time 3D graphics, Fox Corp. aims to deliver immersive visual experiences that elevate their storytelling capabilities. This collaboration does not involve the creation of fake content or any intention to deceive viewers.
Common Misconception 3: The partnership will exacerbate the spread of misinformation
Some individuals may fear that the partnership between Fox Corp. and Polygon Labs will contribute to the spread of misinformation, given the concerns associated with deepfakes. Misinformation is a significant problem in today's digital age, and any collaboration involving advanced visual effects technology might raise suspicions.
However, it is important to note that Fox Corp. and Polygon Labs are committed to upholding ethical standards and responsible journalism. Their collaboration aims to enhance the quality and authenticity of Fox's content, rather than promoting misinformation.
By leveraging Polygon Labs' cutting-edge graphics and visual effects tools, Fox Corp. can create visually stunning and engaging content that captivates viewers. The emphasis is on enhancing the storytelling capabilities and delivering a more immersive experience, while ensuring the accuracy and integrity of the information presented.
It is crucial to understand that the partnership between Fox Corp. and Polygon Labs is focused on utilizing technology to improve the production quality and viewer experience, without compromising the truthfulness of the content.
1. Stay Informed and Educated
One of the most important tips for combating deepfake distrust is to stay informed and educated about the latest developments in this technology. Keep up with news articles, research papers, and expert opinions to understand the potential risks and challenges associated with deepfakes.
2. Verify the Source
Before believing or sharing any content, make sure to verify the source. Check if the information is coming from a credible and trustworthy source. Look for reputable news organizations, official statements, or experts in the field to ensure the authenticity of the content.
3. Scrutinize Visual and Audio Cues
When encountering visual or audio content, pay close attention to any inconsistencies or anomalies. Look for unnatural movements, mismatched lip-syncing, or distorted voices. Deepfakes often have subtle flaws that can be detected with careful observation.
4. Cross-Reference Multiple Sources
Don't rely on a single source of information. Cross-reference multiple sources to verify the accuracy of the content. If you find conflicting information or inconsistencies, dig deeper to find the truth.
This approach helps in minimizing the impact of deepfake misinformation.
5. Utilize Fact-Checking Tools
Take advantage of fact-checking tools and websites available online. These tools can help you determine the credibility of a piece of information or debunk false claims. Use reputable fact-checking platforms such as Snopes, FactCheck.org, or PolitiFact to verify the accuracy of news stories.
6. Be Skeptical of Sensational Content
Deepfakes often target sensational or controversial topics to grab attention. Be skeptical of content that seems too good to be true or overly dramatic. Deepfake creators often exploit emotions and biases to manipulate public opinion, so it's crucial to approach such content with caution.
7. Engage in Critical Thinking
Develop critical thinking skills to evaluate the information you come across. Ask yourself questions like: Does this story align with what I already know? Are there any logical inconsistencies?
Is the source credible? Engaging in critical thinking can help you identify potential deepfakes and avoid falling for misinformation.
8. Report Suspected Deepfakes
If you come across a suspected deepfake, report it to the relevant authorities or platforms. Social media platforms and video sharing websites often have mechanisms in place to report and flag deepfake content. By reporting these instances, you contribute to the collective effort of combating deepfake distrust.
9. Support Research and Development
Support organizations and initiatives focused on researching and developing technologies to detect and combat deepfakes. By contributing to these efforts, you help in advancing the tools and techniques necessary to counter the spread of deepfake misinformation.
10. Educate Others
Spread awareness about deepfakes and educate others about the risks associated with this technology. Share reliable information, conduct workshops, or participate in discussions to help people understand the implications of deepfakes. By collectively raising awareness, we can build a more informed and resilient society.
In conclusion, the partnership between Fox Corp. and Polygon Labs marks a significant step in combating deepfake distrust in the media industry. By leveraging Polygon Labs' innovative technology and Fox Corp.'s vast resources, the two companies aim to restore public trust in the authenticity of video content. The collaboration will focus on developing advanced deepfake detection tools and creating educational initiatives to raise awareness about the dangers of manipulated media.
Through this partnership, Fox Corp. and Polygon Labs are setting a precedent for other media organizations to follow suit and take proactive measures against deepfake technology. As deepfakes become increasingly sophisticated and prevalent, it is crucial for the industry to unite in the fight against misinformation and restore credibility to visual media. By investing in research and development, along with public education, Fox Corp. and Polygon Labs are positioning themselves as leaders in the battle against deepfake distrust, ultimately safeguarding the integrity of news and entertainment content.
0 notes
violetsystems · 5 years ago
Link
Microsoft has added to the slowly growing pile of technologies aimed at spotting synthetic media (aka deepfakes) with the launch of a tool for analyzing videos and still photos to generate a manipulation score.
The tool, called Video Authenticator, provides what Microsoft calls “a percentage chance, or confidence score” that the media has been artificially manipulated.
“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” Microsoft writes in a blog post announcing the tech. “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
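Microsoft has not published the detector's internals beyond that description, but a toy per-frame "confidence score" can be sketched as follows; the heuristic of comparing texture inside a supplied face box against the rest of the frame, and the 0-100 scaling, are stand-ins invented for illustration, not the Video Authenticator algorithm.

```python
from statistics import pstdev

def manipulation_confidence(gray_frame: list[list[int]],
                            face_box: tuple[int, int, int, int]) -> float:
    """Return a 0-100 'chance of manipulation' for one grayscale frame.

    Stand-in heuristic: blended-in face swaps tend to smooth the grain inside
    the face region relative to the rest of the frame, so a large mismatch in
    local variation is treated as suspicious."""
    top, left, bottom, right = face_box
    inside, outside = [], []
    for y, row in enumerate(gray_frame):
        for x, value in enumerate(row):
            (inside if top <= y < bottom and left <= x < right else outside).append(value)
    if not inside or not outside:
        return 0.0
    mismatch = abs(pstdev(inside) - pstdev(outside))
    return min(100.0, mismatch * 2.0)  # arbitrary scaling, purely for the sketch

# A frame whose "face" region is unnaturally flat scores higher than a clean one.
frame = [[(x * 7 + y * 13) % 64 for x in range(64)] for y in range(64)]
for y in range(16, 48):
    for x in range(16, 48):
        frame[y][x] = 32  # simulate an over-smooth, blended face region
print(manipulation_confidence(frame, (16, 16, 48, 48)))
```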
If a piece of online content looks real but ‘smells’ wrong, chances are it’s a high-tech manipulation trying to pass as real, perhaps with a malicious intent to misinform people.
And while plenty of deepfakes are created with a very different intent — to be funny or entertaining — taken out of context such synthetic media can still take on a life of its own as it spreads, meaning it can also end up tricking unsuspecting viewers.
While AI tech is used to generate realistic deepfakes, identifying visual disinformation using technology is still a hard problem — and a critically thinking mind remains the best tool for spotting high tech BS.
Nonetheless, technologists continue to work on deepfake spotters — including this latest offering from Microsoft.
Although its blog post warns the tech may offer only passing utility in the AI-fuelled disinformation arms race: “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”
This summer a competition kicked off by Facebook to develop a deepfake detector served up results that were better than guessing — but only just in the case of a data-set the researchers hadn’t had prior access to.
Microsoft, meanwhile, says its Video Authenticator tool was created using the public FaceForensics++ dataset and tested on the DeepFake Detection Challenge Dataset, which it notes are “both leading models for training and testing deepfake detection technologies”.
It’s partnering with the San Francisco-based AI Foundation to make the tool available to organizations involved in the democratic process this year — including news outlets and political campaigns.
“Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here,” Microsoft adds.
The tool has been developed by its R&D division, Microsoft Research, in coordination with its Responsible AI team and its internal advisory body, the AI, Ethics and Effects in Engineering and Research Committee, as part of a wider program Microsoft is running aimed at defending democracy from threats posed by disinformation.
“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”
On the latter front, Microsoft has also announced a system that will enable content producers to add digital hashes and certificates to media that remain in their metadata as the content travels online — providing a reference point for authenticity.
The second component of the system is a reader tool, which can be deployed as a browser extension, for checking certificates and matching the hashes to offer the viewer what Microsoft calls “a high degree of accuracy” that a particular piece of content is authentic/hasn’t been changed.
The certification will also provide the viewer with details about who produced the media.
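The post does not spell out the hash and certificate format, so the following sketch shows only the two halves of such a system, a producer-side certify step and a reader-side verify step; the manifest fields and the shared-key HMAC are illustrative simplifications, since a deployment like Project Origin would rely on publisher certificates and public-key signatures rather than a shared secret.

```python
import hashlib
import hmac
import json

PUBLISHER_KEY = b"publisher-signing-key"  # hypothetical; real systems use certificates

def certify(content: bytes, producer: str) -> dict:
    """Producer side: hash the content and attach a signed manifest that
    travels with the file as metadata."""
    manifest = {"producer": producer, "sha256": hashlib.sha256(content).hexdigest()}
    manifest["signature"] = hmac.new(PUBLISHER_KEY,
                                     json.dumps(manifest, sort_keys=True).encode(),
                                     hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Reader side (e.g. a browser extension): recompute the hash and check
    the signature before telling the viewer the item is authentic."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    expected = hmac.new(PUBLISHER_KEY,
                        json.dumps(claimed, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and hashlib.sha256(content).hexdigest() == claimed["sha256"])

video = b"Original video bytes ..."
manifest = certify(video, producer="Example Newsroom")
print(verify(video, manifest))                 # True: untouched
print(verify(video + b"tampered", manifest))   # False: content was altered
```

The reader tool described above would run the verify half in the browser and surface the producer details from the manifest alongside the authenticity result.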
Microsoft is hoping this digital watermarking authenticity system will end up underpinning a Trusted News Initiative announced last year by UK publicly funded broadcaster, the BBC — specifically for a verification component, called Project Origin, which is led by a coalition of the BBC, CBC/Radio-Canada, Microsoft and The New York Times.
It says the digital watermarking tech will be tested by Project Origin with the aim of developing it into a standard that can be adopted broadly.
“The Trusted News Initiative, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies,” Microsoft adds.
While work on technologies to identify deepfakes continues, its blog post also emphasizes the importance of media literacy — flagging a partnership with the University of Washington, Sensity and USA Today aimed at boosting critical thinking ahead of the US election.
This partnership has launched a Spot the Deepfake Quiz for voters in the US to “learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy”, as it puts it.
The interactive quiz will be distributed across web and social media properties owned by USA Today, Microsoft and the University of Washington and through social media advertising, per the blog post.
The tech giant also notes that it’s supporting a public service announcement (PSA) campaign in the US encouraging people to take a “reflective pause” and check to make sure information comes from a reputable news organization before they share or promote it on social media ahead of the upcoming election.
“The PSA campaign will help people better understand the harm misinformation and disinformation have on our democracy and the importance of taking the time to identify, share and consume reliable information. The ads will run across radio stations in the United States in September and October,” it adds.
3 notes · View notes
generateawareness · 2 years ago
Text
ZeroGPT: The Revolutionary GPT Output Detector Safeguarding Against Misinformation
Introduction
In an era of rapidly advancing AI technology, OpenAI's GPT (Generative Pre-trained Transformer) models have garnered significant attention for their ability to generate human-like text. However, as the capabilities of these models expand, so does the concern over the potential misuse of AI-generated content. To address this pressing issue, ZeroGPT emerges as a pioneering solution, offering a powerful GPT output detector that helps combat misinformation and ensures responsible AI use.
ZeroGPT: The Guardian Against AI-generated Misinformation
ZeroGPT, developed by a team of experts at www.zerogpt.com, is an innovative tool specifically designed to identify AI-generated text and distinguish it from human-written content. By leveraging cutting-edge techniques in natural language processing and machine learning, ZeroGPT offers a robust defense against the spread of misinformation, deepfakes, and AI-generated propaganda.
How Does ZeroGPT Work?
ZeroGPT combines a variety of advanced techniques to effectively detect AI-generated outputs. Here's an overview of its key features:
Training on GPT Output: ZeroGPT has been trained on a vast dataset of GPT-generated text, allowing it to identify patterns and characteristics unique to AI-generated content. This training enables the system to quickly recognize when text originates from a GPT model.
Linguistic and Semantic Analysis: ZeroGPT employs sophisticated linguistic and semantic analysis algorithms to evaluate the coherence, structure, and logical flow of the text. By comparing the output against linguistic patterns observed in human-generated content, the system can identify deviations that may indicate AI involvement.
Contextual Understanding: ZeroGPT goes beyond surface-level analysis and takes into account the context in which the text appears. It assesses the consistency and relevance of the information presented, ensuring that the content aligns with factual accuracy and logical reasoning.
Deep Learning Models: ZeroGPT utilizes deep learning models trained on vast amounts of human-generated data to develop a comprehensive understanding of natural language. By contrasting AI-generated outputs against these models, the detector can pinpoint text that exhibits distinct AI characteristics.
Continuous Learning: ZeroGPT leverages an ongoing learning framework that allows it to adapt and evolve with the ever-changing landscape of AI-generated content. The system continuously updates its detection algorithms, making it increasingly effective in detecting newer variations and iterations of AI-generated text.
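ZeroGPT's models are proprietary, so the following is only a crude illustration of the underlying idea of scoring statistical regularities in a passage; the use of type/token ratio and sentence-length "burstiness", the weights, and the thresholds are assumptions made for this sketch, not ZeroGPT's published method.

```python
import math
import re

def ai_likelihood(text: str) -> float:
    """Toy heuristic returning a 0-1 score: flat word choice (low lexical
    variety) and very uniform sentence lengths are treated as AI-like."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(words) < 20 or len(sentences) < 2:
        return 0.0  # too little text to say anything
    variety = len(set(words)) / len(words)            # type/token ratio
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    burstiness = math.sqrt(sum((l - mean) ** 2 for l in lengths) / len(lengths)) / mean
    # Low variety and low burstiness both push the score toward 1.
    return max(0.0, min(1.0, (1 - variety) * 0.6 + (1 - min(burstiness, 1.0)) * 0.4))

print(ai_likelihood("The system is robust. The system is fast. The system is safe. "
                    "The system is robust. The system is fast. The system is safe."))
```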
The Importance of ZeroGPT in Combating Misinformation
The rise of AI-generated content poses a significant challenge when it comes to discerning reliable information from fabricated or biased narratives. ZeroGPT plays a vital role in combating misinformation and protecting individuals, organizations, and society at large. By accurately identifying AI-generated text, ZeroGPT empowers users to be critical consumers of information, reducing the risk of falling victim to manipulative narratives or false claims.
Applications of ZeroGPT
The impact of ZeroGPT extends across various domains, including:
Journalism and Media: Journalists and media outlets can utilize ZeroGPT to fact-check content and identify potential sources of misinformation before disseminating information to the public.
Social Media Platforms: ZeroGPT can be integrated into social media platforms to automatically flag AI-generated content, curbing the spread of misinformation and maintaining a safer online environment.
Academic Research and Publications: Researchers can employ ZeroGPT to validate the authenticity of sources and ensure the integrity of academic discourse.
Business and Finance: ZeroGPT offers businesses an added layer of protection against AI-generated scams, fake reviews, and manipulated financial data.
Conclusion
As AI technology continues to advance, the need for effective detection mechanisms becomes increasingly crucial. ZeroGPT, the innovative GPT output detector developed by www.zerogpt.com, serves as a vital safeguard against AI-generated misinformation and a practical step toward more responsible AI use.
0 notes
jessgartner · 5 years ago
Text
Leaving Facebook Part I
My brain forms ideas and makes decisions very slowly for a long time and then acts on them suddenly, immediately. It's often startling to friends and family who hear an idea out of my mouth one minute and then watch me uproot some giant chunk of my life a moment later. My biggest decisions in life seem the most impulsive and rash, but usually I've been quietly, abstractly percolating these ideas in my head for weeks, months, or even years. And then one day, for whatever reason, things snap hard into focus and I'm ready to go.
The decision to leave Facebook was like this. For years, I've had this abstract, primordial-soup notion that Facebook was becoming a net-negative in my life. And then all at once, I knew it was time to go. I like to imagine my brain as one of those detective bulletin boards with photos and sticky notes and red string. Then some final clue, some stray artifact, ties it all together. Case closed.
So what was on my mental cork-board?
The 2016 election shook my worldview in a big way. For the rest of my life, I will remember sitting in that Mexican restaurant on the San Antonio riverwalk, crying into my margarita and knowing that nothing would ever be the same. Something broke in me that day.
Over the next few years, evidence mounted that Facebook played a significant role in the 2016 election. There was the flood of Russian ads intended to sow division and spread disinformation. In 2018, a report exposed that:
"Cambridge Analytica, a political data firm hired by President Trump’s 2016 election campaign, gained access to private information on more than 50 million Facebook users. The firm offered tools that could identify the personalities of American voters and influence their behavior."
Zuckerberg's Congressional testimony did not build confidence. The whole thing left me ill at ease about the character and competence of this company with so much data and, increasingly, so much political influence. A second Congressional testimony last fall, for which he presumably had the benefit of hindsight and time to prepare (particularly the exchange with Rep. Ocasio-Cortez), deepened my concern about Zuckerberg's ethics, moral courage, and maturity.
Why wait?
In hindsight, it's hard to look at these reports and findings and not wonder why any one of them wasn't enough to trigger my departure from the platform. It was becoming undeniable that the Facebook leadership team was reckless with private data and information; unable or unwilling to draw a hard line on their social and moral obligation to limit foreign influence in our elections; and uninterested in curbing the flow of rampant disinformation.
So what took me so long?
Facebook's power and danger are one and the same: they are deeply integrated into our social fabric. Its near ubiquity makes it hard to leave. Not to mention, I was getting plenty of personal value out of my participation on the platform. I stay in touch with family and old friends, I like the cute baby pictures and engagement announcements, it's easy to know when everyone's birthday is, I discover cool local events, I've met new friends, and it's been a net-positive for my personal social capital. The cost of leaving felt steep.
There's a brilliant episode in season 3 of The Good Place, "The Book of Dougs," that articulates how difficult it is to be a "good" person in a late-stage capitalistic society:
“In 1534, Douglass Wynegarr of Hawkhurst, England, gave his grandmother roses for her birthday. He picked them himself, walked them over to her, she was happy — boom, 135 points,” Michael explains before diving into the twist.
“In 2009, Doug Ewing of Scaggsville, Maryland, also gave his grandmother a dozen roses, but he lost four points. Why? Because he ordered roses using a cell phone that was made in a sweatshop. The flowers were grown with toxic pesticides, picked by exploited migrant workers, delivered from thousands of miles away — which created a massive carbon footprint — and his money went to a billionaire racist CEO who sends his female employees pictures of his genitals..."
The dick pics embody the issue: “The Bad Place isn’t tampering with points. They don’t have to because every day, the world gets a little more complicated and being a good person gets a little harder.”
It's impossible to be totally good. Nearly all of our consumerism is somehow fraught. So why even bother?
The last straw
My inner Mariska Hargitay had pretty much thrown in the towel on this case. Facebook was just going to have to be a necessary evil in my life.
And then I watched this video. Mark Zuckerberg appears on Fox News (of all media outlets) with Dana Perino, responding to Twitter's fact-checking of Trump's tweets about mail-in voting, and says:
We have a different policy, I think, than Twitter on this. I just believe strongly that Facebook shouldn't be the arbiter of truth of everything that people say online.
And this was the last straw for me. Everything about this interview reveals a stunning lack of moral conviction and courage.
My theme word for 2020 is intention: being more intentional and mindful about my time, energy, attention, food, etc. Upon a lot of reflection, I cannot in good conscience keep giving this platform so much of my time and attention.
What about Twitter?
Twitter has absolutely been complicit in the spread of misinformation, and for too long they have failed to address serious issues with harassment and trolling.
The difference, to me, is that they seem to give a damn. They should have done more sooner, but they seem to have turned the corner on taking action. They demonstrated leadership in flagging Trump's misleading tweets about mail-in voting. This follows their broader improvements to policies focused on addressing misinformation. They addressed allegations about their influence in the 2016 election with their Ads Transparency Center, providing tools to show the ad campaigns that any Twitter handle has run, including information about ad spending and demographic targeting. They also released political ad guidelines to prevent foreign election interference.
As a user, I've also noticed tightening controls and intelligence around muting/hiding trolling or harassing replies and accounts. They're also rolling out new features to give users even more control over conversations.
This is a start. They have a long way to go, but there are marked attempts at taking responsibility for the problems their platform has created or exacerbated. I reserve the right to change my mind on them, and I'll be continuing to evaluate whether they are upholding this responsibility with fidelity over time.
What now?
In the next blog in this series, I'll be detailing more about my process of extricating myself from the Facebook machine and what's next.
0 notes
tomwolfgangascott · 6 years ago
Text
Global Conspiracy Theory Attacks
This post originally appeared on Yale on 19 November 2019
Security challenge: As local news media deteriorate, conspiracy theories, crafted to incite fear and tarnish achievements, flourish online.
With local news in decline and more legitimate news behind internet paywalls, readers turn to social media where conspiracy theories are plentiful. Some conspiracy theories emerge from anxiety, such as parents worrying about the side effects of vaccinations for children. Others are deliberate misinformation campaigns crafted to target marginalized populations, weaken social cohesion, increase fear and belittle achievements. “Some individuals struggle to form communities because they harbor politically incorrect thoughts and meet resistance,” explains Tom Ascott, the digital communications manager for the Royal United Services Institute for Defence and Security Studies. “Yet racist, sexist, homophobic and alt-right communities thrive online. Such communities might be small and inconsequential in any one geographic area, but the internet presents a border-free world, allowing niche, politically incorrect views to thrive.” Website managers analyze which content draws the most users and engagement – often the most outrageous, sensational tales along with conspiracy theories. Ascott offers recommendations. Companies should end “likes” and other popularity measures, prohibit falsehoods and revise algorithms that repeatedly spoon-feed content that reinforces views. Societies must invest in open data sources. Individual users must recognize expertise and apply critical reading skills, including consideration of sources with double checks and fact checks. – YaleGlobal
The internet has given conspiracy theories a global platform. While traditional local news media deteriorate, the borders for online communities are broadening, offering weird beliefs that pose political, security and economic implications.
Conspiracy theories are common, and all countries struggle with them. In Poland the 2010 plane crash in Smolensk that killed the then-president became fodder for conspiracy theories. And on the 50th anniversary of the Apollo 11 moon landing, NASA contended with accusations that the moon landing was filmed on a soundstage, the earth is flat and the moon is a hologram. Spain’s fact-checking site Newtral set out to fight the mistruths.
As some people find social interactions more challenging, online platforms provide outlets for expression. A vicious cycle develops: As people spend more time online, they find personal interactions more challenging and experience social anxiety, prompting more online interactions. The preference to communicate through technology on its own might not be a problem, but can deter the ability to form communities in real life.
Some individuals struggle to form communities because they harbor politically incorrect thoughts and meet resistance. Yet racist, sexist, homophobic and alt-right communities thrive online. Such communities might be small and inconsequential in any one geographic area, but the internet presents a border-free world, allowing niche, politically incorrect views to thrive. As a result, politically incorrect views become less niche. The Centre for Research in the Arts, Social Sciences and Humanities estimates 60 percent of Britons believe in a conspiracy theory. In France, it’s 79 percent.
Conspiracy theorists and anti-establishment alt-right groups are not distinct, and an investigation by Aric Toler for Bellingcat, the investigative journalism website, suggests the two camps share vocabulary. Embracing conspiracy theories goes along with “rejecting all political and scientific authority, thus changing your entire worldview.”
Rejection of basic and institutional truths contributes to an individual’s vulnerability to radicalization and a rise in extremist views. Internet platforms, whether social media giants like YouTube or Twitter or small online magazines, thrive on clicks and engagement, and operators are keenly aware that outrageous comments or conspiracy theories drive engagement. Bloomberg recently implied that YouTube is aware of the radicalizing effect of video algorithms, offering related content that reinforces users’ views. It is a Pyrrhic success, though, as content fuels disillusionment, frustration and anger.
Algorithms are designed to keep users on the site as long as possible. So, if users search for content opposing vaccinations, YouTube continues serving more anti-vaccination content. A Wellcome study has shown residents of high-income countries report the lowest confidence in vaccinations. France reports the lowest level of trust, with 33 percent reporting they “do not believe that vaccines are safe.”
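To make that feedback loop concrete, here is a minimal sketch in Python — not any platform's actual recommender, with an invented catalogue and weighting rule — of how an engagement-weighted system keeps serving more of whatever a user has already watched:

```python
# Minimal sketch (not any platform's real system) of an engagement-driven
# recommender: it keeps serving more of whatever topic the user engaged with,
# which is how a reinforcement loop around anti-vaccination content can form.
from collections import Counter
import random

CATALOG = {
    "anti-vax": ["vaccine injury stories", "hidden vaccine dangers"],
    "science":  ["how mRNA vaccines work", "herd immunity explained"],
    "cooking":  ["10-minute pasta", "sourdough basics"],
}

def recommend(watch_history, n=3):
    """Recommend n videos, weighted toward topics the user already watched."""
    topic_counts = Counter(watch_history)
    picks = []
    for _ in range(n):
        # Heavier watch history for a topic -> higher chance it is served again.
        topics = list(CATALOG)
        weights = [1 + topic_counts[t] for t in topics]
        topic = random.choices(topics, weights=weights, k=1)[0]
        picks.append((topic, random.choice(CATALOG[topic])))
    return picks

history = ["anti-vax"]          # one search is enough to tilt the weights
for _ in range(5):
    topic, video = recommend(history, n=1)[0]
    history.append(topic)       # watching reinforces the weight further
    print(topic, "->", video)
```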
Such reinforcement algorithms can challenge core democratic ideals, like freedom of speech, by deliberately undermining the marketplace of ideas. The belief underpinning free speech is that truth surfaces through transparent discourse that identifies and counters maliciously false information. Yet automated algorithms contribute to a situation in which every view is treated as valid and carries equal weight, culminating in the “death of expertise.” When it comes to complex, technical and specialist subjects, everyone’s view is not equally valid.
There is nothing wrong with challenging democratic ideals in an open and earnest debate — this is how democracy evolves. Women won the right to vote through open and free discourse. But posts designed to undermine the marketplace of ideas challenge nations’ ontological security – or the security of a state’s own self-conception. If pushed too far, commonly accepted ideas that are held to be true –the world is round; science, democracy and education are good – could start to collapse and jeopardize national security.
The Cambridge Analytica scandal showed that political messaging could be made more effective by targeting smaller cohorts of people categorized into personality groups. Once platforms and their clients have this information, Jack Clark, head of policy at OpenAI, warns that “Governments can start to create campaigns that target individuals.” Campaigns, relying on machine learning, could optimize ongoing and expanding propaganda seen only by select groups. Some conspiracy theories are self-selecting, with users seeking out the details they want to see. This contrasts with targeted misinformation that typically indicates information warfare.
Propaganda campaigns need not be entirely fictitious and can represent partial reality. It is possible to launch a conspiracy theory using real resources: Videos showing only one angle of events can be purported to show the whole story, or real quotes can be misattributed or taken out of context. The most tenacious conspiracy theories reflect some aspect of reality, often using videos showing a misleading series of events, making these much harder to disprove with other media.
In authoritarian countries, with less reliable repositories of institutional data, fact checking is difficult. In Venezuela, open data sources are being closed, and websites that debunk false news and conspiracy theories are blocked. This problem is not new, stemming from the same dictatorial philosophy that leads regimes to imprison journalists or shut down public media stations.
Video is becoming more unstable as a medium for truth, with deepfakes, videos manipulated by deep-learning algorithms, allowing for “rapid and widespread diffusion” and new evidence for conspiracy theories. Creators churn out products with one person’s face convincingly placed over a second person’s face, spouting a third person’s words. There is an ongoing arms race between the effectiveness of these tools and the forensic methods that detect manipulation.
Content still can be fact-checked by providers or users, though this is a slow process. Once a theory or video goes online, the debunking often doesn’t matter if viewers are determined to stand by their views. Users can post content on Facebook, YouTube, Twitter and other social media without fact-checking, though Facebook has come under fire for allowing political advertisements to make false claims. Twitter avoids the issue by banning political ads that mention specific candidates or bills.
Conspiracy theorists are also adept at repackaging. For example, any tales related to the rollout of 5G are “rehashed from 4G.” Likewise, the Notre Dame fire quickly produced anti-Semitic or anti-Islamic theories. Conspiracy theorists link new theories to old ones. In the minds of conspiracy theorists, despite evidence to the contrary, such connections give greater weight to the new theory as continuation of an established idea they have already accepted as true.
A government can refute conspiracy theories to prevent, as in the case of Notre Dame, anti-Semitic sentiment. As trust in government and politicians declines, the ability to fight rumors falls. Marley Morris describes the cycle for Counterpoint: “low levels of trust in politicians can cause people to resort to conspiracy theories for their answers and in turn conspiracy theories construct alternative narratives that make politicians even less likely to be believed.”
James Allworth, head of innovation at Cloudflare, proposes banning algorithmic recommendations or prioritization of results for user-generated content. Policy ideas like this as well as internal regulations such as Instagram masking “likes” in six countries, and then globally, indicate appetite for industry change.
There are solutions at the individual level, too, including deputizing users to flag and report false or misleading content. The paradox is that users reporting problem content are not typical viewers or believers. It’s human nature to be curious about controversial content and engage. And unfortunately, according to Cunningham’s law, “the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer.”
1 note · View note
majid119153 · 5 years ago
Text
Assignment 2
ID: s119153
Date: May 31, 2020
Number of words: 1496
Course name: Issues in Mass Communication.
Course code: MASS2620
Introduction
Media is a very important means that can be used to clarify the truth as it should be, or to manipulate it in a certain direction, and we should know how to distinguish the truth from misleading information.
Submitted material
 
1- Disinformation:
 
Media disinformation is simply the process of transmitting and disseminating information, mostly lies, to direct people's interests in a specific direction and to influence their choices and decisions.
 
2- Use of cognitive strategies in media:
 
A cognitive strategy in the media, in its general sense, is a set of messages that an organization formulates in a way that is consistent with its goals and then directs to the target audience to achieve the best results.
 
3- Communication used in Cold War, World War I, World War II, The American coalition invasion of Iraq.
 
The media play a critical role in determining the winner or loser in war through the various means they use, whether radio, press, television or modern media, as the media can destabilize society or increase its cohesion depending on the goal toward which they are directed.
 
4- How media outside Oman present race, gender (Male, Female, third gender) and class?
 
Media outside Oman present race, gender, and class according to the nature of the place they operate in; the presentation of these categories varies from country to country.
 
5- Selfie and narcissism - fame online:
 
Celebrities on social media are really ordinary people who have benefited from these platforms for their personal interests and for the profits that help them build their lives. We should follow whoever provides us with positive information and experiences we can benefit from, and stay away from those whose content does not benefit us.
 
6- How to define hate speech in the media?
 
Hate speech appears clearly on social media, which enables anyone, anywhere and at any time, to present whatever ideas and opinions they see as appropriate to a given topic.
 
7- The role of media in health education:
 
The media play an important role in the health field, as they can help people cope with crises (such as the corona crisis) and help them avoid disease through print media and posters.
 
8- Two hypotheses: (Citizens have the right to know; media do not need to inform citizens):
 
The best way to balance these hypotheses is this: if the case can be resolved without any cooperation from citizens, it is better to keep the information hidden; but if the issue concerns citizens closely and cannot be resolved without informing them, then they must be given the required information.
 
9- Perspective about media ethic challenges:
 
The media face many ethical problems, including presenting the truth to people, committing to transparency when providing information, and the difficulty of committing not to intrude on the privacy of people's lives.
 
10- Understanding the media
 
To understand media we need to select specific questions to ask. Mass media are considered very important in both the political and cultural fields: in politics they serve as an influential channel for government agents and interest groups, and they carry the same importance in the cultural field, where media reflect identity images and cultural expressions.
 
11- Media as agent of social changes
 
Media are considered a very important factor when it comes to social change. The information we get from media is often filtered, and people can spend a great deal of their day on media-based interactions and conversations.
 
12- Combatting disinformation and misinformation through media and information literacy (MIL)
MIL is devoted to teaching us how misleading the media can be, as they may contain fake news or "alternative facts" used to deliver subliminal messages to the unconscious mind; MIL depends mainly on critical thinking.
 
13- Media literacy
 
Media literacy depends on using analysis to understand the role media play in popular culture. Students become more aware of how media products produce biases, values and beliefs, and they learn to distinguish their immediate reaction to media messages from a critical response to those messages.
 
14-Representations and responsibilities
 
Television is a very effective medium that can change many people's beliefs, values and behaviors; cultivation and social learning theories aim to clarify this phenomenon.
 
15- Media histories, media power
 
A changing era changes the key factors of the media, and this is what we see in media history: everything starts with the printing press around 1450 and ends with Facebook, launched in 2004.
 
16- Helping yourself to increase media understanding 
 
The growing influence of media can be managed by reading about several topics from several sources, discussing what we read with others in meetings, and training ourselves to distinguish what is real from what is fantasy in media messages.
 
17- Labelling
 
Societal reaction theory is used to clarify the technique social groups use to create and apply definitions of deviant behavior; the media try to label many people, which affects those people's lives.
 
18- Bias in the media
 
It is very important nowadays to detect bias, which could be directed against a religion, a race, a species, and so on.
Critical questions should be asked to face any media bias.
 
19- Liberal bias
 
Bias has many types including: commission, omission, story selection, placement, source selection, spin, labeling, and condemnation.
Three main categories are excluded from being considered bias: opinion columns, accurate statements even if they reflect badly on one side, and stories about specific events that do not need to be balanced.
 
 
20- Corona virus disinformation
 
There were many fears about the increasing disinformation widely viewed on social media, which forced an active stance against it. Social media took the place of official media, resulting in a great deal of mistaken information that could damage the people following and believing it.
 
21- Comics as an information medium
 
Comics and animations have been used to educate and communicate with people, making information easier and more fun; the range of comics has been widened to cover race, politics and history.
 
22- Challenges to the media
 
Mass media face many challenges, such as obstruction of freedom of expression, high levels of inaccuracy, misleading headlines, and more.
 
23- Proliferation of information
 
The interconnection of information should be considered whenever we write any material, as we should evaluate competing claims, relate them to the main story, and so on.
Analysis:
The materials that we studied and discussed in the previous lines are very important. The main goal is to develop critical thinking in order to learn the truth from various media outlets.
It is important to understand the media and reveal the truth, as there is an interconnection between disinformation and cognitive strategy, and it is not necessary to believe everything published in the media. Indeed, it is important to reveal the truth and not waste time watching celebrities.
Disinformation and hate speech share a common connection, because disinformation sometimes aims to sow hatred between people.
The media's presentation of gender, class and race also connects to hate speech: media cover the news of the wealthy class, and this makes the poor resent them.
These materials prove that the media are unfortunately politicized in most cases in almost every country. They are therefore used as a very powerful tool in smear campaigns, as we saw in the labelling material, where the media target the unconscious mind and present misleading lies as solid facts to the audience.
The corona pandemic reminds us of another important role that media play, which is raising health awareness. Media were used in this way several times before, but with much milder viruses such as H1N1 and bird flu. This time, however, the crisis is global and more dangerous, and a great deal of disinformation has been spread.
The materials also mention that social media now play a significant role, which comes with the great evolution of technology. If we blame some governments for using media to mislead audiences, the situation is even worse now, as almost anyone can mislead receivers. Moreover, the number of social media celebrities who may play a critical role in spreading disinformation is increasing.
At the same time, there are obstacles facing the media that cannot be overlooked, especially in critical times.
Conclusion
Media are considered among this era's strongest weapons. Seeing so many strong and powerful countries racing for the leading roles in this industry means that we rarely see the truth without misleading news, which means we need specific steps to follow so that we are not affected by this misinformation.
In my opinion, the media's role in this era should be raising an aware audience: we cannot control what people receive from different media sources, but we can at least train them to be less credulous about what they get from the media, to analyze it first, to determine the media's desired target behind each piece of news, and to decide whether it is the complete fact or has been manipulated.
 
#MASS2620_20 #NOMAJOR
#Assignment 2
0 notes
activistnewsnetwork · 7 years ago
Text
A New Wave of Censorship: Distributed Attacks on Expression and Press Freedom
KEY FINDINGS
In both authoritarian and democratic contexts, new forms of censorship online are carried out through distributed attacks on freedom of expression that are insidiously difficult to detect, and often just as effective as, if not more effective than, the kinds of brute-force techniques by state agents that came before. Their goal is not always to block users, content or themes, but to attack the democratic discourse and weaken trust in institutions like the media, other governments, the opposition, and civil society. These strategies increasingly polarize and diminish the networked public sphere, resulting in a more dangerous and confined space for media and civil society to operate.
•  Distributed attacks on freedom of expression and press freedom challenge conventional notions of censorship. They come from multiple sources, not just the state, leveraging both complicit and unwitting citizens to amplify campaigns that intimidate, confuse, and ultimately silence.
•  Because the internet is a global network, these tactics are spreading not only within authoritarian or transitionally democratic regimes, but also impacting consolidated democracies worldwide.
• Journalists need the expertise of an entirely new array of actors to protect them from online attacks, including data scientists, digital security experts, and large social media platforms.
New Forms of Censorship by Distributed Attacks on Expression and Press Freedom
As soon as the internet became an important tool for sharing independent news and empowering citizens to speak their minds, authoritarian governments and their political allies started to seek ways to censor and block content that might undermine their grip on power. Initially, this entailed censoring the content of individual pages or users, blocking websites, and at times even cutting off internet access to entire communities, cities, and countries. This represented a direct form of censorship in which regimes suppressed objectionable content by removing it from the public sphere. Now, however, authoritarian actors are becoming more sophisticated in the strategies they use to curtail access to information and freedom of the press. They have developed novel, distributed forms of censorship that utilize new technologies, such as automated social media accounts and selective throttling of bandwidth, to constrain news circulation and the public discourse. While these new tactics are often less perceptible by the general public, they have the overall impact of fundamentally undermining an open and independent news media ecosystem that is the bedrock of democracies.
Whereas traditional forms of censorship seek to overtly block content from circulating, these new forms of censorship are less focused on totally removing content from the public sphere. Rather, they seek to disrupt the media ecosystem by alternately overwhelming it with content, often hyper-partisan stories and even downright disinformation, or by chilling communication through slower internet speeds and self-censorship induced by overt surveillance. When viewed together, these tactics represent what can be called a distributed attack on expression and press freedom. Just as a distributed denial of service (DDoS) attack renders a website inaccessible through a flood of incoming traffic from many different sources—requests that overwhelm the server and make it impossible to operate—these new threats combine to overwhelm public institutions, the media, and the democratic principles that undergird civil society.
As in a DDoS assault, in isolation, none of the incoming requests is out-of-the-ordinary or seen as malicious, but together the effect is paralyzing. And because the attacks are distributed —they are launched from different systems operating in conjunction with each other rather than a single source—they can also be more difficult to address. In essence, authoritarian regimes and other political actors interested in manipulating the public sphere have utilized new mechanisms to influence the flow of information in ways that undermine the ability of journalists to report stories, disseminate their content, and also the capacity of citizens to assess what information is reliable and accurate. These conditions curtail the ability of media to fulfill their role to inform the public, stifle discourse in civil society, and have the longer-term effect of eroding public trust in the news media.
Over the past couple of years there has been a heightened, global awareness about the impact that disinformation campaigns can have on political processes, both in authoritarian countries and democracies. Though sowing doubt and cynicism through false narratives is not a new tactic, it is one that has taken on new manifestations in the age of social media. Little attention, however, has been given to the chilling or stifling effects of disinformation campaigns on freedom of expression and the press. Even when audiences remain unconvinced by disinformation or propaganda, the distribution of intentionally misleading information or false accusations can still achieve its goal of undermining a free and open news media ecosystem by crowding out reliable content and re-directing the topics of public discussion. Furthermore, while disinformation campaigns may be directed by state actors or their direct proxies, in many cases they are independently amplified by real individuals acting on their own accord. This is another way in which these new mechanisms are distributed, and therefore harder to counteract. The blurred lines and multiple vectors from which this type of content emanates make addressing these distributed attacks on freedom of expression and the press incredibly complex.
“While these new distributed tactics are often less perceptible by the general public, they have the overall impact of fundamentally undermining an open and independent news media ecosystem that is the bedrock of democracies.”
Complicating matters even further, the media ecosystem has also undergone a dramatic shift over the past twenty years. New tech platforms like Google, Facebook, and Twitter now play a central role in the global circulation of news, and have upended traditional news organizations by both altering distribution mechanisms and reconfiguring the advertising market, which has had an extremely negative impact on privately-owned news outlets traditionally supported by advertising. In terms of these new forms of censorship, the business models of social media corporations have generated perverse incentives that have exacerbated the problem at times, particularly when it comes to the circulation of dis- and misinformation. In many instances, the lack of transparency practiced by these private companies has also obfuscated the true scope of these challenges and how they are affecting societies.
Through case studies in Ukraine, Turkey, the Philippines, Bahrain, and China, this report will elucidate how new forms of distributed online censorship have undermined freedom of expression and press freedom in ways that defy conventional notions of control. While these various methods are often used in conjunction with each other, by teasing them out as much as possible we gain a better sense of how they operate, and therefore, how public and private sector entities and broader civil society can respond. Indeed, journalists and those concerned with the development of news media ecosystems must understand how these techniques operate in order to construct and implement effective responses.
Unfortunately, there is no easy policy solution to the new forms of censorship, because these practices often take advantage of technologies that, when utilized for other ends, are benign or even beneficial. They are in a sense dual- or multi-use, and responses to them are difficult to gauge and implement effectively. For example, an automated social media account that warns people of traffic congestion is quite different from one that disseminates state propaganda. Ultimately, we need to develop responses that address the problem while at the same time not undermining freedom of expression or the development of new technologies that benefit society.
The Dictator’s Digital Dilemma Reexamined
The rise of new forms of online censorship compels us to reexamine how governments negotiate news media ecosystems in an era where the internet is a primary communications tool. The increasing centrality of the global network as a tool to organize economic growth has meant that even authoritarian regimes—those most wary of allowing citizens access to independent news and information—have often allowed access to digital networks. In opening their economies to the world, they also afforded citizens the benefits that come with connecting to global communications networks, such as broader access to information. These benefits in turn come with a loss of control over the ideas that their citizens encounter. As a result, the leaders of such countries are faced with a “dictator’s digital dilemma,” determining whether the risk of opening their networks is worth the economic rewards. 1
Certain scholars argued that digital communication technologies would be liberating,2 opening dictatorships or struggling democracies to different sources of news and information.3 This theory has been contested, particularly by scholars such as Evgeny Morozov, who argued that if regimes failed to confront online networks through various forms of censorship or surveillance, they would eventually resort to traditional methods of physical or legal suppression of opposition. Indeed, we now see that the internet, while providing an essential communication tool to reformers and opposition groups, also provides a useful tool for dictators and their allies to surveil and censor. Authoritarian regimes are now using new technological methods to pursue their opponents online.4
This dilemma is evident in several Middle Eastern countries since 2011’s Arab Spring, when countries from Tunisia to Egypt to Bahrain were faced with choices over continuing their censorship regimes or allowing their citizens access to new sources of information. The internet has developed rapidly in the past six years, as more people have moved online, but connected in different ways, using fewer websites and blogs and more social networks and messaging services through mobile devices, rather than computers. The network is increasingly encrypted through HTTPS for websites, or end-to-end encryption through secure messengers such as WhatsApp and Signal. These are global trends, but particularly significant for the Middle East’s democratization movements and conflicts that have developed since 2011.5 Certain countries in the region have moved in positive directions since the events of that time, such as Tunisia, and opened their countries’ economies, networks and societies to the world while democratizing their political systems.6 Others have re-entrenched the dictatorships that exist, as in Bahrain or Saudi Arabia, while still others that moved towards democratic politics for a period are again reverting to authoritarianism, as in the case of Egypt.7
Egypt is a particularly important case in the sense of internet control and censorship because it also provides an example of a country that completely shut down its networks for a time, one of the largest national internet disconnections in history.8 Precedents were either much smaller, such as Myanmar’s decision to cut connections in 2007, or more limited, as in Iran’s internet slow down during disputed elections in 2009.9 India’s blockages in selected states in 2016 represent a regionalized version of this phenomenon.10
Internet shutdowns are only the bluntest instrument in a large toolkit of authoritarian or semi-authoritarian rulers, and they come at a cost. A report from the Brookings Institution estimated that the costs of 81 internet and social network shutdowns during a period from 2015 to 2016 totaled over $2.4 billion in dictatorships ranging from Saudi Arabia to Ethiopia, as well as a range of democracies including Brazil and India.11 In all cases, there were clearly economic costs to shutting down the internet, or even blocking individual pages, domains or social networks. HTTPS makes it very difficult for regimes to block specific pages a user requests from a domain because the request is encrypted, and as a result many countries have increasingly had to block entire networks, as Turkey did for Twitter in 2015,12 or China and Iran now do for Facebook and Twitter.13 Shutdowns are also easily perceptible; inevitably, a state will come under criticism for shutting down the internet, and not only for the costs, but also because of the political implications of blocking a key avenue of communication, information and expression for the citizens.
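To illustrate why encryption pushes censors toward such blunt instruments, the sketch below — with hypothetical blocklists and hostnames, and simplifying real traffic inspection down to what is visible in the URL — shows that under HTTPS an observer on the wire can typically act only on the hostname, so page-level blocking collapses into domain-level blocking:

```python
# Illustrative sketch of why HTTPS forces blunt, domain-level blocking.
# With plain HTTP a censor on the wire sees the full URL; with HTTPS it
# typically sees only the hostname (via DNS or the TLS SNI field), so it
# must block the whole domain or nothing. Blocklists here are hypothetical.
from urllib.parse import urlparse

PAGE_BLOCKLIST   = {"twitter.com/banned_activist"}   # usable only for HTTP
DOMAIN_BLOCKLIST = {"twitter.com"}                    # the blunt HTTPS option

def censor_sees(url):
    """Return the part of the request a network observer can read."""
    parts = urlparse(url)
    if parts.scheme == "http":
        return parts.netloc + parts.path   # full request is visible
    return parts.netloc                     # HTTPS: hostname only

def is_blocked(url):
    visible = censor_sees(url)
    if visible in PAGE_BLOCKLIST:
        return True
    return urlparse(url).netloc in DOMAIN_BLOCKLIST

print(censor_sees("http://twitter.com/banned_activist"))   # path visible
print(censor_sees("https://twitter.com/banned_activist"))  # path hidden
print(is_blocked("https://twitter.com/banned_activist"))   # True only because the whole domain is listed
```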
Blocking tools for websites and individual pages are a simpler form of censorship, and often ineffective when access is defined by large social networks rather than email, individually hosted web pages and forums. What’s more, government agencies, police, emergency services and command and control systems also rely on the same networks, both social and technical. Because of these costly trade offs, regimes are relying on new technological tools to monitor and disrupt the flow of news and information online, ranging from automated accounts, analyzing fast-expanding stores of data, or manipulating algorithms on which billions now depend for content.
The New Tools for Democratic Disruption
The Tools and Tactics of New Forms of Censorship
Troll farms
Bot networks
Distributed denial of service (DDoS) attacks
Abuse of terms of service to block accounts and remove content on social networks
Localized shutdowns/slowdowns
IP/website/network blocking
Personal data exfiltration and expropriation
Criminalization and tracking of online political speech
Dedicated moderation networks for censorship
Automated systems to filter for political content
Algorithms to report and remove political speech and moderate content
Mal-information: hacking and leaking of media, civil society and other private information for political, commercial, or personal gain
The new methods authoritarian governments and other undemocratic actors are using to disrupt and manipulate democratic dialogue online are designed to allow the internet to continue to operate, while providing greater control over how information is disseminated and reaches the intended audience. More importantly, some of these techniques can be used not only domestically, but also abroad. This is feasible given that they often employ global social media platforms. These tactics include the use of networks of automated accounts, as well as individuals employed by the state or a private company with connections to the state operating multiple profiles to post messages, support others, and influence trending topics on social networks. Other forms of computational propaganda include the manipulation of algorithms to change the topics of conversation, as well as the pages, posts, advertising and other content that users see. Governments and other entities can make use of large stores of data that private entities have gathered on individuals to target them very precisely and filter what they see online.14 This power to change the trends of social media is combined with the influence these social networks and topics can have on traditional media. For instance, if stories go viral on Facebook or Twitter, they are often picked up on television or in newspapers and online outlets. As a result, automated tactics can be amplified and augmented with various methods, computational, viral, and supported by various forms of media.
What are these new tools and tactics precisely, and how do they differ from the ones that came before? How are these new forms of online censorship subtler than the ones that were used earlier, and do we have tools for monitoring and tracking them so that we can explain them to others working in the media, monitors and more broadly within civil society? What are the responses that those in media, government, and civil society can formulate to encourage the proliferation of a free, open, and democratic networked public sphere?15
The following sections of this report contain examples of how these new forms of censorship are operating in different authoritarian and contested democratic environments all over the world. The examples illustrate techniques that are being used by regimes to monitor populations, censor citizens, block networks, and affect social media that often originates or is hosted in democratic countries. In each case, the stifling impact on the news media is clear, from the disruption of content distribution to the de-legitimatization of sources. Evident too in each case is a grave threat to the possibility of democratic dialogue, and the opportunities this creates for resurgent authoritarianism.
State Sponsored Trolling: Russia’s Efforts to Overload Ukraine’s Media Ecosystem
As social media has become an increasingly important arena for the circulation of news and the formation of public opinion, marketers and political advocates of all stripes have taken to these platforms to promote their brands and messages through large groups of social network users who purposefully create content and interact with other users. However, these influence campaigns take on a whole other dimension when they are funded by states with deep pockets and enacted with specific geopolitical intentions. In these instances, the motive is frequently not to reach individuals with information that could potentially be useful, but rather to flood the public sphere with content—often false—and to make it incredibly difficult for citizens to filter what information can be trusted and what cannot. This is what happened in 2014 in Ukraine when the Russian government launched a concerted effort to disrupt the news media ecosystem amid the political upheaval of the EuroMaidan protests. The use of state-sponsored trolls to spread disinformation and to attack Ukrainian journalists online was one of the primary tactics that the Russian government employed.
Russia’s ability to successfully coordinate these efforts in Ukraine was based on several important factors. First, the Russian government had already developed a network of paid posters (trolls) under the government of Dmitry Medvedev. President Medvedev embraced social media and became known as the “blogger in chief” for his use of blogs and other social media during that period.16 The first instances of organized networks of paid posters (trolls) and automated accounts (bots) connected to the government surfaced at this time. However, in contrast to the campaigns that would come to attack democracies in other countries, at the onset these activities were mostly to promote and draw attention to the president’s writings and other content online.17 When Putin resumed the presidency in 2012, he implemented a more aggressive internet policy that included more sophisticated filtering systems and the weaponization of troll networks to attack opponents. Companies such as the Internet Research Agency, a quasi-independent organization with deep connections to the state, emerged at this time. This St. Petersburg-based group brings together a complex of programmers, spammers, and simple computer users who design and participate in varying online campaigns against opposition candidates, parties, and other movements.18 The Russian government also used these capabilities to attack political opponents more directly, for instance by hacking their emails and distributing them through these same networks, spreading malware on their computer systems, or engaging in Distributed Denial of Service (DDoS) attacks against their websites and other networks. These tactics combined with the oppressive legal and political climate seriously affected the way that the media operated in Russia. Those few outlets that remained independent had to develop strong cybersecurity practices to protect sources, keep communications within the newsroom private, and keep their operations online. Further, Russian websites and social networks such as Odnoklassniki, VKontakte, Yandex, and mail.ru were incredibly popular in Ukraine and other post-Soviet countries. This cybernetic connection with Russia as well as the fact that many Ukrainians got their news from Russian language television and radio made the country very susceptible to a disinformation campaign.19 Indeed, Ukraine represented one of the most fertile places in the world for a new kind of propaganda. This intervention, which began in 2014, showed the power of these networks to extend their reach outside of Russia.
The Ukrainian revolution in 2014 represented one of the Russian government’s greatest fears: a large, post-Soviet country moving away from an alliance with Russia and turning towards Europe. Its 2004 Orange Revolution had resulted in the defeat of Russia’s ally Viktor Yanukovych in an election that pitted him against a Western aligned, European-oriented candidate Viktor Yushchenko. Six years later, Yanukovych regained power in another election only to be challenged in 2013 by another round of protests against corruption and a decision by his government to cancel an agreement to move towards integration with the European Union. The protests coalesced in Maidan Square in Kiev and the movement they generated came to be known as the EuroMaidan.
The massive, nation-wide protest paralyzed the country. In February of 2014, as Yanukovych and many of his key ministers fled to Russia and parliament called for special elections to replace him, disguised Russian soldiers took control of government buildings and strategic infrastructure in Ukraine’s Crimea region. In a referendum held the following month under Russian military occupation and denounced as illegitimate by the West, Crimea was incorporated into the Russian Federation. Further Russia-backed agitations for secession have since embroiled Ukraine’s Donbas region.
These events became foci of disinformation campaigns by the Russian government, from denying the assistance that Russian forces gave to paramilitaries in Ukrainian regions, to the downing of a Malaysia Airlines flight by those groups during the war.20 Research by the Oxford Internet Institute’s Computational Propaganda project shows that networks of “trolls” or paid social media accounts have been particularly prevalent in Ukraine throughout these events, and at relatively cheap cost. Accounts manipulated by paid users to post about specific topics or “like” other posts, accounts or pages cost as little as US $0.40 to $0.90 on social networks such as Facebook, Twitter, and VK.21 These networks were often based in Russia or other former Soviet republics. While in some cases these trolls, often amplified by bots, spread messages based on common themes and central organizational principles, in other cases they did so in more decentralized and multifarious ways.
Once the protests began, these networks developed quickly. Research showed that various bot networks were created during both the 2013 EuroMaidan protests and the beginning of the conflict in Eastern Ukraine in 2014.22 These bots and trolls were used to amplify content that supported the Russian narrative that the EuroMaidan movement was a Western-backed coup, attack users who objected to this narrative, confuse users about facts on the ground, or encourage various hashtags or topics to trend on social networks. Bots or trolls could even be used to monitor real users for violations of the terms of service and report them with the goal of getting them banned or suspended. In one case, a journalist had their Facebook account disconnected for posting about the downing of Malaysia Airlines flight MH17 during the war for Eastern Ukraine.23 Bots and trolls sent thousands of requests for takedowns to Facebook and other moderation teams, which banned or blocked user accounts tied to media or others in civil society. On the whole, these tactics represented a distributed form of attack on freedom of expression and the press because they sought to hinder the ability of journalists to communicate the news and prevented Ukrainian citizens from being able to easily access high-quality information.
“On the whole, these tactics represented a distributed form of attack on freedom of expression and the press because they sought to hinder the ability of journalists to communicate the news and prevented Ukrainian citizens from being able to easily access high-quality information.”
Ukrainians have developed some responses to these attacks on their information ecosystem. For example, at the Kyiv-Mohyla School of Journalism, a group of individuals formed the fact-checking initiative StopFake.org. This organization counters false news narratives pushed by Russia by identifying, analyzing, and discrediting over 1,000 stories on social media since its formation in 2014.24 They also broadcast reports on the propaganda and false narratives they find, which they distribute on YouTube and Facebook. This combination of network analysis, identification of content, dissection of propaganda, and use of video and social media provides an effective example of how media can evolve and respond to these new challenges.25
Ukrainian journalists have played a major role in exposing bot networks and the use of Russian computational propaganda. The news website Texty.org.ua did a comprehensive analysis of the groups that formed to counter the current Ukrainian government and published a website that included graphical examples of how the online network functioned.26 This combination of data scientists, graphic designers, and journalists demonstrates a powerful example of how new forms of journalism—by revealing how the disinformation networks are formed and administered—can counter new forms of propaganda and censorship. This model is especially powerful when applied with traditional forms of narrative journalism.
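As a rough illustration of that general approach — not Texty's or Rappler's actual methodology — the sketch below links accounts that post identical text within a short window and extracts the resulting clusters; the sample posts, the ten-minute window, and the networkx dependency are all assumptions made for the example:

```python
# Minimal sketch of coordination detection: connect accounts that publish
# identical text close together in time, then pull out the clusters.
# Sample posts are invented; real pipelines add many more signals.
from collections import defaultdict
import networkx as nx

posts = [  # (account, timestamp_in_minutes, text)
    ("acct_a", 0, "The protests are a Western-backed coup"),
    ("acct_b", 1, "The protests are a Western-backed coup"),
    ("acct_c", 2, "The protests are a Western-backed coup"),
    ("acct_d", 500, "Photos from my holiday"),
]

WINDOW = 10  # minutes: identical text this close together looks coordinated

by_text = defaultdict(list)
for account, ts, text in posts:
    by_text[text].append((account, ts))

g = nx.Graph()
for text, items in by_text.items():
    for a1, t1 in items:
        for a2, t2 in items:
            if a1 < a2 and abs(t1 - t2) <= WINDOW:
                g.add_edge(a1, a2, text=text)

# Clusters of three or more tightly synchronized accounts deserve a closer look.
clusters = [c for c in nx.connected_components(g) if len(c) >= 3]
print("Possible coordinated clusters:", clusters)
```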
The Ukrainian government attempted to form a user base of social media agents to counter false narratives, and registered 40,000 individuals to work to oppose false narratives. However, the government has not been able to confirm that they have used this base in any consistent way.27 Ultimately, the Poroshenko administration chose a more blunt strategy, banning Russian television and radio from Ukrainian networks and blocking Russian social media sites such as VKontakte and Odnoklassniki, as well as the Yandex search engine.28 It was a questionable decision, as this kind of blanket censorship seriously affects freedom of expression in democratic society, and with dubious effectiveness, given the numerous ways to pierce the ban, such as VPNs or encrypted networks.
Ukraine is currently divided by civil war, and unfortunately this division is reflected in its social networks, which have been exploited by Russia and its allies. The country provides examples of how networks, both human and robotic, can shape narratives about events and people, but also how new kinds of media organizations like StopFake and Texty can begin to describe and counter these narratives by identifying the networks that propagate them and the content they are sending. Simultaneously, they are working to push back on these narratives by explaining why the stories are wrong and also how to use social media to discredit them. As a result, Ukraine is both a sign of how new forms of distributed censorship can operate in contested contexts, and how civil society and media can begin to form effective responses.
Domestic Trolling: Shaping the Public Dialogue in Turkey
Developments in Turkey over the past five years provide another example of how authoritarian state agencies use large networks of pro-government users to undermine the free exchange of ideas. While “troll armies” are becoming increasingly prevalent throughout the world, Turkey exemplifies how these tools are being turned on their own populations to create a new form of distributed censorship that starves citizens of reliable news and information, and makes the work of independent journalists incredibly challenging.
Turkey’s democratic institutions have been severely challenged in recent years, as President Recep Tayyip Erdoğan has changed the constitution to empower the executive and significantly cracked down on press freedom. His government has jailed more journalists than any other country in the world and has shuttered or threatened more than 150 media outlets in the wake of an attempted military coup against his regime in 2016.29 Some of these organizations and journalists have been designated as security threats, but many have been attacked for challenging the official government narrative or not giving sufficient support to the regime and criticizing the military for its role in the plot.30 In addition to these traditional censorship measures such as shutting down news outlets or jailing journalists, Erdoğan has moved aggressively to challenge the opposition in the online space. The 2013 protests in Istanbul against the destruction of the public space Gezi Park caught the attention of many citizens throughout the country, online and through social networks, and quickly became a touchstone for the opposition movement. Since then, Erdoğan’s government has worked in various ways to change the narratives and shut down opposition voices. Beyond blocking pages, there are four major components to these online attacks on the media and civil society groups that oppose the government’s aims:
Attack opposition social media accounts through networks of trolls and bots. Often these coordinated attacks are complemented by the regime’s supporters working in direct coordination with government agencies.
Lodge complaints with Twitter and other social networks against accounts that are challenging the regime in hopes that the platform will pull down the content.
Hack journalists' accounts and expose their private conversations to the public.
Prosecute journalists for news and opinion pieces they post online.
The first component comes through a network of supportive social media accounts. The central node in the network of these campaigns was often a group of over 6,000 supporters attached to the “New Turkey Digital Office” that promoted ideas supporting the regime and attacked others that did not agree with the government’s perspective.31 These networks were also capable of activating thousands of followers in online social networks to support these campaigns. Government-affiliated and supporting groups increased their use of these tactics in March 2014 when they focused on defending Erdoğan and his allies from accusations of corruption that surfaced on Twitter from an account known as @oyyokhirsiza. This account leaked confidential information that showed questionable business dealings of his Minister of Communication, Binali Yıldırım, and his son. The Shorenstein Center at Harvard defines this kind of campaign as a form of “malinformation” in that it involves information that is often true, hacked and leaked to discredit the user as well as their ideas and objectives. Erdoğan pledged to wipe out Twitter and even temporarily blocked it. However, civil society and opposition groups responded by using VPNs and other workarounds to virtually tunnel out of the country, and spread information about the shutdown through the hashtag #TurkeyBlockedTwitter that helped end the blockage relatively quickly.32 The use of state-led troll networks brings to bear state-sponsored campaigns combined with members of the public that are influenced by them to post their own social media. This constitutes a distributed attack on democratic discourse, through the spread of state propaganda and the diminution of opposing themes, accounts, and content.
A second tactic used by the government was to tap these same networks to attack journalists by submitting complaints against their content on Facebook and Twitter. The objective was that repeated complaints from multiple different users would encourage social media platforms to remove the content. This technique ramped up in 2014. In the first half of the year, there were roughly 200 such complaints lodged on Twitter, while this doubled to more than 400 in the second half of the same year. These trends only increased over time, as Turkey became one of the largest sources of requests for account deletion or content removal on the network through 2017.33 After the 2016 coup attempt, attacks on journalists, their organizations, and others in civil society extended to the online sphere. User accounts associated with the regime, or simply supporters spurred on by a climate of hatred toward any opposition, launched attacks on anyone critical of the government. Female journalists became common targets. A study of tweets attacking journalists in 2016 by the International Press Institute (IPI) found that almost 10 percent of them were sexually related comments directed overwhelmingly at women. Other methods catalogued included humiliating tweets (9 percent), intimidating content (10 percent) and “threats of violence, other abusive behaviors, legal threats and technical interferences” (72 percent).34 These networks have encouraged a climate of fear, self-censorship, and suppressed social and political expression online in various forums.
The government and its allies have also moved to attack journalists via a third vector, through hacking their private accounts and spreading their own confidential conversations with sources, coworkers, and other contacts. IPI found 20 cases of journalists having their accounts hacked in this period, usually announced by the culprits taking control of their Twitter account and posting messages supporting the regime. For instance, when the journalist Can Ataklı’s account was hacked the attackers scrawled “I apologise to our honourable president to whom I was unfair and bashing all this time with my libels and insults” with a picture of the President attached; his direct messages were meanwhile shared in online forums.35 It should be noted that these types of attacks not only impact the journalists who are the targets, but they also serve to sow doubt and confusion among the broader population about who to trust. They create insecurity as it can become more difficult to know what is real and what is false online.
Finally, these tactics are combined with a fourth, more traditional tactic of simply prosecuting and jailing journalists. This is now bolstered by a new constitution that criminalizes many kinds of speech against the state or the security services. New forms of censorship, such as the use of troll armies and hackers to find incriminating materials, are more effective in combination with stringent laws against threatening state security or other equally nebulous concepts. Turkey provides a primary example of how distributed attacks on freedom of expression and the press can work in a country struggling to maintain a semblance of a democratic system. These armies of user accounts can be used in various ways: to attack opposition, identify accounts for removal under terms of service, or simply to promote the policies of the state. It is a powerful new tool in the arsenal of censorship that states can now employ, and combined with older methods, it can be a force multiplier in terms of policies and ideas, encouraging a public sphere defined by the narrative of the regime, and disparaging and inciting fear in any opposition. The combination of legal, physical, and online threats has taken a toll and promoted a kind of hybrid censorship that has been effective in silencing the media, confusing users, and blunting the effects of critical press from any source.
Automated Bot Networks: Filipino Bots and the Social News Network Response
Automation brings another level of coordination and computing power to bear through distributed forms of censorship. Networks of automated accounts or bots, known as botnets, can be used to promote content, create trending topics, or attack others, generally for a relatively low investment, even compared to trolls, as individual users can operate thousands of individual accounts or even enable them to operate autonomously.36 Though the Philippines is now more commonly invoked in the study of how media freedoms and democracy can be unwound, the country’s experience also illustrates how a strong response from media organizations can push back against new forms of censorship and control.
Since the election of Rodrigo Duterte, the government has begun a campaign to eliminate drug usage in the country through harsh tactics that include mass incarceration and even vigilantism against drug dealers and users. This has led to rising attacks on people associated with the drug trade, but has also increased attacks on opposition parties, civil society, and the media. As in other contexts, these attacks have been bolstered by an increasing climate of intolerance online.37
Bots are especially good at inflating the importance of topics, repeating hashtags or other trends and content online, a tactic that is especially critical during elections, debates, and other moments of acute political importance. Four days after Duterte declared his candidacy, observers found examples of suspicious increases in the tags associated with his campaign, rising to over 10 times the combined mentions of his rivals, likely caused by bots posting hundreds of times per minute.38 The Philippines provides an example of how these automated systems work, but also how they can be identified and confronted via new independent media networks.
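A minimal sketch of how such amplification can be surfaced — the counts, threshold, and baseline window below are invented for illustration — is to flag any hour in which a hashtag's mentions jump far above its own recent average:

```python
# Hedged sketch of spike detection for bot-driven hashtag amplification:
# flag hours where mentions far exceed the hashtag's own running baseline.
# All numbers below are hypothetical.
from statistics import mean

def flag_spikes(hourly_counts, factor=10, min_baseline_hours=6):
    """Return (hour, count, baseline) where count exceeds factor * running mean."""
    spikes = []
    for i in range(min_baseline_hours, len(hourly_counts)):
        baseline = mean(hourly_counts[:i]) or 1
        if hourly_counts[i] > factor * baseline:
            spikes.append((i, hourly_counts[i], round(baseline, 1)))
    return spikes

campaign_tag = [40, 35, 50, 42, 38, 45, 41, 39, 4800, 5200]  # invented counts
print(flag_spikes(campaign_tag))

# A complementary per-account check: humans rarely sustain hundreds of posts
# per minute, so a simple posting-rate threshold already separates many bots.
```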
In the Philippines, the “social news network” known as Rappler has created an organization of journalists, data scientists, and ordinary users to track political campaigns that use trolls, fake “sock puppet” accounts, botnets, and other manipulation to stir up and direct fervent supporter groups.39 As in many other developing countries with less infrastructure, expensive mobile data, and less access to full-size computers or tablets, Facebook has become a particularly significant network for millions of people, often connecting through zero-rated services such as Facebook Free Basics, which provides low-income Filipinos with subsidized access to a bare-bones version of Facebook. They have uncovered a botnet supporting President Duterte and his party, often connected to influencers such as the former sex blogger and singer Mocha Uson.40 Now an Assistant Secretary in the Government’s Presidential Communications Operations Office (PCOO), she has repeatedly attacked opponents of the regime on social media and promoted accounts that are supportive of the government. Through a combination of data science and old-fashioned reporting, Rappler has demonstrated how Uson’s popularity can direct her large follower base, and even influence the algorithm that ranks the content networks that her followers view.41
The organization has also profiled the use of bots by supporters of Duterte’s party and campaign, and how this led to a surge in support during his election in 2016. Its reporters interviewed members of the campaign apparatus as well as the organizations and companies that supported them, augmenting their reporting on the content of the messages with network analysis and interviews. These methods paint a compelling picture of the state of the online space in the Philippines and have angered supporters like Uson to the point that she has requested that Rappler be reclassified as a social networking group rather than a news organization.42 Notably, this reclassification would make Rappler more accountable to Uson’s office. The outlet has also been attacked through legal means: the government has challenged its tax status by questioning its foreign funding, and others have sued it for libel under a 2012 cybercrime law.43 Besides the popularity of its content, the fact that the government is attempting to define Rappler as a social media company while pursuing it for tax evasion suggests that its methods of finding and identifying government accounts while promoting opposing views have achieved a qualified but notable level of success.
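As an illustration of the network-analysis side of this kind of reporting (not Rappler’s actual methodology, which the organization describes in its own investigations), the sketch below builds a simple “co-sharing” graph: accounts that repeatedly amplify the same links are connected, and densely connected clusters become candidates for manual review. The input data, account names, and thresholds are hypothetical.

```python
import itertools
import networkx as nx

def coordination_graph(shares, min_shared=2):
    """Link accounts that appear together on at least `min_shared` shared URLs."""
    pair_counts = {}
    for accounts in shares.values():
        for a, b in itertools.combinations(sorted(set(accounts)), 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1

    graph = nx.Graph()
    for (a, b), count in pair_counts.items():
        if count >= min_shared:
            graph.add_edge(a, b, weight=count)
    return graph

if __name__ == "__main__":
    # Hypothetical share data: URL -> accounts that posted it.
    shares = {
        "example.com/article-1": ["acct_a", "acct_b", "acct_c"],
        "example.com/article-2": ["acct_a", "acct_b", "acct_c", "acct_d"],
        "example.com/article-3": ["acct_x", "acct_y"],
    }
    graph = coordination_graph(shares)
    # Tight clusters are a lead for reporters to examine, not proof of automation.
    for cluster in nx.connected_components(graph):
        if len(cluster) >= 3:
            print("possible coordinated cluster:", sorted(cluster))
```

The design choice matters: co-sharing alone produces false positives (fans of the same outlet share the same links), which is why human reporting remains the final step.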
Throttling Discourse: The Stifled Arab Spring in Bahrain
Governments have often engaged in forms of censorship that involve blocking websites, networks, or even the entire internet to control public discourse. However, website blocking can often be circumvented by technology such as VPNs that tunnel into other networks and hide the user’s origin. Blocking also tends to draw public attention and outrage, as was the case in Turkey when the government blocked Twitter. Throttling the internet, however—slowing the speed of users’ access—provides another form of censorship that is more difficult for users to detect, to the point that they may believe their device or network has some technical issue unrelated to any form of government involvement. It is a distributed attack that affects the many users who rely on cellular networks to connect with allies, friends, and family, to coordinate, and to make sense of social and political systems.
In the wake of the Arab Spring, several regimes in the region developed new systems for the control of their domestic internet, and Bahrain provides an important example. A small Gulf kingdom under the control of a single family, the regime often censors speech that is harmful to its image, whether political, social, or related to security issues. The media in the country is tightly controlled; only outlets that are friendly to the government can operate, and multiple journalists have been jailed for covering taboo topics. Television stations have also been closed for similar reasons.44 Because of this restricted media environment, the internet provides a key conduit for citizens to access information about the world. Besides documented cases of blocking, the country engages in widespread surveillance of activists and other opponents of the regime, including with software that hacks their phones, computers, and other devices.45 These advanced surveillance systems are marketed by corporations as tools for law enforcement or intelligence investigations, but in the hands of authoritarian regimes they can also be used to stifle opposition, track dissidents, incite fear in citizens, and inhibit the ability of activists and journalists to cultivate sources or work within teams. Cybersecurity thus becomes a critical element of operational security for any media organization working in these contexts.
In 2016, protests around the town of Duraz over the revoking of a popular cleric’s citizenship drew national attention. The regime responded by limiting the speed of different mobile services and severing 3G and 4G connectivity, essentially reducing access to speeds well below broadband that make it much more difficult to encrypt traffic or use modern applications.46 This mirrors activities that occurred in Iran, where users were not cut off from access but their connections were significantly degraded.47 That degradation extends to their access to independent information about the state of the government, the opposition, and basic facts about their political system and society, and it makes it much more difficult for them to trust in or even find free media.
This throttling is a new kind of technique because it does not completely shut off access, but slows it and makes it difficult for groups or individual users to coordinate and share information in real time. The technique stifles a capability that is critical when organizing a protest or promoting opposition media, and it has the benefit, for the censor, of masking the nature of the problem. Users may think there is some other technical difficulty with their device, or with those they are communicating with, rather than a deliberate disruption. This fits a pattern of attacks that are no longer in the open but obfuscated, and, as in other authoritarian or semi-democratic states, bolstered by an increasing number of supporters entering the online space to defend the regime.48
“Throttling the internet—slowing the speed of users’ access—provides another form of censorship that is more difficult for users to detect.”
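For a rough sense of how throttling, as opposed to blocking, can be surfaced, one approach is to repeatedly fetch a file of known size and compare measured throughput to a baseline for the same link. The sketch below is a simplified, hypothetical illustration; dedicated measurement projects such as OONI use far more rigorous methods, and the test URL, baseline, and thresholds are placeholders.

```python
import time
import urllib.request

TEST_URL = "https://example.org/testfile.bin"  # placeholder test endpoint
BASELINE_KBPS = 2000                           # assumed "normal" throughput for this link

def measure_kbps(url, timeout=30):
    """Download the test file once and return throughput in kilobytes per second."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        payload = response.read()
    elapsed = time.monotonic() - start
    return (len(payload) / 1024) / elapsed

def check_for_throttling(samples=5, threshold=0.25):
    """Compare the median of several measurements against the baseline."""
    rates = sorted(measure_kbps(TEST_URL) for _ in range(samples))
    median = rates[len(rates) // 2]
    if median < BASELINE_KBPS * threshold:
        print(f"Median throughput {median:.0f} KB/s is far below baseline; "
              "consistent with throttling (or ordinary congestion).")
    else:
        print(f"Median throughput {median:.0f} KB/s looks normal.")

if __name__ == "__main__":
    check_for_throttling()
```

A single low reading proves nothing; the value of repeated, baseline-aware measurement is precisely that it distinguishes a sustained, selective slowdown from everyday network noise.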
Strategic Distraction and Social Surveillance: China’s New Tactics to Constrain News and Information
It is well known that the Chinese government has developed massive technical means to directly censor and filter information online. Indeed, the Chinese government has provided a model for the rest of the world in terms of network blocking through what has become known as the Great Firewall of China, a system that allows the government to block access to news websites. This censorship is augmented by an omnipresent social media monitoring apparatus called the Golden Shield.49 Both systems are increasingly empowered by an army of human monitors and by intelligent filtering algorithms that have become extremely effective at managing content on Chinese networks. Given that media outlets operating in the country are already required to obtain a license from the government and are heavily restricted in the types of stories they can cover, this censorship makes China one of the most restricted environments for press freedom in the world. What is less well known about China’s efforts to manage the information ecosystem is how it now employs distributed forms of censorship both to strategically distract the public from contentious issues and to deploy new forms of “social credit” that push individual internet users to monitor others and to censor themselves. These new tactics represent a fundamental threat to press freedom and access to information, and because they are more distributed and hidden, they are even more difficult to counteract.
China’s technological prowess, as well as the size of its market, gives it significant leverage in setting the ground terms for tech companies that operate in the country. All domestic internet or social networking companies, such as Baidu, Weibo, and WeChat, have systems in place to register user IP addresses as well as real names and other identifying information. They participate in the Golden Shield system to proactively take down content related to sensitive subjects such as the 1989 Tiananmen Square massacre or democratic political reform in general. A study of various social networks and internet forums in 2013 found pervasive and rapid censorship throughout, with censors often deleting illicit content within a day.50 Foreign companies that wish to operate in China must agree to some form of these rules or risk being blocked by the Great Firewall if their servers are located outside the Chinese national internet. Facebook has been actively discussing a similar system of active censorship with the government as it has attempted to negotiate access to the Chinese market over the past several years.51
Interestingly, researchers have found a relatively low level of bot activity in China.52 Automation, however, increasingly plays a role in the form of systems dedicated to understanding what users are saying and taking down content automatically. Such machine learning techniques will only sharpen and augment the regime’s ability to track users and take down content in real time.53 Intelligent systems that can identify patterns of communication, track themes, and respond to them in real time are likely to replace the army of bureaucrats, online censors, and collaborative party members that currently makes up China’s online censorship system. Unfortunately, such an automated system has the potential to be much more powerful and far-reaching than the one that exists today. In a way, this increasingly hybrid censorship system mirrors those developed by other authoritarian regimes working with bots and trolls, in that the censorship is evolving to include both human and automated elements. “Bot” accounts do not perform the censoring as they do in Russia or other contexts, and far fewer are found operating in a political context in China,54 but automated systems perform a gatekeeping role by blocking certain users, content, and themes across networks. The system collectively acts as an automated gatekeeper through algorithmic manipulation and other tactics, which has the effect of modifying public discourse based on the regime’s priorities.
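To illustrate the layered structure described above (automated removal for the clearest cases, with humans handling the ambiguous remainder), here is a deliberately toy triage sketch. It does not describe any real platform’s system; the patterns and categories are placeholders.

```python
import re

# Placeholder rules: patterns that trigger automatic removal versus human review.
AUTO_BLOCK_PATTERNS = [r"\bbanned-topic-a\b", r"\bbanned-topic-b\b"]
HUMAN_REVIEW_PATTERNS = [r"\bprotest\b", r"\bpetition\b"]

def triage(post_text):
    """Return 'blocked', 'review', or 'allowed' for a single post."""
    for pattern in AUTO_BLOCK_PATTERNS:
        if re.search(pattern, post_text, re.IGNORECASE):
            return "blocked"   # removed automatically, no human involved
    for pattern in HUMAN_REVIEW_PATTERNS:
        if re.search(pattern, post_text, re.IGNORECASE):
            return "review"    # queued for a human moderator's decision
    return "allowed"

if __name__ == "__main__":
    print(triage("Join the protest this weekend"))  # -> review
    print(triage("Nice weather today"))             # -> allowed
```

The point of the hybrid design is efficiency for the censor: machines handle volume, while scarce human attention is reserved for the cases machines cannot classify.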
The Chinese authorities have also become adept at generating their own content through organized teams that control their own accounts and shape discussions. Researchers from Harvard have estimated that users associated with the so-called “50 cent” groups that spread pro-government narratives generate 448 million comments a year on average.55 They conclude the goal is often to dilute discussions of political topics and create “strategic distraction.”
“The system collectively acts as an automated gatekeeper through algorithmic manipulation and other tactics, which has the effect of modifying public discourse based on the regime’s priorities.”
The government is implementing a new Social Credit System (SCS) with the help of Chinese internet companies that may push users to self-censor and avoid sensitive subjects for fear of negative ratings, which translate into a loss of privileges and access to services throughout society, even basic rights. The SCS rates users and assigns scores based on factors such as their social media usage, network of friends, credit history, and shopping habits. These systems are being tested by Chinese affiliates of the online conglomerates Alibaba and Tencent, which encourage users to opt in to gain credit bonuses and special services, but they are slated to become mandatory for all Chinese citizens in 2020.56 This system has the potential to amplify self-censorship in powerful ways, as users restrict their writings, videos, and other content to avoid negative commentary and a lower score. A user is rated on characteristics including the content they post, the number of times they have been censored or reprimanded online, and the circle of connections or “friends” they maintain. Depending on the nature of their network, even association with people who have lower scores could drag down their own.
Similarly, the kind of media that users are able to access independently affects their views of the regime, its propaganda, and its supporters. A recent study by two Stanford University scholars found that when given the ability to access foreign news, very few younger Chinese students took the opportunity, suggesting that various forms of social and technical censorship have become deeply internalized.57 However, the research also noted that when given encouragement as well as access, the students not only consumed more foreign sources of news, but also spread it to their peers, questioned government narratives, and even sought out more external sources of information after the study had ended. Conversely, the development of the social credit system may only further internalize these beliefs and practices, and the avoidance of controversial themes, users, sources, and media.
Chinese methods have been replicated in several regimes in China’s orbit, including Vietnam, Thailand, Malaysia, and Indonesia, which have all erected various forms of blocking: technical, political, and social.58 They are also becoming a strong model for countries beyond the region such as Iran, which has likewise adopted a domestically bounded network and attempted to build national social networks and services while blocking global ones such as Facebook and Twitter.59
The Chinese model of censorship is nurturing new forms of control that are much more difficult to confront directly. To date, the development of circumvention tools that allow internet users in China to evade the Great Firewall and gain access to content on the global internet has been an essential form of combatting censorship. Technical tools such as VPNs that evade firewalls are in many ways simpler to deploy than long-term education about the importance of a free media, open access, freedom of expression, and other democratic values; both approaches become important in countries where this controlled model is applied.
Conclusion: Toward a Collective Response
The cases examined here show indisputable trends of hidden, distributed forms of censorship around the world. They are subtler than internet shutdowns and domain blockages, although they are often deployed in tandem. These techniques range from the documented Russian campaign to spread disinformation and interfere in political systems globally, to selective throttling in Bahrain, to armies of trolls and bots deployed in contexts such as the Philippines, Turkey and Ukraine. In China and its imitators, nationally delimited networks combined with powerful automation, big data, and monitoring systems are creating ways to replicate territorial censorship concepts globally. These examples defy expectations that the internet would become a medium for breaking down levers of government control. Increasingly sophisticated systems will amplify these techniques, as the Chinese model shows how intelligent systems can predict and respond to individuals in increasingly rapid, effective fashion, better informed by large automated systems.60 It is understandable why this highly regimented and regulated authoritarian society is investing so much in technology that will enable it to closely manage the growing Chinese internet.61
However, this study also highlights the growing responses to these threats. In Ukraine, several groups are working to combat Russian and domestic threats to the information space, such as StopFake, which brings together students, faculty, and alumni of the Kyiv-Mohyla School of Journalism to identify and counter false stories online. Media groups such as Texty.ua in Ukraine and Rappler in the Philippines show how journalists can partner with data scientists, graphic designers, and activists to identify fake patterns in social networks, as well as individual accounts. These networks are trackable, but tracking them will require new partnerships across social science and technical fields. Such partnerships will become increasingly valuable in confronting disinformation promoted by authoritarian regimes and their supporters, particularly in identifying sources and networks quickly enough to respond to these trends in real time. Technology companies are developing various programs to partner with news organizations, notably Facebook’s Journalism Project and Google’s News Lab, and these too should respond to the censoring effects of these techniques.
Journalists have always needed lawyers and watchdog groups to shield them from abuse and harassment and to defend their rights. What the examples in this report illustrate is that journalists now need the expertise of an entirely new array of actors to protect them, among them data scientists, digital security experts, and digital platforms. Journalists, however, may also have to play a more proactive role in conjunction with these actors. To truly neutralize these new distributed forms of censorship, civil society, the media, governments committed to democratic principles, and the private sector will need to respond collectively, in a similarly distributed fashion. The internet, we now recognize, can be a tool for either the oppressor or the oppressed, but with this recognition comes an understanding that intelligent and coordinated responses can shape the existing socio-political reality, online and off.
About the Author
Daniel Arnaudo is a Senior Program Manager in the governance department of the National Democratic Institute (NDI) in Washington, DC. In this capacity, he covers the intersection of democracy and technology, with a special responsibility for developing projects that counter and track disinformation worldwide. Concurrently, he is a Research Fellow with the Igarapé Institute of Rio de Janeiro and a Cybersecurity Fellow at the University of Washington’s Jackson School of International Studies, where he has worked on projects in Brazil, Myanmar, and the United States. Recently, he has also collaborated with the Oxford Internet Institute’s research group on Computational Propaganda. His research focuses on online political campaigns, digital rights, cybersecurity, and information and media literacy. He earned master’s degrees in Information Management and International Studies at the University of Washington, completing a thesis on Brazil and its Internet Bill of Rights, the Marco Civil da Internet. In the past, he has worked for the Arms Control Association, the Carnegie Endowment for International Peace, and the Carter Center. He has also consulted for a wide range of organizations, including Microsoft, the Center on International Cooperation at New York University, and NASA.
Footnotes
Kerr, Jaclyn. “The Digital Dictator’s Dilemma: Internet Regulation and Political Control in Non-Democratic States.” Stanford, 2014. https://ift.tt/2JmK75p.
Diamond, Larry. “Liberation Technology.” Journal of Democracy 21, no. 3 (July 14, 2010): 69–83. doi:10.1353/jod.0.0190.
Meier, Patrick Philippe. “Do ‘Liberation Technologies’ Change the Balance of Power between Repressive States and Civil Society?” Fletcher School of Law and Diplomacy (Tufts University), 2012.
“Liberation Technology: Whither Internet Control?” Journal of Democracy 22, no. 2 (April 2011). https://ift.tt/2LkvywJ.
Faris, Robert, John Kelly, Helmi Noman, and Dalia Othman. “Structure and Discourse: Mapping the Networked Public Sphere in the Arab Region,” 2016. https://ift.tt/2JmK8X1.
Ghannouchi, Rached. “From Political Islam to Muslim Democracy: The Ennahda Party and the Future of Tunisia.” Foreign Affairs 95 (2016): 58.
Hessler, Peter. “Egypt’s Failed Revolution.” The New Yorker, December 26, 2016. https://ift.tt/2xquS2f.
Arnaudo, Daniel, Aaron Alva, Phillip Wood, and Jan Whittington. “Political and Economic Implications of Authoritarian Control of the Internet.” In Critical Infrastructure Protection VII, 3–19. IFIP Advances in Information and Communication Technology. Springer, Berlin, Heidelberg, 2013. https://ift.tt/2JmKc9d.
Howard, Philip N., Sheetal D. Agarwal, and Muzammil M. Hussain. “When Do States Disconnect Their Digital Networks? Regime Responses to the Political Uses of Social Media.” The Communication Review 14, no. 3 (2011): 216–232.
Gupta, Apar, and Raman Jit Singh Chima. “The Cost of Internet Shutdowns.” The Indian Express, October 26, 2016. https://ift.tt/2eBWP0Q.
West, Darrell M. “Internet Shutdowns Cost Countries $2.4 Billion Last Year.” Washington D.C.: Brookings, October 6, 2016. https://ift.tt/2duSkV3.
Efe Kerem Sozeri. “Uncovering the Accounts That Trigger Turkey’s War on Twitter.” The Daily Dot, January 31, 2015. https://ift.tt/2LjodgO.
Clark, Justin, Rob Faris, Ryan Morrison-Westphal, Helmi Noman, Casey Tilton, and Jonathan Zittrain. “The Shifting Landscape of Global Internet Censorship.” Internet Monitor. Harvard Berkman Center for Internet and Society, June 29, 2017. https://ift.tt/2t5Opna.
Krogerus, Hannes Grassegger & Mikael. “The Data That Turned the World Upside Down.” Motherboard, January 28, 2017. https://ift.tt/2tFb7T0.
Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006.
West, Darrell M. “President Dmitry Medvedev: Russia’s Blogger-in-Chief.” Brookings, November 30, 2001. https://ift.tt/2JmKdtN.
Kelly, John, Vladimir Barash, Karina Alexanyan, Bruce Etling, Robert Faris, Urs Gasser, and John G. Palfrey. “Mapping Russian Twitter.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, March 23, 2012. https://ift.tt/2Ljh7sM.
Chen, Adrian. “The Agency.” The New York Times, June 2, 2015, sec. Magazine. https://ift.tt/2kClg1z.
Ennis, Stephen. “Ukraine Hits Back at Russian TV Onslaught.” BBC News, March 12, 2014, sec. Europe. https://ift.tt/1g8KB4N.
Zhdanova et al.
Zhdanova et al.
Alexander, Lawrence. “The Curious Chronology of Russian Twitter Bots.” Global Voices, April 27, 2015. https://ift.tt/2JvbhHg.
Zhdanova et al.
Kramer, Andrew E. “To Battle Fake News, Ukrainian Show Features Nothing but Lies.” The New York Times, February 26, 2017, sec. Europe. https://ift.tt/2mAFZQs.
Kramer, Andrew E. “To Battle Fake News, Ukrainian Show Features Nothing but Lies.” The New York Times, February 26, 2017, sec. Europe. https://ift.tt/2mAFZQs.
Romanenko, Nadiya, Iaryna Mykhyalyshyn, Pavlo Solodko, and Orest Zog. “The Troll Network.” ТЕКСТИ.ORG.UA, October 4, 2016. https://ift.tt/2JpR1qt.
Zhdanova et al. pp. 16.
“Ukraine to Block Russian Social Networks.” BBC News, May 16, 2017, sec. Europe. https://ift.tt/2pRG0ka.
“Freedom of the Press 2017: Turkey Country Report.” Freedom House, April 26, 2017. https://ift.tt/2zfHPgs.
Shearlaw, Maeve. “Turkish Journalists Face Abuse and Threats Online as Trolls Step up Attacks.” The Guardian, November 1, 2016, sec. World news. https://ift.tt/2Lic0J9.
Kizilkaya, Emre. “Turkey’s Ruling AKP Fields New ‘Digital Army’.” Hürriyet Daily News, May 14, 2015. https://ift.tt/1WzHOwV.
Arsu, Sebnem, and Dan Bilefsky. “In Turkey, Twitter Roars After Effort to Block It.” The New York Times, March 21, 2014, sec. Europe. https://ift.tt/2LjoeBo.
“Twitter Transparency Report: Removal Requests.” Twitter, June 2017. https://transparency.twitter.com/en/removal-requests.html.
Morales, Silvia. “Feature: Turkey Trolls’ Use of Insults Stifling Reporting.” International Press Institute. Accessed October 9, 2017. https://ift.tt/2Jmlvdf.
Shearlaw, Maeve. “Turkish Journalists Face Abuse and Threats Online as Trolls Step up Attacks.” The Guardian, November 1, 2016, sec. World news. https://ift.tt/2Lic0J9.
Cox, Joseph. “I Bought a Russian Bot Army for Under $100.” The Daily Beast, September 13, 2017, sec. tech. https://ift.tt/2l1rtFr.
“Freedom of the Press 2017: Philippines,” April 27, 2017.
Ressa, Maria A. “Propaganda War: Weaponizing the Internet.” Rappler, October 3, 2016. https://ift.tt/2v1qN6K.
Hofileña, Chay F. “Fake Accounts, Manufactured Reality on Social Media.” Rappler, October 16, 2016. https://ift.tt/2C3lhBr.
Ressa, Maria. “How Facebook Algorithms Impact Democracy.” Rappler, October 8, 2016. https://ift.tt/2JmlxSp.
Ressa, Maria. “How Facebook Algorithms Impact Democracy.” Rappler, October 8, 2016. https://ift.tt/2JmlxSp.
“Poe Hits Mocha’s Attempt to ‘Reclassify’ Rappler as Social Media.” Rappler, November 9, 2017. https://ift.tt/2zCq4IY.
Panaligan, Rey. “Rappler Faces Tax Evasion, Libel Charges at DOJ Probe.” Manila Times, April 24, 2018. https://ift.tt/2Jmly8V.
“Freedom of the Press: Bahrain.” Freedom House, March 10, 2016. https://ift.tt/2hXH07m.
Perlroth, Nicole. “FinSpy Software Is Tracking Political Dissidents.” The New York Times, August 30, 2012, sec. Technology. https://ift.tt/O7xo0q.
Marczak, Bill. “‘Time for Some Internet Problems in Duraz’: Bahraini ISPs Impose Internet Curfew in Protest Village.” Bahrain Watch, August 3, 2016. https://ift.tt/2aKzBSU.
Anderson, June 18, 2013. http://arxiv.org/abs/1306.4361.
Faris, Robert, John Kelly, Helmi Noman, and Dalia Othman. “Structure and Discourse: Mapping the Networked Public Sphere in the Arab Region,” 2016. https://ift.tt/2JmK8X1.
Bolsever, Gillian. “Computational Propaganda in China: An Alternative Model of a Widespread Practice.” Working Paper. Oxford, UK: OII, June 2017. https://ift.tt/2sJqywf.
King, Gary, Jennifer Pan, and Margaret Roberts. “How Censorship in China Allows Government Criticism but Silences Collective Expression.” American Political Science Review 107, no. 2 (May) (2013): 1–18.
Isaac, Mike. “Facebook Said to Create Censorship Tool to Get Back Into China.” The New York Times, November 22, 2016, sec. Technology. https://ift.tt/2gZdNap.
Bolsever
Mozur, Paul, and John Markoff. “Is China Outsmarting America in A.I.?” The New York Times, May 27, 2017, sec. Technology. https://ift.tt/2qpcGm2.
Bolsever, Gillian. “Computational Propaganda in China: An Alternative Model of a Widespread Practice.” Working Paper. Oxford, UK: OII, June 2017. https://ift.tt/2sJqywf.
King, Gary, Jennifer Pan, and Margaret E. Roberts. “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument.” American Political Science Review 111, no. 3 (2017): 484–501.
Botsman, Rachel. “Big Data Meets Big Brother as China Moves to Rate Its Citizens.” Wired, October 21, 2017. https://ift.tt/2xUHGNG.
Chen, Yuyu, and David Y. Yang. “The Impact of Media Censorship: Evidence from a Field Experiment in China.” Job Market Paper. Stanford University, November 2017. https://ift.tt/2iajqRP.
Clark et al.
Xynou, Maria, and Arturo Filastò. “Internet Censorship in Iran: Network Measurement Findings from 2014-2017.” OONI, September 28, 2017. https://ift.tt/2yI8p1L.
Larson, Christina. “China’s Massive Investment in Artificial Intelligence Has an Insidious Downside.” Science, February 7, 2018. https://ift.tt/2nJpF2L.
Williams, Greg. “Why China Will Win the Global Race for Complete AI Dominance.” Wired UK, April 16, 2018. https://ift.tt/2qzWYFY.
lindyhunt · 7 years ago
Text
What You Missed Last Month in Google
They say that April showers bring May flowers. And while April has only just arrived, we can say one thing for the previous month: At Google, when it rains, it pours.
March was quite a busy month for Google. From new payment capabilities to rumors flying about the search engine results page (SERP), news from this particular Tech Giant was quietly plentiful.
We’ve put together another list of the major highlights from Google -- this time, for the month of March. Read on for the full recap.
March News About Google
1. Is This the End of Search Results as We Know Them?
On March 14th, my colleague Victor Pan (HubSpot's head of search) shared a SERP screenshot that suggested Google might be experimenting with the idea of no search results -- just one answer to a query.
That day, in response to the query, "What time is it?", many users reported seeing a SERP with just the answer to the question -- no ranked pages below it, and only a single button to click for more results.
Apparently, one of those users was Dr. Peter J. Meyers, who the next day wrote a post for Moz describing the "zero-result SERP ... future we should've known was coming."
In it, he pointed out that results similar to these aren't exactly new. For months now, we've been writing about the growing importance of the featured snippet, and last month we reported that Google would be testing double featured snippets to help answer questions with different connotations or intentions.
And while the absence of additional displayed results below the Knowledge Card (the information box where quick answers or featured snippets are displayed) is new, it was a temporary experiment, said Google Public Search Liaison Danny Sullivan -- one that was limited to queries like time, calculations, and measurement unit conversions.
Update! We have enough data and feedback -- which is appreciated -- to conclude that the condensed view experiment should stop for now. The team will look at improving when and how it appears.
— Danny Sullivan (@dannysullivan) March 20, 2018
But not long after this experiment took place, Dictionary.com, a site beloved by word nerds everywhere, announced that it was up for sale: perhaps, suggested Pan, as part of the road toward the zero-result SERP. Ever typed a word you didn't completely understand into the Google search bar? If so, you likely received a Knowledge Box result containing the definition.
The sale made us all wonder if sites that were designed to provide information increasingly provided in Knowledge Cards -- time and date, travel reservations, or measurement unit conversions -- are going to face a similar fate in the not-so-distant future.
And while Sullivan said that the experiment has ended, we wouldn't be surprised to see more zero-result SERPS in the future -- and when we do, we'll report on it.
2. The Google News Initiative
Google also announced a new initiative last month to partner with and help news publishers grow, aptly named the Google News Initiative.
It includes several parts, like Subscribe with Google, which is expected to help users subscribe to digital news content from different outlets more easily.
And in addition to making the subscription process more seamless for readers, one goal of the new subscription program is to help publishers grow their audiences, partly with a new feature the search engine giant is testing to measure a user's "Propensity to Subscribe" through machine learning.
Source: Google
The initiative also includes new analytics available to publishers by way of the News Consumer Insights dashboard within Google Analytics. The additional data will display subscription-oriented metrics, like audience size and segmentation, as well as the best ways to reach and engage those audiences in a way that can help convert them to subscribers.
Another major goal of the Google News Initiative is to combat the spread of false news or misinformation, again, with the help of machine learning that Google says is able to recognize the most authoritative content during breaking news events. That should result in the promotion of content from Google's verified news sources displayed in a “Top News” shelf.
These efforts include several partnerships with journalism-focused organizations and certain publications, like First Draft, with whom Google formed the Disinfo Lab to help detect and limit the amount of misinformation users are exposed to, especially during major events like elections.
Google has also partnered with the Poynter Institute, Stanford University, and the Local Media Association to form MediaWise: an initiative to help younger news consumers understand the veracity (or lack thereof) of different news items.
3. Send and Request Money Through Google Assistant
If you're anything like I am, chances are, you rarely carry cash. Split the bill without a credit card? Pay someone back for a soda or a beer right away without my phone and an app like Venmo? Sadly, I can barely remember such a time.
And now, Google has entered that arena, announcing last month that users could now use Google Assistant to send and request money.
The feature, which is currently only available in the U.S., requires the person starting the transaction to have a Google Pay account, which she'll be prompted to create (if she doesn't have one already) upon asking the assistant to send or request a payment to one of her contacts.
Once that process is complete, the recipient of the request or payment will receive either a text message or email -- or a Google Assistant notification if they use it -- with a prompt to complete the transaction.
Being able to send and receive payment to and from phone or email contacts isn't new -- platforms ranging from PayPal, to Messenger, to the aforementioned Venmo app have facilitated this type of transaction for quite some time now.
But by entering this space, Google is working toward eliminating the need for yet another sector of third-party apps and tools to complete payment transactions -- especially among the user base already largely entrenched in the Google ecosystem. That will become especially true when the service becomes available through the Google Home smart speaker, which the company expects to take place within "the coming months."
I should note that not all payment methods seem to work properly with this function -- when I tried to use the credit card I have on file with Google Pay, for example, the Assistant let me know that I couldn't use it for this transaction, though I couldn't find an explanation or more information as to why.
4. The Tenor Acquisition
Around here, we absolutely love GIF images. We use them to express every emotion, from despair to elation, and frequently find that there's no situation for which a good facepalm GIF isn't an appropriate reaction.
Source: Reaction GIFs
Over the years, Google has made a few efforts to make it easier for users to find GIFs -- one of the most memorable being the introduction of a quick, top-of-the-screen "gif" button to filter image results on its mobile platform.
Now, it seems that Google is trying to make it even easier for users to find GIF images, in part by way of a vertical integration with GIF image database Tenor.
Tenor is a GIF-image finder currently available through a number of messaging platforms, including Messenger and iMessage, where users can seamlessly search for and send these animated images to contacts within conversations and threads.
Google's plans for the database are to help users "do this more effectively in Google Images as well as other products that use GIFs, like Gboard," according to the official announcement.
If it sounds like a familiar move for Google, that's because it is -- if you review the other news items included above, it could be deduced that Google is making a number of changes to eliminate the need for third-party apps, from information sources for unit conversion, to payment platforms, to image search tools. The Tenor acquisition is just one example -- and we anticipate seeing more.
5. Other Google News You May Have Missed
Wheelchair Accessible Routes Have Been Added to Google Maps
Google Maps has evolved quite a bit over the years, with different layers, route options, and transportation methods being added in ways that make it easy for everyone to travel. And now, wheelchair-accessible routes have been added, too. The new feature will show users which public transportation stations are the most accessible, for example, due to elevator availability or step-free entrances. Read full announcement >>
New Job-Searching and Recruiting Capabilities
Last year, Google announced that it would be launching Google for Jobs: a new type of Knowledge Card that would list job openings right in the SERP, depending on the search criteria (e.g., "marketing jobs in Boston"). Now, Google has unveiled a beta version of Hire by Google: a candidate discovery system that will help recruiters create a short list of previous applicants who would be strong candidates for new roles. The biggest benefit? Saving recruiters time, according to the statement, and helping them "easily identify and re-engage known candidates instead of spending time trying to find new ones." Read full announcement >>
Where's Waldo?
We'll let you discover and enjoy this one for yourselves, folks. Read full announcement >>
Until Next Month
As always, we’re watching all things Google. We’ll continue to pick out top news items, algorithm updates, and trends that can aid your marketing.
And until those May flowers finally appear, have a great April.
digital-strategy · 8 years ago
Link
http://ift.tt/2whv6eK
by Frederic Filloux
“Hello big advertisers, somebody out there? Do you copy?” (Nasa Commons)
Storyzy’s business is alerting brands to their presence on fake news sites. By and large, the advertising community's response is simply appalling.
French startup Storyzy spotted six hundred forty-four brands on questionable sites ranging from hard core fake news sites, hyper-partisan ones, to clickbait venues hosting bogus content with no particular agenda, except making a quick buck.
Storyzy showed me the list of brands that fund the fake news ecosystem but didn’t want Monday Note to publish it. Never mind. With 600+ advertisers, you can expect many household names to show up. And they do: tech companies, banks, retailers, airlines, cosmetic, luxury, universities, NGOs. Reputed media brands ending up on hyper-partisan and fake news sites. As shown below, The New York Times was spotted on RealtimePolitics and The Wall Street Journal on America’s Freedom Fighters:
While some advertisers know and choose to turn a blind eye, most of the brands feeding the fake news industry do so completely unaware of their complicity.
In fact, they are caught in a combination of negligence and greed from media buyers and the cohort of intermediaries that rule the digital advertising sector.
The real surprise comes from the brands' reaction once they are notified. Normally, one would expect most of them to take radical measures, to notify the chain of intermediaries, such as media buyers or trading desks.
To alert advertisers caught on junk or blatant fake news sites, Storyzy sends them an email with eloquent screenshots attached.
“We contacted about 400 brands,” says Pierre-Albert Ruquier, marketing director and co-founder of Storyzy. “Reaction varies. Some clearly don’t care and don’t even bother to respond. The biggest advertisers usually refer us to their media buying partners. We talk to most of them, even though we are often received coldly. Weirdly enough, we are also sent to large consulting firms that advise big clients on brand safety issues. The vast majority of advertisers don’t know where their ads land. Or choose to ignore it. That’s why when they refer us to their media buying agency these won’t budge. The reason is that almost all campaigns are ROI-based, a field dominated by behavioral targeting and retargeting.”
In other words: Most of the brands don’t really care where their ads show up as long as the overall return on investment is fine. “One of the world's largest hotel chains told us they don’t mind showing up on questionable sites if it is the result of a retargeting process…” A convenient way to say that as long as they can invoke deniability, performance supersedes any damage caused to the brand, or ethics considerations such as fueling a vast network of misinformation. And if by chance the brand does care, they can’t always trust intermediaries whose incentives are tied to the campaign's financial performance.
Such a combination of deniability and greed is toxic. It explains the millions of dollars that contribute to the well-being of a fake news/junk news system, one that has little to worry about in terms of survivability. (Misinformation media are, in addition, super efficient at maximizing the bang for their own buck: their production costs are but a tiny fraction of those required for legit sites.)
Storyzy derived its current brand safety business from an expertise in fact-checking that goes back to 2012. At the time, the startup was called Trooclick; its goal, using natural language processing and machine learning algorithms, was detecting false information in financial news.
Five years later, the initial product expanded to a more general quote verification system called Quote Verifier.
An example of a search on Storyzy Quote Verifier
Currently, the service is available via a paid-for API (the access to the web site, however, is free). This function is at the core of the company’s fake news detection.
According to Ramon Ruti, CTO and co-founder of Storyzy, extracting quotes in a reliable fashion — and being able to find and properly attribute indirect quotes — is complex and requires layers of techniques: sentence splitting to detect misleading sentence boundaries, such as those around certain acronyms; morphosyntactic analysis, to understand the nature of each word; topic and named-entity extraction; reported speech extraction, for both direct and indirect speech; and other tweaks to deal with language ambiguity and newswriting imprecision…
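As a rough illustration of just one layer of such a pipeline (and not Storyzy’s actual implementation), the sketch below pulls out direct quotes and attributes them to a named person in the same sentence using spaCy. It assumes the en_core_web_sm model is installed and ignores indirect speech, coreference, and the harder ambiguities Ruti describes.

```python
import re
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
SPEECH_VERBS = {"say", "tell", "claim", "add", "state", "announce"}

def extract_direct_quotes(text):
    """Return a list of {'quote', 'speaker'} dicts for naively attributable quotes."""
    doc = nlp(text)
    results = []
    for sent in doc.sents:
        quoted_spans = re.findall(r'"([^"]+)"', sent.text)
        if not quoted_spans:
            continue
        has_speech_verb = any(tok.lemma_.lower() in SPEECH_VERBS for tok in sent)
        people = [ent.text for ent in sent.ents if ent.label_ == "PERSON"]
        if has_speech_verb and people:
            # Naive attribution: take the first PERSON entity in the sentence.
            results.extend({"quote": q, "speaker": people[0]} for q in quoted_spans)
    return results

if __name__ == "__main__":
    sample = ('Maria Ressa said, "Propaganda is being weaponized online." '
              "The weather in Manila was otherwise unremarkable.")
    print(extract_direct_quotes(sample))
```

As the paragraph above suggests, the hard part is everything this sketch leaves out: indirect speech, pronoun resolution, and attribution that spans several sentences.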
Each day, Storyzy collects and sorts 50,000 new quotes in English lifted from 5,000 trusted and untrusted sources. The quote is a quintessential element of journalism — especially in the Anglo-Saxon world. Quotes are also the most often forged and the most hijacked items used by fake news sites. Even if a fake quote can be debunked in a matter of hours, the delay is largely sufficient for broad viral propagation on social nets. Hence the utility of the quote verifier to quickly pinpoint false information. Combined with other “signals”, it proves quite effective at fingering fake news sites.
Quotes also have great value for documentation purposes. Storyzy is currently building a private website for a global media brand, to be used by journalists, fact-checkers, and moderators who will verify quotes, attribution, and context; the service will be plugged into the publisher’s CMS to warrant the accuracy of archived quotes.
For its brand safety-related business, Storyzy is just warming up. A few months ago, the company started to provide a list of 750 questionable websites to its customers, and more bad sites are added at a rate of 20~30 a month. Storyzy is also working on a full monitoring service to ensure brand safety — for those who are interested.
Careless advertisers and media buyers are actually harmful to everybody:
— Their negligence supports an information apparatus that is both powerful and dangerous for society at a time where democracy is globally receding.
— While three-quarters of the digital ad money flows to Facebook and Google, every dollar counts for legitimate news outlets, which rely on an ever-shrinking slice of advertising.
— The vast ecosystem of clickbait and fake news sites relies on large volumes of ads carrying ultra-low CPMs. It is a race to the bottom in terms of the quality of product promotions, a race that keeps fueling the massive deflation we have observed over recent years.
More than 600 global brands still feed the fake news ecosystem was originally published in Monday Note on Medium, where people are continuing the conversation by highlighting and responding to this story.