#ai-and-deepfakes
lokapriya · 10 months
Text
New Post has been published on https://www.lokapriya.com/detecting-and-regulating-deepfake-technology-the-challenges-part-ii/
Detecting and Regulating Deepfake Technology: The Challenges! Part II
The Challenges of Detecting and Regulating Deepfake Technology
The current state of deepfake detection and regulation is still evolving and faces many challenges. Some of the reasons why it is difficult to identify and prevent deepfake content from spreading online are:
1. Advancement and Ease of Access:
The quality and realism of deepfake content are improving as the artificial neural networks that generate them become more sophisticated and trained on larger and more diverse datasets. The availability and affordability of deepfake software and services are also increasing, making it easier for anyone to create and share deepfake content online.
2. Non-Scalability and Unreliability of Detection Methods:
The existing methods for detecting deepfake content rely on analyzing various features or artifacts of the images, videos, or audio, such as facial expressions, eye movements, skin texture, lighting, shadows, or background noise. However, these methods are not always accurate or consistent, especially when the deepfake content is low-quality, compressed, or edited. Moreover, these methods are not scalable or efficient, as they require a lot of computational resources and time to process large amounts of data.
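One family of artifact checks works in the frequency domain: GAN upsampling often leaves periodic high-frequency traces in an image's spectrum. The sketch below is a deliberately crude, hypothetical score along those lines (not any production detector): it measures how much spectral energy falls outside the low-frequency core, on synthetic toy images standing in for real frames.

```python
import numpy as np

def high_freq_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency core.

    Periodic upsampling artifacts add energy far from DC, so an
    unusually high ratio can flag an image for closer review.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                      # radius of the "low frequency" core
    core = spec[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - core / spec.sum()

# Smooth toy image vs. the same image with a checkerboard artifact overlaid
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
checker = np.indices((64, 64)).sum(axis=0) % 2
artifact = smooth + 0.05 * checker
assert high_freq_ratio(artifact) > high_freq_ratio(smooth)
```

Real detectors combine many such cues and still struggle with compression and re-encoding, which smear exactly the high-frequency bands this kind of score relies on.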
3. Complex and Controversial Regulations:
The legal and ethical issues surrounding deepfake content are not clear-cut or uniform across different jurisdictions, contexts, and purposes. For example, deepfake content may implicate various rights and interests, such as intellectual property, privacy, protection against defamation, contractual rights, freedom of expression, and the public interest. However, these rights and interests may conflict or overlap with each other, creating dilemmas and trade-offs for lawmakers and regulators.
Furthermore, the enforcement and oversight of deepfake regulation may face practical and technical difficulties, such as identifying the creators and distributors of deepfake content, establishing their liability and accountability, and imposing appropriate sanctions or remedies.
Current and Future Strategies and Solutions to Detect, Prevent, and Combat Deepfake Technology
1. Social Media Platforms’ Policies:
Social media platforms can implement policies, guidelines, and standards to regulate the creation and dissemination of deepfake content on their platforms, by banning or labeling harmful or deceptive deepfakes, or by requiring users to disclose the use of deepfake technology. This strategy can be effective in reducing the exposure and spread of harmful or deceptive deepfakes on popular and influential platforms, such as Facebook, Twitter, or YouTube. Deepfake detection and verification tools, such as digital watermarks, blockchain-based provenance systems, or reverse image search engines, can also be deployed to guard against the upload of deepfakes. These platforms can also collaborate with other stakeholders, such as fact-checkers, researchers, or civil society groups, to monitor and counter deepfake content. However, these solutions may face challenges such as scalability, accuracy, transparency, and accountability.
2. Detection Algorithms:
Detection algorithms can use machine learning and computer vision techniques to analyze the features and characteristics of deepfake content, such as facial expressions, eye movements, lighting, or audio quality, and identify inconsistencies or anomalies that indicate manipulation. Researchers can develop and improve deepfake detection and verification technologies, such as artificial neural networks, computer vision algorithms, or biometric authentication systems to improve detection algorithms.
They can also create and share datasets and benchmarks for evaluating deepfake detection and verification methods, and conduct interdisciplinary studies on the social and ethical implications of deepfake technology. This strategy can be effective in the analysis of features by identifying inconsistencies or anomalies that indicate manipulation. However, these solutions may face challenges such as data availability, quality, and privacy, as well as ethical dilemmas and dual-use risk.
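As a deliberately tiny sketch of the classification step, the following trains a from-scratch logistic regression on synthetic feature vectors that stand in for hand-crafted cues (blink rate, boundary sharpness, spectral energy). Everything here, including the feature values, is invented for illustration; real systems train deep networks directly on pixels and audio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for extracted detection cues: "fake" samples are
# shifted along every cue so the classes are separable but overlapping.
real = rng.normal(0.0, 1.0, (200, 3))
fake = rng.normal(1.5, 1.0, (200, 3))
X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)

w, b = np.zeros(3), 0.0
for _ in range(500):                          # plain batch gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))        # sigmoid probability of "fake"
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
assert accuracy > 0.75                        # well above the 0.5 chance level
```

The toy also illustrates the limitation discussed above: accuracy is bounded by how well the chosen cues separate real from fake, which degrades as generators improve.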
3. Internet Reaction:
This refers to the collective response of online users and communities to deepfake content, such as by flagging, reporting, debunking, or criticizing suspicious or harmful deepfakes, or by creating counter-narratives or parodies to expose or ridicule them. Users can adopt critical thinking and media literacy skills to identify and verify deepfake content, and can also use deepfake detection and verification tools, such as browser extensions, mobile apps, or online platforms to sniff out deepfakes they encounter on social media or other platforms, which they can report or flag as deepfake content. The internet reaction strategy can be effective in mobilizing the collective response of online users and communities to deepfake content. However, these solutions may face challenges such as cognitive biases, information overload, digital divide, and trust issues.
4. Legal Response:
This is the application of existing or new laws and regulations to address the legal and ethical issues raised by deepfake technology, such as by protecting the rights and interests of the victims of deepfake abuse, or by holding the perpetrators accountable for their actions. Governments can enact laws and regulations that prohibit or restrict the creation and dissemination of harmful deepfake content, such as non-consensual pornography, defamation, or election interference. They can also support research and development of deepfake detection and verification technologies, as well as public education and awareness campaigns.
Some countries have laws that address deepfake technology, but they are not very comprehensive or consistent.
For example:
In the U.S., the National Defense Authorization Act (NDAA) requires the Department of Homeland Security (DHS) to issue an annual report on deepfakes and their potential harm. The Identifying Outputs of Generative Adversarial Networks Act requires the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to research deepfake technology and authenticity measures. However, there is no federal law that explicitly bans or regulates deepfake technology.
In China, a new law requires that manipulated material have the subject’s consent and bear digital signatures or watermarks and that deepfake service providers offer ways to “refute rumors”. However, some people worry that the government could use the law to curtail free speech or censor dissenting voices.
In India, there is no explicit law banning deepfakes, but some existing laws such as the Information Technology Act or the Indian Penal Code may be applicable in cases of defamation, fraud, or obscenity involving deepfakes.
In the UK, there is no specific law on deepfakes either, but some legal doctrines such as privacy, data protection, intellectual property, or passing off may be relevant in disputes concerning an unwanted deepfake or manipulated video.
Legal responses can be an effective strategy in fighting the dubiousness of deepfakes. However, these solutions may face challenges such as balancing free speech and privacy rights, enforcing cross-border jurisdiction, and adapting to fast-changing technology.
Recommendations and Directions for Future Research or Action on Deepfake Technology
Deepfake technology is still on the rise, evolving rapidly into better and more realistic versions every day. This calls for a more proactive approach to tackling the harms that may accompany it. Below are some of the actions that I believe can be implemented to mitigate its negative impact:
Verification and Authentication of Content: Consumers should always check the source and authenticity of the content they encounter or create, by using reverse image or video search, blockchain-based verification systems, or digital watermarking techniques.
Multiple and Reliable Sources of Information: Consumers of digital media should always seek out multiple and reliable sources of information to corroborate or refute the content they encounter or create, by consulting reputable media outlets, fact-checkers, or experts.
Development of Rapid, Robust, and Adaptive Detection Algorithms and Tools for Verification and Attribution: There should be more focus on developing robust and adaptive detection algorithms that can cope with the increasing realism and diversity of deepfake content, for example by using multimodal or cross-domain approaches, incorporating human feedback, or leveraging adversarial learning. New tools and methods for verifying and attributing digital content should also be explored, such as blockchain-based verification systems, digital watermarking techniques, and reverse image or video search. More research is needed both to improve deepfake detection and verification technologies and to understand and address the social and ethical implications of deepfake technology.
Establishment of Ethical and Legal Frameworks and Standards for Deepfake Technology: More work should be done to create ethical and legal frameworks and standards for deepfake technology, such as defining the rights and responsibilities of the creators and consumers of deepfake content, setting the boundaries and criteria for legitimate and illegitimate uses of deepfake technology, and enforcing laws and regulations that protect the victims and punish the perpetrators of deepfake abuse. Stronger legal action is needed to enact and enforce laws that protect the rights and interests of the victims and targets of harmful deepfake content, such as non-consensual pornography, defamation, or election interference.
Actions should be coordinated, consistent, and adaptable, taking into account the cross-border nature of deepfake content and the fast-changing nature of deepfake technology, and should be balanced, proportionate, and respectful, taking into account the free speech and privacy rights of the creators and consumers of deepfake content.
Promotion of Education and Awareness about Deepfake Technology: Future research or action on deepfake technology should promote education and awareness about deepfake technology among various stakeholders, such as by providing training and guidance for journalists, fact-checkers, educators, policymakers, and the general public on how to create, consume, and respond to deepfake content responsibly and critically.
Report or Flag Suspicious or Harmful Content: Consumers should be aware of the existence and prevalence of deepfake content and should use critical thinking and media literacy skills to identify and verify it. They should be fast in reporting or flagging deepfake content that they encounter on social media or other platforms, by using the reporting tools or mechanisms provided by social media platforms, law enforcement agencies, or civil society organizations.
Respect the Rights and Interests of Others: Producers of digital media should always respect the rights and interests of others when creating or sharing content that involves deepfake technology, by obtaining consent, disclosing the use of deepfake technology, or avoiding malicious or deceptive purposes. They should be aware of the potential harms and benefits of deepfake technology and should use it responsibly and ethically, following the principles of consent, integrity, and accountability.
Conclusion:
Deepfake technology has the potential to create false or misleading content that can harm individuals or groups in various ways. However, deepfake technology can also have positive uses for entertainment, media, politics, education, art, healthcare, and accessibility. Therefore, it is important to balance the risks and benefits of deepfake technology and to develop effective and ethical ways to detect, prevent, and regulate it.
To achieve this goal, governments, platforms, researchers, and users need to collaborate and coordinate their efforts, as well as raise their awareness and responsibility. By doing so, we can harness the power and potential benefits of deepfake technology, while minimizing its harm.
ayeforscotland · 1 year
Text
Yeah we need legislation on this like yesterday.
Ideally with everyone involved being catapulted into the sun.
Text
I assure you, an AI didn’t write a terrible “George Carlin” routine
There are only TWO MORE DAYS left in the Kickstarter for the audiobook of The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There's also bundles with Red Team Blues in ebook, audio or paperback.
On Hallowe'en 1974, Ronald Clark O'Bryan murdered his son with poisoned candy. He needed the insurance money, and he knew that Halloween poisonings were rampant, so he figured he'd get away with it. He was wrong:
https://en.wikipedia.org/wiki/Ronald_Clark_O%27Bryan
The stories of Hallowe'en poisonings were just that – stories. No one was poisoning kids on Hallowe'en – except this monstrous murderer, who mistook rampant scare stories for truth and assumed (incorrectly) that his murder would blend in with the crowd.
Last week, the dudes behind the "comedy" podcast Dudesy released a "George Carlin" comedy special that they claimed had been created, holus bolus, by an AI trained on the comedian's routines. This was a lie. After the Carlin estate sued, the dudes admitted that they had written the (remarkably unfunny) "comedy" special:
https://arstechnica.com/ai/2024/01/george-carlins-heirs-sue-comedy-podcast-over-ai-generated-impression/
As I've written, we're nowhere near the point where an AI can do your job, but we're well past the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job:
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
AI systems can do some remarkable party tricks, but there's a huge difference between producing a plausible sentence and a good one. After the initial rush of astonishment, the stench of botshit becomes unmistakable:
https://www.theguardian.com/commentisfree/2024/jan/03/botshit-generative-ai-imminent-threat-democracy
Some of this botshit comes from people who are sold a bill of goods: they're convinced that they can make a George Carlin special without any human intervention and when the bot fails, they manufacture their own botshit, assuming they must be bad at prompting the AI.
This is an old technology story: I had a friend who was contracted to livestream a Canadian awards show in the earliest days of the web. They booked in multiple ISDN lines from Bell Canada and set up an impressive Mbone encoding station on the wings of the stage. Only one problem: the ISDNs flaked (this was a common problem with ISDNs!). There was no way to livecast the show.
Nevertheless, my friend's boss ordered him to go on pretending to livestream the show. They made a big deal of it, with all kinds of cool visualizers showing the progress of this futuristic marvel, which the cameras frequently lingered on, accompanied by overheated narration from the show's hosts.
The weirdest part? The next day, my friend – and many others – heard from satisfied viewers who boasted about how amazing it had been to watch this show on their computers, rather than their TVs. Remember: there had been no stream. These people had just assumed that the problem was on their end – that they had failed to correctly install and configure the multiple browser plugins required. Not wanting to admit their technical incompetence, they instead boasted about how great the show had been. It was the Emperor's New Livestream.
Perhaps that's what happened to the Dudesy bros. But there's another possibility: maybe they were captured by their own imaginations. In "Genesis," an essay in the 2007 collection The Creationists, EL Doctorow (no relation) describes how the ancient Babylonians were so poleaxed by the strange wonder of the story they made up about the origin of the universe that they assumed that it must be true. They themselves weren't nearly imaginative enough to have come up with this super-cool tale, so God must have put it in their minds:
https://pluralistic.net/2023/04/29/gedankenexperimentwahn/#high-on-your-own-supply
That seems to have been what happened to the Air Force colonel who falsely claimed that a "rogue AI-powered drone" had spontaneously evolved the strategy of killing its operator as a way of clearing the obstacle to its main objective, which was killing the enemy:
https://pluralistic.net/2023/06/04/ayyyyyy-eyeeeee/
This never happened. It was – in the chagrined colonel's words – a "thought experiment." In other words, this guy – who is the USAF's Chief of AI Test and Operations – was so excited about his own made up story that he forgot it wasn't true and told a whole conference-room full of people that it had actually happened.
Maybe that's what happened with the George Carlinbot 3000: the Dudesy dudes fell in love with their own vision for a fully automated luxury Carlinbot and forgot that they had made it up, so they just cheated, assuming they would eventually be able to make a fully operational Battle Carlinbot.
That's basically the Theranos story: a teenaged "entrepreneur" was convinced that she was just about to produce a seemingly impossible, revolutionary diagnostic machine, so she faked its results, abetted by investors, customers and others who wanted to believe:
https://en.wikipedia.org/wiki/Theranos
The thing about stories of AI miracles is that they are peddled by both AI's boosters and its critics. For boosters, the value of these tall tales is obvious: if normies can be convinced that AI is capable of performing miracles, they'll invest in it. They'll even integrate it into their product offerings and then quietly hire legions of humans to pick up the botshit it leaves behind. These abettors can be relied upon to keep the defects in these products a secret, because they'll assume that they've committed an operator error. After all, everyone knows that AI can do anything, so if it's not performing for them, the problem must exist between the keyboard and the chair.
But this would only take AI so far. It's one thing to hear implausible stories of AI's triumph from the people invested in it – but what about when AI's critics repeat those stories? If your boss thinks an AI can do your job, and AI critics are all running around with their hair on fire, shouting about the coming AI jobpocalypse, then maybe the AI really can do your job?
https://locusmag.com/2020/07/cory-doctorow-full-employment/
There's a name for this kind of criticism: "criti-hype," coined by Lee Vinsel, who points to many reasons for its persistence, including the fact that it constitutes an "academic business-model":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
That's four reasons for AI hype:
to win investors and customers;
to cover customers' and users' embarrassment when the AI doesn't perform;
AI dreamers so high on their own supply that they can't tell truth from fantasy;
A business-model for doomsayers who form an unholy alliance with AI companies by parroting their silliest hype in warning form.
But there's a fifth motivation for criti-hype: to simplify otherwise tedious and complex situations. As Jamie Zawinski writes, this is the motivation behind the obvious lie that the "autonomous cars" on the streets of San Francisco have no driver:
https://www.jwz.org/blog/2024/01/driverless-cars-always-have-a-driver/
GM's Cruise division was forced to shutter its SF operations after one of its "self-driving" cars dragged an injured pedestrian for 20 feet:
https://www.wired.com/story/cruise-robotaxi-self-driving-permit-revoked-california/
One of the widely discussed revelations in the wake of the incident was that Cruise employed 1.5 skilled technical remote overseers for every one of its "self-driving" cars. In other words, they had replaced a single low-waged cab driver with 1.5 higher-paid remote operators.
As Zawinski writes, SFPD is well aware that there's a human being (or more than one human being) responsible for every one of these cars – someone who is formally at fault when the cars injure people or damage property. Nevertheless, SFPD and SFMTA maintain that these cars can't be cited for moving violations because "no one is driving them."
But figuring out which person is responsible for a moving violation is "complicated and annoying to deal with," so the fiction persists.
(Zawinski notes that even when these people are held responsible, they're a "moral crumple zone" for the company that decided to enroll whole cities in nonconsensual murderbot experiments.)
Automation hype has always involved hidden humans. The most famous of these was the "mechanical Turk" hoax: a supposed chess-playing robot that was just a puppet operated by a concealed human operator wedged awkwardly into its carapace.
This pattern repeats itself through the ages. Thomas Jefferson "replaced his slaves" with dumbwaiters – but of course, dumbwaiters don't replace slaves, they hide slaves:
https://www.stuartmcmillen.com/blog/behind-the-dumbwaiter/
The modern Mechanical Turk – a division of Amazon that employs low-waged "clickworkers," many of them overseas – modernizes the dumbwaiter by hiding low-waged workforces behind a veneer of automation. The MTurk is an abstract "cloud" of human intelligence (the tasks MTurks perform are called "HITs," which stands for "Human Intelligence Tasks").
This is such a truism that techies in India joke that "AI" stands for "absent Indians." Or, to use Jathan Sadowski's wonderful term: "Potemkin AI":
https://reallifemag.com/potemkin-ai/
This Potemkin AI is everywhere you look. When Tesla unveiled its humanoid robot Optimus, they made a big flashy show of it, promising a $20,000 automaton was just on the horizon. They failed to mention that Optimus was just a person in a robot suit:
https://www.siliconrepublic.com/machines/elon-musk-tesla-robot-optimus-ai
Likewise with the famous demo of a "full self-driving" Tesla, which turned out to be a canned fake:
https://www.reuters.com/technology/tesla-video-promoting-self-driving-was-staged-engineer-testifies-2023-01-17/
The most shocking and terrifying and enraging AI demos keep turning out to be "Just A Guy" (in Molly White's excellent parlance):
https://twitter.com/molly0xFFF/status/1751670561606971895
And yet, we keep falling for it. It's no wonder, really: criti-hype rewards so many different people in so many different ways that it truly offers something for everyone.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
Back the Kickstarter for the audiobook of The Bezzle here!
Image:
Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
Ross Breadmore (modified) https://www.flickr.com/photos/rossbreadmore/5169298162/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
hahahax30 · 29 days
Text
Hey hey, for those who don't know, South Korean women and girls are currently suffering from a disgusting misogyny epidemic at the hands of men who use Telegram as a forum to exchange deepfake porn of women and girls in their family, school, university, work, etc. These men also exchange videos of them groping those women and girls. It's a repeat of the Nth Room crime.
There are hundreds of thousands of men in those Telegram chats. Think about it. Hundreds of thousands of men--someone's father, uncle, brother, son; a girl's classmate or a uni student's 'friend'--drugging the women and girls in their life, groping them for plenty of others to see, creating deepfake porn or providing pictures of the women and girls they know so that others may create it, and posting private information about the women and girls in their life to encourage others to sexually assault them.
Think. About. It.
Many Korean feminists have been trying to shed light into this crime. They've specifically been trying to make this known outside of Korea because Korea's rampantly misogynistic and their newspapers won't talk about this new Nth Room unless other, international newspapers report on it.
These Korean feminists are also suffering from harassment on all fronts: from YouTube, from the men in the Telegram chats who demand to know when women decided they should have rights, etc. One of the most vocal feminists, @/dvu84djp on Twitter, has suffered much of this harassment. I urge everyone to check her page for more info on the matter, since I don't live in Korea, don't know Korean, am not Korean, and all I can do is relay what she and other K feminists are saying. I also urge everyone to go report this asshole on YouTube, since he's been one of the most vocal in spreading hate against the feminists fighting for basic human rights for their women
gothgleek · 1 year
Text
Something about Joan is Awful and the digital age… stealing from people to create faster content… creating deepfakes that actually ruin lives… having no control over your image… people… creating a system of exploitation to avoid creating your own content and paying writers… capitalism thriving on negative self image… people doing whatever they want without your consent… telling you this is what you signed up for with your fame/gender/sexuality…
Much to think about
Text
(Source: note, do NOT scan the QR code)
gwydionmisha · 2 months
Text
Have something you want to tell your Congress Critters? If you can't safely contact them in person, here are some other options:
Call the Capitol Switchboard at (202) 224-3121 and ask to be connected to the representative of your choice.
Here is one that will send your reps a fax: https://resist.bot/
To get your Critters' numbers to call direct: https://www.congress.gov/members/find-your-member
odinsblog · 4 months
Text
“Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.
After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named “Sky” sounded like me.
When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word “her” - a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.
Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.
As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the “Sky” voice. Consequently, OpenAI reluctantly agreed to take down the “Sky” voice.
In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”
—Scarlett Johansson
mindblowingscience · 2 months
Text
The eyes, the old saying goes, are the window to the soul — but when it comes to deepfake images, they might be a window into unreality. That's according to new research conducted at the University of Hull in the U.K., which applied techniques typically used in observing distant galaxies to determine whether images of human faces were real or not. The idea was sparked when Kevin Pimbblet, a professor of astrophysics at the University, was studying facial imagery created by artificial intelligence (AI) art generators Midjourney and Stable Diffusion. He wondered whether he could use physics to determine which images were fake and which were real. "It dawned on me that the reflections in the eyes were the obvious thing to look at," he told Space.com. 
Continue Reading.
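Coverage of this work mentions that the astronomy toolkit applied to eye reflections includes morphology measures such as the Gini coefficient, which scores how concentrated light is across pixels; the premise is that both eyes of a real face reflect the same scene, so their highlights should score similarly. The details of the published pipeline are not reproduced here; this is a toy sketch of that comparison on synthetic 8x8 "eye" patches:

```python
import numpy as np

def gini(patch: np.ndarray) -> float:
    """Gini coefficient of pixel fluxes, borrowed from galaxy morphology:
    0 means light spread evenly; values near 1 mean it is concentrated
    in a few pixels (a sharp highlight)."""
    v = np.sort(patch.flatten())
    n = v.size
    cum = np.cumsum(v)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

left = np.zeros((8, 8))
left[3, 3] = 1.0                       # sharp point highlight
right_real = np.zeros((8, 8))
right_real[3, 4] = 1.0                 # matching highlight, slightly offset
right_fake = np.full((8, 8), 1 / 64)   # diffuse, inconsistent glow

# Consistent eyes score similarly; the mismatched pair is the red flag.
assert abs(gini(left) - gini(right_real)) < abs(gini(left) - gini(right_fake))
```

The hypothetical patches and thresholding are invented for illustration; the point is only that a physically motivated consistency check, not a learned model, can separate these cases.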
candy616 · 11 months
Text
I wanted to see Warren Kole dressed as Phillip Graves so badly.
Now I'm happy, but I had to do some science 👀💖🖤💖♠️
gacha-incels · 19 days
Text
lokapriya · 10 months
Text
New Post has been published on https://www.lokapriya.com/deepfake-technology-the-potential-risks-of-future-part-i/
DeepFake Technology: The Potential Risks of Future! Part I
Imagine you are watching a video of your favorite celebrity giving a speech. You are impressed by their eloquence and charisma, and you agree with their message. But then you find out that the video was not real. It was a deepfake, a piece of synthetic media created by AI (Artificial Intelligence) that can manipulate the appearance and voice of anyone. You feel deceived and confused.
This is no longer a hypothetical scenario; it is now real. There are several deepfakes of prominent actors, celebrities, politicians, and influencers circulating on the internet, including deepfakes of film actors like Tom Cruise and Keanu Reeves on TikTok. Even a deepfake edited video of Indian PM Narendra Modi was made.
In simple terms, Deepfakes are AI-generated videos and images that can alter or fabricate the reality of people, events, and objects. This technology is a type of artificial intelligence that can create or manipulate images, videos, and audio that look and sound realistic but are not authentic. Deepfake technology is becoming more sophisticated and accessible every day. It can be used for various purposes, such as in entertainment, education, research, or art. However, it can also pose serious risks to individuals and society, such as spreading misinformation, violating privacy, damaging reputation, impersonating identity, and influencing public opinion.
In this article, I will be exploring the dangers of deep fake technology and how we can protect ourselves from its potential harm.
How is Deepfake Technology a Potential Threat to Society?
Deepfake technology is a potential threat to society because it can:
Spread misinformation and fake news that can influence public opinion, undermine democracy, and cause social unrest.
Violate privacy and consent by using personal data without permission, and creating image-based sexual abuse, blackmail, or harassment.
Damage reputation and credibility by impersonating or defaming individuals, organizations, or brands.
Create security risks by enabling identity theft, fraud, or cyber attacks.
Deepfake technology can also erode trust and confidence in the digital ecosystem, making it harder to verify the authenticity and source of information.
The Dangers and Negative Uses of Deepfake Technology
As much as there may be some positives to deepfake technology, the negatives easily overwhelm the positives in our growing society. Some of the negative uses of deepfakes include:
Deepfakes can be used to create fake adult material featuring celebrities or ordinary people without their consent, violating their privacy and dignity. It has become alarmingly easy to replace one face with another and change a voice in a video. Surprising, but true.
Deepfakes can be used to spread misinformation and fake news that can deceive or manipulate the public. Deepfakes can be used to create hoax material, such as fake speeches, interviews, or events, involving politicians, celebrities, or other influential figures.
Since face swaps and voice changes can be carried out with the deepfake technology, it can be used to undermine democracy and social stability by influencing public opinion, inciting violence, or disrupting elections.
False propaganda can be created: fake voice messages and videos involving political candidates, parties, or leaders that are very hard to distinguish from the real thing can be used to sway public opinion, slander opponents, or blackmail.
Deepfakes can be used to damage reputation and credibility by impersonating or defaming individuals, organizations, or brands. Imagine a deepfake like the Keanu Reeves one on TikTok being used to create fake reviews, testimonials, or endorsements involving customers, employees, or competitors.
People who do not know about the technology are easy to convince, and when something goes wrong, the result can be reputational damage and a loss of trust in the person being impersonated.
Ethical, Legal, and Social Implications of Deepfake Technology
Ethical Implications
Deepfake technology can violate the moral rights and dignity of the people whose images or voices are used without their consent, such as creating fake pornographic material, slanderous material, or identity theft involving celebrities or regular people. Deepfake technology can also undermine the values of truth, trust, and accountability in society when used to spread misinformation, fake news, or propaganda that can deceive or manipulate the public.
Legal Implications
Deepfake technology can pose challenges to the existing legal frameworks and regulations that protect intellectual property rights, defamation rights, and contract rights, as it can infringe on the copyright, trademark, or publicity rights of the people whose images or voices are used without their permission.
Deepfake technology can violate the privacy rights of the people whose personal data are used without their consent. It can defame the reputation or character of the people who are falsely portrayed in a negative or harmful way.
Social Implications
Deepfake technology can have negative impacts on the social well-being and cohesion of individuals and groups, as it can cause psychological, emotional, or financial harm to the victims of deepfake manipulation, who may suffer from distress, anxiety, depression, or loss of income. It can also create social divisions and conflicts among different groups or communities, inciting violence, hatred, or discrimination against certain groups based on their race, gender, religion, or political affiliation.
Imagine having deepfake videos of world leaders declaring war, making false confessions, or endorsing extremist ideologies. That could be very detrimental to the world at large.
I am afraid that in the future, deepfake technology could be used to create more sophisticated and malicious forms of disinformation and propaganda if not controlled. It could also be used to create fake evidence of crimes, scandals, or corruption involving political opponents or activists or to create fake testimonials, endorsements, or reviews involving customers, employees, or competitors.
Read More: Detecting and Regulating Deepfake Technology: The Challenges! Part II
rapeculturerealities · 8 months
It’s not just Taylor Swift: AI-generated porn is targeting women and kids all over the world | CNN Business
The circulation of explicit and pornographic pictures of megastar Taylor Swift this week shined a light on artificial intelligence’s ability to create convincingly real, damaging – and fake – images.
But the concept is far from new: People have weaponized this type of technology against women and girls for years. And with the rise and increased access to AI tools, experts say it’s about to get a whole lot worse, for everyone from school-age children to adults.
Already, some high school students across the world, from New Jersey to Spain, have reported that their faces were manipulated by AI and shared online by classmates. Meanwhile, a young well-known female Twitch streamer discovered her likeness was being used in a fake, explicit pornographic video that spread quickly throughout the gaming community.
“It’s not just celebrities [targeted],” said Danielle Citron, a professor at the University of Virginia School of Law. “It’s everyday people. It’s nurses, art and law students, teachers and journalists. We’ve seen stories about how this impacts high school students and people in the military. It affects everybody.”
But while the practice isn’t new, Swift being targeted could bring more attention to the growing issues around AI-generated imagery. Her enormous contingent of loyal “Swifties” expressed their outrage on social media this week, bringing the issue to the forefront. In 2022, a Ticketmaster meltdown ahead of her Eras Tour concert sparked rage online, leading to several legislative efforts to crack down on consumer-unfriendly ticketing policies.
reasonsforhope · 7 months
"Major technology companies signed a pact on Friday to voluntarily adopt "reasonable precautions" to prevent artificial intelligence (AI) tools from being used to disrupt democratic elections around the world.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. 
Twelve other companies - including Elon Musk's X - are also signing on to the accord...
The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio, and video "that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote".
The companies aren't committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. 
It notes the companies will share best practices and provide "swift and proportionate responses" when that content starts to spread.
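As a rough sketch of what "detect and label" content at creation or distribution could look like in practice, here is a toy provenance check in Python. Everything in it is an assumption for illustration — the field names, function names, and the shared HMAC key are mine, and real provenance systems (such as the C2PA content-credentials standard) use certificate-based signatures rather than a shared secret:

```python
# Toy "label at creation, verify at distribution" flow: a generator attaches
# a signed provenance manifest to its output, and a platform verifies the
# manifest on upload before deciding whether to apply an "AI-generated" label.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical shared secret

def make_manifest(media_bytes, generator_name):
    """What an AI tool could emit alongside a generated image or video."""
    claims = {
        "generator": generator_name,
        "ai_generated": True,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_manifest(media_bytes, manifest):
    """What a platform could run on upload before labeling the content."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest claims were tampered with
    # The media itself must match the hash recorded at creation time.
    return manifest["claims"]["sha256"] == hashlib.sha256(media_bytes).hexdigest()

media = b"...synthetic video bytes..."
manifest = make_manifest(media, "example-image-model")
print(verify_manifest(media, manifest))            # untouched media: label as AI
print(verify_manifest(b"edited bytes", manifest))  # re-encoded/edited: hash fails
```

The obvious limitation, and one reason the accord stops short of promising detection: a stripped or never-attached manifest proves nothing, so labeling schemes like this only help with cooperative generators, not adversarial ones.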
Lack of binding requirements
The vagueness of the commitments and the lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.
"The language isn't quite as strong as one might have expected," said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. 
"I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we'll be keeping an eye on whether they follow through." ...
Several political leaders from Europe and the US also joined Friday’s announcement. European Commission Vice President Vera Jourova said while such an agreement can’t be comprehensive, "it contains very impactful and positive elements".  ...
[The Accord and Where We're At]
The accord calls on platforms to "pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression".
It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.
Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven't yet rolled out and the companies have faced pressure to do more.
That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.
The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law [in the US], but that doesn't cover audio deepfakes when they circulate on social media or in campaign advertisements.
Many social media companies already have policies in place to deter deceptive posts about electoral processes - AI-generated or not... 
[Signatories Include]
In addition to the companies that helped broker Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.
Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn't immediately respond to a request for comment on Friday.
The inclusion of X - not mentioned in an earlier announcement about the pending accord - was one of the surprises of Friday's agreement."
-via EuroNews, February 17, 2024
--
Note: No idea whether this will actually do much of anything (would love to hear from people with experience in this area on how significant this is), but I'll definitely take it. Some of these companies may even mean it! (X/Twitter almost definitely doesn't, though).
Still, like I said, I'll take it. Any significant move toward tech companies self-regulating AI is a good sign, as far as I'm concerned, especially a large-scale and international effort. Even if it's a "mostly symbolic" accord, the scale and prominence of this accord is encouraging, and it sets a precedent for further regulation to build on.
a novel deepfake-detection method using the Gini coefficient!
According to Nature, researchers at the University of Hull in the U.K. used the CAS system—a method used by astronomers that measures concentration, asymmetry and smoothness (or clumpiness) of galaxies—and a statistical measure of inequality called the Gini coefficient to analyze the light reflected in a person’s eyes in an image.
“It’s not a silver bullet, because we do have false positives and false negatives,” Kevin Pimbblet, director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull, U.K., said when he presented the research at the U.K. Royal Astronomical Society’s National Astronomy Meeting last week, according to Nature. Pimbblet said that in a real photograph of a person, the reflections in one eye should be “very similar, although not necessarily identical,” to the reflections in the other.
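The Nature report doesn't include code, but the Gini half of the idea can be sketched in a few lines of plain Python: in a genuine photograph, both eyes reflect the same light sources, so the inequality of their pixel-intensity distributions should be similar, while generative models often get one eye wrong. Everything below is an assumption for illustration — the function names, the toy pixel values, and the 0.1 tolerance are mine, not the Hull team's:

```python
# Sketch: compare how concentrated ("unequal") the brightness is in each
# eye's reflection region, using the Gini coefficient as the summary statistic.

def gini(values):
    """Gini coefficient of non-negative pixel intensities:
    0 = perfectly uniform, values near 1 = highly concentrated."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the ascending-sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def eyes_look_consistent(left_eye_pixels, right_eye_pixels, tolerance=0.1):
    """Flag a face as suspicious when the two eyes' reflection
    distributions have very different Gini coefficients."""
    return abs(gini(left_eye_pixels) - gini(right_eye_pixels)) <= tolerance

# A real photo: both eyes show a similar bright specular highlight.
real_left  = [0, 0, 5, 5, 200, 210, 0, 0]
real_right = [0, 0, 6, 4, 195, 215, 0, 0]

# A synthetic face: one eye has a highlight, the other is nearly flat.
fake_left  = [0, 0, 5, 5, 200, 210, 0, 0]
fake_right = [40, 50, 45, 55, 60, 50, 45, 55]

print(eyes_look_consistent(real_left, real_right))  # similar → True
print(eyes_look_consistent(fake_left, fake_right))  # mismatched → False
```

As Pimbblet says, this is no silver bullet: compression, lighting, and pose can make real eyes disagree too, which is exactly where the false positives and false negatives come from.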
odinsblog · 7 months
These are demo videos made from prompts on OpenAI’s Sora. It’s similar to how you would prompt ChatGPT and get text or a still image output, but with Sora the output is video. (source)
I cynically believe that by November, Sora will have perfected its algorithm enough to make the upcoming 2024 election online ads … very interesting.
And even after the terrible job that Facebook, Instagram and Twitter (never calling it x) did in the 2016 elections and Brexit, they somehow still decided to cut back on their departments that could at least theoretically curtail attempts at political disinformation.
Anyway, be forewarned: Social media manipulation and disinformation campaigns are very real things. Don’t believe everything you see on social media. Slightly similar A.I. deepfake technologies already exist. (example) (example) (example) (example)