#how deepfakes work
reallytoosublime · 7 months
Text
[embedded YouTube video]
This video is all about the dangers of deepfake technology. In short, deepfake technology is a type of AI that is able to generate realistic, fake images of people. This technology has the potential to be used for a wide variety of nefarious purposes, from porn to political manipulation.
Deepfake technology has emerged as a significant concern in the digital age, raising alarm about its potential dangers and the need for effective detection methods. Deepfakes refer to manipulated or synthesized media content, such as images, videos, or audio recordings, that convincingly replicate real people saying or doing things they never did. While deepfakes can have legitimate applications in entertainment and creative fields, their malicious use poses serious threats to individuals, organizations, and society as a whole.
The dangers of deepfakes are not widely understood, and that in itself is a threat. There is no guarantee that what you see online is real, and deepfakes have narrowed the gap between fake and real content. Even though the technology can be used to create innovative entertainment projects, it is also being heavily misused by cybercriminals, and if it is not properly monitored by law enforcement, things are likely to get out of hand quickly.
Deepfakes can be used to spread false information, which can have severe consequences for public opinion, political discourse, and trust in institutions. A realistic deepfake video of a public figure could be used to disseminate fabricated statements or actions, leading to confusion and the potential for societal unrest.
Cybercriminals can exploit deepfake technology for financial gain. By impersonating someone's voice or face, scammers could trick individuals into divulging sensitive information, making fraudulent transactions, or even manipulating people into thinking they are communicating with a trusted source.
Deepfakes have the potential to disrupt democratic processes by distorting the truth during elections or important political events. Fake videos of candidates making controversial statements could sway public opinion or incite conflict.
The Dangers of Deepfake Technology and How to Spot Them
0 notes
songmingisthighs · 30 days
Text
dear "creators" who depend on ai
how does it feel to be so absolutely lazy and useless as a human being ?? how does it feel to have a functioning brain but can't use it properly ?? how does it feel to be so absolutely incredibly dull and uncreative that you can't muster 200 words to convey your ideas ?? how does it feel to have to face the fact that you suck so damn bad and you have to live with it ??
16 notes · View notes
c-h-stevens · 8 months
Text
A couple of months ago radfems on Xitter were making some noise about readers of Booktok smut being "porn addicted". Tumblr didn't take them seriously, which is as it should be. However, I don't think many people know the real issue with this "porn addiction" thing: namely, that it's a fake disorder invented by religious conservatives to shame people.
See, the people who take porn addiction seriously don't mean it in a metaphorical way, as in "I'm addicted to potato chips" or whatever. They literally claim that watching too much internet porn will rewire your brain chemistry just like shooting heroin. If you think that sounds unlikely, scientists generally agree with you. There's no solid evidence for porn addiction. The Wikipedia page calls it "scientifically controversial" and "hotly contested". It's also peppered with [citation needed]s from both sides. On the other hand, there are papers such as this one (paywalled) that link "religiosity and moral disapproval" with "perceived addiction to pornography". Unsurprisingly, the promoters of porn addiction tend to scaremonger about masturbation, first and foremost among them the NoFap movement, which is a hotbed of alt-right bullshit and all sorts of bigotries. (Want another paper?)
You might wonder, if a belief in porn addiction comes from the religious right instead of credible evidence, and one of its loudest proponents is a very anti-feminist group, why would radical feminists accuse anybody of being porn addicted? Well, that's because radfems are dumb as bricks. They've been holding hands with the religious right for decades on the anti-porn issue. This is a short and concise article on the "lesbian sex wars", where radfems burned lesbian BDSM books to defeat the patriarchy. (I meant it when I said they're dumb as bricks.)
In my own personal observation, radfems seem to have ramped up their anti-porn side lately, probably because blatant, in-your-face transphobia has become less popular in progressive circles while the internet as a whole has gotten more puritanical, so it makes sense they'd emphasize the anti-porn angle instead. With USAmerican Republicans trying to fuck the internet up even more, I wouldn't be surprised if radfems kept supporting the Leopards Eating People's Faces Party with this nonsense. Hence this whole essay.
Radfems have admitted to toning down their views to try and "recruit" people they see as women/more mainstream feminists, so if you get a comment mentioning porn addiction, especially anonymously, it's likely a radfem peddling nonsense. If I thought they were clever enough, I'd say they were pushing the whole "Booktok porn addicts" angle to make other takes on the idea seem more credible. Sure, it's silly to call a reader of romance novels porn addicted, but what about a fratboy who's always on PornHub? Surely he's porn addicted, right? Well, no. Because porn addiction isn't a thing, and the people taking it seriously are trying to make you buy into censorship and overly strict control of the internet. Things that are going to harm marginalized people first, as anybody who isn't dumb as bricks can tell.
13 notes · View notes
sainamoonshine · 1 year
Text
A note to all creatives:
Right now, you have to be a team player. You cannot complain about AI being used to fuck over your industry and then turn around and use it on somebody else’s industry.
No AI book covers. No making funny little videos using deepfakes to make an actor say stuff they never did. No AI translation of your book. No AI audiobooks. No AI generated moodboards or fancasts or any of that shit. No feeding someone else's unfinished work into ChatGPT "because you just want to know how it ends*" (what the fuck is wrong with you?). No playing around with AI generated 3D assets you can't ascertain the origin of. None of it. And stop using AI filters on your selfies, or ESPECIALLY using AI on somebody else's photo or artwork.
We are at a crossroads and at a time of historically shitty conditions for working artists across ALL creative fields, and we gotta stick together. And you know what? Not only is standing up for other artists against exploitation and theft the morally correct thing to do, it's also the professionally smartest thing to do. Because the corporations will fuck you over too, and when they do, it's your peers who will hold you up. And we have a long memory.
Don’t make the mistake of thinking “your peers” are only the people in your own industry. Writers can’t succeed without artists, editors, translators, etc making their books a reality. Illustrators depend on writers and editors for work. Video creators co-exist with voice actors and animators and people who do 3D rendering etc. If you piss off everyone else but the ones who do the exact same job you do, congratulations! You’ve just sunk your career.
Always remember: the artists who succeed in this career path, the ones who get hired or are sought after for commissions or collaboration, they aren't the super talented "fuck you I got mine" types. They're the ones who show up to do the work and are easy to get along with.
And they especially are not scabs.
*that's not even how it ends; that's just a statistically likely and creatively boring way for it to end. Why would you even want to read that.
60K notes · View notes
lizardsfromspace · 2 months
Text
There was lots of concern about AI and deepfakes spreading misinformation this election cycle, so let's check in on that!
A lot of right-wing accounts are trying to prove that Kamala Harris' crowds are fake - that all the photos of them are generated with AI. We'll get to that, but let's look at the most...amusing example first.
There's lots of people pointing at an image of a crowd that's obviously been generated with AI, due to extra arms and gibberish writing.
[images: the AI-generated crowd photo, with extra arms and gibberish text visible]
So obviously someone generated this image with AI. Who was it and why? Well, we can actually find the origin pretty easily. It was...
[image: the right-wing satire account's original post]
A right-wing satire account. Who put "unexcited Kamala Harris crowd" into an AI image generator to make a "wow, crowds are electric!" joke.
An image their own side generated as satire is now being spread by the right as something the Harris/Walz campaign created, as supposed proof that they really are faking their crowds. Incredible. Just a masterclass.
Now let's look at the dark stuff.
By and large, AI isn't being used for hoaxes. AI is being used as an excuse: people aren't being tricked by AI images, they're being tricked by accusations that real images are AI.
So this hoax went around recently...
[image: the recycled photo claiming to show an empty Harris/Walz rally at the Phoenix Convention Center]
This image is old. It spread in 2017, it spread in 2020, and now it's spread in 2024. The Harris/Walz rally wasn't even in the Phoenix Convention Center, or in Phoenix. But it's now a core part of the ongoing "no one's attending their rallies!" campaign.
This may seem strange if you get your news from non-conservative media, where you can't escape stories of Kamala Harris filling up massive stadiums & of Trump rallies full of empty seats. All evidence in reality points to Kamala Harris being extremely popular and to Trump's campaign faltering.
But in MAGA land, Kamala Harris' crowds were generated with AI.
[images: posts claiming the crowd photo from the plane's arrival is AI-generated]
Their claims: the crowd isn't in the reflection, and uh, the arms look weird. But also that there "aren't any other images".
But? There are? There was a livestream of that very plane landing (starting at 25)!
They're dissecting what they think is The Only Image of this rally in long YouTube videos and on Twitter and TikTok, and just... don't realize there's full, uncut, commentary-free video of it that was broadcast live. So why don't they realize that?
Well, I did a search for this rally on Fox News' website, and guess what? They covered it exactly twice: once in an interview with her as she was leaving, which slammed her for not taking enough press questions, and once in a few clips of the Palestinian protestors at it, but not many, bc if you don't follow the far right, they're trying to frame Kamala Harris as, like, a radical pro-Palestinian (or, as they say, "pro-Hamas") activist who wants to destroy Israel, which, uhhhhhhhh
But another thread I found debunking this, by a former Trump-voting evangelical conservative turned critic of the same, gets at the heart of it
[image: screenshot from that debunking thread]
People who said AI and deepfakes would be used to mislead the right were missing how misinformation actually works. Fox News et al don't suppress information by confusing the audience with misinformation; they suppress information by never letting them see it in the first place. They know that they have a captive audience who doesn't watch non-right-wing news (unlike the left, who are constantly aware of what's going on over at Fox).
They can just never mention or show Kamala Harris' rallies, or do so only in close-up, and they can frame their Trump coverage to show only shots where a crowd has gathered & make it seem like that's the norm, and their audience just has no chance to find out the truth. They're so propagandized that they just accept that there are no other images of that Kamala Harris rally, because, well, they were told there weren't, and would the news personalities they trust really lie to them?
And if any stray bits of reality float into the bubble, well, it was just AI. You know, how they have AI these days? AI's most important role in all this isn't as a vector for misinformation, it's as a rhetorical device for claiming real images and video are misinformation. You don't have to make images full of people with weird hands if you can get people circling real people's hands in red and pointing at it to prove reality was made in a computer.
These people don't know how popular Kamala Harris is. They don't know she's polling eight points higher than Trump. They don't know Harris is leading in right-wing biased polls. They're being told she's hiring actors at rallies, that her crowd photos are generated by AI, that Trump rallies are popular and hers are desolate.
This is only likely to increase as we near the election because, well, Trump '24 is a shitshow. His campaign started out less popular than '16 or even '20, he picked maybe the worst VP pick in history, and he hasn't made a single effective attack on Harris/Walz. They were banking on facing Biden, and then on a chaotic open convention, but instead everyone closed ranks around Harris, and she instantly became the most popular Democratic candidate since 2008. We are cruising towards Trump/Vance not only losing, but losing in the closest thing to a landslide that's possible in our current system.
They're already laying the groundwork for, in the likely case of defeat, playing the "she stole the election!" card. Last time it was ad-hoc, because Trump thought he'd win. This time, they're already making "all her rallies are fake, all of her supporters are AI, they've already rigged it against Trump" a key strategy, and, I have to assume, their primary strategy as it gets closer to election day and the polls get worse. They've always lived in a bubble, but now it's a bubble designed explicitly to cause another January 6th. By claiming real photos have weird hands, and must totally be AI
999 notes · View notes
hashtagloveloses · 2 years
Text
this is an earnest and honest plea and call in especially to fandoms as i see it happen more - please don't use AI for your transformative works. by this i mean, making audios of actors who play the characters you love saying certain things, making deepfakes of actors or even animated characters' faces. playing with chatGPT to "talk" or RP with a character, or write funny fanfiction. using stable diffusion to make interesting "crossover" AI "art." i KNOW it's just for fun and it is seemingly harmless but it's not. since there is NO regulation and since some stuff is built off of stable diffusion (which uses stolen artwork and data), it is helping to create a huge and dangerous mess. when you use an AI to deepfake actors' voices to make your ship canon or whatever, you help train it so people can use it for deepfake revenge porn. or so companies can replace these actors with AI. when you RP with chatGPT you help train it to do LOTS of things that will be used to harm SO many people. (this doesn't even get into how governments will misuse and hurt people with these technologies) and yes that is not your fault and yes it is not the technology's fault it is the companies and governments that will and already have done things but PLEASE. when you use an AI snapchat or instagram or tiktok filter, when you use an AI image generator "just for fun", when you chat with your character's "bot," you are doing IRREPARABLE harm. please stop.
8K notes · View notes
txttletale · 1 year
Note
Just wanna warn you, sag and wga are anti ai
being against netflix or WB using AI to write scripts or generate deepfaked actors is good. 'anti ai' and 'pro ai' are not two coherent positions--everything must be examined in its context. saying 'our union doesn't want studios to implement AI writing processes and then credit the AI instead of the human writers who work on it' is different to silly arguments about how we need new copyright protections because DALL-E is 'stealing your style'
5K notes · View notes
ladyshinga · 8 months
Text
I keep being told to "adapt" to this new AI world.
Okay.
Well first of all, I've been training myself more and more to spot fake images. I've been reading every article with a more critical eye to see if it's full of ChatGPT's nonsense. I've been ignoring half the comments on stuff, just assuming it's now mostly bots trying to make people angry enough to comment.
When it comes to the news and social issues, I've started to focus on and look for specific journalists and essayists whose work I trust. I've been working on getting better at double-checking and verifying things.
I have been working on the biggest part, and this one is a hurdle: PEOPLE. People whose names and faces I actually know. TALKING to people. Being USED to talking to people. Actual conversations with give and take that a chat bot can't emulate even if their creators insist they can.
All of this combined is helping me survive an AI-poisoned internet, because here's what's been on my mind:
What if the internet was this poisoned in 2020?
Would we have protested after George Floyd?
A HUGE number of people followed updates about it via places like Twitter and Tiktok. Twitter is now a bot-hell filled with nazis and owned by a petulant anti-facts weirdo, and Tiktok is embracing AI so hard that it gave up music so that its users can create deepfakes of each other.
Would information have traveled as well as it did? Now?
The answer is no. Half the people would have called the video of Floyd's death a deepfake, AI versions of it would be everywhere to sow doubt about the original, bots would be pushing hard for people to do nothing about it, half the articles written about it would be useless ChatGPT garbage, and the protests themselves… might just NOT have happened. Or at least, they'd be smaller - AND more dangerous when it comes to showing your face in a photo or video - because NOW what can people DO with that photo and video? The things I mentioned earlier will help going forward. Discernment. Studying how the images look, how the fake audio sounds, how the articles often talk in circles and are littered with contradictory misinformation. And PEOPLE.
PEOPLE is the biggest one here, because if another 2020-level event happens where we want to be protesting on the streets by the thousands, our ONLY recourse right now is to actually connect with people. Carefully of course, it's still a protest, don't use Discord or something, they'll turn your chats over to cops.
But what USED to theoretically be "simple" when it came to leftist organizing ("well my tweet about it went viral, I helped!") is just going to require more WORK now, and actual personal communication and connection and community. I know if you're reading this and you're American, you barely know what that feels like and I get it. We're deprived of it very much on purpose, but the internet is becoming more and more hostile to humanity itself. When it comes to connecting to other humans… we now have to REALLY connect to other humans
I'm sorry. This all sucks. But adapting usually does.
485 notes · View notes
rainbowsuitcase · 1 year
Text
Armys, don’t use AI
"But why? It’s cute!" It also looks realistic and not everyone will want to use it "because it’s cute." AI can be used to create fake porn, harmful and defamatory deepfakes or to spread misinformation.
"I don’t have those intentions!" By giving AI the prompts you do, you are actively helping train it to look more realistic (among other things), which could in the future help someone who does have the intentions to hurt BTS.
The same goes for AI created songs - you are helping train AI to sound more like them which, again, someone could in the future use to hurt them.
There is no middle ground here. There is no "oh, I only make it create nice stuff!" It doesn’t work that way.
Not to mention, that it’s disrespectful to real artists who spend hours working on their pieces, putting actual energy and love into them.
Not to mention, and now this is just my opinion, creating (half) naked pictures of them is disrespectful to BTS themselves. They always choose when they feel comfortable showing us their bodies (e.g. music videos or photoshoots, in contrast with behind-the-scenes videos).
Arguing that it's not really their bodies doesn't make it better, in my opinion. Because how do those fake bodies look? Thin and muscular and perfect, by mainstream beauty standards. We know that some members have struggled with their body image in the past, and there's a strong diet culture in the kpop industry in general, and you, thirsting over those perfect, fake bodies, are only encouraging it.
Say no to the robot. Enjoy the real BTS and support real artists. Please.
560 notes · View notes
odinsblog · 4 months
Text
“Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.
After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named “Sky” sounded like me.
When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word “her” - a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.
Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.
As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the “Sky” voice. Consequently, OpenAI reluctantly agreed to take down the “Sky” voice.
In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”
—Scarlett Johansson
220 notes · View notes
roach-works · 5 months
Text
for all its (apparently many?) flaws, i really enjoyed the fallout show, and i'm ride or die for maximus, obviously. but one of the things i enjoyed about lucy's arc is that she wasn't necessarily proved RIGHT or WRONG about her own moral code; she didn't learn either that kindness is its own reward or that niceness is suicidal in a fight for survival.
what she learned, i am pretty sure, is that context matters. you can't actually help people if you don't know anything about them. you can't enact justice if you don't know what the case on trial is. you can't come in out of nowhere and make snap decisions and be anything more than one more complication in a situation that was fucked up long before you were born.
that's what we see over and over: she comes in out of nowhere, she makes an attempt to help based on her immediate assumption of what's going on, and then everything continues to be dangerous and complicated and fucked up. she doesn't let the stoners explain that some ghouls will genuinely try to eat you the minute they get the chance, and she pays for it. she jumps to the wrong conclusion in vault 4 because not everyone who looks like a monster IS a monster, and she pays for it. yeah a lot of the time cooper is abusing her for his own satisfaction, but when she's a free agent she's a loose cannon, and it's not because the show is punishing her for TRYING to do the right thing. it's because the show is punishing her for jumping to conclusions.
this show gets a lot of laughs from Fish Out Of Water situations, but i think that even though cooper explicitly says "you'll change up here and not for the better, you'll become corrupted and selfish just to survive" that's not the real message. what lucy learns is how important it is to hear people out, meet them where they're at, and get the full story.
that's why the final confrontation with her father is so important. she hears everyone out. she gets the full story. she listens to all of it. and then she acts with full knowledge of the situation. that's what the wasteland taught her: not to be cruel, not to be selfish, but that taking the time to understand what's actually going on really matters.
this is a show that's incredibly concerned with truth and lies. everyone is lying to each other and themselves. scenes change over and over as they're recontextualized. love and hate and grief and hope are just motives in a million interconnected shell games, not redeeming justifications. maximus's many compounded falsehoods are approved of by his own superior, who finds a corrupt pawn more useful than an honorable one. cooper finds out his wife has her own private agenda and this betrayal keeps him going for centuries. lucy's entire society is artificial and from the moment they find out they're not safe and maybe never have been, all the vault dwellers are scrambling to deal with that.
ANYWAY. i just think it's neat. sci fi is a lens to analyze our present through a hypothetical future, and i think it's pretty significant for this current age we live in, where we're all grappling with misinformation, conspiracy theories, propaganda, and deepfakes, and there's a huge anxiety over how hard it can be to find out the truth about anything. i think the show suggests that it's always worth the work to try.
183 notes · View notes
leftistfeminista · 6 months
Text
Alexandria Ocasio-Cortez was in a car talking with her staffers about legislation and casually scrolling through her X mentions when she saw the photo. It was the end of February, and after spending most of the week in D.C., she was looking forward to flying down to Orlando to see her mom after a work event. But everything left her mind once she saw the picture: a digitally altered image of someone forcing her to put her mouth on their genitals. Adrenaline coursed through her, and her first thought was “I need to get this off my screen.” She closed out of it, shaken.
“There’s a shock to seeing images of yourself that someone could think are real,” the congresswoman tells me. It’s a few days after she saw the disturbing deepfake, and we’re waiting for our food in a corner booth of a retro-style diner in Queens, New York, near her neighborhood. She’s friendly and animated throughout our conversation, maintaining eye contact and passionately responding to my questions. When she tells me this story, though, she slows down, takes more pauses and plays with the delicate rings on her right hand. “As a survivor of physical sexual assault, it adds a level of dysregulation,” she says. “It resurfaces trauma, while I’m trying to — in the middle of a fucking meeting.”
The violent picture stayed in Ocasio-Cortez’s head all day. 
“There are certain images that don’t leave a person, they can’t leave a person,” she says. “It’s not a question of mental strength or fortitude — this is about neuroscience and our biology.” She tells me about scientific reports she’s read about how it’s difficult for our brains to separate visceral images on a phone from reality, even if we know they are fake. “It’s not as imaginary as people want to make it seem. It has real, real effects not just on the people that are victimized by it, but on the people who see it and consume it.”
“And once you’ve seen it, you’ve seen it,” Ocasio-Cortez says. “It parallels the same exact intention of physical rape and sexual assault, [which] is about power, domination, and humiliation. Deepfakes are absolutely a way of digitizing violent humiliation against other people.”
225 notes · View notes
mariacallous · 21 days
Text
As national legislation on deepfake pornography crawls its way through Congress, states across the country are trying to take matters into their own hands. Thirty-nine states have introduced a hodgepodge of laws designed to deter the creation of nonconsensual deepfakes and punish those who make and share them.
Earlier this year, Democratic congresswoman Alexandria Ocasio-Cortez, herself a victim of nonconsensual deepfakes, introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act, or Defiance Act. If passed, the bill would allow victims of deepfake pornography to sue as long as they could prove the deepfakes had been made without their consent. In June, Republican senator Ted Cruz introduced the Take It Down Act, which would require platforms to remove both revenge porn and nonconsensual deepfake porn.
Though there's bipartisan support for many of these measures, federal legislation can take years to make it through both houses of Congress before being signed into law. But state legislatures and local politicians can move faster—and they're trying to.
Last month, San Francisco City Attorney David Chiu’s office announced a lawsuit against 16 of the most visited websites that allow users to create AI-generated pornography. “Generative AI has enormous promise, but as with all new technologies, there are unintended consequences and criminals seeking to exploit the new technology. We have to be very clear that this is not innovation—this is sexual abuse,” Chiu said in a statement released by his office at the time.
The suit was just the latest attempt to try to curtail the ever-growing issue of nonconsensual deepfake pornography.
“I think there's a misconception that it's just celebrities that are being affected by this,” says Ilana Beller, organizing manager at Public Citizen, which has been tracking nonconsensual deepfake legislation and shared their findings with WIRED. “It's a lot of everyday people who are having this experience.”
Data from Public Citizen shows that 23 states have passed some form of nonconsensual deepfake law. “This is such a pervasive issue, and so state legislators are seeing this as a problem,” says Beller. “I also think that legislators are interested in passing AI legislation right now because we are seeing how fast the technology is developing.”
Last year, WIRED reported that deepfake pornography is only increasing, and researchers estimate that 90 percent of deepfake videos are of porn, the vast majority of which is nonconsensual porn of women. But despite how pervasive the issue is, Kaylee Williams, a researcher at Columbia University who has been tracking nonconsensual deepfake legislation, says she has seen legislators more focused on political deepfakes.
“More states are interested in protecting electoral integrity in that way than they are in dealing with the intimate image question,” she says.
Matthew Bierlein, a Republican state representative in Michigan, who cosponsored the state’s package of nonconsensual deepfake bills, says that he initially came to the issue after exploring legislation on political deepfakes. “Our plan was to make [political deepfakes] a campaign finance violation if you didn’t put disclaimers on them to notify the public.” Through his work on political deepfakes, Bierlein says, he began working with Democratic representative Penelope Tsernoglou, who helped spearhead the nonconsensual deepfake bills.
At the time in January, nonconsensual deepfakes of Taylor Swift had just gone viral, and the subject was widely covered in the news. "We thought that the opportunity was the right time to be able to do something," Bierlein says. And Bierlein says that he felt Michigan was in the position to be a regional leader in the Midwest, because, unlike some of its neighbors, it has a full-time legislature with well-paid staffers (most states don't). "We understand that it's a bigger issue than just a Michigan issue. But a lot of things can start at the state level," he says. "If we get this done, then maybe Ohio adopts this in their legislative session, maybe Indiana adopts something similar, or Illinois, and that can make enforcement easier."
But what the penalties for creating and sharing nonconsensual deepfakes are—and who is protected—can vary widely from state to state. “The US landscape is just wildly inconsistent on this issue,” says Williams. “I think there's been this misconception lately that all these laws are being passed all over the country. I think what people are seeing is that there have been a lot of laws proposed.”
Some states allow for civil and criminal cases to be brought against perpetrators, while others might only provide for one of the two. Laws like the one that recently took effect in Mississippi, for instance, focus on minors. Over the past year or so, there has been a spate of instances of middle and high schoolers using generative AI to make explicit images and videos of classmates, particularly girls. Other laws focus on adults, with legislators essentially updating existing laws banning revenge porn.
Unlike laws that focus on nonconsensual deepfakes of minors, on which Williams says there is a broad consensus that they are an "inherent moral wrong," legislation around what is "ethical" when it comes to nonconsensual deepfakes of adults is "squishier." In many cases, laws and proposed legislation require proving intent, that the goal of the person making and sharing the nonconsensual deepfake was to harm its subject.
But online, says Sara Jodka, an attorney who specializes in privacy and cybersecurity, this patchwork of state-based legislation can be particularly difficult. “If you can't find a person behind an IP address, how can you prove who the person is, let alone show their intent?”
Williams also notes that in the case of nonconsensual deepfakes of celebrities or other public figures, many of the creators don’t necessarily see themselves as doing harm. “They’ll say, ‘This is fan content,’ that they admire this person and are attracted to them,” she says.
State laws, Jodka says, while a good start, are likely to have limited power to actually deal with the issue, and only a federal law against nonconsensual deepfakes would allow for the kind of interstate investigations and prosecutions that could really force justice and accountability. "States don't really have a lot of ability to track down across state lines internationally," she says. "So it's going to be very rare, and it's going to be very specific scenarios where the laws are going to be able to even be enforced."
But Michigan’s Bierlein says that many state representatives are not content to wait for the federal government to address the issue. Bierlein expressed particular concern about the role nonconsensual deepfakes could play in sextortion scams, which the FBI says have been on the rise. In 2023, a Michigan teen died by suicide after scammers threatened to post his (real) intimate photos online. “Things move really slow on a federal level, and if we waited for them to do something, we could be waiting a lot longer,” he says.
96 notes · View notes
olderthannetfic · 21 days
Note
Do you think deepfakes count as fictional, reality, or a whole other thing?
I ask this because it's genuinely the biggest news in the kpop space currently. Deepfakes of idols have always existed, I've known about them since 2016 when i first became a fan and tbh, I didn't think much about them. Also, I'm sure people have been editing celebrities and people they know into porn for as long as they've had the ability to. It's just much more accessible and easy to do now.
However, the issue has blown up both internationally and in Korea after a middle school student committed suicide due to being deepfaked.
The discussion now is whether this counts as sexual harassment and abuse, or whether it's rpf. It's been a kpop staple for years and people are drawing parallels between it and fanfic. to me deepfakes are worse because they're meant to look exactly like the idols. It's difficult to distinguish them from reality, while fanfiction is just that. Fanfic. No one is reading it and thinking it's really real and the idols actually do the freak nasty after work or smth.
What are your thoughts? I ask bc you and your followers generally are more nuanced and I'm interested in your thoughts on the matter.
--
I've been asked this exact question a number of times. The closest thing is, of course, the photomanips that were hugely popular in some parts of fandom in like 2005.
The difference between those (or the same thing done with better technology now) and deepfakes is that the fannish versions are openly branded as fake. RPF is the same way. It's posted to fannish archives or on fanworks-oriented social media accounts. It often has watermarks on it indicating it's a fanwork, and that's assuming it isn't... like... a photomanip of somebody as a centaur or something. Fanfic, even fanfic by tinhats, is labeled as fic, not as tabloid news.
Deepfakes intended for harassment are presented as real. A shitty person could take certain types of fan art and repurpose them for this, sure, but that's not the fault of the fan artist. Disguising the fact that this is fan-made (or, well, hater-made) art is the whole point.
How realistic the art is isn't the main issue: it's whether they're pretending it's real.
Now, is drawing cartoony offensive art of your middle school classmate harassment? It could be. Even if it's obviously art and not a photo, it could certainly be used intentionally to bully. A lot of things can be part of a bullying campaign. It's obvious how fake photos could easily be more traumatic and cause more trouble more quickly, but there isn't really a huge difference between known-to-be-fake "photos" and cartoony rude art, nor is there a huge difference between fake photo "evidence" and lying gossip "evidence".
The intent and the social context are what matter here. The photorealism just makes it easier to achieve a specific bullying aim.
--
But, yes, given the current technological situation, if you are a person who uses AI to make celebrity porn or something, the ethical thing to do is to watermark the hell out of it and/or make it less photorealistic and/or make sure it's posted to some porn site with attribution where people can easily go to debunk it should it escape containment and be passed around as The Real Thing. You can't control what assholes do, but you can at least put some roadblocks in place so they can't misuse your stuff as easily.
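(If you want a concrete idea of what "watermark the hell out of it" can look like in practice, here's a minimal sketch using Python's Pillow library. The filenames and label text are placeholders of mine, not anything from the post; the point is just that the label gets tiled across the whole image so a crop or re-upload can't quietly strip the context.)

```python
# Minimal sketch: tile a visible "fan-made / not real" label across an image.
# Assumes the Pillow library is installed (pip install Pillow); file names are placeholders.
from PIL import Image, ImageDraw, ImageFont

def watermark(src_path: str, dst_path: str, label: str = "FAN-MADE / NOT REAL") -> None:
    img = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()  # swap in a larger TTF font for real use
    # Repeat the label in a grid so no single crop can remove every copy.
    step = max(img.size) // 4 or 1
    for y in range(0, img.height, step):
        for x in range(0, img.width, step):
            draw.text((x, y), label, fill=(255, 255, 255, 140), font=font)
    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path)

# Example with hypothetical filenames:
# watermark("edit.png", "edit_watermarked.jpg")
```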
69 notes · View notes