#Use of deepfakes
swamyworld · 1 month
Text
LinkedIn co-founder Reid Hoffman deepfakes himself in a viral interview video made with this AI technology
AI Technology: Artificial intelligence is spreading rapidly around the world, including in India. The technology has many advantages and disadvantages. On the downside, deepfake content created with AI has caused people significant harm in the last few months. A deepfake is a technique in which AI is used to create a fake version of a real…
lacebird · 2 months
Text
[images]
in the s i l e n c e . . . [x]
dailyfigures · 3 months
Note
Wait, just out of curiosity, but do you actually support AI art or is that one person just being weird
oh no sorry if that was unclear, i DO NOT support AI art!!!
i just had a problem with someone accusing real artists of being AI based on nothing, and then continuing to be rude when i told them they were real people. it really was just about someone being mean for no reason.
feckcops · 11 months
Text
Joan Is Awful: Black Mirror episode is every striking actor’s worst nightmare
“A sticking point of the near-inevitable Sag-Aftra strike is the potential that AI could soon render all screen actors obsolete. A union member this week told Deadline: ‘Actors see Black Mirror’s Joan Is Awful as a documentary of the future, with their likenesses sold off and used any way producers and studios want. We want a solid pathway. The studios countered with ‘trust us’ – we don’t.’ ...
“If a studio has the kit, not to mention the balls, to deepfake Tom Hanks into a movie he didn’t agree to star in, then it has the potential to upend the entire industry as we know it. It’s one thing to have your work taken from you, but it’s another to have your entire likeness swiped.
“The issue is already creeping in from the peripheries. The latest Indiana Jones movie makes extensive use of de-ageing technology, made by grabbing every available image of Harrison Ford 40 years ago and feeding it into an algorithm. Peter Cushing has been semi-convincingly brought back to life for Star Wars prequels, something he is unlikely to have given permission for unless the Disney execs are particularly skilled at the ouija board. ITV’s recent sketch show Deep Fake Neighbour Wars took millions of images of Tom Holland and Nicki Minaj, and slapped them across the faces of young performers so adeptly that it would be very easy to be fooled into thinking that you were watching the real celebrities in action.
“Unsurprisingly, Sag-Aftra members want this sort of thing to be regulated, asking for their new labour contract to include terms about when AI likenesses can be used, how to protect against misuse, and how much money they can expect from having their likenesses used by AI.”
stealingpotatoes · 10 months
Note
How do u think the first meeting between Luke and Ahsoka went
Im sad we didn’t get to see that interaction
so I haven't watched TBOBF bc everyone told me it was dogwater but like??? for real????? they dont?? explain how ahsoka and luke meet???!?!???????? thats INSANE TO ME
ALSO BC OF THIS I WENT AND LOOKED UP CLIPS OF THE SCENE WHERE THEY TALK AND?????????? THE DEEPFAKE LIMITS THE ACTING SO MUCH ITS UNREAL LITERALLY WHY DIDNT THEY JUST RECAST HIM THE PRE-DEEPFAKE ACTOR LOOKS FINE WTF. HELP. EVERYTHING I LEARN ABT THIS SHOW IS WORSE THAN WHATEVER INFORMATION CAME BEFORE
anyway ah. i have to fix this fuckup in the universe and I don't have any fully formed ideas rn but I'm in academy luke brainrot (kofi members know this too well bc they have early access to several academy luke comics loll) so yeah. i WILL do something along the concept of them I simply have to wtf
columboscreens · 1 year
Text
[image]
livefromtheyard · 1 month
Text
[image]
if one person is gonna get me i know it's gonna be y*ngl*ng
bennymaxxing · 11 months
Text
hey modders just a reminder that using AI for your mods is as bad as AI art and visual deepfakes (especially you sick mfs using AI to make porn mods). just like AI art, you're taking work that someone gets paid for, without their consent, to get something similar for free, and idk how anyone can think that's ok
crispsandkerosene · 11 months
Text
Probably kind of a hot take, but I wish tumblr people would draw a distinction between "data scraping and companies doing whatever morally bankrupt corner-cutting maximizes their profits are bad" and "the very concept of AI is evil, I hope Disney sues and makes art styles copyrightable, stuff you tell ChatGPT will be known to all AIs ever!!"
nutmegdoggy · 5 months
Text
if you guys wanted to see how bad the tom macdonald and ben shapiro rap is but didn't wanna give them a view, here ya go
normally I'm against distributing shit from horrible people even to hatewatch it, but this is just so embarrassingly awful that I think the entertainment value is worth it
hamofjustice · 1 year
Text
[image]
gayteensupreme · 9 months
Text
This is what we're facing with AI
[image]
I got this ad on a YouTube video and instantly knew something was up. The logo looks scammy, the product is vague - and that's definitely not Elon Musk talking.
The video is low-quality on purpose because it's a shitty, yet at a glance convincing, deepfake coupled with an AI-generated voice.
The link leads to a fake BBC article, which in turn leads to some scammy-looking website.
Sooner or later, we are going to have to confront how to deal with this technology. Not everyone will be able to tell the difference.
anyway i guess if you get this ad or similar ads i encourage you to report it by clicking on the little circled "i" next to the runtime of the ad.
c-h-stevens · 5 months
Text
A couple of months ago radfems on Xitter were making some noise about readers of Booktok smut being "porn addicted". Tumblr didn't take them seriously, which is as it should be. However, I don't think many people know the real issue with this "porn addiction" thing: namely, that it's a fake disorder invented by religious conservatives to shame people.
See, the people who take porn addiction seriously don't mean it in a metaphorical way, as in "I'm addicted to potato chips" or whatever. They literally claim that watching too much internet porn will rewire your brain chemistry just like shooting heroin. If you think that sounds unlikely, scientists generally agree with you. There's no solid evidence for porn addiction. The Wikipedia page calls it "scientifically controversial" and "hotly contested". It's also peppered with [citation needed]s from both sides. On the other hand, there are papers such as this one (paywalled) that link "religiosity and moral disapproval" with "perceived addiction to pornography". Unsurprisingly, the promoters of porn addiction tend to scaremonger about masturbation, first and foremost among them the NoFap movement, which is a hotbed of alt-right bullshit and all sorts of bigotries. (Want another paper?)
You might wonder, if a belief in porn addiction comes from the religious right instead of credible evidence, and one of its loudest proponents is a very anti-feminist group, why would radical feminists accuse anybody of being porn addicted? Well, that's because radfems are dumb as bricks. They've been holding hands with the religious right for decades on the anti-porn issue. This is a short and concise article on the "lesbian sex wars", where radfems burned lesbian BDSM books to defeat the patriarchy. (I meant it when I said they're dumb as bricks.)
In my own personal observation, radfems seem to have ramped up their anti-porn rhetoric lately, probably because blatant, in-your-face transphobia has become less popular in progressive circles. The internet has also gotten more puritanical, so it makes sense they'd emphasize the anti-porn side instead. With USAmerican Republicans trying to fuck the internet up even more, I wouldn't be surprised if radfems continued to support the Leopards Eating People's Faces with this nonsense. Hence this whole essay.
Radfems have admitted to toning down their views to try and "recruit" people they see as women/more mainstream feminists, so if you get a comment mentioning porn addiction, especially anonymously, it's likely a radfem peddling nonsense. If I thought they were clever enough, I'd say they were pushing the whole "Booktok porn addicts" angle to make other takes on the idea seem more credible. Sure, it's silly to call a reader of romance novels porn addicted, but what about a fratboy who's always on PornHub? Surely he's porn addicted, right? Well, no. Because porn addiction isn't a thing, and the people taking it seriously are trying to make you buy into censorship and overly strict control of the internet. Things that are going to harm marginalized people first, as anybody who isn't dumb as bricks can tell.
Text
An Arizona state representative behind a new law that regulates deepfakes in elections used an artificial intelligence chatbot, ChatGPT, to write part of the law – specifically, the part that defines what a deepfake is.
Republican Alexander Kolodin’s bill, which passed unanimously in both chambers and was signed by the Democratic governor this week, will allow candidates or residents in Arizona to ask a judge to declare whether a supposed deepfake is real or not, giving candidates a way to debunk AI-generated misinformation.
Kolodin said he used the chatbot ChatGPT to help define what “digital impersonation” is for the bill in part because it was a fun way to demonstrate the technology. He provided a screenshot of ChatGPT’s response to the question of what a deepfake is, which is similar to language that is included in the bill’s definition.
“I am by no means a computer scientist,” Kolodin said. “And so when I was trying to write the technical portion of it, in terms of what sort of technological processing makes something a deepfake, I was kind of struggling with the terminology. So I thought to myself, well, let me just ask the subject matter expert. And so I asked ChatGPT to write a definition of what was a deepfake.”
That portion of the bill “probably got fiddled with the least – people seemed to be pretty cool with that” throughout the legislative process. ChatGPT provided the “baseline definition” and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin said.
Kolodin has used ChatGPT in other lawmaking a couple times, he said, to help write the first drafts of amendments and save time. “Why work harder when you can work smarter,” Kolodin replied on Twitter when an Arizona reporter tweeted about his use of ChatGPT in the bill.
The federal government has not yet regulated the use of AI in elections, though groups have been pressuring the Federal Election Commission to do so because the technology has moved much faster than the law, creating concerns it could disrupt elections this year. The agency has said it expects to share more on the issue this summer.
The Federal Communications Commission, meanwhile, will consider whether to require disclaimers on AI-generated content on political ads running on radio and TV, the Associated Press reported Wednesday. The FCC previously made clear that AI-generated voices in robocalls, like an instance in which President Joe Biden’s voice was spoofed to New Hampshire voters, are illegal.
In the absence of federal regulations, many states have advanced bills to regulate deepfakes. It’s typically an area of rare bipartisan agreement.
Some bills have outlawed the use of deepfakes in political contexts in some instances, while others require disclosures that note whether the content is AI-generated.
Kolodin’s bill takes a different approach to concern over deepfakes in elections than that of many other states considering how to regulate the technology. Rather than outlaw or curb usage, Kolodin wanted to give people a mechanism to have the courts weigh in on the truthfulness of a deepfake. Having it taken down would be both futile and a first amendment issue, he said.
“Now at least their campaign has a declaration from a court saying, this doesn’t look like it’s you, and they could use that for counternarrative messaging,” he said.
The bill does allow for a deepfake to be ordered removed, and the person could seek damages, if it depicts someone in a sexual act or nude, if the person in the deepfake is not a public figure and if the publisher knew it was false and refused to remove it.
The Arizona bill also takes a different approach on disclaimers. Rather than outright requiring them, as some state laws have, it says that a person bringing a potential court action wouldn’t have a case if the publisher of the digital impersonation had conveyed that the image or video was a deepfake or that its authenticity was in dispute, or if it would be obvious to a reasonable person that it was a deepfake.
Kolodin said disclaimers carry speech concerns for him, too, because they cut into airtime or, in some cases, ruin the joke or the point of a message. He cited a recent instance where the Arizona Agenda, a local publication covering state politics, created a deepfake of the US Senate candidate Kari Lake, where it was obvious to a viewer that the video wasn’t real based on what Lake was saying. (Full disclosure: the reporter of this story was the co-founder of the Arizona Agenda, but is no longer involved.)
“Any reasonable person would have realized that [it was fake], but if you had a label on it, it would have ruined the joke, right?” Kolodin said. “It would have ruined the journalistic impact. And so I think a prescribed label is further than I wanted to go.”
In one instance in Georgia, a state representative trying to convince fellow lawmakers to approve a bill outlawing deepfakes in elections used an AI-generated image and audio of two people who opposed the bill, faking their voices to say they endorsed it.
Kolodin hopes his bill will become a model for other states because he has worried that well-intentioned efforts to regulate AI in elections could trample on speech rights.
“I think deepfakes have a legitimate role to play in our political discourse,” he said. “And when you have politicians regulating speech, you kind of have the fox guarding the hen house, so they’re gonna say, oh, anything that makes me look silly is a crime. I absolutely hope that other state legislators pick this up.”
judgeverse · 4 months
Text
tremendously depressed about the AI video generation tool. this is going nowhere good. i wish i'd been born earlier, made my creative works earlier, just something, anything, to be able to meaningfully create before this happened
crsentfairy · 6 months
Text
"god didn't create man, man created god" omg girl shut uppppp 💀