#Deepfake dangers
swamyworld · 1 month
Text
LinkedIn co-founder Reid Hoffman deepfakes himself in interview video that went viral, showing off this AI technology
AI Technology: The trend of Artificial Intelligence is increasing rapidly around the world, including in India. This technology has many advantages and disadvantages. On the downside, in the last few months, deepfake content made with the help of AI has caused a lot of harm to people. Deepfake is a technology in which, with the help of AI, a fake version of a real…
View On WordPress
2 notes · View notes
reallytoosublime · 4 months
Text
youtube
This video is all about the dangers of deepfake technology. In short, deepfake technology is a type of AI that is able to generate realistic, fake images of people. This technology has the potential to be used for a wide variety of nefarious purposes, from porn to political manipulation.
Deepfake technology has emerged as a significant concern in the digital age, raising alarm about its potential dangers and the need for effective detection methods. Deepfakes refer to manipulated or synthesized media content, such as images, videos, or audio recordings, that convincingly replicate real people saying or doing things they never did. While deepfakes can have legitimate applications in entertainment and creative fields, their malicious use poses serious threats to individuals, organizations, and society as a whole.
The dangers of deepfakes are not widely known, and this poses a threat. There is no guarantee that what you see online is real, and deepfakes have narrowed the gap between fake and real content. Even though the technology can be used to create innovative entertainment projects, it is also being heavily misused by cybercriminals. Additionally, if the technology is not properly monitored by law enforcement, things will likely get out of hand quickly.
Deepfakes can be used to spread false information, which can have severe consequences for public opinion, political discourse, and trust in institutions. A realistic deepfake video of a public figure could be used to disseminate fabricated statements or actions, leading to confusion and the potential for societal unrest.
Cybercriminals can exploit deepfake technology for financial gain. By impersonating someone's voice or face, scammers could trick individuals into divulging sensitive information, making fraudulent transactions, or even manipulating people into thinking they are communicating with a trusted source.
Deepfakes have the potential to disrupt democratic processes by distorting the truth during elections or important political events. Fake videos of candidates making controversial statements could sway public opinion or incite conflict.
The Dangers of Deepfake Technology and How to Spot Them
0 notes
youtubemarketing1234 · 4 months
Text
youtube
The Dangers of Deepfake Technology and How to Spot Them
0 notes
p1325 · 1 year
Text
Demi Lovato AI - Dangerous Woman (Ariana Grande Cover)
Um I don't know what to say, I'm kinda speechless at how good this is 🧍
9 notes · View notes
lesbianraskolnikov · 1 year
Text
I am aware that people who are critical of AI art are certainly critical of Deepfakes, of course. Even if they do find some of the deepfakes funny, I'm not going to assume that everyone who's ever laughed at them doesn't see a problem. BUT it is weird that I have seen way more fearmongering enraged rants over AI art while deepfaked AI videos get thrown around left and right with no commentary on them, like Come On
2 notes · View notes
meitantei-lavi · 1 year
Text
man i fuckin... hate ai so much lmao..
2 notes · View notes
tomathi · 1 year
Text
not to be a luddite on main but I genuinely don't understand why everyone is frothing at the mouth to put ai in everything now. like...what does it make better?
4 notes · View notes
pia-writes-things · 1 year
Note
Keanu Reeves absolutely doesn't have a tiktok, it's someone who's very good at deepfakes making a "funny" Keanu reeves account (which I personally find very upsetting)
Oh, ok. I understand your feeling, it's really not a cool thing to do.
1 note · View note
averythepirate · 25 days
Text
“You can no longer solely rely on your human skills as a fact checker,” she said, explaining that standard reporting strategies had fallen short in verifying the video. Synthetic media experts largely agree that the rise of widely accessible generative AI tools over the past two years — across image, audio, and video — has led to a proliferation of political deepfakes. Some of this content is sophisticated enough that a studied human eye, or ear, in combination with ordinary fact-checking methods like a reverse image search or cross-checking details, is simply not enough to debunk it.
1 note · View note
hashtagloveloses · 1 year
Text
this is an earnest and honest plea and call in especially to fandoms as i see it happen more - please don't use AI for your transformative works. by this i mean, making audios of actors who play the characters you love saying certain things, making deepfakes of actors or even animated characters' faces. playing with chatGPT to "talk" or RP with a character, or write funny fanfiction. using stable diffusion to make interesting "crossover" AI "art."

i KNOW it's just for fun and it is seemingly harmless but it's not. since there is NO regulation and since some stuff is built off of stable diffusion (which uses stolen artwork and data), it is helping to create a huge and dangerous mess. when you use an AI to deepfake actors' voices to make your ship canon or whatever, you help train it so people can use it for deepfake revenge porn. or so companies can replace these actors with AI. when you RP with chatGPT you help train it to do LOTS of things that will be used to harm SO many people. (this doesn't even get into how governments will misuse and hurt people with these technologies)

and yes that is not your fault and yes it is not the technology's fault it is the companies and governments that will and already have done things but PLEASE. when you use an AI snapchat or instagram or tiktok filter, when you use an AI image generator "just for fun", when you chat with your character's "bot," you are doing IRREPARABLE harm. please stop.
8K notes · View notes
ladyshinga · 4 months
Text
I keep being told to "adapt" to this new AI world.
Okay.
Well first of all, I've been training myself more and more to spot fake images. I've been reading every article with a more critical eye to see if it's full of ChatGPT's nonsense. I've been ignoring half the comments on stuff, just assuming it's now mostly bots trying to make people angry enough to comment.
When it comes to the news and social issues, I've started to focus on and look for specific journalists and essayists whose work I trust. I've been working on getting better at double-checking and verifying things.
I have been working on the biggest part, and this one is a hurdle: PEOPLE. People whose names and faces I actually know. TALKING to people. Being USED to talking to people. Actual conversations with give and take that a chat bot can't emulate even if their creators insist they can.
All of this combined is helping me survive an AI-poisoned internet, because here's what's been on my mind:
What if the internet was this poisoned in 2020?
Would we have protested after George Floyd?
A HUGE number of people followed updates about it via places like Twitter and Tiktok. Twitter is now a bot-hell filled with nazis and owned by a petulant anti-facts weirdo, and Tiktok is embracing AI so hard that it gave up music so that its users can create deepfakes of each other.
Would information have traveled as well as it did? Now?
The answer is no. Half the people would have called the video of Floyd's death a deepfake, AI versions of it would be everywhere to sow doubt about the original, bots would be pushing hard for people to do nothing about it, half the articles written about it would be useless ChatGPT garbage, and the protests themselves… might just NOT have happened. Or at least, they'd be smaller - AND more dangerous when it comes to showing your face in a photo or video - because NOW what can people DO with that photo and video? The things I mentioned earlier will help going forward. Discernment. Studying how the images look, how the fake audio sounds, how the articles often talk in circles and are littered with contradictory misinformation. And PEOPLE.
PEOPLE is the biggest one here, because if another 2020-level event happens where we want to be protesting on the streets by the thousands, our ONLY recourse right now is to actually connect with people. Carefully of course, it's still a protest, don't use Discord or something, they'll turn your chats over to cops.
But what USED to theoretically be "simple" when it came to leftist organizing ("well my tweet about it went viral, I helped!") is just going to require more WORK now, and actual personal communication and connection and community. I know if you're reading this and you're American, you barely know what that feels like and I get it. We're deprived of it very much on purpose, but the internet is becoming more and more hostile to humanity itself. When it comes to connecting to other humans… we now have to REALLY connect to other humans
I'm sorry. This all sucks. But adapting usually does.
483 notes · View notes
roach-works · 2 months
Text
for all its (apparently many?) flaws, i really enjoyed the fallout show, and i'm ride or die for maximus, obviously. but one of the things i enjoyed about lucy's arc isn't that she wasn't necessarily proved RIGHT or WRONG about her own moral code, she didn't learn that either kindness is its own reward or that niceness is suicidal in a fight for survival.
what she learned, i am pretty sure, is that context matters. you can't actually help people if you don't know anything about them. you can't enact justice if you don't know what the case on trial is. you can't come in out of nowhere and make snap decisions and be anything more than one more complication in a situation that was fucked up long before you were born.
that's what we see over and over: she comes in out of nowhere, she makes an attempt to help based on her immediate assumption of what's going on, and then everything continues to be dangerous and complicated and fucked up. she doesn't let the stoners explain that some ghouls will genuinely try to eat you the minute they get the chance, and she pays for it. she jumps to the wrong conclusion in vault 4 because not everyone who looks like a monster IS a monster, and she pays for it. yeah a lot of the time cooper is abusing her for his own satisfaction, but when she's a free agent she's a loose cannon, and it's not because the show is punishing her for TRYING to do the right thing. it's because the show is punishing her for jumping to conclusions.
this show gets a lot of laughs from Fish Out Of Water situations, but i think that even though cooper explicitly says "you'll change up here and not for the better, you'll become corrupted and selfish just to survive" that's not the real message. what lucy learns is how important it is to hear people out, meet them where they're at, and get the full story.
that's why the final confrontation with her father is so important. she hears everyone out. she gets the full story. she listens to all of it. and then she acts with full knowledge of the situation. that's what the wasteland taught her: not to be cruel, not to be selfish, but that taking the time to understand what's actually going on really matters.
this is a show that's incredibly concerned with truth and lies. everyone is lying to each other and themselves. scenes change over and over as they're recontextualized. love and hate and grief and hope are just motives in a million interconnected shell games, not redeeming justifications. maximus's many compounded falsehoods are approved of by his own superior, who finds a corrupt pawn more useful than an honorable one. cooper finds out his wife has her own private agenda and this betrayal keeps him going for centuries. lucy's entire society is artificial and from the moment they find out they're not safe and maybe never have been, all the vault dwellers are scrambling to deal with that.
ANYWAY. i just think it's neat. sci fi is a lens to analyze our present through a hypothetical future, and i think it's pretty significant for this current age we live in, where we're all grappling with misinformation, conspiracy theories, propaganda, and deepfakes, and there's a huge anxiety over how hard it can be to find out the truth about anything. i think the show suggests that it's always worth the work to try.
183 notes · View notes
odinsblog · 4 months
Text
Caveat Lector 🚨🚨🚨
Tumblr media
Donald Trump supporters have been creating and sharing AI-generated fake images of black voters to encourage African Americans to vote Republican.
BBC Panorama discovered dozens of deepfakes portraying black people as supporting the former president.
Mr. Trump has openly courted black voters, who were key to Joe Biden's election win in 2020.
But there's no evidence directly linking these images to Mr. Trump's campaign.
The co-founder of Black Voters Matter, a group which encourages black people to vote, said the manipulated images were pushing a “strategic narrative” designed to show Mr. Trump as popular in the black community.
A creator of one of the images told the BBC: “I'm not claiming it's accurate.”
The fake images of black Trump supporters, generated by artificial intelligence (AI), are one of the emerging disinformation trends ahead of the US presidential election in November.
Tumblr media
Unlike in 2016, when there was evidence of foreign influence campaigns, the AI-generated images found by the BBC appear to have been made and shared by US voters themselves.
One of them was Mark Kaye and his team at a conservative radio show in Florida.
They created an image of Mr. Trump smiling with his arms around a group of black women at a party and shared it on Facebook, where Mr. Kaye has more than one million followers.
Tumblr media
At first it looks real, but on closer inspection everyone's skin is a little too shiny and there are missing fingers on people's hands - some tell-tale signs of AI-created images.
“I'm not a photojournalist,” Mr. Kaye tells me from his radio studio. “I'm not out there taking pictures of what's really happening. I'm a storyteller.”
Tumblr media
Disinformation tactics in the US presidential elections have evolved since 2016, when Donald Trump won. Back then, there were documented attempts by hostile foreign powers, such as Russia, to use networks of inauthentic accounts to try to sow division and plant particular ideas.
In 2020, the focus was on home-grown disinformation - particularly false narratives that the presidential election was stolen, which were shared widely by US-based social media users and endorsed by Mr. Trump and other Republican politicians.
In 2024, experts warn of a dangerous combination of the two.
(continue reading)
146 notes · View notes
amalasdraws · 8 months
Text
It is so important and vital that we all learn how to detect AI and deepfakes!! We need to learn media literacy! There is so much AI brain rot going around!! People fall for clear AI images and fall for propaganda! But there are also people who claim something is AI when it isn't, just because they don't like what they see.
It is not easy to wade through the huge amount of information we have access to and that is pushed onto us. AI makes all of this worse, and it's dangerous! Be aware and be careful of what you share!! Learn how to detect AI! The technology will only get better, and detecting it will become harder and harder.
167 notes · View notes
Text
Amazon’s Alexa has been claiming the 2020 election was stolen
The popular voice assistant says the 2020 race was stolen, even as parent company Amazon promotes the tool as a reliable election news source -- foreshadowing a new information battleground
Tumblr media
This is a scary WaPo article by Cat Zakrzewski about how big tech is allowing AI to get information from dubious sources. Consequently, it is contributing to the lies and disinformation that exist in today's current political climate.
Even the normally banal but ubiquitous (and not yet AI supercharged) Alexa is prone to pick up and recite political disinformation. Here are some excerpts from the article [color emphasis added]:
Amid concerns the rise of artificial intelligence will supercharge the spread of misinformation comes a wild fabrication from a more prosaic source: Amazon’s Alexa, which declared that the 2020 presidential election was stolen. Asked about fraud in the race — in which President Biden defeated former president Donald Trump with 306 electoral college votes — the popular voice assistant said it was “stolen by a massive amount of election fraud,” citing Rumble, a video-streaming service favored by conservatives.
The 2020 races were “notorious for many incidents of irregularities and indications pointing to electoral fraud taking place in major metro centers,” according to Alexa, referencing Substack, a subscription newsletter service. Alexa contended that Trump won Pennsylvania, citing “an Alexa answers contributor.”
Multiple investigations into the 2020 election have revealed no evidence of fraud, and Trump faces federal criminal charges connected to his efforts to overturn the election. Yet Alexa disseminates misinformation about the race, even as parent company Amazon promotes the tool as a reliable election news source to more than 70 million estimated users. [...] Developers “often think that they have to give a balanced viewpoint and they do this by alternating between pulling sources from right and left, thinking this is going to give balance,” [Prof. Meredith] Broussard said. “The most popular sources on the left and right vary dramatically in quality.” Such attempts can be fraught. Earlier this week, the media company the Messenger announced a new partnership with AI company Seekr to “eliminate bias” in the news. Yet Seekr’s website characterizes some articles from the pro-Trump news network One America News as “center” and as having “very high” reliability. Meanwhile, several articles from the Associated Press were rated “very low.” [...] Yet despite a growing clamor in Congress to respond to the threat AI poses to elections, much of the attention has fixated on deepfakes. However, [attorney Jacob] Glick warned Alexa and AI-powered systems could “potentially double down on the damage that’s been done.” “If you have AI models drawing from an internet that is filled with platforms that don’t care about the preservation of democracy … you’re going to get information that includes really dangerous undercurrents,” he said. [color emphasis added]
165 notes · View notes
maliciousblog · 2 months
Text
NCT Dream as your sasaengs +
SungTaro / NCT U
Jeno (Bodyguard) 🎀
He would be your manager or your bodyguard; that way he could always be by your side and keep an eye on you.
Each time you tried to interact with someone he would stop you.
Each time you tried to go out alone he would accompany you. He never left you alone.
It started to get on your nerves. One day you snapped when he insisted on accompanying you to the clothing store; you told him that you could take care of yourself and didn't need to be babysat. You could tell that he was taken aback by what you said.
To which he replied that he was only trying to keep you safe and was just doing his best to make sure nothing bad happened to you.
You started to feel guilty as he started to guilt trip you, which was exactly what he wanted.
You gave into whatever he wanted as he started to manipulate you and each time you questioned him he would guilt trip you into being the bad guy.
Jaemin ( the problematic fansite)💝
He was a well known fansite. He would always take the best pictures, he would always be the first one to reach events, the first to take pictures. At first it was just pictures of you at concerts or fansigns.
Soon it became pictures of you at private schedules, or of you in front of your dorm, or of you on the way to the company. Then it became pictures of you hanging out with your family or friends. Then it became pictures of you going about your life, whether it be of you on a grocery run or of you walking your dog.
What alarmed you the most was a picture of you in your room. You never took a picture like this; you soon started to get paranoid.
He loved the look of terror on your face as you saw the pictures he took.
He would edit the pictures he took of you with himself to make it seem like both of you were a couple, he would even make the occasional deepfake.
Renjun (fatal)💓
No one would possibly suspect him. He would show up everywhere you went.
He would run a fan account dedicated to you. He would constantly defend you, no matter what.
Once he happened to catch a well-known anti who always spread false rumors about you; a couple of days later, no one ever heard of the anti again. No matter who the problem was, it just magically disappeared. A staff member who was a little rude to you didn't come in the next day. A choreographer who got a little too touchy magically disappeared. When one of your group members got more lines than you, they "accidentally" fell down the stairs.
It had become a hobby for him now, in the end he didn't know if he was doing it for you or for his own sick pleasure.
Chenle(fan boy)💖
Would also be a singer and would probably be your friend.
He would try his hardest to get collab stages with you.
He was a true fanboy he would promote you more than he would promote himself.
He would secretly enjoy the fans shipping both of you. In fact he would go out of his way to make it happen by trying to maintain eye contact with you during shows, or buying the same things you did to make it seem as if you were a couple, even go as far as to start rumors himself and leak pictures of you.
He would eventually ask you out, but when you rejected him that was the last anyone ever saw of you.
Jisung (puppy)💕
Would be a lovesick puppy. Heart eyes. He would mostly be harmless, a total shy fanboy.
Would be obsessed with you and would do everything in his power to make sure you succeed.
It would start out as an innocent little crush that would slowly turn into a dangerous obsession.
His walls would be filled with posters of you. To you he only seemed like a harmless fanboy, but the more you saw him around, the more uneasy you felt. No one would believe you, because everyone only saw him as a cute, harmless fanboy.
Sungchan (delusional)🎀
Would have you as his lock wallpaper. Would never miss a live.
Would be extremely delusional.
In his mind you and him were already dating and nothing or no one could change his mind.
He would get angry any time someone mentioned the fact that you were an idol and he was just a fan, that there was no way the two of you could be together, and that at the end of the day, that's all he would ever be: just one among the millions of fans you had.
Unfortunately he would refuse to accept this fact and would live in the delusion that you were his. When news broke out that you were dating a fellow idol, all hell broke loose. That would be his breaking point. He would make you his whether you liked it or not.
Shotaro (dancer)💖
He would be a popular dancer who would cover popular songs that's how he came across you.
He would watch one of your dance practice videos and would be enamored by your grace and skill.
From being an occasional listener to a fan his obsession would steadily grow.
He would dance cover every single one of your songs and would even make choreo for the ones that didn't have it. On many occasions you would notice him and be impressed with his skill.
He would find a way for your company to hire him as your new choreographer after your last one got into a nasty accident.
56 notes · View notes