#deepfake detection
Text
https://pi-labs.ai/pi-labs-joins-nvidia-inception/
0 notes
simplyforensic · 1 month
Text
The Deepfake Dilemma: Understanding the Technology Behind Artificial Media Manipulation
Introduction
In our visually-driven world, the adage “seeing is believing” has taken on a new dimension with the advent of deepfake technology. This innovative yet controversial facet of artificial intelligence (AI) has the remarkable ability to manipulate audiovisual content, blurring the lines between reality and fabrication. As we delve into the complexities of deepfakes, we uncover a realm…
0 notes
Text
https://pi-labs.ai/
Tumblr media
https://pi-labs.ai/
deepfake detection
0 notes
techiexpertnews · 5 months
Text
Why Deepfakes Are Dangerous and How to Identify Them
Spotting deepfakes requires a keen eye and an understanding of telltale signs. Unnatural facial expressions, awkward body movements, and inconsistencies in coloring or alignment can betray the artificial nature of manipulated media. A lack of emotion or unusual eye movements may also indicate a deepfake. You can check whether a video is real by consulting news from reliable sources and searching for similar images online. These can help you find changes or defects in the tech-generated video.
Tumblr media
0 notes
lilithsaintcrow · 2 months
Text
"By comparing the reflections in an individual’s eyeballs, Owolabi could correctly predict whether the image was a fake about 70% of the time…researchers found that the Gini index was better than the CAS system at predicting whether an image had been manipulated."
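The Gini index mentioned in this quote is a standard inequality measure applied to pixel brightness. As a rough illustration of the idea (not the researchers' actual pipeline; the `tol` threshold and patch inputs are my own assumptions), a minimal numpy sketch might look like:

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative pixel intensities:
    0 = brightness spread perfectly evenly, 1 = all in one pixel."""
    v = np.sort(np.asarray(values, dtype=float).ravel())
    n = v.size
    total = v.sum()
    if n == 0 or total == 0:
        return 0.0
    i = np.arange(1, n + 1)
    # Standard closed-form Gini over sorted values
    return float(np.sum((2 * i - n - 1) * v) / (n * total))

def reflections_disagree(left_patch, right_patch, tol=0.15):
    """Flag a face when the two eye-reflection patches have very
    different light distributions (tol is an illustrative cut-off)."""
    return abs(gini(left_patch) - gini(right_patch)) > tol
```

The intuition: in a real photo both eyes see the same light sources, so their reflection statistics should be close; generators often render each eye independently, which is what this comparison tries to catch.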
4 notes · View notes
airyairyaucontraire · 2 years
Text
while not all murder mysteries would adapt well in this way I feel that Knives Out would make a really fun Columbo
15 notes · View notes
krissym72 · 7 months
Text
How to Determine if an Image is AI Generated?
In the ever-expanding digital landscape, the proliferation of AI-generated images has become a defining characteristic of the modern era. With algorithms wielding the power to conjure remarkably realistic visuals, the question of how to tell if an image is AI generated has taken center stage. These AI-generated images, ranging from deepfakes to computer-generated scenes, present a myriad of…
Tumblr media
View On WordPress
0 notes
newstrendline · 8 months
Text
What is Deepfake Technology?
Deepfakes can be used for many purposes, from enhancing video games to supporting medical research. However, they can also be used for malicious purposes. Deepfakes are created using machine learning algorithms to make or modify media. These algorithms take into account factors such as lighting, facial expressions, and the cadence of a person’s voice. Are deepfakes illegal? While deepfakes can have…
Tumblr media
View On WordPress
0 notes
jcmarchi · 3 months
Text
Deepfakes and Navigating the New Era of Synthetic Media
New Post has been published on https://thedigitalinsider.com/deepfakes-and-navigating-the-new-era-of-synthetic-media/
Deepfakes and Navigating the New Era of Synthetic Media
Remember “fake news“? The term has been used (and abused) so extensively at this point that it can be hard to remember what it initially referred to. But the concept has a very specific origin. Ten years ago, journalists began sounding the alarm about an influx of purported “news” sites flinging false, often outlandish claims about politicians and celebrities. Many could instantly tell these sites were illegitimate.
But many more lacked the critical tools to recognize this. The result was the first stirrings of an epistemological crisis that is now coming to engulf the internet—one that has reached its most frightening manifestation with the rise of deepfakes.
Next to even a passable deepfake, the “fake news” websites of yore seem tame. Worse yet, even those who believe themselves to possess relatively high levels of media literacy are at risk of being fooled. Synthetic media created with the use of deep learning algorithms and generative AI have the potential to wreak havoc on the foundations of our society. According to Deloitte, this year alone they could cost businesses more than $250 million through phony transactions and other varieties of fraud. Meanwhile, the World Economic Forum has called deepfakes “one of the most worrying uses of AI,” pointing to the potential of “agenda-driven, real-time AI chatbots and avatars” to facilitate new strains of ultra-personalized (and ultra-effective) manipulation.
The WEF’s suggested response to this problem is a sensible one: they advocate a “zero-trust mindset,” one that brings a degree of skepticism to every encounter with digital media. If we want to distinguish between the authentic and synthetic moving forward—especially in immersive online environments—such a mindset will be increasingly essential.
Two approaches to combating the deepfake crisis
Combating rampant disinformation bred by synthetic media will require, in my opinion, two distinct approaches.
The first involves verification: providing a simple way for everyday internet users to determine whether the video they’re looking at is indeed authentic. Such tools are already widespread in industries like insurance, given the potential of bad actors to file false claims abetted by doctored videos, photographs and documents. Democratizing these tools—making them free and easy to access—is a crucial first step in this fight, and we are already seeing significant movement on this front.
The second step is less technological in nature, and thus more of a challenge: namely, raising awareness and fostering critical thinking skills. In the aftermath of the original “fake news” scandal, in 2015, nonprofits across the country drew up media literacy programs and worked to spread best practices, often pairing with local civic institutions to empower everyday citizens to spot falsehoods. Of course, old-school “fake news” is child’s play next to the most advanced deepfakes, which is why we need to redouble our efforts on this front and invest in education at every level.
Advanced deepfakes require advanced critical thinking
Of course, these educational initiatives were somewhat easier to undertake when the disinformation in question was text-based. With fake news sites, the telltale signs of fraudulence were often obvious: janky web design, rampant typos, bizarre sourcing. With deepfakes, the signs are much more subtle—and quite often impossible to notice at first glance.
Accordingly, internet users of every age need to effectively re-train themselves to scrutinize digital video for deepfake indicators. That means paying close attention to a number of factors. For video, that could mean unreal-seeming blurry areas and shadows; unnatural-looking facial movements and expressions; too-perfect skin tones; inconsistent patterns in clothing and in movements; lip sync errors; and so on. For audio, that could mean voices that sound too pristine (or obviously digitized), a lack of human-feeling emotional tone, odd speech patterns, or unusual phrasing.
In the short-term, this kind of self-training can be highly useful. By asking ourselves, over and over again, Does this look suspicious?, we sharpen not merely our ability to detect deepfakes but our critical thinking skills in general. That said, we are rapidly approaching a point at which not even the best-trained eye will be able to separate fact from fiction without outside assistance. The visual tells—the irregularities mentioned above—will be technologically smoothed over, such that wholly manufactured clips will be indistinguishable from the genuine article. What we will be left with is our situational intuition—our ability to ask ourselves questions like Would such-and-such a politician or celebrity really say that? Is the content of this video plausible?
It is in this context that AI-detection platforms become so essential. With the naked eye rendered irrelevant for deepfake detection purposes, these platforms can serve as definitive arbiters of reality—guardrails against the epistemological abyss. When a video looks real but somehow seems suspicious—as will occur more and more often in the coming months and years—these platforms can keep us grounded in the facts by confirming the baseline veracity of whatever we’re looking at. Ultimately, with technology this powerful, the only thing that can save us is AI itself. We need to fight fire with fire—which means using good AI to root out the technology’s worst abuses.
Really, the acquisition of these skills in no way needs to be a cynical or negative process. Fostering a zero-trust mindset can instead be thought of as an opportunity to sharpen your critical thinking, intuition, and awareness. By asking yourself, over and over again, certain key questions—Does this make sense? Is this suspicious?—you heighten your ability to confront not merely fake media but the world writ large. If there’s a silver lining to the deepfake era, this is it. We are being forced to think for ourselves and to become more empirical in our day-to-day lives—and that can only be a good thing.
0 notes
amalasdraws · 11 months
Note
https://www.tumblr.com/bigmammallama5/732632789726478336?source=share do you have any tips on how to detect ai and deepfakes?
Good question and I'm gonna be honest, it's not always easy and it will only get harder and harder. I'm just an artist who has spent their personal time to dive into this topic and study images. I'm still learning and there is a lot I don't know. But let me show you what I know. This will be long, but I will make a summary at the end! So far, even with ai having become better and better, there are still almost always some things wrong with an image, and they all have a very specific look to them. So let me try to show you some examples and point out the issues.
As we all know, one of the biggest struggles ai had was hands. And even though here and there we still see messed up hands, I say "had", because hands are actually a good example of how ai is improving and will only get better. Still, looking at pictures that show more hands is always worth it, because somewhere in the back there will most likely be at least one messed up hand.
Another issue a lot of ai still has is hair though!
Tumblr media Tumblr media Tumblr media
It's still very obvious in many ai "drawings" and in those otherwise well rendered portraits. Hair starts to blend with the ears a lot, or with the clothes.
There is also often this very odd look, caught between something too sharp and way too blurry.
Tumblr media Tumblr media Tumblr media
There is often a very specific texture to the hair. I actually do not know the artistic or specific name for it. I can only describe it as this weird sharp feeling that makes it look oddly pixely, and then you have areas where it's very blurry. And the kind of loops and almost flame-like looking hair we see in the last pic of the three here is also something very common with ai.
As an artist I know we make mistakes too! The way I draw hair is flawed too! But it's not only that it's flawed here; it always follows the same pattern and falls into the same issues over and over again, no matter who is "creating" the image. Those flame-like loops are a common one, next to the odd blends and weird sharp and blurry textures.
But ai is getting better, and we not only have "art" and things that try to be a drawing/painting, but photos too.
Tumblr media Tumblr media Tumblr media Tumblr media
A lot of those "photos" have a very specific texture and look to them! Again, it's not always the mistakes, but the very specific optic too. A lot of the images are oddly smooth, too rendered, with always-blurry backgrounds. And when you look closer at the background you will see the mistakes! The crowd behind Jesus is a hot mess once you look closer. Bob Marley's hair has the same issue that I described before. Lincoln is surrounded by people with messed up hands, and don't even get me started on the faces behind Caesar.
So a lot of ai images look alright at a first and quick glance, but the more time you spend with them, the more mistakes you will notice. The Where's Waldo of ai horror.
And those "photos" shared here are still very obvious. Not just the mistakes and messed up details but the very specific aesthetic too.
Those images get better and better, and the fewer details there are, the fewer mistakes there are!
Tumblr media Tumblr media Tumblr media
With photos like this it becomes harder and harder. There are not many details and no hands. Not many mistakes can be made. Also the very obvious plastic-looking smoothness isn't so much here anymore. It kinda still is...but differently. And always the blurry background!! Sometimes the hair is still a giveaway. Collars and clothing straps are also often a giveaway upon close look. As is jewelry. Earrings will be different, and necklaces often don't go all the way around; they just end, or blend with the hair or clothes.
Tumblr media Tumblr media
Details on jewelry are also often blurry and not shown properly. This is a common trick with many kinds of details: jewelry, badges, hair, ears, text. It's blurred out and not shown properly because ai doesn't know what to really show there.
Tumblr media Tumblr media
It's often really just the small details, and when we scroll down quickly we will miss them. Like the wedding ring on the middle finger, the pens on top of a closed pocket, the badges that are always blurry, messed up faces that blend with a blurry background.
And sometimes it's so subtle that I could only really tell that the right one is the ai image in comparison to the real photo on the left. The real photo shows hands clearly, and even when things are blurred out it doesn't feel like it's done to hide things. The ai image on the right hides the hands. There is also a very dead look in the eyes :D
Tumblr media
And here I could only tell because the text in the back doesn't make sense. Even blurred out, we should be able to make out something here.
Tumblr media
And after seeing a lot of ai images I recognize the kind of blurred out bg in combination with a very smooth and well rendered foreground/characters.
And here the only giveaway is a closer look at the backgrounds as well
Tumblr media
To summarize it:
Ai and fake news rely on a fast-moving world. We are being bombarded with tons of information and messages daily, and we scroll past quickly. But the best tool, for now, in detecting ai is taking our time! Those images get better and better, but so far there are still always some things off!! Especially in the background!
Hair is often weirdly smoothed out and oddly sharp at the same time.
Hair often blends with the ears or the clothes
Details are blurred out.
Jewelry doesn't match (example earrings). Details on metal often blurred out and never shown. Necklaces blend with hair or the clothes, and don't go around the neck.
Background is always blurred out.
In this blurred mess there are often hidden very messed up faces and/or hands.
A very specific smooth and yet too sharp/too rendered aesthetic combined with an always-blurry bg.
Text, especially in the background, is not legible and doesn't make sense.
Backgrounds are often (so far) the dead giveaway. Somewhere in the back things become muddled and messed up. This also shows very well in ai decor/architecture. There will be odd lines that don't align or align too well. Curtain poles that end in the furniture, a plant that is behind a lamp suddenly having leaves in front of the lamp. The longer you look, the more you will notice.
Tumblr media Tumblr media
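One tell from the summary above, a crisp subject against an always-blurry background, can be roughly quantified with the variance of the Laplacian, a common sharpness measure. A hedged numpy sketch (how you crop the face and background patches, and any cut-off ratio, are assumptions on my part):

```python
import numpy as np

# 3x3 Laplacian kernel: responds strongly to edges, weakly to smooth areas
LAP = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]], dtype=float)

def sharpness(gray):
    """Variance of the Laplacian response over a grayscale patch.
    Higher values mean more fine detail; blurry regions score low."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    # 'valid' 2-D convolution written with shifted slices (no SciPy needed)
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += LAP[i, j] * g[i:i + h - 2, j:j + w - 2]
    return float(out.var())
```

The idea would be to compare `sharpness(face_patch)` against `sharpness(background_patch)`: a real photo can have a blurry background too, so a big gap is only a hint to look closer, never proof on its own.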
Conclusion:
Take your time with images! Sit with them! Especially when it's framed as important and political news. Is it ai and propaganda, or did it really happen? Don't fall for the quick buzz and outrage! Some things are obvious right away, but with others you have to take your time. And it's time you have! If you are still unsure whether a pic is real or not, do some research on top. Image reverse search. Can you find it anywhere else? Are other news outlets sharing it? Does the image/message make sense? For example, there is now a deepfake of Bella Hadid voicing support for Israel. Ask yourself, does this make sense? If it feels out of line compared to previous behavior, do some research! Media literacy is not just being able to recognize a fake right away, but being able to do research. To question things! Don't just take every post online at face value. Even when shared by a mutual you trust. They might have been tricked!
There is so much information online, and it's great to have access to so much, but it's also difficult to wade through all of it. Media and truth are a weapon, and they are being twisted and bent to manipulate. They always have been! But with ai, and with so many people being able to post and share things, it becomes bigger and bigger and more dangerous. So don't just take everything that is handed to you and share it further, no questions asked. Media literacy and being able to think for ourselves and do the research is important!! And as research becomes harder and harder, as sources are being messed up with ai and other fake news, it's even more important to sit with the images and study them. See the flaws, the mistakes. Compare them to other news and images.
This got long, and I started to ramble at the end. Sorry! But I hope this helped.
6K notes · View notes
thefloatingstone · 11 months
Text
Dunno if anyone's mentioned this yet, but I found a site where you can drop images to check whether they're AI generated or not, and it will give you a percentage of how likely something is to be a machine generated image, or machine scraped text.
(images I knew for a fact were not machine generated because they predate it)
Tumblr media Tumblr media
(Images I know were machine generated because they were marked as such on a booru)
Tumblr media Tumblr media
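Detector sites like this usually return a single probability. A tiny sketch of how one might read such a score cautiously (the 0.2/0.8 cut-offs are my own illustration, not any site's actual thresholds):

```python
def verdict(ai_probability: float) -> str:
    """Turn a detector's 0.0-1.0 score into a cautious label.
    Scores near the middle deserve manual checking, not trust."""
    if not 0.0 <= ai_probability <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if ai_probability >= 0.8:
        return "likely AI-generated"
    if ai_probability <= 0.2:
        return "likely not AI-generated"
    return "inconclusive: verify with other sources"
```

Since detectors lag behind generators, even a confident score is best treated as one hint among several, alongside reverse image search and source checking.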
1K notes · View notes
tippenfunkaport · 1 year
Text
Tumblr media
If you're not great at spotting AI art, there are a few free sites to help you figure it out...
(The first link, AI or Not is the fastest / easiest imho but selection is rarely a bad thing.)
Really helpful if you're getting a fanart commission or book cover or something and someone is trying to trick you into paying for an AI creation. AI technology gets better all the time so you can't always rely on the usual "tells." But if it looks off and you want to double check, here are some ways to do it!
377 notes · View notes
techiexpertnews · 10 months
Text
How To Stay Away From Deepfakes?
Generative AI is getting more proficient at creating deepfakes that can sound and look realistic. As a result, some of the more sophisticated spoofers have taken social engineering attacks to a more sinister level.
Tumblr media
0 notes
mindblowingscience · 2 months
Text
The eyes, the old saying goes, are the window to the soul — but when it comes to deepfake images, they might be a window into unreality. That's according to new research conducted at the University of Hull in the U.K., which applied techniques typically used in observing distant galaxies to determine whether images of human faces were real or not. The idea was sparked when Kevin Pimbblet, a professor of astrophysics at the University, was studying facial imagery created by artificial intelligence (AI) art generators Midjourney and Stable Diffusion. He wondered whether he could use physics to determine which images were fake and which were real. "It dawned on me that the reflections in the eyes were the obvious thing to look at," he told Space.com. 
Continue Reading.
107 notes · View notes
reasonsforhope · 7 months
Text
"Major technology companies signed a pact on Friday to voluntarily adopt "reasonable precautions" to prevent artificial intelligence (AI) tools from being used to disrupt democratic elections around the world.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. 
Twelve other companies - including Elon Musk's X - are also signing on to the accord...
The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio, and video "that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote".
The companies aren't committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. 
It notes the companies will share best practices and provide "swift and proportionate responses" when that content starts to spread.
Lack of binding requirements
The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.
"The language isn't quite as strong as one might have expected," said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. 
"I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we'll be keeping an eye on whether they follow through." ...
Several political leaders from Europe and the US also joined Friday’s announcement. European Commission Vice President Vera Jourova said while such an agreement can’t be comprehensive, "it contains very impactful and positive elements".  ...
[The Accord and Where We're At]
The accord calls on platforms to "pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression".
It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.
Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven't yet rolled out and the companies have faced pressure to do more.
That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.
The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law [in the US], but that doesn't cover audio deepfakes when they circulate on social media or in campaign advertisements.
Many social media companies already have policies in place to deter deceptive posts about electoral processes - AI-generated or not... 
[Signatories Include]
In addition to the companies that helped broker Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.
Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn't immediately respond to a request for comment on Friday.
The inclusion of X - not mentioned in an earlier announcement about the pending accord - was one of the surprises of Friday's agreement."
-via EuroNews, February 17, 2024
--
Note: No idea whether this will actually do much of anything (would love to hear from people with experience in this area on how significant this is), but I'll definitely take it. Some of these companies may even mean it! (X/Twitter almost definitely doesn't, though.)
Still, like I said, I'll take it. Any significant move toward tech companies self-regulating AI is a good sign, as far as I'm concerned, especially a large-scale and international effort. Even if it's a "mostly symbolic" accord, the scale and prominence of this accord is encouraging, and it sets a precedent for further regulation to build on.
146 notes · View notes