#Adversarial AI
Explore tagged Tumblr posts
leonbasinwriter · 3 months ago
Text
AI is rewriting cybersecurity—faster, smarter, stronger. But here’s the brutal truth: it’s also weaponizing cybercrime at an unprecedented scale.
0 notes
fly-chicken · 1 month ago
Text
About to swing a bat at a hornet’s nest for this
BUT
Thunderbolts (or whatever the newest Marvel movie 6373 is called) may be a good movie, but I won’t watch it due to their absolutely shit line of dialogue in the advertisement
“He’s more powerful than all the avengers rolled into one”
…thank you, whatever grade 7 child wrote that line. It’s not good, and it’s a terrible way to describe your villain.
3 notes · View notes
willcodehtmlforfood · 2 years ago
Text
The time has come - human AIs prey upon one another
"Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave."
8 notes · View notes
kenyatta · 1 year ago
Text
Text-to-image generators work by being trained on large datasets that include millions or billions of images. Some generators, like those offered by Adobe or Getty, are trained only on images the generator’s maker owns or has a licence to use. But other generators have been trained by indiscriminately scraping online images, many of which may be under copyright. This has led to a slew of copyright infringement cases in which artists have accused big tech companies of stealing and profiting from their work.

This is also where the idea of “poison” comes in. Researchers who want to empower individual artists have recently created a tool named “Nightshade” to fight back against unauthorised image scraping. The tool works by subtly altering an image’s pixels in a way that wreaks havoc on computer vision but leaves the image looking unchanged to the human eye.

If an organisation then scrapes one of these images to train a future AI model, its data pool becomes “poisoned”. This can result in the algorithm mistakenly learning to classify an image as something a human would visually know to be untrue. As a result, the generator can start returning unpredictable and unintended results.
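As a rough sketch of the bounded-perturbation idea (not Nightshade’s actual algorithm, which targets specific concepts in a model’s feature space), the snippet below shifts every pixel by at most a few intensity steps in a direction meant to mislead a vision model. The gradient here is random noise, a stand-in for one computed against a real surrogate model.

```python
# Data-poisoning sketch: perturb pixels within a tiny budget (epsilon) so the
# change is invisible to people but systematic to a model. The "gradient" is
# an assumption, standing in for a real surrogate-model computation.
import numpy as np

def poison(pixels: np.ndarray, gradient: np.ndarray, epsilon: float = 4.0) -> np.ndarray:
    step = epsilon * np.sign(gradient)                 # bounded per-pixel nudge
    poisoned = pixels.astype(float) + step
    return np.clip(poisoned, 0, 255).astype(np.uint8)  # keep a valid image

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in artwork
gradient = rng.standard_normal(image.shape)                     # stand-in gradient
poisoned = poison(image, gradient)

# No pixel moved more than epsilon steps out of 255: invisible to the eye.
assert np.abs(poisoned.astype(int) - image.astype(int)).max() <= 4
```

Scraped at scale, images altered this way skew what a model learns a concept looks like, which is the “poison” the researchers describe.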
6 notes · View notes
olivergisttv · 4 months ago
Text
How to Use AI to Detect Deepfake Videos
Deepfake videos, in which artificial intelligence (AI) is used to make people appear to say or do things they never did, have become increasingly sophisticated. Because these videos pose risks in areas such as misinformation, fraud, and personal privacy, detecting deepfakes has become critical. Here’s how you can use AI to identify and protect yourself from…
0 notes
jcmarchi · 4 months ago
Text
Eric Schmidt: AI misuse poses an ‘extreme risk’
Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.
Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”
Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”
Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”
He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.
“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warned.
Oversight without stifling innovation
Schmidt urged governments to closely monitor the private tech companies pioneering AI research. He noted that while tech leaders are generally aware of AI’s societal implications, they may make decisions based on values different from those of public officials.
“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”
Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.  
Global divisions around preventing AI misuse
The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.
The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  
However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 
Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.  
This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 
Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.
“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.
Prioritising national and global safety
Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.
From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.
While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.
6 notes · View notes
nnctales · 8 months ago
Text
What AI Cannot Do: AI Limitations
Artificial Intelligence (AI) has made remarkable strides in recent years, revolutionizing industries from healthcare to finance. However, despite its impressive capabilities, there are inherent limitations to what it can achieve. Understanding these limitations is crucial for effectively integrating AI into our lives and recognizing its role as a tool rather than a replacement for human…
0 notes
thedevmaster-tdm · 9 months ago
Text
YouTube video: STOP Using Fake Human Faces in AI
1 note · View note
jcmarchi · 11 months ago
Text
Deepfake misuse & deepfake detection (before it’s too late) - CyberTalk
Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.
In this dynamic and insightful interview, Check Point expert Micki Boland discusses how deepfakes are evolving, why that matters for organizations, and how organizations can take action to protect themselves. Discover on-point analyses that could reshape your decisions, improving cyber security and business outcomes. Don’t miss this.
Can you explain how deepfake technology works? 
Deepfakes are simulated video, audio, and images delivered as content via online news, mobile applications, and social media platforms. Deepfake videos are created with Generative Adversarial Networks (GANs), a type of artificial neural network that uses deep learning to create synthetic content.
GANs sound cool, but technical. Could you break down how they operate?
GANs are a class of machine learning system built from two neural network models, a generator and a discriminator, which game each other. Training data in the form of video, still images, and audio is fed to the generator, which seeks to recreate it. The discriminator then tries to discern the training data from the recreated data produced by the generator.
The two artificial intelligence engines repeatedly game each other, getting iteratively better. The result is convincing, high-quality synthetic video, images, or audio. A good example of a GAN at work is NVIDIA’s StyleGAN: navigate to https://thispersondoesnotexist.com/ and you will see a composite image of a human face that the GAN created from faces on the internet. Refreshing the browser yields a new synthetic image of a human who does not exist.
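The interview stays at the concept level, but the generator-versus-discriminator game is short enough to show in code. Below is a minimal sketch in PyTorch (the framework choice and every name in it are assumptions, not something from the interview): a toy generator learns to mimic a one-dimensional Gaussian “dataset” rather than images, yet the training loop is exactly the adversarial game described above.

```python
# Minimal GAN sketch (assumed PyTorch; toy 1-D data instead of images).
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "training data": samples from N(3, 0.5)
    fake = generator(torch.randn(64, 8))    # generator tries to recreate that data

    # Discriminator learns to tell real data from the generator's output.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to make the discriminator label its output "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(64, 8))),
                     torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After the two models have gamed each other, fake samples cluster near 3.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

Each pass through this loop is the iterative improvement described above; swap the toy Gaussian for image tensors and convolutional networks and you have the scheme behind face generators like StyleGAN.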
What are some notable examples of deepfake tech’s misuse?
Most people are not even aware of deepfake technologies, although these have now been infamously utilized to conduct major financial fraud. Politicians have also used the technology against their political adversaries. Early in the war between Russia and Ukraine, Russia created and disseminated a deepfake video of Ukrainian President Volodymyr Zelenskyy advising Ukrainian soldiers to “lay down their arms” and surrender to Russia.
How was the crisis involving the Zelenskyy deepfake video managed?
The deepfake was of poor quality and was immediately identified as a fake attributable to Russia. However, the technology is becoming so convincing and so real that soon it will be impossible for a regular human being to discern GenAI at work. And detection technologies, while they have a tremendous amount of funding and support from big technology corporations, are lagging far behind.
What are some lesser-known uses of deepfake technology and what risks do they pose to organizations, if any?
Hollywood is using deepfake technologies in motion picture creation to recreate actor personas. One such example is Bruce Willis, who sold his persona to be used in movies without him acting, due to his debilitating health issues. Voicefake technology (another type of deepfake) enabled an autistic college valedictorian to address her class at graduation.
Yet, deepfakes pose a significant threat. Deepfakes are used as “click bait” lures for launching malware (bots, ransomware, and more), and to conduct financial fraud through CEO and CFO impersonation. More recently, deepfakes have been used by nation-state adversaries to infiltrate organizations via impersonation or fake job interviews over Zoom.
How are law enforcement agencies addressing the challenges posed by deepfake technology?
Europol has really been a leader in identifying GenAI and deepfake as a major issue. Europol supports the global law enforcement community in the Europol Innovation Lab, which aims to develop innovative solutions for EU Member States’ operational work. Already in Europe, there are laws against deepfake usage for non-consensual pornography and cyber criminal gangs’ use of deepfakes in financial fraud.
What should organizations consider when adopting Generative AI technologies, as these technologies have such incredible power and potential?
Every organization is seeking to adopt GenAI to help improve customer satisfaction, deliver new and innovative services, reduce administrative overhead and costs, scale rapidly, do more with less and do it more efficiently. In consideration of adopting GenAI, organizations should first understand the risks, rewards, and tradeoffs associated with adopting this technology. Additionally, organizations must be concerned with privacy and data protection, as well as potential copyright challenges.
What role do frameworks and guidelines, such as those from NIST and OWASP, play in the responsible adoption of AI technologies?
On January 26th, 2023, NIST released its forty-two page Artificial Intelligence Risk Management Framework (AI RMF 1.0) and AI Risk Management Playbook (NIST 2023). For any organization, this is a good place to start.
The primary goal of the NIST AI Risk Management Framework is to help organizations create AI-focused risk management programs, leading to the responsible development and adoption of AI platforms and systems.
The NIST AI Risk Management Framework will help any organization align its goals and use cases for AI. Most importantly, this risk management framework is human-centered. It includes social responsibility and sustainability information, and it helps organizations focus closely on the potential unintended consequences and impact of AI use.
Another immense help for organizations that wish to further understand the risks associated with adopting GenAI large language models is the OWASP Top 10 for LLM Applications list. OWASP released version 1.1 on October 16th, 2023. Through this list, organizations can better understand risks such as prompt injection and data poisoning. These risks are especially critical to know about when bringing an LLM in house.
As organizations adopt GenAI, they need a solid framework through which to assess, monitor, and identify GenAI-centric attacks. MITRE has recently introduced ATLAS, a robust framework developed specifically for artificial intelligence and aligned to the MITRE ATT&CK framework.
2 notes · View notes
thedevmaster-tdm · 9 months ago
Text
YouTube video: MIND-BLOWING Semantic Data Secrets Revealed in AI and Machine Learning
1 note · View note
krissym72 · 1 year ago
Text
A Beginner's Guide to Creating Your Own AI Image Generator
In recent years, the intersection of artificial intelligence (AI) and art has sparked a revolution in creative expression. AI art generation, powered by sophisticated algorithms and neural networks, has enabled artists and enthusiasts alike to explore new realms of creativity and produce mesmerizing artworks that push the boundaries of traditional art forms. The importance of creating your own…
1 note · View note