fibrielsolaer · 1 year
"Ethical AI" activists are making artwork AI-proof
Hello dreamers!
Art thieves have been infamously claiming that AI illustration "thinks just like a human," and that an AI copying an artist's images is as noble and righteous as a human artist taking inspiration.
It turns out this is - surprise! - factually and provably not true. In fact, some people with experience building AI models are developing a technology that exploits a fatal, unfixable flaw in these algorithms to make AI art theft no longer possible.
They have published an early version of this technology called Glaze.
https://glaze.cs.uchicago.edu
Glaze works by altering an image so that it looks only a little different to the human eye but very different to an AI. This produces what is called an adversarial example. Adversarial examples are a known vulnerability of all current AI models, one that has been written about extensively since 2014, and it isn't possible to "fix" it without inventing a whole new AI technology, because it's a consequence of the basic way that modern AIs work.
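To get a feel for why this works, here's a minimal toy sketch of the underlying idea (the "fast gradient sign method" style of attack) using a made-up linear model in NumPy. This is not Glaze's actual algorithm - Glaze is far more sophisticated - just an illustration, with invented weights and a toy "image," of how a tiny per-pixel nudge aimed along a model's gradient can swing the model's output while barely changing the input:

```python
import numpy as np

# Toy linear "model": score = w . x, where x is an image
# flattened into a vector. (Real models are deep networks,
# but the same gradient-based trick applies.)
rng = np.random.default_rng(0)
w = rng.normal(size=256)     # fixed, made-up model weights
x = 0.05 * w                 # a toy "image" the model scores highly

score = w @ x                # the model's confidence on the clean image

# Adversarial step: move every pixel a tiny amount in the
# direction that most decreases the score. For this linear
# model, the gradient of (w @ x) with respect to x is just w.
eps = 0.02                   # per-pixel change, small to the human eye
x_adv = x - eps * np.sign(w)

adv_score = w @ x_adv

# No pixel changed by more than eps...
print(np.abs(x_adv - x).max())
# ...but the model's score drops substantially.
print(score, adv_score)
```

The human-visible change is capped at `eps` per pixel, yet the model's score shifts by `eps` times the sum of all the weights' magnitudes - which grows with the size of the image. That asymmetry is the flaw these tools exploit.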
This "glaze" will persist through screenshotting, cropping, rotating, and any other mundane transformation to an image that keeps it the same image from the human perspective.
The website gives a hypothetical example of the consequences: poisoned with enough adversarial examples, an AI asked to copy an artist's style will end up mashing several different art styles together. It might even stop being able to tell hands from mouths, or otherwise devolve into eldritch slops of colors and shapes.
Techbros are attempting to discourage people from using this by claiming that it can be bypassed, or that it's only a temporary fix, or - most desperately - that they already have all the data they need so it wouldn't matter. However, if this glaze technology works, using it will retroactively poison their existing datasets unless they completely stop automatically scraping images.
Give it a try and see if it works. Can't hurt, right?