#it's all about tags the algorithm really isn't all that intelligent
woman-becomer · 4 months ago
Text
me when I shed my carapace and I'm reborn again, wiser and stronger. me when I rise from the flames like the glorious Phoenix
357 notes · View notes
crazy-pages · 10 months ago
Text
I've been blogging a bunch about AI and copyright today, and it comes down to this:
I think that if an artist says "I would like to be compensated fairly for a big billion-dollar company using my art", that's reasonable. Any legal framework where that isn't the case isn't fair. That's especially reasonable when the billion-dollar companies in question claim their product will end all demand for the artist's work. And fair negotiations require the right of refusal, which means artists also need the ability to reject use of their work (yes, even if you think what it's used for will be important).
I think that claiming big generative machine learning companies aren't really using artists' work is silly on its face. But it could hypothetically be equated to human inspiration, where we recognise that the vast majority of the creative work happens within the human mind as opposed to the work used for inspiration, and be given an appropriate legal carveout ... if the artificial intelligence gets the profits. If something is the creative work of an intelligent agent, that agent has the right to benefit from its work. And if we are not at the point where machine learning algorithms can be said to have that kind of agency, then they don't get a special carveout for the act of expressing creative agency.
Does this bring up complex and novel questions about how to address the legality of non-human agents? Absolutely! Of course it does! But anybody in the machine learning space who genuinely thinks they are creating novel creative intelligence with its own agency, yet complains that the resulting legalities are complex, does not get to be taken seriously. Anyone claiming that this justifies them taking the money derived from its work doubly does not get to be taken seriously. And anybody arguing that the creative labor invested in creating artificial life gives them a right to the profits of its labor ... but somehow not the creatives whose work it is composed of ... is so far from the realm of serious debate it can only be darkly funny.
If you want to claim machine learning algorithms are so intelligent they can only be legally treated as human equivalent creative processes, but you don't want to deal with the legal complexities of them as human equivalents, I'm just gonna call that slavery and be done with it.
Now I'm not a legal expert. My limited understanding of copyright law is that generally you don't get to do things with copyrighted material that the copyright owner doesn't want you to do, except where it satisfies certain exemption criteria which are in the public artistic interest or general good, and that this should apply to scraping data for machine learning training purposes. But ultimately my expertise is kind of a moot point because the European Union clearly does think this falls under copyright law and is enforceable, and that generative machine learning companies need to get permission from copyright holders. And they do get to decide what their own copyright law is.
I also don't think enforcement of this needs to be particularly draconian or should be. For example, it would be draconian to say that accidental hosting of a bit of pirated media posted to a public forum in a training data set is legal grounds to shut down a company. However I think it is very reasonable to say that companies should be subject to reasonable regulations about trying to prevent that, and that intellectual property owners like artists should get to insist companies remove their works from datasets and apply machine learning methods to minimize their impact on generative outputs (a thing machine learning companies already do to deal with poisoned or improperly tagged data).
And because this is a case of a few billion-dollar companies versus the copyright protections of millions of artists, I think it's reasonable for the onus to be on machine learning companies to double-check the copyrights of the material they are using, and for knowingly using copyrighted material to be illegal with very significant penalties. "Knowingly" could be determined either through the difficult legal process of demonstrating deliberate action and intent, or through simple escalating penalty frameworks where the more people point out that you're using their copyrighted material, the more the penalties escalate.
Oh hey, that's what the EU is doing! Neat.
13 notes · View notes
junfoong · 12 days ago
Text
🌀 Week 12: need help? crowdsource it 💀 (algorithm save me!!! oh, nvm...)
when disaster strikes, the network listens.
Huhhh, so you mean to tell me that we've all become emergency responders now? SINCE WHEN?
🧠 Crowdsourcing 101, feat. urgent version
It's called the Internet's collective intelligence, assistance edition. Think of it as "tag everyone you know & pray that the algorithm is kind" rather than "email a hotline".
I mean, it's NOT NEW. For instance, back in 1714 the British government offered prize money to anyone who could figure out longitude (Chrum, 2013). Dumb but crazy, I guess...
Thanks to digital networks, my Instagram stories, tweets and RedNote posts may be part of a wider network of real-time info during a huge crisis. Slayyy 💅
🕸️ So like, is the network crisis-ready?
No doubt that the network matters.
📞 Centralised = one hotline
🔁 Decentralised = a few connected nodes
🌐 Distributed = everyone yells into the void AND hears back
Which one works best when the electricity goes out and everything is on fire? Yes, distributed. The group chat and Twitter get flooded with this kind of stuff... The ABC even abandoned its previous methodology and began operating localised accounts amid bushfires and floods (Posetti & Lo, 2011). Wise? Indeed. Messy? Pretty much.
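The hotline-vs-mesh point can be sketched in a few lines of toy Python (my own made-up four-node graphs, not anything from the readings): kill the hub of a centralised network and everyone is cut off; kill one peer in a distributed one and messages still route around.

```python
# Toy sketch: why a distributed network survives losing a node
# while a centralised one doesn't.
from collections import deque

def reachable(graph, start):
    """Nodes reachable from `start`, via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

def drop(graph, dead):
    """Remove a node (e.g. the hotline going down) from the graph."""
    return {n: [m for m in ms if m != dead]
            for n, ms in graph.items() if n != dead}

# Centralised: every node talks only to the hub "H".
central = {"H": ["A", "B", "C"], "A": ["H"], "B": ["H"], "C": ["H"]}
# Distributed: everyone has a couple of peers.
distributed = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

print(len(reachable(drop(central, "H"), "A")))      # 1 -- A is cut off
print(len(reachable(drop(distributed, "B"), "A")))  # 2 -- A still reaches C
```

Same number of people in both graphs; the only difference is who they can talk to when a node dies.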
📲 Witnessing… but digital
Once upon a time, no one paid attention to shaky phone videos. Now? Your TikTok might be the only proof of what really happened during a crisis.
Horbury & Hughes (2010) further explained that social media isn't just a place to share content—it's where we try to make sense of what's going on.
“What just happened? Who’s responsible? Are my friends safe? Am I safe?”
And Instagram’s like: “Boohoo, I don't know how to feel about this but you know what... You should post a photo dump.” 😭💅
🚨 vEriFiCaTiOn? siGn mE uP!
Here's the tricky part
Not everyone is being truthful
Everyone can post any content
Verification is tough but it's important
According to Ford (2012), technology such as Ushahidi provides a cool bird's-eye view, but the information is only useful if it is true. Otherwise, the comments section is a complete disaster.
🧠 Crisis? Community? Collective intelligence?
Pierre Lévy would scream right now (in a good way). He referred to this as "collective intelligence"—the notion that our collective brain can outperform any single organisation. We're talking networked publics, grassroots response, and a group of strangers on TikTok using Google Docs to save lives.
Shout-out to CrisisCommons and Random Hacks of Kindness, whose names say it all. 💅
🤖 We are the emergency service? Huhh?
Nyeaaa, I mean it's safe to say that the line between citizen and responder is blurring.
We have built digital communities that react faster, sometimes before the government even knows what the hell happened! Bad news? The platform still controls who gets seen, whose crisis goes viral, and who gets ignored.
Things can get out of control, like yes... it can save lives, but it can also spread lies. Oh yea, what happened to Annabelle? Still no news? 🧍‍♂️
0 notes
jestershark · 2 years ago
Text
It's not intelligent! It's statistics.
One thing that concerns me about AI art, and the push to regulate it beyond how data should be regulated, is that to some extent it isn't meaningfully different from stuff like blackout poetry or collage. I think this is a controversial opinion among artists, but it's really hard to make laws around this (permission and sourcing) that will hurt OpenAI and will not hurt small artists who create art about large corporations.
The real problems with a lot of these tools are
A) people treating them as magic
B) the mass displacement of jobs in creative fields
C) massive, massive biases in training data and code
D) misinformation
For the first, dad n pix are right that "AI" is a misnomer -- the industry-standard term among people who build these systems is "machine learning". If you hear someone talking about AGI, they are a hack and a fool. These systems turn data points into numbers and then find the lines between those numbers, billions of times over. ML systems are seriously good at recognizing patterns in whatever they're trained on. This is why a common tag for generating good AI images is "trending on artstation": things that trend on ArtStation tend to have a more consistent appearance and style.
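To make the "finding lines between numbers" claim concrete, here's a toy sketch (my own example, not any real ML library): the simplest possible "learner" is literally just least-squares line fitting, i.e. statistics.

```python
# Toy sketch: the simplest "machine learning" is fitting a line
# through numbers -- ordinary least squares, pure statistics.
def fit_line(xs, ys):
    """Slope and intercept minimising squared error."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# "Training data": points lying on y = 2x + 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0 -- the "pattern" is just the line
```

A billion-parameter model is doing something vastly bigger, but of the same statistical kind: no understanding, just curves fit to data.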
This goes to the mass displacement of jobs. We are running out of jobs. Even if we were simply automating the jobs that suck, we would need a policy for making sure the planet doesn't sink under the weight of the suddenly unemployed. The criticisms people have of AI art -- fingers, patterns, overdetailing -- will not be problems forever. There is already AI art that is indistinguishable from art made by humans.
There is a positive to this, which is that most of the stuff AI art is going to displace is soulless corporate dreck. In an ideal world this would free up creatives to pursue better creative work. You don't have to illustrate the SEO images for Ikea's bargain sock department: you can spend your time making actually good stuff. Unfortunately we live in a capitalist dystopia.
Thirdly, biases in training data get brought up a lot. And for good reason! Biases in training data are very hard to get rid of, and because it's a big messy statistical model, it will always bias toward the most likely output. This is also why it isn't easy to just remove someone's art from a model: it all gets baked in. You can't roll things back, because the web of connections is so complicated that traces will remain in place. This is also why a lot of bias training is manual: dudes earning $1.50 an hour telling the AI not to be racist, over and over. It's super hard to remove bias from a system that is a statistical average of what has gone into it.
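A toy sketch of the "statistical average of what has gone in" problem (hypothetical cat/dog labels, obviously): a model that just learns frequencies defaults to whatever dominated its training data, and deleting one sample doesn't un-bake the skew.

```python
# Toy frequency "model": it predicts whatever was most common in training.
from collections import Counter

def train(labels):
    return Counter(labels)

def predict(model):
    # Pure statistics: output the most likely label, nothing else.
    return model.most_common(1)[0][0]

# Skewed training data: 90 cats, 10 dogs.
model = train(["cat"] * 90 + ["dog"] * 10)
print(predict(model))  # cat

# "Removing" one dog sample doesn't fix the imbalance -- it's baked in.
model["dog"] -= 1
print(predict(model))  # still cat
```

A real network entangles every sample across billions of weights, which is why "just take my art out" is so much harder than it sounds.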
Which leads to the other big issue: misinformation. It's super easy to inject misinformation into these models. ChatGPT, for example, takes all your input data -- yes, all of it -- and uses it as training data. That means people who used it to rewrite patient notes or as the backend for a therapy app? That's all in the algorithm now. They take all of that and mash it up. Which means if we wanted to spread Goncharov into ChatGPT, we could probably do that.
Garbage in, garbage out.
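Garbage in, garbage out can be sketched too (a made-up toy, but the mechanism is real for any model that retrains on user input): flood the pipeline, and the "most likely" answer shifts.

```python
# Toy sketch of data poisoning: a model that retrains on whatever users
# send it will happily "learn" a coordinated lie.
from collections import Counter

# The model's view of "best mob movie", learned from scraped text.
beliefs = Counter({"Goodfellas": 500})

# A fandom floods the input pipeline (every prompt becomes training data)...
beliefs.update(["Goncharov"] * 501)

print(beliefs.most_common(1)[0][0])  # Goncharov
```

There's no notion of truth anywhere in that loop, only frequency, which is exactly why coordinated posting can tilt it.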
The model doesn't have to be good enough to fool you: it only has to be good enough to fool your uncle who watches Fox News. This is where AI is going to get used: for manipulation, clicks, and views.
In an ideal world the rise of ML and AI would be something to be celebrated. It would be an unalloyed good that someone without artistic talent could come up with fun wacky images for their own fun: because we'd value art and artistry in and of themselves, and not as content that exists to make clicks and views and money. The problems you have with AI are problems with capitalism, and they're not gonna be solved any time soon.
(I wrote about this a bit in December: https://jessbpeck.com/posts/artificialintelligence/)
Other sources/interesting links:
(https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer)
https://time.com/6275995/chatgpt-facebook-african-workers-union/
https://twitter.com/emilymbender/status/1652382613729050624?t=7PTUweWK8yFvwTJd4ld17Q&s=19
"AI art"? It's not AI. Let's call it what it is: algorithm art. It's not an intelligence with a sense of self-awareness creating a piece of art from inspiration; it's "art" made by statistically choosing the "best" aesthetic and the "best" colours, all chosen by committee.
17 notes · View notes