#to be clear im not even against ethical AI art
kvothbloodless · 1 year
Note
As someone who's followed you for a long time, and a fellow writer… do you genuinely support ChatGPT? Even if it has been proven to scrape people's writings without their consent? And the fact that it's a soulless husk of what actual writing is?
So I've done my best to avoid AI discourse, but I'm bad at it. So, to put it clearly:
I don't support ChatGPT specifically. It's a program made by a corporation because they can make a profit from it, and they curtail its capabilities in line with their goals. I use it sometimes because it's fun and convenient, but I don't pay for it or anything. I think pretty much all legitimate concerns about AI are really just concerns about corporate ethics, at which point the AI specifically becomes irrelevant.
I also don't really think the situation in general is one where "support" is the right word to use; it's like asking if I support cameras.
I want to be clear I'm not trying to be vague, though; I think AI in general is pretty cool, and that 99% of the arguments against it are factually incorrect, based on inconsistent or bad moral frameworks, or both. I'm not going to get into all the reasons AI is cool, since I don't feel up to doing a Big AI Post, but I do want to gesture vaguely in the direction of how much more accessible this makes artistic creation and experimentation, and all the insanely fun things people can use AI for. However, I do want to address the common arguments against it, especially since it feels like a lot of people are just misinformed and falling into the very easy tumblr trap.
1. Scraping fics without consent: if you post something on a public site for the public to read, and someone downloads it to read it, that's not stealing. Tumblr is generally pretty firmly against strict copyright law, so it's a bit weird that so many people are Very concerned about AIs "stealing" their work in a way that's even less direct than fanfiction. Like, "your writing is so in character" and "wow, you're really good at imitating the author's writing style" are common compliments on fics.
To be clear, there are absolutely ways of interacting with free public content that are bad (like reposting). But AIs aren't frankensteining stuff together or copy-pasting anything; they literally could not do that. The trained model files are so insanely tiny compared to the amount of training data that it would be impossible for them to be storing that sort of thing (a typical model checkpoint is a few gigabytes, while the training set runs to billions of images or words taking up vastly more space, so there's simply no room to keep copies). An AI learning from your writing isn't really any different from a human reading your writing, going "oh, I like that" subconsciously, and using that to inform their future writing.
What's that quote about "to get better at writing, read more"? If you believe that statement is accurate, then I'm a bit confused how you'd be upset about an AI "reading" your writing and learning from it.
All of the Actual ways this could be used badly (copying someone's art style and then selling art cheaper, etc.) aren't actually unique to AI (a person can also do that!), and also usually aren't happening often enough to be a real problem (most people who want art from an artist enough to commission them are going to want it to actually be done By that artist). Using AI to make an original work, or even to finish a fanfic someone else wrote, is no more theft than if you did it by yourself.
2. "Soulless Husk": im sorry im genuinely trying my best here to be gentle and respectful, but this talking point genuinely makes my blood boil a bit, so im not trying to get personal or insult anyone specifically. Claiming that art made by humans is automatically deeper, that art is all about Meaning and Struggle, comes off as extremely pretentious and just doesnt engage with the reality of art. Im having trouble articulating a good argument here, but like. Art doesnt have to be deep, it doesnt have to have a Message, and the necessity of struggling in order to learn about and create art is an unequivocable bad thing.
Many, many people make art (visual, written, etc.) because they want to see or read something cool, or because they think others will find it fun. Many, many people who would like to create art don't, because it requires talent and practice they aren't willing to put the time and effort into developing, and that is not a moral issue. As I've said before, laziness is a virtue and self-indulgence is a goal we should aspire to. Using a tool to make the incredibly fun act of creating easier and more accessible is awesome.
Also, who gets to define what "actual art" is? It genuinely seems a bit arrogant and presumptuous to declare that something is or is not "actual art" based on your own preferences.
The posts that Really annoy me are the ones that go "can't understand the people RPing or playing a TTRPG with AI because it's soulless. I only ever RP with other people because the fun part is being social or collaboratively building a story" or whatever. Cool. I RP and play tabletop games because I like to have fun and I enjoy those activities. I'm gonna be honest, I find it a bit offensive and kinda... dumb to act like it's somehow morally superior to only enjoy oneself in a way that Builds Community or Has A Meaning. People do meaningless things because they enjoy doing them, and as long as no one is being hurt, that's fine.
3. Replacing workers: This is the only argument that has Any weight, but it's still a bad argument. It's absolutely true that AI is gonna put people out of work. The fact that more people will be out of work and struggling is a bad thing. But this is also what happens when literally any new technology is developed, dating all the way back to the industrial revolution (at the Very latest). Automation is almost always a massive net benefit to humanity (I'm aware this topic could be its own series of posts, but You Know What I Mean, please don't start industrialization discourse here), and the fact that it puts people out of work isn't actually an argument against it. Lots of telegraph operators lost their jobs when telephones were invented, but that doesn't mean people should never have used the telephone and obstinately stuck with telegraphs. When phone technology advanced, a lot of phone operators lost their jobs, but that doesn't mean people were Morally Wrong to use the new developments that made phones far more convenient.
As in all things, the problem is capitalism; AI isn't preventing anyone from doing art, it's just reducing how many people can get paid to do their art. The correct solution to automation putting people out of work is not to stop the wheel of tech development, but to change society so that one doesn't have to slave away in order to survive.
This also raises the adjacent point that halting tech development like this is impossible. Once the cat is out of the bag, it's not going back in. AI tech is going to develop and become more widespread, and there's literally nothing you can do to stop that. If you want to be upset about that, that's your prerogative I guess, but I'm gonna be excitedly waiting for the dam to break so we get an open-source LLM that I can use without worrying about data privacy or corporate filters, so I can have fun and RP however much I want and finally be able to write the stuff I want to write but can't do on my own because I have ADHD and chronic fatigue. And I'd kinda prefer if people stopped trying to rain on my parade because they think I'm not having fun or creating art in the "right way".
EDIT: I did want to add one more thing! A US court ruled that you can't copyright something created entirely by an AI, and I think that's an awesome move; I hope it gets expanded a bit and spreads elsewhere. This isn't a position based on whether something produced this way counts as "real art" or is morally bad or whatever. I just think that reducing corporate and IP control over anything is good, and that AI specifically is going to be most enjoyable when it's free and open source.
automatopsy · 5 months
Note
i'm actually a bit interested in what you have to say about ai generated art
like, I know there are ethical ways to use ai art (only using images from consenting participants), but I assume you also have other thoughts
i have a lot of thoughts but i don't have the focus to type them all out cohesively right now. irt that specific post, i got mad about people tagging it as "ai art isn't real art," not mainly because i disagree (though i DO disagree), but mainly because THAT DOESN'T MATTER FOR THE CONTEXT OF THE POST AT ALL. ai-generated images dominating search results, especially unlabeled, is a matter of misinformation, not art philosophy. "misinformation" doesn't feel like quite the right word here, possibly just because i associate it more with written or spoken information than visual, but it's the closest i can think of.
the only two use cases i can easily think of right now are species identification and reference images, but i'm sure there are more. anyway, you straight up do need clear and accurate photos for a lot of searches. and google's been going downhill for a while now, shoving sponsored shit at the top, so making it even harder to actually find what you're looking for is incredibly frustrating. infuriating, even. worse if you think you got something accurate and then it wasn't, and then you've already learned wrong. i feel like i'm making it sound kind of trivial but i can't articulate further points right now. whatever, you get it: phenomenon bad, filters good to have, and i reblogged to spread the filters. if you're looking specifically for ai-generated images, that's a different story, but that's not really relevant. i haven't personally been getting a lot (as far as i can tell, which is part of the big thing) but i know it's happening.
but if search results were full of painstakingly hand-painted, photorealistic, anatomically inaccurate physical paintings of every species on god's green earth, it would be the exact same problem. they aren't, because nobody has time for that. who gives a shit.
granted, i don't think this is some insidious plot by Big AI to make people forget what a peachick looks like or whatever. i think this sort of case, specifically this sort of case, of misinformation comes more from negligence than intention.
anyway yeah, i AM in support of ethical image synthesis and i DO consider it art. specifically, i think of it as a unique artistic medium. i also consider "unethical" image synthesis to be art (quotes only because i don't really feel like trying to define ethical and unethical right now), because art can be unethical.
there are literally actual issues to talk about, mainly a) the aforementioned misinformation and more, b) the use specifically of real people's likenesses (it's very weird to me how chill even the anti-ai crowd seems to be with ai voice generation, considering your voice is kind of literally part of your body), and c) The Problem Is Capitalism. but all anyone ever seems to wanna do is shit on individual ai artists and go on and on about how "this is UNCLEAN, IMPURE, AUTOMATED PLAGIARISM, this is SACRILEGE against HOLY HUMAN CREATIVITY, [some inspiration porn thrown in there too somehow]," because god forbid the average tumblr user be forced to consider a problem on a material, structural level instead of an individual one.
those couple people on that post may've also just been using it as their catchall tag for the topic. don't care.
caredogstips · 7 years
Text
Google's AI ambitions show promise – 'if it doesn't kill us'
Google's path to developing machine-learning tools illustrates the stark challenge that tech companies face in trying to make machines act like humans
Machines may yet take over the world, but first they must learn to recognize your dog.
To hear Google executives tell it at their annual developer conference this week, the technology industry is on the cusp of an artificial intelligence, or AI, revolution. Computers, without guidance, will be able to spot disease, engage humans in conversation and creatively outsmart world champions in competition. Such breakthroughs in machine learning have been the stuff of science fiction since Stanley Kubrick's 1968 film 2001: A Space Odyssey.
"I'm incredibly excited about the progress we're making," CEO Sundar Pichai told a crowd of 7,000 developers at Google I/O from an outdoor concert stage. "Humans can achieve a lot more with the support of AI assisting them."
For better and worse, the company's near-term plans for the technology are more Office Space than Terminator. Think smartphones that can recognize pets in photos, appropriately respond to text messages, and find a window in your schedule where you should probably go to the gym. Googlers repeatedly boasted about how its computers could now automatically tag all of someone's pictures that contain a pet.
Mario Klingemann, a self-described code artist, said he is using Google's machine-learning tools to have his computer make art for him by sorting through pictures on his computer and combining them to form new images.
"All I have to do is sit back and let whatever it has created pass by and decide if I like it or not," Klingemann told the audience on Thursday night. In one of his pieces, called Run, Hipster. Run, Google's software had attached some fashionable leather boots to a hip bone.
It may seem like the latest example of Silicon Valley talking about changing society yet giving the world productivity apps. But it also illustrates the stark challenge that technology companies face in trying to make machines act like humans.
"It'll be really, really small things that are just a bit more intuitive," said Patrick Fuentes, 34, a mobile developer for Nerdery in Minneapolis. He considered autocorrect on touchscreen keyboards a modern victory for machine learning. Referring to Skynet, the malicious computer network that turns against the human race in Terminator, Fuentes said: "We're not there yet."
Mario Queiroz introduces Google Home during the Google I/O 2016 developers conference. Photograph: Stephen Lam/Reuters
Google is considered the sector's leader in artificial intelligence after it began pouring resources into the area about four years ago. During a three-day conference that took on the vibe of a music festival, with outdoor merchandise and beer vendors, Pichai made clear he sees machine learning as his company's future.
He unveiled the new Google Assistant, a disembodied voice that will help users decide what movie to see, keep up with email, and control lights and music at home. After showing how Google's machines can now recognize many dogs, he explained how he wants to use the same image recognition technology to spot damage to the eyes caused by diabetes. He boasted that Google's AI software, AlphaGo, showed creativity when it beat a world champion at Go, the ancient Chinese board game considered more difficult than chess.
This might seem like an odd push for a firm that makes its money from cataloging the web and showing people ads. But the focus is part of a broader transition in the technology sector from helping consumers explore unlimited options online to telling them the best choice.
For instance, several developers gave the example of smarter ways to predict what people are looking for online given their past interests.
"If this guy likes sports and, I don't know, drinks, you should give him these suggestions," said Mikhail Ivashchenko, the chief technology officer of BeSmart in Kyrgyzstan. "It will know exactly what you're looking for."
Unprompted, Ivashchenko said, it's not quite Skynet. His nearby friend, David Renton, a recent computer science graduate from Galway, Ireland, then mused how it would be awesome if Google could eventually develop a Skynet equivalent. "Think of the applications if it doesn't kill us," Renton said.
John Giannandrea, a Google vice-president of engineering who focuses on machine intelligence, said he won't declare victory until Google's software can read a text and naturally paraphrase it. Another challenge is that even the smartest machines these days have trouble transferring their knowledge from one activity to another.
For instance, AlphaGo, Google's software from the Go competition, wouldn't be able to apply its accumulated skills to chess or tic-tac-toe.
Still, Giannandrea said it's hard not to get excited by recent gains in teaching computers how to recognize patterns in images.
"The field is getting a little bit overhyped because of the progress we're seeing," he said. "Things that are hard for people to do we can teach computers to do. Things that are easy for people are hard for computers."
Of course, delegating even small decisions to machines has caused a flurry of discussions about the ethics of artificial intelligence. Several technology leaders, including Stephen Hawking and Elon Musk, have called for more research on the social impact of artificial intelligence.
For instance, Klingemann, the code artist, said he is already contemplating whether he needs to change his title.
"I have become more of a curator than a creator," he said.
Read more: www.theguardian.com