OMG yes
It's not that I think it's a-ok for AI to steal everything around it willy-nilly, but I really think that the current AI problem is more about AI's practical aspects than about a moral issue. I mean, tools and techniques are (most of the time, exceptions apply!!) relatively neutral in their existence; we are the ones giving them meaning and value. We are the ones who decide the good (cooking) and the bad (killing) uses of a knife. And we are the ones who decide when the bad use might not be that bad ('citizen killing another citizen in a street' vs 'soldier killing another soldier in a war').
To me, the AI issue is that it's not 'we' who are giving it meaning and purpose; it's a small group of people who barely even know how it works and are only interested in the profits it may or may not bring.
In a way, since AI is an intelligence that adapts with time, training it is a bit like raising a pet, isn't it? I would prefer that my pet be raised by me and people I trust, not handed to me pre-trained by a guy who has a record of using pets to steal wallets, with me then required to hand it over to that guy's pet daycare every day for 'updates in the training'. Hm-hm, wallet-stealing guy, not suspicious at all.
ed zitron, a tech beat reporter, wrote an article about a recent paper from goldman sachs calling AI, in nicer terms, a grift. it is a really interesting article; hearing criticism from people who are not ignorant of the tech and have no reason to mince words is refreshing. it also brings up good points and asks the right questions:
if AI is going to be a trillion dollar investment, what trillion dollar problem is it solving?
what does it mean when people say that AI will "get better"? what would that look like and how would it even be achieved? the article makes a point to debunk talking points about how all tech is misunderstood at first by pointing out that the tech AI gets compared to the most, the internet and smartphones, was developed over the course of decades with roadmaps and clear goals. AI does not have this.
the american power grid straight up cannot handle the load required to run AI because it has not been meaningfully developed in decades. how are they going to overcome this hurdle (they aren't)?
people who are losing their jobs to this tech aren't being "replaced". they're just getting a taste of how little their managers care about their craft and how little they think of their consumer base. ai is not capable of replacing humans and there's no indication it ever will be because...
all of these models use the same training data, so now they're all giving the same wrong answers in the same voice. without massive and i mean EXPONENTIALLY MASSIVE troves of new data to work with, they are pretty much at a standstill for any innovation they're imagining in their heads
#ai#ai critique#capitalism#I'll forever claim that if I had my own piece of AI that I could train just with open data and my own data I'd get it#because *I* get to determine which parts of my creative process I want to engage with and which ones are chores#if you think I wouldn't get it to help me draw the same character and backgrounds 100 times in a comic you are mistaken#but I would still do or redo whatever I don't think is working up to erasing everything and redoing by hand#you know - how we already use graphic software#it's important I'd train it with my own data and transparency with its training database would be important of course#I'd even buy training data if I thought it would improve what I want and the person is willing to sell#but all of this pipe dream is conditional on “is it possible? and feasible? in this year of our lord 2025?”#and sadly I think it's not yet - we still have a lot to do and dismantling capitalism is one of those things