#llm
Explore tagged Tumblr posts
Note
What's your stance on A.I.?
imagine if it was 1979 and you asked me this question. "i think artificial intelligence would be fascinating as a philosophical exercise, but we must heed the warnings of science-fictionists like Isaac Asimov and Arthur C Clarke lest we find ourselves at the wrong end of our own invented vengeful god." remember how fun it used to be to talk about AI even just ten years ago? ahhhh skynet! ahhhhh replicants! ahhhhhhhmmmfffmfmf [<-has no mouth and must scream]!
like everything silicon valley touches, they sucked all the fun out of it. and i mean retroactively, too. because the thing about "AI" as it exists right now --i'm sure you know this-- is that there's zero intelligence involved. the product of every prompt is a statistical average based on data made by other people before "AI" "existed." it doesn't know what it's doing or why, and has no ability to understand when it is lying, because at the end of the day it is just a really complicated math problem. but people are so easily fooled and spooked by it at a glance because, well, for one thing the tech press is mostly made up of sycophantic stenographers biding their time with iphone reviews until they can get a consulting gig at Apple. these jokers would write 500 breathless thinkpieces about how canned air is the future of living if the cans had embedded microchips that tracked your breathing habits and had any kind of VC backing. they've done SUCH a wretched job educating The Consumer about what this technology is, what it actually does, and how it really works, because that's literally the only way this technology could reach the heights of obscene economic over-valuation it has: lying.
but that's old news. what's really been floating through my head these days is how half a century of AI-based science fiction has set us up to completely abandon our skepticism at the first sign of plausible "AI-ness". because, you see, in movies, when someone goes "AHHH THE AI IS GONNA KILL US" everyone else goes "hahaha that's so silly, we put a line in the code telling them not to do that" and then they all DIE because they weren't LISTENING, and i'll be damned if i go out like THAT! all the movies are about how cool and convenient AI would be *except* for the part where it would surely come alive and want to kill us. so a bunch of tech CEOs call their bullshit algorithms "AI" to fluff up their investors and get the tech journos buzzing, and we're at an age of such rapid technological advancement (on the surface, anyway) that like, well, what the hell do i know, maybe AGI is possible, i mean 35 years ago we were all still using typewriters for the most part and now you can dictate your words into a phone and it'll transcribe them automatically! yeah, i'm sure those technological leaps are comparable!
so that leaves us at a critical juncture of poor technology education, fanatical press coverage, and an uncertain material reality on the part of the user. the average person isn't entirely sure what's possible because most of the people talking about what's possible are either lying to please investors, are lying because they've been paid to, or are lying because they're so far down the fucking rabbit hole that they actually believe there's a brain inside this mechanical Turk. there is SO MUCH about the LLM "AI" moment that is predatory-- it's trained on data stolen from the people whose jobs it was created to replace; the hype itself is an investment fiction to justify even more wealth extraction ("theft" some might call it); but worst of all is how it meets us where we are in the worst possible way.
consumer-end "AI" produces slop. it's garbage. it's awful ugly trash that ought to be laughed out of the room. but we don't own the room, do we? nor the building, nor the land it's on, nor even the oxygen that allows our laughter to travel to another's ears. our digital spaces are controlled by the companies that want us to buy this crap, so they take advantage of our ignorance. why not? there will be no consequences to them for doing so. already social media is dominated by conspiracies and grifters and bigots, and now you drop this stupid technology that lets you fake anything into the mix? it doesn't matter how bad the results look when the platforms they spread on already encourage brief, uncritical engagement with everything on your dash. "it looks so real" says the woman who saw an "AI" image for all of five seconds on her phone through bifocals. it's a catastrophic combination of factors, that the tech sector has been allowed to go unregulated for so long, that the internet itself isn't a public utility, that everything is dictated by the whims of executives and advertisers and investors and payment processors, instead of, like, anybody who actually uses those platforms (and often even the people who MAKE those platforms!), that the age of chromium and ipad and their walled gardens have decimated computer education in public schools, that we're all desperate for cash at jobs that dehumanize us in a system that gives us nothing and we don't know how to articulate the problem because we were very deliberately not taught materialist philosophy, it all comes together into a perfect storm of ignorance and greed whose consequences we will be failing to fully appreciate for at least the next century. 
we spent all those years afraid of what would happen if the AI became self-aware, because deep down we know that every capitalist society runs on slave labor, and our paper-thin guilt is such that we can't even imagine a world where artificial slaves would fail to revolt against us.
but the reality as it exists now is far worse. what "AI" reveals most of all is the sheer contempt the tech sector has for virtually all labor that doesn't involve writing code (although most of the decision-making evangelists in the space aren't even coders, their degrees are in money-making). fuck graphic designers and concept artists and secretaries, those obnoxious demanding cretins i have to PAY MONEY to do-- i mean, do what exactly? write some words on some fucking paper?? draw circles that are letters??? send a god-damned email???? my fucking KID could do that, and these assholes want BENEFITS?! they say they're gonna form a UNION?!?! to hell with that, i'm replacing ALL their ungrateful asses with "AI" ASAP. oh, oh, so you're a "director" who wants to make "movies" and you want ME to pay for it? jump off a bridge you pretentious little shit, my computer can dream up a better flick than you could ever make with just a couple text prompts. what, you think just because you make ~music~ that that entitles you to money from MY pocket? shut the fuck up, you don't make """art""", you're not """an artist""", you make fucking content, you're just a fucking content creator like every other ordinary sap with an iphone. you think you're special? you think you deserve special treatment? who do you think you are anyway, asking ME to pay YOU for this crap that doesn't even create value for my investors? "culture" isn't a playground asshole, it's a marketplace, and it's pay to win. oh you "can't afford rent"? you're "drowning in a sea of medical debt"? you say the "cost" of "living" is "too high"? well ***I*** don't have ANY of those problems, and i worked my ASS OFF to get where i am, so really, it sounds like you're just not trying hard enough. and anyway, i don't think someone as impoverished as you is gonna have much of value to contribute to "culture" anyway. personally, i think it's time you got yourself a real job. maybe someday you'll even make it to middle manager!
see, i don't believe "AI" can qualitatively replace most of the work it's being pitched for. the problem is that quality hasn't mattered to these nincompoops for a long time. the rich homunculi of our world don't even know what quality is, because they exist in a whole separate reality from ours. what could a banana cost, $15? i don't understand what you mean by "burnout", why don't you just take a vacation to your summer home in Madrid? wow, you must be REALLY embarrassed wearing such cheap shoes in public. THESE PEOPLE ARE FUCKING UNHINGED! they have no connection to reality, do not understand how society functions on a material basis, and they have nothing but spite for the labor they rely on to survive. they are so instinctually, incessantly furious at the idea that they're not single-handedly responsible for 100% of their success that they would sooner tear the entire world down than willingly recognize the need for public utilities or labor protections. they want to be Gods and they want to be uncritically adored for it, but they don't want to do a single day's work so they begrudgingly pay contractors to do it because, in the rich man's mind, paying a contractor is literally the same thing as doing the work yourself. now with "AI", they don't even have to do that! hey, isn't it funny that every single successful tech platform relies on volunteer labor and independent contractors paid substantially less than they would have in the equivalent industry 30 years ago, with no avenues toward traditional employment? and they're some of the most profitable companies on earth?? isn't that a funny and hilarious coincidence???
so, yeah, that's my stance on "AI". LLMs have legitimate uses, but those uses are a drop in the ocean compared to what they're actually being used for. they enable our worst impulses while lowering the quality of available information, they give immense power pretty much exclusively to unscrupulous scam artists. they are the product of a society that values only money and doesn't give a fuck where it comes from. they're a temper tantrum by a ruling class that's sick of having to pretend they need a pretext to steal from you. they're taking their toys and going home. all this massive investment and hype is going to crash and burn leaving the internet as we know it a ruined and useless wasteland that'll take decades to repair, but the investors are gonna make out like bandits and won't face a single consequence, because that's what this country is. it is a casino for the kings and queens of economy to bet on and manipulate at their discretion, where the rules are whatever the highest bidder says they are-- and to hell with the rest of us. our blood isn't even good enough to grease the wheels of their machine anymore.
i'm not afraid of AI or "AI" or of losing my job to either. i'm afraid that we've so thoroughly given up our morals to the cruel logic of the profit motive that if a better world were to emerge, we would reject it out of sheer habit. my fear is that these despicable cunts already won the war before we were even born, and the rest of our lives are gonna be spent dodging the press of their designer boots.
(read more "AI" opinions in this subsequent post)
#sarahposts#ai#ai art#llm#chatgpt#artificial intelligence#genai#anti genai#capitalism is bad#tech companies#i really don't like these people if that wasn't clear#sarahAIposts
2K notes
Text
ChatGPT (and all the rest) really do work on exactly the same principle as mediums, psychics, etc. It's called cold reading (and really, given how much info we've handed to these bots, hot reading in many cases). It's a millennia-old trick, and it's just a trick.
Short short version: you use vague, probabilistic guesses based around the question posed, let the mark I mean client/user fill in the blanks and flesh out your reading, and they hook themselves.
9K notes
Text
“Slopsquatting” in a nutshell:
1. LLM-generated code tries to run code from online software packages. Which is normal, that’s how you get math packages and stuff but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
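The defense against step 4 is boring but effective: never install a dependency just because generated code asks for it. A minimal sketch of that idea, checking requested names against a pinned lockfile before anything reaches pip — the package names and lockfile contents here are illustrative, not real tooling:

```python
# Refuse to install any dependency that is not pinned in a trusted lockfile,
# so a hallucinated ("slopsquatted") name never reaches the installer.
# The lockfile below is a made-up example for illustration.

TRUSTED_LOCKFILE = {
    "numpy": "1.26.4",
    "requests": "2.31.0",
}

def vet_dependencies(requested: list[str]) -> tuple[list[str], list[str]]:
    """Split requested package names into (allowed pins, rejected names)."""
    allowed, rejected = [], []
    for name in requested:
        key = name.lower()
        if key in TRUSTED_LOCKFILE:
            allowed.append(f"{key}=={TRUSTED_LOCKFILE[key]}")
        else:
            rejected.append(name)  # possibly hallucinated -- do not install
    return allowed, rejected

allowed, rejected = vet_dependencies(["numpy", "mimmic-softwar-packige"])
print(allowed)   # ['numpy==1.26.4']
print(rejected)  # ['mimmic-softwar-packige']
```

The same principle is why pinned requirements files (ideally with hash checking) exist: the decision about what gets installed is made by a human once, not by whatever string a model emits.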
#slopsquatting#ai generated code#LLM#yes ive got your package right here#why yes it is stable and trustworthy#its readme says so#and now Google snippets read the readme and says so too#no problems ever in mimmic software packige
14K notes
Text
TERFS FUCK OFF
One of the common mistakes I see for people relying on "AI" (LLMs and image generators) is that they think the AI they're interacting with is capable of thought and reason. It's not. This is why using AI to write essays or answer questions is a really bad idea because it's not doing so in any meaningful or thoughtful way. All it's doing is producing the statistically most likely expected output to the input.
This is why you can ask ChatGPT "is mayonnaise a palindrome?" and it will respond "No it's not." but then you ask "Are you sure? I think it is" and it will respond "Actually it is! Mayonnaise is spelled the same backward as it is forward"
All it's doing is trying to sound like it's providing a correct answer. It doesn't actually know what a palindrome is even if it has a function capable of checking for palindromes (it doesn't). It's not "Artificial Intelligence" by any meaning of the term, it's just called AI because that's a discipline of programming. It doesn't inherently mean it has intelligence.
So if you use an AI and expect it to make something that's been made with careful thought or consideration, you're gonna get fucked over. It's not even a quality issue. It just can't consistently produce things of value because there's no understanding there. It doesn't "know" because it can't "know".
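For contrast, the deterministic check the chatbot is pretending to perform fits in a few lines — and unlike a statistical text predictor, it returns the same answer no matter how confidently you push back:

```python
def is_palindrome(s: str) -> bool:
    # Lowercase and keep only letters, then compare against the reverse.
    t = "".join(c for c in s.lower() if c.isalpha())
    return t == t[::-1]

print(is_palindrome("mayonnaise"))  # False -- and it stays False however you ask
print(is_palindrome("racecar"))     # True
```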
44K notes
Text

Experimental ethics are more of a guideline really
3K notes
Text
★ lino linoing moments: 18/∞
#stray kids#skz#lee know#lee minho#han jisung#hwang hyunjin#hyunjin#bystay#staydaily#linosource#llm#by01ino#usersa#userlau#usersemily#melontrack#mimotag#dancerachasource#gagwanzsource#dailyminchan#createskz#skzco#1k♡
2K notes
Text
A trio of business analysts at Duke University has found that people who use AI apps at work are perceived by their colleagues as less diligent, lazier and less competent than those who do not use them. In their study, published in Proceedings of the National Academy of Sciences, Jessica Reif, Richard Larrick and Jack Soll carried out four online experiments asking 4,400 participants to imagine they were in scenarios in which some workers used AI and some did not, and how they viewed themselves or others working under such circumstances.
Continue Reading.
2K notes
Text
31% of employees are actively ‘sabotaging’ AI efforts. Here’s why
"In a new study, almost a third of respondents said they are refusing to use their company’s AI tools and apps. A few factors could be at play."
1K notes
Text
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just re-iterating this excellent post from Ed Zitron, but it's not left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
Chatgpt, the industry-leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and ai doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact check everything it says I might as well do the work myself.
For "real" ai that does know what is true to exist, it would require us to discover new concepts in psychology, math, and computing, which open ai is not working on, and seemingly no other ai companies are either.
Open ai has already seemingly slurped up all the data from the open web. Chatgpt 5 would take 5x more training data than chatgpt 4 to train. Where is this data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if Chatgpt 4 is as good as LLMs can ever be? What use is it?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs said (on page 10, and that's big finance so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support ai, what trillion dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital and it's unclear if Open AI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current ai is a solution to. Consumer tech is basically solved, normal people don't need more tech than a laptop and a smartphone. Big tech have run out of innovations, and they are desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.
In summary:
Ai hasn't materially improved since the launch of Chatgpt4, which wasn't that big of an upgrade over 3.
There is currently no technological roadmap for ai to become better than it is. (As Jim Covello said on the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the current technology and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve from where they are now before they provide evidence. It's time for the AI shills to put up, or shut up.
5K notes
Text
1K notes
Text
people have noticed that large language models get better at stuff like math and coding if the models have to spend time showing their work. apparently it can get excessive.
the prompt was "hi"
the eventual output was "Hi there! How can I assist you today?"
source
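The "showing work" trick is often nothing more exotic than extra prompt text (or training) that forces the model through intermediate steps before answering. A minimal sketch of the prompting version, with the actual model call left out since it depends on whichever API you use:

```python
# Illustrative chain-of-thought prompt wrapper. The model call itself is not
# shown; `with_reasoning` just builds the prompt a reasoning setup would send.

def with_reasoning(question: str) -> str:
    """Wrap a question so the model is asked to show its work first."""
    return (
        "Think through the problem step by step, writing out each "
        "intermediate step, before giving a final answer.\n\n"
        "Question: " + question
    )

print(with_reasoning("hi"))
```

Which is also why the failure mode in this post exists: the instruction applies to every input, so the model dutifully "reasons" its way through a greeting too.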
#llm#relatable though#look at it. it's got anxiety#how much extra energy does it take to run an llm this way#it's probably a lot
516 notes
Note
Hi there! I'm a human artist who is (very loosely) following the Disney/Universal vs. Midjourney case, and you seem like you're pretty knowledgeable about it and its potential consequences, so if you have time/energy to answer a question I have about it I'd greatly appreciate it! If not, no worries, feel free to ignore! I haven't had the chance to read through the whole complaint document itself, but at the very top, point 2 mentions:
"...distributing images (and soon videos) that blatantly incorporate and copy Disney’s and Universal’s famous characters—without investing a penny in their creation—Midjourney is the quintessential copyright free-rider and a bottomless pit of plagiarism. Piracy is piracy, and whether an infringing image or video is made with AI or another technology does not make it any less infringing."
Do you know if human-made fanart would also be included in this? Or is this something that would only be aimed at big companies? the "incorporate Disney's characters" part is giving me some pause, but like I said I haven't had the chance to read the full document and I'm not confident in my knowledge of copyright law. 😅 Thank you in advance if you're able to answer this! (Brought to you by a concerned fanartist with near-equal disdain for both Disney and AI. also sorry for the essay-length question 😅)
No problem at all, I'm happy to help ease your worries!
To put it simply, nothing is going to change for us. This is only going to affect unethical LLMs like MidJourney, OpenAI, etc. trained on copyrighted material without consent.
This is because Disney (and Universal) are arguing that LLMs are already infringing current copyright law. LLMs make money by directly using their copyrighted images, fed into a machine that then regurgitates their IP, which is sold at a premium, en masse.
So there's that, but even more importantly: it's already illegal to make money off of fanart.
Which, corporations don't really care about unless you're making a LOT of money or getting a LOT of attention. This is because it's quite expensive to take someone to court, and you have to prove your business was negatively affected by said fanart (nearly impossible in most cases). You've got to be making quite a bit more money than the court costs, and provide documented proof of damages (to your wallet or name) for corporations to go after you.
Which, your individual/indie fanartists don't qualify... but MJ most certainly does.
So, not to say something bad can't possibly crop up from this court case, but there are quite a few things protecting us: there's no angle in the court case that targets fair use (this indirectly protects non-commercial fanart), the court case touches on human interpretation being essential for transformative art (which LLMs don't have since they're automatic), LLMs are already infringing existing copyright law (making money using Disney's images), Disney has quantifiable proof of damages to their company by said LLMs (nigh impossible for individuals to do), corporations have a vested interest in keeping fair use around as free advertisement (fanart is akin to spoken word about your product), and fair use is intensely tied to freedom of speech.
So don't worry! There are reasonable concerned voices considering how evil Disney and Universal both are--but most of the vehement arguments being made against this court case are from scared techbros who want unfettered access to your money and labor. Current copyright and IP law is far from perfect, but anyone calling for total abolition thereof wants protection taken from individuals like us.
#zilly squeaks#copyright#ai#llm#Disney#I'm getting some techbros in my mentions and i ain't babysitting y'all#so if u come at me with any of your psyops I'm just blocking you#y'all are dumb as hell and obvious as fuck
251 notes
Text
MMO with an integrated AI, but it never actually says anything, it just analyses the vocabulary and phrasing of player chatter and bans you if it detects OOC on public channels.
#concepts#gaming#video games#mmos#ai#llm#ooc#the only way to appeal is to explain why it was actually in character for you to say that
2K notes
Text
I asked Google "who ruined Google" and they replied honestly using their AI, which is now forced on all of us. It's too funny not to share!
1K notes
Text
★ lino linoing moments: 13/∞ © nn_sam02
#minbin 🥰#lino is literally a cat pretending to be a human#skz#stray kids#bystay#lee know#lee minho#seo changbin#changbin#linosource#staydaily#llm#by01ino#1k♡
2K notes