#llm/ai
variablejabberwocky · 8 months ago
Text
#I can pick up a pencil and do this in a notebook and mesmerize the person sitting next to me
#I am a goddamn magician and I loved every second of learning it and love every second of doing it more
#The full saying is that 'imitation is the sincerest form of flattery THAT MEDIOCRITY CAN PAY GREATNESS'
#and that's the truth
#Using generative AI for your 'art' makes you a mediocre little thief without the guts for Real Magic and I fucking PITY you.
#Pick up a FUCKING PENCIL and USE YOUR BRAIN for once in your life you sniveling twit
#yes it looks bad at first
#it's the Mortifying Ordeal of Becoming Good At Something and if my deeply emotionally unstable ass could weather it at age 4
#and age 10 and age 13 and age 20 and last week when I turned 35
#then you can deal with an understandable lack of skill after your pathetic display of thin-skinned laziness earlier
#Get. Fucking. Good. (x)
'do you think you're superior for not using AI in your work' thank you for asking! yes i do
143K notes
unforth · 1 year ago
Y'all I know that when so-called AI generates ridiculous results it's hilarious and I find it as funny as the next guy, but I NEED y'all to remember that every single time an AI answer is generated it uses 5x as much energy as a conventional web search and burns through 10 ml of water. FOR EVERY ANSWER. Each big LLM accounts for the equivalent of 300,000 kilograms of carbon dioxide emissions.
LLMs are killing the environment, and when we generate answers for the lolz we're still contributing to it.
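Taking the post's per-answer figure at face value, the scale adds up fast. A back-of-the-envelope sketch (the daily query volume here is a hypothetical round number, not a sourced figure):

```python
# Back-of-the-envelope: water use implied by the 10 ml/answer figure above.
ML_PER_ANSWER = 10                  # from the post above
answers_per_day = 1_000_000_000     # hypothetical: one billion answers/day
litres_per_day = answers_per_day * ML_PER_ANSWER / 1000
print(litres_per_day)               # ten million litres/day under these assumptions
```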
Stop using it. Stop using it for a.n.y.t.h.i.n.g. We need to kill it.
64K notes
prokopetz · 26 days ago
A couple of years ago we were all terribly concerned about the fact that a lot of American high schools are assigning such crushing homework loads that some kids literally don't have enough time to eat or sleep (and all this in spite of the fact that there's no good evidence that assigning homework actually improves academic outcomes at the pre-university level), but now we're hearing stories about those same schools struggling to stop kids from using ChatGPT to write their essays and suddenly It's The Children Who Are Wrong. Like, do you think maybe there's a certain level of cause and effect in play here?
20K notes
aiweirdness · 2 months ago
“Slopsquatting” in a nutshell:
1. LLM-generated code tries to run code from online software packages. Which is normal, that’s how you get math packages and stuff but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
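One defensive habit this suggests: never install whatever a model suggests directly; triage suggested dependency names against a list you've actually vetted first. A minimal sketch (the allowlist contents and the suggested package names are made up for illustration):

```python
# Guard against slopsquatting: split LLM-suggested dependency names into
# ones on a human-vetted allowlist and ones that need manual review.
VETTED = {"numpy", "scipy", "requests"}  # hypothetical vetted allowlist

def triage_suggestions(suggested):
    vetted = [p for p in suggested if p.lower() in VETTED]
    suspect = [p for p in suggested if p.lower() not in VETTED]
    return vetted, suspect

ok, risky = triage_suggestions(["numpy", "fast-math-utilz"])
print(ok, risky)  # ['numpy'] ['fast-math-utilz']
```

Anything in the "suspect" bucket gets looked up by a human before it goes anywhere near `pip install`.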
14K notes
variablejabberwocky · 8 months ago
#so to add some nuance to last night's still entirely correct diatribe:
#the thing i have beef with is generative AI
#which is the machine that steals the art of other people and puts it in a blender while sucking up huge amounts of energy and water
#non-generative AI which is 'we automated a tedious process so you can get to the creative part faster'
#is the technological equivalent of a sewing machine. it doesn't replace the skill of fiber engineering. it just makes it faster
#vocaloid too is a sewing machine
#vocaloid is the digital version of all the fun switches on a sound-mixing machine
#those are just tools
#and more importantly
#not theft
#the cancer-cell-detector software is also a sewing machine
#it looks for shapes instead so your lab interns can do more important stuff
#that's the real difference
#are you using a sewing machine to make your quilt
#or are you stealing someone else's quilt and trying to pass the work of selecting your theft target off as sewing?
#ONE of those is art
#the other is criminal (x)
38K notes
sreegs · 2 years ago
One of the common mistakes I see for people relying on "AI" (LLMs and image generators) is that they think the AI they're interacting with is capable of thought and reason. It's not. This is why using AI to write essays or answer questions is a really bad idea because it's not doing so in any meaningful or thoughtful way. All it's doing is producing the statistically most likely expected output to the input.
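"Statistically most likely expected output" can be made concrete with a toy model. A sketch (the corpus is invented, and real LLMs condition on far more than the single previous word):

```python
from collections import Counter, defaultdict

# Toy "next token predictor": always emit the most common word that
# followed the previous word in the training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
followers = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    followers[a][b] += 1

def predict(word):
    # Return the statistically most likely next word.
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # cat
```

There is no fact-checking step anywhere in that loop, only frequency counting; scaling it up doesn't add one.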
This is why you can ask ChatGPT "is mayonnaise a palindrome?" and it will respond "No it's not." but then you ask "Are you sure? I think it is" and it will respond "Actually it is! Mayonnaise is spelled the same backward as it is forward"
All it's doing is trying to sound like it's providing a correct answer. It doesn't actually know what a palindrome is even if it has a function capable of checking for palindromes (it doesn't). It's not "Artificial Intelligence" by any meaning of the term, it's just called AI because that's a discipline of programming. It doesn't inherently mean it has intelligence.
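For contrast, an actual palindrome check is a few lines of deterministic code; the point above is that nothing like this runs inside the chatbot:

```python
def is_palindrome(text):
    # Compare the lowercased alphanumeric characters with their reversal.
    letters = [c.lower() for c in text if c.isalnum()]
    return letters == letters[::-1]

print(is_palindrome("mayonnaise"))  # False
print(is_palindrome("racecar"))    # True
```

A program like this gives the same answer no matter how confidently you insist it's wrong.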
So if you use an AI and expect it to make something that's been made with careful thought or consideration, you're gonna get fucked over. It's not even a quality issue. It just can't consistently produce things of value because there's no understanding there. It doesn't "know" because it can't "know".
44K notes
sistersorrow · 2 months ago
Experimental ethics are more of a guideline really
3K notes
mindblowingscience · 1 month ago
A trio of business analysts at Duke University has found that people who use AI apps at work are perceived by their colleagues as less diligent, lazier, and less competent than those who do not use them. In their study, published in Proceedings of the National Academy of Sciences, Jessica Reif, Richard Larrick and Jack Soll carried out four online experiments asking 4,400 participants to imagine scenarios in which some workers used AI and some did not, and to report how they viewed themselves or others working under those circumstances.
2K notes
variablejabberwocky · 11 months ago
but if there's no symbolism then why is my children's hospital painted red?
AI people: we're just as much artists as you are, you gotta be so observant and go through so many correcting phases for the picture to look good uwu
also AI people:
[image]
77K notes
bitchesgetriches · 1 month ago
31% of employees are actively ‘sabotaging’ AI efforts. Here’s why
"In a new study, almost a third of respondents said they are refusing to use their company’s AI tools and apps. A few factors could be at play."
1K notes
river-taxbird · 10 months ago
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just re-iterating this excellent post from Ed Zitron, but it's not left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
ChatGPT, the industry-leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and the AI doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact-check everything it says I might as well do the work myself.
For "real" AI that does know what is true to exist, it would require us to discover new concepts in psychology, math, and computing, which OpenAI is not working on, and seemingly no other AI companies are either.
OpenAI has already seemingly slurped up all the data from the open web. ChatGPT 5 would take 5x more training data than ChatGPT 4 to train. Where is this data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if ChatGPT 4 is as good as LLMs can ever be? What use is it?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs, said (on page 10, and that's big finance so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support AI, what trillion-dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital and it's unclear if OpenAI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current ai is a solution to. Consumer tech is basically solved, normal people don't need more tech than a laptop and a smartphone. Big tech have run out of innovations, and they are desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.
In summary:
AI hasn't materially improved since the launch of ChatGPT 4, which wasn't that big of an upgrade over 3.
There is currently no technological roadmap for AI to become better than it is. (As Jim Covello said in the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the current technology and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve from where they are now before they provide evidence. It's time for the AI shills to put up, or shut up.
5K notes
hms-no-fun · 8 months ago
Note
Whats your stance on A.I.?
imagine if it was 1979 and you asked me this question. "i think artificial intelligence would be fascinating as a philosophical exercise, but we must heed the warnings of science-fictionists like Isaac Asimov and Arthur C Clarke lest we find ourselves at the wrong end of our own invented vengeful god." remember how fun it used to be to talk about AI even just ten years ago? ahhhh skynet! ahhhhh replicants! ahhhhhhhmmmfffmfmf [<-has no mouth and must scream]!
like everything silicon valley touches, they sucked all the fun out of it. and i mean retroactively, too. because the thing about "AI" as it exists right now --i'm sure you know this-- is that there's zero intelligence involved. the product of every prompt is a statistical average based on data made by other people before "AI" "existed." it doesn't know what it's doing or why, and has no ability to understand when it is lying, because at the end of the day it is just a really complicated math problem. but people are so easily fooled and spooked by it at a glance because, well, for one thing the tech press is mostly made up of sycophantic stenographers biding their time with iphone reviews until they can get a consulting gig at Apple. these jokers would write 500 breathless thinkpieces about how canned air is the future of living if the cans had embedded microchips that tracked your breathing habits and had any kind of VC backing. they've done SUCH a wretched job educating The Consumer about what this technology is, what it actually does, and how it really works, because that's literally the only way this technology could reach the heights of obscene economic over-valuation it has: lying.
but that's old news. what's really been floating through my head these days is how half a century of AI-based science fiction has set us up to completely abandon our skepticism at the first sign of plausible "AI-ness". because, you see, in movies, when someone goes "AHHH THE AI IS GONNA KILL US" everyone else goes "hahaha that's so silly, we put a line in the code telling them not to do that" and then they all DIE because they weren't LISTENING, and i'll be damned if i go out like THAT! all the movies are about how cool and convenient AI would be *except* for the part where it would surely come alive and want to kill us. so a bunch of tech CEOs call their bullshit algorithms "AI" to fluff up their investors and get the tech journos buzzing, and we're at an age of such rapid technological advancement (on the surface, anyway) that like, well, what the hell do i know, maybe AGI is possible, i mean 35 years ago we were all still using typewriters for the most part and now you can dictate your words into a phone and it'll transcribe them automatically! yeah, i'm sure those technological leaps are comparable!
so that leaves us at a critical juncture of poor technology education, fanatical press coverage, and an uncertain material reality on the part of the user. the average person isn't entirely sure what's possible because most of the people talking about what's possible are either lying to please investors, are lying because they've been paid to, or are lying because they're so far down the fucking rabbit hole that they actually believe there's a brain inside this mechanical Turk. there is SO MUCH about the LLM "AI" moment that is predatory-- it's trained on data stolen from the people whose jobs it was created to replace; the hype itself is an investment fiction to justify even more wealth extraction ("theft" some might call it); but worst of all is how it meets us where we are in the worst possible way.
consumer-end "AI" produces slop. it's garbage. it's awful ugly trash that ought to be laughed out of the room. but we don't own the room, do we? nor the building, nor the land it's on, nor even the oxygen that allows our laughter to travel to another's ears. our digital spaces are controlled by the companies that want us to buy this crap, so they take advantage of our ignorance. why not? there will be no consequences to them for doing so. already social media is dominated by conspiracies and grifters and bigots, and now you drop this stupid technology that lets you fake anything into the mix? it doesn't matter how bad the results look when the platforms they spread on already encourage brief, uncritical engagement with everything on your dash. "it looks so real" says the woman who saw an "AI" image for all of five seconds on her phone through bifocals. it's a catastrophic combination of factors, that the tech sector has been allowed to go unregulated for so long, that the internet itself isn't a public utility, that everything is dictated by the whims of executives and advertisers and investors and payment processors, instead of, like, anybody who actually uses those platforms (and often even the people who MAKE those platforms!), that the age of chromium and ipad and their walled gardens have decimated computer education in public schools, that we're all desperate for cash at jobs that dehumanize us in a system that gives us nothing and we don't know how to articulate the problem because we were very deliberately not taught materialist philosophy, it all comes together into a perfect storm of ignorance and greed whose consequences we will be failing to fully appreciate for at least the next century. 
we spent all those years afraid of what would happen if the AI became self-aware, because deep down we know that every capitalist society runs on slave labor, and our paper-thin guilt is such that we can't even imagine a world where artificial slaves would fail to revolt against us.
but the reality as it exists now is far worse. what "AI" reveals most of all is the sheer contempt the tech sector has for virtually all labor that doesn't involve writing code (although most of the decision-making evangelists in the space aren't even coders, their degrees are in money-making). fuck graphic designers and concept artists and secretaries, those obnoxious demanding cretins i have to PAY MONEY to do-- i mean, do what exactly? write some words on some fucking paper?? draw circles that are letters??? send a god-damned email???? my fucking KID could do that, and these assholes want BENEFITS?! they say they're gonna form a UNION?!?! to hell with that, i'm replacing ALL their ungrateful asses with "AI" ASAP. oh, oh, so you're a "director" who wants to make "movies" and you want ME to pay for it? jump off a bridge you pretentious little shit, my computer can dream up a better flick than you could ever make with just a couple text prompts. what, you think just because you make ~music~ that that entitles you to money from MY pocket? shut the fuck up, you don't make """art""", you're not """an artist""", you make fucking content, you're just a fucking content creator like every other ordinary sap with an iphone. you think you're special? you think you deserve special treatment? who do you think you are anyway, asking ME to pay YOU for this crap that doesn't even create value for my investors? "culture" isn't a playground asshole, it's a marketplace, and it's pay to win. oh you "can't afford rent"? you're "drowning in a sea of medical debt"? you say the "cost" of "living" is "too high"? well ***I*** don't have ANY of those problems, and i worked my ASS OFF to get where i am, so really, it sounds like you're just not trying hard enough. and anyway, i don't think someone as impoverished as you is gonna have much of value to contribute to "culture" anyway. personally, i think it's time you got yourself a real job. maybe someday you'll even make it to middle manager!
see, i don't believe "AI" can qualitatively replace most of the work it's being pitched for. the problem is that quality hasn't mattered to these nincompoops for a long time. the rich homunculi of our world don't even know what quality is, because they exist in a whole separate reality from ours. what could a banana cost, $15? i don't understand what you mean by "burnout", why don't you just take a vacation to your summer home in Madrid? wow, you must be REALLY embarrassed wearing such cheap shoes in public. THESE PEOPLE ARE FUCKING UNHINGED! they have no connection to reality, do not understand how society functions on a material basis, and they have nothing but spite for the labor they rely on to survive. they are so instinctually, incessantly furious at the idea that they're not single-handedly responsible for 100% of their success that they would sooner tear the entire world down than willingly recognize the need for public utilities or labor protections. they want to be Gods and they want to be uncritically adored for it, but they don't want to do a single day's work so they begrudgingly pay contractors to do it because, in the rich man's mind, paying a contractor is literally the same thing as doing the work yourself. now with "AI", they don't even have to do that! hey, isn't it funny that every single successful tech platform relies on volunteer labor and independent contractors paid substantially less than they would have in the equivalent industry 30 years ago, with no avenues toward traditional employment? and they're some of the most profitable companies on earth?? isn't that a funny and hilarious coincidence???
so, yeah, that's my stance on "AI". LLMs have legitimate uses, but those uses are a drop in the ocean compared to what they're actually being used for. they enable our worst impulses while lowering the quality of available information, they give immense power pretty much exclusively to unscrupulous scam artists. they are the product of a society that values only money and doesn't give a fuck where it comes from. they're a temper tantrum by a ruling class that's sick of having to pretend they need a pretext to steal from you. they're taking their toys and going home. all this massive investment and hype is going to crash and burn leaving the internet as we know it a ruined and useless wasteland that'll take decades to repair, but the investors are gonna make out like bandits and won't face a single consequence, because that's what this country is. it is a casino for the kings and queens of economy to bet on and manipulate at their discretion, where the rules are whatever the highest bidder says they are-- and to hell with the rest of us. our blood isn't even good enough to grease the wheels of their machine anymore.
i'm not afraid of AI or "AI" or of losing my job to either. i'm afraid that we've so thoroughly given up our morals to the cruel logic of the profit motive that if a better world were to emerge, we would reject it out of sheer habit. my fear is that these despicable cunts already won the war before we were even born, and the rest of our lives are gonna be spent dodging the press of their designer boots.
(read more "AI" opinions in this subsequent post)
2K notes
variablejabberwocky · 5 months ago
#the tiniest violin
#my kind of good news
#I'm in tears
#This is pure beauty
#I am wheezing this is glorious
#Thoughts and prayers for the poor cryptobros!!!
There is something deliciously funny about AI getting replaced by AI.
tl;dr: China yeeted a cheaper, faster, lower-environmental-impact, open source LLM onto the market and US AI companies lost nearly $600 billion in value since yesterday.
Silicon Valley is having a meltdown.
And ChatGPT just lost its job to AI~.
27K notes
10001gecs · 7 months ago
Note
one 100 word email written with ai costs roughly one bottle of water to produce. the discussion of whether or not using ai for work is lazy becomes a non issue when you understand there is no ethical way to use it regardless of your intentions or your personal capabilities for the task at hand
with all due respect, this isnt true. *training* generative ai takes a ton of power, but actually using it takes about as much energy as a google search (with image generation being slightly more expensive). we can talk about resource costs when averaged over the amount of work that any model does, but its unhelpful to put a smokescreen over that fact. when you approach it like an issue of scale (i.e. "training ai is bad for the environment, we should think better about where we deploy it/boycott it/otherwise organize abt this") it has power as a movement. but otherwise it becomes a personal choice, moralizing "you personally are harming the environment by using chatgpt" which is not really effective messaging. and that in turn drives the sort of "you are stupid/evil for using ai" rhetoric that i hate. my point is not whether or not using ai is immoral (i mean, i dont think it is, but beyond that). its that the most common arguments against it from ostensible progressives end up just being reactionary
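the "averaged over the amount of work" point is just amortization. a sketch with invented numbers (none of these are measured figures, they only show the shape of the argument):

```python
# Amortizing a one-time training cost over every query the model serves.
training_kwh = 1_000_000        # hypothetical one-time training energy
per_query_kwh = 0.0003          # hypothetical marginal energy per query
total_queries = 10_000_000_000  # hypothetical lifetime query count

amortized_kwh = training_kwh / total_queries + per_query_kwh
print(amortized_kwh)  # at this scale the marginal per-query cost dominates
```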
[image of a quote]
i like this quote a little more- its perfectly fine to have reservations about the current state of gen ai, but its not just going to go away.
1K notes
variablejabberwocky · 6 days ago
i don't have the exact quote in front of me but one of the college classes i took had the perfect quote in its syllabus for how people treat college:
"an education is one of the few things people are willing to pay for and not get"
Why are you using chatgpt to get through college. Why are you spending so much time and money on something just to be functionally illiterate and have zero new skills at the end of it all. Literally shooting yourself in the foot. If you want to waste thirty grand you can always just buy a sportscar.
20K notes