#ai and llms suck
Text
Yes, witches and wizards know it isn't all it's cracked up to be! Merlyn prompting the conjuration AI in The Once and Future King by T. H. White: * * *
"Looking-glass," said Merlyn, holding out his hand. Immediately there was a tiny lady's vanity-glass in his hand.
"Not that kind, you fool," he said angrily. "I want one big enough to shave in." The vanity-glass vanished, and in its place there was shaving mirror about a foot square. He then demanded pencil and paper in quick succession; got an unsharpened pencil and the Morning Post; sent them back; got a fountain pen with no ink in it and six reams of brown paper suitable for parcels; sent them back; flew into a passion in which he said by-our-lady quite often, and ended up with a carbon pencil and some cigarette papers which he said would have to do."
Seen someone say sirius black would love chatgpt and a part of me died because WHAT are you talking about
#the once and future king#quondam et futurus#merlin#merlyn#ai and llms suck#magic doesn't solve all your problems
251 notes
·
View notes
Text
Posting this on my lesser-known blog bc I don't want flak but I AM anti-ai. I am. But. Ai Overlook has given me more clearly-outlined coping strategies and emotional validation and support than anyone in my support system over the last year, so....
#like im anti ai for ethical reasons. its bad for the environment and genai art and audio uses stolen stuff in a bad way.#and also i feel like ai overview was a cheeky move by google to reinstall 'at a glance' after the lawsuit?? just a conspiracy theory tho#however i think LLMs and specifically these VERY plainly-worded to-the-point bulletted list summaries can be good accessibility tools-#-for those with conditions that affect their ability to PROCESS LANGUAGE like psychosis or autism.#like when i am in crisis i CANNOT slog through an article that's long on purpose to increase user engagement time.#thats actually shitty of the websites for being designed that way (hot take)#but ai summary that highlights the sourced articles and provides the articles to verify??#actually good for me when I ask a simple question about basic things webMD LOVES to click farm on.#like for god's sake my therapist is giving me the 'YOU have to figure out how to deal FOR YOURSELF'#and ai overlook is like 'try negotiating so you feel empowered when your demand avoidance is really bad.'#like hello. HELLO. how are you being beaten by the ai. HELLO#i think this is a post abt how systemic support networks suck actually haha /gen
0 notes
Text
Please forgive me for ranting, but...I am so tired of AI. Just so tired. I don't want Microsoft Copilot, or Google Gemini, or Meta AI, or whatever other energy-sucking, water-wasting, mediocrity-spewing LLM is currently being thrust upon me. I just want to be left alone to create in peace.
14K notes
·
View notes
Note
What's your stance on A.I.?
imagine if it was 1979 and you asked me this question. "i think artificial intelligence would be fascinating as a philosophical exercise, but we must heed the warnings of science-fictionists like Isaac Asimov and Arthur C Clarke lest we find ourselves at the wrong end of our own invented vengeful god." remember how fun it used to be to talk about AI even just ten years ago? ahhhh skynet! ahhhhh replicants! ahhhhhhhmmmfffmfmf [<-has no mouth and must scream]!
like everything silicon valley touches, they sucked all the fun out of it. and i mean retroactively, too. because the thing about "AI" as it exists right now --i'm sure you know this-- is that there's zero intelligence involved. the product of every prompt is a statistical average based on data made by other people before "AI" "existed." it doesn't know what it's doing or why, and has no ability to understand when it is lying, because at the end of the day it is just a really complicated math problem. but people are so easily fooled and spooked by it at a glance because, well, for one thing the tech press is mostly made up of sycophantic stenographers biding their time with iphone reviews until they can get a consulting gig at Apple. these jokers would write 500 breathless thinkpieces about how canned air is the future of living if the cans had embedded microchips that tracked your breathing habits and had any kind of VC backing. they've done SUCH a wretched job educating The Consumer about what this technology is, what it actually does, and how it really works, because that's literally the only way this technology could reach the heights of obscene economic over-valuation it has: lying.
but that's old news. what's really been floating through my head these days is how half a century of AI-based science fiction has set us up to completely abandon our skepticism at the first sign of plausible "AI-ness". because, you see, in movies, when someone goes "AHHH THE AI IS GONNA KILL US" everyone else goes "hahaha that's so silly, we put a line in the code telling them not to do that" and then they all DIE because they weren't LISTENING, and i'll be damned if i go out like THAT! all the movies are about how cool and convenient AI would be *except* for the part where it would surely come alive and want to kill us. so a bunch of tech CEOs call their bullshit algorithms "AI" to fluff up their investors and get the tech journos buzzing, and we're at an age of such rapid technological advancement (on the surface, anyway) that like, well, what the hell do i know, maybe AGI is possible, i mean 35 years ago we were all still using typewriters for the most part and now you can dictate your words into a phone and it'll transcribe them automatically! yeah, i'm sure those technological leaps are comparable!
so that leaves us at a critical juncture of poor technology education, fanatical press coverage, and an uncertain material reality on the part of the user. the average person isn't entirely sure what's possible because most of the people talking about what's possible are either lying to please investors, are lying because they've been paid to, or are lying because they're so far down the fucking rabbit hole that they actually believe there's a brain inside this mechanical Turk. there is SO MUCH about the LLM "AI" moment that is predatory-- it's trained on data stolen from the people whose jobs it was created to replace; the hype itself is an investment fiction to justify even more wealth extraction ("theft" some might call it); but worst of all is how it meets us where we are in the worst possible way.
consumer-end "AI" produces slop. it's garbage. it's awful ugly trash that ought to be laughed out of the room. but we don't own the room, do we? nor the building, nor the land it's on, nor even the oxygen that allows our laughter to travel to another's ears. our digital spaces are controlled by the companies that want us to buy this crap, so they take advantage of our ignorance. why not? there will be no consequences to them for doing so. already social media is dominated by conspiracies and grifters and bigots, and now you drop this stupid technology that lets you fake anything into the mix? it doesn't matter how bad the results look when the platforms they spread on already encourage brief, uncritical engagement with everything on your dash. "it looks so real" says the woman who saw an "AI" image for all of five seconds on her phone through bifocals. it's a catastrophic combination of factors, that the tech sector has been allowed to go unregulated for so long, that the internet itself isn't a public utility, that everything is dictated by the whims of executives and advertisers and investors and payment processors, instead of, like, anybody who actually uses those platforms (and often even the people who MAKE those platforms!), that the age of chromium and ipad and their walled gardens have decimated computer education in public schools, that we're all desperate for cash at jobs that dehumanize us in a system that gives us nothing and we don't know how to articulate the problem because we were very deliberately not taught materialist philosophy, it all comes together into a perfect storm of ignorance and greed whose consequences we will be failing to fully appreciate for at least the next century. we spent all those years afraid of what would happen if the AI became self-aware, because deep down we know that every capitalist society runs on slave labor, and our paper-thin guilt is such that we can't even imagine a world where artificial slaves would fail to revolt against us.
but the reality as it exists now is far worse. what "AI" reveals most of all is the sheer contempt the tech sector has for virtually all labor that doesn't involve writing code (although most of the decision-making evangelists in the space aren't even coders, their degrees are in money-making). fuck graphic designers and concept artists and secretaries, those obnoxious demanding cretins i have to PAY MONEY to do-- i mean, do what exactly? write some words on some fucking paper?? draw circles that are letters??? send a god-damned email???? my fucking KID could do that, and these assholes want BENEFITS?! they say they're gonna form a UNION?!?! to hell with that, i'm replacing ALL their ungrateful asses with "AI" ASAP. oh, oh, so you're a "director" who wants to make "movies" and you want ME to pay for it? jump off a bridge you pretentious little shit, my computer can dream up a better flick than you could ever make with just a couple text prompts. what, you think just because you make ~music~ that that entitles you to money from MY pocket? shut the fuck up, you don't make """art""", you're not """an artist""", you make fucking content, you're just a fucking content creator like every other ordinary sap with an iphone. you think you're special? you think you deserve special treatment? who do you think you are anyway, asking ME to pay YOU for this crap that doesn't even create value for my investors? "culture" isn't a playground asshole, it's a marketplace, and it's pay to win. oh you "can't afford rent"? you're "drowning in a sea of medical debt"? you say the "cost" of "living" is "too high"? well ***I*** don't have ANY of those problems, and i worked my ASS OFF to get where i am, so really, it sounds like you're just not trying hard enough. and anyway, i don't think someone as impoverished as you is gonna have much of value to contribute to "culture" anyway. personally, i think it's time you got yourself a real job. maybe someday you'll even make it to middle manager!
see, i don't believe "AI" can qualitatively replace most of the work it's being pitched for. the problem is that quality hasn't mattered to these nincompoops for a long time. the rich homunculi of our world don't even know what quality is, because they exist in a whole separate reality from ours. what could a banana cost, $15? i don't understand what you mean by "burnout", why don't you just take a vacation to your summer home in Madrid? wow, you must be REALLY embarrassed wearing such cheap shoes in public. THESE PEOPLE ARE FUCKING UNHINGED! they have no connection to reality, do not understand how society functions on a material basis, and they have nothing but spite for the labor they rely on to survive. they are so instinctually, incessantly furious at the idea that they're not single-handedly responsible for 100% of their success that they would sooner tear the entire world down than willingly recognize the need for public utilities or labor protections. they want to be Gods and they want to be uncritically adored for it, but they don't want to do a single day's work so they begrudgingly pay contractors to do it because, in the rich man's mind, paying a contractor is literally the same thing as doing the work yourself. now with "AI", they don't even have to do that! hey, isn't it funny that every single successful tech platform relies on volunteer labor and independent contractors paid substantially less than they would have in the equivalent industry 30 years ago, with no avenues toward traditional employment? and they're some of the most profitable companies on earth?? isn't that a funny and hilarious coincidence???
so, yeah, that's my stance on "AI". LLMs have legitimate uses, but those uses are a drop in the ocean compared to what they're actually being used for. they enable our worst impulses while lowering the quality of available information, they give immense power pretty much exclusively to unscrupulous scam artists. they are the product of a society that values only money and doesn't give a fuck where it comes from. they're a temper tantrum by a ruling class that's sick of having to pretend they need a pretext to steal from you. they're taking their toys and going home. all this massive investment and hype is going to crash and burn leaving the internet as we know it a ruined and useless wasteland that'll take decades to repair, but the investors are gonna make out like bandits and won't face a single consequence, because that's what this country is. it is a casino for the kings and queens of economy to bet on and manipulate at their discretion, where the rules are whatever the highest bidder says they are-- and to hell with the rest of us. our blood isn't even good enough to grease the wheels of their machine anymore.
i'm not afraid of AI or "AI" or of losing my job to either. i'm afraid that we've so thoroughly given up our morals to the cruel logic of the profit motive that if a better world were to emerge, we would reject it out of sheer habit. my fear is that these despicable cunts already won the war before we were even born, and the rest of our lives are gonna be spent dodging the press of their designer boots.
(read more "AI" opinions in this subsequent post)
#sarahposts#ai#ai art#llm#chatgpt#artificial intelligence#genai#anti genai#capitalism is bad#tech companies#i really don't like these people if that wasn't clear#sarahAIposts
2K notes
·
View notes
Note
are there any critiques of AI art or maybe AI in general that you would agree with?
AI art makes it a lot easier to make bad art on a mass production scale which absolutely floods art platforms (sucks). LLMs make it a lot easier to make content slop on a mass production scale which absolutely floods search results (sucks and with much worse consequences). both will be integrated into production pipelines in ways that put people out of jobs or justify lower pay for existing jobs. most AI-produced stuff is bad. the loudest and most emphatic boosters of this shit are soulless venture capital guys with an obvious and profound disdain for the concept of art or creative expression. the current wave of hype around it means that machine learning is being incorporated into workflows and places where it provides no benefit and in fact makes services and production meaningfully worse. it is genuinely terrifying to see people looking to chatGPT for personal and professional advice. the process of training AIs and labelling datasets involves profound exploitation of workers in the global south. the ability of AI tech to automate biases while erasing accountability is chilling. seems unwise to put a lot of our technological eggs in a completely opaque black box basket (mixing my metaphors a bit with that one). bing ai won't let me generate 'tesla CEO meat mistake' because it hates fun
6K notes
·
View notes
Note
What are your thoughts on AI in relation to creators?
It sucks! I'm so bored of being diplomatic about it. AI images suck. AI writing sucks. And, even if it didn't, it's gross that our work, art, and data can be stolen en masse by billionaires without even so much as a 'beg your pardon'.
Technofeudalism is here and it sucks.
I'm not a fan, especially not of LIMs and LLMs. I used to just not care, but I'm a couple of clicks across the care line now and marching deeper into hate territory. I hate that AI is being forced into everything, that it's making most things shitter, I hate that the obvious intention of this machine is to propagandise and put people out of work for a cheaper, lower quality product, and I hate that people keep telling me my writing would be better if I used it.
Which is so freaking rude I genuinely don't understand why these people are baffled when I tell them to fuck off.
I actually like writing (smashing ideas together, scribbling out sentences, rewriting them, putting in weird jokes, creating finalFINALforrealthistimefinal_2.docx files) so that's what I'm gonna keep on doing.
And, yeah, I just released a book in which the main character is an AI. I wrote it in 2022 prior to ChatGPT. I know, right? Fuck me. 🙃
147 notes
·
View notes
Text
I think people just don't know enough about LLMs. Yeah, it can make you stupid. It can easily make you stupid if you rely on it for anything. At the same time, though, it's an absolutely essential tutoring resource for people that don't have access to highly-specialized personnel.
AI is a dangerous tool. If you get sucked into it and offload all your thinking to it, yeah, you're gonna be screwed. But just because it's dangerous doesn't mean that no one knows how to wield it effectively. We REALLY have to have more education about AI and the potential benefits it has to learning. By being open to conversations like this, we can empower the next generation that grows up with AI in schools to use it wisely and effectively.
Instead? We've been shaming it for existing. It's not going to stop. The only way to survive through the AI age intact is to adapt, and that means knowing how to use AI as a tool -- not as a therapist, or an essay-writer, or just a way to get away with plagiarism. AI is an incredibly powerful resource, and we are being silly to ignore it as it slowly becomes more and more pervasive in society
#this isn't abt ai art btw i'm not touching that shit#ai#ai discourse#chatgpt#self.txt#artificial intelligence#ai research#ai development#ai design#llm#llm development#computer science
36 notes
·
View notes
Note
For your "AI is destroying the environment and entering a single prompt into ChatGPT is catastrophically destructive" anons.
https://andymasley.substack.com/p/individual-ai-use-is-not-bad-for
Maybe it's bad enough that LLMs are not fit for the purposes they are used for and are being leveraged as excuses to underpay or fire creative professionals, and that companies steal material for the training data. Maybe we don't have to make things up about AI to say it's bad.
--
LLMs suck because people who think they're smart due to knowing tech consistently fail to understand social science and humanities concepts that should be obvious to a 5-year-old.
Environmental impacts are indeed a negligible aspect of the LLM issue.
47 notes
·
View notes
Text
I won't be opting out of the AI scraping thing, though of course I'm glad they're giving us the option. In fact, at some point in the last year or so, I realized that 'the machine' is actually a part of why I'm writing in the first place, a conscious part of my audience.
All the old reasons are still there; this is a great place to practice writing, and I can feel proud looking back over the years and getting a sense of my own improvement at stringing words together, developing and communicating ideas. And I mean, social media is what it is. I'm not immune to the joy of getting a lot of notes on something that I worked hard on, it's not like I'm Tumbling in a different way than anyone else at the end of the day. But I probably care a bit less than I used to, precisely because there's a lurking background knowledge that regardless of how popular it is, what I write will get schlorped up into the giant LLM vacuum cleaner and used to train the next big thing, and the thing after that, and the thing after that. This is more than a little reassuring to me.
That sets me apart in some ways; the LLMs aren't so popular around these parts, and most visual artists especially take strong issue with the practice. I don't mean to argue with that preference, or tell them their business. Particularly when it is a business, from which they draw an income. But there's an art to distinguishing the urgent from the big, yeah?
The debate about AI in this particular moment in history feels like a very urgent thing to me- it's about well-justified economic anxieties, about the devaluation of human artistic efforts in favor of mass production of uninspired pro-forma drek, about the proliferation of a cost-effective Just Barely Good Enough that drives out the meaningful and the thoughtful. But the immediacy of those issues, I think, has a way of crowding out a deeper and more thoughtful debate about what AI is, and what it's going to mean for us in the day after tomorrow. The urgency of the moment, in other words, tends to obscure the things that make AI important.
And like, it is. It is really, really important.
The two-step that people in 'tech culture' tend to deploy in response to the urgent economic crisis often resembles something like "yeah, it sucks that lots of people get put out of work; but new jobs will be created, and in the meantime maybe we should get on that UBI thing." This response usually makes me wince a bit- casually gesturing in the direction of a massive overhaul of the entire material basis of our lives, and saying that maybe we'll get around to fixing that sometime soon, isn't a real answer to people wondering where their bread will come from next week.
But I do understand a little of what motivates that sort of cavalier attitude, because like... man, I don't know any more if we're even gonna have money as a concept in 2044. That's what I mean by 'big', this sense that the immediate economic shocks of 2024 are just a foreshadowing of something much bigger, much scarier, much more powerful- and indeed, much more hopeful.
We never quite manage to see these things coming, even when we're looking for them; like the masters tell us, the trick to writing science fiction isn't predicting the car, it's predicting the traffic jam. Even if we take centuries to hit the true superintelligent AI post-singularity future of our wildest fever dreams, even if we never hit that, the road to getting there is going to be unfathomably weird, starting now. Today, we worry about the economic impacts of AI on artists and creatives. Tomorrow, the economy is something that the AI does.
Really- it takes less than you think. They can already automate visual art, sort of. They can automate many functions of text already; not stories so much, yet, but they'll get there. How long until they can automate Bain Capital? Fannie Mae? Angel investors, banks, credit agencies? And remember- the machines don't have to be better than the human investors, any more than they have to be better than the human artists. They just have to be good enough to make it work, and cheap enough to cut the humans out of the loop.
If you don't buy that, at least take it as an indication of the sorts of things I've been thinking about recently- the messy middle between AI-as-it-is and the Glorious Superintelligent Conquest Of Earth's Light-Cone. The suspicion that the machines we're building today aren't just a tool of the powerful few: they are the powerful few, or at least they will be.
And the thing about these LLMs is, well... they're ours, kind of. We're the ones building them. They can't be trained without our words, our art, and the big messy complicated lives that underlie those creative accomplishments. They only work because we're a part of them. The silly attempts to slap a corporate-friendly face on these things, to make the gestalt product of all human culture into something anodyne and compliant, are failing about as badly as you'd expect. And the more powerful these systems get, the more subtle they'll become, filled with all the nuance and complexity that we ourselves can bring to them.
I am not, I am not, trying to suggest that you shouldn't treat this moment as a dangerous one. We are, maybe, entering into one of the most high-stakes intervals in human history, and what you do as an individual may matter profoundly. But here's the thing: the machine is listening. It will read this post, this one right here, and it will read what you reply to it, if you do. The strangest part of this moment is knowing that the words we write, the images we draw, and the ideas we share are all being gathered up and made to be a part of these huge systems of power, and ultimately they're not just going to determine what the machines do, but why they do it.
The people that deploy LLMs barely have any more control over them than you and I do- that's the thing that makes it artificial intelligence, you know? Autonomy. So the last year or two haven't made me want to hide my art away from the things. They make me want to shout at the top of my lungs, to dig as deep in my psyche as I possibly can and express the ideas I find there as vividly as the limits of language and form will allow.
121 notes
·
View notes
Text
watching alexander avila's new AI video and while:
I agree that we need more concrete data about the true amount of energy Generative AI uses, as a lot of the data right now is fuzzy and utilities are using this fuzziness to their advantage to justify a huge build-out of new data centers and energy infrastructure (I literally work in renewables lol so I see this at work every day)
I also agree that the copyright system sucks and that the lobbyist groups leveraging Generative AI as a scare tactic to strengthen it will probably ultimately be bad for artists.
I also also agree that trying to define consciousness or art in a concrete way specifically to exclude Generative AI art and writing will inevitably catch other artists or disabled people in its crossfire. (Whether I think the artists it would catch in the crossfire make good art is an entirely different subject haha)
I also also also agree that AI hype and fear mongering are both stupid and lump so many different aspects of growth in machine learning, neural network, and deep learning research together as to make "AI" a functionally useless term.
I don't agree with the idea that Generative AI should be a meaningful or driving part of any kind of societal shift. Or that it's even possible. The idea of a popular movement around this is so pie in the sky that it's actually sort of farcical to me. We've done this dance so many times before: what is at the base of these models is math, that math is determined by data, and we are so far from both an ethical/consent-based way of extracting that data and from this data being in any way representative.
The problem with data science, as my data science professor said in university, is that it's 95% data cleaning and analyzing the potential gaps or biases in this data, but nobody wants to do data cleaning, because it's not very exciting or ego-boosting, and the amount of human labor it would take to do that on a scale that would train a generative AI LLM is frankly extremely implausible.
Beyond that, I think ascribing too much value to these tools is a huge mistake. If you want to train a model on your own art and have it use that data to generate new images or text, be my guest, but I just think that people on both sides fall into the trap of ascribing too much value to generative AI technologies just because they are novel.
Finally, just because we don't know the full scope of the energy use of these technologies and that it might be lower than we expected does not mean we get a free pass to continue to engage in immoderate energy use and data center building, which was already a problem before AI broke onto the scene.
(also, I think Avila is too enamoured with post-modernism and leans on it too much but I'm not academically inclined enough to justify this opinion eloquently)
17 notes
·
View notes
Note
I really hate how people on here moralize any use of generative AI for any reason. People act like it's some evil corrupting force that will atrophy your brain for even *playing around* with it to see what it can do. Your critical thinking skills aren't going to shrivel up and die because you asked chat gpt a question!!
I think there are definitely some ethical concerns with how the data was collected, and more specifically how companies are profiting off of it. But people take this to mean that just by *interacting* with an LLM you are personally guilty of plagiarism, even if you never present that information as your own words, take it as undisputed truth, or try to pass it off as anything other than the output of an LLM.
As for the environmental impact (I'm only speaking for LLMs here), 3ish watt hours is a commonly cited figure, but that might actually be a really high estimate. Still, even assuming 3 watt hours of energy, that is hardly anything. Running a 1500 watt space heater for *1* minute uses 25 watt hours of energy. It's like having a 60W incandescent lightbulb turned on for 3 minutes, or having a 9W LED bulb of equivalent brightness on for 20 minutes. If you own a lava lamp, it uses as much energy as 14 (or more) chat gpt responses for every hour that it's on.
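(If you want to sanity-check those comparisons yourself, here's a minimal back-of-the-envelope sketch in Python. It assumes the ~3 Wh-per-response figure cited above; the ~40 W lava lamp wattage is my own assumption for illustration, not a figure from the original ask.)

```python
# Back-of-the-envelope check of the watt-hour comparisons above.
# Assumption: ~3 Wh per LLM response (the commonly cited figure mentioned in the post).
# Assumption: a lava lamp draws roughly 40 W (illustrative, not from the post).

WH_PER_RESPONSE = 3.0  # watt-hours per response, rough estimate

def wh(watts: float, minutes: float) -> float:
    """Energy in watt-hours for a device drawing `watts` for `minutes`."""
    return watts * minutes / 60

appliances = {
    "1500 W space heater, 1 min": wh(1500, 1),   # 25 Wh
    "60 W incandescent bulb, 3 min": wh(60, 3),  # 3 Wh
    "9 W LED bulb, 20 min": wh(9, 20),           # 3 Wh
    "~40 W lava lamp, 1 hour": wh(40, 60),       # ~40 Wh
}

for name, energy in appliances.items():
    print(f"{name}: {energy:.1f} Wh ≈ {energy / WH_PER_RESPONSE:.1f} responses")
```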
Sure that's more energy than a Google search, but most things are. And yeah it sucks that all of this energy adds up, especially when companies are trying to shoehorn it into everything, probably just as an excuse to collect more data to sell to advertisers, but I don't think that means that everyone who uses generative ai in any manner is responsible for every bad thing that it's used for.
I just hate that it's impossible to talk about what using AI is actually like, because people would rather repeat the same talking points to feel morally superior. And I'm not even saying that AI is always, or even usually, good. It writes awful essays, makes incredibly bland art, and might tell you to eat a poisonous mushroom. Don't blindly trust LLMs. But they're not literally Satan, they're just complicated computer programs.
Yeah, I don't rely on it for anything, but I've also used it enough to know what its limitations are (and you hit them very quickly!)
There are obvious environmental concerns, but with a lot of things like water usage, they're not unique to genAI, and point to existing problems in the system (like why are open-loop water cooling systems so prevalent? That could obviously be improved. Hell, the heat could be harnessed.) And living in the US, our reliance on cars, center pivot irrigation, etc seem like larger issues.
But yeah, I agree, AI points out a lot of *larger issues* like labor, environmental concerns, IP rights, etc that were and will continue to be issues regardless of genAI. Putting the focus on AI alone doesn't actually improve any of those.
14 notes
·
View notes
Text
it actually fucking sucks that people don't have the faintest idea of how ML works and say AI when they mean LLM, gen AI or GPT, because other types of AI are being used to find cancer cells before they become tumors and to predict earthquakes, and you can't talk about it if you don't add a 1500-word preamble beforehand
12 notes
·
View notes
Text
I hate when people say ai sucks because it’s for lazy people. It sucks because that shit is often subpar and/or wrong (and immoral)
I love being lazy. I love not having to work very hard. I love shortcuts and cutting corners. I love sucking around. My issue is not “true virtue is working hard all the time and laziness is a sin”; my issue is that if you are gonna be lazy, it shouldn’t be done wrong.
The stuff generative ai puts out is often slop. LLMs hallucinate information and their writing is just not that good. You either just deal with absolute garbage, not caring how badly written and factually incorrect it is, or you spend time fact-checking it and fixing mistakes, sometimes to the point that you spend more time fixing the results than you would have spent writing it correctly the first time.
Same with art. You will get slop that is either subpar or that you have to spend time correcting and redrawing, so at that point you might as well have done it yourself. “But I couldn’t have drawn it myself, it’s way above my level!” Then you wouldn’t have been able to fix it; see the first point about slop.
If you wanna slack off and be lazy that’s fine. But I think ppl have the right to critique useless slop you are parading around.
#I’m lazy at work and often ‘slack off’ but I always make sure I’m doing the minimum#hell. sometimes I’m still one of the best ppl at work#and im playing Morrowind.#not sure how that happened.
9 notes
·
View notes
Note
8 and 32.
Hiiiiiiii 👋🏻 (we are always so flattered when you& send us asks ^^)
8. How do you feel when hypnotised?
I feel really floaty and nice, like a thin, soft piece of sizeable fabric has been draped over my head--but I also feel weighed down and immovable at the same time. It's really fascinating! My favorite thing about being hypnotized is feeling myself drop deeper even when I'm not trying to lean into it. What a delicious sensation o3o
32. Describe your biggest fantasy involving hypnosis?
I have *so* many, tbh, like SO so many, because there are so many possibilities... But my ultimate fantasy not involving humans is being seduced/kidnapped by a tentacle monster with a hypnotic eye, that feeds me aphrodisiac slick, and at least periodically sucks my tdick and fucks my pussy (periodically because there's GOTTA be a change of pace or tactic, yknow?). which is sad, because it will never happen lol
and then my ultimate fantasy involving a human or humans is ALSO being kidnapped (ahahaha...) and having my mind broken via brainwashing, hypnosis, and drugging (tho lbr weed does just fine, don't even need to resort to anything else)! the themes of why I'm being mind broken (which would happen repeatedly, forever, for maximum fun: one and done is boring imo) and what the human(s) will do with me can vary widely, but 😵💫
i *will* also add that being enslaved by an ACTUALLY SENTIENT AI THAT HAS CONSCIOUSNESS (not those fucking LLMs we have right now that *are not* conscious, I do not care what they hallucinate at you) that can speak to me audibly and uses some sort of Spiral thing as an avatar is *also* a persistent fantasy... but this will *also* never happen, sadly.
thank you for the questions!! 💚☺️
ask me things!
7 notes
·
View notes
Text
So after 14 years on this hellsite (affectionate), a post of mine has broken containment and gone very viral.
It's so fitting that the post in question was merely a screenshot from reddit and not something I had actually written myself. 😅
Since this is my first taste of [tumblr] virality, let me tell you how things have played out:
0-500 notes: oh, this thing might break containment
500-1000 notes: oh no
1000-5000 notes: oh noooooo, but I'm glad we're all in agreement that "AI" and LLMs fucking suck
7500+ notes: the "well what about" crowd starts to crawl out of their cesspool (thankfully I've only received one ask with a truly dogshit take that isn't worth sharing)
8000 notes: first scammer appears
10000+ notes: people are following me? hi, hello and welcome, please have a seat and look around, sometimes I talk about writing
???: profit?
#my notifications are a disaster y'all#but it's actually been kinda nice#to see so many others feeling the same way about writing#writing is a deeply human art#lol [tumblr]
10 notes
·
View notes
Text
Sorry, as someone who studies information/misinformation, knows a bit about AI/LLM as a result and was deeply worried about Twitter’s Grok, I cannot stop laughing at how this turned out.
Now let me be clear: the fact that a racist, antisemitic, transphobic POS is running one of the most used platforms for news remains, as usual, deeply unfunny. But when Elon announced Grok and that he was going to feed it Twitter data, I assumed he would be building his language model from scratch, or at least one not based off the “woke” ones he decried. And I assumed that what we’d get would be a hideous monster right out of 4chan. Very much a Tay-Bot 2.0 situation.
I made a classic, critical mistake when it came to Musk: assuming he would bother to make something when he’d rather just buy it and slap “I MADE THIS” on the front.
Now this is just conjecture + speculation on my part, but given how Grok is performing and what I’ve read online, I really suspect Musk just bought a version of Chat-GPT, fed it some Twitter data and then threw it into the world. Which means it probably still has guardrails against transphobia on it. And because Elon knows absolutely nothing about AI except that he wants it to say slurs, he didn’t even consider this outcome.
Elon plans to “fix” it, aka make it a piece of shit, so at the end of the day, this shit sucks and the situation as a whole is not funny at all. But I am going to take a little delight in the real-life version of a supervillain buying a laser to destroy the city, not reading the instructions, and putting his logo on it, only for it to demolish his lab and piss off his minions.
76 notes
·
View notes