#llms
unforth · 4 months
Text
Y'all I know that when so-called AI generates ridiculous results it's hilarious and I find it as funny as the next guy but I NEED y'all to remember that every single time an AI answer is generated it uses 5x as much energy as a conventional web search and burns through 10 ml of water. FOR EVERY ANSWER. Each big LLM is equal to 300,000 kilograms of carbon dioxide emissions.
LLMs are killing the environment, and when we generate answers for the lolz we're still contributing to it.
Stop using it. Stop using it for a.n.y.t.h.i.n.g. We need to kill it.
Sources:
60K notes · View notes
prokopetz · 7 days
Text
I 100% agree with the criticism that the central problem with "AI"/LLM evangelism is that people pushing it fundamentally do not value labour, but I often see it phrased with a caveat that they don't value labour except for writing code, and... like, no, they don't value the labour that goes into writing code, either. Tech grifter CEOs have been trying to get rid of programmers within their organisations for years – long before LLMs were a thing – whether it's through algorithmic approaches, "zero coding" development platforms, or just outsourcing it all to overseas sweatshops. The only reason they haven't succeeded thus far is because every time they try, all of their toys break. They pretend to value programming as labour because it's the one area where they can't feasibly ignore the fact that the outcomes of their "disruption" are uniformly shit, but they'd drop the pretence in a heartbeat if they could.
6K notes · View notes
hbbisenieks · 1 year
Text
ok, i've gotta branch off the current ai disc horse a little bit because i saw this trash-fire of a comment in the reblogs of that one post that's going around
[reblog by user makiruz (i don't feel bad for putting this asshole on blast) that reads "So here's the thing: every Diane Duane book that I have is stolen, I downloaded it illegally from the Internet; and I am not sorry, I am a thief of books and I don't think I'm doing anything wrong, ideas are not property, they should be free to be used by anyone as they were before the invention of capitalism; for that reason I don't believe it's wrong to use books to train AI models"]
this is asshole behavior. if you do this and if you believe this, you are a Bad Person full stop.
"Capitalism" as an idea is more recent than commerce, and i am So Goddamn Tired of chuds using the language of leftism to justify their shitty behavior. and that's what this is.
like, we live in a society tm
if you like books but you don't have the means to pay for them, the library exists! libraries support authors! you know what doesn't support authors? stealing their books! because if those books don't sell, then you won't get more books from that author and/or the existing books will go out of print! because we live under capitalism.
and like, even leaving aside the capitalism thing, how much of a fucking piece of literal shit do you have to be to believe that you deserve art, that you deserve someone else's labor, but that they don't deserve to be able to live? to feed and clothe themselves? sure, ok, ideas aren't property, and you can't copyright an idea, but you absolutely can copyright the Specific Execution of an idea.
so makiruz, if you're reading this, or if you think like this user does, i hope you shit yourself during a job interview. like explosively. i hope you step on a lego when you get up to pee in the middle of the night. i hope you never get to read another book in your whole miserable goddamn life until you disabuse yourself of the idea that artists are "idea landlords" or whatever the fuck other cancerous ideas you've convinced yourself are true to justify your abhorrent behavior.
4K notes · View notes
Text
How plausible sentence generators are changing the bullshit wars
This Friday (September 8) at 10hPT/17hUK, I'm livestreaming "How To Dismantle the Internet" with Intelligence Squared.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
In my latest Locus Magazine column, "Plausible Sentence Generators," I describe how I unwittingly came to use – and even be impressed by – an AI chatbot – and what this means for a specialized, highly salient form of writing, namely, "bullshit":
https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/
Here's what happened: I got stranded at JFK due to heavy weather and an air-traffic control tower fire that locked down every westbound flight on the east coast. The American Airlines agent told me to try going standby the next morning, and advised that if I booked a hotel and saved my taxi receipts, I would get reimbursed when I got home to LA.
But when I got home, the airline's reps told me they would absolutely not reimburse me, that this was their policy, and they didn't care that their representative had promised they'd make me whole. This was so frustrating that I decided to take the airline to small claims court: I'm no lawyer, but I know that a contract takes place when an offer is made and accepted, and so I had a contract, and AA was violating it, and stiffing me for over $400.
The problem was that I didn't know anything about filing a small claim. I've been ripped off by lots of large American businesses, but none had pissed me off enough to sue – until American broke its contract with me.
So I googled it. I found a website that gave step-by-step instructions, starting with sending a "final demand" letter to the airline's business office. They offered to help me write the letter, and so I clicked and I typed and I wrote a pretty stern legal letter.
Now, I'm not a lawyer, but I have worked for a campaigning law-firm for over 20 years, and I've spent the same amount of time writing about the sins of the rich and powerful. I've seen a lot of threats, both those received by our clients and sent to me.
I've been threatened by everyone from Gwyneth Paltrow to Ralph Lauren to the Sacklers. I've been threatened by lawyers representing the billionaire who owned NSO Group, the notorious cyber arms-dealer. I even got a series of vicious, baseless threats from lawyers representing LAX's private terminal.
So I know a thing or two about writing a legal threat! I gave it a good effort and then submitted the form, and got a message asking me to wait for a minute or two. A couple minutes later, the form returned a new version of my letter, expanded and augmented. Now, my letter was a little scary – but this version was bowel-looseningly terrifying.
I had unwittingly used a chatbot. The website had fed my letter to a Large Language Model, likely ChatGPT, with a prompt like, "Make this into an aggressive, bullying legal threat." The chatbot obliged.
I don't think much of LLMs. After you get past the initial party trick of getting something like, "instructions for removing a grilled-cheese sandwich from a VCR in the style of the King James Bible," the novelty wears thin:
https://www.emergentmind.com/posts/write-a-biblical-verse-in-the-style-of-the-king-james
Yes, science fiction magazines are inundated with LLM-written short stories, but the problem there isn't merely the overwhelming quantity of machine-generated stories – it's also that they suck. They're bad stories:
https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence
LLMs generate naturalistic prose. This is an impressive technical feat, and the details are genuinely fascinating. This series by Ben Levinstein is a must-read peek under the hood:
https://benlevinstein.substack.com/p/how-to-think-about-large-language
But "naturalistic prose" isn't necessarily good prose. A lot of naturalistic language is awful. In particular, legal documents are fucking terrible. Lawyers affect a stilted, stylized language that is both officious and obfuscated.
The LLM I accidentally used to rewrite my legal threat transmuted my own prose into something that reads like it was written by a $600/hour paralegal working for a $1500/hour partner at a white-shoe law firm. As such, it sends a signal: "The person who commissioned this letter is so angry at you that they are willing to spend $600 to get you to cough up the $400 you owe them. Moreover, they are so well-resourced that they can afford to pursue this claim beyond any rational economic basis."
Let's be clear here: these kinds of lawyer letters aren't good writing; they're a highly specific form of bad writing. The point of this letter isn't to parse the text, it's to send a signal. If the letter was well-written, it wouldn't send the right signal. For the letter to work, it has to read like it was written by someone whose prose-sense was irreparably damaged by a legal education.
Here's the thing: the fact that an LLM can manufacture this once-expensive signal for free means that the signal's meaning will shortly change, forever. Once companies realize that this kind of letter can be generated on demand, it will cease to mean, "You are dealing with a furious, vindictive rich person." It will come to mean, "You are dealing with someone who knows how to type 'generate legal threat' into a search box."
Legal threat letters are in a class of language formally called "bullshit":
https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit
LLMs may not be good at generating science fiction short stories, but they're excellent at generating bullshit. For example, a university prof friend of mine admits that they and all their colleagues are now writing grad student recommendation letters by feeding a few bullet points to an LLM, which inflates them with bullshit, adding puffery to swell those bullet points into lengthy paragraphs.
Naturally, the next stage is that profs on the receiving end of these recommendation letters will ask another LLM to summarize them by reducing them to a few bullet points. This is next-level bullshit: a few easily-grasped points are turned into a florid sheet of nonsense, which is then reconverted into a few bullet-points again, though these may only be tangentially related to the original.
What comes next? The reference letter becomes a useless signal. It goes from being a thing that a prof has to really believe in you to produce, whose mere existence is thus significant, to a thing that can be produced with the click of a button, and then it signifies nothing.
We've been through this before. It used to be that sending a letter to your legislative representative meant a lot. Then, automated internet forms produced by activists like me made it far easier to send those letters and lawmakers stopped taking them so seriously. So we created automatic dialers to let you phone your lawmakers, this being another once-powerful signal. Lowering the cost of making the phone call inevitably made the phone call mean less.
Today, we are in a war over signals. The actors and writers who've trudged through the heat-dome up and down the sidewalks in front of the studios in my neighborhood are sending a very powerful signal. The fact that they're fighting to prevent their industry from being enshittified by plausible sentence generators that can produce bullshit on demand makes their fight especially important.
Chatbots are the nuclear weapons of the bullshit wars. Want to generate 2,000 words of nonsense about "the first time I ate an egg," to run overtop of an omelet recipe you're hoping to make the number one Google result? ChatGPT has you covered. Want to generate fake complaints or fake positive reviews? The Stochastic Parrot will produce 'em all day long.
As I wrote for Locus: "None of this prose is good, none of it is really socially useful, but there’s demand for it. Ironically, the more bullshit there is, the more bullshit filters there are, and this requires still more bullshit to overcome it."
Meanwhile, AA still hasn't answered my letter, and to be honest, I'm so sick of bullshit I can't be bothered to sue them anymore. I suppose that's what they were counting on.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/07/govern-yourself-accordingly/#robolawyers
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
2K notes · View notes
fipindustries · 1 month
Text
AIs dont understand the real world
a lot has been said over the fact that LLMs dont actually understand the real world, just the statistical relationship between empty tokens, empty words. i say "empty" because in the AI's mind those words dont actually connect to a real world understanding of what the words represent. the AI may understand that the collection of letters "D-O-G" may have some statistical connection to "T-A-I-L" and "F-U-R" and "P-O-O-D-L-E" but it doesnt actually know anything about what an actual Dog is or what is a Tail or actual Fur or real life Poodles.
and yet it seems to be capable of holding remarkably coherent conversations. it seems more and more, with each new model that comes out, to become better at answering questions, at reasoning, at creating original writing. if it doesnt truly understand the world it sure seems to get better at acting like it does with nothing but a statistical understanding of how words are related to each other.
i guess the question ultimately is "if you understand well enough the relationship between raw symbols could you have an understanding of the underlying relationships between the things these symbols represent?"
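a minimal sketch of what "statistical relationship between tokens" means in practice: a toy bigram counter. this is nothing like a real transformer, and the tiny corpus here is invented for illustration, but it shows how "dog" can end up statistically tied to "tail" and "fur" from text alone, with no concept of an actual dog anywhere in the system:

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees strings, never dogs.
corpus = [
    "the dog wagged its tail",
    "the poodle is a dog with curly fur",
    "a dog has fur and a tail",
]

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1

# "dog" is now statistically linked to "wagged", "with", and "has",
# purely from co-occurrence counts over raw symbols.
print(following["dog"].most_common())
```

a real LLM does something vastly more elaborate (learned embeddings, attention over long contexts), but the raw material is the same: counts of which symbols appear near which other symbols.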
now, let me take a small tangent towards human understanding. specifically towards what philosophy has to say about it.
one of the classic problems philosophers deal with is, how do we know the world is real. how do we know we can trust our senses, how do we know the truth? many argue that we cant. that we dont really perceive the true world out there beyond ourselves. all we can perceive is what our senses are telling us about the world.
lets think of sight.
if you think about it, we dont really "see" objects, right? we just see the light that bounces off those objects. and even then we dont really "see" the photons that collide with our eye, we see the images that our brain generates in our mind, presumably because the corresponding photons collided with our eye. but colorblind people and people who experience visual hallucinations have shown that what we see doesnt always have to correspond with actual physical phenomena occurring in the real world.
we dont see the real world, we see referents to it. and from the relationships between these referents we start to infer the properties of the actual world out there. are the lights that hit our eye so different from the words that the LLM is trained on? each is only a referent, and understanding referents well enough lets a mind function in the real world, even if it cant actually "perceive" the "real" world.
but, one might say, we dont just have sight. we have other senses, we have smell and touch and taste and audition. all of these things allow us to form a much richer and multidimensional understanding of the world, even if by virtue of being human senses they have all the problems that sight has of being one step removed from the real world. so to this i will say that multimodal AIs also exist. AIs that can connect audio and visuals and words together and form relationships between all of this disparate data.
so then, can it be said that they understand the world? and if not yet, is that a categorical difference or merely a difference of degree? that is to say, not that they categorically cant understand the world but simply that they understand it less well than humans do.
40 notes · View notes
Text
The other thing about AI/Large image models is that there are things that they could be trained to do that are actually useful.
In college one of my friends was training a machine learning model to tell apart native fish species from invasive fish species in the Great Lakes to try to create a neural network program that could monitor cameras and count fish populations.
Yesterday I was reading about a man on a cruise ship who was reported missing in the afternoon, and when they went through the security footage, they found out that he had fallen overboard at 4 AM.
Imagine if an AI program had been trained on "people falling overboard" and been monitoring the security cameras and able to alert someone at 4 am that someone had fallen into the water! Imagine the animal population counts that an AI monitoring multiple trailcams in a wildlife area could do!
There are valid uses for this kind of pattern-matching large-data-processing recognition!
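A sketch of what such a monitoring system might look like. Everything here is hypothetical: `overboard_score` is a stand-in for a trained neural network scoring each video frame, and the frames are invented labels rather than real footage; only the alerting loop itself is concrete:

```python
from datetime import datetime

# Hypothetical stand-in for a trained classifier. In a real system this
# would be a neural network returning a probability for each video frame;
# here it just keys off a label so the surrounding loop is runnable.
def overboard_score(frame):
    return 0.99 if frame["label"] == "person_overboard" else 0.01

def monitor(frames, threshold=0.5):
    """Scan frames in order; return the timestamp of the first alert, or None."""
    for frame in frames:
        if overboard_score(frame) >= threshold:
            return frame["time"]
    return None

frames = [
    {"time": datetime(2024, 1, 1, 3, 58), "label": "empty_deck"},
    {"time": datetime(2024, 1, 1, 4, 0), "label": "person_overboard"},
    {"time": datetime(2024, 1, 1, 4, 2), "label": "empty_deck"},
]

# Alerts at the 4 AM frame, rather than waiting for an afternoon head-count.
print(monitor(frames))
```

The hard part in practice is the classifier, not the loop: getting enough labeled footage of rare events like falls, and keeping false-alarm rates low enough that crews don't ignore the alerts.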
But no we're using it to replace writers and artists. Because it's easier, and more profitable.
400 notes · View notes
look-like-my-sound · 1 month
Text
Made a cool patch for my vest that im working on! I think it turned out pretty cool.
Progress photos!
33 notes · View notes
cbirt · 6 months
Link
Picture a world where computers can not only translate languages but also decipher biology’s convoluted language. This is the exciting frontier of Large Language Models (LLMs) that could transform our knowledge of genes and cells, which are the foundation of all life forms. Researchers from the Center, Chinese Academy of Sciences, China, explore this intriguing crossroad. Genes, which are passed down from one generation to another, hold the truths of our being. Cells, the tiny factories that keep us alive, execute these instructions coded in genes. Decoding how genes and cells interact helps to unravel health complications, diseases, and even mysteries regarding evolution.
Traditionally, scientists have used gene sequencing to study these intricate associations. But LLMs present an alternative way forward with immense promise. These models are trained using huge volumes of text data, enabling them to understand complicated patterns as well as mappings between them. Perhaps scientists can get a breakthrough by passing such datasets through LLMs.
Continue Reading
36 notes · View notes
chaoskirin · 4 months
Text
Be Aware of AI Images (and don't reblog them)
A lot of aesthetic blogs have been pivoting to creating and RTing AI-generated "artwork," and I'm asking tumblr, with all my heart, to please ice them out.
Yes, even if it's your aesthetic. AI not only steals from artists, but it's now sucking up more electricity than some small countries. Just to give you an idea, Large Language Models (LLMs) like ChatGPT use FOUR TIMES the power Denmark uses in a year. Why Denmark? IDK. That's what the study looked at.
There's also a REALLY excellent possibility that the cooling needs of LLMs (if they continue on their current trajectory) will require more freshwater for cooling than all the freshwater in existence in just a few years. I mean each time you use Google and it spits out an AI answer at you, that's 3 kWh. You know how much electricity your personal computer uses in one day? Like, if it's on for a full 8 hours? Only about 1.5 kWh.
So if you do, let's say, 15 Google searches a day, around 100 a week, you're using as much electricity as your personal computer uses in 6 months.
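Taking the post's figures at face value (they are claims from the post, not measurements I can verify: 3 kWh per AI-assisted search, about 1.5 kWh for a PC running 8 hours a day), the comparison works out roughly like this:

```python
# The post's own figures -- claims, not independently verified numbers.
kwh_per_ai_search = 3.0   # claimed energy per AI-assisted Google search
kwh_per_pc_day = 1.5      # claimed energy for a PC running 8 hours

searches_per_week = 100
weekly_search_kwh = searches_per_week * kwh_per_ai_search  # 300 kWh

# How many days of ordinary PC use does one week of searching equal?
pc_days = weekly_search_kwh / kwh_per_pc_day

print(pc_days)  # 200.0 -- about six and a half months of PC use
```

So under these assumptions, one week of AI-assisted searching equals roughly half a year of desktop computing.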
And it's not YOUR fault that Google sold out. But I want you to be aware that LLMs and generated images ARE doing damage, and I'm asking you to do your best to not encourage image generation blogs to keep spitting out hundreds of images.
There are ways to recognize these images. Think about the content. Does it make sense? Look in the really high detail areas. Are you actually seeing patterns, or is it just a lot of visual noise to make you think it's detailed? Do things line up? Look at windows in images, or mirrors. Do they make sense? (IE if you look through a window, does what's on the other side make sense?)
I know it's a pain to analyze every single image you reblog, but you have to end the idea that the internet is only content that you can burn through. Take a second to look at these images and learn to recognize what's real from what's machine-generated.
And to be fair, it's very difficult--almost impossible--for individual action to stop a train like this. The only thing I can hope for is that because the mass generation of images is actually being done BY INDIVIDUALS (not corporations) icing them out will cause them to get bored and move on to the next Get Rich Quick scheme.
34 notes · View notes
underlockv · 1 year
Text
Interestingly enough I think calling large language models a.i. is doing too much to humanize them. Because of how scifi literature has built up a.i. as living beings with actual working thought processes deserving of the classification of person (Bicentennial Man, etc.), a lot of people want to view a.i. as entities. And corporations pushing a.i. can take advantage of your soft feelings toward it like that. But LLMs are nowhere close to that, and tbh I don't even feel the way they learn approaches it. Word-order guessing machines can logic their way to a regular-sounding sentence, but that's not anything approaching having a conversation with a person. Remembering what you said is just storing the information you are typing into it; it's not any kind of indication of existence. And yet, so many people online are acting like when my grandma was convinced Siri was actually a lady living in her phone. I think we need to start calling Large Language Models "LLMs" and not giving the corps pushing them more of an in with the general public. It's marketing spin, stop falling for it.
122 notes · View notes
prokopetz · 4 months
Text
I wish generative AI really was as bad with specific quantities of things as the jokes claim it is. Imagine AI-generated 9/11 inspiration porn on Facebook with the wrong number of towers.
916 notes · View notes
nova-ayashi · 2 months
Text
Today on Nova Prime, I write about, and then tear to shreds, NaNoWriMo for its sudden endorsement of AI and its use of social justice language to manipulate people, because obviously that's the only way to convince readers and writers that you've not gone off the deep end completely.
10 notes · View notes
Text
Sympathy for the spammer
Catch me in Miami! I'll be at Books and Books in Coral Gables on Jan 22 at 8PM.
In any scam, any con, any hustle, the big winners are the people who supply the scammers – not the scammers themselves. The kids selling dope on the corner are making less than minimum wage, while the respectable crime-bosses who own the labs clean up. Desperate "retail investors" who buy shitcoins from Superbowl ads get skinned, while the MBA bros who issue the coins make millions (in real dollars, not crypto).
It's ever been thus. The California gold rush was a con, and nearly everyone who went west went broke. Famously, the only reliable way to cash out on the gold rush was to sell "picks and shovels" to the credulous, doomed and desperate. That's how Leland Stanford made his fortune, which he funneled into eugenics programs (and founding a university):
https://www.hachettebookgroup.com/titles/malcolm-harris/palo-alto/9780316592031/
That means that the people who try to con you are almost always getting conned themselves. Think of Multi-Level Marketing (MLM) scams. My forthcoming novel The Bezzle opens with a baroque and improbable fast-food Ponzi in the town of Avalon on the island of Catalina, founded by the chicle monopolist William Wrigley Jr:
http://thebezzle.org
Wrigley found fast food declasse and banned it from the island, a rule that persists to this day. In The Bezzle, the forensic detective Martin Hench uncovers The Fry Guys, an MLM that flash-freezes contraband burgers and fries smuggled on-island from the mainland and sells them to islanders through an "affiliate marketing" scheme that is really about recruiting other affiliate marketers to sell under you. As with every MLM, the value of the burgers and fries sold is dwarfed by the gigantic edifice of finance fraud built around it, with "points" being bought and sold for real cash, which is snaffled up and sucked out of the island by a greedy mainlander who is behind the scheme.
A "bezzle" is John Kenneth Galbraith's term for "the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it." In every scam, there's a period where everyone feels richer – but only the scammers are actually cleaning up. The wealth of the marks is illusory, but the longer the scammer can preserve the illusion, the more real money the marks will pump into the system.
MLMs are particularly ugly, because they target people who are shut out of economic opportunity – women, people of color, working people. These people necessarily rely on social ties for survival, looking after each others' kids, loaning each other money they can't afford, sharing what little they have when others have nothing.
It's this social cohesion that MLMs weaponize. Crypto "entrepreneurs" are encouraged to suck in their friends and family by telling them that they're "building Black wealth." Working women are exhorted to suck in their bffs by appealing to their sisterhood and the chance for "women to lift each other up."
The "sales people" trying to get you to buy crypto or leggings or supplements are engaged in predatory conduct that will make you financially and socially worse off, wrecking their communities' finances and shattering the mutual aid survival networks they rely on. But they're not getting rich on this – they're also being scammed:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4686468
This really hit home for me in the mid-2000s, when I was still editing Boing Boing. We had a submission form where our readers could submit links for us to look at for inclusion on the blog, and it was overwhelmed by spam. We'd add all kinds of antispam to it, and still, we'd get floods of hundreds or even thousands of spam submissions to it.
One night, I was lying in my bed in London and watching these spams roll in. They were all for small businesses in the rustbelt, handyman services, lawn-care, odd jobs, that kind of thing. They were 10 million miles from the kind of thing we'd ever post about on Boing Boing. They were coming in so thickly that I literally couldn't finish downloading my email – the POP session was dropping before I could get all the mail in the spool. I had to ssh into my mail server and delete them by hand. It was maddening.
Frustrated and furious, I started calling the phone numbers associated with these small businesses, demanding an explanation. I assumed that they'd hired some kind of sleazy marketing service and I wanted to know who it was so I could give them a piece of my mind.
But what I discovered when I got through was much weirder. These people had all been laid off from factories that were shuttering due to globalization. As part of their termination packages, their bosses had offered them "retraining" via "courses" in founding their own businesses.
The "courses" were the precursors to the current era's rise-and-grind hustle-culture scams (again, the only people getting rich from that stuff are the people selling the courses – the "students" finish the course poorer). They promised these laid-off workers, who'd given their lives to their former employers before being discarded, that they just needed to pull themselves up by their own bootstraps:
https://pluralistic.net/2023/04/10/declaration-of-interdependence/#solidarity-forever
After all, we had the internet now! There were so many new opportunities to be your own boss! The course came with a dreadful build-your-own-website service, complete with an overpriced domain sales portal, and a single form for submitting your new business to "thousands of search engines."
This was nearly 20 years ago, but even then, there was really only one search engine that mattered: Google. The "thousands of search engines" the scammers promised to submit these desperate peoples' websites to were just submission forms for directories, indexes, blogs, and mailing lists. The number of directories, indexes, blogs and mailing lists that would publish their submissions was either "zero" or "nearly zero." There was certainly no possibility that anyone at Boing Boing would ever press the wrong key and accidentally write a 500-word blog post about a leaf-raking service in a collapsing deindustrialized exurb in Kentucky or Ohio.
The people who were drowning me in spam weren't the scammers – they were the scammees.
But that's only half the story. Years later, I discovered how our submission form had been included in this get-rich-quick scheme's mass-submission system. It was an MLM! Coders in the former Soviet Union were getting work via darknet websites that promised them relative pittances for every submission form they reverse-engineered and submitted. The smart coders didn't crack the forms directly – they recruited other, less business-savvy coders to do that for them, and then, often as not, ripped them off.
The scam economy runs on this kind of indirection, where scammees are turned into scammers, who flood useful and productive and nice spaces with useless dross that doesn't even make them any money. Take the submission queue at Clarkesworld, the great online science fiction magazine, which famously had to close its submissions after it was flooded with thousands of junk submissions "written" by LLMs:
https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence
There was a zero percent chance that Neil Clarke would accidentally accept one of these submissions. They were uniformly terrible. The people submitting these "stories" weren't frustrated sf writers who'd discovered a "life hack" that let them turn out more brilliant prose at scale.
They were scammers who'd been scammed into thinking that AIs were the key to a life of passive income, a 4-Hour Work-Week powered by an AI-based self-licking ice-cream cone:
https://pod.link/1651876897/episode/995c8a778ede17d2d7cff393e5203157
This is absolutely classic passive-income brainworms thinking. "I have a bot that can turn out plausible sentences. I will locate places where sentences can be exchanged for money, aim my bot at it, sit back, and count my winnings." It's MBA logic on meth: find a thing people pay for, then, without bothering to understand why they pay for that thing, find a way to generate something like it at scale and bombard them with it.
Con artists start by conning themselves, with the idea that "you can't con an honest man." But the factor that predicts whether someone is connable isn't their honesty – it's their desperation. The kid selling drugs on the corner, the mom desperately DMing her high-school friends to sell them leggings, the cousin who insists that you get in on their shitcoin – they're all doing it because the system is rigged against them, and getting worse every day.
These people reason – correctly – that all the people getting really rich are scamming. If Amazon can make $38b/year selling "ads" that push worse products that cost more to the top of their search results, why should the mere fact that an "opportunity" is obviously predatory and fraudulent disqualify it?
https://pluralistic.net/2023/11/29/aethelred-the-unready/#not-one-penny-for-tribute
The quest for passive income is really the quest for a "greater fool," the economist's term for the person who relieves you of the useless crap you just overpaid for. It rots the mind, atomizes communities, shatters solidarity and breeds cynicism:
https://pluralistic.net/2023/02/24/passive-income/#swiss-cheese-security
The rise and rise of botshit cannot be separated from this phenomenon. The botshit in our search-results, our social media feeds, and our in-boxes isn't making money for the enshittifiers who send it – rather, they are being hustled by someone who's selling them the "picks and shovels" for the AI gold rush:
https://www.theguardian.com/commentisfree/2024/jan/03/botshit-generative-ai-imminent-threat-democracy
That's the true cost of all the automation-driven unemployment criti-hype: while we're nowhere near a place where bots can steal your job, we're certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The manic "entrepreneurs" who've been stampeded into panic by the (correct) perception that the economy is a game of musical chairs where the number of chairs is decreasing at breakneck speed are easy marks for the Leland Stanfords of AI, who are creating generational wealth for themselves by promising that their bots will automate away all the tedious work that goes into creating value. Expect a lot more Amazon Marketplace products called "I'm sorry, I cannot fulfil this request as it goes against OpenAI use policy":
https://www.theverge.com/2024/1/12/24036156/openai-policy-amazon-ai-listings
No one's going to buy these products, but the AI picks-and-shovels people will still reap a fortune from the attempt. And because history repeats itself, these newly minted billionaires are continuing Leland Stanford's love affair with eugenics:
https://www.truthdig.com/dig-series/eugenics/
The fact that AI spam doesn't pay is important to the fortunes of AI companies. Most high-value AI applications are very risk-intolerant (self-driving cars, radiology analysis, etc). An AI tool might help a human perform these tasks more accurately – by warning them of things that they've missed – but that's not how AI will turn a profit. There's no market for AI that makes your workers cost more but makes them better at their jobs:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Plenty of people think that spam might be the elusive high-value, low-risk AI application. But that's just not true. The point of AI spam is to get clicks from people who are looking for better content. It's SEO. No one reads 2000 words of algorithm-pleasing LLM garbage over an omelette recipe and then subscribes to that site's feed.
And the omelette recipe generates pennies for the spammer that posted it. They are doing massive volume in order to make those pennies into dollars. You don't make money by posting one spam. If every spammer had to pay the actual recovery costs (energy, chillers, capital amortization, wages) for their query, every AI spam would lose (lots of) money.
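That arithmetic is easy to sketch. Every dollar figure below is a made-up placeholder, not a measured cost, but the shape of the calculation is the point: the hustle only pencils out while someone else eats the real cost of each query.

```python
# Illustrative back-of-the-envelope unit economics for AI spam.
# All numbers are assumptions for the sake of the sketch, not data.

def spam_margin(revenue_per_post: float, subsidized_cost: float,
                true_cost: float) -> tuple[float, float]:
    """Per-post profit under subsidized vs. full-cost-recovery pricing."""
    return (revenue_per_post - subsidized_cost,
            revenue_per_post - true_cost)

# Assumed figures: a spam recipe page earns a fraction of a cent per
# visit; the spammer pays a heavily subsidized per-query price today;
# the "true" per-query cost folds in energy, chillers, amortized
# hardware, and wages.
revenue = 0.002       # $ earned per spam post (a few ad impressions)
subsidized = 0.0005   # $ the spammer pays per generation today
true_cost = 0.02      # $ assumed full recovery cost per generation

with_subsidy, at_full_cost = spam_margin(revenue, subsidized, true_cost)
print(f"margin with subsidized queries: ${with_subsidy:+.4f}")
print(f"margin at full recovery cost:   ${at_full_cost:+.4f}")
```

Under these (invented) numbers the spammer clears a sliver of profit per post only because the query is subsidized; priced at full cost recovery, every post loses an order of magnitude more than it earns, and no amount of volume fixes a negative unit margin.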
Hustle culture and passive income are about turning other peoples' dollars into your dimes. It is a negative-sum activity, a net drain on society. Behind every seemingly successful "passive income" is a con artist who's getting rich by promising – but not delivering – that elusive passive income, and then blaming the victims for not hustling hard enough:
https://www.ftc.gov/business-guidance/blog/2023/12/blueprint-trouble
I'm Kickstarting the audiobook for The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There's also bundles with Red Team Blues in ebook, audio or paperback.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
825 notes · View notes
mlearningai · 1 month
Text
Excited to share this conversation on crafting LLMs with words
Watch how we unlock creativity through the power of language!
6 notes · View notes