# ChatGPT monetization
Explore tagged Tumblr posts
assetgas1 · 2 years ago
Text
Monetizing Chatbots: Strategies for Earning with ChatGPT
Introduction: Hey there! Have you ever thought about turning your chatbot interactions into a revenue stream? Well, you’re in the right place. In this guide, we’ll explore the exciting world of monetizing chatbots, specifically focusing on the power of ChatGPT. Whether you’re a business owner, content creator, or just someone looking to make a little extra cash, ChatGPT offers a range of…
View On WordPress
0 notes
kevinmarville · 20 days ago
Text
📘 Blog Post about Revenue Losses in Web Publishing – A Warren Buffett Perspective
0 notes
catchexperts · 3 months ago
Text
AI Chatbots Are Disrupting Digital Advertising
Tumblr media
AI chatbots like ChatGPT are not just answering questions — they’re changing how brands connect with users. As OpenAI announced in April 2024, sponsored content integrations are coming, and conversational advertising is emerging as the next big trend.
If done right, this will transform the future of digital marketing, creating new, trust-based ways for brands to engage consumers.
How AI Chatbots Are Reshaping Digital Advertising
Digital advertising was once ruled by:
Google Ads (targeting search intent)
Facebook Ads (targeting detailed personal data)
But a Gartner report now predicts that 25% of online searches will happen via AI chatbots by 2026 — disrupting search behaviors and traditional ad models.
Users expect direct, conversational answers — bypassing web pages filled with ads.
New Opportunities for Marketers in AI Chatbots
Brands that want to succeed must create ads that blend naturally into conversation flows without losing user trust.
Here are five innovative conversational advertising strategies:
Sponsored Recommendations: seamlessly suggesting sponsored products when users ask questions.
Product Mentions in Content: recommending sponsored airlines, hotels, or tools inside travel plans or workflows.
Chat-Integrated Promotions: offering exclusive deals or discount codes naturally during conversations.
Premium Listings in Comparisons: featuring sponsored brands higher in comparison lists or as bonus options.
Interactive Brand Experiences: offering optional mini-chats with brand-sponsored "experts" inside the main chatbot experience.
Why Transparency and Value Will Define Success
In chatbot advertising, user trust is everything.
To succeed:
Clearly label sponsored content.
Keep ads highly relevant to the user’s intent.
Ensure ads add real value to the conversation.
Misleading or intrusive ads will quickly destroy user trust — and the brand's reputation.
Final Thoughts: Conversational Ads Are the Future
Advertising in AI chatbots marks a pivotal shift in digital marketing. Brands that master conversational advertising early will capture new audiences, deeper engagement, and long-term loyalty.
Get ready — the future of advertising isn’t search or social media alone anymore. It’s conversation.
Ready to Elevate Your Digital Marketing Strategy?
The future of digital advertising is conversational — and the brands that adapt now will lead tomorrow. Visit Best Digital Marketing Services to learn more.
Stay connected for the latest trends, insights, and strategies:
LinkedIn
Twitter
Instagram
Facebook
YouTube
0 notes
pinkyjulien · 1 month ago
Text
17.06 - "Should we expect any kind of change concerning content moderation?"
When asked about inclusive content and hateful/bigoted mods, Foldinho, one of Nexus' new owners, replied:
"[...] we fully intend to continue supporting moderation principles that make Nexus feel safe and inclusive. That foundation matters, and we won’t compromise on it."
Tumblr media
Comment available on Dark0ne post
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
17.06 - NexusMods' new owners, Marinus, Nikolai and Victor, posted a new pinned comment on Dark0ne's post to "clear the air on a few things"
Tumblr media
Link to the post Full copy/pasted post and questions available in a reblog
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
16.06 - Nexus Mods was acquired by Chosen, a company focused on growth and monetization of gaming startups.
Reddit thread
Tumblr media
Dark0ne, creator and ex-owner of Nexus Mods, stepped down today in an update post:
Tumblr media
NexusMods post
Users on Resetera were able to find out more about this sudden change of ownership:
Tumblr media
Resetera thread
Tumblr media
Victor Folmann, CEO of Chosen, published this "Gaming Startup Monetization Cheat Sheet" on his LinkedIn
Tumblr media
Link to the post
Users are already sharing their fears that NexusMods will allow more bigoted mods in the future and take down inclusive ones, following the current "anti-woke" trend in the gaming industry
Tumblr media Tumblr media
Resetera thread about their change in moderation concerning bigoted mods following the Oblivion Remastered launch
Another user, this time on the NexusMods forums, noticed queer and inclusive mods being removed following Dark0ne's post
Tumblr media
Link to the forum post
It is also worth mentioning that Chosen and its CEO have been seen using AI-generated images on their website (including generated images depicting Tracer from Overwatch), hyping up ChatGPT on his LinkedIn account, and promoting "rewarding gamers with crypto money"
Tumblr media
Users speculate that Chosen bought out NexusMods to potentially train an AI model that generates mods
Tumblr media
More about Nvidia's AI "RTX Remix" and future projects
8K notes · View notes
yestobetop · 2 years ago
Text
The Art of ChatGPT Profit: Monetization Techniques for Financial Growth
How do you make money with ChatGPT? What is ChatGPT? OpenAI created ChatGPT, an advanced language model designed to generate human-like text responses to given prompts. Powered by deep learning algorithms, ChatGPT can engage in natural and dynamic conversations, making it an ideal tool for a variety of applications. ChatGPT can be used for a variety of purposes,…
Tumblr media
View On WordPress
0 notes
autisticandroids · 2 years ago
Text
i've been seeing ai takes that i actually agree with and have been saying for months get notes so i want to throw my hat into the ring.
so i think there are two main distinct problems with "ai," which exist kind of in opposition to each other. the first happens when ai is good at what it's supposed to do, and the second happens when it's bad at it.
the first is well-exemplified by ai visual art. now, there are a lot of arguments about the quality of ai visual art, about how it's soulless, or cliche, or whatever, and to those i say: do you think ai art is going to be replacing monet and picasso? do you think those pieces are going in museums? no. they are going to be replacing soulless dreck like corporate logos, the sprites for low-rent edugames, and book covers with that stupid cartoon art style made in canva. the kind of art that everyone thinks of as soulless and worthless anyway. the kind of art that keeps people with art degrees actually employed.
this is a problem of automation. while ai art certainly has its flaws and failings, the main issue with it is that it's good enough to replace crap art that no one does by choice. which is a problem of capitalism. in a society where people don't have to sell their labor to survive, machines performing labor more efficiently so humans don't have to is a boon! this is i think more obviously true for, like, manufacturing than for art - nobody wants to be the guy putting eyelets in shoes all day, and everybody needs shoes, whereas a lot of people want to draw their whole lives, and nobody needs visual art (not the way they need shoes) - but i think that it's still true that in a perfect world, ai art would be a net boon, because giving people without the skill to actually draw the ability to visualize the things they see inside their head is... good? wider access to beauty and the ability to create it is good? it's not necessary, it's not vital, but it is cool. the issue is that we live in a society where that also takes food out of people's mouths.
but the second problem is the much scarier one, imo, and it's what happens when ai is bad. in the current discourse, that's exemplified by chatgpt and other large language models. as much hand-wringing as there has been about chatgpt replacing writers, it's much worse at imitating human-written text than, say, midjourney is at imitating human-made art. it can imitate style well, which means that it can successfully replace text that has no meaningful semantic content - cover letters, online ads, clickbait articles, the kind of stuff that says nothing and exists to exist. but because it can't evaluate what's true, or even keep straight what it said thirty seconds ago, it can't meaningfully replace a human writer. it will honestly probably never be able to unless they change how they train it, because the way LLMs work is so antithetical to how language and writing actually works.
the issue is that people think it can. which means they use it to do stuff it's not equipped for. at best, what you end up with is a lot of very poorly written children's books selling on amazon for $3. this is a shitty scam, but is mostly harmless. the behind the bastards episode on this has a pretty solid description of what that looks like right now, although they also do a lot of pretty pointless fearmongering about the death of art and the death of media literacy and saving the children. (incidentally, the "comics" described demonstrate the ways in which ai art has the same weaknesses as ai text - both are incapable of consistency or narrative. it's just that visual art doesn't necessarily need those things to be useful as art, and text (often) does). like, overall, the existence of these kids book scams is bad? but they're a gnat bite.
to find the worst case scenario of LLM misuse, you don't even have to leave the amazon kindle section. you don't even have to stop looking at scam books. all you have to do is change from looking at kids books to foraging guides. i'm not exaggerating when i say that in terms of texts whose factuality has direct consequences, foraging guides are up there with building safety regulations. if a foraging guide has incorrect information in it, people who use that foraging guide will die. that's all there is to it. there is no antidote to amanita phalloides poisoning, only supportive care, and even if you survive, you will need a liver transplant.
the problem here is that sometimes it's important for text to be factually accurate. openart isn't marketed as photographic software, and even though people do use it to lie, they have also been using photoshop to do that for decades, and before that it was scissors and paintbrushes. chatgpt and its ilk are sometimes marketed as fact-finding software, search engine assistants and writing assistants. and this is dangerous. because while people have been lying intentionally for decades, the level of misinformation potentially provided by chatgpt is unprecedented. and then there are people like the foraging book scammers who aren't lying on purpose, but rather not caring about the truth content of their output. obviously this happens in real life - the kids book scam i mentioned earlier is just an update of a non-ai scam involving ghostwriters - but it's much easier to pull off, and unlike lying for personal gain, which will always happen no matter how difficult it is, lying out of laziness is motivated by, well, the ease of the lie.* if it takes fifteen minutes and a chatgpt account to pump out fake foraging books for a quick buck, people will do it.
*also part of this is how easy it is to make things look like high effort professional content - people who are lying out of laziness often do it in ways that are obviously identifiable, and LLMs might make it easier to pass basic professionalism scans.
and honestly i don't think LLMs are the biggest problem that machine learning/ai creates here. while the ai foraging books are, well, really, really bad, most of the problem content generated by chatgpt is more on the level of scam children's books. the entire time that the internet has been shitting itself about ai art and LLMs i've been pulling my hair out about the kinds of priorities people have, because corporations have been using ai to sort the resumes of job applicants for years, and it turns out the ai is racist. there are all sorts of ways machine learning algorithms have been integrated into daily life over the past decade: predictive policing, self-driving cars, and even the youtube algorithm. and all of these are much more dangerous (in most cases) than chatgpt. it makes me insane that just because ai art and LLMs happen to touch on things that most internet users are familiar with the workings of, people are freaking out about it because it's the death of art or whatever, when they should have been freaking out about the robot telling the cops to kick people's faces in.
(not to mention the environmental impact of all this crap.)
648 notes · View notes
the-bar-sinister · 25 days ago
Note
Please excuse me if this is an upsetting topic. Recently I have been putting a lot of thought into complicated feelings I have about current fanfic culture. I will make two assumptions: community engagement with fic authors has been dropping, and reader entitlement is increasing. At least, this seems to feel true to me. When I think about why, I come back to a few things. I think many readers have gotten very comfortable using chatgpt and character.ai to fill in an empty space where human interaction is supposed to go. I also think there has been an increase in monetized fanfic on Tumblr. I don't know how to feel about this one. There are some very popular authors who take requests and commissions for fandoms regardless of whether they know the source material or not. I can see how this would increase their engagement and increase reader entitlement (it's written to their exact tastes because it's paid for). It feels to me like the friendly community engagement is dying out. Requests are requests, not demands. You're supposed to want to talk and share things with one another instead of waiting for someone to scratch your itch.
Yes, as our larger society increasingly works to turn all subcultures and hobbies into a means of profiting and a part of capitalism, it is leaking into fandom and fanfiction.
Fandom and fanfiction are, like all hobbies and subcultures, increasingly losing their community aspect and being turned into just another avenue of commercial monetization.
This is a very, very bad thing and we need to fight against it.
27 notes · View notes
iimoontreesii · 2 months ago
Text
this might sound tin-foil hat of me but basically i think that companies are getting people reliant (aka hooked) on the services of GenAI and ChatGPT, and once they're used by enough people or hit a certain success rate they will paywall them and make people shell out for subscriptions. so be warned. do not become reliant on GenAI. not only is it stopping your ability to develop your own human skills but it will make you reliant on a corporation's services. same thing for things like google maps, they are nice in a pinch sure, but the second they go behind a paywall (which these companies can and will do) you will be left with that hole where a skill you should have developed does not exist. fight the second age of corporate industrialization and economic tyranny: develop critical skills without the use of monetized tech. be your own person, and you will have fun doing it. even if your brain hurts for a second. doing that hard thing and finding success is worth way more than the maybe five minutes you saved. i promise.
19 notes · View notes
rembrandtswife · 30 days ago
Text
Dear YouTubers
Twenty-five minutes of Your Opinions does not constitute a "deep dive" into, well, anything. First watch Contrapoints and Hbomberguy to see how it's done, then actually research your topic and I do *not* mean Ask ChatGPT or just regurgitate what some other YouTubers have said.
This tip is free and will not be monetized
14 notes · View notes
lynzine · 11 months ago
Text
NaNoWriMo Alternatives
If you know what's happening (and what happened last year with NaNo), I want to offer some alternatives that people are working on to keep the event while ditching the organization.
@novella-november
I may reblog this post as I learn more but there are two right off the bat!
If you don't know why I feel the need to advocate for alternatives I will be getting into the second issue (not the triggering scandal last year) below the break.
(Long story short: On top of last year's scandal... they now have an AI sponsor. Which is a big red flag for me and feels like they are looking for content to train their AIs on as well as new consumers. What better place than thousands of unpublished novelists working on completing a novel in a month?)
(PS If you reblog... I don't need details of last year's scandal... please.)
-
Hi everyone, if you've been with me for a while you might know I'm a fan of NaNoWriMo... The event, no longer the organization. I was wary... but willing to see how new policies and restructuring shook out after the major scandal last year (if you aren't aware, I won't be going into the details here, but you may find them triggering).
It's this second issue that I feel that I can actually speak on... Because NaNoWriMo has a new sponsor... and that sponsor is AI.
When I first heard that NaNo was okay with AI, I was wary but... not too upset. After all, NaNo's big thing for a long time was just "Just write" basically however you had to, with whatever tools you need. I know that there are AIs designed to aid with grammar and clarity. And that seemed... if not fine, then understandable. NaNo (for me) is a major source of motivation and I could completely understand why NaNo wouldn't want wins invalidated because someone used some kind of AI to help them along. Then I found out that NaNo picked up an AI sponsor and that completely changed the story for me. It went from serving the people NaNo serves... to supporting one of the organizations that might be ripping off writers.
As I've said in the notes of my fics when I reluctantly took steps to protect my work from AIs like ChatGPT, putting my work in an AI is essentially monetizing my work without reimbursing me and other writers and undercutting jobs in writing that people like me truly want. I don't know what this sponsor's actual AI is like. But it is a massive red flag that they are sponsoring an organization that's focused on an event that generates thousands of potential books to train their AI on.
NaNo's official stance is "We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege." NaNo is experiencing a lot of backlash.
I will be hoping that one of the alternatives is up and running come November.
33 notes · View notes
lesspopped · 22 days ago
Text
Job hunters, now able to generate custom applications instantly, flood employers, so employers turn to AI to manage the glut. Spammers and other bad-faith actors flood social media with near-infinite material, pushing the platforms to double down on automated moderation. Rapidly generated presentations lead to rapidly scheduled meetings recorded and automatically transcribed by AI assistants for machine summarization and analysis. Dating-app users generate chats with AI only to be filtered and then responded to by someone else using AI. The starkest and most consequential such story is what’s happening in education: Teachers dealing with students who generate entire essays and assignments are turning to AI-powered plagiarism detectors, or getting pitched on ed-tech software that solves cheating with surveillance — with, of course, the help of AI. These are stories about AI, but they’re also stories about broken systems. Students flocking to ChatGPT in the classroom suggests that they see school in terms of arbitrary tasks and attainment rather than education. The widespread use of AI in job hunting drives home the extent to which platforms like LinkedIn, which promises to connect job seekers with employers, have instead installed themselves between them, pushing both sides to either pay up or dishonestly game their systems. A dating app where users see opportunity in automated flirting must already be a pretty grim space. If Facebook can be so quickly and thoroughly overwhelmed by AI-generated imagery and bots, it probably wasn’t much of a social network anymore — a low-trust platform better at monetizing users than connecting them. Smaller-scale AI arms races like these don’t take hold unless users (or workers, or students) have already been pitted against one another by systems they don’t respect. In an uncomfortably large portion of modern life — especially online — that’s exactly what’s happened.
(emphasis mine)
a mutual on bsky used the term "opportunistic infection" in one of the many rounds of Discourse about llms a couple months ago, and I think that's extremely accurate. basically he meant something along the same lines as the above — that in general, the tech itself is not creating cracks in these systems, but rather moving into, exposing, and deepening existing cracks. if we all woke up tomorrow and this technology had disappeared off the face of the earth but everything else was exactly the same, we would not be waking up in a utopia. at best, problems that are reaching a boiling point now might go back down to a low simmer.
5 notes · View notes
sk1fanfiction · 8 months ago
Text
stop trying to monetize fic i'm literally as serious as a myocardial infarction
Okay so not only have I had to deal with the Artstation scammers constantly messaging me since October, to the point where my blocklist on FFN is 100 users long and I've had to block people on AO3 for the very first time, but today I got a review from someone asking if they can use my fic as content on their doubtlessly monetized Youtube channel (there are several of these accounts that send near-identical messages!), with a grating AI-generated voice reading the fic and stock or AI-generated footage playing behind it. I guess at least they are not trying to get me to pay them and are polite enough but this is quite literally I.L.L.E.G.A.L. if they are getting any AdSense money!
Illegality and swindling aside, not every hobby has to become a hustle and even though my writing might not be high art, that doesn't mean it deserves to be turned into soulless slop just because ChatGPT and Midjourney bros on Twitter who couldn't even write a fucking for loop gave you the bright idea. And I am personally sick of the recent deluge of scammers, trolls, and bots. What is happening to the moderation of both FFN and AO3?
Lastly, this has genuinely ruined reviews/comments for me. Every time I get a notification these days, I'm thinking 'please be a real person, please be real' because a full 50-60% of the time it's some auto-generated, mass-mailed message that has nothing to do with the fic. Hell I would even take a 'u suck' comment over this crap.
EDIT: I just remembered that the FFN app already has a reader built-in with a pretty natural-sounding voice, which is much higher quality than any of these channels, which makes them even more useless!
Okay, rant over. As you were.
11 notes · View notes
ruttotohtori · 10 months ago
Text
---
The content of the site we analyzed repeats far-right messaging and conspiracy theories. It also encourages direct action. At the moment, most of the conspiracy theories spreading here grow out of far-right or conservative soil, says Janne Riiheläinen, a communications expert who studies the Finnish disinformation field. – In Finland right now, an awful lot of the ideas, narratives, and conspiracy theories originate in the culture wars of the United States. According to Riiheläinen, AI tools now let anyone produce disinformation in ever greater quality and quantity.
---
– With video, anyone can be made to say anything in anyone's voice. Anyone can produce a legal-sounding press release about any crackpot subject. According to Riiheläinen, we will soon no longer have ways to verify information by its form. We have to dig into information before we can see whether it is reliable or not.
---
Journalist and nonfiction author Johanna Vehkoo writes in her 2019 book Valheenpaljastajan käsikirja that it was long assumed that conspiracy theories and disinformation were spread by people who lack societal power. In recent years the setup has turned almost upside down – conspiracy theories and disinformation now also spread from the top of society: from politicians, celebrities, and social media influencers.
---
The use of generative AI to produce radical-right disinformation is not a new phenomenon. This year it has been used, for example, to stoke the Southport riots, to influence the EU elections, and to manipulate US domestic politics. Disinformation in meme form has been produced with, among other tools, the Grok image generator on Elon Musk's platform X.
---
Commercial language models offer terrorist groups bomb-making instructions and information about vulnerable points in societal infrastructure, and will, on command, write page after page of antisemitic hate speech. Image and audio generators, in turn, are used to create deepfake videos and far-right memes.
---
According to Johanna Vehkoo, some researchers call large language models, especially ChatGPT, "bullshit generators," because they do not know what a fact is. They can invent convincing-looking source references that do not exist. ChatGPT can also invent nonexistent journalistic articles and events. – Many people use these tools for information retrieval. That is a terribly frightening development from the perspective of fact-checking and journalism, Vehkoo says.
---
Large language models are programmed to give answers in all kinds of situations and to sound convincing, even though they do not actually know what knowledge is. The problem is almost impossible to eliminate, because it is built into the AI models themselves. In addition, their guardrails can be circumvented.
---
A good target for personalized disinformation messaging is, for example, social media influencers who lean toward conspiracy theories and through whom the message spreads to a large number of followers.
---
11 notes · View notes
chappydev · 7 months ago
Text
Future of LLMs (or, "AI", as it is improperly called)
Posted a thread on bluesky and wanted to share it and expand on it here. I'm tangentially connected to the industry as someone who has worked in game dev, but I know people who work at more enterprise focused companies like Microsoft, Oracle, etc. I'm a developer who is highly AI-critical, but I'm also aware of where it stands in the tech world and thus I think I can share my perspective. I am by no means an expert, mind you, so take it all with a grain of salt, but I think that since so many creatives and artists are on this platform, it would be of interest here. Or maybe I'm just rambling, idk.
LLM art models ("AI art") will eventually crash and burn. Even if they win their legal battles (which if they do win, it will only be at great cost), AI art is a bad word almost universally. Even more than that, the business model hemorrhages money. Every time someone generates art, the company loses money -- it's a very high energy process, and there's simply no way to monetize it without charging like a thousand dollars per generation. It's environmentally awful, but it's also expensive, and the sheer cost will mean they won't last without somehow bringing energy costs down. Maybe this could be doable if they weren't also being sued from every angle, but they just don't have infinite money.
Companies that are investing in "ai research" to find a use for LLMs in their company will, after years of research, come up with nothing. They will blame their devs and lay them off. The devs, worth noting, aren't necessarily to blame. I know an AI developer at meta (LLM, really, because again AI is not real), and the morale of that team is at an all time low. Their entire job is explaining patiently to product managers that no, what you're asking for isn't possible, nothing you want me to make can exist, we do not need to pivot to LLMs. The product managers tell them to try anyway. They write an LLM. It is unable to do what was asked for. "Hm let's try again" the product manager says. This cannot go on forever, not even for Meta. Worst part is, the dev who was more or less trying to fight against this will get the blame, while the product manager moves on to the next thing. Think like how NFTs suddenly disappeared, but then every company moved to AI. It will be annoying and people will lose jobs, but not the people responsible.
ChatGPT will probably go away as something public facing as the OpenAI foundation continues to be mismanaged. However, while ChatGPT as something people use to like, write scripts and stuff, will become less frequent as the public facing chatGPT becomes unmaintainable, internal chatGPT based LLMs will continue to exist.
This is the only sort of LLM that actually has any real practical use case. Basically, companies like Oracle, Microsoft, Meta etc license an AI company's model, usually ChatGPT. They are given more or less a version of ChatGPT they can then customize and train on their own internal data. These internal LLMs are then used by developers and others to assist with work. Not in the "write this for me" kind of way but in the "find me this data" kind of way, or asking it how a piece of code works. "How does X software that Oracle makes do Y function, take me to that function" and things like that. Also asking it to write SQL queries and RegExes. Everyone I talk to who uses these internal LLMs talks about how that's like, the biggest thing they ask it to do, lol.
This still has some ethical problems. It's bad for the environment, but it's not being done in some datacenter in god knows where and vampiring off of a power grid -- it's running on the existing servers of these companies. Their power costs will go up, contributing to global warming, but it's profitable and actually useful, so companies won't care and only do token things like carbon credits or whatever. Still, it will be less of an impact than now, so there's something. As for training on internal data, I personally don't find this unethical, not in the same way as training off of external data. Training a language model to understand a C++ project and then asking it for help with that project is not quite the same thing as asking a bot that has scanned all of GitHub against the consent of developers to write an entire project for me, you know? It will still sometimes hallucinate and give bad results, but nowhere near as badly as the massive, public bots do, since it's so specialized.
The only one I'm actually unsure and worried about is voice acting models, aka AI voices. It gets far less pushback than AI art (it should get more, but it's not as caustic to a brand as AI art is. I have seen people willing to overlook an AI voice in a youtube video, but will have negative feelings on AI art), as the public is less educated on voice acting as a profession. This has all the same ethical problems that AI art has, but I do not know if it has the same legal problems. It seems legally unclear who owns a voice when they voice act for a company; obviously, if a third party trains on your voice from a product you worked on, that company can sue them, but can you directly? If you own the work, then yes, you definitely can, but if you did a role for Disney and Disney then trains off of that... this is morally horrible, but legally, without stricter laws and contracts, they can get away with it.
In short, AI art does not make money outside of venture capital, so it will not last forever. ChatGPT's main income source is selling specialized LLMs to companies, so the public-facing ChatGPT is mostly like, a showcase product. As OpenAI the company continues to death-spiral, I see the company shutting down, and new companies (with some of the same people) popping up and pivoting to exclusively catering to enterprises as an enterprise solution. LLM models will become like, idk, SQL servers or whatever. Something the general public doesn't interact with directly but that is everywhere in the industry. This will still have environmental implications, but this is what LLMs are actually good at, and the data theft problem disappears in most cases.
Again, this is just my general feeling, based on things I've heard from people in enterprise software or working on LLMs (often not because they signed up for it, but because the company is pivoting to it, so I guess I write shitty LLMs now). I think artists will eventually be safe from AI, but only after immense damage. I think writers will be similarly safe, but I'm worried for voice acting.
8 notes · View notes
itmeblog · 7 months ago
Text
I've been thinking a lot about ChatGPT lately and how humans are really good at finding patterns, to the point that we find them in places they don't exist. Like the phenomenon where anything abstract has the possibility of being interpreted as a face.
Anyway, we see machines as: matter in -> black box -> matter out. And like for programs and printers and automated systems and shit, that can be fine for working knowledge. Like someone out there knows precisely how the self checkout works, but all I need to know is that the scanner works and that it accepts debit.
But we conflate that with our existence. People are not machines. The point of writing a Great Gatsby essay in high school wasn't because the essay mattered. The end result wasn't the point. The point was the black box. The synthesis, the analyzing of text and interpretation of quotes that taught you stuff like media analysis and checked reading comprehension. Everything that happened from the time you got the rubric to the final submission of that paper mattered. The essay itself is like... vaguely consequential. There are hundreds of thousands if not millions of Great Gatsby essays; nobody was clamoring for more, particularly not 9th grade English teachers.
The point is the black box, the point is the *doing*. And you see we do that weird optimization everywhere; it's not even new! AI just added a new dimension. Make dinner faster, monetize your hobbies, write this quicker, lose weight at mach 1, don't you want Alexa to read bedtime stories to your kid!?
I know it's all capitalism and whatever in a trenchcoat but sometimes it really has me wondering, what am I alive for? And the answer so often is "the black box".
Like doing silly voices while reading to make my family laugh, or poring over a dictionary while writing, or smelling onions caramelize. Stories From Ylelmore in and of itself is a middle finger to story optimization omg, how many convos in there are strictly unnecessary to the plot?? Because in Ylelmore the plot doesn't matter! The point is making the journey as fun as possible!! Having a stupid 5 minute argument about whether fish drown? Sure, why not?! We have the time!!
Just a few years ago I used to lament having to *sleep* and all the productivity I was missing out on! And why? For what? What was there to be productive about?
7 notes · View notes
puraiuddo · 2 years ago
Text
So by popular demand, here is my own post about the authors' lawsuit against OpenAI/ChatGPT, and why
This case will not affect fanwork.
The actual legal complaint that was filed in court can be found here and I implore people to actually read it, as opposed to taking some rando's word on it (yes, me, I'm some rando).
The Introductory Statement (just pages 2-3) shouldn't require being fluent in legalese and it provides a fairly straightforward summary of what the case is aiming to accomplish, why, and how.
That said, I understand that for the majority of people 90% of the complaint is basically incomprehensible, so please give me some leeway as I try to condense 4 years of school and a 47 page legal document into a tumblr post.
To abbreviate to the extreme, page 46 (paragraph 341, part d) lays out exactly what the plaintiffs are attempting to turn into law:
"An injunction [court order] prohibiting Defendants [AI] from infringing Plaintiffs' [named authors] and class members' [any published authors] copyrights, including without limitation enjoining [prohibiting] Defendants from using Plaintiff's and class members' copyrighted works in "training" Defendant's large language models without express authorization."
That's it. That's all.
This case is not even attempting to alter the definition of "derivative work" and nothing in the language of the argument suggests that it would inadvertently change the legal treatment of "derivative work" going forward.
I see a lot of people throwing around the term "precedent" in a frenzy, assuming that because a case touches on a particular topic (e.g. “derivative work”, aka fanart, fanfiction, etc.) it somehow automatically and irrevocably alters the legal standing of that thing going forward.
That’s not how it works.
What's important to understand about the legal definition of "precedent" vs the common understanding of the term is that in law any case can simultaneously follow and establish precedent. Because no two cases are wholly the same due to the diversity of human experience, some elements of a case can reference established law (follow precedent), while other elements of a case can tread entirely new ground (establish precedent).
The plaintiffs in this case are attempting to establish precedent that anything AI creates going forward must be classified as "derivative work", specifically because they are already content with the existing precedent that defines and limits "derivative work".
The legal limitations of "derivative work", such as those dictating that only once it is monetized are its creators fair game to be sued, are the only reason the authors can* bring this to court and seek damages.
*this is called the "grounds" for a lawsuit. You can't sue someone just because you don't like what they're doing. You have to prove you are suffering "damages". This is why fanworks are tentatively "safe"—it's basically impossible to prove that Ebony Dark'ness Dementia is depriving the original creator of any income when she's providing her fanfic for free. On top of that, it's not worth the author’s time or money to attempt to sue Ebony when there's nothing for the author to monetarily gain from a broke nerd.
Pertaining to how AI/ChatGPT is "damaging" authors when Ebony isn't, and the unconscionable difference in potential profits between the two:
Page 9 (paragraphs 65-68) details how OpenAI/ChatGPT started off as a non-profit in 2015, but then switched to for-profit in 2019 and is now valued at $29 billion.
Pages 19-41 ("Plaintiff-Specific Allegations") detail how each named author in the lawsuit has been harmed and pages 15-19 ("GPT-N's and ChatGPT’s Harm to Authors") outline all the other ways that AI is putting thousands and thousands of other authors out of business by flooding the markets with cheap commissions and books.
The only ethically debatable portion of this case is the implications of expanding what qualifies as "derivative work".
However, this case seems pretty solidly aimed at Artificial Intelligence, with very little opportunity for the case to establish precedent that could be used against humans down the line. The language of the case is very thorough in detailing how the specific mechanics of AI mean that it copies* copyrighted material, and how those mechanics specifically mean that anything it produces should be classified as "derivative work" (by virtue of there being no way to prove that everything it produces is not a direct product of it having illegally obtained and used** copyrighted material).
*per section "General Factual Allegations" (pgs 7-8), the lawsuit argues that AI uses buzzwords ("train" "learn" "intelligence") to try to muddy how AI works, but in reality it all boils down to AI just "copying" (y'all can disagree with this if you want, I'm just telling you what the lawsuit says)
**I see a lot of people saying that it's not copyright infringement if you're not the one who literally scanned the book and uploaded it to the web—this isn't true. Once you "possess" (and downloading counts) copyrighted material through illegal means, you are breaking the law. And AI must first download content in order to train its algorithm, even if it dumps the original content nanoseconds later. So, effectively, AI cannot interact with copyrighted material in any capacity, by virtue of how it interacts with content, without infringing.
Now that you know your fanworks are safe, I'll provide my own hot take 🔥:
Even if—even if—this lawsuit put fanworks in jeopardy... I'd still be all for it!
Why? Because if no one can make a living organically creating anything and it leads to all book, TV, and movie markets being entirely flooded with a bunch of progressively more soulless and reductive AI garbage, what the hell are you even going to be making fanworks of?
But, no, actually because the danger of AI weaseling its way into every crevice of society with impunity is orders of magnitude more detrimental to literal human life than fanwork being harder to access.
Note to anyone who chooses to interact with this post in any capacity: Just be civil!
81 notes · View notes