#AI text humanization tool
tryslat · 10 months ago
Text
Features of Our AI To Human Text Converter
Not all AI-generated text is fit for human consumption without some level of refinement. That's where the AI To Human Text Converter comes in: a free tool that turns robotic-sounding AI content into natural, human-readable text. Let's dive into the key features that make this tool an indispensable resource for anyone looking to humanize AI-generated text effortlessly.
1. Simple and User-Friendly Interface
One of the standout features of our AI To Human Text Converter is its simple, user-friendly interface. Many people shy away from using complex tools that require a steep learning curve. Fortunately, this converter is designed with ease of use in mind. The interface is intuitive, allowing users to quickly navigate the platform and convert their AI-generated text into human-like content within seconds. There's no need to struggle with confusing menus or spend time learning how to use the tool.
2. Safe and Secure to Use
Safety is paramount when it comes to using online tools, especially for text conversion. AI To Human Text Converter ensures that users' data is protected through secure browsing measures. The website is well-secured, minimizing any risks of data breaches or security threats. Whether you're a content creator, student, or professional, you can confidently use the tool without worrying about jeopardizing your safety.
3. Accurate Conversion of AI-Generated Content to Human-Like Text
The primary feature of the AI To Human Text Converter is its ability to transform AI-generated content into human-readable text. Utilizing advanced algorithms, the tool analyzes the input and produces an output that closely mimics the natural flow of human writing. Whether you're converting AI content for essays, blog posts, or marketing materials, this tool ensures the end result is clear, engaging, and free of robotic phrasing.
4. No Limitations – Unlimited Usage
One of the most attractive features of the AI To Human Text Converter is its unlimited usage policy. Unlike other tools that impose restrictions or require subscriptions after a certain number of conversions, our tool is completely free with no limitations. You can convert as much content as you need, whenever you need it. This makes it an ideal solution for content creators, bloggers, and students with large volumes of AI-generated text to convert.
5. Fast and Efficient Processing
Time is a valuable commodity, and with our AI to human text converter, speed is a top priority. The tool processes your content in seconds, delivering humanized text quickly and efficiently. Whether you have a single paragraph or an entire document to convert, you can trust that the tool will provide results without delays.
6. No Authentication Needed
Another significant advantage of AI To Human Text Converter is that you don’t need to create an account, sign up, or log in. The tool is ready for immediate use, allowing you to convert text as soon as you arrive at the website. This no-authentication feature ensures a hassle-free experience, making it easy for users to get started right away.
Why Choose AI To Human Text Converter?
If you're looking for a reliable and efficient way to humanize your AI-generated content, AI To Human Text Converter is the perfect choice. Here are some key reasons why you should consider using this tool:
Free of Cost: Our tool is completely free to use, with no hidden fees or subscription costs.
Unlimited Use: Convert as much AI content as you need without worrying about restrictions.
No Login Required: Enjoy immediate access to the tool without needing to create an account.
Fast Conversion: Save time with near-instant results that transform AI text into human-like content.
User-Friendly: The intuitive interface makes it easy for anyone to use, even without prior experience.
The AI To Human Text Converter is packed with features that make it an excellent choice for anyone looking to convert AI-generated content into natural, human-readable text. Its simple interface, fast processing, and unlimited usage ensure that you get the best results without any hassle. Plus, with top-notch security measures in place, you can use the tool confidently and safely. Whether you’re a student, content creator, or professional, this tool is designed to meet all your text conversion needs.
Try the AI To Human Text Converter today and experience the difference for yourself!
5 notes
bluntsam · 6 months ago
Text
My evidence that AI could never replace me.
[six embedded images]
Read ‘em and weep boys. My bullshit is one-hundo-percent human.
5 notes
aitohumantextconverter · 9 months ago
Text
Top 5 Benefits of Using AI to Human Text Converter Tools
The evolution of AI-generated content has reshaped the way we approach writing. From drafting essays to generating blog posts, AI has opened up countless opportunities. However, AI content may lack the natural flow that resonates with human readers. This is where tools like the AI to Human Text Converter come in, transforming AI-generated text into human-like language. Let’s explore the top five benefits of using AI to Human Text Converter tools.
1. Enhanced Readability and Engagement
AI-generated content can sometimes sound robotic and lack the nuances of human expression. AI to Human Text Converter tools refine this content, ensuring it reads smoothly and is relatable. This makes it perfect for websites, academic papers, and blogs, where readability and engagement are crucial. When readers find content relatable, they’re more likely to stay on the page, enhancing user experience and engagement.
2. SEO Optimization
Humanized content performs better in search engines. Converter tools like AI to Human Text Converter optimize AI-generated text to meet SEO standards, helping websites rank higher on Google. Search engines reward content that is natural, relevant, and engaging. By converting AI text into human-like language, you're also ensuring that your content aligns with Google's E-A-T (Expertise, Authoritativeness, Trustworthiness) standards, leading to a boost in SEO rankings.
3. Bypass AI Detection
Many websites and platforms are now equipped with AI-detection tools. Using an AI to Human Text Converter helps your content bypass these detectors, making it appear authentically human-written. For students, bloggers, or professionals using AI-generated content, this ensures the text sounds original and relatable, without setting off alarms on content-checking platforms.
4. Multilingual Support and Accessibility
Tools like AI to Human Text Converter offer multilingual support, making it easy to convert AI-generated text into human-like language in various languages. This is a powerful feature for businesses or individuals aiming to reach a diverse audience. When content resonates across languages, it makes your site more inclusive and enhances your global reach.
5. Free and Unlimited Usage
Many AI-to-human text converter tools, including AI to Human Text Converter, offer their services for free with unlimited use. This means you can humanize as much AI content as needed without subscription fees. Whether you're a student, writer, or business professional, this tool is both cost-effective and efficient, giving you access to top-quality, human-like content at no expense.
3 notes
ai-undetectable · 1 year ago
Text
AI Undetectable Humanize AI Text Tool for Authentic Communication
Transform your text with AI Undetectable, the premier humanize AI text tool. Seamlessly infuse your content with genuine human-like expressions, making your communication more relatable and engaging. Say goodbye to robotic-sounding text and hello to authentic connections. Whether you're crafting marketing copy, customer service responses, or social media posts, AI Undetectable empowers you to humanize AI text effortlessly. Elevate your brand's voice and resonate with your audience on a deeper level with our innovative tool.
0 notes
patricia-taxxon · 7 months ago
Note
So wait, let me just ask for clarity because I want to understand. Do you support AI art?
i support art made with spontaneous and hands-off processes, i support the creation of art tools that are more art than tool & allow people to "participate" in someone else's creation vicariously a-la picrew, i don't support the institution of "AI" as a consumer grade technology industry that promises impossible things and prioritizes appearances and marketability over usability, i believe that if "AI" allowed people to siphon images directly from their brain with no effort required then it would be a good thing but I believe this is fundamentally impossible until we figure out how to read minds and the focus on arguing for or against accessibility is missing the point, i believe AI art can only ever be a pale imitation of the process of commissioning an artist who can't ever ask questions and cannot be trusted with object permanence, I believe copyright law is a head on the hydra of capitalism and doesn't serve artists, i believe that AI art isn't necessarily art theft but it CAN overfit to its data and create illegal works without telling you, which constitutes criminal levels of negligence, I believe all art is derivative in some way and some of the most seminal art made in this era of history has been far more dubiously infringing than AI art ever can be because AI art does not steal in the way a human does, I think the focus on energy consumption is transparently just a post-hoc justification for hating the thing you all already hated under the guise of environmentalism because it is a problem far from unique to AI, I think the focus on environmentalism was a distraction at best during the NFT craze too, i don't think AI art takes artists out of a job any more than stock photos or clipart does, but the proliferation of consumer-grade tools DOES run the risk of engendering bad client practices similar to the rise of machine translation and asking translators to simply "fix" a machine translated run of text at a marked down price, but this is not the fault of the technology itself and is instead a result of the ideological push being made by the biggest actors in the industry, i think AI art is ugly as sin and carries the pervasive quality of looking normal at a glance but getting worse and worse the longer you look at it, which can be interesting but often isn't, i think ai art is shit google images and the controversy is overblown but I think machine learning is here to stay and it will inevitably decentralize again after the immense costs catch up to all the corpos relying on it to win the future.
so like, yes and no.
3K notes
superconductivebean · 3 months ago
Text
I know what AI is, and this is my main reason not to use it.
When AI receives a prompt, it selects a *token*—the key word for its search query. It does not open up Google for you—it selects a group of tokens, interlinked with the OG token, to generate the probability matrix for its eventual first word.
AI will generate many more of these matrices, but the key is: 1) its choice of words is dictated by the Perplexity (how wordy; rarer words will be more likely to be chosen from the matrix); 2) its sentence structure and the overall use of tokens is determined by the Burstiness (the length and the complexity of the sentence, and how many more tokens can be allowed per answer); 3) the matrix itself—the process of generating them rather than a standalone among the many—is set by the OG token, so AI is already limited in its output from the very beginning. AI cannot have unlimited access to tokens; tokens are not only the unit of memory—they require computation power, so any given AI will be limited in how many of them it can have, otherwise it is limited by the hardware.
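For anyone who wants to see the shape of that loop rather than take it on faith, here is a minimal sketch of autoregressive sampling. Everything in it is a stand-in: the tiny hand-written bigram table plays the role of the neural network, and the temperature knob is only a rough analogue of what the post calls perplexity (in real systems, perplexity and burstiness are usually measured on the output rather than being dials the model exposes).

```python
import random

# Minimal sketch of autoregressive sampling. The hand-written bigram table
# below is a stand-in for the neural network; a real LLM computes a fresh
# probability distribution over ~100k tokens at every step, conditioned on
# the whole context, but the loop has the same shape.
BIGRAMS = {
    "the":      {"cat": 0.5, "mat": 0.4, "supine": 0.1},
    "cat":      {"sat": 0.7, "reclined": 0.3},
    "sat":      {"on": 1.0},
    "reclined": {"on": 1.0},
    "on":       {"the": 1.0},
    "mat":      {"quietly": 1.0},
    "supine":   {"cat": 1.0},
    "quietly":  {"the": 1.0},
}

def sample(dist, temperature=1.0):
    """Re-weight a {token: probability} dict by temperature and draw one token."""
    words = list(dist)
    weights = [dist[w] ** (1.0 / temperature) for w in words]  # >1 flattens, <1 sharpens
    return random.choices(words, weights=weights, k=1)[0]

def generate(start, max_new_tokens=8, temperature=1.0):
    out = [start]
    for _ in range(max_new_tokens):  # output is capped at a fixed token budget
        out.append(sample(BIGRAMS[out[-1]], temperature))
    return " ".join(out)

print(generate("the", temperature=0.5))  # sticks to the most likely words
print(generate("the", temperature=2.0))  # rarer words ("supine") show up more often
```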
How is AI able to compute all of that? Datasets, unethically gathered—stolen, in short; the fact that AIs consume their own outputs after the web has been scraped goes without mention.
It's a simplified explanation, but it's enough to say such limited, crude, purely mechanical software can only be used for style checking or assigning a reading grade—something you would need math for. Whoever decided to market it for *creativity* has spawned an NFT 2.0 Idiot Bubble.
I hope we are not seeing regurgitated Project Gutenberg for 19th Century Authentic Dark Academia Dark Fantasy Writing bullshite after it eventually bursts.
anyway I saw the AI poll and thought it needed like, more options? anyhow. I would like to hear nuances in the reblogs!
I guess if you do use AI just straight up unfollow my blog?
645 notes
imsobadatnicknames2 · 1 year ago
Note
How can you consider yourself any sort of leftist when you defend AI art bullshit? You literally simp for AI techbros and have the gall to pretend you're against big corporations?? Get fucked
I don't "defend" AI art. I think a particular old post of mine that a lot of people tend to read in bad faith must be making the rounds again lmao.
Took me a good while to reply to this because you know what? I decided to make something positive out of this and use this as an opportunity to outline what I ACTUALLY believe about AI art. If anyone seeing this decides to read it in good or bad faith... Welp, your choice I guess.
I have several criticisms of the way the proliferation of AI art generators and LLMs is making a lot of things worse. Some of these are things I have voiced in the past, some of these are things I haven't until now:
Most image and text AI generators are fine-tuned to produce nothing but the most agreeable, generically pretty content slop, pretty much immediately squandering their potential to be used as genuinely interesting artistic tools with anything to offer in terms of a unique aesthetic experience (AI video still manages to look bizarre and interesting but it's getting there too)
In the entertainment industry and a lot of other fields, AI image generation is getting incorporated into production pipelines in ways that lead to the immiseration of working artists, being used to justify either lower wages or straight-up layoffs, and this is something that needs to be fought against. That's why I unconditionally supported the SAG-AFTRA strikes last year and will unconditionally support any collective action to address AI art as a concrete labor issue
In most fields where it's being integrated, AI art is vastly inferior to human artists in any use case where you need anything other than to make a superficially pretty picture really fast. If you need to do anything like ask for revisions or minor corrections, give very specific descriptions of how objects and people are interacting with each other, or just like. generate several pictures of the same thing and have them stay consistent with each other, you NEED human artists and it's preposterous to think they can be replaced by AI.
There is a lot of art on the internet that consists of the most generically pretty, cookie-cutter anime waifu-adjacent slop that has zero artistic or emotional value to either the people seeing it or the person churning it out, and while this certainly was A Thing before the advent of AI art generators, generative AI has made it extremely easy to become the kind of person who churns it out and floods online art spaces with it.
Similarly, LLMs make it extremely easy to generate massive volumes of texts, pages, articles, listicles and what have you that are generic vapid SEO-friendly pap at best and bizarre nonsense misinformation at worst, drowning useful information in a sea of vapid noise and rendering internet searches increasingly useless.
The way LLMs are being incorporated into customer service and similar services not only, again, encourages further immiseration of customer service workers, but it's also completely useless for most customers.
A very annoyingly vocal part of the population of AI art enthusiasts, fanatics and promoters do tend to talk about it in a way that directly or indirectly demeans the merit and skill of human artists and implies that they think of anyone who sees anything worthwhile in the process of creation itself rather than the end product as stupid or deluded.
So you can probably tell by now that I don't hold AI art or writing in very high regard. However (and here's the part that'll get me called an AI techbro, or get people telling me that I'm just jealous of REAL artists because I lack the drive to create art of my own, or whatever else) I do have some criticisms of the way people have been responding to it, and have voiced such criticisms in the past.
I think a lot of the opposition to AI art has crystallized around unexamined gut reactions, whipping up a moral panic, and pressure to outwardly display an acceptable level of disdain for it. And in particular I think this climate has made a lot of people very prone to either uncritically entertain and adopt regressive ideas about Intellectual Property, OR reveal previously held regressive ideas about Intellectual Property that are now suddenly more socially acceptable to express:
(I wanna preface this section by stating that I'm a staunch intellectual property abolitionist for the same reason I'm a private property abolitionist. If you think the existence of intellectual property is a good thing, a lot of my ideas about a lot of stuff are gonna be unpalatable to you. Not much I can do about it.)
A lot of people are suddenly throwing their support behind any proposal that promises stricter copyright regulations to combat AI art, when a lot of these also have the potential to severely undermine fair use laws and fuck over a lot of independent artists for the benefit of big companies.
It was very worrying to see a lot of fanfic authors in particular clap for the George R R Martin OpenAI lawsuit because well... a lot of them don't realize that fanfic is a hobby that's in a position that's VERY legally precarious at best, that legally speaking using someone else's characters in your fanfic is as much of a violation of copyright law as straight up stealing entire passages, and that any regulation that can be used against the latter can be extended against the former.
Similarly, a lot of artists were cheering for the lawsuit against AI art models trained to mimic the style of specific artists. Which I agree is an extremely scummy thing to do (just like a human artist making a living from ripping off someone else's work is also extremely scummy), but I don't think every scummy act necessarily needs to be punishable by law, and some of them would in fact leave people worse off if they were. All this to say: If you are an artist, and ESPECIALLY a fan artist, trust me. You DON'T wanna live in a world where there's precedent for people's artstyles to be considered intellectual property in any legally enforceable way. I know you wanna hurt AI art people but this is one avenue that's not worth it.
Especially worrying to me as an indie musician has been to see people mention the strict copyright laws of the music industry as a positive thing that they wanna emulate. "this would never happen in the music industry because they value their artists copyright" idk maybe this is a the grass is greener type of situation but I'm telling you, you DON'T wanna live in a world where copyright law in the visual arts world works the way it does in the music industry. It's not worth it.
I've seen at least one person compare AI art model training to music sampling and say "there's a reason why they cracked down on sampling" as if the death of sampling due to stricter copyright laws was a good thing and not literally one of the worst things to happen in the history of music which nearly destroyed several primarily black music genres. Of course this is anecdotal because it's just One Guy I Saw Once, but you can see what I mean about how uncritical support for copyright law as a tool against AI can lead people to adopt increasingly regressive ideas about copyright.
Similarly, I've seen at least one person go "you know what? Collages should be considered art theft too, fuck you" over an argument where someone else compared AI art to collages. Again, same point as above.
Similarly, I take issue with the way a lot of people seem EXTREMELY personally invested in proving AI art is Not Real Art. I not only find this discussion unproductive, but also similarly dangerously prone to validating very reactionary ideas about The Nature Of Art that shouldn't really be entertained. Also it's a discussion rife with intellectual dishonesty and unevenly applied definitions and standards.
When a lot of people present the argument of AI art not being art because the definition of art is this and that, they try to pretend that this is the definition of art they've always operated under and believed in, even when a lot of the time it's blatantly obvious that they're constructing their definition on the spot and deliberately trying to do so in such a way that it doesn't include AI art.
They never succeed at it, btw. I've seen several dozen different "AI art isn't art because art is [definition]". I've seen exactly zero of those where trying to seriously apply that definition in any context outside of trying to prove AI art isn't art doesn't end up in it accidentally excluding one or more non-AI artforms, usually reflecting the author's blindspots with regard to the different forms of artistic expression.
(However, this is moot because, again, these are rarely definitions that these people actually believe in or adhere to outside of trying to win "Is AI art real art?" discussions.)
Especially worrying when the definition they construct is built around stuff like Effort or Skill or Dedication or The Divine Human Spirit. You would not be happy about the kinds of art that have traditionally been excluded from Real Art using similar definitions.
Seriously when everyone was celebrating that the Catholic Church came out to say AI art isn't real art and sharing it as if it was validating and not Extremely Worrying that the arguments they'd been using against AI art sounded nearly identical to things TradCaths believe I was like. Well alright :T You can make all the "I never thought I'd die fighting side by side with a catholic" legolas and gimli memes you want, but it won't change the fact that the argument being made by the catholic church was a profoundly conservative one and nearly identical to arguments used to dismiss the artistic merit of certain forms of "degenerate" art and everyone was just uncritically sharing it, completely unconcerned with what kind of worldview they were lending validity to by sharing it.
Remember when the discourse about the Gay Sex cats pic was going on? One of the things I remember the most from that time was when someone went "Tell me a definition of art that excludes this picture without also excluding Fountain by Duchamp" and how just. Literally no one was able to do it. A LOT of people tried to argue some variation of "Well, Fountain is art and this image isn't because what turns fountain into art is Intent. Duchamp's choice to show a urinal at an art gallery as if it was art confers it an element of artistic intent that this image lacks" when like. Didn't by that same logic OP's choice to post the image on tumblr as if it was art also confer it artistic intent in the same way? Didn't that argument actually kinda end up accidentally validating the artistic status of every piece of AI art ever posted on social media? That moment it clicked for me that a lot of these definitions require applying certain concepts extremely selectively in order to make sense for the people using them.
A lot of people also try to argue it isn't Real Art based on the fact that most AI art is vapid but like. If being vapid definitionally excludes something from being art you're going to have to exclude a whooole lot of stuff along with it. AI art is vapid. A lot of art is too, I don't think this argument works either.
Like, look, I'm not really invested in trying to argue in favor of The Artistic Merits of AI art but I also find it extremely hard to ignore how trying to categorically define AI art as Not Real Art not only is unproductive but also requires either a) applying certain parts of your definition of art extremely selectively, b) constructing a definition of art so convoluted and full of weird caveats as to be functionally useless, or c) validating extremely reactionary conservative ideas about what Real Art is.
Some stray thoughts that don't fit any of the above sections.
I've occasionally seen people respond to AI art being used for shitposts like "A lot of people have affordable commissions, you could have paid someone like $30 to draw this for you instead of using the plagiarism algorithm and exploiting the work of real artists" and sorry but if you consider paying an artist a rate that amounts to like $5 for several hours of work a LESS exploitative alternative I think you've got something fucked up going on with your priorities.
Also it's kinda funny when people comment on the aforementioned shitposts with some variation of "see, the usage of AI art robs it of all humor because the thing that makes shitposts funny is when you consider the fact that someone would spend so much time and effort in something so stupid" because like. Yeah that is part of the humor SOMETIMES but also people share and laugh at low effort shitposts all the time. Again you're constructing a definition that you don't actually believe in anywhere outside of this type of conversations. Just say you don't like that it's AI art because you think it's morally wrong and stop being disingenuous.
So yeah, this is pretty much everything I believe about the topic.
I don't "defend" AI art, but my opposition to it is firmly rooted in my principles, and that means I refuse to uncritically accept any anti-AI art argument that goes against those same principles.
If you think not accepting and parroting every Anti-AI art argument I encounter because some of them are ideologically rooted in things I disagree with makes me indistinguishable from "AI techbros" you're working under a fucked up dichotomy.
2K notes
mostlysignssomeportents · 2 years ago
Text
What kind of bubble is AI?
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
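Purely as a reading aid, the column's 2×2 can be written out in a few lines of Python. The numeric scores below are invented for illustration (the essay doesn't assign numbers); the self-driving and radiology entries are the high-value examples the essay turns to next.

```python
# Toy version of the 2x2: bin each application by (value, risk tolerance).
# The scores are made up for illustration; only the quadrant labels matter.
APPLICATIONS = {
    "D&D character art, $10/month":       {"value": 1, "risk_tolerance": 9},
    "SEO spam text, $500/month":          {"value": 2, "risk_tolerance": 9},
    "scene descriptions for blind users": {"value": 2, "risk_tolerance": 6},
    "self-driving cars":                  {"value": 9, "risk_tolerance": 1},
    "radiology analysis":                 {"value": 9, "risk_tolerance": 2},
}

def quadrant(scores):
    value = "high-value" if scores["value"] >= 5 else "low-value"
    risk = "risk-tolerant" if scores["risk_tolerance"] >= 5 else "risk-intolerant"
    return f"{value} / {risk}"

for name, scores in APPLICATIONS.items():
    print(f"{name:38s} -> {quadrant(scores)}")
```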
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in order to be profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes
nostalgebraist · 4 months ago
Text
Anthropic's stated "AI timelines" seem wildly aggressive to me.
As far as I can tell, they are now saying that by 2028 – and possibly even by 2027, or late 2026 – something they call "powerful AI" will exist.
And by "powerful AI," they mean... this (source, emphasis mine):
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc. In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world. It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary. It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use. The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with. Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
In the post I'm quoting, Amodei is coy about the timeline for this stuff, saying only that
I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside [...]
However, other official communications from Anthropic have been more specific. Most notable is their recent OSTP submission, which states (emphasis in original):
Based on current research trajectories, we anticipate that powerful AI systems could emerge as soon as late 2026 or 2027 [...] Powerful AI technology will be built during this Administration. [i.e. the current Trump administration -nost]
See also here, where Jack Clark says (my emphasis):
People underrate how significant and fast-moving AI progress is. We have this notion that in late 2026, or early 2027, powerful AI systems will be built that will have intellectual capabilities that match or exceed Nobel Prize winners. They’ll have the ability to navigate all of the interfaces… [Clark goes on, mentioning some of the other tenets of "powerful AI" as in other Anthropic communications -nost]
----
To be clear, extremely short timelines like these are not unique to Anthropic.
Miles Brundage (ex-OpenAI) says something similar, albeit less specific, in this post. And Daniel Kokotajlo (also ex-OpenAI) has held views like this for a long time now.
Even Sam Altman himself has said similar things (though in much, much vaguer terms, both on the content of the deliverable and the timeline).
Still, Anthropic's statements are unique in being
official positions of the company
extremely specific and ambitious about the details
extremely aggressive about the timing, even by the standards of "short timelines" AI prognosticators in the same social cluster
Re: ambition, note that the definition of "powerful AI" seems almost the opposite of what you'd come up with if you were trying to make a confident forecast of something.
Often people will talk about "AI capable of transforming the world economy" or something more like that, leaving room for the AI in question to do that in one of several ways, or to do so while still failing at some important things.
But instead, Anthropic's definition is a big conjunctive list of "it'll be able to do this and that and this other thing and...", and each individual capability is defined in the most aggressive possible way, too! Not just "good enough at science to be extremely useful for scientists," but "smarter than a Nobel Prize winner," across "most relevant fields" (whatever that means). And not just good at science but also able to "write extremely good novels" (note that we have a long way to go on that front, and I get the feeling that people at AI labs don't appreciate the extent of the gap [cf]). Not only can it use a computer interface, it can use every computer interface; not only can it use them competently, but it can do so better than the best humans in the world. And all of that is in the first two paragraphs – there's four more paragraphs I haven't even touched in this little summary!
Re: timing, they have even shorter timelines than Kokotajlo these days, which is remarkable since he's historically been considered "the guy with the really short timelines." (See here where Kokotajlo states a median prediction of 2028 for "AGI," by which he means something less impressive than "powerful AI"; he expects something close to the "powerful AI" vision ["ASI"] ~1 year or so after "AGI" arrives.)
----
I, uh, really do not think this is going to happen in "late 2026 or 2027."
Or even by the end of this presidential administration, for that matter.
I can imagine it happening within my lifetime – which is wild and scary and marvelous. But in 1.5 years?!
The confusing thing is, I am very familiar with the kinds of arguments that "short timelines" people make, and I still find Anthropic's timelines hard to fathom.
Above, I mentioned that Anthropic has shorter timelines than Daniel Kokotajlo, who "merely" expects the same sort of thing in 2029 or so. This probably seems like hairsplitting – from the perspective of your average person not in these circles, both of these predictions look basically identical, "absurdly good godlike sci-fi AI coming absurdly soon." What difference does an extra year or two make, right?
But it's salient to me, because I've been reading Kokotajlo for years now, and I feel like I basically understand his case. And people, including me, tend to push back on him in the "no, that's too soon" direction. I've read many many blog posts and discussions over the years about this sort of thing, I feel like I should have a handle on what the short-timelines case is.
But even if you accept all the arguments evinced over the years by Daniel "Short Timelines" Kokotajlo, even if you grant all the premises he assumes and some people don't – that still doesn't get you all the way to the Anthropic timeline!
To give a very brief, very inadequate summary, the standard "short timelines argument" right now is like:
Over the next few years we will see a "growth spurt" in the amount of computing power ("compute") used for the largest LLM training runs. This factor of production has been largely stagnant since GPT-4 in 2023, for various reasons, but new clusters are getting built and the metaphorical car will get moving again soon. (See here)
By convention, each "GPT number" uses ~100x as much training compute as the last one. GPT-3 used ~100x as much as GPT-2, and GPT-4 used ~100x as much as GPT-3 (i.e. ~10,000x as much as GPT-2).
We are just now starting to see "~10x GPT-4 compute" models (like Grok 3 and GPT-4.5). In the next few years we will get to "~100x GPT-4 compute" models, and by 2030 we will reach ~10,000x GPT-4 compute. (The arithmetic is written out in the short sketch after this list.)
If you think intuitively about "how much GPT-4 improved upon GPT-3 (100x less) or GPT-2 (10,000x less)," you can maybe convince yourself that these near-future models will be super-smart in ways that are difficult to precisely state/imagine from our vantage point. (GPT-4 was way smarter than GPT-2; it's hard to know what "projecting that forward" would mean, concretely, but it sure does sound like something pretty special)
Meanwhile, all kinds of (arguably) complementary research is going on, like allowing models to "think" for longer amounts of time, giving them GUI interfaces, etc.
All that being said, there's still a big intuitive gap between "ChatGPT, but it's much smarter under the hood" and anything like "powerful AI." But...
...the LLMs are getting good enough that they can write pretty good code, and they're getting better over time. And depending on how you interpret the evidence, you may be able to convince yourself that they're also swiftly getting better at other tasks involved in AI development, like "research engineering." So maybe you don't need to get all the way yourself, you just need to build an AI that's a good enough AI developer that it improves your AIs faster than you can, and then those AIs are even better developers, etc. etc. (People in this social cluster are really keen on the importance of exponential growth, which is generally a good trait to have but IMO it shades into "we need to kick off exponential growth and it'll somehow do the rest because it's all-powerful" in this case.)
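Written out, the scaling convention from the list above looks like the sketch below. The multipliers just restate the rough "~100x training compute per GPT number" convention with GPT-4 as the reference point; they are not measured figures.

```python
# The "~100x training compute per GPT number" convention, with GPT-4 = 1.
# These are the post's rough multipliers, not measured numbers.
relative_compute = {
    "GPT-2":                                1 / 10_000,  # ~10,000x less than GPT-4
    "GPT-3":                                1 / 100,     # ~100x less than GPT-4
    "GPT-4 (2023)":                         1,
    "'10x' models (Grok 3, GPT-4.5)":       10,
    "'100x' models (next few years)":       100,
    "frontier by ~2030 (per the argument)": 10_000,
}

for model, multiple in relative_compute.items():
    print(f"{model:40s} {multiple:>12.4f}x GPT-4 training compute")
```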
And like, I have various disagreements with this picture.
For one thing, the "10x" models we're getting now don't seem especially impressive – there has been a lot of debate over this of course, but reportedly these models were disappointing to their own developers, who expected scaling to work wonders (using the kind of intuitive reasoning mentioned above) and got less than they hoped for.
And (in light of that) I think it's double-counting to talk about the wonders of scaling and then talk about reasoning, computer GUI use, etc. as complementary accelerating factors – those things are just table stakes at this point, the models are already maxing out the tasks you had defined previously, you've gotta give them something new to do or else they'll just sit there wasting GPUs when a smaller model would have sufficed.
And I think we're already at a point where nuances of UX and "character writing" and so forth are more of a limiting factor than intelligence. It's not a lack of "intelligence" that gives us superficially dazzling but vapid "eyeball kick" prose, or voice assistants that are deeply uncomfortable to actually talk to, or (I claim) "AI agents" that get stuck in loops and confuse themselves, or any of that.
We are still stuck in the "Helpful, Harmless, Honest Assistant" chatbot paradigm – no one has seriously broken with it since Anthropic introduced it in a paper in 2021 – and now that paradigm is showing its limits. ("Reasoning" was strapped onto this paradigm in a simple and fairly awkward way, the new "reasoning" models are still chatbots like this, no one is actually doing anything else.) And instead of "okay, let's invent something better," the plan seems to be "let's just scale up these assistant chatbots and try to get them to self-improve, and they'll figure it out." I won't try to explain why in this post (IYI I kind of tried to here) but I really doubt these helpful/harmless guys can bootstrap their way into winning all the Nobel Prizes.
----
All that stuff I just said – that's where I differ from the usual "short timelines" people, from Kokotajlo and co.
But OK, let's say that for the sake of argument, I'm wrong and they're right. It still seems like a pretty tough squeeze to get to "powerful AI" on time, doesn't it?
In the OSTP submission, Anthropic presents their latest release as evidence of their authority to speak on the topic:
In February 2025, we released Claude 3.7 Sonnet, which is by many performance benchmarks the most powerful and capable commercially-available AI system in the world.
I've used Claude 3.7 Sonnet quite a bit. It is indeed really good, by the standards of these sorts of things!
But it is, of course, very very far from "powerful AI." So like, what is the fine-grained timeline even supposed to look like? When do the many, many milestones get crossed? If they're going to have "powerful AI" in early 2027, where exactly are they in mid-2026? At end-of-year 2025?
If I assume that absolutely everything goes splendidly well with no unexpected obstacles – and remember, we are talking about automating all human intellectual labor and all tasks done by humans on computers, but sure, whatever – then maybe we get the really impressive next-gen models later this year or early next year... and maybe they're suddenly good at all the stuff that has been tough for LLMs thus far (the "10x" models already released show little sign of this but sure, whatever)... and then we finally get into the self-improvement loop in earnest, and then... what?
They figure out how to squeeze even more performance out of the GPUs? They think of really smart experiments to run on the cluster? Where are they going to get all the missing information about how to do every single job on earth, the tacit knowledge, the stuff that's not in any web scrape anywhere but locked up in human minds and inaccessible private data stores? Is an experiment designed by a helpful-chatbot AI going to finally crack the problem of giving chatbots the taste to "write extremely good novels," when that taste is precisely what "helpful-chatbot AIs" lack?
I guess the boring answer is that this is all just hype – tech CEO acts like tech CEO, news at 11. (But I don't feel like that can be the full story here, somehow.)
And the scary answer is that there's some secret Anthropic private info that makes this all more plausible. (But I doubt that too – cf. Brundage's claim that there are no more secrets like that now, the short-timelines cards are all on the table.)
It just does not make sense to me. And (as you can probably tell) I find it very frustrating that these guys are out there talking about how human thought will basically be obsolete in a few years, and pontificating about how to find new sources of meaning in life and stuff, without actually laying out an argument that their vision – which would be the common concern of all of us, if it were indeed on the horizon – is actually likely to occur on the timescale they propose.
It would be less frustrating if I were being asked to simply take it on faith, or explicitly on the basis of corporate secret knowledge. But no, the claim is not that, it's something more like "now, now, I know this must sound far-fetched to the layman, but if you really understand 'scaling laws' and 'exponential growth,' and you appreciate the way that pretraining will be scaled up soon, then it's simply obvious that –"
No! Fuck that! I've read the papers you're talking about, I know all the arguments you're handwaving-in-the-direction-of! It still doesn't add up!
281 notes
tryslat · 10 months ago
Text
How Can I Turn Content Produced by AI into Content Written by Humans?
Transforming AI-generated content into human-written text can be a challenging task. While manually editing the content to make it sound more natural can be time-consuming, there's a more efficient solution available. The Online AI Text Converter Tool provides a quick and easy way to convert AI-generated content into human-like text.
Manual Transformation vs. Using the AI Text Converter Tool
Manually transforming AI-generated text involves editing the content to remove robotic phrasing, adjusting the tone, and ensuring readability. This process can be tedious and require significant effort, especially with large volumes of content.
In contrast, the Online AI Text Converter Tool simplifies this process. This free tool allows you to convert AI-generated text into natural-sounding human text with just a few clicks. It utilizes advanced algorithms to ensure that the converted content retains its original meaning while adopting a more human-like tone.
Benefits of Using the Online AI Text Converter Tool
Efficiency: Convert AI text in seconds instead of spending hours on manual edits.
Ease of Use: User-friendly interface that requires no special skills or training.
High Quality: Produces high-quality, human-like text that reads naturally.
Cost-Free: Available for free with unlimited use, no hidden fees.
Versatility: Suitable for various types of content, including essays, blog posts, and professional documents.
How to Use the Online AI Text Converter Tool
Using the AI Text Converter Tool is straightforward. Visit the website, paste your AI-generated content into the provided text box, and click the convert button. The tool will quickly transform your text into a human-readable format, ready for use.
In conclusion, if you need to efficiently convert AI-generated content into human-like text, the Online AI Text Converter Tool is an invaluable resource. It saves time, enhances content quality, and is free to use. Try it today and see how easy it is to turn AI text into compelling, human-readable content.
3 notes
elbiotipo · 3 months ago
Text
It's so funny when people say "not all AI sucks! only generative AI!" because generative AI is genuinely an amazing technology.
You know why those early AI images like Craiyon's were so strange and dreamlike? It's because generative algorithms actually do generate those images. They don't copy-paste like a collage; images are created pixel-by-pixel. Generative AIs are systems that assimilate concepts, associate them with images, and are able to translate instructions given in plain human text instead of code and create new things from them (this was seen as pure science fiction less than 5 years ago). This is why AI images now have better quality: newer models are able to understand more concepts and implement them. Because the idea with generative AI isn't and shouldn't be for it to be able to just copy-paste images or text, it's the ability to generate new images or text from learned concepts.
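To make "generated pixel-by-pixel, not collaged" concrete, here is a heavily simplified sketch of the denoising loop most current image generators are built around. The "denoiser" below is a placeholder that just nudges values toward a fixed guess, and the prompt conditioning is a toy stand-in; a real model is a trained neural network conditioned on an embedding of the prompt text. The important part survives the simplification: every pixel value is synthesized from noise, and nothing is pasted in from any source image.

```python
import random

WIDTH, HEIGHT, STEPS = 8, 8, 50

def fake_denoiser(pixel, prompt_signal):
    """Placeholder for the trained network: guess what this pixel 'should' be."""
    return 0.5 + 0.1 * prompt_signal  # a real model predicts image structure here

def generate(prompt):
    prompt_signal = (hash(prompt) % 100) / 100.0 - 0.5  # toy stand-in for text conditioning
    image = [[random.random() for _ in range(WIDTH)] for _ in range(HEIGHT)]  # pure noise
    for step in range(STEPS):
        trust = (step + 1) / STEPS  # later steps lean more on the denoiser's guess
        image = [[(1 - trust) * px + trust * fake_denoiser(px, prompt_signal) for px in row]
                 for row in image]
    return image

img = generate("a cat on a mat")
print(round(img[0][0], 3), round(img[-1][-1], 3))  # every value was computed, none copied
```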
This post gets, in a very easy, understandable way, into the details on how this works. And I hope you do give it a read no matter your stand on this:
This, as I always say, was considered pure science fiction, a thing that would not exist until at least the 2100s if at all, and it is now here. And not only by corporations, but open-source models are being researched by the minute.
No, I do not care for AI corporations and I don't care for what they're mostly trying to use AI for (advertising and customer service). I care about what can become of this technology. Advertising and mass produced shit will be shit, no matter if it's done by human or AI. Do I expect an advertisement to be shit because it uses AI? No, I expect it to be shit because it is an advertisement.
What will be interesting, I think, once the utterly poisoned current discourse about AI calms down, is when artists with interesting concepts and a good handle on these tools start to create new things, much like synthesizers didn't ruin music and photography didn't ruin art, because there was always an artist behind the tool in the first place. Someone is writing those prompts to create something. Your question should be who, and why.
234 notes · View notes
aitohumantextconverter · 9 months ago
Text
Why You Should Convert AI-Generated Text to Human Text for Better Engagement
AI-generated text has come a long way, streamlining content creation across industries and simplifying writing tasks. However, despite AI’s capabilities, many find that purely AI-generated text often lacks the nuance and relatability of human communication. This gap in engagement has driven demand for AI to human text converters, tools that transform AI-generated text into natural, conversational language.
In this article, we’ll explore the benefits of converting AI-generated text to human text and how this approach can improve engagement across various platforms—from blogs to social media posts.
1. Enhancing Readability and Relatability
AI content, while accurate and informative, can sometimes feel stiff or overly structured. By converting AI-generated text to human-like language, writers can make content more readable and relatable. Humanized content flows better, making it easier for readers to absorb and engage with the information.
Tip: Use tools like AI To Human Text Converter to instantly enhance readability without losing the original meaning, providing a friendly, natural feel to AI content.
2. Building Trust with Authentic Language
Today’s readers are quick to notice content that sounds overly robotic or formulaic. Authenticity builds trust, and content that sounds human is more likely to resonate with audiences. Humanized content reflects empathy, personality, and emotion—all key to forming connections with readers.
Example: For brand marketing, human-like language can give a relatable tone, creating a stronger bond with customers.
3. Improving SEO and User Experience
Engagement metrics like time spent on page, bounce rate, and user interaction are crucial for SEO. Humanized content tends to hold readers’ attention longer, leading to improved SEO performance. A conversational style with strategically placed keywords keeps readers on the page, helping improve overall rankings.
SEO Tip: Tools like AI To Human Text Converter allow for easy keyword integration in humanized content, supporting a natural flow that improves SEO.
4. Increasing Social Media Engagement
Social media thrives on relatable, engaging content. Humanized text is more likely to spark conversation, shares, and comments. When content feels natural, people are more inclined to engage, making humanized AI text ideal for social media marketing and brand voice consistency.
Pro Tip: Whether for captions or blog links, humanized text can make a brand’s social media posts sound more genuine, increasing shares and comments.
5. Better Academic and Professional Use
For assignments, essays, and formal presentations, text that sounds human communicates ideas effectively without seeming detached. AI-generated text may be clear, but converting it to human text adds the depth and comprehension needed in academic and professional contexts.
Application: Many students and professionals now use converters to ensure their writing is both clear and impactful, particularly when communicating complex concepts.
6. Saving Time and Effort
Creating high-quality human-like text from scratch can be time-consuming. AI to human text converters streamline this process, turning rough AI-generated drafts into polished, human-sounding text. This allows writers to focus on creativity and strategy instead of manual rewriting.
Choosing the Right AI to Human Text Converter
When choosing a tool to humanize AI-generated text, consider the following:
Ease of Use: Tools that allow for instant conversion without technical setup are ideal for quick turnarounds.
Customization: Look for converters that let you adjust tone and style for different audiences.
Free Usage: Some tools, like AI To Human Text Converter, offer unlimited conversions for free, making it accessible for various needs.
1 note · View note
therobotmonster · 27 days ago
Note
On that recent Disney Vs Midjourney court thing wrt AI, how strong do you think their case is in a purely legal sense, what do you think MJ's best defenses are, how likely is Disney to win, and how bad would the outcome be if they do win?
Oh sure, ask an easy one.
In a purely legal sense, this case is very questionable.
Scraping as fair use has already been established in legal cases when it comes to text, and infringement is based on publication, not inspiration. There's also the question of whether Midjourney would even be responsible for its users' creations under safe harbor provisions, or under a basic understanding of what an art tool is. Adobe isn't responsible for the many, many illegal images its software is used to make, after all.
The best defense, I would say, is the fair use nature of dataset training, plus the very nature of transformative work: transformative work is protected, and by definition it requires that the work-to-be-transformed be involved. Disney's basic approach of 'your AI knows who our characters are, so that proves you stole from us' would render fair use impossible.
I don't think it's likely for Disney to win, but the problem with civil action is that proof isn't needed, just persuasion. Bad civil cases happen all the time, and they produce case law. Which is what Disney is trying to do here.
If Disney wins, they'll have pulled off a coup of regulatory capture, basically ensuring that large media corporations can replace their staff with robots but that small creators will be limited to underpowered models to compete with them.
Worse, everything that is a 'smoking gun' when it comes to copyright infringement on Midjourney? That's fan art. All that "look how many copyrighted characters they're using-" applies to the frontpage of Deviantart or any given person's Tumblr feed more than to the featured page of Midjourney.
Every single website with user-generated content is chock full of copyright infringement because of fan art and fanfic, and fair use arguments are far harder to make for fan works. The law won't distinguish between a human with a digital art package and a human with an AI art package, and any win Disney makes against MJ is a win against Artstation, DeviantArt, Rule34.xxx, AO3, and basically everyone else.
"We get a slice of your cheese if enough of your users post our mouse" is not a rule you want in law.
And the rules won't be enforced by a court 9/10 times. Even if your individual work is plainly fair use, it's not going to matter to whatever image-based version of youtube's copyreich bots gets applied to Artstation and RedBubble to keep the site owners safe.
Even if you're right, you won't have the money to fight.
Heck, Adobe already spies on what you make to report you to the feds if you're doing a naughty; imagine its internal watchdogs throwing up warnings when they detect you drawing Princess Jasmine and Ariel making out. That may sound nuts, but it's entirely viable.
And that's just one level of possible nightmare. If the judgement is broad enough, it could provide a legal pretext for pursuing copyright lawsuits over style and inspiration. Given how consolidated IP is, this means you're going to have several large cabals that can crush any new work that seems threatening, as there's bound to be something they can draw a connection to.
If you want to see how utterly stupid inspiration=theft is, check out when Harlan Ellison sued James Cameron over Terminator because Cameron was dumb enough to say he was inspired by Demon with a Glass Hand and Soldier from the Outer Limits.
Harlan was wrong on the merits, wrong ethically, and the case shouldn't have been entertained in the first place, but like I said, civil law isn't about facts. Cameron was honest about how two episodes of a show he saw as a kid gave him this completely different idea (the similarities are 'robot that looks like a guy with hand reveal' and 'time traveling soldier goes into a gun store and tries to buy future guns'), and he got unjustly sued for it.
If you ever wonder why writers only talk about their inspirations that are dead, that's why. Anything that strengthens the "what goes in" rather than the "what goes out" approach to IP is good for corps, bad for culture.
174 notes · View notes
not-terezi-pyrope · 1 year ago
Text
Often when I post an AI-neutral or AI-positive take on an anti-AI post I get blocked, so I wanted to make my own post to share my thoughts on "Nightshade", the new adversarial data poisoning attack that the Glaze people have come out with.
I've read the paper and here are my takeaways:
Firstly, this is not necessarily or primarily a tool for artists to "coat" their images like Glaze; in fact, Nightshade works best when applied to carefully selected "archetypal" images, ideally ones that were themselves generated with generative AI from a prompt for the generic concept being attacked (which is what the authors did in their paper). Also, the image has to be explicitly paired with a specific text caption optimized to have the most impact, which would make it pretty annoying for individual artists to deploy.
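For the curious, here is a rough conceptual sketch (my own toy Python, not the paper's code) of what that kind of poisoning step involves: perturb the image within a small budget so that a feature extractor reads it as a different concept than the caption it ships with. The features function here is a hypothetical placeholder for a real image encoder.

```python
import numpy as np

# Conceptual sketch only -- this is not the Nightshade implementation.
# The idea: nudge an image, within a small perturbation budget, so that a
# model's feature extractor "sees" a different concept than the caption the
# image is published with. `features` is a hypothetical stand-in for the
# image encoder of a generative model.

def features(image: np.ndarray) -> np.ndarray:
    return image.mean(axis=(0, 1))  # placeholder for a learned embedding

def poison(image: np.ndarray, anchor_image: np.ndarray,
           budget: float = 0.05, steps: int = 100, lr: float = 0.005) -> np.ndarray:
    """Push the image's features toward the anchor's, staying close to the original pixels."""
    poisoned = image.copy()
    target_feat = features(anchor_image)
    for _ in range(steps):
        # Nudge each channel toward the anchor's features, then clip so the
        # change stays within the (visually small) budget.
        direction = np.sign(target_feat - features(poisoned))
        poisoned = np.clip(poisoned + lr * direction, image - budget, image + budget)
    return poisoned

# A poisoned "dog" photo would then be published with its normal "dog" caption,
# while its features have drifted toward the anchor concept, so misleading
# image/caption pairs end up in any scraped training set.
img = np.random.rand(32, 32, 3)      # stand-in for a real photo
anchor = np.random.rand(32, 32, 3)   # stand-in for an anchor-concept image
poisoned_img = poison(img, anchor)
```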
While the intent of Nightshade is to have maximum impact with minimal data poisoning, attacking a large model would still require many thousands of samples in the training data. Obviously, if you create a webpage specifically to host a massive gallery of poisoned images, it can be fairly easily blacklisted, so you'd need a lot of patience and resources to hide these well enough that they proliferate into the training datasets of major models.
The main use case for this as suggested by the authors is to protect specific copyrights. The example they use is that of Disney specifically releasing a lot of poisoned images of Mickey Mouse to prevent people generating art of him. As a large company like Disney would be more likely to have the resources to seed Nightshade images at scale, this sounds like the most plausible large scale use case for me, even if web artists could crowdsource some sort of similar generic campaign.
Either way, the optimal use case of "large organization repeatedly using generative AI models to create images, then running through another resource heavy AI model to corrupt them, then hiding them on the open web, to protect specific concepts and copyrights" doesn't sound like the big win for freedom of expression that people are going to pretend it is. This is the case for a lot of discussion around AI and I wish people would stop flagwaving for corporate copyright protections, but whatever.
The panic about AI resource use in terms of power/water is mostly bunk (AI training is done once per large model, and in terms of industrial production processes, using a single airliner flight's worth of carbon output for an industrial model that can then be used indefinitely to do useful work seems like small fry in comparison to all the other nonsense that humanity wastes power on). However, given that deploying this at scale would be a huge compute sink, it's ironic to see anti-AI activists, for whom that is a talking point, hyping this up so much.
In terms of actual attack effectiveness: like Glaze, this once again relies on analysis of the feature space of current public models such as Stable Diffusion. This means that effectiveness is reduced on other models with differing architectures and training sets. However, also like Glaze, it looks like the overall "world feature space" that generative models fit to is generalisable enough that this attack will work across models.
That means that if this does get deployed at scale, it could definitely fuck with a lot of current systems. That said, once again, it'd likely have a bigger effect on indie and open-source generation projects than on the massive corporate monoliths, who are probably already working to secure proprietary data sets, as I believe Adobe Firefly did. I don't like how these attacks concentrate power upwards.
The generalisation of the attack doesn't mean that it can't be defended against, but it does mean that you'd likely need to invest in bespoke measures; e.g. specifically training a detector on a large dataset of Nightshade-poisoned samples in order to filter them out, spending more time and labour curating your input dataset, or designing radically different architectures that don't produce a comparably similar virtual feature space. I.e. the effect of this being used at scale wouldn't be to eliminate "AI art", but it could potentially cause a headache for people all around and limit accessibility for hobbyists (although presumably curated datasets would trickle down eventually).
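As a sketch of the filtering defence mentioned above, assuming you had already trained a detector on known poisoned samples (looks_poisoned below is a hypothetical placeholder, not a real classifier):

```python
# Sketch of the dataset-filtering defence, under the assumption that someone
# has already trained a detector on known Nightshade-style samples.

def looks_poisoned(image) -> float:
    # A real detector (a small classifier trained on poisoned examples)
    # would return a probability here; this stub just marks the idea.
    return 0.0

def filter_training_set(samples, threshold: float = 0.5):
    """Drop (image, caption) pairs the detector flags as likely poisoned."""
    return [(img, cap) for img, cap in samples if looks_poisoned(img) < threshold]

clean = filter_training_set([("image_bytes", "a photo of a dog")])
```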
All in all a bit of a dick move that will make things harder for people in general, but I suppose that's the point, and what people who want to deploy this at scale are aiming for. I suppose with public data scraping that sort of thing is fair game I guess.
Additionally, since making my first reply I've had a look at their website:
Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
Once again we see that the intended impact of Nightshade is not to eliminate generative AI but to make it infeasible for models to be created and trained without a corporate money-bag to pay licensing fees for guaranteed clean data. I generally feel that this focuses power upwards and is overall a bad move. If anything, this sort of model, where only large corporations can create and control AI tools, will do nothing to counter the economic displacement without worker protection that is the real issue with AI systems deployment, and will exacerbate the problem of the benefits of those systems being concentrated in said large corporations.
Kinda sucks how that gets pushed through by lying to small artists about the importance of copyright law for their own small-scale works (ignoring the fact that processing derived metadata from web images is pretty damn clearly a fair use application).
1K notes · View notes
ai-undetectable · 1 year ago
Text
Elevate Your Content Strategy with AI Undetectable Humanize AI Text Tool for Engaging Communication

Revolutionize your approach to content creation with AI Undetectable, the ultimate Humanize AI Text tool. Designed to empower businesses and creators alike, our platform seamlessly integrates artificial intelligence with human-like expression, resulting in content that truly connects with your audience. Say goodbye to bland, impersonal writing and hello to engaging communication that drives results. Whether you're crafting marketing materials, website copy, or social media posts, AI Undetectable ensures that your message stands out in a crowded digital landscape. Join the ranks of forward-thinking brands and creators who are leveraging the power of AI to elevate their content strategy. With AI Undetectable, the future of authentic communication is within reach.
https://aiundetectable.com/ai-undetectable
0 notes
dragonnarrative-writes · 3 months ago
Text
Generative AI Is Bad For Your Creative Brain
In the wake of early announcing that their blog will no longer be posting fanfiction, I wanted to offer a different perspective than the ones I’ve been seeing in the argument against the use of AI in fandom spaces. Often, I’m seeing the arguments that the use of generative AI or Large Language Models (LLMs) make creative expression more accessible. Certainly, putting a prompt into a chat box and refining the output as desired is faster than writing a 5000 word fanfiction or learning to draw digitally or traditionally. But I would argue that the use of chat bots and generative AI actually limits - and ultimately reduces - one’s ability to enjoy creativity.
Creativity, as defined by the Cambridge Advanced Learner's Dictionary & Thesaurus, is the ability to produce or use original and unusual ideas. By definition, the use of generative AI discourages the brain from engaging with thoughts creatively. ChatGPT, character bots, and other generative AI products have to be trained on already existing text. In order to produce something "usable," an LLM analyzes patterns within text to organize information into what the computer has been trained to identify as "desirable" outputs. These outputs are not always accurate, because computers don't "think" the way that human brains do. They don't create. They take the most common and refined data points and combine them according to predetermined templates to assemble a product. In the case of chat bots that are fed writing samples from authors, the product is not original - it's a mishmash of the writings that were fed into the system.
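As a toy illustration of that point, here is a tiny bigram "language model" that can only recombine sequences it has already seen. Real LLMs are vastly larger and subtler, but the underlying move, predicting a likely next token from patterns in the training text, is the same.

```python
from collections import Counter, defaultdict
import random

# Toy illustration: a (very) small language model that only recombines
# patterns from its training text. It cannot say anything it has never seen.

training_text = "the cat sat on the mat and the cat slept on the mat".split()

bigrams = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    bigrams[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = bigrams.get(word)
        if not options:
            break
        # Pick the next word in proportion to how often it followed this one.
        word = random.choices(list(options), weights=options.values())[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. 'the cat sat on the mat and the cat'
```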
Dialectical Behavioral Therapy (DBT) is a therapy modality developed by Marsha M. Linehan based on the understanding that growth comes when we accept that we are doing our best and we can work to better ourselves further. Within this modality, a few core concepts are explored, but for this argument I want to focus on Mindfulness and Emotion Regulation. Mindfulness, put simply, is awareness of the information our senses are telling us about the present moment. Emotion regulation is our ability to identify, understand, validate, and control our reaction to the emotions that result from changes in our environment. One of the skills taught within emotion regulation is Building Mastery - putting forth effort into an activity or skill in order to experience the pleasure that comes with seeing the fruits of your labor. These are by no means the only mechanisms of growth or skill development, however, I believe that mindfulness, emotion regulation, and building mastery are a large part of the core of creativity. When someone uses generative AI to imitate fanfiction, roleplay, fanart, etc., the core experience of creative expression is undermined.
Creating engages the body. As a writer who uses pen and paper as well as word processors while drafting, I had to learn how my body best engages with my process. The ideal pen and paper, the fact that I need glasses to work on my computer, the height of the table all factor into how I create. I don’t use audio recordings or transcriptions because that’s not a skill I’ve cultivated, but other authors use those tools as a way to assist their creative process. I can’t speak with any authority to the experience of visual artists, but my understanding is that the feedback and feel of their physical tools, the programs they use, and many other factors are not just part of how they learned their craft, they are essential to their art.
Generative AI invites users to bypass mindfully engaging with the physical act of creating. Part of becoming a person who creates from the vision in one’s head is the physical act of practicing. How did I learn to write? By sitting down and making myself write, over and over, word after word. I had to learn the rhythms of my body, and to listen when pain tells me to stop. I do not consider myself a visual artist - I have not put in the hours to learn to consistently combine line and color and form to show the world the idea in my head.
But I could.
Learning a new skill is possible. But one must be able to regulate one’s unpleasant emotions to be able to get there. The emotion that gets in the way of most people starting their creative journey is anxiety. Instead of a focus on “fear,” I like to define this emotion as “unpleasant anticipation.” In Atlas of the Heart, Brene Brown identifies anxiety as both a trait (a long term characteristic) and a state (a temporary condition). That is, we can be naturally predisposed to be impacted by anxiety, and experience unpleasant anticipation in response to an event. And the action drive associated with anxiety is to avoid the unpleasant stimulus.
Starting a new project, developing a new skill, or leaning into a creative endeavor can all trigger anxiety. There is an unpleasant anticipation of things not turning out exactly right, of being judged negatively, of being unnoticed or even ignored. There is a lot less anxiety in submitting a prompt to a machine than in looking at a blank page and possibly making what could be a mistake. Unfortunately, the more something is avoided, the more anxiety is generated when it comes up again. Using generative AI doesn't encourage starting a new project and learning a new skill - in fact, it makes the prospect more distressing to the mind, and encourages further avoidance of developing a personal creative process.
One of the best ways to reduce anxiety about a task, according to DBT, is for a person to do that task. Opposite action is a method of reducing the intensity of an emotion by going against its action urge. The action urge of anxiety is to avoid, and so opposite action encourages someone to approach the thing they are anxious about. This doesn't mean that everyone who has anxiety about creating should make themselves write a 50k word fanfiction as their first project. But in order to reduce anxiety about dealing with a blank page, one must face and engage with a blank page. Even a single sentence fragment, two lines intersecting, or an unintentional drop of ink means the page is no longer blank. If those are still difficult to approach, a prompt, tutorial, or guided exercise can be used to reinforce the understanding that a blank page can be changed, slowly but surely, by your own hand.
(As an aside, I would discourage the use of AI prompt generators - these often use prompts that were already created by a real person without credit. Prompt blogs and posts exist right here on tumblr, as well as imagines and headcannons that people often label “free to a good home.” These prompts can also often be specific to fandom, style, mood, etc., if you’re looking for something specific.)
In the current social media and content consumption culture, it's easy to feel like the first attempt should be a perfect final product. But creating isn't just about the final product. It's about the process. Bo Burnham's Inside is phenomenal, but I think the outtakes are just as important. We didn't get That Funny Feeling, How the World Works, and All Eyes on Me because Bo Burnham woke up one morning and wrote them that same day. We got them because he's been developing and honing his craft, as well as learning about himself as a person and an artist, since he was a teenager. Building mastery in any skill takes time, and it's often slow.
Slow is an important word when it comes to creating. The fact that skill takes time to develop, and that a final piece of art takes time regardless of skill, is its own source of anxiety. Compared to @sentientcave, who writes about 2k words per day, I'm very slow. And for all the time it takes me, my writing isn't perfect - I find typos after posting and sometimes my phrasing is awkward. But my writing is better than it was, and my confidence is much higher. I can sit and write for longer and longer periods, my projects are more diverse, and I'm sharing them with people, even before the final edits are done. And I only learned how to do this because I took the time to push through the discomfort of not being as fast or as skilled as I want to be in order to learn what works for me and what doesn't.
Building mastery - getting better at a skill over time so that you can see your own progress - isn’t just about getting better. It’s about feeling better about your abilities. Confidence, excitement, and pride are important emotions to associate with our own actions. It teaches us that we are capable of making ourselves feel better by engaging with our creativity, a confidence that can be generalized to other activities.
Generative AI doesn’t encourage its users to try new things, to make mistakes, and to see what works. It doesn’t reward new accomplishments to encourage the building of new skills by connecting to old ones. The reward centers of the brain have nothing to respond to to associate with the action of the user. There is a short term input-reward pathway, but it’s only associated with using the AI prompter. It’s designed to encourage the user to come back over and over again, not develop the skill to think and create for themselves.
I don’t know that anyone will change their minds after reading this. It’s imperfect, and I’ve summarized concepts that can take months or years to learn. But I can say that I learned something from the process of writing it. I see some of the flaws, and I can see how my essay writing has changed over the years. This might have been faster to plug into AI as a prompt, but I can see how much more confidence I have in my own voice and opinions. And that’s not something chatGPT can ever replicate.
153 notes · View notes