#No-Code AI
Explore tagged Tumblr posts
Text
Shaping Tomorrow: Three AI Automation Trends to Watch in 2025.
Sanjay Kumar Mohindroo. skm.stayingalive.in
Three pivotal AI automation trends for 2025—hyper-automated workflows, AI decision engines, and democratized bots—explained with clear steps to act now.
A Quick Look at the Forces Driving Change
In a world racing toward smarter machines, three key #AIautomation trends will define how we work, decide, and innovate in 2025.…
#2025 technology#AI#AI automation trends#artificial-intelligence#decision engines#democratized AI bots#enterprise AI#hyper-automation#News#no-code AI#Sanjay Kumar Mohindroo#smart workflows#technology
0 notes
How ChatGPT is Not a Replacement for Enterprise Conversational AI Platforms
ChatGPT is a new dialog-based AI chatbot that uses natural language processing (NLP) and machine learning to generate automated conversations. The field of conversational AI has seen rapid growth in recent years, with the development of new AI models and advancements in AI-powered chatbots. The conversational AI market is projected to reach $32.62 billion by 2030, growing at a CAGR of 23.6% from 2023 to 2030 (Source: Grand View Research). Enterprise chatbots are increasingly being adopted by businesses for business automation, streamlining workflows, and enhancing customer interactions.
AI assistants like ChatGPT enable computers to understand and respond to human input, creating a more natural and intuitive interaction between humans and technology. This powerful GPT-3.5-based AI chatbot can perform a variety of chatbot integration tasks without requiring extensive coding, making it a strong contender in the era of no-code AI. However, despite its capabilities, can ChatGPT truly replace enterprise chatbots? We explore its potential and why it may not be an adequate substitute when compared to a dedicated, enterprise-level AI chatbot solution.
What is ChatGPT?
ChatGPT is a generative pre-trained transformer (GPT) based on the conference paper “Attention Is All You Need.” The transformer architecture is built around the attention mechanism, a key component of modern conversational AI. ChatGPT is a type of generative AI, a family that also includes architectures such as generative adversarial networks (GANs). As an AI-powered chatbot, ChatGPT can generate new content based on user input, making it a versatile tool for business automation and various other applications. Its capabilities include question answering, content creation, essay writing, text completion, code completion, input data translation, and much more.
The training of this AI chatbot involves reinforcement learning from human feedback, where human AI trainers provide expected responses that are used as feedback to iteratively improve the model. This training refines a model that predicts the next words in a sentence based on the previous sequence, enabling seamless chatbot integration in various industries.
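The next-word idea is easy to see with a toy model. The following is only a sketch of the statistical intuition, not of how GPT models actually work internally: count word bigrams in a tiny invented corpus and predict the most frequent follower.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each next word follows it."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most statistically likely next word, or None."""
    counts = followers.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical mini-corpus for illustration only.
corpus = [
    "the bot answers the question",
    "the bot answers the user",
    "the bot escalates the ticket",
]
model = train_bigrams(corpus)
print(predict_next(model, "bot"))  # "answers" follows "bot" most often
```

A real LLM conditions on the whole preceding sequence with a neural network rather than a single previous word, but the output is chosen the same way: whatever continuation is most statistically plausible.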
With the rise of enterprise chatbots and no-code AI solutions, ChatGPT presents both opportunities and challenges. While it enhances automation and simplifies AI deployment, it may not fully replace specialized AI assistants designed for enterprise-level solutions. Like any AI model, ChatGPT comes with its own pros and cons—let’s analyze them in detail.
User Benefits of Using ChatGPT
Generates detailed responses and articulates answers.
Capable of keeping track of previous conversations.
Can regenerate responses for the same user prompt.
Trained to reply in different languages.
Best at answering open-domain questions.
Rejects inappropriate queries.
Limitations of ChatGPT
Limited and biased training data.
Sensitive to the input.
Writes plausible-sounding but incorrect answers.
Unable to answer correctly about world events that occurred after its 2021 training cutoff.
Programming knowledge is essential for custom training and integration.
Lack of scalability.
Existing UI cannot be customized.
A study by Gartner (2023) highlights that while LLMs like ChatGPT improve efficiency by 40%, they are not yet suited for highly specialized business applications requiring structured responses.
While ChatGPT is powerful, it is not a one-size-fits-all solution for business needs. Enterprise chatbots, built using conversational AI platforms, offer domain-specific customization, secure integration, and a better customer experience.
Objective of Conversational AI
The primary goal of Conversational AI is to streamline communication naturally. AI-powered chatbots like ChatGPT enable businesses to automate tasks such as customer inquiries, recommendations, and information dissemination. Enterprise chatbots and AI assistants enhance business automation by improving efficiency and reducing workload. With chatbot integration and no-code AI, companies can deploy AI chatbots without extensive coding. NLP enables these systems to understand and respond intelligently to human input. As a result, businesses can enhance customer experiences while allowing human agents to focus on complex tasks.
Studies indicate that AI-driven automation can reduce customer service costs by up to 30% (Source: Juniper Research).
Enterprise Chatbots vs. ChatGPT
1. Front-end
The user interface (UI) of enterprise chatbots is fully customizable to match a company’s branding, whereas ChatGPT does not provide direct UI customization. ChatGPT needs third-party integration to modify its interface, while enterprise chatbots offer built-in white-labeling for a seamless brand experience.
2. Programming Knowledge
No-code Assistant Platform enables businesses to build chatbots without programming. Features like drag-and-drop bot training, API integration, and ticket generation make them accessible to non-technical users. In contrast, ChatGPT requires programming expertise for custom training, API integration, database connectivity, and advanced functions like OTP verification and payment gateway integration.
3. Integration
Enterprise chatbots support direct integration with business platforms, including websites, WhatsApp, Facebook Messenger, and other social channels. Providers offer seamless integration, ensuring businesses can deploy chatbots effortlessly. ChatGPT requires developer support for integrations, making it less accessible for businesses without coding expertise.
4. NLU & Re-training Complexity
Enterprise chatbots leverage domain-specific Natural Language Understanding (NLU), ensuring highly accurate responses based on business-specific datasets. No-code platforms simplify AI training with drag-and-drop UI for model refinement. In contrast, ChatGPT requires complex fine-tuning, which is more resource-intensive.
5. Cost
Enterprise AI platforms offer flexible pricing models such as on-premises installations, staff training, chatbot development services, and pay-as-you-go plans. Pricing is not restricted by word count or token limits. On the other hand, ChatGPT's pricing is token-based, meaning costs increase with usage (e.g., number of training words, prompt tokens, and response tokens).
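Token-based billing is simple to model. A minimal sketch of how such a bill scales with usage; the per-million-token rates below are hypothetical placeholders, since real API pricing varies by model and changes over time:

```python
def token_cost(prompt_tokens, completion_tokens,
               prompt_rate=0.50, completion_rate=1.50):
    """Estimate a token-metered bill in dollars.

    Rates are hypothetical ($ per million tokens), chosen only to
    illustrate that cost grows linearly with usage.
    """
    return (prompt_tokens * prompt_rate
            + completion_tokens * completion_rate) / 1_000_000

# e.g. 10M prompt tokens + 2M completion tokens in a month:
monthly = token_cost(10_000_000, 2_000_000)
print(f"${monthly:.2f}")  # prints $8.00 at these illustrative rates
```

The point for budgeting is that every extra training word, prompt token, and response token moves the bill, whereas seat- or deployment-priced enterprise plans do not meter by word count.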
A study by Gartner (2023) reports that conversational AI adoption is growing at a CAGR of 23.6%, but businesses prefer structured enterprise chatbot solutions over general AI models.
Benefits of Using Conversational AI Platforms for Enterprise Chatbots
1. No-Code Platform
The No-code Assistant Platform enables businesses to develop fully managed AI-powered chatbots without requiring programming knowledge. Even non-technical users can build business-specific chatbots effortlessly. The drag-and-drop dialogue manager allows easy knowledge base setup and response configuration with minimal input. No-code platforms help businesses reduce development costs and improve chatbot efficiency.
2. Customized Training
Customizable training and flow design features allow organizations to tailor their chatbots to meet specific business needs and customer expectations. This leads to a more personalized and context-aware chatbot experience, enhancing user engagement and overall satisfaction.
3. Analytical Dashboard
AI chatbot platforms provide real-time data insights with graphical visualizations of chatbot traffic, location-based visitors, engagement rates, user feedback ratings, and sentiment analysis. The platform also offers customizable dashboards, allowing businesses to monitor chatbot performance and optimize responses effectively.
4. User-Friendly Interface
No-code AI platforms offer an intuitive, easy-to-use interface that simplifies chatbot creation. Businesses benefit from drag-and-drop tools that streamline chatbot design and deployment. This accessibility makes AI adoption easier for companies of all sizes, fostering widespread adoption and automation.

The cost-effective and highly adaptable Conversational AI platform has gained widespread acceptance among businesses. Its seamless integration with websites, social media platforms, and CRM tools simplifies deployment. Experience AI-powered automation—book a demo or contact us today!
Conclusion
While ChatGPT is a versatile AI model, it lacks the business-specific customization, security, and integration capabilities that enterprise chatbots provide. Enterprise AI chatbot platforms offer scalability, robust security, and tailored AI solutions to enhance customer experience and streamline business processes.
As AI adoption increases, organizations are integrating chatbots, voice bots, and IVR solutions into their workflows. Businesses seeking a custom AI-powered chatbot can connect with Rydot Infotech at [email protected] for expert AI solutions.
#ChatGPT#Conversational AI#Enterprise Chatbots#AI Chatbot#NLP#AI Assistant#Chatbot Integration#No-Code AI#Business Automation#AI-powered Chatbots
1 note
“Slopsquatting” in a nutshell:
1. LLM-generated code tries to run code from online software packages. Which is normal, that’s how you get math packages and stuff but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
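One practical defense this suggests: never install a dependency just because generated code imports it. A minimal sketch of such a guard, with an allowlist and package names that are made up for illustration, that flags anything an LLM suggested which your project hasn't vetted:

```python
# Vetted dependencies your project has actually reviewed and trusts
# (hypothetical list for the example).
ALLOWED_PACKAGES = {"numpy", "scipy", "requests"}

def audit_requirements(requirements):
    """Return suggested packages that aren't on the vetted list.

    Anything returned here should be looked up by hand on the real
    registry (PyPI, npm, ...) before it is ever installed.
    """
    suspicious = []
    for line in requirements:
        name = line.split("==")[0].strip().lower()
        if name and name not in ALLOWED_PACKAGES:
            suspicious.append(name)
    return suspicious

# "mimmic-packige" stands in for a hallucinated package name.
llm_requirements = ["numpy==1.26.0", "mimmic-packige==0.1.0"]
print(audit_requirements(llm_requirements))  # ['mimmic-packige']
```

An allowlist works where a "does it exist on the registry?" check does not, because the whole point of slopsquatting is that the malicious package does exist.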
#slopsquatting#ai generated code#LLM#yes ive got your package right here#why yes it is stable and trustworthy#its readme says so#and now Google snippets read the readme and says so too#no problems ever in mimmic software packige
14K notes
Surely this will have no negative consequences whatsoever!
#dbhc#dbhc art#dbhc grian#dbhc mumbo#dbhc s8#art escapades#grian#Mumbo#mumbojumbo#mumbo jumbo#hermitcraft#hermitcraft au#grumbo#hermitcraft s8#hc watchers#watchers#watcher grian#watcher mumbo#tw eyestrain#tw eye contact#tw eye imagery#tw eldritch#tw glitch#tw horror#not sure what all to tag here so pls lemme know if I should add anything#yeah I took the soul sharing thing in a very ‘’undertale’’ light#those aus where Asriel and frisk share a soul so asriel can maintain his form really changed me /silly#also this was a great idea grian. yeah. yeah okay. give the 6 month old robot with an ai soul the eldritch all seeing powers of a watcher#good idea#love the ‘woah’ page… something about grian being able to see entities at their ‘core’…. the androids being code-contained vessels…
10K notes
anyways good episode
#tadc#the amazing digital circus#tadc ragatha#tadc caine#[ ooc ]#[ doodles ]#can you tell who my second fave is#the scene with caine having an existential crisis over not being good at the only thing he's coded to do is funny to me because#i've been thinking of an au where ragatha and caine are the only people swapped - basically ragatha's the ai and caine's a human now#and ai ragatha's problem was literally That ; just not being good at the one thing you're supposed to do#like fuckin hell turns out if you swap these two there's barely any meaningful change /silly
2K notes

#gego#sukuna#digitalart#gojo satoru#baby girl coded#gojo#megumi fushiguro#geto#aiart#jjk gojo#yuji itadori#ai#geto suguru#jjk#satosugu#jujutsu kaisen#art
1K notes
the past few years, every software developer that has extensive experience, and knows what they're talking about, has had pretty much the same opinion on LLM code assistants: they're OK for some tasks but generally shit. Having something that automates code writing is not new. Codegen before AI meant scripts that generated code you'd otherwise have to write for a task, code so repetitive that it's a genuine time saver to have a script do it.
this is largely the best that LLMs can do with code, but they're still not as good as a simple script because of the inherently unreliable nature of LLMs being a big honkin statistical model and not a purpose-built machine.
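The kind of pre-LLM codegen meant here can be sketched in a few lines: a deterministic script that stamps out repetitive code from a spec (the class and field names below are invented for the example):

```python
# Hypothetical field spec; in practice this might come from a schema file.
FIELDS = [("name", "str"), ("email", "str"), ("age", "int")]

def generate_dataclass(class_name, fields):
    """Deterministically emit Python dataclass source from a field spec."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {class_name}:",
    ]
    lines += [f"    {fname}: {ftype}" for fname, ftype in fields]
    return "\n".join(lines)

source = generate_dataclass("User", FIELDS)
print(source)
# Unlike an LLM, running this twice always yields identical, correct code.
```

That determinism is exactly the contrast being drawn: a template script is a purpose-built machine, while a statistical model merely tends to produce something similar.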
none of the senior devs that say this are out there shouting on the rooftops that LLMs are evil and they're going to replace us. because we've been through this concept so many times over many years. Automation does not eliminate coding jobs, it saves time to focus on other work.
the one thing I wish senior devs would warn newbies about is that you should not rely on LLMs for anything substantial. you should definitely not use it as a learning tool. it will hinder you in the long run because you don't practice the eternally useful skill of "reading things and experimenting until you figure it out". You will never stop reading things and experimenting until you figure it out. Senior devs may have more institutional knowledge and better instincts but they still encounter things that are new to them and they trip through it like a newbie would. this is called "practice" and you need it to learn things
256 notes
Extremely grateful for every single person who still posts coding tutorials on YouTube and Stacks etc. so I can avoid using AI.
#ai#coding#rstudio#my boss said the other day “just use chatGPT”! My dude.... we literally work on renewable energy 😐
273 notes

Kiss me with your honey eyes, which speak to me of a world I no longer know
#hlvrai#half life#gordon freeman#benrey#hlvrai fanart#half life vr but the ai is self aware#my art#frenrey#hlvrai gordon#benrey hlvrai#hlvrai benrey#gordon hlvrai#illustration#artist on tumblr#art#the song is ‘miel’ by Zoé#I HIGHLY recommend it it’s so them#and like any other of their songs too tbh bc they have so many Frenrey coded songs#gordon feetman#Spotify
725 notes
I started watching Code Geass and Suzaku immediately jumped up into my top list of favorite fictional characters
#yugioh#yugioh vrains#yusaku fujiki#code geass#suzaku kururugi#the show reminds me a lot of vrains where a lot of people are selfish with clashing ideals and are taking extreme measures to change things#suzaku is like takeru with some yusaku mixed#lelouch is like ryoken and season 3 ai#cc is similar to ai#i could continue my parallels ramble but ill stop at that#also Suzakus seiyuu is Ai and Kallens seiyuu is one of revolvers knights Baira#art
242 notes
Kallen Kozuki ❤️❤️
Anime: Code Geass
More on my Patreon !!
#ai art#ai artwork#ai generated#anime and manga#ai girl#ai image#ai#ai waifu#waifu#manga#anime ai#anime art#anime#cute anime girl#artwork#aiartwork#ai artist#manga ai#illustration#code geass#kallen kozuki#kallen stadtfeld#kallen code geass#kallen
235 notes
The Comedy of Errors: Developers' Edition
188 notes
Miss Francis Clara Censordoll from Moral Orel is canonically intersex-coded! It's implied she may have a variation such as Complete Androgen Insensitivity Syndrome (CAIS), due to her fixation on eggs—especially fertilized eggs—because her gonads were nonconsensually removed during infancy, which is all too common in individuals with CAIS due to the (arguably false) belief that the testes will have a high chance of cancer.
Requested by @trans-dwightschrute + more under the cut!
The requester elaborates: "During a phone call with her Mom she says this:
"My lack of eggs is not a hinderance, it is an asset. Don't push me down like that when you did this to me."
"Well what good was it relieving me of my reproductive parts if I could not be an expert on the matter? I was an infant when I had them, I was an infant when you had them removed."
Some fans theorize that the unconscentual [sic] surgery implies that she's intersex."
#requested by trans-dwightschrute#intersex#intersex character#intersex coding#complete androgen insensitivity syndrome#androgen insensitivity syndrome#CAIS#AIS#tw igm#moral orel#miss censordoll#francis clara censordoll#I really need to watch moral orel dude.#i already have adult cartoon autism I can't believe I haven't gotten around to this show yet#and ty requester because I was already planning on getting around to her when the nurse bendy hc came in lol
154 notes
“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed

I'm touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:
https://news.ycombinator.com/item?id=39883571
A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).
Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.
There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.
For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.
Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454
There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.
Air Canada shows that for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker's job, and still save the company money on balance.
I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029
But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:
https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby
Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.
The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.
Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.
Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.
That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.
Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.
An AI-assisted radiologist processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).
Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:
https://twitter.com/qntm/status/1773779967521780169
But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.
This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaritized, low-paid human driver:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.
Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:
https://dl.acm.org/doi/10.1145/3442188.3445922
This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:
https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead
But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.
These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":
https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
It was a hoax. When independent material scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":
https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/
As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:
https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors
The fact that we have a four-hundred-year-old name for this phenomenon, and yet we're still falling prey to it, is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#automation#humans in the loop#centaurs#reverse centaurs#labor#ai safety#sanity checks#spot the mistake#code review#driving instructor
857 notes
kodaka games:
big tiddy girl: do you wanna touch my boobs 😏
protagonist: uhhh…no…? *blushing*
uchikoshi games:
big tiddy girl: do you wanna touch my boobs 😏
protagonist: hell yeah!
protagonist's female partner: i will have your head on a stick if you even take so much as one more step towards that woman
big tiddy girl: here's my cat his name is boobs
1K notes