#Benefits of AI in IT operations
Text
#AI-driven IT transformation#AI in IT services#Artificial intelligence in IT#IT transformation with AI#AI for digital transformation#AI-powered IT solutions#Enterprise AI services#Benefits of AI in IT operations#AI implementation in IT companies#Cloud-based AI tools#AI integration strategy
1 note
Text
The AI Revolution: 10 Industries Being Transformed by AI
The relentless march of technology often brings about profound shifts, but few have been as sweeping and impactful as the current Artificial Intelligence (AI) revolution. Once confined to the realms of science fiction, AI has now permeated nearly every facet of our daily lives, quietly reshaping industries, redefining possibilities, and fundamentally altering how businesses operate. This isn’t…
#agricultural AI#agriculture AI tools#AI adoption#AI applications#AI benefits#AI challenges#AI evolution#AI for growth#AI impact on business#AI in agriculture#AI in banking operations#AI in cybersecurity#AI in education#AI in finance#AI in healthcare#AI in industries#AI in manufacturing#AI in media#AI in retail#AI in transportation#AI innovation#AI opportunities#AI revolution#AI solutions#AI strategy#AI-driven cyber defense#AI-driven insights#artificial intelligence transformation#automation#autonomous systems
0 notes
Text
Discover the Advantages of AI for Your Business
In today’s digital landscape, artificial intelligence has become a vital resource for forward-thinking companies and organizations. The benefits of using AI in business are numerous, as its vast potential can revolutionize the way businesses operate, making it an indispensable tool for success. I will explore how AI technologies are transforming the business world, creating competitive…
#AI Implementation Strategies#AI in Business#AI Solutions#Artificial Intelligence Benefits#Business Automation#Future of Business AI#Machine Learning for Business#Optimizing Operations with AI
1 note
Text
What Makes a Good AI Assistant for Revenue Teams?
In today’s fast-moving go-to-market environment, sales and revenue teams need more than just dashboards and reports. They need smart, contextual guidance that surfaces exactly when and where they need it. Enter the AI assistant for sales — an intelligent companion built to streamline decisions, accelerate deals, and eliminate guesswork.
But not all AI assistants are created equal. So, what separates the average from the game-changing?
1. Real-Time, Contextual Intelligence
A good AI assistant doesn't just answer questions; it understands them. It delivers real-time insights based on current data — not last week’s export. Whether a rep is asking about pipeline health, deal status, or forecast accuracy, the assistant should provide immediate, relevant responses within the sales workflow.
2. Deep Integration Across RevOps
The most effective assistants function as part of a broader revops ecosystem. They connect data across sales, marketing, and customer success, giving teams a holistic view of performance. This level of integration helps eliminate silos and ensures that every insight is grounded in shared context.
3. Natural Language Interface
Sales teams are too busy to navigate complicated menus or build custom reports. An AI assistant should offer natural language interactions, letting users ask things like, “What’s the forecast for Q3?” or “Which deals are stuck in the proposal stage?” — and get instant answers (see the sketch after this list).
4. Proactive Recommendations
It’s not enough to react — a top-tier RevOps AI agent anticipates. From highlighting at-risk deals to suggesting next best actions, it proactively drives better decisions, helping reps and leaders stay ahead of problems.
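Concretely, here is a minimal Python sketch of what such a natural-language layer might do under the hood. Everything in it is invented for illustration (the answer() helper, the toy PIPELINE data, the keyword matching); a production assistant would use an actual language model over live CRM data, not regexes.

```python
import re

# Toy pipeline data standing in for a live CRM backend (hypothetical).
PIPELINE = [
    {"deal": "Acme Corp", "stage": "proposal", "value": 50_000},
    {"deal": "Globex", "stage": "negotiation", "value": 120_000},
    {"deal": "Initech", "stage": "proposal", "value": 30_000},
]

def answer(question: str) -> str:
    """Route a plain-English question to a canned data query."""
    q = question.lower()
    if "forecast" in q:
        total = sum(d["value"] for d in PIPELINE)
        return f"Open pipeline totals ${total:,}."
    m = re.search(r"stuck in the (\w+) stage", q)
    if m:
        stage = m.group(1)
        deals = [d["deal"] for d in PIPELINE if d["stage"] == stage]
        return f"Deals in {stage}: {', '.join(deals) or 'none'}."
    return "I can't answer that one yet."

print(answer("What's the forecast for Q3?"))
print(answer("Which deals are stuck in the proposal stage?"))
```

The point of the sketch is the shape of the contract: plain-language question in, immediate answer grounded in shared revenue data out, with no menus or report builders in between.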
Meet “Tell Me” by Crenovent
Crenovent sets a new benchmark with “Tell Me,” its embedded AI assistant for revenue teams. Seamlessly woven into its revenue operations platform, “Tell Me” empowers users to access forecasting, pipeline insights, and performance metrics instantly — all through simple, conversational prompts.
Unlike generic chatbots, “Tell Me” is built specifically for revenue intelligence. It understands business logic, forecasts with precision, and aligns with your team’s goals. It’s not just an assistant — it’s a strategic advisor.
If your team is ready to move from static data to intelligent decisions, explore the power of “Tell Me,” the leading RevOps AI agent by Crenovent.
#ai#crm#crm integration#crm platform#crm services#crm software#crm benefits#crm solutions#b2b saas#crm strategy#revenue operations#revops#revtech#techrev
0 notes
Text
Android vs iOS - Which OS is right for you?
Over the years, the debate between Android and iOS has intensified, making it vital for you to understand the unique features each operating system offers. Android provides a wide range of devices with customizable options, while iOS boasts a seamless user experience and consistent updates. Your choice can impact everything from app availability to device performance. Whether you prioritize…
#AI-powered mobile operating systems#Android benefits#Android customization options#Android features#Android flexibility vs iOS simplicity#Android market share vs iOS#Android or iOS which is better#Android vs iOS#Android vs iOS comparison#Android vs iOS for app development#Android vs iOS for gaming#Android vs iOS performance analysis#Android vs iOS security features#Android vs iOS user experience#best mobile operating system#choose between Android and iOS#choosing the right OS for developers#cross-platform app development Android vs iOS#generative AI apps on Android#generative AI apps on iOS#generative AI in mobile apps#iOS benefits#iOS exclusive features#iOS features#iOS smoothness and stability#mobile apps on Android vs iOS#mobile operating system trends#mobile OS comparison#mobile platform comparison for AI#next-gen mobile technologies comparison
1 note
Text
Applications of Robotic Process Automation in Healthcare
Robotic Process Automation (RPA) is transforming the healthcare sector by streamlining repetitive tasks. Key applications include patient data management, appointment scheduling, claims processing, and inventory tracking. By reducing human error and enhancing efficiency, RPA ensures better resource allocation and improved patient care. With expertise in healthcare automation, USM Business Systems stands out as the best mobile app development company, providing cutting-edge RPA solutions for healthcare businesses.
#Robotic process automation in healthcare#RPA applications in healthcare#Healthcare automation benefits#RPA for medical billing#Automation in patient management#RPA in healthcare operations#Healthcare efficiency with RPA#Robotic automation in hospitals#RPA in claims processing#RPA for healthcare workflows#AI and RPA in healthcare#Digital transformation in healthcare#RPA in patient data management#Automation for medical records#RPA in healthcare industry
0 notes
Text
Business Automation with the Power of AI
Business automation has become a central topic in the modern business world. With advances in artificial intelligence (AI) technology, companies now have the opportunity to optimize their processes, increase efficiency, and reduce operating costs. AI not only takes over repetitive manual tasks but also brings sophisticated data-analysis capabilities, accurate predictions, and…
#AI automation#AI benefits#AI challenges#AI in banking#AI in business#AI in logistics#AI in retail#AI training#AI trends 2024#AI-powered tools#artificial intelligence#business automation#business innovation#cost reduction#customer experience#ethical AI#future of AI#operational efficiency#predictive analytics#scalable solutions#smart inventory management#supply chain management#workforce automation
0 notes
Text
Dominating the Market with Cloud Power
Explore how leveraging cloud technology can help businesses dominate the market. Learn how cloud power boosts scalability, reduces costs, enhances innovation, and provides a competitive edge in today's digital landscape. Visit now to read more: Dominating the Market with Cloud Power
#ai-driven cloud platforms#azure cloud platform#business agility with cloud#business innovation with cloud#capital one cloud transformation#cloud adoption in media and entertainment#cloud computing and iot#cloud computing for business growth#cloud computing for financial institutions#cloud computing for start-ups#cloud computing for travel industry#cloud computing in healthcare#cloud computing landscape#Cloud Computing solutions#cloud for operational excellence#cloud infrastructure as a service (iaas)#cloud migration benefits#cloud scalability for enterprises#cloud security and disaster recovery#cloud solutions for competitive advantage#cloud solutions for modern businesses#Cloud storage solutions#cloud technology trends#cloud transformation#cloud-based content management#cloud-based machine learning#cost-efficient cloud services#customer experience enhancement with cloud#data analytics with cloud#digital transformation with cloud
1 note
Text
Revolutionize Your Workflow: Meet Krater AI Today!
Krater AI is revolutionizing task management by offering a suite of benefits that significantly boost efficiency and productivity. Imagine automating those repetitive, time-consuming tasks that often bog us down—Krater AI takes care of them for us! This allows us to redirect our focus to more complex responsibilities, enhancing overall work performance and satisfaction.
But the advantages don’t stop there. Krater AI brings technical innovation to the forefront with advanced tools for creating engaging content, whether it’s presentations or social media management. Its user-friendly interface ensures everyone can leverage its powerful features, making it an invaluable asset in today’s fast-paced world.
#KraterAI #ProductivityTools
#Krater AI#task management#productivity tools#efficiency boost#automate tasks#work performance#user-friendly interface#digital innovation#content creation#social media management#marketing materials#interactive content#professional tools#streamline processes#enhance creativity#AI benefits#time-saving solutions#user experience#advanced technology#business productivity#creative tools#daily operations#task automation#work satisfaction#professional development#innovative solutions#elevate quality#fast-paced world#digital platforms#valuable asset
0 notes
Text
The worst part of popular left-wing AI discourse online is that there's absolutely a need for a robust leftist opposition to the use of cognitive automation without social dispensation for displaced human workers. The lack of any prior measures to facilitate a transition to having fewer humans in the workplace (UBI, more public control over industrial infrastructure, etc.) is a disaster we are sleepwalking into - one that could lock the majority of our society's wealth further into the hands of authoritarian oligarchs who retain control of industry through last-century private ownership models, while no longer needing to rely on us to operate their property.
But now we're seemingly not going to have the opposition we so desperately need, because everyone involved in the anti-AI conversation has pretty thoroughly discredited themselves and their movement by harbouring unconstrained reactionary nonsense, blatant falsehoods and woo. Instead of talking about who owns and benefits from cognitive automation, people are:
Demanding impossibilities like uninventing a now readily accessible technology
Trying to ascribe implicit moral value to said technology instead of to who is using it and how
Siding with corporations on copyright law in the name of "defending small artists"
Repeating obvious and embarrassing technical misconceptions and erroneous pop-sci about machine learning in order to justify their preferred philosophy
Invoking neo-spiritual conservative woo about the specialness of the human soul to try to incoherently discredit a machine that can quite obviously perform certain tasks just as well if not better than they can
Misrepresenting numbers about energy use and environmental cost in an absurd double standard (all modern infrastructure relies on data centers to a similar level of impact, including your favourite fandom social media and online video games!) to build a narrative that AI is some sort of malevolent spirit that damages our reality when it is called upon
It's a level of reactionary ignorance that has completely discredited any popular opposition to industrial AI rollout because it falls apart as soon as you dig deeper than a snappy social media post, or a misguided pro-copyright screed from an insecure web artist (who decries a machine laying eyes on their freely posted work while simultaneously charging commission for fan-art of corporate IPs... I'm sure that will absolutely resolve in their favour).
It would be funny how much people are fucking themselves over with all this, except I'm being fucked over too, and as a result am really quite mad about the situation. We need UBI, and we need to liberate abundance from corporate greed; what we don't need is viral posts about putting distortion filters on anime fan-art to ward off the evil mechanical eye, pointless boycotts of platforms because they are perceived to have let the evil machines taint them, or petitions to further criminalize the creation of derivative works.
3K notes
Note
So the AI ask wasn't spam. I'd highly encourage you to do some research into how AI actually works, because it is neither particularly harmful to the environment, nor is it actually plagiarism.
Ignoring all of that however, my issue is that, fine, if you don't like AI, whatever. But people get so vitriolic about it. Regardless of your opinions on whether it's valid art, your blog is usually a very positive place. It was kind of shocking to see you post something saying "fuck you if you disagree with me, you're a disgrace to the community." Just felt uncharacteristically mean.
Even if you insist AI isn’t actively harmful to the environment or other writers (and the research I have done suggests it is, feel free to send me additional reading) and you simply MUST use prompts to generate personal content, nobody has any business posting it in a creative space for authors, which was the specific complaint addressed in that original post. While I’ll never say “fuck you for who you are as a person” on this blog, I might very well say “fuck you for harmful or rude actions you’ve taken willingly,” which is what that post was about.
Ao3 and similar platforms are designed as an archive for fan content and not a personal storage place for AI prompt results. It is simply not an appropriate place. If you look in the notes of the previous ask you will see other people have brought up additional reasons they have concerns about this practice.
A note on environmental effects for those who might not know: Generative AI requires massive numbers of computers running in data centers. As anyone who has held a laptop in their lap or run Civ VII on an aging desktop knows, computer equipment generates a lot of heat. Even some home and small-industrial computers have water-cooling systems. The amount of water demanded by AI data centers is massive, even as parts of the world (including parts of America) experience water shortages. Besides this, it consumes a lot of power. The rising demand for AI and the improvements demanded to keep it viable mean this problem will continue to scale up rather than improve. Of course, those who benefit from the use of AI continue to downplay these concerns, and money is being funneled into convincing the public that these are not real concerns.
I have been openly against the use of generative AI, especially for art and writing, since its popularity rose in the last couple years. I’m sorry I wasn’t clearer about this stance sooner. I have asked my followers to alert me if I proliferate or share AI content, and continue to do so.
831 notes
Note
As bad as it is that workers even have to resort to this, appealing to copyright is one of the only already-existing laws (at least in the US, where most of these tech companies operate) that workers can leverage to push people with more legal power to pass regulations and laws in workers' favour, especially workers who don't have the power of a strong union to support them, who far outnumber unionized workers. It's Not Good, but trying to get the US government to listen to workers' concerns is like yelling at a brick wall. That said, while thankfully most lawsuits have just been plaintiffs wanting some kind of compensation for data that was used commercially without permission, the ones trying to weasel in things like "style rights" step way too far into overregulation, and even people who want to see Big Tech finally be held accountable for once should be against that sneaky IP expansionism. People are *extremely* understandably worried about copyright expansionism given the past few decades, but laws stating you need to get permission from a copyright holder to use their data in a commercially-used model are something that nobody interested in AI on this site should be worried about affecting their uses of it—unless you're working at a startup that's betting on the current legal grey area staying the norm long-term, at least. This is exactly why it's important to make sure hobbyists and noncommercial research aren't thrown under the bus just to keep companies accountable.
name one single case where "workers" have benefited from 'appealing to copyright' lol
460 notes
Text
Falling into the AI vortex.
Before I deeply criticize something, I try to understand it more than surface level.
With guns, I went into deep research mode and learned as much as I could about the actual guns so I could be more effective in my gun control advocacy.
I learned things like... silencers are not silent. They are mainly for hearing protection and not assassinations. It's actually small caliber subsonic ammo that is a concern for covert shooting. A suppressor can aid with that goal, but its benefits as hearing protection outweigh that very rare circumstance.
AR15s... not that powerful. They use a tiny bullet. Originally it could not even be used against thick animal hides; it was classified as a "varmint hunting" gun. There are other factors that make it more dangerous, like lightweight ammo, magazine capacity, medium-range accuracy, and being able to penetrate things because the tiny bullets go faster. But in most mass shooting situations, where the shooting distance is less than 20 feet, they really aren't more effective than a handgun. They are just popular for that purpose. Dare I say... a mass shooting fad or cliche. But there are several handguns that could be more powerful and deadly—capable of one-bullet kills if shot anywhere near the chest. And easier to conceal and operate in close quarters like a school hallway.
This deeper understanding tells me that banning one type of gun may not be the solution people are hoping for. And that if you don't approach gun control holistically (all guns vs one gun), you may only get marginal benefits from great effort and resources.
Now I'm starting the same process with AI tools.
Everyone is stuck in "AI is bad" mode. And I understand why. But I worry there is nuance we are missing with this reactionary approach. Plus, "AI is bad" isn't a solution to the problem. It may be bad, but it is here and we need to figure out realistic approaches to mitigate the damage.
So I have been using AI tools. I am trying to understand how they work, what they are good for, and what problems we should be most worried about.
I've been at this for nearly a month and this may not be what everyone wants to hear, but I have had some surprising interactions with AI. Good interactions. Helpful interactions. I was even able to use it to help me keep from an anxiety thought spiral. It was genuinely therapeutic. And I am still processing that experience and am not sure what to say about it yet.
If I am able to write an essay on my findings and thoughts, I hope people will understand why I went into the belly of the beast. I hope they won't see me as an AI traitor.
A big part of my motivation to do this was because of a friend of mine. He was hit by a drunk driver many years ago. He is a quadriplegic. He has limited use of his arms and hands and his head movement is constrained.
When people say, "just pick up a pencil and learn to draw" I always cringe at his expense. He was an artist. He already learned how to pick up a pencil and draw. That was taken away from him. (And please don't say he can stick a pencil in his mouth. Some quads have that ability—he does not. It is not a thing all of them can do.) But now he has a tool that allows him to be creative again. And it has noticeably changed his life. It is a kind of art therapy that has had massive positive effects on his depression.
We have had a couple of tense arguments about the ethics of AI. He is all-in because of his circumstances. And it is difficult to express my opinions when faced with that. But he asked and I answered. He tried to defend it and did a poor job. Which, considering how smart he is, was hard to watch.
But I love my friend and I feel I'd like to at least know what I'm talking about. I want to try and experience the benefits he is seeing. And I'd like to see if there is a way for this technology to exist where it doesn't hurt more than it helps.
I don't know when I will be done with my experiment. My health is improving but I am still struggling and I will need to cut my dose again soon. But for now I am just collecting information and learning.
I guess I just wanted to prepare people for what I'm doing.
And ask they keep an open mind with my findings. Not all of them will be "AI is bad."
184 notes
Text
“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed
I'm touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:
https://news.ycombinator.com/item?id=39883571
A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
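To put rough numbers on that, here is a back-of-envelope calculation using the per-query figures above; the ten-million-queries-a-day volume is an assumed round number for illustration, not a reported figure.

```python
# Daily cost of giving away queries at 0.36-1 cents each.
cost_low, cost_high = 0.0036, 0.01   # dollars per query (figures above)
queries_per_day = 10_000_000         # assumed round number

print(f"${queries_per_day * cost_low:,.0f} to "
      f"${queries_per_day * cost_high:,.0f} per day, given away free")
# -> $36,000 to $100,000 per day, given away free
```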
Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).
Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.
There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.
For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.
Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454
There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.
Air Canada shows that for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker's job, and still save the company money on balance.
I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029
But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:
https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby
Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.
The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.
Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.
Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.
That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.
Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.
An AI-assisted radiologist processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).
Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:
https://twitter.com/qntm/status/1773779967521780169
But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.
This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaritized, low-paid human driver:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.
Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:
https://dl.acm.org/doi/10.1145/3442188.3445922
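A toy model makes "next-word-guessing" concrete. The bigram counter below is a deliberately crude stand-in for an LLM (real models condition on vastly longer contexts with billions of parameters), but the core operation is the same: emit the statistically most likely continuation of what came before.

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Return the most statistically likely next word."""
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # 'cat' - the most frequent continuation
print(guess_next("sat"))  # 'on'
```

Nothing in that program knows what a cat is; it only knows which continuations are probable. Scale the same idea up and you get fluent output whose errors are, by construction, the most plausible-looking errors available.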
This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:
https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead
But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
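One blunt mitigation is to make unreviewed names fail loudly instead of resolving to whatever a squatter registered. Below is a minimal sketch under the assumption of a hand-maintained allowlist; the ALLOWED set and check() helper are invented for illustration, and real projects would also pin versions and hashes rather than rely on this alone.

```python
# Refuse any dependency a human hasn't explicitly reviewed, so a
# hallucinated package name breaks the build instead of silently
# installing whatever was registered under that name.
ALLOWED = {"requests", "numpy", "huggingface_hub"}  # human-reviewed

def check(requirements: list[str]) -> None:
    for line in requirements:
        name = line.split("==")[0].strip().lower()
        if name and not name.startswith("#") and name not in ALLOWED:
            raise SystemExit(f"unreviewed dependency: {name!r}")

# "huggingface-cli" is the hallucinated name from the incident above,
# so this call aborts with an error instead of installing it.
check(["requests==2.31.0", "huggingface-cli==0.1"])
```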
For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.
These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":
https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
It was a hoax. When independent material scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":
https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/
As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:
https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors
The fact that we have a four-hundred-year-old name for this phenomenon, and yet we're still falling prey to it, is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#automation#humans in the loop#centaurs#reverse centaurs#labor#ai safety#sanity checks#spot the mistake#code review#driving instructor
857 notes
Note
I think you guys are thinking too much about it. AI or no AI a fic is a fic. It doesn't matter. You think you writing about real people is ethical? Writing them fucking and with controversial pairings? AI is all over the place like get used to it. If someone is using AI to fix their errors, or to just improve some writing why tf do you care? Y'all are just entitled. Not everyone's great at English. Just stfu and LET people write what they want. God.
hi, this is such an ignorant ask i'm incredibly surprised you felt confident enough to hit send! but i'll engage with you in good faith regardless.
yes, there are debates about the ethics of writing RPF, but i think comparing them to the ethical debates about the use of AI is frankly quite laughable. not only does AI have an incredibly detrimental impact on the environment, the impacts are likely to be unequal and hit already resource-strained environments the hardest. (i am providing sources for you here, something i'm assuming you're unfamiliar with since you are so in favour of relying on AI to generate 'original' thought). moreover, many AI models rely on data scraping in order to train these models. it is very often the case that creators of works on the internet - for example, ao3 - do not give consent for their works to be used to train these models. it raises ethical questions about ownership of content, and of intellectual property beyond fanfiction. comparing these ethical dilemmas to the ethics of rpf is not an argument that convinces me, nor i'm sure does it convince many others.
"AI is all over the place like get used to it" - frankly, i'm not surprised you're so supportive of AI, if this is the best argument in its favour you can muster. you know what else is all over the place?? modern slavery! modern slavery's extremely commonplace across the world, anti-slavery international estimate that about 50 million people globally are living in modern slavery. following the line of your argument, since modern slavery is so commonplace, this must make it okay, and we should get used to it. the idea that just because something is everywhere makes it acceptable is a logical fallacy. do you see how an overreliance on AI reduces your ability to critically think, and to form arguments for yourself?
please explain to me how i'm entitled for thinking that relying on AI to produce something of generally, extremely poor quality, is poor behaviour on your part, or the part of other people who do it. you don't have to write fanfiction in english, and if you do struggle with english, there are MANY talented betas in this fandom who i'm sure would be willing to lend a hand and fix SPAG. you are NOT going to improve your english by getting AI to fix it for you.
as @wisteriagoesvroom helpfully pointed out, "art is an act of emotion and celebration and joy and defiance. it is an unshakeable, unstoppable feeling, an idea that must and should be expressed" - this is not something you can achieve via the use of AI. you might think it's not that deep, but for many people who dedicate hours of their time to writing fanfiction, it feels very much like a slap in the face. and what's more, it produces negligible benefits for the person who is engaging in creating AI fanfiction.
i agree with you that people should write whatever they want, but the operative word in that statement is write. i do not, and will not ever consider inputting prompts into chatgpt a sincere form of artistic creation. thanks!
216 notes
Text
saw a post on twitter asking, effectively, why everything (even things that have no possible benefit) needs to have "ai" integrations and initiatives now. It's a good question, and one relevant to our present situation as a society.
fundamentally this hype is being driven by Capital, by investors. they are the ones going around asking every business "what are you going to do with/about ai?". while the individual capitalists asking these questions may believe they are motivated by a genuine concern for the Future, the material reasons behind this are our old friend, the impossibility of infinite growth.
big tech got big by building valuable, if antisocial, products: massive monopolized networks of information, surveillance, advertising, and even logistics. but capitalism demands ever-more growth and ever-more profit (which fights against the tendency of the rate of profit to fall), and that means they have to be constantly looking for the next big thing.
and, at the moment, the only real contender for that next big thing is "ai". The other ones have had their day and show no signs of replicating the explosive growth of the current tech monopolies.
and so, everyone has to try and find some way to make "ai" part of their operation, even when it makes no sense, and even in the face of the mounting evidence that these machine learning models don't live up to the hype, and never will, no matter how much we destroy our ecology to power them up.
397 notes