# Benefits and Applications of Generative AI
Text
0 notes
Text
AI and Automation in June 2025: Top Enterprise Shifts
June 2025 marked a turning point in enterprise technology, where artificial intelligence and automation moved from experimental to essential. In this blog, Infosprint Technologies breaks down six significant AI and automation developments every business leader needs to know — from OpenAI’s release of the reasoning-driven o3-Pro model to Anthropic’s launch of Claude Gov, a government-grade secure AI platform.
We also explore how giants like Salesforce, AWS, SAP, UiPath, and IBM are embedding generative AI into real-world business systems. Think: bots that navigate complex UIs, AI that drafts emails from your data, and cloud-based RPA tools that your HR or finance team can build without code.
Key Highlights:
OpenAI’s o3-Pro is up to 80% cheaper, making deep AI accessible to SMBs.
Anthropic’s Claude Gov shows the future of regulated, secure AI for defense and government.
Salesforce and SAP are enabling AI agents to interact with CRMs and ERP systems in real time.
UiPath and IBM are moving RPA from IT-only tools to enterprise-wide automation frameworks.
The rise of agentic AI means bots now decide, adapt, and act — not just follow rules.
Whether you’re a CIO, CTO, or business strategist, this blog offers a roadmap for navigating AI transformation in Q3 and beyond.
#ai and automation in june 2025#enterprise AI integration#generative AI partnerships#agentic AI applications#cloud-based automation tools#AI workflow automation#secure AI deployment#what is agentic AI#how Claude Gov works#benefits of o3-pro#RPA for compliance#AI in ERP systems
0 notes
Text
Tech Tip: Embracing AI for Everyday Efficiency
Introduction Discover how Artificial Intelligence is revolutionizing daily tasks and enhancing productivity across various fields. In the rapidly evolving digital landscape, Artificial Intelligence (AI) has become a cornerstone of innovation and efficiency. From generating code snippets to optimizing website content, AI tools are transforming the way we work and live. This tech tip explores how…
#AI#AI Applications#AI Code Generation#AI for Content Creation#AI in Business#AI in Daily Life#AI Innovation#AI Tools for SEO#AI Trends#Artificial Intelligence Benefits#ChatGPT#Everyday Efficiency#Google's Gemini AI#Tech Tips#Uvalde Computer Repair
0 notes
Text
What Is the Best Way to Use AI in Content Creation?
Artificial Intelligence (AI) has transformed various industries, and content creation is no exception. By understanding what is the best way to use AI in content creation, creators can leverage this technology to enhance productivity, quality, and creativity. From automated writing tools to data analysis, AI offers diverse applications that can streamline the content production process, ensuring…
#AI#AI adoption#AI applications#AI benefits#AI creativity#AI impact#AI in business#AI in SEO#AI insights#AI integration#AI market#AI marketing#AI platforms#AI research#AI statistics#AI stats#AI technology#AI tools#AI trends#AI use#AI writing#AI-driven#AI-generated#automation#automation tools#blog writing#Business#coding#content creation#content optimization
1 note
Text
How Do Nuances Shape the Impact of Generative AI?
Learn how small nuances in data and algorithms significantly alter the outcomes and influence of generative AI technology.
Generative AI is revolutionizing industries by boosting creativity, productivity, and innovation. However, the contextual details inherent in this technology make a significant difference to its efficiency, trustworthiness, and societal repercussions. In this article, we discuss several critical aspects of generative AI's nuances: why they matter, what their consequences are, and the role they play in the technology's evolution.
Understanding the Nuances of Generative AI Technology
Generative AI is an advanced form of artificial intelligence that uses machine learning techniques, specifically deep learning, to produce new content. That content may be textual, graphical, musical, or even a complex data mapping. While the core components are well established, the behavior of these systems depends on many small parameters and factors that are easily overlooked but can greatly affect the final result.
Much of the subtlety in generative AI lies in the data used to train these models. The kind, variety, and volume of training data strongly shape the generated output: a model trained on a small dataset may produce biased, non-diverse content and, ultimately, wrong results. The choice of algorithms and the fine-tuning process add further fine details that determine how well a model generalizes to new data or different contexts.
Anyone working with generative AI systems needs to weigh all these factors to achieve the best possible results. Whether the application is content generation, design, or scientific research, a deeper understanding of the technology's nuances improves outcomes and makes users more aware of its strengths and weaknesses.
The Importance of Nuance in Generative AI Applications
Nuance is critical to the performance of generative AI applications across industries and regions. In marketing, for instance, an AI-created advertisement must fit the cultural and social expectations of its intended audience to be effective. A lack of subtlety is risky: the message can land flat or, worse, cause offense.
Consider automated content generation tools for product descriptions or customer outreach. These tools must grasp not only the literal meaning of words but also their connotations, which can differ substantially across languages and cultures. A generative AI system trained mostly on data from one region may perform poorly in another because of these cultural disparities.
Nuance also shapes how AI handles user interaction. In customer service, generative AI chatbots must ask and answer with a sensitivity that cannot be achieved from the literal meaning of words alone: they must read the emotional content and context of a conversation, both of which are closely tied to variation in language.
As generative AI becomes part of day-to-day organizational processes, deployed systems must be culturally and contextually competent. This is especially important for companies operating in global markets, where applying a single strategy across different markets can cause misunderstandings.
Nuances in Language and Cultural Differences
Language is complex and carries nuances that generative AI must understand to function well, from regional colloquialisms to idioms that cannot be translated literally. The English idiom "break a leg," for instance, is a way of wishing a performer good luck, but interpreted literally it sounds like a threat.
Cultural factors add further layers: what one culture finds funny, another may consider taboo. Generative AI must capture these differences to produce relevant, effective content across cultures. This is particularly difficult because culture is not fixed; it evolves, and the AI interacting with it must evolve too.
The challenges in this area are therefore complex. AI must not only identify these cultural and linguistic variations but also adjust its outputs accordingly. That requires sophisticated algorithms that can detect differences in language use and culture and learn from them in real time.
In international business, however, an AI's ability to manage such subtleties is a real advantage: companies that deploy culturally aware AI systems fare better in global markets. That is why AI technologies must be developed to be not just smart, but also respectful of linguistic and cultural differences.
How Nuances Affect the Capabilities and Limitations of Generative AI
Content Quality:
Nuances directly impact the quality of AI-generated content. Decoding high-level contextual cues, such as cultural or generational references, lets the AI generate better-fitting content. Without this, the output may be technically correct, but the human touch that makes content engaging is missing.
User Interaction:
Nuances in language and tone significantly affect how users perceive and interact with AI. An AI system that misses these nuances can come across as cold and inattentive to the customer's feelings, which alienates users.
Ethical Considerations:
Nuances also play a crucial role in the ethical deployment of AI. An AI must grasp the consequences its outputs may have in particular cultures and tread carefully around sensitive issues. Failures here can mean ethical violations and a tarnished reputation.
Adaptability:
How well a generative AI handles nuance determines how it performs on new situations or data. Systems trained on varied inputs, such as multicultural or multilingual data, are more adaptable and reliable across a wider range of uses.
AI Boosting Productivity:
Nuanced understanding enables AI to enhance productivity by generating content that is not only correct but also contextually appropriate and effective. This capability is important in industries such as marketing, customer service, and content creation, where communication matters most.
Navigating the Nuances to Maximize the Benefits of Generative AI
Training with Diverse Data:
To address nuance, generative AI systems should be trained on diverse data that spans cultural, linguistic, and contextual variation. This enables the AI to understand nuanced differences between regions and sectors.
Continuous Learning:
AI systems need to keep learning as new data appears, particularly as cultural norms and language evolve. Feedback loops that adjust AI outputs based on user responses can greatly improve performance.
Ethical Frameworks:
Ethical frameworks are key: they help AI navigate socially sensitive matters and accommodate cultural differences. These frameworks should be embedded in the AI's decision-making so that outputs are not only correct but also culturally sensitive.
Customizable Outputs:
Letting users customize AI-generated content mitigates nuance problems directly. Giving users control over the tone, style, and cultural context of the output makes the result better suited to their specific needs.
Collaboration with Human Experts:
Combining AI's computational power with human expertise helps navigate nuance more effectively. Human review also ensures the appropriateness and ethical soundness of AI-produced information, especially in critical settings.
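As a minimal sketch of the "continuous learning" and "customizable outputs" ideas above, a feedback loop that steers style choices per locale based on user ratings might look like this. Every name here is hypothetical, invented for illustration rather than taken from any real AI API:

```python
# Illustrative sketch: track per-locale user feedback and bias future
# style choices toward what each audience responded well to.
from collections import defaultdict

class FeedbackTuner:
    """Accumulates user ratings per (locale, style) and picks the best style."""

    def __init__(self, styles):
        self.styles = styles
        self.scores = defaultdict(float)  # summed ratings per (locale, style)
        self.counts = defaultdict(int)    # number of ratings per (locale, style)

    def record(self, locale, style, rating):
        # rating: user feedback in [-1.0, 1.0]
        self.scores[(locale, style)] += rating
        self.counts[(locale, style)] += 1

    def pick_style(self, locale):
        # Choose the style with the best average feedback for this locale;
        # styles with no feedback yet score 0.0.
        def avg(style):
            n = self.counts[(locale, style)]
            return self.scores[(locale, style)] / n if n else 0.0
        return max(self.styles, key=avg)

tuner = FeedbackTuner(["formal", "casual", "playful"])
tuner.record("ja-JP", "formal", 0.9)    # Japanese users rated formal tone highly
tuner.record("ja-JP", "playful", -0.5)  # ...and disliked the playful tone
tuner.record("en-US", "casual", 0.8)

print(tuner.pick_style("ja-JP"))  # "formal"
print(tuner.pick_style("en-US"))  # "casual"
```

The point of the sketch is the loop itself: generated output goes out, user responses come back, and the system's future choices shift accordingly.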
Conclusion: Embracing the Nuances of Generative AI for Positive Impact
As generative AI progresses, its potential will depend on how well AI systems understand language, culture, and context. That understanding is now vital for businesses and developers who want to realize everything AI has to offer. Embracing these nuances, and refining strategies around them, opens the door to new AI capabilities, better communication, and innovation across fields.
The future of generative AI is promising, but realizing it requires grappling with the highly contextual, flexible ways the technology relates to the real world. By following the best practices above, diverse and high-quality data, continual learning, and embedded ethical principles, generative AI can be leveraged to its full potential in a way that benefits society and improves the effectiveness of every application.
Original source: https://bit.ly/3MrYDIf
#AI Boosting Productivity#Benefits Of Generative AI#Future Of Generative AI#Generative AI#Generative AI Applications#Generative AI Capabilities#Limitations Of Generative AI#Role Of Generative AI
0 notes
Text
AI in Banking: Insights from AWS
youtube
Discover how artificial intelligence is transforming the banking industry through insights from AWS. Learn about cutting-edge AI applications in fraud detection, customer service, risk management, and personalized banking experiences. Explore case studies and best practices on leveraging AWS's AI and machine learning services to drive innovation, enhance security, and improve operational efficiency in the financial sector.
#money 20/20 Europe#artificial intelligence#financial services#ai chatbots for improved customer experience in small banks#can ai help prevent money laundering in international banking#ai and the future of wealth management services#is generative ai secure for banking applications#how generative ai is used in bank customer service best practices for ai chatbots in banking#benefits of ai for risk management in banks#ai in banking#insights from aws#banking insights#Youtube
0 notes
Text
Exploring the Ethical Implications of Generative AI
In recent years, the advent of Generative AI, or GeneAIwiz, has revolutionized various industries, including test automation, mobile development, and software development lifecycle (SDLC). This cutting-edge technology harnesses the power of artificial intelligence to generate content, designs, and even code, thus streamlining processes and boosting efficiency. However, along with its myriad benefits, Generative AI also raises profound ethical questions that warrant careful consideration. In this blog post, we delve into the ethical implications of Generative AI, its applications in test automation and mobile development, and the approach taken by V2Soft in offering such services.
Understanding Generative AI
Generative AI involves algorithms trained on vast datasets to generate new content or solutions autonomously. This technology employs deep learning models, such as Generative Adversarial Networks (GANs) and Transformers, to mimic human creativity and problem-solving abilities. By analyzing patterns in data, Generative AI can produce text, images, music, and even code snippets with remarkable accuracy.
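The autoregressive idea behind Transformer-style text generation, predicting each new token from the ones before it, can be sketched with a toy bigram model. The vocabulary and transition table below are invented for the example; a real model learns these probabilities with deep networks over vast datasets:

```python
# Toy illustration of autoregressive generation: repeatedly sample the
# next token given the current one, until an end-of-sequence marker.
import random

# Hand-written bigram "model" (a real LLM learns this from data).
bigrams = {
    "<s>": ["generative"],
    "generative": ["ai"],
    "ai": ["creates"],
    "creates": ["content"],
    "content": ["</s>"],
}

def generate(seed=0):
    random.seed(seed)
    token, out = "<s>", []
    while token != "</s>":
        token = random.choice(bigrams.get(token, ["</s>"]))
        if token != "</s>":
            out.append(token)
    return " ".join(out)

print(generate())  # "generative ai creates content"
```

GANs work differently (a generator network trained against a discriminator), but the common thread is the same: patterns learned from data drive the production of new content.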
1. Generative AI in Test Automation
In the realm of test automation, Generative AI offers a revolutionary approach to streamline the testing process. Traditional testing methodologies often rely on predefined test cases, which may overlook unforeseen scenarios. Generative AI, on the other hand, can dynamically generate test cases based on real-world usage patterns and edge cases.
Tradeoffs:
Accuracy vs. Diversity: While Generative AI can generate a diverse range of test cases, ensuring their accuracy remains a challenge.
Resource Intensiveness: Training Generative AI models requires significant computational resources and extensive datasets.
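As a rough sketch of the idea, dynamically generated test inputs can probe edge cases that a fixed, predefined list would miss. Here plain random sampling stands in for an AI generator, and `normalize` is a made-up system under test, not any real API:

```python
# Sketch: generate test inputs dynamically (edge cases + random strings)
# instead of relying only on predefined test cases.
import random

def sample_inputs(n, seed=42):
    rng = random.Random(seed)
    # Hand-picked edge cases: empty, whitespace, very long, accented, markup.
    edge_cases = ["", " ", "a" * 10_000, "\u00e9\u00e8\u00ea", "<script>"]
    generated = [
        "".join(rng.choice("abc ") for _ in range(rng.randint(1, 20)))
        for _ in range(n)
    ]
    return edge_cases + generated

def normalize(text):
    # Stand-in system under test: collapse runs of whitespace.
    return " ".join(text.split())

# Property check: no double spaces survive normalization, for any input.
for case in sample_inputs(50):
    assert "  " not in normalize(case)
```

An AI-driven generator would replace the random sampler with inputs derived from real-world usage patterns, but the accuracy tradeoff noted above applies: generated cases still need validation.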
2. Generative AI in Mobile Development
Generative AI tools for app development have gained traction among developers seeking to expedite the design and prototyping phases. These tools can generate UI mockups, code snippets, and even entire app architectures based on minimal input from developers.
Challenges:
Customization vs. Automation: Balancing the need for customized solutions with the desire for automation poses a significant challenge.
Quality Assurance: Ensuring the quality and security of apps generated using Generative AI tools is paramount.
3. Generative AI in SDLC
In the software development lifecycle, Generative AI holds the promise of accelerating the development process and reducing time-to-market. By automating repetitive tasks such as code generation, documentation, and bug fixing, developers can focus on higher-level tasks, fostering innovation.
Approach by V2Soft:
V2Soft adopts a comprehensive approach to harnessing Generative AI in software development. By leveraging advanced machine learning techniques and domain-specific knowledge, V2Soft's GeneAIwiz platform offers tailored solutions for test automation, mobile development, and SDLC optimization. With a focus on quality, security, and ethical considerations, V2Soft ensures that its Generative AI solutions align with industry best practices and regulatory standards.
Ethical Considerations
Despite its transformative potential, Generative AI raises ethical concerns regarding data privacy, algorithmic bias, and the displacement of human labor. As AI systems become increasingly autonomous, ensuring transparency, accountability, and fairness in their deployment becomes imperative.
Conclusion
Generative AI holds immense promise in revolutionizing test automation, mobile development, and SDLC optimization. However, as with any disruptive technology, its ethical implications must be carefully examined and addressed. By adopting a balanced approach that prioritizes transparency, accountability, and human oversight, organizations can harness the full potential of Generative AI while mitigating its ethical risks.
0 notes
Text
Dive into the world of Generative AI! Discover its workings, applications, and the myriad benefits it brings. Unleash creativity and innovation through this cutting-edge technology. Explore the potential of Generative AI for transformative applications.
#AI Innovation#Tech Exploration#Generative AI Applications#Generative AI Benefits#generative ai#artificial intelligence
0 notes
Text
AI can’t do your job

I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me in SAN DIEGO at MYSTERIOUS GALAXY on Mar 24, and in CHICAGO with PETER SAGAL on Apr 2. More tour dates here.
AI can't do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can't do your job:
https://www.pcmag.com/news/amid-job-cuts-doge-accelerates-rollout-of-ai-tool-to-automate-government
If you pay attention to the hype, you'd think that all the action on "AI" (an incoherent grab-bag of only marginally related technologies) was in generating text and images. Man, is that ever wrong. The AI hype machine could put every commercial illustrator alive on the breadline and the savings wouldn't pay the kombucha budget for the million-dollar-a-year techies who oversaw Dall-E's training run. The commercial market for automated email summaries is likewise infinitesimal.
The fact that CEOs overestimate the size of this market is easy to understand, since "CEO" is the most laptop job of all laptop jobs. Having a chatbot summarize the boss's email is the 2025 equivalent of the 2000s gag about the boss whose secretary printed out the boss's email and put it in his in-tray so he could go over it with a red pen and then dictate his reply.
The smart AI money is long on "decision support," whereby a statistical inference engine suggests to a human being what decision they should make. There's bots that are supposed to diagnose tumors, bots that are supposed to make neutral bail and parole decisions, bots that are supposed to evaluate student essays, resumes and loan applications.
The narrative around these bots is that they are there to help humans. In this story, the hospital buys a radiology bot that offers a second opinion to the human radiologist. If they disagree, the human radiologist takes another look. In this tale, AI is a way for hospitals to make fewer mistakes by spending more money. An AI assisted radiologist is less productive (because they re-run some x-rays to resolve disagreements with the bot) but more accurate.
In automation theory jargon, this radiologist is a "centaur" – a human head grafted onto the tireless, ever-vigilant body of a robot.
Of course, no one who invests in an AI company expects this to happen. Instead, they want reverse-centaurs: a human who acts as an assistant to a robot. The real pitch to hospitals is, "Fire all but one of your radiologists and then put that poor bastard to work reviewing the judgments our robot makes at machine scale."
No one seriously thinks that the reverse-centaur radiologist will be able to maintain perfect vigilance over long shifts of supervising automated processes that rarely go wrong, but when they do, the error must be caught:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
The role of this "human in the loop" isn't to prevent errors. That human is there to be blamed for errors:
https://pluralistic.net/2024/10/30/a-neck-in-a-noose/#is-also-a-human-in-the-loop
The human is there to be a "moral crumple zone":
https://estsjournal.org/index.php/ests/article/view/260
The human is there to be an "accountability sink":
https://profilebooks.com/work/the-unaccountability-machine/
But they're not there to be radiologists.
This is bad enough when we're talking about radiology, but it's even worse in government contexts, where the bots are deciding who gets Medicare, who gets food stamps, who gets VA benefits, who gets a visa, who gets indicted, who gets bail, and who gets parole.
That's because statistical inference is intrinsically conservative: an AI predicts the future by looking at its data about the past, and when that prediction is also an automated decision, fed to a Chaplinesque reverse-centaur trying to keep pace with a torrent of machine judgments, the prediction becomes a directive, and thus a self-fulfilling prophecy:
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
AIs want the future to be like the past, and AIs make the future like the past. If the training data is full of human bias, then the predictions will also be full of human bias, and then the outcomes will be full of human bias, and when those outcomes are coprophagically fed back into the training data, you get new, highly concentrated human/machine bias:
https://pluralistic.net/2024/03/14/inhuman-centipede/#enshittibottification
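The feedback loop described above can be sketched as a toy simulation. The numbers and the amplification factor are invented for illustration, not measurements; the point is the shape of the curve, not its values:

```python
# Toy simulation of bias concentration: a model slightly overshoots the
# majority rate in its training data, and its outputs are folded back
# into the next generation's training set.

def train_and_refeed(initial_bias=0.6, amplification=1.1, generations=5):
    bias = initial_bias
    history = [bias]
    for _ in range(generations):
        # The model exaggerates the data's majority rate...
        model_rate = min(1.0, bias * amplification)
        # ...and its outputs become part of the next training corpus.
        bias = (bias + model_rate) / 2
        history.append(round(bias, 3))
    return history

print(train_and_refeed())
# The majority rate drifts upward every generation instead of holding at 0.6.
```

Even with a tiny per-generation exaggeration, the drift compounds: that is the "highly concentrated human/machine bias" in miniature.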
By firing skilled human workers and replacing them with spicy autocomplete, Musk is assuming his final form as both the kind of boss who can be conned into replacing you with a defective chatbot and as the fast-talking sales rep who cons your boss. Musk is transforming key government functions into high-speed error-generating machines whose human minders are only on the payroll to take the fall for the coming tsunami of robot fuckups.
This is the equivalent of filling the American government's walls with asbestos, turning agencies into hazmat zones that we can't touch without causing thousands to sicken and die:
https://pluralistic.net/2021/08/19/failure-cascades/#dirty-data
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete
Image: Krd (modified) https://commons.wikimedia.org/wiki/File:DASA_01.jpg
CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en
--
Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#reverse centaurs#automation#decision support systems#automation blindness#humans in the loop#doge#ai#elon musk#asbestos in the walls#gsai#moral crumple zones#accountability sinks
277 notes
Note
i agree with you on the premise that copyright is bad and disney winning that lawsuit but im curious
if they did, would they actually take the opportunity to nuke fanworks? like it seems most media companies benefit a lot from having fandoms engage with their works, it seems like nuking fandoms is counterintuitive to having an audience for their ips
just curious on your opinion here, youve been very eloquent in voicing anti copyright arguments, enough that ive come over to your side from firmly "fuck ai, long live copyright"
I don't necessarily think Disney *would* do that, but them winning this lawsuit would give more ammunition to anyone who did want to do so.
Afaik, there hasn't been a US copyright court case that establishes precedent in favor of fanworks being considered fair use, but there also hasn't been a case that establishes precedent in favor of them *not* being considered fair use (generally people who have analyzed it lean more towards "no, most fanworks probably aren't transformative enough for the fair use defense to apply" but without a court case it's generally still a grey area)
This lawsuit, however, features arguments that can be used as precedent on the side of "definitely not", such as the fact that, when asked to, Midjourney can generate pictures of copyrighted characters. If a machine-produced picture of a copyrighted character is not applicable for a fair use defense, then this can be used as precedent in other cases against a human creating a picture of a copyrighted character.
While I don't think this means that Disney will immediately start going after fanartists and fanfic authors (I joke about that sometimes but in reality it would be kind of a PR disaster for them), it does mean that anyone who *does* want to do that in the future will probably have an easier time doing so.
45 notes
Text
Once the AI bubble bursts, that doesn't mean chatbots and image generators will be relegated to the trash bin of history. Rather, there will be a reassessment of where it makes sense to implement them, and if attention moves on too fast, companies may be able to do that with minimal pushback. The challenge visual artists and video game workers are already facing, with employers using generative AI to worsen labor conditions in their industries, may become entrenched, especially if artists fail in their lawsuits against AI companies for training on their work without permission.

But it could be far worse than that. Microsoft is already partnering with Palantir to feed generative AI into militaries and intelligence agencies, while governments around the world are looking at how they can implement generative AI to reduce the cost of service delivery, often without effective consideration of the potential harms that can come of relying on tools that are well known to output false information. This is a problem Resisting AI author Dan McQuillan has pointed to as a key reason why we must push back against these technologies. There are already countless examples of algorithmic systems being used to harm welfare recipients, childcare benefit applicants, immigrants, and other vulnerable groups. We risk a repetition, if not an intensification, of those harmful outcomes.

When the AI bubble bursts, investors will lose money, companies will close, and workers will lose jobs. Those developments will be splashed across the front pages of major media organizations and will receive countless hours of public discussion. But it's those lasting harms that will be harder to immediately recognize, and that could fade as the focus moves on to whatever Silicon Valley starts pushing as the foundation of its next investment cycle.
All the benefits Altman and his fellow AI boosters promised will fade, just as did the promises of the gig economy, the metaverse, the crypto industry, and countless others. But the harmful uses of the technology will stick around, unless concerted action is taken to stop those use cases from lingering long after the bubble bursts.
16 August 2024
67 notes
Text
I saw a post the other day calling criticism of generative AI a moral panic, and while I do think many proprietary AI technologies are being used in deeply unethical ways, I think there is a substantial body of reporting and research on the real-world impacts of the AI boom that would trouble the comparison to a moral panic: while there *are* older cultural fears tied to negative reactions to the perceived newness of AI, many of those warnings are Luddite with a capital L - that is, they're part of a tradition of materialist critique focused on the way the technology is being deployed in the political economy. So (1) starting with the acknowledgement that a variety of machine-learning technologies were being used by researchers before the current "AI" hype cycle, and that there's evidence for the benefit of targeted use of AI techs in settings where they can be used by trained readers - say, spotting patterns in radiology scans - and (2) setting aside the fact that current proprietary LLMs in particular are largely bullshit machines, in that they confidently generate errors, incorrect citations, and falsehoods in ways humans may be less likely to detect than conventional disinformation, and (3) setting aside as well the potential impact of frequent offloading on human cognition and of widespread AI slop on our understanding of human creativity...
What are some of the material effects of the "AI" boom?
Guzzling water and electricity
The data centers needed to support AI technologies require large quantities of water to cool the processors. A to-be-released paper from the University of California Riverside and the University of Texas Arlington finds, for example, that "ChatGPT needs to 'drink' [the equivalent of] a 500 ml bottle of water for a simple conversation of roughly 20-50 questions and answers." Many of these data centers pull water from already water-stressed areas, and the processing needs of big tech companies are expanding rapidly. Microsoft alone increased its water consumption from 4,196,461 cubic meters in 2020 to 7,843,744 cubic meters in 2023. AI applications are also 100 to 1,000 times more computationally intensive than regular search functions, and as a result the electricity needs of data centers are overwhelming local power grids, and many tech giants are abandoning or delaying their plans to become carbon neutral. Google’s greenhouse gas emissions alone have increased at least 48% since 2019. And a recent analysis from The Guardian suggests the actual AI-related increase in resource use by big tech companies may be up to 662%, or 7.62 times, higher than they've officially reported.
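For a rough sense of scale, the figures quoted above work out as follows. This is simple arithmetic on the reported numbers, nothing more:

```python
# Back-of-the-envelope arithmetic on the figures cited in this section.

# "500 ml per 20-50 questions and answers" -> water per single exchange.
low, high = 500 / 50, 500 / 20  # 10 to 25 ml per question
print(f"{low:.0f}-{high:.0f} ml of cooling water per question")

# Microsoft's reported water consumption, in cubic meters.
water_2020, water_2023 = 4_196_461, 7_843_744
growth = (water_2023 - water_2020) / water_2020
print(f"Microsoft's water use grew {growth:.0%} from 2020 to 2023")  # ~87%
```

A few milliliters per question sounds trivial until it is multiplied across billions of queries, which is exactly how the consumption figures above get so large.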
Exploiting labor to create its datasets
Like so many other forms of "automation," generative AI technologies actually require loads of human labor to do things like tag millions of images to train computer vision for ImageNet and to filter the texts used to train LLMs to make them less racist, sexist, and homophobic. This work is deeply casualized, underpaid, and often psychologically harmful. It profits from and re-entrenches a stratified global labor market: many of the data workers used to maintain training sets are from the Global South, and one of the platforms used to buy their work is literally called the Mechanical Turk, owned by Amazon.
From an open letter written by content moderators and AI workers in Kenya to Biden: "US Big Tech companies are systemically abusing and exploiting African workers. In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern day slavery."
Deskilling labor and demoralizing workers
The companies, hospitals, production studios, and academic institutions that have signed contracts with providers of proprietary AI have used those technologies to erode labor protections and worsen working conditions for their employees. Even when AI is not used directly to replace human workers, it is deployed as a tool for disciplining labor by deskilling the work humans perform: in other words, employers use AI tech to reduce the value of human labor (labor like grading student papers, providing customer service, consulting with patients, etc.) in order to enable the automation of previously skilled tasks. Deskilling makes it easier for companies and institutions to casualize and gigify what were previously more secure positions. It reduces pay and bargaining power for workers, forcing them into new gigs as adjuncts for its own technologies.
I can't say anything better than Tressie McMillan Cottom, so let me quote her recent piece at length: "A.I. may be a mid technology with limited use cases to justify its financial and environmental costs. But it is a stellar tool for demoralizing workers who can, in the blink of a digital eye, be categorized as waste. Whatever A.I. has the potential to become, in this political environment it is most powerful when it is aimed at demoralizing workers. This sort of mid tech would, in a perfect world, go the way of classroom TVs and MOOCs. It would find its niche, mildly reshape the way white-collar workers work and Americans would mostly forget about its promise to transform our lives. But we now live in a world where political might makes right. DOGE’s monthslong infomercial for A.I. reveals the difference that power can make to a mid technology. It does not have to be transformative to change how we live and work. In the wrong hands, mid tech is an antilabor hammer."
Enclosing knowledge production and destroying open access
OpenAI started as a non-profit, but it has now become one of the most aggressive for-profit companies in Silicon Valley. Alongside the new proprietary AIs developed by Google, Microsoft, Amazon, Meta, X, etc., OpenAI is extracting personal data and scraping copyrighted works to amass the data it needs to train its bots - even offering one-time payouts to authors to buy the rights to frack their work for AI grist - and then (or so it tells investors) it plans to sell the products back at a profit. As many critics have pointed out, proprietary AI thus works on a model of political economy similar to the 15th-19th-century capitalist project of enclosing what was formerly "the commons," or public land, to turn it into private property for the bourgeois class, who then owned the means of agricultural and industrial production. "Open"AI is built on and requires access to collective knowledge and public archives to run, but its promise to investors (the one they use to attract capital) is that it will enclose the profits generated from that knowledge for private gain.
AI companies hungry for good data to train their Large Language Models (LLMs) have also unleashed a new wave of bots that are stretching the digital infrastructure of open-access sites like Wikipedia, Project Gutenberg, and Internet Archive past capacity. As Eric Hellman writes in a recent blog post, these bots "use as many connections as you have room for. If you add capacity, they just ramp up their requests." In the process of scraping the intellectual commons, they're also trampling and trashing its benefits for truly public use.
Enriching tech oligarchs and fueling military imperialism
The names of many of the people and groups who get richer by generating speculative buzz for generative AI - Elon Musk, Mark Zuckerberg, Sam Altman, Larry Ellison - are familiar to the public because those people are currently using their wealth to purchase political influence and to win access to public resources. And it's looking increasingly likely that this political interference is motivated by the probability that the AI hype is a bubble - that the tech can never be made profitable or useful - and that tech oligarchs are hoping to keep it afloat as a speculation scheme through an infusion of public money - a.k.a. an AIG-style bailout.
In the meantime, these companies have found a growing interest from military buyers for their tech, as AI becomes a new front for "national security" imperialist growth wars. From an email written by Microsoft employee Ibtihal Aboussad, who interrupted Microsoft AI CEO Mustafa Suleyman at a live event to call him a war profiteer: "When I moved to AI Platform, I was excited to contribute to cutting-edge AI technology and its applications for the good of humanity: accessibility products, translation services, and tools to 'empower every human and organization to achieve more.' I was not informed that Microsoft would sell my work to the Israeli military and government, with the purpose of spying on and murdering journalists, doctors, aid workers, and entire civilian families. If I knew my work on transcription scenarios would help spy on and transcribe phone calls to better target Palestinians, I would not have joined this organization and contributed to genocide. I did not sign up to write code that violates human rights."
So there's a brief, non-exhaustive digest of some vectors for a critique of proprietary AI's role in the political economy. tl;dr: the first questions of material analysis are "who labors?" and "who profits/to whom does the value of that labor accrue?"
For further (and longer) reading, check out Justin Joque's Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism and Karen Hao's forthcoming Empire of AI.
AI Is Inherently Counterrevolutionary
You've probably heard some arguments against AI. While there are fields where it has amazing applications (e.g. medicine), the introduction of generative language AI models has sparked a wave of fear and backlash. Much has been said about the ethics, impact on learning, and creative limits of ChatGPT and similar tools. But I go further: ChatGPT is counterrevolutionary and inherently, inescapably anti-socialist, anti-communist, and incompatible with all types of leftist thought and practice. In this essay I will...
...
Dammit, I'm just going to write the whole essay cause this shit is vital
3 Reasons Leftists Should Not Use AI
1. It is a statistics machine
Imagine you have a friend who only ever tells you what they think you want to hear. How quickly would that get frustrating? And how could you possibly rely on them to tell you the truth?
Now, imagine a machine that uses statistics to predict what someone like you probably wants to hear. That's ChatGPT. It doesn't think; it runs stats on the most likely outcome. This is why it can't really be creative. All it can do is regurgitate the most likely response to your input.
There's a big difference between that statistical prediction and answering a question. For AI, it doesn't matter what's true, only what's likely.
Why does that matter if you're a leftist? Well, a lot of praxis is actually not doing what is most likely. Enacting real change requires imagination and working toward things that haven't been done before.
Not only that, but so much of being a communist or anarchist or anti-capitalist relies on being able to get accurate information, especially on topics flooded with propaganda. ChatGPT cannot be relied on to give accurate information in these areas. This only worsens the polarized information divide.
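The "statistics machine" point can be made concrete with a toy sketch. (This is a vastly simplified bigram model, nothing like ChatGPT's actual architecture, but the failure mode it illustrates is the same: the model outputs whichever continuation was most frequent in its training data, with no notion of truth.)

```python
from collections import Counter, defaultdict

# Toy training corpus: the model only "knows" what it has seen,
# and the false claim appears more often than the true one.
corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

# Count bigrams: how often each word follows the previous one.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev):
    """Return the statistically most likely next word -- likely, not true."""
    return bigrams[prev].most_common(1)[0][0]

print(predict("of"))  # "cheese" -- the more frequent claim wins, true or not
```

Real LLMs work over vastly larger contexts and learned representations, but the objective is still "most probable continuation," which is exactly why frequency in the training data can beat accuracy.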
2. It reinforces the status quo
So if ChatGPT tells you what you're most likely to want to hear, that means it's generally pulling from what it has been trained to label as "average." We've seen how AI models can be influenced by the racism and sexism of their training data, but it goes further than that.
AI models are also given a model of what is "normal" that is biased toward the perspectives of their programmers and data sets. ChatGPT is trained to mark neoliberal capitalism as normal. That makes ChatGPT itself at odds with an anti-capitalist perspective. This kind of AI cannot help but incorporate not just racism, sexism, homophobia, etc., but its creators' bias toward capitalist imperialism.
3. It's inescapably exploitative
There's no way around it. ChatGPT was trained on and regurgitates the unpaid, uncredited labor of millions. Full stop.
This kind of AI has taken the labor of millions of people without permission or compensation to use in perpetuity.
That's not even to mention how much electricity, water, and other resources are required to run the servers for AI--it requires orders of magnitude more computing power than a typical search engine.
When you use ChatGPT, you are benefitting from the unpaid labor of others. To get a statistical prediction of what you want to hear regardless of truth. A prediction that reinforces capitalism, white supremacy, patriarchy, imperialism, and all the things we are fighting against.
Can you see how this makes using AI incompatible with leftism?
(And please, I am begging you. Do not use ChatGPT to summarize leftist theory for you. Do not use it to learn about activism. Please. There are so many other resources out there and groups of real people to organize with.)
I'm serious. Don't use AI. Not for work or school. Not for fun. Not for creativity. Not for internet clout. If you believe in the ideas I've mentioned here or anything adjacent to them, using AI is a contradiction of everything you stand for.
#ai#chatgpt#anti capitalism#anti ai#socialism#communism#leftism#leftist#praxis#activism#in this essay i will#artificial intelligence#hot take#i hate capitalism#fuck ai
Learn how small nuances in data and algorithms significantly alter the outcomes and influence of generative AI technology.
#AI Boosting Productivity#Benefits Of Generative AI#Future Of Generative AI#Generative AI#Generative AI Applications#Generative AI Capabilities#Limitations Of Generative AI#Role Of Generative AI