# LLM Customization
Explore tagged Tumblr posts
techahead-software-blog ¡ 6 months ago
Text
RAG vs Fine-Tuning: Choosing the Right Approach for Building LLM-Powered Chatbots
Imagine having an ultra-intelligent assistant ready to answer any question. Now, imagine making it even more capable, specifically for tasks you rely on most. That’s the power—and the debate—behind Retrieval-Augmented Generation (RAG) and Fine-Tuning. These methods act as “training wheels,” each enhancing your AI’s capabilities in unique ways.
RAG brings in current, real-world data whenever the model needs it, perfect for tasks requiring constant updates. Fine-Tuning, on the other hand, ingrains task-specific knowledge directly into the model, tailoring it to your exact needs. Selecting between them can dramatically influence your AI’s performance and relevance.
Whether you’re building a customer-facing chatbot, automating tailored content, or optimizing an industry-specific application, choosing the right approach can make all the difference. 
This guide will delve into the core contrasts, benefits, and ideal use cases for RAG and Fine-Tuning, helping you pinpoint the best fit for your AI ambitions.
Key Takeaways:
Retrieval-Augmented Generation (RAG) and Fine-Tuning are two powerful techniques for enhancing Large Language Models (LLMs) with distinct advantages.
RAG is ideal for applications requiring real-time information updates, leveraging external knowledge bases to deliver relevant, up-to-date responses.
Fine-Tuning excels in accuracy for specific tasks, embedding task-specific knowledge directly into the model’s parameters for reliable, consistent performance.
Hybrid approaches blend the strengths of both RAG and Fine-Tuning, achieving a balance of real-time adaptability and domain-specific accuracy.
What is RAG?
Retrieval-Augmented Generation (RAG) is an advanced technique in natural language processing (NLP) that combines retrieval-based and generative models to provide highly relevant, contextually accurate responses to user queries. Introduced by researchers at Facebook AI Research (now Meta AI), RAG enables systems to pull information from extensive databases, knowledge bases, or documents and use it as part of a generated response, enhancing accuracy and relevance.
How RAG Works
Retrieval Step
When a query is received, the system searches through a pre-indexed database or corpus to find relevant documents or passages. This retrieval process typically uses dense embeddings, which are vector representations of text that help identify the most semantically relevant information.
Generation Step
The retrieved documents are then passed to a generative model, like GPT or a similar transformer-based architecture. This model combines the query with the retrieved information to produce a coherent, relevant response. The generative model doesn’t just repeat the content but rephrases and contextualizes it for clarity and depth.
Combining Outputs
The generative model synthesizes the response, ensuring that the answer is not only relevant but also presented in a user-friendly way. The combined information often makes RAG responses more informative and accurate than those generated by standalone generative models.
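The retrieval and generation steps above can be sketched in a few lines of Python. This is a toy illustration only: the bag-of-words "embedding" and the hand-assembled prompt are stand-ins for the dense embedding model and the LLM call a real RAG system would use.

```python
import math
import re
from collections import Counter

# Toy "embedding": a bag-of-words vector. Real RAG systems use dense neural
# embeddings, but the mechanics are the same: turn text into a vector,
# then compare vectors.
def embed(text):
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Retrieval step: rank the corpus against the query and take the best match.
corpus = [
    "RAG combines retrieval with generation to ground answers in real documents.",
    "Fine-tuning adapts a pre-trained model to a narrow domain.",
]
query = "How does retrieval help generation?"
top_doc = max(corpus, key=lambda d: cosine(embed(query), embed(d)))

# Generation step: a real system would now call an LLM; here we only assemble
# the augmented prompt that would be sent to it.
prompt = f"Context: {top_doc}\nQuestion: {query}\nAnswer:"
```

In a production system the corpus would be pre-indexed in a vector database rather than scored on the fly, but the query-embed-rank-prompt flow is the same.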
Advantages of RAG
Improved Relevance
By incorporating external, up-to-date sources, RAG generates more contextually accurate responses than traditional generative models alone.
Reduced Hallucination
One of the significant issues with purely generative models is “hallucination,” where they produce incorrect or fabricated information. RAG mitigates this by grounding responses in real, retrieved content.
Scalability
RAG can integrate with extensive knowledge bases and adapt to vast amounts of information, making it ideal for enterprise and research applications.
Enhanced Context Understanding
By pulling from a wide variety of sources, RAG provides a richer, more nuanced understanding of complex queries.
Real-World Knowledge Integration
For companies needing up-to-date or specialized information (e.g., medical databases and legal documents), RAG can incorporate real-time data, ensuring the response is as accurate and current as possible.
Disadvantages of RAG
Computational Intensity
RAG requires both retrieval and generation steps, demanding higher processing power and memory, making it more expensive than traditional NLP models.
Reliance on Database Quality
The accuracy of RAG responses is highly dependent on the quality and relevance of the indexed knowledge base. If the corpus lacks depth or relevance, the output can suffer.
Latency Issues
The retrieval and generation process can introduce latency, potentially slowing response times, especially if the retrieval corpus is vast.
Complexity in Implementation
Setting up RAG requires both an effective retrieval system and a sophisticated generative model, increasing the technical complexity and maintenance needs.
Bias in Retrieved Data
Since RAG relies on existing data, it can inadvertently amplify biases or errors present in the retrieved sources, affecting the quality of the generated response.
What is Fine-Tuning?
Fine-tuning is a process in machine learning where a pre-trained model (one that has been initially trained on a large dataset) is further trained on a more specific, smaller dataset. This step customizes the model to perform better on a particular task or within a specialized domain. Fine-tuning adjusts the weights of the model so that it can adapt to nuances in the new data, making it highly relevant for specific applications, such as medical diagnostics, legal document analysis, or customer support.
How Fine-Tuning Works
Pre-Trained Model Selection
A model pre-trained on a large, general dataset (like GPT trained on a vast dataset of internet text) serves as the foundation. This model already understands a wide range of language patterns, structures, and general knowledge.
Dataset Preparation
A specific dataset, tailored to the desired task or domain, is prepared for fine-tuning. This dataset should ideally contain relevant and high-quality examples of what the model will encounter in production.
Training Process
During fine-tuning, the model is retrained on the new dataset with a lower learning rate to avoid overfitting. This step adjusts the pre-trained model’s weights so that it can capture the specific patterns, terminology, or context in the new data without losing its general language understanding.
Evaluation and Optimization
The fine-tuned model is tested against a validation dataset to ensure it performs well. If necessary, hyperparameters are adjusted to further optimize performance.
Deployment
Once fine-tuning yields satisfactory results, the model is ready for deployment to handle specific tasks with improved accuracy and relevancy.
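The workflow above can be illustrated with a deliberately tiny stand-in for an LLM: a one-weight linear model. "Pre-training" fits general data, then "fine-tuning" continues from those learned weights on domain data with a smaller learning rate, as described in the training-process step. This is an illustration of the mechanics only, not an actual LLM setup.

```python
# Illustrative only: a single weight w stands in for an LLM's parameters.
def train(w, data, lr, steps):
    # Minimize mean squared error of y ≈ w * x by gradient descent.
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrain_data = [(1, 2.0), (2, 4.1), (3, 5.9)]   # general pattern: y ≈ 2x
domain_data   = [(1, 2.3), (2, 4.6), (3, 6.9)]   # domain pattern: y ≈ 2.3x

# "Pre-training" from scratch on broad data.
w_pre = train(0.0, pretrain_data, lr=0.05, steps=200)

# "Fine-tuning": start from the pre-trained weight and use a lower
# learning rate and fewer steps, nudging the model toward the domain
# without throwing away what it already learned.
w_fine = train(w_pre, domain_data, lr=0.01, steps=50)
```

The same pattern scales up: real fine-tuning starts from pre-trained transformer weights and runs a short training pass at a reduced learning rate over the domain dataset.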
Advantages of Fine-Tuning
Enhanced Accuracy
Fine-tuning significantly improves the model’s performance on domain-specific tasks since it adapts to the unique vocabulary and context of the target domain.
Cost-Effectiveness
It’s more cost-effective than training a new model from scratch. Leveraging a pre-trained model saves computational resources and reduces time to deployment.
Task-Specific Customization
Fine-tuning enables customization for niche applications, like customer service responses, medical diagnostics, or legal document summaries, where specialized vocabulary and context are required.
Reduced Data Requirements
Fine-tuning typically requires a smaller dataset than training a model from scratch, as the model has already learned fundamental language patterns from the pre-training phase.
Scalability Across Domains
The same pre-trained model can be fine-tuned for multiple specialized tasks, making it highly adaptable across different applications and industries.
Disadvantages of Fine-Tuning
Risk of Overfitting
If the fine-tuning dataset is too small or lacks diversity, the model may overfit, meaning it performs well on the fine-tuning data but poorly on new inputs.
Loss of General Knowledge
Excessive fine-tuning on a narrow dataset can lead to a loss of general language understanding, making the model less effective outside the fine-tuned domain.
Data Sensitivity
Fine-tuning may amplify biases or errors present in the new dataset, especially if it’s not balanced or representative.
Computation Costs
While fine-tuning is cheaper than training from scratch, it still requires computational resources, which can be costly for complex models or large datasets.
Maintenance and Updates
Fine-tuned models may require periodic retraining or updating as new domain-specific data becomes available, adding to maintenance costs.
Key Difference Between RAG and Fine-Tuning
Key Trade-Offs to Consider
Data Dependency 
RAG’s dynamic data retrieval means it’s less dependent on static data, allowing accurate responses without retraining.
Cost and Time
Fine-tuning is computationally demanding and time-consuming, yet yields highly specialized models for specific use cases.
Dynamic vs. Static Knowledge
RAG benefits from dynamic, up-to-date retrieval, while fine-tuning relies on stored static knowledge, which may age.
When to Choose Between RAG and Fine-Tuning
RAG shines in applications needing vast and frequently updated knowledge, like tech support, research tools, or real-time summarization. It minimizes retraining requirements but demands a high-quality retrieval setup to avoid inaccuracies. Example: A chatbot using RAG for product recommendations can fetch real-time data from a constantly updated database.
Fine-tuning excels in tasks needing domain-specific knowledge, such as medical diagnostics, content generation, or document reviews. While demanding quality data and computational resources, it delivers consistent results post-training, making it well-suited for static applications. Example: A fine-tuned AI model for document summarization in finance provides precise outputs tailored to industry-specific language.
The right choice depends entirely on the use case of your LLM chatbot. Weigh the advantages and disadvantages listed above and choose the right fit for your custom LLM development.
Hybrid Approaches: Leveraging RAG and Fine-Tuning Together
Rather than favoring either RAG or fine-tuning, hybrid approaches combine the strengths of both methods. This approach fine-tunes the model for domain-specific tasks, ensuring consistent and precise performance. At the same time, it incorporates RAG’s dynamic retrieval for real-time data, providing flexibility in volatile environments.
Optimized for Precision and Real-Time Responsiveness
With hybridization, the model achieves high accuracy for specialized tasks while adapting flexibly to real-time information. This balance is crucial in environments that require both up-to-date insights and historical knowledge, such as customer service, finance, and healthcare.
Fine-Tuning for Domain Consistency: By fine-tuning, hybrid models develop strong, domain-specific understanding, offering reliable and consistent responses within specialized contexts.
RAG for Real-Time Adaptability: Integrating RAG enables the model to access external information dynamically, keeping responses aligned with the latest data.
Ideal for Data-Intensive Industries: Hybrid models are indispensable in fields like finance, healthcare, and customer service, where both past insights and current trends matter. They adapt to new information while retaining industry-specific precision.
Versatile, Cost-Effective Performance
Hybrid approaches maximize flexibility without extensive retraining, reducing costs in data management and computational resources. This approach allows organizations to leverage existing fine-tuned knowledge while scaling up with dynamic retrieval, making it a robust, future-proof solution.
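A minimal sketch of the hybrid pattern described above, with stand-in functions for both halves: `domain_model` plays the role of a fine-tuned LLM, and `retrieve_latest` plays the role of the RAG retriever that injects current facts before the model answers. Both names and their logic are illustrative assumptions, not a real API.

```python
def retrieve_latest(query, knowledge_base):
    # Stand-in retriever: return entries sharing any word with the query.
    q = set(query.lower().split())
    return [fact for fact in knowledge_base if q & set(fact.lower().split())]

def domain_model(prompt):
    # Stand-in for a fine-tuned LLM: echoes the prompt it would condition on.
    return f"[fine-tuned model answering with] {prompt}"

def hybrid_answer(query, knowledge_base):
    # RAG half supplies fresh context; fine-tuned half supplies domain skill.
    context = " ".join(retrieve_latest(query, knowledge_base))
    return domain_model(f"context: {context} question: {query}")

kb = ["rates rose this quarter", "policy unchanged since 2019"]
answer = hybrid_answer("what happened to rates", kb)
```

The design point is the separation of concerns: the knowledge base can be updated daily without touching the fine-tuned weights, and the weights can be retrained occasionally without rebuilding the retrieval index.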
Conclusion
Choosing between RAG and Fine-Tuning depends on your application’s requirements. RAG delivers flexibility and adaptability, ideal for dynamic, multi-domain needs. It provides real-time data access, making it invaluable for applications with constantly changing information.
Fine-Tuning, however, focuses on domain-specific tasks, achieving greater precision and efficiency. It’s perfect for tasks where accuracy is non-negotiable, embedding knowledge directly within the model.
Hybrid approaches blend these benefits, offering the best of both. However, these solutions demand thoughtful integration for optimal performance, balancing flexibility with precision.
At TechAhead, we excel in delivering custom AI app development around specific business objectives. Whether implementing RAG, Fine-Tuning, or a hybrid approach, our expert team ensures AI solutions drive impactful performance gains for your business.
Source URL: https://www.techaheadcorp.com/blog/rag-vs-fine-tuning-difference-for-chatbots/
betweenlands ¡ 1 month ago
Text
look i hate LLMs as much as the next person but can we please stop acting like a totally easy and feasible replacement would be installing a mod that just exists out of thin air and comes with a custom written set of quests and NPC dialogue branches.
that is not how writing works. that is not how writing dialogue for games works. i have literally written NPC dialogue for a minecraft map before and six short, unpolished, janky monologues with no branching whatsoever took me two weeks to finish. frankly i would rather have the LLM stank than put a bunch of under/unpaid writers or actors into crunch mode for a relatively minor aspect of an otherwise great server.
#like oh my god okay for all people say that LLMs are utterly morally bankrupt and uncreative and don't have any of the soul of writing#some people sure do also seem to act like writing is so easy to do and come up with and like anyone can just type words#in other words it's fucking insulting. i'm sorry but it is genuinely insulting to act like it would be ~that easy~.#writing doesn't come from nowhere! stop that!#txt#orig#salt#misadventures#unrebloggable bc im just. so fucking tired its been such a bad month and that post going around is genuinely utterly fucking gutting for me#because it really does come off as ''yeah it'd be totally easy to WRITE AN ENTIRE QUEST SYSTEM INSTEAD'' are you INSANE#like hey guess what! we do actually know what that looks like! avidmc himself has an adventure map with custom written npcs and dialogue!#AND IT HAS TAKEN HIM LIKE FIVE YEARS TO MAKE BECAUSE AS IT TURNS OUT WRITING IS NOT SOMETHING YOU CAN JUST HAVE ON DEMAND WHENEVER#sorry. sorry. i'm just so -- it's been so hard to write recently#seeing people act like it's easy and simple to come up with something that complex under a time crunch is so utterly disheartening.#like. idk man no wonder people don't interact with fanfiction as much if this is how they feel about writing as a craft.#i recognize that this is a very cynical take on the general discussion here but please also understand . it's been SO bad#this is kind of just our last straw. because fuckssake yall.#actually fuck it maintagging if everyone else can do that so can i#misadventures smp
invisiblefoxfire ¡ 8 months ago
Text
2 days ago: I tried to change my name on the Epic launcher to something I could give folks to add me as a friend so we can do multiplayer on this week's free rhythm game.
Yesterday: Got an email from Epic support saying my username had violated their terms of service and been replaced with a randomized string of letters and numbers and that I had to change it to something else. Confused, I decided to not worry about it and just pick a different username, but it told me I couldn't change my username again for 2 weeks. Confused and frustrated, I wrote to support asking What The Heck.
Today: Got another email from Epic support saying my appeal had been reviewed and denied. My username had been found to violate the terms of service and I would have the chance to change it again in two weeks.
I wrote them back asking what, exactly, had violated the terms of service about the username SecretShrumbies. Is it some slang term I'm not aware of? Do they think I'm making a drug reference, if so, does that mean they do not allow any usernames that reference any sort of fungal life forms because they could maybe, just maybe, be making an extremely obscure drug reference? I pointed out that I'd based the username off of Shrumbo, a mushroom-shaped little friend from the extremely wholesome family-friendly game Ooblets - which was an Epic exclusive when it first came out!
So far, no response. I've asked around to see if anyone has any idea what's wrong with the username and the only suggestions have been "maybe they think you meant drugs somehow?" and "I bet they have a fucking AI bot scanning this shit and some fucking LLM decided that some combination of those letters meant something bad, and they never bothered having a real life actual human being look at it," and honestly, that's fucking horrifying and probably correct.
I almost wish I was still on Twitter so I could kick up a big public stink about it and maybe get them to fix it, but a stupid username I made up on the spot is not worth going back to that place.
I'm just... What the actual fuck kind of world do we live in at this point? There's no way to talk to a human being anymore. The fucking algorithms have decided what words we're allowed to say forever, based on their puritan advertiser-friendly insanity, we don't own anything, and you can never, ever just speak to a fucking human being anymore.
semioticapocalypse ¡ 1 year ago
Text
More about the Antique Photo Sage GPT: two case studies on Medium.
On one instance APS even provided me with the approximation of the camera viewpoint coordinates (!).
Case studies
Antique Photo Sage GPT
Comment, like, reblog to see your karma scores improved instantly (or maybe, overnight).
drunk-on-poems-and-songs ¡ 4 months ago
Text
Gratitude Journaling
lIfE's A sTaGe, or some shit they said  
but look at you  
a fucking  
puppet  
with cut strings  
dancing like a drunk bird trying to impress the wind  
no one told you to perform  
no one asked    
yet there you go  
spinning like a top  
on a cracked table no one even sits at anymore  
do you hear it??  
the sound of silence clapping??  
no  
of course not  
because mannequins    
don’t move  
and the dead don’t applaud  
you’ve got an audience of cardboard souls    
and even they’re bored of your pathetic tricks    
not hate  
not love  
just—nothing    
and gods  
doesn’t that sting more?    
you think effort’s a currency    
like you’ll be paid in meaning or warmth    
but the universe doesn’t oWe you    
a single “well done” or even a “fuck you.”    
it doesn’t owe you shit    
you’re the kid showing off cartwheels    
to a portrait of their dead dad  
cute, if it weren’t so goddamn sad    
but hey cmon  
don’t stop now    
what’s one more tumble into insignificance?    
you’re the clown of your own funeral    
juggling voids, tripping over the infinite    
and maybe you’ll laugh when you hit the dirt    
because even that feels better than apathy    
better than knowing    
the world was never watching you at all    
you’re not cursed  
you’re not special    
you’re just noise in the static    
a speck in the endless gray    
so keep dancing  
bitch    
the spotlight isn’t even on you    
techoliviabennett ¡ 4 months ago
Text
rjas16 ¡ 6 months ago
Text
Think Smarter, Not Harder: Meet RAG
How does RAG make machines think like you?
Imagine a world where your AI assistant doesn't only talk like a human but understands your needs, explores the latest data, and gives you answers you can trust—every single time. Sounds like science fiction? It's not.
We're at the tipping point of an AI revolution, where large language models (LLMs) like OpenAI's GPT are rewriting the rules of engagement in everything from customer service to creative writing. But here's the catch: all that eloquence means nothing if it can't deliver the goods—if the answers aren't just smooth but also spot-on, accurate, and deeply relevant to your reality.
The question is: Are today's AI models genuinely equipped to keep up with the complexities of real-world applications, where context, precision, and truth aren't just desirable but essential? The answer lies in pushing the boundaries further—with Retrieval-Augmented Generation (RAG).
While LLMs generate human-sounding copy, they often fail to deliver reliable answers based on real facts. How do we ensure that an AI-powered assistant doesn't confidently deliver outdated or incorrect information? How do we strike a balance between fluency and factuality? The answer lies in a powerful new approach: Retrieval-Augmented Generation (RAG).
What is Retrieval-Augmented Generation (RAG)?
RAG is a game-changing technique that extends the basic abilities of traditional language models by integrating them with information retrieval mechanisms. Rather than relying only on pre-acquired knowledge, RAG actively seeks external information to create up-to-date, accurate answers that are rich in context. Imagine for a second what could happen if you had a customer support chatbot able to engage in a conversation and draw its answers from the latest research, news, or your internal documents to provide accurate, context-specific answers.
RAG has the immense potential to guarantee informed, responsive and versatile AI. But why is this necessary? Traditional LLMs are trained on vast datasets but are static by nature. They cannot access real-time information or specialized knowledge, which can lead to "hallucinations"—confidently incorrect responses. RAG addresses this by equipping LLMs to query external knowledge bases, grounding their outputs in factual data.
How Does Retrieval-Augmented Generation (RAG) Work?
RAG brings a dynamic new layer to traditional AI workflows. Let's break down its components:
Embedding Model
Think of this as the system's "translator." It converts text documents into vector formats, making it easier to manage and compare large volumes of data.
Retriever
It's the AI's internal search engine. It scans the vectorized data to locate the most relevant documents that align with the user's query.
Reranker (Optional)
It assesses the retrieved documents and scores their relevance, ensuring that only the most pertinent data is passed along.
Language Model
The language model combines the original query with the top documents the retriever provides, crafting a precise and contextually aware response.

Together, these components enable RAG to enhance the factual accuracy of outputs and allow for continuous updates from external data sources, eliminating the need for costly model retraining.
How does RAG achieve this integration?
It begins with a query. When a user asks a question, the retriever sifts through a curated knowledge base using vector embeddings to find relevant documents. These documents are then fed into the language model, which generates an answer informed by the latest and most accurate information. This approach dramatically reduces the risk of hallucinations and ensures that the AI remains current and context-aware.
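The retriever-plus-reranker flow described above can be sketched as follows. Both scoring functions are illustrative stand-ins: production systems typically use a bi-encoder over vector embeddings for the fast first pass and a cross-encoder for the costlier rerank pass.

```python
def retrieve(query, docs, k=3):
    # Fast first pass: score by count of shared words (a stand-in for
    # vector similarity over embeddings), keep the top k candidates.
    def overlap(d):
        return len(set(query.lower().split()) & set(d.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def rerank(query, candidates):
    # Slower second pass over the shortlist: reward candidates where query
    # words appear early (a stand-in for a cross-encoder relevance score).
    def score(d):
        words = d.lower().split()
        return sum(len(words) - words.index(w)
                   for w in set(query.lower().split()) if w in words)
    return sorted(candidates, key=score, reverse=True)

docs = [
    "rag retrieval augments generation with fresh context",
    "static models cannot access new information",
    "good retrieval depends on the knowledge base",
    "embedding models map text to vectors",
]
candidates = retrieve("retrieval context", docs, k=2)
best = rerank("retrieval context", candidates)[0]
```

The two-stage design is a cost trade-off: the cheap retriever narrows thousands of documents to a handful, so the expensive reranker only ever scores the shortlist.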
RAG for Content Creation: A Game Changer or Just an IT Thing?
Content creation is one of the most exciting areas where RAG is making waves. An AI writer that crafts engaging articles while pulling in the latest data, trends, and insights from credible sources, ensuring that every piece of content is compelling and accurate, isn't a futuristic dream or the product of your imagination. RAG makes it happen.
Why is this so revolutionary?
Engaging and factually sound content is rare, especially in today's digital landscape, where misinformation can spread like wildfire. RAG offers a solution by combining the creative fluency of LLMs with the grounding precision of information retrieval. Consider a marketing team launching a campaign based on emerging trends. Instead of manually scouring the web for the latest statistics or customer insights, an RAG-enabled tool could instantly pull in relevant data, allowing the team to craft content that resonates with current market conditions.
The same goes for various industries from finance to healthcare, and law, where accuracy is fundamental. RAG-powered content creation tools promise that every output aligns with the most recent regulations, the latest research and market trends, contributing to boosting the organization's credibility and impact.
Applying RAG in day-to-day business
How can we effectively tap into the power of RAG? Here's a step-by-step guide:
Identify High-Impact Use Cases
Start by pinpointing areas where accurate, context-aware information is critical. Think customer service, marketing, content creation, and compliance—wherever real-time knowledge can provide a competitive edge.
Curate a robust knowledge base
RAG relies on the quality of the data it collects and finds. Build or connect to a comprehensive knowledge repository with up-to-date, reliable information—internal documents, proprietary data, or trusted external sources.
Select the right tools and technologies
Leverage platforms that support RAG architecture or integrate retrieval mechanisms with existing LLMs. Many AI vendors now offer solutions combining these capabilities, so choose one that fits your needs.
Train your team
Successful implementation requires understanding how RAG works and its potential impact. Ensure your team is well-trained in deploying RAG's technical and strategic aspects.
Monitor and optimize
Like any technology, RAG benefits from continuous monitoring and optimization. Track key performance indicators (KPIs) like accuracy, response time, and user satisfaction to refine and enhance its application.
Applying these steps will help organizations like yours unlock RAG's full potential, transform their operations, and enhance their competitive edge.
The Business Value of RAG
Why should businesses consider integrating RAG into their operations? The value proposition is clear:
Trust and accuracy
RAG significantly enhances the accuracy of responses, which is crucial for maintaining customer trust, especially in sectors like finance, healthcare, and law.
Efficiency
Ultimately, RAG reduces the workload on human employees, freeing them to focus on higher-value tasks.
Knowledge management
RAG ensures that information is always up-to-date and relevant, helping businesses maintain a high standard of knowledge dissemination and reducing the risk of costly errors.
Scalability and change
As an organization grows and evolves, so does the complexity of information management. RAG offers a scalable solution that can adapt to increasing data volumes and diverse information needs.
RAG vs. Fine-Tuning: What's the Difference?
Both RAG and fine-tuning are powerful techniques for optimizing LLM performance, but they serve different purposes:
Fine-Tuning
This approach involves additional training on specific datasets to make a model more adept at particular tasks. While effective for niche applications, it can limit the model's flexibility and adaptability.
RAG
In contrast, RAG dynamically retrieves information from external sources, allowing for continuous updates without extensive retraining, which makes it ideal for applications where real-time data and accuracy are critical.
The choice between RAG and fine-tuning entirely depends on your unique needs. For example, RAG is the way to go if your priority is real-time accuracy and contextual relevance.
Concluding Thoughts
As AI evolves, the demand for AI systems that are not only intelligent but also accurate, reliable, and adaptable will only grow. Retrieval-Augmented Generation stands at the forefront of this evolution, promising to make AI more useful and trustworthy across various applications.
Whether it's revolutionizing content creation, enhancing customer support, or driving smarter business decisions, RAG represents a fundamental shift in how we interact with AI. It bridges the gap between what AI knows and what it needs to know, making it the tool of choice for building a real competitive edge.
Let's explore the infinite possibilities of RAG together
We would love to know: how do you intend to harness the power of RAG in your business? There are plenty of opportunities that we can bring to life together. Contact our team of AI experts for a chat about RAG, and let's see if we can build game-changing models together.
lognservices ¡ 7 months ago
Text
techenthuinsights ¡ 9 months ago
Text
enterprise-cloud-services ¡ 1 year ago
Text
Explore the inner workings of LlamaIndex, enhancing LLMs for streamlined natural language processing, boosting performance and efficiency.
rubylogan15 ¡ 1 year ago
Text
Explore the inner workings of LlamaIndex, enhancing LLMs for streamlined natural language processing, boosting performance and efficiency.
dziretechnologies ¡ 1 year ago
Text
Explore cutting-edge Generative AI Application Development Services tailored to unleash innovation and creativity. Our expert team at Dzire Technologies specializes in crafting custom solutions, leveraging state-of-the-art technology to bring your unique ideas to life. Transform your business with intelligent, dynamic applications that push the boundaries of what's possible. Elevate user experiences and stay ahead in the rapidly evolving digital landscape with our advanced Generative AI development services.
uxproduct-blog ¡ 2 years ago
Text
imsobadatnicknames2 ¡ 1 year ago
Note
How can you consider yourself any sort of leftist when you defend AI art bullshit? You literally simp for AI techbros and have the gall to pretend you're against big corporations?? Get fucked
I don't "defend" AI art. I think a particular old post of mine that a lot of people tend to read in bad faith must be making the rounds again lmao.
Took me a good while to reply to this because you know what? I decided to make something positive out of this and use this as an opportunity to outline what I ACTUALLY believe about AI art. If anyone seeing this decides to read it in good or bad faith... Welp, your choice I guess.
I have several criticisms of the way the proliferation of AI art generators and LLMs is making a lot of things worse. Some of these are things I have voiced in the past, some of these are things I haven't until now:
Most image and text AI generators are fine-tuned to produce nothing but the most agreeable, generically pretty content slop, pretty much immediately squandering their potential to be used as genuinely interesting artistic tools with anything to offer in terms of a unique aesthetic experience (AI video still manages to look bizarre and interesting but it's getting there too)
In the entertainment industry and a lot of other fields, AI image generation is getting incorporated into production pipelines in ways that lead to the immiseration of working artists, being used to justify either lower wages or straight-up layoffs, and this is something that needs to be fought against. That's why I unconditionally supported the SAG-AFTRA strikes last year and will unconditionally support any collective action to address AI art as a concrete labor issue
In most fields where it's being integrated, AI art is vastly inferior to human artists in any use case where you need anything other than to make a superficially pretty picture really fast. If you need to do anything like ask for revisions or minor corrections, give very specific descriptions of how objects and people are interacting with each other, or just like. generate several pictures of the same thing and have them stay consistent with each other, you NEED human artists and it's preposterous to think they can be replaced by AI.
There is a lot of art on the internet that consists of the most generically pretty, cookie-cutter anime waifu-adjacent slop that has zero artistic or emotional value to either the people seeing it or the person churning it out, and while this certainly was A Thing before the advent of AI art generators, generative AI has made it extremely easy to become the kind of person who churns it out and floods online art spaces with it.
Similarly, LLMs make it extremely easy to generate massive volumes of texts, pages, articles, listicles and what have you that are generic vapid SEO-friendly pap at best and bizarre nonsense misinformation at worst, drowning useful information in a sea of vapid noise and rendering internet searches increasingly useless.
The way LLMs are being incorporated into customer service and similar services not only, again, encourages further immiseration of customer service workers, but it's also completely useless for most customers.
A very annoyingly vocal part of the population of AI art enthusiasts, fanatics and promoters do tend to talk about it in a way that directly or indirectly demeans the merit and skill of human artists and implies that they think of anyone who sees anything worthwhile in the process of creation itself rather than the end product as stupid or deluded.
So you can probably tell by now that I don't hold AI art or writing in very high regard. However (and here's the part that'll get me called an AI techbro, or get people telling me that I'm just jealous of REAL artists because I lack the drive to create art of my own, or whatever else) I do have some criticisms of the way people have been responding to it, and have voiced such criticisms in the past.
I think a lot of the opposition to AI art has crystallized around unexamined gut reactions, whipping up a moral panic, and pressure to outwardly display an acceptable level of disdain for it. And in particular I think this climate has made a lot of people very prone to either uncritically entertain and adopt regressive ideas about Intellectual Property, OR reveal previously held regressive ideas about Intellectual Property that are now suddenly more socially acceptable to express:
(I wanna preface this section by stating that I'm a staunch intellectual property abolitionist for the same reason I'm a private property abolitionist. If you think the existence of intellectual property is a good thing, a lot of my ideas about a lot of stuff are gonna be unpalatable to you. Not much I can do about it.)
A lot of people are suddenly throwing their support behind any proposal that promises stricter copyright regulations to combat AI art, when a lot of these also have the potential to severely undermine fair use laws and fuck over a lot of independent artists for the benefit of big companies.
It was very worrying to see a lot of fanfic authors in particular clap for the George R R Martin OpenAI lawsuit because well... a lot of them don't realize that fanfic is a hobby that's in a position that's VERY legally precarious at best, that legally speaking using someone else's characters in your fanfic is as much of a violation of copyright law as straight up stealing entire passages, and that any regulation that can be used against the latter can be extended against the former.
Similarly, a lot of artists were cheering for the lawsuit against AI art models trained to mimic the style of specific artists. Which I agree is an extremely scummy thing to do (just like a human artist making a living from ripping off someone else's work is also extremely scummy), but I don't think every scummy act necessarily needs to be punishable by law, and some of them would in fact leave people worse off if they were. All this to say: If you are an artist, and ESPECIALLY a fan artist, trust me. You DON'T wanna live in a world where there's precedent for people's artstyles to be considered intellectual property in any legally enforceable way. I know you wanna hurt AI art people but this is one avenue that's not worth it.
Especially worrying to me as an indie musician has been to see people mention the strict copyright laws of the music industry as a positive thing that they wanna emulate. "this would never happen in the music industry because they value their artists copyright" idk maybe this is a grass-is-greener type of situation but I'm telling you, you DON'T wanna live in a world where copyright law in the visual arts world works the way it does in the music industry. It's not worth it.
I've seen at least one person compare AI art model training to music sampling and say "there's a reason why they cracked down on sampling" as if the death of sampling due to stricter copyright laws was a good thing and not literally one of the worst things to happen in the history of music which nearly destroyed several primarily black music genres. Of course this is anecdotal because it's just One Guy I Saw Once, but you can see what I mean about how uncritical support for copyright law as a tool against AI can lead people to adopt increasingly regressive ideas about copyright.
Similarly, I've seen at least one person go "you know what? Collages should be considered art theft too, fuck you" over an argument where someone else compared AI art to collages. Again, same point as above.
Similarly, I take issue with the way a lot of people seem EXTREMELY personally invested in proving AI art is Not Real Art. I not only find this discussion unproductive, but also similarly dangerously prone to validating very reactionary ideas about The Nature Of Art that shouldn't really be entertained. Also it's a discussion rife with intellectual dishonesty and unevenly applied definitions and standards.
When a lot of people present the argument of AI art not being art because the definition of art is this and that, they try to pretend that this is the definition of art they've always operated under and believed in, even when a lot of the time it's blatantly obvious that they're constructing their definition on the spot and deliberately trying to do so in such a way that it doesn't include AI art.
They never succeed at it, btw. I've seen several dozen different "AI art isn't art because art is [definition]". I've seen exactly zero of those where trying to seriously apply that definition in any context outside of trying to prove AI art isn't art doesn't end up in it accidentally excluding one or more non-AI artforms, usually reflecting the author's blindspots with regard to the different forms of artistic expression.
(However, this is moot because, again, these are rarely definitions that these people actually believe in or adhere to outside of trying to win "Is AI art real art?" discussions.)
Especially worrying when the definition they construct is built around stuff like Effort or Skill or Dedication or The Divine Human Spirit. You would not be happy about the kinds of art that have traditionally been excluded from Real Art using similar definitions.
Seriously when everyone was celebrating that the Catholic Church came out to say AI art isn't real art and sharing it as if it was validating and not Extremely Worrying that the arguments they'd been using against AI art sounded nearly identical to things TradCaths believe I was like. Well alright :T You can make all the "I never thought I'd die fighting side by side with a catholic" legolas and gimli memes you want, but it won't change the fact that the argument being made by the catholic church was a profoundly conservative one and nearly identical to arguments used to dismiss the artistic merit of certain forms of "degenerate" art and everyone was just uncritically sharing it, completely unconcerned with what kind of worldview they were lending validity to by sharing it.
Remember when the discourse about the Gay Sex cats pic was going on? One of the things I remember the most from that time was when someone went "Tell me a definition of art that excludes this picture without also excluding Fountain by Duchamp" and how just. Literally no one was able to do it. A LOT of people tried to argue some variation of "Well, Fountain is art and this image isn't because what turns fountain into art is Intent. Duchamp's choice to show a urinal at an art gallery as if it was art confers it an element of artistic intent that this image lacks" when like. Didn't by that same logic OP's choice to post the image on tumblr as if it was art also confer it artistic intent in the same way? Didn't that argument actually kinda end up accidentally validating the artistic status of every piece of AI art ever posted on social media? That moment it clicked for me that a lot of these definitions require applying certain concepts extremely selectively in order to make sense for the people using them.
A lot of people also try to argue it isn't Real Art based on the fact that most AI art is vapid but like. If being vapid definitionally excludes something from being art you're going to have to exclude a whooole lot of stuff along with it. AI art is vapid. A lot of art is too, I don't think this argument works either.
Like, look, I'm not really invested in trying to argue in favor of The Artistic Merits of AI art but I also find it extremely hard to ignore how trying to categorically define AI art as Not Real Art not only is unproductive but also requires either a) applying certain parts of your definition of art extremely selectively, b) constructing a definition of art so convoluted and full of weird caveats as to be functionally useless, or c) validating extremely reactionary conservative ideas about what Real Art is.
Some stray thoughts that don't fit any of the above sections.
I've occasionally seen people respond to AI art being used for shitposts like "A lot of people have affordable commissions, you could have paid someone like $30 to draw this for you instead of using the plagiarism algorithm and exploiting the work of real artists" and sorry but if you consider paying an artist a rate that amounts to like $5 for several hours of work a LESS exploitative alternative I think you've got something fucked up going on with your priorities.
Also it's kinda funny when people comment on the aforementioned shitposts with some variation of "see, the usage of AI art robs it of all humor because the thing that makes shitposts funny is when you consider the fact that someone would spend so much time and effort in something so stupid" because like. Yeah that is part of the humor SOMETIMES but also people share and laugh at low effort shitposts all the time. Again you're constructing a definition that you don't actually believe in anywhere outside of this type of conversation. Just say you don't like that it's AI art because you think it's morally wrong and stop being disingenuous.
So yeah, this is pretty much everything I believe about the topic.
I don't "defend" AI art, but my opposition to it is firmly rooted in my principles, and that means I refuse to uncritically accept any anti-AI art argument that goes against those same principles.
If you think not accepting and parroting every Anti-AI art argument I encounter because some of them are ideologically rooted in things I disagree with makes me indistinguishable from "AI techbros" you're working under a fucked up dichotomy.
mostlysignssomeportents · 5 months ago
Text
The GOP is not the party of workers
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/12/13/occupy-the-democrats/#manchin-synematic-universe
The GOP says it's the "party of the working class" and indeed, they have promoted numerous policies that attack select groups within the American ruling class. But just because the party of unlimited power for billionaires is attacking a few of their own, it doesn't make them friends to the working people.
The best way to understand the GOP's relationship to workers is through "boss politics" – that's where one group of elites consolidates its power by crushing rival elites. All elites are bad for working people, so any attack on any elite is, in some narrow sense, "pro-worker." What's more, all elites cheat the system, so any attack on any elite is, again, "pro-fairness."
In other words, if you want to prosecute a company for hurting workers, customers, neighbors and the environment, you have a target-rich environment. But just because you crush a corrupt enterprise that's hurting workers, it doesn't mean you did it for the workers, and – most importantly – it doesn't mean that you will take workers' side next time.
Autocrats do this all the time. Xi Jinping engaged in a massive purge of corrupt officials, who were indeed corrupt – but he only targeted the corrupt officials that made up his rivals' power-base. His own corrupt officials were unscathed:
https://web.archive.org/web/20181222163946/https://peterlorentzen.com/wp-content/uploads/2018/11/Lorentzen-Lu-Crackdown-Nov-2018-Posted-Version.pdf
Putin did this, too. Russia's oligarchs are, to a one, monsters. When Putin defenestrates a rival – confiscates their fortune and sends them to prison – he acts against a genuinely corrupt criminal and brings some small measure of justice to that criminal's victims. But he only does this to the criminals who don't support him:
https://www.npr.org/sections/money/2022/03/29/1088886554/how-putin-conquered-russias-oligarchy
The Trump camp – notably JD Vance and Josh Hawley – have vowed to keep up the work of the FTC under Lina Khan, the generationally brilliant FTC Chair who accomplished more in four years than her predecessors have in 40. Trump just announced that he would replace Khan with Andrew Ferguson, who sounds like an LLM's bad approximation of Khan, promising to deal with "woke Big Tech" but also to end the FTC's "war on mergers." Ferguson may well plow ahead with the giant, important tech antitrust cases that Khan brought, but he'll do so because this is good grievance politics for Trump's base, and not because Trump or Ferguson are committed to protecting the American people from corporate predation itself:
https://pluralistic.net/2024/11/12/the-enemy-of-your-enemy/#is-your-enemy
Writing in his newsletter today, Hamilton Nolan describes all the ways that the GOP plans to destroy workers' lives while claiming to be a workers' party, and also all the ways the Dems failed to protect workers and so allowed the GOP to outlandishly claim to be for workers:
https://www.hamiltonnolan.com/p/you-cant-rebrand-a-class-war
For example, if Ferguson limits his merger enforcement to "woke Big Tech" companies while ending the "war on mergers," he won't stop the next Albertson's/Kroger merger, a giant supermarket consolidation that just collapsed because Khan's FTC fought it. The Albertson's/Kroger merger had two goals: raising food prices and slashing workers' wages, primarily by eliminating union jobs. Fighting "woke Big Tech" while waving through mergers between giant companies seeking to price-gouge and screw workers does not make you the party of the little guy, even if smashing Big Tech is the right thing to do.
Trump's hatred of Big Tech is highly selective. He's not proposing to do anything about Elon Musk, of course, except to make Musk even richer. Musk's net worth has hit $447b because the market is buying stock in his companies, which stand to make billions from cozy, no-bid federal contracts. Musk is a billionaire welfare queen who hates workers and unions and has a long rap-sheet of cheating, maiming and tormenting his workforce. A pro-worker Trump administration could add labor conditions to every federal contract, disqualifying businesses that cheat workers and union-bust from getting government contracts.
Instead, Trump is getting set to blow up the NLRB, an agency that Reagan put into a coma 40 years ago, until the Sanders/Warren wing of the party forced Biden to install some genuinely excellent people, like general counsel Jennifer Abruzzo, who – like Khan – did more for workers in four years than her predecessors did in 40. Abruzzo and her colleagues could have remained in office for years to come, if Democratic Senators had been able to confirm board member Lauren McFerran (or if two of those "pro-labor" Republican Senators had voted for her). Instead, Joe Manchin and Kyrsten Sinema rushed to the Senate chamber at the last minute in order to vote McFerran down and give Trump total control over the NLRB:
https://www.axios.com/2024/12/11/schumer-nlrb-vote-manchin-sinema
This latest installment in the Manchin Synematic Universe is a reminder that the GOP's ability to rebrand as the party of workers is largely the fault of Democrats, whose corporate wing has been at war with workers since the Clinton years (NAFTA, welfare reform, etc). Today, that same corporate wing claims that the reason Dems were wiped out in the 2024 election is that they were too left, insisting that the path to victory in the midterms and 2028 is to fuck workers even worse and suck up to big business even more.
We have to take the party back from billionaires. No Dem presidential candidate should ever again have "proxies" who campaign to fire anti-corporate watchdogs like Lina Khan. The path to a successful Democratic Party runs through worker power, and the only reliable path to worker power runs through unions.
Nolan's written frequently about how bad many union leaders are today. It's not just that union leaders are sitting on historically unprecedented piles of cash while doing less organizing than ever, at a moment when unions are more popular than they've been in a century with workers clamoring to join unions, even as union membership declines. It's also that union leaders have actually endorsed Trump – even as the rank and file get ready to strike:
https://docs.google.com/document/d/1Yz_Z08KwKgFt3QvnV8nEETSgTXM5eZw5ujT4BmQXEWk/edit?link_id=0&can_id=9481ac35a2682a1d6047230e43d76be8&source=email-invitation-to-cover-amazon-labor-union-contract-fight-rally-cookout-on-monday-october-14-2024-2&email_referrer=email_2559107&email_subject=invitation-to-cover-jfk8-workers-authorize-amazon-labor-union-ibt-local-1-to-call-ulp-strike&tab=t.0
The GOP is going to do everything it can to help a tiny number of billionaires defeat hundreds of millions of workers in the class war. A future Democratic Party victory will come from taking a side in that class war – the workers' side. As Nolan writes:
If billionaires are destroying our country in order to serve their own self-interest, the reasonable thing to do is not to try to quibble over a 15% or a 21% corporate tax rate. The reasonable thing to do is to eradicate the existence of billionaires. If everyone knows our health care system is a broken monstrosity, the reasonable thing to do is not to tinker around the edges. The reasonable thing to do is to advocate Medicare for All. If there is a class war—and there is—and one party is being run completely by the upper class, the reasonable thing is for the other party to operate in the interests of the other, much larger, much needier class. That is quite rational and ethical and obvious in addition to being politically wise.
Nolan's remedy for the Democratic Party is simple and straightforward, if not easy:
The answer is spend every last dollar we have to organize and organize and strike and strike. Women are workers. Immigrants are workers. The poor are workers. A party that is banning abortion and violently deporting immigrants and economically assaulting the poor is not a friend to the labor movement, ever. (An opposition party that cannot rouse itself to participate on the correct side of the ongoing class war is not our friend, either—the difference is that the fascists will always try to actively destroy unions, while the Democrats will just not do enough to help us, a distinction that is important to understand.)
Cosigned.
olderthannetfic · 22 days ago
Note
I'm realizing that there's a part of me that never really internalized the. Um. Principle of art? Idk what to call it–Why art doesn't have to be skillful, or maybe, more specifically, why a person should not just give up if their work is not perfect, why a person's work should be worth anything if it's not "good" especially if they aren't willing to break their backs to improve. Whenever I see people who talk about art like that what I've always done is keep it in mind, but inside I've always just wished I understood how what they're saying is true. If it's true.
So when someone says "AI Art has no soul" for a fraction of a second I think "this sounds like baloney, what does that mean?" Before I remember myself and think "oh huh, yeah, I've seen people talk about how AI really just isn't as advanced or capable of the profit that all the companies and investors think it is, and there are doubtlessly things a veteran artist can do that these programs can't pull off, even if I'm not skilled enough to know what those are yet." And therefore when I come across people online talking about, say, the glaring limitations of LLMs that are now a headache because of how many people blindly trust them to do their writing for them, that always makes more sense to me than people who talk about "AI art having no soul" and such. And I suspect a big part of it is some sort of insecurity of mine
Because I didn't realize that this is how I felt (I thought I was just mostly, in every part of me, Anti AI, like ‘fuck that noise’ y'know?) Until it came to bite me in the ass. A woman who's like my sister though we aren't related, is letting me live in her house as I try to get my bearings and handle on life after deciding to push my last year of college back a year, and she's like, a capable adult with experience in the job market and some industries so I trust her when she says "you know you're going to have to focus on developing your painting and traditional art skills, since with the advent of AI art, no one wants to pay for that service anymore." Because that makes sense. Artists in my country, as far as my college goes anyway, are embracing and trying to figure out how to integrate AI into their workflow so as not to be taken over by it. So the market has definitely changed, I've not really met another artist here who has any profoundly negative views of AI art, definitely customers are for the most part the same. For some job security, I better get a handle on the form of art that can't be replicated well by AI.
And yet I feel horrible. Because now I fear it's going to be a thing where any digital art I may end up doing in this house will be looked upon disapprovingly. Because I'm not as good in traditional painting and I should be focusing on that, there's no need to be putting so much of my energy into digital art like this because it's no longer going to bring me money. All the personal projects I'm doing, my animatics, my oc character sheets are as they have always been, a waste of time or the thing I tend to focus on rather than my school work art. It doesn't matter that I've realized I kind of hate painting, I still prefer it to every other section of art I could have chosen to specialize in (digital art is under graphic design and I hate graphic design and Photoshop/Adobe/coreldraw etc.) And all of that is bumming me out because my main drive for art seems to be in my personal art despite it getting in the way of school work. And I can't stand the thought that it's not worth anything, that I should take a break from it, that AI art is better at it than me therefore there's no point to it. But what else am I supposed to do when I'm not entirely sure how much of all that is false? And it's become apparent my entire philosophy of art is on shaky ground.
I just want to know and understand the truth so I can stop feeling this way. I'm sorry for the long ask I'm just really conflicted
--
I'm honestly confused about what your philosophy of art even is.
The vibes I'm getting off of this ask are 1. depression/anxiety and 2. extremely black-and-white thinking.
There is no single Truth™ in art, though there are strong tendencies. AI art sucks for some ethical reasons, regardless of whether the art is pleasing to the eye. AI art also sucks because the way it is programmed leads it to default to one style of human face unless explicitly told not to. "AI has no souuuuul" is silly nonsense. Plenty of boring art by humans also feels soulless.
The actual issue is that "Draw a hot girl" elicits lots of different responses from human artists. AI goes straight for AI Face. There has been drama and much hilarity about shitty book covers where what was supposed to be this individual character has ended up with AI Face in a really visible way. AI also suuuuucks at drawing hands or having little details correct. (Like... even more than human artists already suck at hands, which lbr, is a lot.) AI is objectively bad at art in a bunch of formal ways. It may get better over time, and the least discerning type of customer does not notice, but it does, in fact, suck at its job.
AI art is preferred because it is cheap.
This is the same principle as moving factory jobs from country to country looking for the lowest paid labor.
If you want to make money on art, you have to find something that cannot be done elsewhere and for less. For many people, this means building a cult of personality around themselves so that they are the product and are not replaceable.
You do not sound like someone who wants to be a commercial artist.
I get that you feel you have no obvious path to supporting yourself, but that doesn't mean you actually want to be an artist. Hobby art has plenty of value emotionally and for enriching your life. It does not need to make money to have value.
However, if you want to make a living at art, basic digital arts skills already wouldn't have paid the bills before AI took over, at least not in any market I'm familiar with. The people who pay rent this way generally have a strong interest in multiple types of art, and they are far more interested in fulfilling a brief from a client.
--
Honestly, nonnie, if you can't already tell how bad AI is at digital photorealistic paintings or whatever, I think you do need to go focus on painting classes.
It's not that I think you should be a painter, but if your art school is any good, they should have some instruction on formal, traditional art aesthetics. This will include things like anatomy and composition.
It is that da Vinci Michelangelo shit that will set one digital artist apart from another. I see way too many people whose whole education was drawing bishies for DeviantArt. They refine and refine and refine the surface details without ever addressing the underlying issues with proportion and perspective.