#applied AI
jcmarchi · 1 month ago
Text
How to 8‑bit quantize large models using BitsAndBytes
New Post has been published on https://thedigitalinsider.com/how-to-8%e2%80%91bit-quantize-large-models-using-bits-and-bytes/
Deep learning continues to reshape fields ranging from natural language processing (NLP) to computer vision.
However, as these models grow in size and complexity, the memory and compute demands they place on hardware continue to skyrocket. One promising strategy for overcoming these challenges is quantization, which lowers the precision of the numbers used in the model without a noticeable loss in performance.
In this article, I will dive into the theory underlying this strategy and show a practical implementation of 8‑bit quantization on a large model: the IBM Granite model, quantized with BitsAndBytes.
Introduction
The rapid growth of deep learning has resulted in an arms race of models boasting billions of parameters, which in most cases achieve stellar performance but require enormous computational resources.
As engineers and researchers look for methods to make these large models more efficient, quantization has shown to be an incredibly effective solution. By lowering the bit width of number representations from 32‑bit floating point to 8‑bit (or lower) integers, quantization decreases the overall model size, speeds up inference, and cuts energy consumption, all while preserving high output accuracy.
I will explore the concepts and techniques behind 8‑bit quantization in this article. I will explain the approach’s benefits, outline the theory behind it, and walk you through the process step by step. 
I will then show you a practical application: quantizing the IBM Granite model using BitsAndBytes. 
Understanding quantization
At its core, quantization is the process of mapping input values from a large set (usually continuous and high-precision) to a much smaller, discrete set with lower precision. In deep learning, this typically means converting 32‑bit floating‑point numbers to 8‑bit (or lower) integer representations.
The result is a massive reduction in memory usage and computation time.
Benefits of quantization
Lower memory footprint: Lower precision means that each parameter requires much less memory.
Increased speed: Integer math is generally much faster than floating‑point operations (FLOPs), especially on hardware optimized for low‑bit computations.
Energy efficiency: Lower precision computations consume far less power, making them ideal for mobile and edge devices.
Types of quantization
Uniform quantization: This method maps a range of floating‑point values uniformly to integer values.
Non‑uniform quantization: Uses a more complicated mapping based on the distribution of the weights or activations of the network.
Symmetric vs. asymmetric quantization:
Symmetric: Uses the same scale and zero‑point for positive and negative values.
Asymmetric: Allows different scales and zero‑points, which is useful for distributions that are not centered around zero.
AI assistants: Only as smart as your knowledge base
AI assistants need real-time, seamless connections to your company’s databases, documents, and internal communication tools to realize their full potential.
Why 8‑bit quantization?
In 8‑bit quantization, each weight or activation in the model is represented with 8 bits, giving us 256 discrete values.
This approach strikes a balance between compression and precision, enabling:
Memory savings: Lowering the precision from 32 bits to 8 bits per parameter can cut the memory footprint by up to 75% (see the quick calculation after this list).
Speed gains: Many hardware accelerators and CPUs are fully optimized for 8‑bit arithmetic, which massively improves inference times.
Minimal accuracy loss: With careful calibration and potentially fine‑tuning, the degradation in performance with 8-bit quantization is often minimal.
Deployment on edge devices: The reduced model size and faster computations make 8‑bit quantized models perfect for devices with limited computational resources.
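To put the memory-savings figure in perspective, here is a quick back-of-the-envelope calculation for a 2‑billion-parameter model (the same order of size as the IBM Granite model used later); the numbers cover weights only and ignore activation and optimizer memory:
params = 2_000_000_000                 # ~2B parameters
fp32_bytes = params * 4                # 32-bit floats: 4 bytes per parameter
int8_bytes = params * 1                # 8-bit integers: 1 byte per parameter
print(f"FP32 weights: {fp32_bytes / 1e9:.1f} GB")        # ~8.0 GB
print(f"INT8 weights: {int8_bytes / 1e9:.1f} GB")        # ~2.0 GB
print(f"Reduction: {1 - int8_bytes / fp32_bytes:.0%}")   # 75%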
Theoretical underpinnings of quantization
Quantization is firmly rooted in signal processing and numerical analysis. The objective is to reduce precision while controlling the quantization error, the difference between the original value and its quantized version.
Quantization error
The quantization error for a given value is the gap between the original full‑precision number and its reconstruction after quantization; keeping this gap small across all weights and activations is the central goal of any quantization scheme.
Scale and zero‑point
A linear mapping is normally used to perform quantization:
Scale (S): Sets the step size between our quantized values.
Zero‑point (Z): The integer value assigned to the real number zero.
The process normally involves a calibration phase to determine the optimal scale and zero‑point values. This is then followed by the actual quantization of weights and activations.
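As a rough illustration of these two parameters in action, here is a minimal NumPy sketch of uniform affine (asymmetric) quantization; the example tensor is made up, and setting the zero‑point to 0 would give the symmetric variant described earlier:
import numpy as np

x = np.array([-1.7, -0.3, 0.0, 0.8, 2.5], dtype=np.float32)  # example full-precision values

# Calibration: use the observed range to derive scale (S) and zero-point (Z) for int8.
x_min, x_max = float(x.min()), float(x.max())
qmin, qmax = -128, 127
S = (x_max - x_min) / (qmax - qmin)       # step size between quantized levels
Z = int(round(qmin - x_min / S))          # integer that represents the real value 0

# Quantize, clamp to the int8 range, then dequantize to measure the error.
q = np.clip(np.round(x / S) + Z, qmin, qmax).astype(np.int8)
x_hat = S * (q.astype(np.float32) - Z)
quantization_error = x - x_hat            # difference between original and reconstruction

print(q, x_hat, quantization_error)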
Quantization Aware Training (QAT) vs. Post‑Training Quantization (PTQ)
Quantization Aware Training (QAT): This integrates a simulated quantization into the training process, allowing the model to adapt its weights to quantization noise.
Post‑Training Quantization (PTQ): Applies quantization to a pre‑trained model using calibration data. PTQ is simpler and faster to implement, but it may incur a slightly larger accuracy drop compared to QAT.
Steps in 8‑bit quantization
Applying 8‑bit quantization involves a few essential steps:
Preprocessing and calibration
Step 1: Investigate the Model’s Dynamic Range
Before quantization, we need to know the ranges of the model's weights and activations (a short sketch of this step appears after the list):
Collect Statistics: Pass a part of the dataset through the model to collect statistics (min, max, mean, standard deviation) for all the layers.
Establish Ranges: Based on these statistics, create quantization ranges, possibly clipping outliers to create a tighter range.
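Here is a minimal PyTorch sketch of this statistics-collection step; the model, the calibration loader, and the choice to track only Linear-layer outputs are assumptions made purely for illustration:
import torch
import torch.nn as nn

def collect_activation_stats(model, calibration_loader):
    stats = {}

    def make_hook(name):
        def hook(module, inputs, output):
            t = output.detach().float()
            entry = stats.setdefault(name, {"min": float("inf"), "max": float("-inf")})
            entry["min"] = min(entry["min"], t.min().item())
            entry["max"] = max(entry["max"], t.max().item())
        return hook

    # Attach hooks to the layers we care about (here: every Linear layer).
    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.Linear)]

    model.eval()
    with torch.no_grad():
        for batch in calibration_loader:   # assumes each batch can be fed to the model directly
            model(batch)

    for h in handles:
        h.remove()
    return stats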
Step 2: Calibration
Calibration is the process of selecting the best scale and zero-point for each tensor or layer:
Min/Max Calibration: Uses the minimum and maximum that were observed.
Percentile Calibration: Uses a high percentile (e.g., the 99.9th) instead of the absolute extremes to avoid outliers.
Calibration must be done carefully, since poor choices here result in a significant loss of accuracy; a sketch of both approaches follows below.
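A minimal sketch of both calibration choices, assuming the raw samples gathered in step 1 are available as a NumPy array (the random data below is just a stand-in):
import numpy as np

def calibrate(values, method="minmax", percentile=99.9, qmin=-128, qmax=127):
    """Return (scale, zero_point) for the observed values."""
    if method == "minmax":
        lo, hi = float(values.min()), float(values.max())
    else:
        # Percentile calibration clips outliers for a tighter, more robust range.
        lo = float(np.percentile(values, 100.0 - percentile))
        hi = float(np.percentile(values, percentile))
    scale = (hi - lo) / (qmax - qmin)
    zero_point = int(round(qmin - lo / scale))
    return scale, zero_point

samples = np.random.randn(10_000).astype(np.float32)  # stand-in for collected activations
print(calibrate(samples, "minmax"))
print(calibrate(samples, "percentile"))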
Quantization Aware Training vs. Post‑Training Quantization
Quantization Aware Training (QAT):
Advantages: Greater accuracy, as the model learns to compensate for quantization distortion during training.
Disadvantages: Involves modifying the training procedure and extra computation.
Post‑Training Quantization (PTQ):
Advantages: Much easier to implement, because the model is already pre-trained.
Disadvantages: Can sometimes result in a greater reduction in accuracy, particularly in models that are sensitive to reduced precision.
For most large models, the small loss of accuracy from PTQ is acceptable, while mission-critical applications may justify QAT.
LLM economics: How to avoid costly pitfalls
Avoid costly LLM pitfalls: Learn how token pricing, scaling costs, and strategic prompt engineering impact AI expenses—and how to save.
8-bit quantization applied
No matter which deep learning environment—PyTorch, TensorFlow, or ONNX—the concepts of 8‑bit quantization remain the same.
Practical considerations
Before implementing quantization, consider the following:
Hardware support
Ensure that the target hardware (CPUs, GPUs, or special accelerators like TPUs) natively supports 8‑bit operations.
Libraries
PyTorch: Gives us built-in support for QAT and PTQ through its dedicated quantization module (see the dynamic-quantization sketch after this list).
TensorFlow Lite: Offers us utilities to transform models to an 8‑bit quantized format, especially for embedded and mobile applications.
ONNX Runtime: Supports quantized models for use across different platforms.
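For instance, PyTorch's post-training dynamic quantization can be applied to the linear layers of a model in a couple of lines; the toy model below is an assumption for illustration, and large transformer models typically need more involved approaches (static PTQ with calibration, or libraries like BitsAndBytes, covered below):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Dynamic PTQ: weights are stored as int8, activations are quantized on the fly at inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(quantized_model)  # the Linear layers are replaced with dynamically quantized versions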
Model structure: Not all layers in the model are created equal when quantized.
Convolutional and fully connected layers will generally be fine, but some activation and normalization layers may need special treatment.
Fine-tuning: Fine-tuning the quantized model on a small calibration dataset can help restore any performance loss due to quantization noise.
BitsAndBytes: A specialized library for 8‑bit quantization
BitsAndBytes is an independent library that helps us further streamline the 8‑bit quantization process for very large models. Frameworks like PyTorch offer us native quantization support. However, BitsAndBytes provides additional optimizations designed to convert 32‑bit floating point weights into 8‑bit integers. 
With a simple config flag (e.g., load_in_8bit=True), it enables significant reductions in memory usage and speeds up inference without requiring massive code modifications.
Integrating BitsAndBytes with your workflow
For seamless integration, BitsAndBytes can be used alongside popular frameworks like PyTorch. When you configure your model with BitsAndBytes, you simply specify the quantization configuration during model loading.
This tells the system to convert the weights from 32‑bit floating point to 8‑bit integers on the fly, reducing the overall memory footprint by up to 75% and improving inference speed, which is ideal for deployment in resource-constrained environments.
For example, by setting up your model with:
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
you can achieve a quick switch to 8‑bit precision. This approach not only optimizes memory usage but also maintains high performance, making it a valuable addition to your deep learning workflow.
Case study: Quantizing IBM Granite with 8‑bit using BitsAndBytes
IBM Granite is a 2‑billion-parameter model designed for instruction‑following tasks. Given its size, quantizing IBM Granite to 8 bits reduces its memory footprint significantly while maintaining good performance.
IBM Granite quantization: Example code
The following is the code segment for configuring IBM Granite with 8‑bit quantization:
# Setup the IBM Granite model using 8-bit quantization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ibm-granite/granite-3.1-2b-instruct"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quantization_config,
    device_map="balanced",  # Adjust as needed based on available GPU memory.
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
Code breakdown
Model selection:
The model_name variable specifies the IBM Granite instruction-following model to load.
Quantization setup:
BitsAndBytesConfig(load_in_8bit=True) activates 8‑bit quantization. This flag tells the model loader to quantize the 32‑bit floating‑point weights to 8‑bit integers.
Model loading:
AutoModelForCausalLM.from_pretrained() loads the model using the specified configuration. The parameter device_map="balanced" helps distribute the model across available GPUs, and torch_dtype=torch.float16 ensures that any remaining computation uses half‑precision.
Tokenizer initialization:
The tokenizer is instantiated with AutoTokenizer.from_pretrained(model_name), which ensures the input text is preprocessed correctly for the quantized model.
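To sanity-check the quantized model, you can run a quick generation pass; the prompt and generation settings below are illustrative only:
prompt = "Explain 8-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))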
This method not only lowers the memory usage of the model by as much as 75%, but also increases inference speed, making it particularly suitable for deployment in memory-limited settings such as edge devices.
Gold-copy data & AI in the trade lifecycle process
Use AI to streamline the trade lifecycle, reduce manual breaks, and sync data across systems for faster, more accurate investment decisions.
Barriers and best practices
Even though 8-bit quantization is highly advantageous, it also has some challenges:
Challenges
Accuracy degradation
Some models can suffer from a loss of accuracy after quantization due to quantization noise.
Calibration difficulty
Determining appropriate calibration data and techniques is important and can be difficult, especially for models with a broad dynamic range.
Hardware constraints
Ensure that your target deployment platform fully supports 8‑bit operations, or performance will be disappointing.
Best practices
Full calibration
Use a representative data set to accurately calibrate the model’s weights and activations.
Layer-by-layer analysis
Determine which layers are sensitive to quantization and evaluate whether they need to be kept at a higher precision.
Progressive evaluation
Quantization is not a one-shot fix. Iterate on your strategy, experimenting with different calibration techniques and potentially mixing PTQ with QAT.
Use framework tools
Use the high-level quantization utilities built into frameworks such as PyTorch and TensorFlow, as these utilities are continually improved and updated.
Fine‑tuning
If possible, fine-tune the quantized model on a subset of your data to recover any performance loss due to quantization.
Conclusion 
Quantization and 8‑bit quantization are powerful techniques for reducing the memory footprint and accelerating the inference of large models. By converting 32‑bit floating‑point values to 8‑bit integers, you can achieve significant memory savings and speedups with minimal accuracy loss. 
In this article, we discussed the theoretical foundations of quantization and walked through the steps involved in preprocessing, calibration, and choosing between quantization-aware training and post-training quantization.
We then gave practical examples using popular frameworks, finishing with a case study involving the quantization of the IBM Granite model using BitsAndBytes. 
As deep learning models continue to grow in size, mastering techniques like 8‑bit quantization will be essential for deploying efficient, state‑of‑the‑art systems, from the data center down to edge devices.
Whether you're an AI researcher or a deployment engineer, understanding how to optimize large models is an essential skill in today's AI landscape.
Applying 8-bit quantization through tools such as BitsAndBytes reduces the computational and memory overhead of large models such as IBM Granite, enabling more scalable, efficient, and energy-friendly deployment across diverse applications and hardware platforms.
Happy quantizing, and may every bit and byte count in your models become leaner, faster, and more efficient!
Connect with like-minded AI professionals and enthusiasts at our in-person events across the globe.
Check out where we’ll be this year, and join us to discuss emerging topics with some of the world’s leading AI minds.
AI Accelerator Institute | Summit calendar
Unite with applied AI’s builders & execs. Join Generative AI Summit, Agentic AI Summit, LLMOps Summit & Chief AI Officer Summit in a city near you.
0 notes
arielmcorg · 4 months ago
Text
#Opinión - The future of AI and cloud computing: trends for 2025
Organizations today are using Artificial Intelligence (AI) to analyze the large volumes of data stored in the cloud, gaining information and insights that improve operations and customer experiences. The cloud also offers the scalability and flexibility businesses need to adapt quickly and reduce costs (Source: Baufest Latam). …
0 notes
inkskinned · 4 months ago
Text
i hate to say it because i'm neurodivergent and a chronic-pain-haver but like... sometimes stuff is going to be hard and that's okay.
it's okay if you don't understand something the first few times it's explained to you. it's okay if you have to google every word in a sentence. it's okay if you need to spend a few hours learning the context behind a complicated situation. it's okay if you need to read something, think about it, and then come back to re-read it.
i get it. giving up is easier, and we are all broken down and also broke as hell. nobody has the time, nobody has the fucking energy. that is how they win, though. that is why you feel this way. it is so much easier, and that is why you must resist the impetus to shut down. fight through the desire you've been taught to "tl;dr".
embrace when a book is confusing for you. accept not all media will be transparent and glittery and in the genre you love. question why you need everything to be lily-white and soft. i get it. i also sometimes choose the escapism, the fantasy-romance. there's no shame in that. but every day i still try to make myself think about something, to actually process and challenge myself. it is hard, often, because of my neurodivergence. but i fight that urge, because i think it's fucking important.
especially right now. the more they convince you not to think, the easier it will be to feed you misinformation. the more we accept a message without criticism, the more power they will have over that message. the more you choose convenience, the more they will make propaganda convenient to you.
3K notes · View notes
aretovetechnologies01 · 1 year ago
Text
Aretove Technologies: Your Data Whispers, We Make it Roar
Lost in a sea of data? Aretove Technologies unlocks its hidden power. We harness Predictive Analytics, Data Science, Applied AI, and Business Intelligence to transform your whispers into thunderous insights. Boost efficiency, anticipate risks, and personalize experiences. With Aretove, your data isn't just noise, it's your competitive edge. Make it roar.
0 notes
noosphe-re · 2 years ago
Text
"There was an exchange on Twitter a while back where someone said, ‘What is artificial intelligence?' And someone else said, 'A poor choice of words in 1954'," he says. "And, you know, they’re right. I think that if we had chosen a different phrase for it, back in the '50s, we might have avoided a lot of the confusion that we're having now." So if he had to invent a term, what would it be? His answer is instant: applied statistics. "It's genuinely amazing that...these sorts of things can be extracted from a statistical analysis of a large body of text," he says. But, in his view, that doesn't make the tools intelligent. Applied statistics is a far more precise descriptor, "but no one wants to use that term, because it's not as sexy".
'The machines we have now are not conscious', Lunch with the FT, Ted Chiang, by Madhumita Murgia, 3 June/4 June 2023
24K notes · View notes
visglobal01 · 2 years ago
Text
How Applied AI Can Transform Various Industries
Artificial intelligence (AI) is the science and engineering of creating machines and systems that can perform tasks that normally require human intelligence, such as perception, reasoning, learning, and decision-making. AI has been advancing rapidly in recent years, thanks to the availability of large amounts of data, powerful computing resources, and innovative algorithms.
To know more about Applied AI, click here
1 note · View note
visglobalaustralia · 2 years ago
Text
How Applied AI Can Transform Various Industries
Artificial intelligence (AI) is the science and engineering of creating machines and systems that can perform tasks that normally require human intelligence, such as perception, reasoning, learning, and decision-making. AI has been advancing rapidly in recent years, thanks to the availability of large amounts of data, powerful computing resources, and innovative algorithms.
To know more about Applied AI, click here.
1 note · View note
saxonai · 2 years ago
Text
Applied AI is a rose – understand the thorny challenges
Applied AI – the application of AI technology in business, is skyrocketing. An Accenture report on AI revealed that 84% of business executives believe that AI adoption would drive their business growth. Applied AI empowers businesses with end-to-end process automation and continuous process improvement for greater productivity and profitability.
0 notes
sheepstitches · 27 days ago
Text
Bad End Forever AU / Happily Over and After AU
(Isat / in stars and time spoilers)
Wanted to make some concept sprites for this au idea. Siffrin just keeps looping in act 5 Everytime he loses to the king. He never wakes up from Mirabelle and the gang. He just keeps waking up in the same. Broken. House.
323 notes · View notes
xinilia · 2 months ago
Text
ok I just watched Ashly Burch’s (Aloy’s voice actor) video responding to the horrid ai Aloy thing going around, and let me just say, I respect the fuck out of her. For an actress who has such an intimate relationship with a game company (she provides both voice and mocap for Aloy— absolutely invaluable if they want to continue the franchise) she was fully transparent about how she disapproved of it and supported the current strike against video game acting to demand ai reforms. Which is just. Such a badass move.
Seriously all I could think the whole time was like “Aloy would sooo do this” LMAO 😭
261 notes · View notes
cfserkgk · 1 year ago
Text
I had a thought --- You know how Conan and Haibara are also the "same age" as Anya and the Eden kids in spy x family? So hence here's Conan and Haibara in the Eden uniforms since I can allow myself to fantasise.
I don't know if this has been done before, but I like coai a very lot, they're my childhood. There's just something about their camaraderie and mutual trust that makes me so happy.
744 notes · View notes
jcmarchi · 2 months ago
Text
Zero trust and AI: The next evolution in cybersecurity strategy
New Post has been published on https://thedigitalinsider.com/zero-trust-and-ai-the-next-evolution-in-cybersecurity-strategy/
Traditional approaches to cybersecurity have always been to defend the digital perimeter surrounding internal networks. However, with the popularity of remote work and cloud computing technologies, conventional security strategies are no longer as effective at protecting organizations.
Zero trust has now become the go-to security approach. Its guiding concepts are built around the mindset of “never trust, always verify.” Each user, access device, and network connection is strictly evaluated and monitored regardless of where they originate from.
Artificial intelligence (AI) has become a powerful addition to zero trust security architecture. With its ability to analyze large volumes of information and apply complex processes to automate security functions, AI has reshaped how modern businesses approach their security planning.
Understanding zero trust in modern organizations
Digital environments have changed the cybersecurity paradigm in many ways as businesses have moved toward highly connected infrastructures. Zero trust security models assume every network connection within the organization is a potential threat and require strategies to address each one effectively.
Zero trust models work on several core principles that include:
Providing minimum access privileges: Employees should only be given access to information and systems that are absolutely essential for the job function they perform. This limits unauthorized access at all times, and in the event a security breach does occur, the damage is contained to a minimum.
Creation of isolated network areas: Rather than having a single company network, organizations should segment their systems and databases into smaller, isolated networks. This limits an attacker’s access to only a part of the system in the event of a successful perimeter breach.
Constant verification: All users and devices are checked and rechecked frequently. Trust is never assumed, and all activity is closely monitored regardless of who is gaining access or what they're doing.
Assumed breaches: With zero trust, potential breaches are always viewed as a possibility. Because of this, security strategies don’t just focus on prevention, but also limiting the possible damage from a successful attack.
Identity-centric security has now become an essential element for building a strong cybersecurity posture and improved operational resilience. A big part of this process is safeguarding sensitive information and ensuring that, even if a breach does occur, that information is less likely to be compromised.
The role of AI in strengthening zero trust models
Bringing AI and zero trust together represents a major step forward for cybersecurity. AI’s power to analyze large datasets, spot unusual network activity, and automate security responses makes the core principles of zero trust even stronger, allowing for a more flexible and resilient defense.
Improving identity and access management
By leveraging AI, organizations can improve how identities are managed and how system access is provisioned within a zero trust environment. Machine learning models can scan user behaviors for anomalies that indicate compromised accounts or potentially dangerous network activity. Adaptive authentication protocols can then use these risk-based assessments to adjust security validation parameters dynamically.
AI also helps automate the authentication processes used to validate user identities. It can facilitate new user setups, streamlining IT processes while minimizing human error. This added efficiency reduces the strain on IT support teams and significantly reduces the possibility of granting incorrect access permissions.
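As a simplified illustration of the behavioral-anomaly idea described above (not a production design), an unsupervised model such as an isolation forest can flag logins whose features deviate from a user's normal pattern; the feature set and numbers here are invented for the example:
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour, km from usual location, failed attempts in the last 24h]
normal_logins = np.array([[9, 2, 0], [10, 5, 0], [14, 1, 1], [11, 3, 0], [9, 4, 0]])
new_logins = np.array([
    [10, 3, 0],      # looks routine
    [3, 4200, 6],    # unusual hour, distant location, repeated failures
])

detector = IsolationForest(contamination=0.1, random_state=42).fit(normal_logins)
print(detector.predict(new_logins))  # 1 = normal, -1 = anomalous -> trigger step-up verification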
Intelligent threat detection and response
Traditional security measures can overlook subtle, yet important indicators of malicious network activity. However, machine learning algorithms can aid in detecting these threats ahead of time, resulting in a far more proactive approach to threat response.
Autonomous threat hunting and incident resolution can reduce the time necessary to identify and contain breaches while mitigating any associated damage. With AI, network monitoring processes can be done automatically, allowing security personnel to act faster if and when the time comes.
AI can also provide organizations with predictive analytics that guard against possible attacks by anticipating them before they occur. By combining threat intelligence gathered from external vendors with checks for system vulnerabilities, organizations can take essential steps to tighten their defenses before weaknesses are exploited.
Automating data security and governance processes
AI systems can help protect sensitive business information in real time. As data is collected, it can be automatically classified into various categories. This dynamic classification allows AI systems to apply the relevant security controls to each dataset, helping to align with compliance requirements while adhering to the organization's specific data management policies.
Another important security element for modern organizations is data loss prevention (DLP). AI-driven DLP solutions can be configured to automatically supervise the way users access and relocate information within a system. This helps to identify potential data manipulation and greatly minimizes the danger of unauthorized system access and data leakage.
Next-generation automation: running artificial intelligence at the edge
My name is Helenio Gilabert. In this article, I’m going to tell you all about how we run artificial intelligence at Edge Solutions. It’s such a fascinating topic, but it can be a little overwhelming.
New security challenges and considerations
Though AI drastically improves the capabilities of traditional zero-trust models, it can also present additional security considerations that require organizations' attention. Some of these include:
Data privacy and ethical concerns
When applying AI in zero trust settings, balancing security and personal privacy is critical. Organizations need to be certain that their methods of collecting and analyzing data are done within the scope of applicable privacy laws and ethical boundaries.
Bias in AI systems should be addressed as well. Machine learning algorithms trained on outdated data can produce inaccurate results that lead to overly passive security measures being put in place. Organizations need to ensure that their AI-driven systems have supporting policies in place to prevent such biased analyses.
Integration and implementation challenges
Integrating AI into a zero trust framework isn’t always straightforward. Complications can surface – especially when it comes to system and network compatibility. Organizations need to ensure that their AI solutions can be seamlessly integrated into the existing tech stack and that there aren’t any potential barriers that will impede data flow to and from critical systems.
Another operational challenge with AI-driven security systems is finding qualified talent to operate them. Companies will likely need to allocate dedicated resources for training and staff development to keep systems functioning effectively.
The importance of regular AI model training
AI solutions, especially those that use complex learning algorithms, aren’t a “set-it-and-forget-it” implementation. With cyber threats constantly evolving, maintaining the effectiveness of AI-driven systems requires regular model training.
Without regular intervals of AI model retraining, these systems won’t function accurately and efficiently over time. An AI model must be regularly reviewed and modified to avoid false positive alerts, broken automation, or inadequate threat mitigation protocols.
The future of cybersecurity
Integrating AI with zero trust architecture has changed how businesses approach their cybersecurity initiatives. As cyberthreats become increasingly sophisticated, the need for greater automation and identity-centric security planning will only continue to grow.
With the proper implementation strategies in place, organizations can benefit from enhanced threat management, streamlined access management, and a more proactive approach to data protection.
Have you checked our 2025 events calendar?
We’ll be all over the globe, so why not have a look and see if we’re anywhere near you?
Join us and network with like-minded AI experts in your industry.
AI Accelerator Institute | Summit calendar
Unite with applied AI’s builders & execs. Join Generative AI Summit, Agentic AI Summit, LLMOps Summit & Chief AI Officer Summit in a city near you.
1 note · View note
logicpng · 4 months ago
Text
lil swing :3
[ Description in ALT ]
153 notes · View notes
rhan-hastur · 15 days ago
Text
Begging people to check the original poster whenever they're reblogging unsourced art, 99% of the time it's some shitty AI peddler that posts nothing but slop.
Please. For the sake of actual artists, be curious. Track down the source, and if you can't, at least wonder where the thing you're looking at is coming from, its purpose. Don't just mindlessly reblog anything you see.
75 notes · View notes
c0rinarii · 8 months ago
Text
Union yaoi warmup sketches!
334 notes · View notes
proudproship · 9 months ago
Text
It is important to value art.
Yes, even if it was made by someone with different opinions.
Yes, even if you don't like the artstyle.
Yes, even if you don't like what it's about or portraying.
Yes, even if you don't like the medium.
Yes, even if you think it's bad, lazy, or pointless.
Just because you don't like the art and/or the artist doesn't mean you need to devalue it as a piece of art. It is still art. And art is important.
153 notes · View notes