#Nvidia GPU
Text
Stop showing me ads for Nvidia, Tumblr! I am a Linux user, therefore AMD owns my soul!
#linux#nvidia gpu#amd#gpu#they aren't as powerful as nvidia but they're cheaper and the value for money is better also.
38 notes
·
View notes
Text

fundamentally unserious graphics card
11 notes
·
View notes
Text
On this day (October 11) in technology

On October 11, 1999, NVIDIA introduced its first mass-market video card, the GeForce 256 GPU, with 32 MB of VRAM. Considered the first GPU, it brought support for DirectX 7 and had been announced on August 31 of that same year. It improved on its predecessor, the RIVA TNT2. #retrocomputingmx #Nvidia #geforce256
3 notes
·
View notes
Text

The machine spirit will be pleased
2 notes
·
View notes
Text
High-Risk Vulnerabilities Affecting Nvidia GPUs
Graphics card manufacturer Nvidia is currently issuing a warning to all owners of GeForce GPUs. According to an Nvidia security bulletin, several security vulnerabilities requiring urgent attention have been discovered in the company’s own display drivers and other software. A total of eight vulnerabilities are listed, all of them with a “High” severity rating. If you have an Nvidia GeForce GPU, you need to act now. According to Nvidia, it’s possible for attackers to gain access to your entire system by exploiting one of the vulnerabilities. With this kind of access, hackers can not only plant and execute malicious code on your PC, but also read and steal personal data. The vulnerabilities affect GeForce software, Nvidia RTX, Quadro, NVS, and Tesla products, under both Windows and Linux.
Urgent steps for GeForce users
To address these security vulnerabilities, you’ll need the latest Nvidia GeForce drivers: version 566.03 for Windows and versions 565.57.01, 550.127.05, and 535.216.01 for Linux. Nvidia also points out that some distributors supply the necessary security updates as versions 565.92, 561.03, 556.35, and 553.05. For Nvidia RTX, Quadro, and NVS, update versions 566.03, 553.24, and 538.95 address the security issues.
If you already have the latest update for your GPU, you're good.
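Not sure which driver you're actually running? On Linux, a quick sketch like the one below can compare what nvidia-smi reports against the patched Linux versions listed above. This is just an illustration I put together, not something Nvidia ships, and it assumes nvidia-smi is installed.

```python
import subprocess

# Minimum patched Linux driver versions from the Nvidia bulletin quoted above
PATCHED = ["565.57.01", "550.127.05", "535.216.01"]

def installed_driver() -> str:
    # nvidia-smi can report just the driver version
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().splitlines()[0]

def is_patched(version: str) -> bool:
    current = tuple(int(p) for p in version.split("."))
    branches = [tuple(int(p) for p in v.split(".")) for v in PATCHED]
    for fixed in branches:
        if current[0] == fixed[0]:          # same branch, e.g. 550.x
            return current >= fixed
    # A branch newer than anything in the bulletin is assumed already fixed
    return current[0] > max(b[0] for b in branches)

if __name__ == "__main__":
    version = installed_driver()
    print(version, "- OK" if is_patched(version) else "- update needed")
```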
Stay safe in Tamriel, adventurers!
#elder scrolls#the elder scrolls#eso#nvidia#gpu#nvidia gpu#nvidia rtx#nvidia gtx#nvidia quardo#ESO PC NA#ESO PC EU
5 notes
·
View notes
Text
i never want a new Nvidia GPU or RTX until i turn on DLSS and set RT to Ultra, and God himself descends upon me in visual form at 60-80 fps
3 notes
·
View notes
Text
Switched to a GPU Dedicated Server - Best Tech Move I Ever Made
I’ll be honest: I didn’t even know what a GPU dedicated server was a year ago. I just used whatever hosting came with decent reviews. Most of the time, I was running websites, doing digital marketing stuff, editing videos now and then, and testing out AI tools. Nothing too crazy.
But as I got deeper into projects, especially machine learning experiments and video rendering, I started hitting walls. Stuff would lag like crazy. Models that should’ve trained in a few hours took almost a day. Some tools would crash midway through. At first, I thought I was doing something wrong.
Turns out, my regular VPS just couldn’t handle it.
I came across Ucartz’s GPU Dedicated Server while looking for something faster. It wasn’t some over-the-top promotion. Just a clean offer: GPU power, decent pricing, full control. I thought, “Screw it, let’s try.”
And honestly? It worked better than expected.
What Changed After Switching:
My video edits and exports were so much faster. No lag, no stuttering.
Machine learning models trained like they were on steroids.
I could multitask, running multiple apps, editing, and testing without a single freeze.
I’m not a hardware geek. I didn’t care about exact specs or clock speeds. I just needed something that worked, didn’t crash, and could keep up with what I was doing.
That’s exactly what I got.
Also, Ucartz didn’t flood me with emails or push upgrades every 10 minutes. I just picked the plan I needed, set it up, and got to work. Their support team helped me with setup and were chill throughout.
If you’re doing anything that needs serious processing - AI, design, data, even game development - and you’re tired of slow or shared servers, I’d 100% recommend checking this out.
#share hosting#web development#web hosting#gpudedicatedserver#gpu#gpuserver#nvidia gpu#dedicatedserver
0 notes
Text
The End of VRAM Shortages? NVIDIA & DirectX Tech Cuts GPU Memory Usage by 90%
The debate has raged across forums and comment sections for years: is 8GB of VRAM enough for modern gaming? With NVIDIA's RTX 50-series cards like the RTX 5060 and 5050 launching with 8GB configurations, many gamers have felt the sting of VRAM limitations in demanding titles. That entire conversation might be about to change forever. A groundbreaking collaboration between NVIDIA and Microsoft is demonstrating a revolutionary method to slash VRAM consumption. Based on early but stunning tests with a new preview driver, this technology can reduce the VRAM footprint of game textures by up to a staggering 90%, potentially making 8GB GPUs vastly more capable for next-generation gaming.
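To put that number in perspective, here's a rough back-of-the-envelope sketch. The 60% "texture share" is an assumption I made up purely for illustration; only the 90% figure comes from the reported tests.

```python
# Back-of-the-envelope: what a 90% cut to texture memory could mean for an 8 GB card.
# The 60% texture share below is an assumption for illustration, not a measurement.
total_vram_gb = 8.0
texture_share = 0.60          # assumed fraction of VRAM used by textures in a heavy scene
reduction = 0.90              # the "up to 90%" figure from the preview driver tests

textures_before = total_vram_gb * texture_share
textures_after = textures_before * (1 - reduction)
freed = textures_before - textures_after

print(f"Textures: {textures_before:.2f} GB -> {textures_after:.2f} GB")
print(f"~{freed:.2f} GB freed up for everything else on an 8 GB card")
```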
0 notes
Text
#tumblr#ulissesfsdesouza#pcgamer#ubisoft#assassin's creed odyssey#kassandra of sparta#nvidia#nvidia gpu#msi 2025#ac photo mode#windows 10#microsoft#dualsense#controle sem fio#ps5
0 notes
Text
I think my graphics card is dying or smth
#nvidia gpu#blender#i have no idea why it's doing this#i was using blender and all of a sudden it did that#looks cool tho
0 notes
Note
Thoughts on the recent 8GB of VRAM Graphics Card controversy with both AMD and NVidia launching 8GB GPUs?
I think tech media's habit of "always test everything on max settings, because the heaviest loads will be more GPU-bound and therefore a better benchmark" has led to a culture of viewing "max settings" as the default experience, and anything that has to run below max settings as actively bad. This was a massive issue for the 6500 XT a few years ago as well.
8GiB should be plenty but will look bad at excessive settings.
Now, with that said, it depends on segment. An excessively expensive/high-end GPU being limited by insufficient memory is obviously bad. In the case of the RTX 5060Ti I'd define that as encountering situations where a certain game/res/settings combination is fully playable, at least on the 16GiB model, but the 8GiB model ends up much slower or even unplayable. On the other hand, if the game/res/settings combination is "unplayable" (excessively low framerate) on the 16GiB model anyway I'd just class that as running inappropriate settings.
Looking through the TechPowerUp review: Avowed, Black Myth: Wukong, Dragon Age: The Veilguard, God of War Ragnarök, Monster Hunter Wilds, and S.T.A.L.K.E.R. 2: Heart of Chornobyl all see significant gaps between the 8GiB and 16GiB cards at high res/settings where the 16GiB card was already "unplayable". These are, in my opinion, inappropriate game/res/settings combinations to test at. They showcase an extreme situation that's not relevant to how even a higher-capacity card would be used. Doom Eternal sees a significant gap at 1440p and 4K max settings without becoming "unplayable".
F1 24 goes from 78.3 to 52.0 FPS at 4K max, so that's a giant gap that could be said to also impact playability. Spider-Man 2 (wow, they finally made a second Spider-Man game, about time) does something similar at 1440p. The Last of Us Pt. 1 has a significant performance gap at 1080p, and the 16GiB card might scrape playability at 1440p, but the huge gap at 4K feels like another irrelevant benchmark of VRAM capacity.
All the other games were pretty close between the 8GiB and 16GiB cards.
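For a sense of scale, here's a tiny sketch of the relative gap using the one pair of numbers quoted above (F1 24 at 4K max); nothing else in it comes from the review.

```python
# Relative performance loss of the 8 GiB card vs the 16 GiB card,
# using the F1 24 4K max-settings figures quoted above.
fps_16gib = 78.3
fps_8gib = 52.0

loss_pct = (fps_16gib - fps_8gib) / fps_16gib * 100
print(f"F1 24, 4K max: {fps_8gib} vs {fps_16gib} FPS "
      f"-> the 8 GiB card is ~{loss_pct:.0f}% slower")   # roughly 34% slower
```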
Overall, I think this creates a large artificial performance difference that comes from testing at settings which would be unplayable anyway. The 8GiB card isn't bad - the benchmarks just aren't fair to it.
Now, $400 for a GPU is still fucking expensive, and Nvidia not sending the 8GiB model out for review is an attempt to trick people who might not realise it can be limiting sometimes, but that's a whole other issue.
4 notes
·
View notes
Text

Got the RTX 5070 Ti
I got this GPU to get better performance while streaming but right after getting it I got really upset about not having viewers for over 4 years so now I'm just coping 😔
1 note
·
View note
Text
Computing anniversary: June 23, 2007

On June 23, 2007, NVIDIA launched CUDA, the parallel computing architecture that lets developers take full advantage of graphics cards. #retrocomputingmx #cuda #nvidia #GPU #computerhistory
0 notes
Text
Small computer conspiracy
Nvidia killed SLI on purpose because it kept people from buying either newer GPUs or higher-end ones.
Since SLI basically let you use two cheaper GPUs for roughly the same performance as one higher-end card, people would've bought those instead of a higher-end card at the time. Or, instead of buying a whole new-generation GPU, they'd just add another of the same older model in SLI rather than upgrade.
Nvidia prolly saw this and went, "welp, might as well not let them have good deals!" and slowly killed off SLI development, so devs wouldn't add game support for it, leaving no reason to use multi-GPU setups anymore.
Also, they never pushed the connector past 2 GB/s, which is absolutely slow as balls. Yeah, when it came out it was hella fast, roughly PCIe 1.0 x8 (GeForce 6 era), but by the time PCIe 3.0 GPUs came out, cards needed much more data bandwidth to properly handle larger workloads, likely causing big bottlenecks.
This also made things like micro-stuttering (which people noticed over time) more prevalent, thanks to that same bandwidth bottleneck. You can see it in NVLink too, which was 50 GB/s when it released, 25-50 times faster than SLI bridges, and as far as I know they didn't implement it in consumer cards. Nvidia developed a faster, better way to split workloads across graphics cards; they just didn't want to give it to us, for the economic reasons above. They still use NVLink in their workstation and server products, so it's not that they can't make a faster interface, it's that they won't give it to us.
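For a rough sense of those numbers, here's a quick sketch. The SLI and NVLink figures are the ones quoted above, the PCIe 3.0 x16 number is added for comparison, and the framebuffer math at the end is purely an illustration.

```python
# Interconnect bandwidths, in GB/s (SLI and NVLink figures as quoted above)
sli_bridge = 2.0
nvlink_v1 = 50.0
pcie3_x16 = 15.75   # PCIe 3.0 x16, one direction, for comparison

print(f"NVLink vs SLI bridge:    {nvlink_v1 / sli_bridge:.0f}x faster")
print(f"PCIe 3.0 x16 vs bridge:  {pcie3_x16 / sli_bridge:.1f}x faster")

# Illustration: a single 4K framebuffer is ~33 MB (3840 * 2160 * 4 bytes),
# so a 2 GB/s bridge can only move about 60 of them per second.
frame_mb = 3840 * 2160 * 4 / 1e6
print(f"4K framebuffer: {frame_mb:.0f} MB -> "
      f"~{sli_bridge * 1000 / frame_mb:.0f} frames/s over the bridge")
```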
By the time it perma-died, with development completely ending around the RTX 20 series, no one cared, because they'd planned this ever since they noticed how good a deal it was, and therefore how bad it was for business. That's probably also why they limited it to high-end cards: so people couldn't get a "good deal" on affordable multi-card setups and they'd sell more higher-end cards.
I'd say the same for CrossFire, but in my opinion it died because of the death of SLI more than anything, plus the fact that it moved onto PCIe lanes later on. There I personally blame shoddy driver support and poor optimization (AMD's specialty), not malice, more like incompetence.
So basically, Nvidia literally killed off one of the best consumer technologies in history because they lost a few bucks over fancy unaffordable graphics cards and got butthurt over it. Actually, since they don't care much about us anymore, maybe they'll bring it back?
This was more of a rant than a whole thought-out essay or anything. Any criticism or response welcome! :3
#radeon#nvidia#nvidia drivers suck#fuck capitalism#sli#multiple gpus#NVIDIAAAAAAAAA#nvlink#rant post#rant#essay#opinions???#computers#graphics card#nvidia gpu
0 notes
Text

Boost Your AI Projects with Datacenter-Hosted GPUs!
Struggling with slow training times and hardware limits? Step into the future with NeuralRack.ai — where high-performance AI computing meets flexibility.
Whether you’re working on deep learning, large-scale ML models, or just need GPU power for a few hours, NeuralRack's datacenter-hosted GPUs are built to scale with you. ⚡
🎯 Why NeuralRack.ai? ✔️ Lightning-fast NVIDIA GPUs ✔️ Secure & reliable infrastructure ✔️ No long-term commitment ✔️ 24/7 support for seamless training ✔️ Affordable pricing for individuals & enterprises
💸 Ready to optimize your budget? Explore the plans 👉 Check Pricing
From research to production — train smarter, faster, and better with NeuralRack. Let your models run wild, not your wallet. 🔥
#AIComputing #MachineLearning #GPURental #DeepLearning #CloudGPUs #NeuralRack #AIInfrastructure #DataScienceTools #HighPerformanceComputing #AIDevelopment
#gpu#nvidia gpu#gpucomputing#nvidia#amd#hardware#artists on tumblr#artificial intelligence#rtx#rtx 5090#rtx4060#rtx4090
1 note
·
View note
Text
youtube
1 note
·
View note