#AI data processing
Explore tagged Tumblr posts
techenthuinsights ¡ 2 months ago
Text
0 notes
verschlimmbesserung ¡ 2 months ago
Text
New aerial surveillance footage obtained by the Southern Environmental Law Center has found that Musk's artificial intelligence company, xAI, is using 35 methane gas generators to power its "Colossus" supercomputer facility, the backbone of its flagship Grok. That's 20 more generators than the 15 xAI filed permits for, and 35 more than it was approved to use.
Generators like xAI's emit huge amounts of nitrogen dioxide, a highly reactive gas which causes irreversible respiratory damage over time. And that's before you consider its effects on the ozone layer, or its contribution to acid rain, smog, and nutrient pollution in local soil and waterways. With 35 generators now chugging along, that's a whole chorus of turbines spewing the toxic gas into low-income, minority-led communities 24/7.
0 notes
hathuablog1 ¡ 3 months ago
Text
"Understanding Grok Technology Behind AI Databases: Revolutionizing Data Processing and Intelligence" 2025
Grok Technology: In the ever-evolving world of artificial intelligence (AI), various technologies are being developed to improve the way we process and manage data. Among the many innovations, Grok technology stands out as one of the most transformative tools, especially in the context of AI-driven databases. Grok, an advanced AI model designed to comprehend and process information deeply, has…
0 notes
itesservices ¡ 7 months ago
Text
Enhance your business operations with AI-based data processing. This technology optimizes data management, automates repetitive tasks, and enables smarter decision-making. By leveraging AI, organizations can improve accuracy, reduce costs, and boost efficiency. Stay ahead in the competitive landscape by integrating AI-driven solutions for streamlined workflows and actionable insights. Embrace the future of data processing today! 
0 notes
sophiebaybey ¡ 21 days ago
Text
Not to preach to the choir, but I wonder if people generally realize that AI models like ChatGPT aren't, like, sifting through documented information when you ask them questions. When you ask one a question, it's not combing through relevant documentation to find your answer; it's using an intensely inefficient method of guesswork that has just gone through so many repeated cycles that it usually, sometimes, can say the right thing when prompted. It is effectively a program that simulates monkeys on typewriters at a mass scale until it finds sets of words that the user says "yes, that's right" to enough times. I feel like if it were explained in this less flattering way to investors it wouldn't be nearly as funded as it is lmao. It is objectively an extremely impressive technology given what it has managed to accomplish with such a roundabout and brain-dead method of getting there, but it's also a roundabout, brain-dead method of getting there. It is inefficient, pure and simple.
3K notes ¡ View notes
cogitotech ¡ 2 years ago
Text
0 notes
anghraine ¡ 2 months ago
Text
Nothing to do with anything else, but one of the most annoying aspects of autism is being hypersensitive to touch. I feel compelled to change into a very loose nightgown every evening because otherwise the sensation of anything against my skin by that time makes me want to claw my flesh from my body.
But, simultaneously, there's part of my mind that responds to ... like, a hug or something very small with "I yearn for human connection and some kind of touch however minimal, yay! a hug!" And meanwhile the other part is screaming "BAD BAD BAD RED ALERT BAD," like having a green and red light simultaneously flashing in my head.
Doesn't feel great, Bob!
22 notes ¡ View notes
37-feral-raccoons ¡ 1 month ago
Text
comp sci majors who also hate generative AI reblog please I need to know some people in my field are sane 😭
4 notes ¡ View notes
Text
I'm so tired of those ai "comic memes" people always post on facebook. So sick of them.
4 notes ¡ View notes
insert-game ¡ 2 months ago
Text
i hate gen AI so much i wish crab raves upon it
2 notes ¡ View notes
mister-forest ¡ 10 months ago
Text
Folks, make sure to turn on the option not to share your data, under "Visibility" in Settings ‼️‼️
5 notes ¡ View notes
marginal-effect ¡ 9 months ago
Text
getting a samsung phone was probably one of the worst mistakes i've made tech-wise in years. it's like having an iphone all over again, except worse somehow
4 notes ¡ View notes
lazer-exe ¡ 7 months ago
Text
"spotify wrapped was clearly AI"
Two questions. What, exactly, do you think AI is? And did you think spotify had people HAND PICKING your top songs before this???? be for real
4 notes ¡ View notes
professor-rye ¡ 10 months ago
Text
I feel like, with the uproar over Nanowrimo right now, we have an opportunity to really push back at shitty AI, but I feel like we also need to be smart about it.
Just saying "Generative AI is bad! Fuck you!" is not going to make a huge dent in shitty AI practices, because they'll just dismiss us out of hand. But if we ask the really hard-hitting questions, then we might be able to start making some level of progress.
Mozilla is actually doing a ton of good work towards this very goal.
They've been working to try to shift industry goals towards more transparent, conscientious, and sustainable practices, and I think their approach has a lot of promise.
AI is not inherently bad or harmful (hells, even generative AI isn't. It's just a tool, thus neutral at its core), but harmful practices and a lack of transparency mean we can't fucking trust them, at least in their current iterations.
But the cat is out of the fucking bag, and it's not going back in even if we do point out all the harm. Too many people like the idea of making their lives easier, and you can't deny the overwhelming potential that AI offers.
But that doesn't mean we have to tolerate the harm it currently causes.
3 notes ¡ View notes
frank-olivier ¡ 7 months ago
Text
Deep Learning, Deconstructed: A Physics-Informed Perspective on AI’s Inner Workings
Dr. Yasaman Bahri’s seminar offers a profound glimpse into the complexities of deep learning, merging empirical successes with theoretical foundations. Her distinct background, weaving together statistical physics, machine learning, and condensed matter physics, uniquely positions her to dissect the intricacies of deep neural networks. Her journey from a physics-centric PhD at UC Berkeley, influenced by computer science seminars, exemplifies the burgeoning synergy between physics and machine learning, underscoring the value of interdisciplinary approaches in elucidating deep learning’s mysteries.
At the heart of Dr. Bahri’s research lies the intriguing equivalence between neural networks and Gaussian processes in the infinite width limit, facilitated by the Central Limit Theorem. This theorem, by implying that the distribution of outputs from a neural network will approach a Gaussian distribution as the width of the network increases, provides a probabilistic framework for understanding neural network behavior. The derivation of Gaussian processes from various neural network architectures not only yields state-of-the-art kernels but also sheds light on the dynamics of optimization, enabling more precise predictions of model performance.
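The Central Limit Theorem argument above can be seen numerically. The sketch below (my own toy illustration, not code from the seminar) draws many random single-hidden-layer networks with the standard 1/√width output scaling and samples their output at a fixed input; as width grows, the sample mean sits near zero and the variance stabilizes, as the Gaussian-process picture predicts.

```python
import math
import random
import statistics

def wide_net_output(x, width, rng):
    """Output of one random single-hidden-layer network at input x:
    f(x) = (1/sqrt(width)) * sum_j v_j * tanh(w_j * x),
    with all weights drawn i.i.d. from N(0, 1)."""
    total = 0.0
    for _ in range(width):
        w = rng.gauss(0.0, 1.0)  # input-to-hidden weight
        v = rng.gauss(0.0, 1.0)  # hidden-to-output weight
        total += v * math.tanh(w * x)
    return total / math.sqrt(width)

rng = random.Random(0)
# Sample the output at a fixed input across many independently drawn networks.
samples = [wide_net_output(x=1.0, width=256, rng=rng) for _ in range(5000)]

mean = statistics.fmean(samples)
var = statistics.pvariance(samples)
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")
```

With the weight priors above, the mean is near 0 and the variance approaches E[v²]·E[tanh(wx)²], the diagonal of the corresponding GP kernel; a histogram of `samples` would look Gaussian, which is the CLT at work.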
The discussion on scaling laws is multifaceted, encompassing empirical observations, theoretical underpinnings, and the intricate dance between model size, computational resources, and the volume of training data. While model quality often improves monotonically with these factors, reaching a point of diminishing returns, understanding these dynamics is crucial for efficient model design. Interestingly, the strategic selection of data emerges as a critical factor in surpassing the limitations imposed by power-law scaling, though this approach also presents challenges, including the risk of introducing biases and the need for domain-specific strategies.
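As a toy illustration of how a power-law scaling exponent is read off a loss curve (synthetic noiseless data of my own construction, not results from the talk), a least-squares line fit in log-log space recovers the exponent:

```python
import math

# Hypothetical losses following a clean power law L(N) = a * N^(-b);
# the exponent b is what empirical scaling-law studies estimate.
a_true, b_true = 4.0, 0.5
sizes = [10 ** k for k in range(2, 8)]              # model sizes N
losses = [a_true * n ** (-b_true) for n in sizes]

# In log space the power law is a line: log L = log a - b * log N.
xs = [math.log(n) for n in sizes]
ys = [math.log(loss) for loss in losses]
m = len(xs)
x_mean = sum(xs) / m
y_mean = sum(ys) / m
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
b_est = -slope                                      # recovered exponent
a_est = math.exp(y_mean - slope * x_mean)           # recovered prefactor
print(f"estimated exponent b ~ {b_est:.3f}, prefactor a ~ {a_est:.3f}")
```

On real measurements the curve bends away from the line at both ends (noise floor, irreducible loss), which is exactly the "diminishing returns" regime the post describes; the fit then only applies over the power-law window.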
As the field of deep learning continues to evolve, Dr. Bahri’s work serves as a beacon, illuminating the path forward. The imperative for interdisciplinary collaboration, combining the rigor of physics with the adaptability of machine learning, cannot be overstated. Moreover, the pursuit of personalized scaling laws, tailored to the unique characteristics of each problem domain, promises to revolutionize model efficiency. As researchers and practitioners navigate this complex landscape, they are left to ponder: What unforeseen synergies await discovery at the intersection of physics and deep learning, and how might these transform the future of artificial intelligence?
Yasaman Bahri: A First-Principle Approach to Understanding Deep Learning (DDPS Webinar, Lawrence Livermore National Laboratory, November 2024)
youtube
Sunday, November 24, 2024
3 notes ¡ View notes