#OpenAI GPT-3
Text
Natural Language Processing and Robotics: The AIs That Are Changing Our World
Natural Language Processing (NLP): what is it and what is it for? Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on the interaction between computers and human language. Its main goal is to enable machines to understand, interpret, and generate human language in a natural and useful way. NLP sits at the intersection of…
#ASIMO#Boston Dynamics Atlas#DJI Drones#Facebook RoBERTa#Google BERT#conversational AI#IBM Watson#artificial intelligence#iRobot Roomba#Microsoft Azure Text Analytics#NLP#OpenAI GPT-3#Pepper#natural language processing#robotics
0 notes
Text
Explore my curated collection of top AI image generators that are not only incredibly powerful but also completely free! Whether you're a designer, marketer, or simply an art enthusiast, these tools are bound to spark your creativity and elevate your visual content to new heights.
#ai chatbot#midjourney#artificial intelligence#midjourney ai#chat gpt 4#openai#ai chat#open ai#generative ai#gpt 3#chatbot ai#chatbot online#chatbots#scale ai#openai chat#ai chatbot online#google ai chatbot#ai website#nvidia h100#conversational ai#openai chatbot#dall e#my ai#ai trading#data robot#ai chatbot gpt#shortly ai#best ai chatbot#tome ai#assembly ai
2 notes
Text
Is Artificial Intelligence Pushing the Planet Toward a New Climate Crisis?
The environmental footprint of artificial intelligence: between technological progress and climate threat. Discussions about the use of AI in business usually center on data security, ethical use, and the risks of over-reliance on this emerging technology. Yet there is growing debate and concern about AI's environmental footprint, and these concerns are legitimate and deserve attention. With AI's increasing impact on energy consumption…
#Exowatt#GPT-3#ISO 42001#OpenAI#Microsoft#data centers#language models#Amazon#energy efficiency#environmental responsibility#carbon emissions#environmental sustainability#blockchain#green technology#cloud computing#artificial intelligence#responsible AI#renewable energy#cryptocurrencies#technology emissions#water consumption#energy consumption#life cycle assessment#climate change#Google
0 notes
Text
Training AI's Real Carbon Footprint: Could AI Be the Next Big Energy Consumer After Bitcoin?
The adoption of artificial intelligence (AI) keeps growing; with innovations in automation, robotics, machine learning, and countless other advanced technologies, we can perform tasks with greater efficiency and precision. However, amid the buzz around AI's current and potential achievements lies the pressing issue of its rising energy consumption and sustainable growth. Should we treat AI as another energy predator, the way we treat Bitcoin?
The Surge in Electricity Consumption Driven by AI Models
The revolution in AI technology is underpinned by deep-learning-based language models, such as the GPT series developed by OpenAI and Gemini by Google DeepMind. These models require enormous computational capacity for training and execution. During training, billions of parameters (the scaffolding of an AI's "understanding") must be processed on powerful GPU clusters spread across an expansive network of data centers.
One study suggested that training an AI model consumes energy equivalent to a household's annual consumption. Across the industry, keeping energy-hungry AI models running in data centers around the world translates into millions of dollars in costs every year. Training GPT-3, for instance, is estimated to have consumed approximately 1,287 MWh of electricity, equivalent to about 550 tons of carbon dioxide emissions depending on the energy source. And that is a single model; as demand-driven deployments multiply, the industry's energy appetite keeps growing.
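As a quick back-of-envelope check on how those two numbers relate, here is a minimal sketch; the grid carbon intensity is an assumed illustrative value, since actual emissions depend on the data center's energy mix:

```python
# Back-of-envelope check of the GPT-3 training figures quoted above.
# The grid carbon intensity is an assumed illustrative value; real emissions
# depend on the data center's energy mix.
training_energy_mwh = 1_287            # estimated GPT-3 training energy
carbon_intensity_kg_per_kwh = 0.43     # assumed average grid intensity

energy_kwh = training_energy_mwh * 1_000
co2_tons = energy_kwh * carbon_intensity_kg_per_kwh / 1_000

print(f"{energy_kwh:,.0f} kWh -> ~{co2_tons:,.0f} t CO2")  # ~553 t, close to the ~550 t cited
```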
From Bitcoin to AI: A Looming Energy Battle?
Bitcoin mining verifies transactions through the proof-of-work (PoW) algorithm, a deliberately competitive, decentralized process that is extremely energy-hungry. This has drawn heavy criticism: Bitcoin mining's energy consumption is estimated at 120-150 terawatt-hours (TWh) per year, roughly comparable to the electricity use of entire countries such as the Netherlands or Argentina.
Bitcoin, however, now has competition in the form of AI, which appears to be catching up at an astonishing rate. Some estimates suggest that by 2025 the global energy demand for training and deploying AI systems could rival or surpass Bitcoin's current consumption levels. Unlike Bitcoin, AI's energy requirements are not limited to training; they also include inference for real-time applications such as chatbots, recommendation systems, and search engines.
The Environmental Impact
The trajectory of AI adoption and development is deeply concerning. Data centers that support the training and deployment of AI models account for roughly 1-2% of global electricity usage, and this figure will rise sharply as AI adoption accelerates around the globe. Where these data centers are powered by fossil fuels, the resulting carbon emissions further harm the climate and the environment, even as individuals are urged to cut their own carbon footprints.
Large companies are addressing some of these problems by investing in renewable energy and improving the efficiency of their hardware. For example, NVIDIA's new AI chips are built to perform more calculations per unit of energy. Still, these advances struggle to keep pace with the sheer scale at which AI systems are being built.
The Future: Balancing the Pace of Development with Sustainability
What, then, is the answer? Energy usage needs to be treated as a first-class concern, factored into hardware and software design from the very beginning. Techniques such as pruning and quantization trim the resources needed for computation while retaining sufficiently high levels of performance. Governments and the IT industry also need to devote serious attention to clean energy to sustain the AI infrastructure now being built.
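As a rough illustration of the quantization idea mentioned above, here is a minimal PyTorch sketch; the toy model is purely an assumption for demonstration, and a real deployment would also need to validate accuracy after quantization:

```python
# Minimal sketch of post-training dynamic quantization with PyTorch.
# The toy model and size comparison are illustrative only.
import os
import torch
import torch.nn as nn

model = nn.Sequential(          # stand-in for a much larger network
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Convert Linear layers from fp32 weights to int8, shrinking memory
# and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize the model's weights and report the file size in MB."""
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.1f} MB, int8: {size_mb(quantized):.1f} MB")
```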
At AIXCircle, we believe AI can change the world in many domains, such as health care and finance. These opportunities are too enticing to sit back and not grab. However, it is also essential that AI not become another technology like Bitcoin: one that offers a lot in terms of capability but fails to address the environmental challenges it creates.
Conclusion
The sharp rise in AI’s energy consumption serves as a reminder that model training should be approached with a balanced perspective toward sustainable resource use. As the competition to create environmentally responsible, resource-efficient AI systems heats up, it is critical to align innovations with ethical responsibility.
0 notes
Text
ChatGPT image generation is so popular the servers are about to melt; OpenAI imposes emergency rate limits, free users capped at 3 images a day
OpenAI recently rolled out ChatGPT-4o's enhanced AI image generation, letting users create more detailed, realistic images. The feature proved so popular, however, that it overloaded the servers. OpenAI CEO Sam Altman posted earlier on his personal account on the social media platform X: "Everyone loves this feature, but our GPUs are melting." Continue reading: ChatGPT image generation is so popular the servers are about to melt; OpenAI imposes emergency rate limits, free users capped at 3 images a day
0 notes
Text
Elon Musk Shakes Up the AI Landscape as Grok-3 Outclasses Other Models
Tech leaders are in a race to build the best AI model, and Musk has put his Grok-3 where his mouth is. But OpenAI will not stay idle. Elon Musk and xAI's new release, Grok-3, has risen to the top of the AI industry. Released on Monday, the chatbot is currently ranked first on the Chatbot Arena leaderboard. The leaderboard ranks the best large language models and AI chatbots based on user preference.…
0 notes
Text
Sam Altman: The Visionary Shaping AI, Startups, and the Future of Technology
Sam Altman’s journey began on April 22, 1985. He was born in Chicago, Illinois, but he grew up in St. Louis, Missouri. From the very start, Sam was a curious and tech-savvy kid. When he was just eight years old, he got his first computer. It was an Apple Macintosh. This event sparked a love for technology that would shape his life. Growing up in a Jewish family, Sam’s parents emphasized…
#Angel Investing#Artificial Intelligence#Clean Energy Ventures#Future of Technology#GPT-3#OpenAI#Sam Altman#Startup Success Stories#Tech Innovation#Y Combinator
0 notes
Text
"The Dream of Becoming a Hollywood Director, Realized by AI: The Infinite Possibilities of Sora"
Overview of Sora:
Developer: OpenAI
Technology: text-to-video AI
Feature: Creates videos up to 1 minute in length
Competitors: PIKA by Pika Labs, Gen-2 by Runway
Capabilities: Generates diverse characters, specific types of motion, and complex scenes
Current Status: Under internal testing, not yet released to the general public
Key Features Offered by Sora:
Image-to-Video: Converts images to…
#ai for oceans#ai free#Ai Lab#ai ui#chat openai#expert ai#open ai#open ai online#openai chat#openai gpt 3 chat
0 notes
Text
I decided to put here what chat gpt said about it all cause it was just so beautiful, so much better than how I explained it 😭💗
Chat GPT:
"That statement — "We are, all of us, stardust, held together by love for an instant" — is beautifully symbolic and rich in meaning. Let’s break it down:
#1. "We are, all of us, stardust"
This phrase emphasizes a sense of shared origin and interconnectedness. The idea that all living beings (and even non-living matter) are made of "stardust" is inspired by a scientific truth: the elements that make up our bodies, like carbon, oxygen, and nitrogen, were forged in the cores of stars billions of years ago. When those stars exploded as supernovae, they scattered these elements across the cosmos, which eventually formed planets, life, and us.
In a more poetic sense, this line reminds Aaravos that both elves and humans — indeed, all beings — share a common origin. It suggests that the boundaries between races or beings (like the immortal elves and mortals) are ultimately superficial because they all come from the same cosmic source.
---
#2. "Held together by love"
This part of the statement shifts from the cosmic to the emotional and spiritual. If "stardust" symbolizes the physical essence of all beings, "love" represents the force that gives meaning to their existence. It implies that the true binding force of life is not just physical matter, but the connections, compassion, and relationships that beings share.
For Aaravos, this could be a reminder that his love for his daughter — and her love for humans — is what gives their existence value, even in the face of tragedy. It also highlights that love, as fleeting as it might be, is what defines the experience of life, both for mortals and immortals.
---
#3. "For an instant"
This phrase introduces a poignant awareness of impermanence. It acknowledges that all existence, no matter how seemingly eternal, is temporary when viewed on a cosmic scale. To immortal beings, like Startouch elves, a human life might seem like a fleeting moment — and yet, even their own existence is but an "instant" in the vastness of the universe.
This emphasizes the fleeting nature of love, connection, and life itself. The Merciful One could be suggesting that Aaravos's grief, though immense now, is part of the transient nature of existence. Even the pain of loss is a reflection of the beauty of having loved in the first place.
---
Putting it Together
The Merciful One’s words seem to be an attempt at comforting Aaravos by placing his grief in a broader, cosmic context. They acknowledge the shared, fragile nature of all beings and suggest that love — even if temporary — is what gives meaning to life. By reminding him of the impermanence of all things, the Merciful One might also be gently encouraging him to find peace and let go of his century-long sorrow, knowing that his daughter’s love and actions mattered, even if they caused pain.
In essence, the statement is about unity, love, and the fleeting beauty of existence. It’s a way of saying: Though we are all made of the same cosmic material, it is love that gives us form and meaning in this brief moment we share."
There's something that I want to shed a little light on:
"We are, all of us, stardust, held together by love for an instant."
Yes, this quote irritates a lot of us, but there's actually something really meaningful about it...
You guys remember when Aaravos was talking to Sol Regem, and he repeated the words the Stern One said to him? "Sometimes the line between cruelty and mercy can be thin."
After the last episode of s6, that made me realize something about Aaravos: he cares about what others say to him (as also shown with Terry in s7), and he keeps their words with him, recalling what they said when the current moment reminds him of it. So this...

Right here, he was smiling as he repeated the words the Merciful One said to him. This says a lot imo. Those words meant something meaningful to him, as also shown by his expression here when they said it:

So, I'm going to say it again.
"We are, all of us, stardust, held together by love for an instant."
At first, I thought they were basically saying "yeah yeah we all feel sorry for you right now"
But no, it's more. I needed chat gpt to help me with this cause I just couldn't seem to fully understand this quote, but now I do lol
The Merciful One's words were a reminder. Startouch elves are immortal, and they understand time more than any other species in TDP. When they said that "all of us" are stardust, I believe they meant every creature.
But the part I want you to think about is the rest of the sentence: "held together by love for an instant"
Stardust is scattered (atoms, nitrogen, etc.) And what I think the Merciful One was saying is that love is what keeps us whole/connected. But because of their sense of time, they're reminding Aaravos that it's fleeting; such a brief moment in the vastness of time. So, this was basically what they said:
"Every one of us is made out of stardust, and love is what keeps us whole/connected, a fleeting moment in the vastness of time."
There are two ways to interpret what they were trying to tell him by reminding him of this fact:
1) This was the nature of things, and it was time to stop mourning...
2) For all of them, the moments experienced are brief, so focus not on how Leola's gone, but on the little time he spent with her
And that's why I believe Aaravos repeated those words: because at that moment, the focus was love.
#Oh. My. God.#this actually brought tears to my eyes#THANK YOU CHAT GPT 💞#this was the latest OpenAI btw#thedragonprince#the dragon prince#tdp#tdp s6#tdp s6 spoilers#aaravos#tdp aaravos#the merciful one#tdp the merciful one#continuethesaga#giveusthewholesaga#greenlight arc 3#netflix#fandom#tdp fandom#twinkling star ✧˖°.#shooting star ͙͘͡★
44 notes
Text
#history#knowledge#artificial intelligence#black history#tech#technology#gpt4#gpt#chat gpt#gpt 3#chatgpt#ai technology#custom gpt#openai
0 notes
Text
❗Disclaimer: This guide does not judge the use of AI as support or inspiration. Its purpose is to help detect abusive narration. The patterns you will read below can appear in anyone's writing, because we all use them naturally. What we want to flag is when these structures show up repetitively, inorganically, and in an automated way.
When the artificial intelligence boom reached the forums, its use was more obvious. Many texts were easy to spot, and the debate arose over whether we should allow it or not. We had neither the tools nor the experience to tell a handwritten text from an AI-generated one, so the matter was left hanging: intuition, guesswork, and confusion. As time went on and these tools evolved, it became harder to notice the difference. As administrators, we began to investigate, to pay closer attention, to analyze how AI writes, which patterns it repeats, which tics it leaves behind. We found structures, turns of phrase, words, and devices that appeared again and again. And we realized that most users use it, to a greater or lesser extent.
Today we know this can no longer be fought. AI is here to stay, and although we cannot prevent its use, we do believe we are within our rights to learn to recognize it. This guide grew out of everything we have observed and learned, and its goal is not to judge but to give you the tools you need so that each of you can decide, by your own criteria, who you want to write with. Because even if artificial intelligence becomes normalized, the right to choose is still ours.
📌 "Although AI was already in use in 2023, the launch of GPT-4.5 on February 27, 2025 notably raised narrative quality: it gained fluency, emotional coherence, and naturalness, becoming practically undetectable without careful analysis." – Source: ChatGPT (GPT-4.5 model, OpenAI, verified information).
1. Drastic changes in writing style
One of the most obvious ways to detect AI use on a roleplay forum (not only by users, but by the administrators themselves) is knowing someone's track record. Many of us have been in this hobby for years. We have roleplayed together, read each other, taken part in forums that stayed active for a long time. That means we already know how certain users write, what their style is, how they express themselves. So when there is a very abrupt change, it shows.
Ask yourself these questions:
Did this person already write this well years ago?
Did they use that kind of phrasing, those poetic structures, those complex metaphors?
Did they write long posts, with varied vocabulary and narrative rhythm?
Or did they write in a plainer, simpler way, without much flourish?
If that user already wrote like this before AI became popular, they most likely are not using AI, or if they are, they use it as support rather than a substitute. But if their style has changed abruptly, going from flat or more neutral narration to texts full of loaded sentences and literary ornament... then it is legitimate to suspect they are using artificial intelligence. Natural improvement is possible, of course. We all learn, evolve, and write better over time. But that change is usually gradual and happens within a recognizable personal style.
2. Constant comparisons and metaphors
AI tries to embellish its prose by default unless you ask it not to. This leads to constant use of comparisons, repetitive and sometimes outright forced.
"Her eyes, like two broken moons." "A shadow, like the memory of an old dream." "The light trembled over her face like a sigh."
3. Comma + "his/her…": this construction, a comma followed by a possessive clause ("su…" in the original Spanish), is very typical of AI.
"He walked toward her, his shadow stretching over the cobbled ground." "She gave him a fleeting smile, her voice barely a whisper." "He stopped at the edge of the cliff, his breathing ragged from the run." "She looked at the ring between her fingers, her mind lost in memories."
4. Starting narrations with the character's name
This is one of the easiest patterns to spot when a user copies and pastes straight from the AI without editing the post. There is nothing wrong with doing it now and then, but if they usually open their posts the same way, with "Name + verb", it is a clear hint that the text was probably written by an artificial intelligence. Many narrations start this way because the prompt usually follows the structure "Write from X's point of view", so the AI picks up from there.
5. Involuntary metagaming ("metarol")
Unless specifically told otherwise, AI tends to metagame: it describes what the other character feels, thinks, or does, even when that was never narrated. Unless it is guided well, the AI does not respect the boundaries between characters and ends up narrating for the others, even if only very subtly.
6. Clichéd, grandiose, unnatural lines
AI loves to craft lines that sound powerful but are empty or badly forced, as if lifted from a melodramatic film or a Wattpad novel.
"You are the storm... but I am what remains when the wind dies down." "You were my light. But I was born to reign in the shadows." "I did not come to stay. I came to raze everything."
7. Heavily repeated words
AI tends to overuse certain words again and again across different narrations. Some of the most common (in Spanish) are "antiguo" (ancient), "apenas" (barely), "ancestral", and "denso" (dense).
"An ancient energy flowed from the fountain." "It gave off a barely perceptible light."
8. Poetic or forced contrasts
This is a very typical AI tic: contrastive structures of the form "Not X, but Y", or "It wasn't for this reason, but for that one", aiming for a deep, nuanced, or poetic air... but when it is repeated too often, it gives the game away.
"It was not darkness. It was rest." "He did not say it as a threat, but as a promise." "Not with the urgency of desire, but with the calm of someone who already knows the ending."
9. Grandiose paragraph endings
AI tends to close paragraphs or posts with lines that sound like a verdict, a metaphor, or an emotional punchline. These closers are usually short sentences or brief paragraphs that sum up the emotion, the conflict, or the essence of the moment with a dramatic, polished, often poetic touch that creates a sense of "grand finale".
"And even as the world around her crumbled to ash, she held her gaze steady, knowing that the real battle is always fought in silence." "Because not everything is won with swords or fire; sometimes the greatest victory is to endure without losing hope in the deepest darkness." "As the shadows advanced, he knew his story would not end in that instant; it would be written in every breath he had yet to take."
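For anyone who wants to automate a first pass over the tics listed above, here is a minimal sketch. The regexes target the original Spanish constructions from this guide, and the per-100-words metric and sample text are illustrative assumptions; as the guide itself stresses, no counter like this is proof on its own.

```python
# Rough heuristic sketch: count some of the tics described in this guide.
# The regexes and metric are illustrative, not a reliable AI detector.
import re

PATTERNS = {
    "simile (como ...)":           r"\bcomo\b",
    "comma + possessive (, su)":   r",\s+sus?\b",
    "contrast (no ..., sino ...)": r"\bno\b[^.]{0,80}\bsino\b",
    "overused words":              r"\b(antigu[oa]s?|apenas|ancestral(?:es)?|dens[oa]s?)\b",
}

def tic_report(post: str) -> dict:
    """Return how often each tic appears per 100 words of the post."""
    words = max(len(post.split()), 1)
    return {
        name: round(len(re.findall(rx, post, flags=re.IGNORECASE)) * 100 / words, 2)
        for name, rx in PATTERNS.items()
    }

sample = ("Caminó hacia ella, su sombra alargándose como un suspiro. "
          "No era oscuridad, sino descanso.")
print(tic_report(sample))  # high counts are only a hint, never proof by themselves
```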
This guide does not claim to offer absolute truths or an infallible formula for detecting AI-generated narration. Even so, some people simply write this way because it is their style, and that also has to be kept in mind. Not every text that matches these patterns is necessarily AI, and not every text that avoids them is one hundred percent human. In the end, detecting AI is also a matter of intuition, perception, and observation.
This guide is just a tool. Not to judge, but to help you observe and make informed decisions about who you share your stories with. Because in the end, roleplay is not only about writing: it is about connecting. And that is something no AI can fully imitate.
❗Have you spotted other patterns? If you, reading this, have noticed other signs, verbal tics, phrases, or patterns that you think might also be signs of AI, we would love for you to share them with us. That way we can expand this guide together and have a more complete tool to look after one another as a community.
#kaelkoth#guide#artificial intelligence#AI#foroactivo#roleplay forum#play-by-post roleplay#writing#narrative#roleplay#Spanish-language roleplay
124 notes
Text
You know how Greta Thunberg said "You have stolen our dreams"?
This is how I feel about Sam Altman and AI.
I was *robbed* of a future where AI is a cool tool, instead of yet another shiny, meaningless tech buzzword and a shit feature nobody wants, bolted on to boost sales. Instead of something to help us better diagnose cancer, we are setting the planet on fire and completely disregarding everything Hayao Miyazaki has said about how he feels about AI, all just to see how we'd look as Studio Ghibli characters.
You see, I study AI. But I applied before the whole ChatGPT thing. At the time, OpenAI let a few select people prompt GPT-3. To generate YouTube titles and that sort of thing.
Back then, AI was mostly used for analytical purposes. To detect fires early, to help analyze protein folding, to develop new medication. And this was what drew me in.
When ChatGPT hit the scenes, I was genuinely excited for the potential of it. For the potential to make the internet more accessible, to be used for good.
Oh, how naïve I was back then.
Instead of that, AI is - in the best case scenario - used as yet another meaningless tech buzzword. It infests any product of any company that has nothing else to offer.
And that is the best case scenario. In the average case, instead of just being enshittification itself, it helps to accelerate enshittification by generating meaningless slop to poison search results, both in text and in picture form.
In the worst case scenario, AI is actively being used for harm. Used to generate nonconsensual imagery of people. Used as a tool for misinformation, for manipulating the public opinion, not only enshittifying the internet, but actively making it a worse, more hostile, more adverse place.
And that does not even touch on the issue of how training data is gathered, and the legal and ethical problems this raises, which, I hope, being on Tumblr, you're all well aware of by now. To any artist, I fully support you using nightshade to actively poison your work.
So yes. Despite being a student of AI, I am disgusted with what this field has become.
The following paragraphs are directed at anyone who has worked or currently works on any generative AI system:
You have stolen my dreams.
Not only have you stolen my dreams, you have plundered them for every dollar, every cent, against any moral or ethical code, in search of profits over everything.
You are going against every moral code that people should be committed to. But you don't care, as long as you can make a quick buck.
You don't care if Hayao Miyazaki has called generative AI "an insult to life itself". You just want to see yourself in the Studio Ghibli style, because to you, everything, even art, is something to be commoditized, to be mass-produced just so it can be instantly forgotten.
FUCK YOU AND THE MECHANICAL HORSE YOU RODE IN ON.
127 notes
Text
AI Reminder
Quick reminder folks since there's been a recent surge of AI fanfic shite. Here is some info from Earth.org on the environmental effects of ChatGPT and its fellow AI language models.
"ChatGPT, OpenAI's chatbot, consumes more than half a million kilowatt-hours of electricity each day, which is about 17,000 times more than the average US household. This is enough to power about 200 million requests, or nearly 180,000 US households. A single ChatGPT query uses about 2.9 watt-hours, which is almost 10 times more than a Google search, which uses about 0.3 watt-hours.
According to estimates, ChatGPT emits 8.4 tons of carbon dioxide per year, more than twice the amount that is emitted by an individual, which is 4 tons per year. Of course, the type of power source used to run these data centres affects the amount of emissions produced – with coal or natural gas-fired plants resulting in much higher emissions compared to solar, wind, or hydroelectric power – making exact figures difficult to provide.
A recent study by researchers at the University of California, Riverside, revealed the significant water footprint of AI models like ChatGPT-3 and 4. The study reports that Microsoft used approximately 700,000 litres of freshwater during GPT-3’s training in its data centres – that’s equivalent to the amount of water needed to produce 370 BMW cars or 320 Tesla vehicles."
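As a quick sanity check on how a couple of those figures hang together, here is a back-of-envelope sketch; the average US household consumption (~29 kWh per day) is an assumed value, not from the Earth.org article.

```python
# Back-of-envelope check of two of the Earth.org figures quoted above.
# The US household average is an assumed value (~29 kWh/day).
chatgpt_daily_kwh = 500_000      # reported daily ChatGPT consumption
household_daily_kwh = 29         # assumed average US household consumption
per_query_wh = 2.9               # reported energy per ChatGPT query
google_query_wh = 0.3            # reported energy per Google search

print(f"~{chatgpt_daily_kwh / household_daily_kwh:,.0f}x a household's daily use")
print(f"~{per_query_wh / google_query_wh:.0f}x a Google search")
# -> roughly 17,000x and 10x, matching the figures in the quote
```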
Now I don't want to sit here and say that AI is the worst thing that has ever happened. It can be an important tool for advancing technology! However, it has quite a few drawbacks that we have not yet figured out how to mitigate, especially for the environment, when it is not used wisely. Likewise, AI is not meant to do the work for you; it's meant to assist. For example, having it spell check your work? Sure, why not! Having it write your work and fics for you? You are stealing from others who worked hard to produce beautiful work.
Thank you for coming to my Cyn Talk. I love you all!
238 notes
Text
Recent advances in artificial intelligence (AI) have generalized the use of large language models in our society, in areas such as education, science, medicine, art, and finance, among many others. These models are increasingly present in our daily lives. However, they are not as reliable as users expect.
This is the conclusion of a study led by a team from the VRAIN Institute of the Universitat Politècnica de València (UPV) and the Valencian School of Postgraduate Studies and Artificial Intelligence Research Network (ValgrAI), together with the University of Cambridge, published today in the journal Nature. The work reveals an "alarming" trend: compared to the first models, and considering certain aspects, reliability has worsened in the most recent models (GPT-4 compared to GPT-3, for example).
According to José Hernández-Orallo, researcher at the Valencian Research Institute in Artificial Intelligence (VRAIN) of the UPV and ValgrAI, one of the main concerns about the reliability of language models is that their performance does not align with the human perception of task difficulty. In other words, there is a discrepancy between expectations that models will fail according to human perception of task difficulty and the tasks where models actually fail. "Models can solve certain complex tasks according to human abilities, but at the same time fail in simple tasks in the same domain. For example, they can solve several doctoral-level mathematical problems, but can make mistakes in a simple addition," points out Hernández-Orallo.
In 2022, Ilya Sutskever, the scientist behind some of the biggest advances in artificial intelligence in recent years (from the Imagenet solution to AlphaGo) and co-founder of OpenAI, predicted that "perhaps over time that discrepancy will diminish." However, the study by the UPV, ValgrAI, and University of Cambridge team shows that this has not been the case. To demonstrate this, they investigated three key aspects that affect the reliability of language models from a human perspective.
25 September 2024
50 notes
Text
A welcome result of the increased efficiency of the models—both the hosted ones and the ones I can run locally—is that the energy usage and environmental impact of running a prompt has dropped enormously over the past couple of years.
OpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days. I have it on good authority that neither Google Gemini nor Amazon Nova (two of the least expensive model providers) are running prompts at a loss.
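As a rough illustration of that "100x" figure, here is a minimal back-of-envelope sketch; both prices are assumed approximate list prices per million tokens (GPT-3 davinci circa 2022 versus a current low-cost small model), used purely for illustration:

```python
# Rough sketch of the "100x cheaper per prompt" claim.
# Both prices are assumed approximate list prices (USD per 1M tokens).
gpt3_davinci_per_mtok = 20.00    # assumed ~2022 GPT-3 davinci price
small_model_per_mtok = 0.15      # assumed current low-cost model, input tokens

ratio = gpt3_davinci_per_mtok / small_model_per_mtok
print(f"~{ratio:.0f}x cheaper per token")  # on the order of 100x
```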
I think this means that, as individual users, we don't need to feel any guilt at all for the energy consumed by the vast majority of our prompts. The impact is likely negligible compared to driving a car down the street or maybe even watching a video on YouTube.
Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign that training costs can and should continue to drop.
For less efficient models I find it useful to compare their energy usage to commercial flights. The largest Llama 3 model cost about the same as a single digit number of fully loaded passenger flights from New York to London. That’s certainly not nothing, but once trained that model can be used by millions of people at no extra training cost.
good news! or bad news, depending on how you see it.
32 notes