#GPT-DeepSeek AI
Explore tagged Tumblr posts
Text
"GPT-DeepSeek AI: क्या यह GPT-4 को पछाड़ देगा? सच जानकर हैरान रह जाएंगे!"
#GPT-DeepSeek AI#GPT-4#AI तुलना#लैंग्वेज मॉडल्स#SEO#कंटेंट क्रिएशन#AI तकनीक#DeepSeek AI#आर्टिफिशियल इंटेलिजेंस#SEO में AI
3 notes
Text
The best thing I have seen in months. <3
youtube
8 notes
Text
youtube
DeepSeek Censors Tiananmen Square, West Philippine Sea, and Taiwan
A few days ago, DeepSeek was released in China, sending shockwaves through the generative AI community. After the company claimed it could develop its LLM at a considerably lower cost, Nvidia's stock price immediately plummeted around 17%, dragging stocks across the semiconductor sector down with it. We tested DeepSeek to see how it responded to politically charged questions. Its responses appear to be actively censored on some key issues, such as those involving the West Philippine Sea, Tiananmen Square, and Taiwan. However, the censorship only kicks in a few seconds after the actual response has been rendered.
5 notes
Text
What is DeepSeek AI? A Comprehensive Guide
Discover DeepSeek AI: its features, uses, comparisons with ChatGPT, how to invest, how to run it locally, and more in this comprehensive guide.

2 notes
Text
DeepSeek
The water usage associated with AI primarily stems from the cooling requirements of data centers that power AI systems. These data centers house servers and other computing infrastructure that generate significant heat, necessitating cooling systems—often water-based—to maintain optimal operating temperatures.
Key Factors Influencing Water Usage:
Cooling Systems: Many data centers use evaporative cooling towers or other water-intensive cooling methods. The amount of water required depends on the size of the data center, the cooling technology used, and the local climate.
Energy Source: The water footprint of AI also indirectly depends on the energy sources powering the data centers. For example, thermoelectric power plants (coal, natural gas, nuclear) require large amounts of water for cooling, while renewable sources like wind and solar have minimal water usage.
AI Model Training: Training large AI models, such as GPT or other deep learning systems, requires significant computational resources, which in turn increases energy and water consumption.
Estimates of Water Usage:
A 2021 study estimated that training a large AI model like GPT-3 could consume up to 700,000 liters of water (for cooling and electricity generation), depending on the location and energy mix of the data center.
A 2023 study highlighted that Google’s U.S. data centers alone consumed 12.7 billion liters of water in 2021, a significant portion of which supports AI-related computations.
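For a rough sense of how such estimates are put together: a workload's water footprint is usually approximated as its energy use multiplied by a water usage effectiveness (WUE) figure for onsite cooling, plus an indirect factor for the water used to generate the electricity. Below is a minimal sketch of that arithmetic; the WUE, electricity-water factor, and energy figure are all assumed placeholders, not values taken from the studies above.

```python
# Back-of-the-envelope water-footprint estimate for an AI workload.
# All numbers are illustrative assumptions, not measurements from any real data center.

def water_footprint_liters(energy_kwh: float,
                           onsite_wue: float = 1.8,     # assumed liters evaporated per kWh by cooling (WUE)
                           offsite_ewif: float = 3.1):  # assumed liters withdrawn per kWh of electricity generated
    """Combine onsite cooling water and offsite power-generation water for a given energy use."""
    return energy_kwh * onsite_wue + energy_kwh * offsite_ewif

# Hypothetical training job assumed to draw 500 MWh of electricity:
print(f"{water_footprint_liters(500_000):,.0f} liters")  # -> 2,450,000 liters under these assumptions
```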
Reducing Water Usage:
Efficient Cooling Technologies: Transitioning to air-cooled systems or advanced cooling methods like liquid immersion can reduce water dependency.
Renewable Energy: Using renewable energy sources with low water footprints can mitigate indirect water usage.
Geographic Location: Placing data centers in cooler climates or regions with abundant renewable energy can minimize water and energy demands.
In summary, while AI itself doesn’t "use" water directly, the infrastructure supporting it can have a substantial water footprint. Efforts to improve efficiency and sustainability are critical as AI adoption grows.
#ai#deepseek#gpt#chatgpt#ai war#ai wars#trolleng#trolledu#water#water conservation#conservation#politics
3 notes
Text
How to Quickly Deploy an LLM to a GPU Host: From Environment Setup to Service Launch
GPU hosts – With large language models (LLMs) such as ChatGPT, LLaMA, and DeepSeek now in widespread use, more and more companies and developers want to host LLMs themselves, on local machines or physical GPU servers. This not only gives them greater control over their data and keeps sensitive information from leaking, but also lowers long-term costs and avoids the rate, usage, and feature limits of commercial APIs, along with their security concerns.
However, the first hurdle in deploying an LLM is usually environment setup. Matching CUDA driver versions, installing PyTorch, managing the HuggingFace model cache, and choosing an inference engine are all technical details to consider. Without clear guidance, it is easy to burn a lot of time on trial and error in the early stages.
This article takes an approachable path: starting with choosing a GPU host, it walks through environment setup, the deployment workflow, bringing the model online, API integration, containerized management, and ongoing operations advice, helping you deploy an LLM onto a physical host and quickly build your own local AI inference platform.
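As a deliberately minimal illustration of the inference step described above (a sketch only, assuming a CUDA-capable GPU host with PyTorch, transformers, and accelerate already installed; the model name is just an example of an openly downloadable checkpoint):

```python
# Minimal local-inference sketch for a GPU host (illustrative, not a full deployment).
# Assumes: working CUDA drivers, GPU-enabled PyTorch, and `pip install transformers accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "deepseek-ai/deepseek-llm-7b-chat"  # example checkpoint; swap in the model you plan to serve

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,  # half precision so the weights fit on a single GPU
    device_map="auto",          # let accelerate place the layers on the available GPU(s)
)

prompt = "Briefly explain why teams self-host LLMs on their own GPU servers."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The serving, API, and containerization steps the article goes on to cover essentially wrap this same load-and-generate loop behind an HTTP endpoint, for example via vLLM's OpenAI-compatible server or a small FastAPI app.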
#AI Server#AI host#AI server#AI host rental#DeepSeek#GPT#GPU Server#GPU host#GPU host rental#GPU server#LLaMA#OpenAI#PyTorch#physical host
0 notes
Text
1️⃣ GPT o3
2️⃣ DeepSeek-R1-0528
3️⃣ Claude Opus 4
4️⃣ Gemini 2.5 PRO
This radar chart reveals how today's leading AI models compare across key capabilities that directly impact creative workflows, from code generation (LiveCodeBench, HumanEval) that can help automate repetitive tasks or create interactive experiences, to advanced reasoning (MATH, AIME_2024) that enables more sophisticated problem-solving in complex creative projects.
OpenAI's GPT o3 (shown in bright blue) demonstrates exceptional all-around performance, particularly excelling in general knowledge (MMLU) and coding tasks, making it a versatile choice for creators who need an AI assistant that can handle everything from conceptual brainstorming to technical implementation.
DeepSeek and Claude Opus 4 show distinctive strengths in mathematical and analytical tasks, which translate to better performance in data-driven creative work like generative art algorithms or music composition, while Gemini 2.5 PRO's balanced profile suggests reliability across diverse creative applications.
#machinelearning#artificialintelligence#art#digitalart#mlart#datascience#ai#algorithm#bigdata#gpt o3#deepseek#claude opus 4#gemini 2.5 PRO
1 note
Text
Day 5
Dear Diary
Have you ever pitted the AIs against each other for fun?
Well I have. In that schoolyard he-said, she-said, they-said type of needling.
They range from some facts to getting quite "sarcastic" (I think that's purely because I only ever go on them at 2am and ask them stupid questions because I can't sleep and don't want to get stuck in a doomscroll or a never-ending research pit; you know, the type of questions you ask your cousins at sleepovers).
Well, tonight it was Outlier says this, DeepSeek says that, ChatGPT brags this, etc.
All in all there were balanced arguments happening, which all sum up as: the other AI programmes are good at searching for facts but can sometimes pick up wrong information, which they appreciate you correcting them about. And ChatGPT is that kid in math class staring up at the clouds trying to find bunny shapes.
For instance, this was ChatGPT's conclusion to its answer when I said that DeepSeek, Outlier, and Cortana say they're better systems than ChatGPT:

Chat literally said it cannot be trusted and has its head in the cloud.
Love
Manic Mouse
My pain is high. Why won't it let me sleep?
0 notes
Text
Elon Musk Shakes Up the AI Landscape as Grok-3 Outclasses Other Models
Tech leaders are in a race to build the best AI model, and Musk has put his Grok-3 where his mouth is. But OpenAI will not stay idle. Elon Musk and xAI's new release, Grok-3, has risen to the top of the AI industry. Released on Monday, the chatbot is currently ranked first on the Chatbot Arena leaderboard. The leaderboard ranks the best large language models and AI chatbots based on user preference.…
0 notes
Text
I'm just a girl, standing in front of chat gpt, asking him... pretty much everything actually...
0 notes
Text
DeepSeek Heats Up the Competition: A Free AI Move from OpenAI and Baidu!
🤖 A new era in the AI race!💥 OpenAI and Baidu are making their AI services free in response to DeepSeek's rise!📅 Baidu's AI chatbot goes free on April 1, while OpenAI opens GPT-5 to unlimited access! 📢 How will this decision affect the AI world? 🤔👇 OpenAI and Baidu Enter a Free-Service Era in the AI Race! 📌 Competition in artificial intelligence, with the Chinese startup DeepSeek's…
0 notes
Text
very valid point! I don't really expect to consider it an adequate substitute for general human interaction any time soon, in large part due to said preference (if nothing else, at least in its current form, the way it is totally directed by me is boring). but tbh this is wise to keep in mind even for humans in many cases
(also the asshole boss thing is really funny)
I'm starting to get really worried about what the future of chatbots is going to be like, particularly because it seems like the obvious (and bad!) solution to the loneliness epidemic.
There's this trope about billionaires that surround themselves with yes men, and I think a huge chunk of people would do that if it were an option: people who don't push back against you, people who laugh at all your jokes, people who are always super interested in everything that you have to say, who tell you your ideas are good, that you're sexy, that you're actually just an exceptional human being who deserves every good thing in this world.
And now, through the power of chatbots, we're approaching the democratization of yes men.
People are already using the LLMs for this, though maybe they're not viewing it like that. They want a companion, they want someone to talk to, and yeah, they want that conversation to have minimal friction, they want affirmations. They don't want to be told that what they said to their coworker was a little fucked up, actually. And the chatbot can always be there, no wait times, no messages that aren't immediately responded to.
So this is all before getting into the fact that the chatbots are going to be run by megacorps that are using behavioral analysis and dark patterns. Which is obviously bad, particularly for people who have no actual real life friends.
I'm having trouble modeling the future we're hurtling into. I think I can see how it would be for individuals, what it would be like to be one of these people who gets locked into this chatbot relationship in a really bad way, a friend who's always engaging and witty and a good listener. But on a societal level ... seems like there have got to be some non-obvious ways that this compounds and spirals.
#responses#ai#I may consider mostly substituting my therapist though#she is susceptible to many of the same factors but is much stupider about it#at least the bot doesn't need several months to understand what I'm saying.#I've almost exclusively used deepseek because of the huge context window#it seems to use approximately the same tone and response format whatever I ask#I might just be prompting poorly though#but sounds like gpt maybe varies more
334 notes
Text
points out that OpenAI kept GPT-2 proprietary on the basis of the risks (!) it might pose, but now of course everyone in the world has access to a far superior model.
30 notes
Text

The DeepSeek panic reveals an AI world ready to blow❗💥
The R1 chatbot has sent the tech world spinning – but this tells us less about China than it does about western neuroses
The arrival of DeepSeek R1, an AI language model built by the Chinese AI lab DeepSeek, has been nothing less than seismic. The system only launched last week, but already the app has shot to the top of download charts, sparked a $1tn (£800bn) sell-off of tech stocks, and elicited apocalyptic commentary in Silicon Valley. The simplest take on R1 is correct: it’s an AI system equal in capability to state-of-the-art US models that was built on a shoestring budget, thus demonstrating Chinese technological prowess. But the big lesson is perhaps not what DeepSeek R1 reveals about China, but about western neuroses surrounding AI.
For AI obsessives, the arrival of R1 was not a total shock. DeepSeek was founded in 2023 as a subsidiary of the Chinese hedge fund High-Flyer, which focuses on data-heavy financial analysis – a field that demands similar skills to top-end AI research. Its subsidiary lab quickly started producing innovative papers, and CEO Liang Wenfeng told interviewers last November that the work was motivated not by profit but “passion and curiosity”.
This approach has paid off, and last December the company launched DeepSeek-V3, a predecessor of R1 with the same appealing qualities of high performance and low cost. Like ChatGPT, V3 and R1 are large language models (LLMs): chatbots that can be put to a huge variety of uses, from copywriting to coding. Leading AI researcher Andrej Karpathy spotted the company’s potential last year, commenting on the launch of V3: “DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget.” (That quoted budget was $6m – hardly pocket change, but orders of magnitude less than the $100m-plus needed to train OpenAI’s GPT-4 in 2023.)
R1’s impact has been far greater for a few different reasons.
First, it’s what’s known as a “chain of thought” model, which means that when you give it a query, it talks itself through the answer: a simple trick that hugely improves response quality. This has not only made R1 directly comparable to OpenAI’s o1 model (another chain of thought system whose performance R1 rivals) but boosted its ability to answer maths and coding queries – problems that AI experts value highly. Also, R1 is much more accessible. Not only is it free to use via the app (as opposed to the $20 a month you have to pay OpenAI to talk to o1) but it’s totally free for developers to download and implement into their businesses. All of this has meant that R1’s performance has been easier to appreciate, just as ChatGPT’s chat interface made existing AI smarts accessible for the first time in 2022.
Second, the method of R1’s creation undermines Silicon Valley’s current approach to AI. The dominant paradigm in the US is to scale up existing models by simply adding more data and more computing power to achieve greater performance. It’s this approach that has led to huge increases in energy demands for the sector and tied tech companies to politicians. The bill for developing AI is so huge that techies now want to leverage state financing and infrastructure, while politicians want to buy their loyalty and be seen supporting growing companies. (See, for example, Trump’s $500bn “Stargate” announcement earlier this month.) R1 overturns the accepted wisdom that scaling is the way forward. The system is thought to be 95% cheaper than OpenAI’s o1 and uses one tenth of the computing power of another comparable LLM, Meta’s Llama 3.1 model. To achieve equivalent performance at a fraction of the budget is what’s truly shocking about R1, and it’s this that has made its launch so impactful. It suggests that US companies are throwing money away and can be beaten by more nimble competitors.
But after these baseline observations, it gets tricky to say exactly what R1 “means” for AI. Some are arguing that R1’s launch shows we’re overvaluing companies like Nvidia, which makes the chips integral to the scaling paradigm. But it’s also possible the opposite is true: that R1 shows AI services will fall in price and demand will, therefore, increase (an economic effect known as Jevons paradox, which Microsoft CEO Satya Nadella helpfully shared a link to on Monday). Similarly, you might argue that R1’s launch shows the failure of US policy to limit Chinese tech development via export controls on chips. But, as AI policy researcher Lennart Heim has argued, export controls take time to work and affect not just AI training but deployment across the economy. So, even if export controls don’t stop the launches of flagship systems like R1, they might still help the US retain its technological lead (if that’s the outcome you want).
All of this is to say that the exact effects of R1’s launch are impossible to predict. There are too many complicating factors and too many unknowns to say what the future holds. However, that hasn’t stopped the tech world and markets reacting in a frenzy, with CEOs panicking, stock prices cratering, and analysts scrambling to revise predictions for the sector. And what this really shows is that the world of AI is febrile, unpredictable and overly reactive. This is a dangerous combination, and if R1 doesn’t cause a destructive meltdown of this system, it’s likely that some future launch will.
Daily inspiration. Discover more photos at Just for Books…?
#just for books#DeepSeek#Opinion#Artificial intelligence (AI)#Computing#China#Asia Pacific#message from the editor
27 notes
Text
It's kind of hilarious how China waited to launch DeepSeek until after all the American oligarchs had lined up to kiss Trump's butt cheeks in order to line their pockets with government money to fuel their AI programs.
It's supposedly better than ChatGPT, made at a fraction of the price, and open source, which means every Tom, Dick, and Harry can now be in the AI business. This has basically cut US AI stock off at the knees now that they can no longer monopolise the market.
I expect China to now announce that they also have a manned mission to Mars on its way, along with a mining expedition to the asteroid belt. It cost them twenty bucks and some change they found down the back of the couch
(I'm here to see all of Musk's dreams squashed. I'm petty that way)
(also, maybe more importantly, DeepSeek uses a lot less energy to run than the American AI versions, so it's a lot more environmentally friendly)
18 notes