#Gemini25Pro
techfinancehubplus · 17 days ago
Google Unveils Deep Search: Gemini 2.5 Pro-Powered AI Transforms Online Research
Google is once again pushing the boundaries of artificial intelligence, rolling out a groundbreaking Deep Search feature powered by its new Gemini 2.5 Pro model. Marketed as a personal research assistant, Deep Search promises to revolutionize the way users gather and synthesize information from across the web—delivering comprehensive, citation-backed reports in minutes.
An AI That Researches for You
Deep Search operates with a level of autonomy previously unseen in consumer research tools. Users can initiate queries as simple as “best electric cars for families” or as complex as “global impact of AI on the job market.” Behind the scenes, the system devises a custom research plan, scours hundreds of websites, and critically examines sources before presenting findings with clarity and nuance. Its integrated Audio Overview option even summarizes results for users on the go.
Powered by Gemini 2.5 Pro
At the core of this innovation is Gemini 2.5 Pro, Google’s state-of-the-art large language model. Unlike earlier models, Gemini 2.5 Pro demonstrates advanced reasoning, superior instruction following, and enhanced multimodal capabilities—enabling it to assess images, charts, and data alongside text. Users receive organized, citation-rich reports ideal for competitive analysis, due diligence, academic work, and major decisions like home buying.
Competing—and Winning—Against Rivals
Google claims Deep Search already outperforms major competitors. In head-to-head comparisons with a rival research model from OpenAI, users preferred the Gemini 2.5 Pro-driven results nearly 70% of the time, citing in particular gains in thoroughness and writing quality.
Access and Availability
Deep Search is currently exclusive to Google AI Pro and AI Ultra subscribers through gemini.google.com and the Gemini mobile apps in the U.S. While early adopters are professionals and students, the feature is poised to change how everyone—from business leaders to casual enthusiasts—tackles complex online searches.
The Future of Web Research
By turning AI into a hands-on research assistant, Google is advancing its vision of a smarter, more accessible internet. As generative models like Gemini 2.5 Pro take center stage, the company’s latest move signals an era where users no longer settle for lists of links—they expect actionable, in-depth knowledge, and Deep Search aims to deliver just that.
govindhtech · 3 months ago
Qwen 3 Benchmarks Surpass Gemini 2.5 Pro and Grok-3
Four months after DeepSeek-R1 claimed the top spot among open-weights large language models, Alibaba's new model family may surpass it.
Qwen 3: Faster, Deeper
Overview
Qwen3 is the latest generation of large language models from Qwen. The flagship model, Qwen3-235B-A22B, exceeds DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro in math, coding, and general capabilities. The small MoE model Qwen3-30B-A3B outperforms QwQ-32B, which uses roughly ten times as many active parameters, and even Qwen3-4B can rival Qwen2.5-72B-Instruct.
Alibaba is open-weighting two MoE models: Qwen3-235B-A22B, a large model with 235 billion total parameters and 22 billion activated parameters, and Qwen3-30B-A3B, a smaller model with 30 billion total parameters and 3 billion activated parameters.
Six dense models—Qwen3-32B, Qwen3-14B, Qwen3-8B, Qwen3-4B, Qwen3-1.7B, and Qwen3-0.6B—are also open-weighted under Apache 2.0.
Hugging Face, ModelScope, and Kaggle now host post-trained models as well as pre-trained base models such as Qwen3-30B-A3B-Base. The Qwen team recommends SGLang and vLLM for deployment, and Ollama, LMStudio, MLX, llama.cpp, and KTransformers for local use. These options make Qwen3 easy to integrate into development, production, and research workflows.
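As a concrete illustration, a model served with vLLM or SGLang exposes an OpenAI-compatible endpoint. Below is a minimal Python sketch of querying such a local server; the model name, port, and placeholder API key are illustrative assumptions, not fixed values.

```python
# Query a locally served Qwen3 model through an OpenAI-compatible API
# (as exposed by vLLM or SGLang). Endpoint and model name are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local server's OpenAI-compatible API
    api_key="EMPTY",                      # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",
    messages=[{"role": "user", "content": "Summarise the Qwen3 model family."}],
)
print(response.choices[0].message.content)
```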
Qwen 3 allows researchers, developers, and organisations worldwide to design unique solutions using these cutting-edge models.
Try Qwen3 on the mobile app and chat.qwen.ai!
Important Features
Hybrid Thinking
Qwen3 models introduce hybrid problem-solving. They offer two modes:
Thinking Mode: The model deliberates before responding. This is ideal for complex topics that require more thought.
Non-Thinking Mode: The model responds almost instantly, making it suitable for simpler questions where speed matters more than depth.
As noted above, Qwen3's performance scales smoothly with the computational reasoning budget. This design makes it easy to configure task-specific thinking budgets, balancing inference quality against cost.
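A minimal sketch of toggling the two modes with Hugging Face transformers is shown below; the `enable_thinking` flag follows the usage documented in the Qwen3 release, while the model choice and prompt are illustrative.

```python
# Toggle Qwen3's thinking mode via the chat template's enable_thinking flag.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B"  # illustrative; any Qwen3 checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [{"role": "user", "content": "How many primes are there below 100?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # set to False for fast, non-thinking replies
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:],
                       skip_special_tokens=True))
```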
Multilingual Support
Qwen3 models support 119 languages and dialects. This broad multilingual coverage makes the models usable worldwide, opening up new possibilities.
Increased Agentic Capability
The Qwen team optimised Qwen3 for coding and agentic capabilities and strengthened support for MCP (Model Context Protocol); the Agentic Uses section below sketches how Qwen3 thinks and acts with tools.
Comparison with Qwen2.5
Qwen3's pretraining dataset is roughly twice the size of Qwen2.5's: Qwen2.5 was pre-trained on 18 trillion tokens, whereas Qwen3 uses about 36 trillion spanning 119 languages and dialects. Qwen2.5-VL was used to extract text from documents and Qwen2.5 to improve its quality, while Qwen2.5-Math and Qwen2.5-Coder generated synthetic math and code data, including code snippets, textbooks, and question-and-answer pairs.
Qwen3 Pre-training
Pre-training proceeds in three stages. In stage 1 (S1), the model was pretrained on about 30 trillion tokens with a 4K context length, learning basic language skills and general knowledge. In stage 2 (S2), the dataset was enriched with STEM, coding, and reasoning data, and the model was pretrained on 5 trillion additional tokens. In the final stage, high-quality long-context data extended the context window to 32K tokens, ensuring the model can handle longer inputs efficiently.
Thanks to architectural advances, more training data, and more efficient training methods, Qwen3 dense base models match Qwen2.5 base models with far more parameters: Qwen3-1.7B/4B/8B/14B/32B-Base perform comparably to Qwen2.5-3B/7B/14B/32B/72B-Base, respectively, and outperform them in STEM, coding, and reasoning. Qwen3 MoE base models reach similar performance to Qwen2.5 dense base models while activating only about 10% as many parameters, dramatically cutting both training and inference costs.
Post-training
The hybrid model, which can both reason step-by-step and respond swiftly, was trained with a four-stage pipeline: long chain-of-thought (CoT) cold start, reasoning-based reinforcement learning (RL), thinking-mode fusion, and general RL.
In the first stage, the models were fine-tuned on long CoT data covering coding, math, logical reasoning, and STEM problems, with the goal of teaching fundamental reasoning skills. The second stage scaled up compute for reinforcement learning, using rule-based rewards to improve the model's exploration and exploitation.
The third stage fused non-thinking capabilities into the thinking model by fine-tuning it on a mix of long CoT data and commonly used instruction-tuning data; this data was generated by the enhanced thinking model from the second stage, ensuring both smooth reasoning and fast responses. The fourth stage applied RL across more than 20 general-domain tasks, among them instruction following, format following, and agent capabilities, to strengthen general abilities and correct undesired behaviours.
Agentic uses
Qwen3 excels at tool calling. To exploit its agentic capabilities fully, use Qwen-Agent, which internally encapsulates tool-calling templates and parsers, greatly simplifying development.
Available tools can be defined through an MCP configuration file, Qwen-Agent's integrated tools, or custom tools, as in the sketch below.
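Here is a minimal sketch of such a setup, adapted from the patterns in the Qwen3 announcement; the endpoint, model name, and MCP server entries are illustrative assumptions.

```python
# Build a tool-using Qwen3 agent with Qwen-Agent. The LLM endpoint, model
# name, and MCP server configuration below are illustrative placeholders.
from qwen_agent.agents import Assistant

llm_cfg = {
    "model": "Qwen3-30B-A3B",
    "model_server": "http://localhost:8000/v1",  # any OpenAI-compatible endpoint
    "api_key": "EMPTY",
}

tools = [
    {
        # Tools defined via an MCP configuration...
        "mcpServers": {
            "fetch": {"command": "uvx", "args": ["mcp-server-fetch"]},
        }
    },
    "code_interpreter",  # ...alongside Qwen-Agent's built-in tools
]

bot = Assistant(llm=llm_cfg, function_list=tools)

messages = [{"role": "user",
             "content": "Fetch https://qwenlm.github.io/blog/ and summarise it."}]
for responses in bot.run(messages=messages):
    pass  # bot.run streams partial responses; keep the final one
print(responses)
```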
globalresourcesvn · 2 months ago
Google Gemini 2.5 Pro: A Breakthrough Preview for Businesses and Developers!
#GoogleGemini #AI #Gemini25Pro #TríTuệNhânTạo #CôngNghệCao #DoanhNghiệp #PhátTriển Google has officially rolled out a preview version of Gemini 2.5 Pro, a major leap beyond the version unveiled at the I/O 2025 event in May. This is remarkable news for businesses and developers…
globalresourcesvn · 3 months ago
Gemini 2.5 Pro Preview: Google "Unleashes" Early Access for Developers!
#Gemini25Pro #Google #AI #PhátTriểnỨngDụng #MãHóa #CôngNghệ #TríTuệNhânTạo #I/OEdition Google officially announced early access to a preview version of Gemini 2.5 Pro (I/O Edition) this past Tuesday. This is a notable update to the Gemini 2.5 Pro large language model, designed especially…