Remote MCP server, Code Interpreter, Image Generation in API

OpenAI Responses API
Developers and organisations can now use the Responses API with Code Interpreter, image generation, and remote MCP server functionality.
Today, OpenAI's Responses API, the API for building agentic apps, gains new features: image generation, Code Interpreter, improved file search, and support for remote Model Context Protocol (remote MCP) servers. These tools work with OpenAI's o-series reasoning models, GPT-4.1, and GPT-4o.
The Responses API lets o3 and o4-mini call tools and functions directly within their chain of thought, producing more relevant and contextual responses. By retaining reasoning tokens across requests and tool calls, o3 and o4-mini on the Responses API deliver better results while reducing developer costs and latency.
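To illustrate, here is a minimal sketch of chaining requests so reasoning context carries over, assuming the official openai Python SDK; the prompts are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# First request: the model reasons about the task.
first = client.responses.create(
    model="o4-mini",
    input="Summarise the key risks in this earnings report: ...",
)

# Follow-up request: previous_response_id links the calls so the model
# can reuse its earlier reasoning instead of rebuilding it from scratch.
followup = client.responses.create(
    model="o4-mini",
    previous_response_id=first.id,
    input="Which of those risks is most material, and why?",
)
print(followup.output_text)
```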
The Responses API is a core building block for agentic systems. Since its launch in March 2025, hundreds of thousands of developers have used it to process billions of tokens for agentic applications such as education aids, market intelligence agents, and coding agents.
The new features and built-in tools improve the functionality and dependability of agentic systems built with the Responses API.
Additional Responses API Resources
Several new built-in tools are incorporated into the Responses API:
Remote MCP Server Support
API tools can now connect to remote Model Context Protocol (remote MCP) servers. MCP is an open protocol that standardises how applications provide context to Large Language Models (LLMs). MCP servers let developers connect OpenAI models to services such as Cloudflare, HubSpot, Intercom, PayPal, Plaid, Shopify, Stripe, Square, Twilio, and Zapier with minimal code. OpenAI has also joined the MCP steering committee to help improve the standard and its ecosystem.
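As a rough sketch of how a remote MCP server is attached, assuming the official openai Python SDK (the server URL and label here are hypothetical placeholders, not a real endpoint):

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "storefront",             # your own label for the server
        "server_url": "https://example.com/mcp",  # hypothetical MCP endpoint
        "require_approval": "never",              # or gate each tool call with approvals
    }],
    input="Which products are currently in stock?",
)
print(resp.output_text)
```

The model discovers the server's tools and decides during reasoning when to call them.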
Image Generation
Developers can use OpenAI's latest image generation model, gpt-image-1, as a tool in the Responses API. The tool supports multi-turn edits for granular, step-by-step image editing through prompts, as well as real-time streaming of image previews. While the Images API can also produce images, the Responses API's image generation tool adds these conversational capabilities. Among the o-series reasoning models, only o3 supports this tool.
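A minimal sketch of invoking the image generation tool, assuming the openai Python SDK; the prompt and output filename are illustrative:

```python
import base64
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-4.1",
    input="Generate an image of a lighthouse at dusk.",
    tools=[{"type": "image_generation"}],
)

# Generated images come back base64-encoded inside the output items.
for item in resp.output:
    if item.type == "image_generation_call":
        with open("lighthouse.png", "wb") as f:
            f.write(base64.b64decode(item.result))
```

Multi-turn edits then work by sending a follow-up request that references the earlier one, e.g. via previous_response_id.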
Code Interpreter
This tool is now available in the Responses API. Code Interpreter helps with data analysis, complex mathematics, and coding challenges, and enables "thinking with images" by letting models inspect and manipulate images programmatically. Models such as o3 and o4-mini perform better on Humanity's Last Exam when they use Code Interpreter.
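As a sketch of how Code Interpreter is invoked, assuming the openai Python SDK; the "auto" container setting asks the API to provision a sandbox:

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="o4-mini",
    tools=[{
        "type": "code_interpreter",
        "container": {"type": "auto"},  # let the API provision a sandboxed container
    }],
    input="Use Python to compute the standard deviation of [3, 7, 7, 19] "
          "and show your work.",
)
print(resp.output_text)
```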
Enhancements to File Search
The API has offered file search since March 2025, and new functionality has now been added. The file search tool extracts relevant document chunks into the model's context based on the user's query. The updates add support for searching across multiple vector stores and for attribute filtering with arrays.
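A hedged sketch of a file search call, assuming the openai Python SDK; the vector store IDs and the filter key/value are hypothetical:

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-4.1",
    input="What does our refund policy say about digital goods?",
    tools=[{
        "type": "file_search",
        # Hypothetical IDs; multiple vector stores can now be searched at once.
        "vector_store_ids": ["vs_policies", "vs_support_docs"],
        # Attribute filter narrowing results to matching documents.
        "filters": {"type": "eq", "key": "region", "value": "eu"},
    }],
)
print(resp.output_text)
```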
These tools work with the GPT-4o series, the GPT-4.1 series, and the OpenAI o-series reasoning models (o1, o3, o3-mini, and o4-mini; see the pricing and availability section). Developers can use these built-in tools to construct stronger agents with a single API call. On industry-standard benchmarks, models that call more tools while reasoning perform better, and o3 and o4-mini's ability to invoke tools and functions directly from their chain of thought yields more contextually relevant responses.
Saving reasoning tokens across tool calls and requests improves model intelligence and reduces latency and cost.
New Responses API Features
Along with the new tools, developers and enterprises can now use new reliability, visibility, and privacy features:
Background Mode: Lets developers handle long-running tasks reliably and asynchronously. Solving difficult problems with reasoning models can take minutes, and background mode avoids timeouts and network interruptions. Developers can poll background objects to check for completion, or stream events to see the latest state. Agentic products such as Operator, Codex, and deep research use similar functionality.
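A minimal polling sketch for background mode, assuming the openai Python SDK; the prompt is illustrative:

```python
import time
from openai import OpenAI

client = OpenAI()

# Start a long-running reasoning task without holding the connection open.
resp = client.responses.create(
    model="o3",
    input="Write a detailed competitive analysis of the smart-thermostat market.",
    background=True,
)

# Poll the background object until the task finishes.
while resp.status in ("queued", "in_progress"):
    time.sleep(5)
    resp = client.responses.retrieve(resp.id)

print(resp.status)
print(resp.output_text)
```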
Reasoning Summaries: The API can now summarise the model's internal chain of thought in natural language. As in ChatGPT, this helps developers debug, audit, and build better end-user experiences. Reasoning summaries come at no additional cost.
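Requesting a summary takes one extra parameter; a sketch assuming the openai Python SDK:

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="o4-mini",
    input="How many primes are there below 100?",
    reasoning={"summary": "auto"},  # ask for a natural-language reasoning summary
)

# Summaries are attached to the reasoning items in the output.
for item in resp.output:
    if item.type == "reasoning":
        for part in item.summary:
            print(part.text)
```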
Encrypted Reasoning Items: Customers who qualify for Zero Data Retention (ZDR) can reuse encrypted reasoning items across API requests; OpenAI does not store these reasoning items. For models like o3 and o4-mini, passing reasoning items between function calls improves intelligence, reduces token usage, and increases cache hit rates, lowering latency and cost.
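A sketch of the stateless pattern, assuming the openai Python SDK; the prompts are illustrative, and the exact include flag should be checked against the API reference:

```python
from openai import OpenAI

client = OpenAI()

# store=False keeps the request stateless; the encrypted reasoning payload
# is returned to the caller rather than stored by OpenAI.
first = client.responses.create(
    model="o3",
    input="Plan a three-step migration from MySQL to Postgres.",
    store=False,
    include=["reasoning.encrypted_content"],
)

# Feed the previous output (including its encrypted reasoning items) back in
# so the model keeps its chain of thought across stateless requests.
followup = client.responses.create(
    model="o3",
    input=first.output + [{"role": "user", "content": "Expand on step two."}],
    store=False,
    include=["reasoning.encrypted_content"],
)
print(followup.output_text)
```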
Pricing and Availability
These new features and tools are available now. They are supported by the OpenAI o-series reasoning models (o1, o3, o3-mini, and o4-mini) as well as the GPT-4o and GPT-4.1 series. Among the reasoning models, only o3 supports image generation.
Pricing for existing tools is unchanged. The new tools are priced as follows:
Image generation costs $5.00/1M text input tokens, $10.00/1M image input tokens, and $40.00/1M image output tokens, with a 75% discount on cached input tokens.
Each Code Interpreter container costs $0.03.
File search costs $2.50 per 1k tool calls, plus $0.10 per GB of vector storage per day.
Remote MCP server calls carry no separate tool fee; developers pay only for API output tokens.
MCP: The New Bridge Between AI Models and Diverse Data Sources
In a move that highlights the trend toward collaboration and openness in the AI sector, OpenAI announced its adoption of the open-source Model Context Protocol (MCP), developed by Anthropic. The protocol aims to let AI models access external data sources directly, improving the accuracy and relevance of their responses. What is