<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Openai-Sdk on RockB</title><link>https://baeseokjae.github.io/tags/openai-sdk/</link><description>Recent content in Openai-Sdk on RockB</description><image><title>RockB</title><url>https://baeseokjae.github.io/images/og-default.png</url><link>https://baeseokjae.github.io/images/og-default.png</link></image><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 08 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://baeseokjae.github.io/tags/openai-sdk/index.xml" rel="self" type="application/rss+xml"/><item><title>AI Agents SDK Comparison 2026: Strands vs OpenAI SDK vs Mastra</title><link>https://baeseokjae.github.io/posts/ai-agents-sdk-comparison-2026/</link><pubDate>Fri, 08 May 2026 00:00:00 +0000</pubDate><guid>https://baeseokjae.github.io/posts/ai-agents-sdk-comparison-2026/</guid><description>Side-by-side technical comparison of the three most important AI agent SDKs in 2026 — AWS Strands, OpenAI Agents SDK, and Mastra — covering features, pricing, and which to pick for your use case.</description><content:encoded><![CDATA[<p>Three SDKs have emerged as the default starting points when teams reach for an AI agent framework in 2026: AWS Strands Agents, the OpenAI Agents SDK, and Mastra. Each reflects a different design philosophy — model-driven minimalism, industry-standard tooling, and batteries-included TypeScript — and each is genuinely good at what it targets. This comparison cuts through the marketing to give you a technical, opinionated view of all three so you can make the right call for your project without burning two weeks on trials.</p>
<h2 id="ai-agents-sdk-comparison-2026-strands-vs-openai-sdk-vs-mastra">AI Agents SDK Comparison 2026: Strands vs OpenAI SDK vs Mastra</h2>
<p>The AI agent framework market crossed a tipping point in 2025: over 57% of engineering organizations now ship at least one agent to production, and the tooling landscape fragmented fast enough that framework selection became a real architectural decision. By early 2026, three SDKs attract the majority of new project starts — AWS Strands (launched May 2025, Apache 2.0), the OpenAI Agents SDK (the official Python and TypeScript SDK that made the Responses API the primary agentic interface), and Mastra (TypeScript-first, 23,200+ GitHub stars, $35M funded). Their combined footprint touches hundreds of millions of daily API calls, enterprise deployments on every major cloud, and developer communities that generate more GitHub activity than any previous generation of AI frameworks. Understanding the differences is not academic — it determines whether your team ships in days or weeks, whether you get type safety or runtime surprises, and whether your AWS infrastructure investments carry forward into your agent layer. This article covers each SDK in depth, then gives you a concrete decision framework for the most common real-world scenarios.</p>
<h2 id="strands-agents-sdk-awss-model-driven-approach-to-agent-building">Strands Agents SDK: AWS&rsquo;s Model-Driven Approach to Agent Building</h2>
<p>AWS Strands Agents reached 14 million downloads within its first year, a pace that reflects both Amazon&rsquo;s distribution muscle and the genuine simplicity of its model-driven design. Launched in May 2025 under Apache 2.0, Strands starts from a different premise than most frameworks: rather than asking developers to define explicit graphs or chain sequences, it lets the underlying LLM decide at runtime which tools to call, in which order, and when to stop. You write a Python function, decorate it with <code>@tool</code>, pass it to an <code>Agent</code>, and call the agent with a natural-language prompt. The complete working agent is five lines. Strands powers production AWS services including Amazon Q Developer and the VPC Reachability Analyzer, and supports nine model providers — Amazon Bedrock, Anthropic, OpenAI, Google Gemini, LiteLLM, Llama, Ollama, Writer, and custom providers — through a unified interface. Switching providers is a one-line config change. OpenTelemetry instrumentation ships built-in, routing traces to AWS X-Ray and CloudWatch without configuration. Version 1.0 added three multi-agent patterns — Graph, Swarm, and Workflow — plus the A2A (Agent-to-Agent) protocol for cross-framework interoperability. For teams running on AWS, the Bedrock AgentCore deployment path is fully managed: push your agent code and get auto-scaling, IAM integration, and CloudWatch dashboards at no additional tooling cost.</p>
<h3 id="strands-hello-world">Strands Hello World</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> strands <span style="color:#f92672">import</span> Agent, tool
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@tool</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">search_docs</span>(query: str) <span style="color:#f92672">-&gt;</span> str:
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Search internal documentation for a query.&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> <span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;Results for &#39;</span><span style="color:#e6db74">{</span>query<span style="color:#e6db74">}</span><span style="color:#e6db74">&#39;: [doc1, doc2, doc3]&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>agent <span style="color:#f92672">=</span> Agent(tools<span style="color:#f92672">=</span>[search_docs])
</span></span><span style="display:flex;"><span>print(agent(<span style="color:#e6db74">&#34;What does our refund policy say?&#34;</span>))
</span></span></code></pre></div><p>That is a production-ready agent skeleton. The model reads the function name, type annotations, and docstring to auto-generate the JSON Schema it sends to the LLM — your documentation doubles as the tool specification.</p>
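<p>Under the hood, that schema generation is ordinary introspection. A minimal sketch of the idea — deriving a JSON-Schema-style tool spec from a function&rsquo;s signature and docstring — where <code>tool_spec</code> and <code>_JSON_TYPES</code> are illustrative names, not Strands&rsquo; actual internals:</p>

```python
import inspect
from typing import get_type_hints

# Illustrative mapping from Python annotations to JSON Schema types.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_spec(fn):
    """Build a JSON-Schema-style tool spec from a function's signature and docstring."""
    hints = get_type_hints(fn)
    properties = {
        name: {"type": _JSON_TYPES.get(hint, "string")}
        for name, hint in hints.items()
        if name != "return"
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {"type": "object", "properties": properties, "required": list(properties)},
    }

def search_docs(query: str) -> str:
    """Search internal documentation for a query."""
    return f"Results for '{query}'"

print(tool_spec(search_docs)["parameters"]["properties"])  # {'query': {'type': 'string'}}
```

<p>This is why the docstring matters in Strands: it is not a comment, it is the tool description the model reads when deciding whether to call the function.</p>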
<h3 id="multi-agent-with-strands">Multi-Agent with Strands</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> strands <span style="color:#f92672">import</span> Agent
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> strands.multiagent <span style="color:#f92672">import</span> Swarm
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>researcher <span style="color:#f92672">=</span> Agent(system_prompt<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;You research topics thoroughly.&#34;</span>)
</span></span><span style="display:flex;"><span>writer <span style="color:#f92672">=</span> Agent(system_prompt<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;You write clear summaries.&#34;</span>)
</span></span><span style="display:flex;"><span>reviewer <span style="color:#f92672">=</span> Agent(system_prompt<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;You check accuracy and tone.&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>result <span style="color:#f92672">=</span> Swarm([researcher, writer, reviewer])(<span style="color:#e6db74">&#34;Write a brief on AI agent trends in 2026&#34;</span>)
</span></span></code></pre></div><p>Strands&rsquo; model-driven loop handles routing — you define roles, the framework handles orchestration.</p>
<h2 id="openai-agents-sdk-the-industry-standard-with-500m-daily-api-calls">OpenAI Agents SDK: The Industry Standard with 500M+ Daily API Calls</h2>
<p>OpenAI&rsquo;s platform processes over 500 million API calls per day across its ecosystem, and the Agents SDK is the official Python and TypeScript layer that structures those calls into production agentic workflows. Released in early 2026 and stabilized at version 0.13.4 (April 2026), it exposes four core primitives — Agents, Handoffs, Guardrails, and Tracing — that cover the majority of real-world agent patterns without requiring you to build orchestration infrastructure from scratch. The Responses API is the primary agentic interface: it handles multi-turn state, streaming, tool call parsing, and result injection in a single unified surface that replaces the older chat completions loop. The SDK&rsquo;s documentation is the most comprehensive in the space — hundreds of working examples, a dedicated cookbook, and a Discord community with active OpenAI engineers. Enterprise support tiers include dedicated TAMs and SLA-backed uptime guarantees no open-source-only project can match. For teams that need to ship quickly, trust a well-maintained dependency, and want confidence that the SDK tracks OpenAI model capabilities (including Codex as the next-generation coding agent), the OpenAI Agents SDK is the lowest-risk choice. It also integrates with any provider conforming to the chat completions format via LiteLLM, making it more model-agnostic in practice than its name implies.</p>
<h3 id="openai-agents-sdk-hello-world">OpenAI Agents SDK Hello World</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> agents <span style="color:#f92672">import</span> Agent, Runner
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> asyncio
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>agent <span style="color:#f92672">=</span> Agent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;support_agent&#34;</span>,
</span></span><span style="display:flex;"><span>    instructions<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;You help users resolve account and billing issues.&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gpt-4o&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">async</span> <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">main</span>():
</span></span><span style="display:flex;"><span>    result <span style="color:#f92672">=</span> <span style="color:#66d9ef">await</span> Runner<span style="color:#f92672">.</span>run(agent, <span style="color:#e6db74">&#34;My invoice for April is incorrect.&#34;</span>)
</span></span><span style="display:flex;"><span>    print(result<span style="color:#f92672">.</span>final_output)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>asyncio<span style="color:#f92672">.</span>run(main())
</span></span></code></pre></div><h3 id="handoffs-routing-between-specialist-agents">Handoffs: Routing Between Specialist Agents</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> agents <span style="color:#f92672">import</span> Agent, handoff
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>billing_agent <span style="color:#f92672">=</span> Agent(name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;billing&#34;</span>, instructions<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Handle billing questions.&#34;</span>)
</span></span><span style="display:flex;"><span>tech_agent <span style="color:#f92672">=</span> Agent(name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;tech_support&#34;</span>, instructions<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Handle technical issues.&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>triage_agent <span style="color:#f92672">=</span> Agent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;triage&#34;</span>,
</span></span><span style="display:flex;"><span>    instructions<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Route the user to the right specialist.&#34;</span>,
</span></span><span style="display:flex;"><span>    handoffs<span style="color:#f92672">=</span>[handoff(billing_agent), handoff(tech_agent)],
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p>Handoffs are typed: the SDK validates the handoff target at startup rather than failing at runtime, catching configuration errors before they reach production.</p>
<h3 id="typescript-support">TypeScript Support</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-typescript" data-lang="typescript"><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">Agent</span>, <span style="color:#a6e22e">run</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@openai/agents&#34;</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">agent</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Agent</span>({
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">name</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;support_agent&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">instructions</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;You help users resolve account and billing issues.&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">model</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;gpt-4o&#34;</span>,
</span></span><span style="display:flex;"><span>});
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">result</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">await</span> <span style="color:#a6e22e">run</span>(<span style="color:#a6e22e">agent</span>, <span style="color:#e6db74">&#34;My invoice for April is incorrect.&#34;</span>);
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">console</span>.<span style="color:#a6e22e">log</span>(<span style="color:#a6e22e">result</span>.<span style="color:#a6e22e">finalOutput</span>);
</span></span></code></pre></div><p>The TypeScript SDK mirrors the Python API surface, making it straightforward to maintain parity between backend Python agents and TypeScript-based frontend or edge deployments.</p>
<h2 id="mastra-the-typescript-first-framework-with-23200-github-stars">Mastra: The TypeScript-First Framework with 23,200+ GitHub Stars</h2>
<p>Mastra hit 23,200 GitHub stars and 300,000+ weekly npm downloads within its first major release cycle — adoption numbers that outpace every previous TypeScript AI framework by a significant margin. Built by the team behind Gatsby (Sam Bhagwat and the former Gatsby core engineers), Mastra applies the same philosophy that made Gatsby the dominant static-site framework: opinionated structure, batteries-included defaults, and a development experience that eliminates entire categories of configuration work. Backed by $35M in total funding including a $22M Series A led by Spark Capital in April 2026, Mastra has enterprise deployments at Brex, Docker, Elastic, MongoDB, Salesforce, Replit, and SoftBank. The Marsh McLennan enterprise search agent built on Mastra is used by 100,000+ employees daily. What makes Mastra structurally different from competitors is its unified runtime: you get agents, tools, memory, workflow orchestration, RAG pipelines, evals, observability, and a local development UI (Mastra Studio) in a single <code>@mastra/core</code> package — not a collection of loosely related libraries you wire together yourself. All major LLM providers are supported through the AI SDK model abstraction, and Mastra has first-party MCP support, meaning any MCP server integrates in three lines of TypeScript. For the 60–70% of YC X25 agent startups building in TypeScript, Mastra is the default choice.</p>
<h3 id="mastra-hello-world">Mastra Hello World</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-typescript" data-lang="typescript"><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">Mastra</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@mastra/core&#34;</span>;
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">Agent</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@mastra/core/agent&#34;</span>;
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">openai</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@ai-sdk/openai&#34;</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">supportAgent</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Agent</span>({
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">name</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;support_agent&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">instructions</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;You help users resolve account and billing issues.&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">model</span>: <span style="color:#66d9ef">openai</span>(<span style="color:#e6db74">&#34;gpt-4o&#34;</span>),
</span></span><span style="display:flex;"><span>});
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">export</span> <span style="color:#66d9ef">const</span> <span style="color:#a6e22e">mastra</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Mastra</span>({ <span style="color:#a6e22e">agents</span><span style="color:#f92672">:</span> { <span style="color:#a6e22e">supportAgent</span> } });
</span></span></code></pre></div><p>Run <code>npx mastra dev</code> and Mastra Studio opens at <code>http://localhost:4111</code> — a full chat interface, trace viewer, workflow runner, and eval dashboard, all without writing test code.</p>
<h3 id="mastra-workflow-engine">Mastra Workflow Engine</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-typescript" data-lang="typescript"><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">Workflow</span>, <span style="color:#a6e22e">Step</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@mastra/core/workflow&#34;</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">researchStep</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Step</span>({
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">id</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;research&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">execute</span>: <span style="color:#66d9ef">async</span> ({ <span style="color:#a6e22e">context</span> }) <span style="color:#f92672">=&gt;</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#75715e">// fetch and summarize data
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>    <span style="color:#66d9ef">return</span> { <span style="color:#a6e22e">summary</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;...&#34;</span> };
</span></span><span style="display:flex;"><span>  },
</span></span><span style="display:flex;"><span>});
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">writeStep</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Step</span>({
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">id</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;write&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">execute</span>: <span style="color:#66d9ef">async</span> ({ <span style="color:#a6e22e">context</span> }) <span style="color:#f92672">=&gt;</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">const</span> { <span style="color:#a6e22e">summary</span> } <span style="color:#f92672">=</span> <span style="color:#a6e22e">context</span>.<span style="color:#a6e22e">getStepResult</span>(<span style="color:#e6db74">&#34;research&#34;</span>);
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> { <span style="color:#a6e22e">article</span><span style="color:#f92672">:</span> <span style="color:#e6db74">`Based on: </span><span style="color:#e6db74">${</span><span style="color:#a6e22e">summary</span><span style="color:#e6db74">}</span><span style="color:#e6db74">`</span> };
</span></span><span style="display:flex;"><span>  },
</span></span><span style="display:flex;"><span>});
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">pipeline</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Workflow</span>({ <span style="color:#a6e22e">name</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;content_pipeline&#34;</span> })
</span></span><span style="display:flex;"><span>  .<span style="color:#a6e22e">step</span>(<span style="color:#a6e22e">researchStep</span>)
</span></span><span style="display:flex;"><span>  .<span style="color:#a6e22e">then</span>(<span style="color:#a6e22e">writeStep</span>)
</span></span><span style="display:flex;"><span>  .<span style="color:#a6e22e">commit</span>();
</span></span></code></pre></div><p>Workflows in Mastra are typed end-to-end: <code>context.getStepResult()</code> is fully typed from the previous step&rsquo;s return value, catching data-flow errors at compile time.</p>
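<p>The compile-time guarantee is easy to demonstrate in isolation. A minimal sketch of type-safe step chaining, where each step&rsquo;s output type must match the next step&rsquo;s input type (illustrative generics, not Mastra&rsquo;s API):</p>

```typescript
// A step transforms a typed input into a typed output.
type Step<I, O> = (input: I) => O;

// chain() only compiles when the first step's output type matches the second's input.
function chain<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
  return (input) => second(first(input));
}

const research: Step<string, { summary: string }> = (topic) => ({
  summary: `notes on ${topic}`,
});
const write: Step<{ summary: string }, { article: string }> = ({ summary }) => ({
  article: `Based on: ${summary}`,
});

const pipeline = chain(research, write); // chain(write, research) would be a compile error
console.log(pipeline("AI agent trends").article);
```

<p>Reversing the order fails at compile time rather than at runtime, which is the same guarantee Mastra makes about data flowing between workflow steps.</p>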
<h2 id="feature-comparison-workflow-rag-evals-mcp-and-multi-model-support">Feature Comparison: Workflow, RAG, Evals, MCP, and Multi-Model Support</h2>
<p>The three SDKs differ most sharply in what they include out of the box. As of May 2026, a team picking a framework inherits very different surface areas. Strands is deliberately minimal: it gives you the agent loop, tool execution, multi-agent patterns, and MCP support — but RAG, evals, and workflow orchestration are your problem to solve with external libraries. The OpenAI Agents SDK occupies the middle ground: Guardrails cover basic input/output validation, Tracing covers observability, and Handoffs cover multi-agent routing, but production-grade RAG, formal evals, and complex branching workflows still require third-party integration. Mastra is the batteries-included option: RAG with vector store integration, a formal eval framework with LLM-as-judge and custom scorer support, a typed workflow engine with parallel and sequential execution, first-party MCP support, and OpenTelemetry traces all ship in <code>@mastra/core</code>. The trade-off is that Mastra&rsquo;s larger footprint means more to learn upfront, while Strands&rsquo; minimal API means you reach for external libraries sooner but start faster. For greenfield production projects where the team will invest in proper tooling regardless, Mastra&rsquo;s integrated stack reduces total configuration work substantially.</p>
<table>
  <thead>
      <tr>
          <th>Feature</th>
          <th>Strands</th>
          <th>OpenAI SDK</th>
          <th>Mastra</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Language</td>
          <td>Python (+ TS beta)</td>
          <td>Python + TypeScript</td>
          <td>TypeScript</td>
      </tr>
      <tr>
          <td>Workflow Engine</td>
          <td>Basic (Graph/Swarm)</td>
          <td>Handoffs only</td>
          <td>Full typed engine</td>
      </tr>
      <tr>
          <td>RAG</td>
          <td>External</td>
          <td>External</td>
          <td>Built-in</td>
      </tr>
      <tr>
          <td>Evals</td>
          <td>External</td>
          <td>External</td>
          <td>Built-in</td>
      </tr>
      <tr>
          <td>MCP Support</td>
          <td>First-party</td>
          <td>First-party (0.13.4)</td>
          <td>First-party</td>
      </tr>
      <tr>
          <td>Multi-model</td>
          <td>9+ providers</td>
          <td>Any OpenAI-compat</td>
          <td>Any via AI SDK</td>
      </tr>
      <tr>
          <td>Observability</td>
          <td>OTEL built-in</td>
          <td>Built-in tracing</td>
          <td>OTEL built-in</td>
      </tr>
      <tr>
          <td>Local Dev UI</td>
          <td>None</td>
          <td>None</td>
          <td>Mastra Studio</td>
      </tr>
      <tr>
          <td>License</td>
          <td>Apache 2.0</td>
          <td>MIT</td>
          <td>Apache 2.0</td>
      </tr>
      <tr>
          <td>Stars (May 2026)</td>
          <td>~8,000</td>
          <td>~20,000</td>
          <td>23,200+</td>
      </tr>
  </tbody>
</table>
<p>MCP support deserves special mention across all three: the Model Context Protocol has become the de facto standard for connecting agents to external tools and data sources, and all three SDKs support it in first-party fashion as of early 2026. This means the same MCP server fleet (GitHub, Slack, Postgres, Notion, etc.) can serve agents across all three frameworks, letting teams standardize on MCP integrations independent of framework choice.</p>
<h2 id="pricing-and-licensing-open-source-vs-proprietary">Pricing and Licensing: Open Source vs Proprietary</h2>
<p>All three SDKs are open-source, but their cost profiles diverge once you move past the library itself. Strands Agents is Apache 2.0 — free to use commercially with no restrictions. The primary cost driver is Amazon Bedrock consumption: Claude Sonnet on Bedrock costs $3/million input tokens and $15/million output tokens, roughly comparable to direct Anthropic pricing. AWS AgentCore (the managed deployment runtime for Strands agents) bills on compute and model consumption with no platform fee, making it cost-transparent for AWS shops that already have consolidated billing. The OpenAI Agents SDK is MIT-licensed with zero library cost, but you are effectively locked into OpenAI&rsquo;s pricing for the best experience: GPT-4o at $2.50/million input tokens and $10/million output tokens (with 50% prompt caching discounts at scale), with Responses API storage billed at $0.10/GB/day for conversation state. Enterprise contracts start at $2M/year and unlock dedicated capacity, custom rate limits, and SLA guarantees. Mastra is Apache 2.0 for the core framework, with Mastra Platform (the hosted deployment and management layer) offering a free tier for development and team plans starting at $49/month. The framework is LLM-cost-neutral — you pay whichever provider you use directly — and works with every major provider via the AI SDK abstraction. For cost-sensitive projects, Mastra + a self-hosted or cost-optimized provider (Groq, Together, Cerebras) is the most cost-effective path; for AWS-committed teams, Strands + Bedrock leverages existing enterprise agreements.</p>
<h3 id="cost-comparison-for-a-typical-agent-workload">Cost Comparison for a Typical Agent Workload</h3>
<p>Assume 10 million LLM tokens per day (mixed input/output):</p>
<ul>
<li><strong>Strands + Bedrock Claude Sonnet</strong>: ~$90–$130/day depending on input/output ratio</li>
<li><strong>OpenAI SDK + GPT-4o</strong>: ~$60–$100/day with prompt caching at scale</li>
<li><strong>Mastra + Anthropic Claude Sonnet (direct)</strong>: ~$90–$130/day plus $49+/month platform fee</li>
</ul>
<p>These are order-of-magnitude estimates. Actual costs vary significantly by caching hit rate, context length, and model selection. All three support cheaper models (Haiku, GPT-4o-mini, Gemma) that can reduce costs by 10–20x for appropriate workloads.</p>
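<p>The arithmetic behind these estimates is simple enough to script before committing to a provider. A sketch using the per-million-token prices quoted above (the <code>daily_cost</code> helper is ours; figures exclude caching discounts and platform fees):</p>

```python
def daily_cost(tokens_per_day: int, input_share: float, price_in: float, price_out: float) -> float:
    """Daily LLM spend in USD, given per-million-token input/output prices."""
    millions = tokens_per_day / 1_000_000
    return millions * (input_share * price_in + (1 - input_share) * price_out)

# 10M tokens/day at a 50/50 input/output split:
print(daily_cost(10_000_000, 0.5, 3.00, 15.00))  # Claude Sonnet on Bedrock -> 90.0
print(daily_cost(10_000_000, 0.5, 2.50, 10.00))  # GPT-4o, before caching -> 62.5
```

<p>Shifting the mix toward output tokens pushes Sonnet toward the $130 end of the range, which is why the input/output ratio matters more than the headline price.</p>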
<h2 id="which-sdk-should-you-choose-for-your-use-case">Which SDK Should You Choose for Your Use Case?</h2>
<p>Framework selection is a bet you live with for months, so treat it like an architectural decision: match capabilities to actual requirements rather than picking what looks impressive in a demo.</p>
<p>Strands wins on three dimensions — AWS-native deployment, fastest time to first working agent, and existing Bedrock investments. If your team runs on AWS, already has Bedrock credits, and wants an agent running in an afternoon with minimal framework overhead, Strands is the right call. It is also the right pick for Python-first teams that want to iterate fast without a heavy framework opinion in the way.</p>
<p>The OpenAI Agents SDK wins on community, documentation, and enterprise support. If your team is new to agents, values comprehensive examples and responsive official support, or has existing OpenAI contracts with dedicated capacity, the Agents SDK gives you the lowest adoption risk. The TypeScript SDK parity also makes it practical for full-stack teams that ship both server and client.</p>
<p>Mastra wins for TypeScript production teams building serious applications. If you are shipping an agent-powered product — not just a prototype — and your team writes TypeScript, Mastra&rsquo;s integrated RAG, evals, workflow engine, and Mastra Studio will save you three to four weeks of configuration and plumbing work. The $35M funded commercial roadmap also signals sustained investment, which matters when evaluating long-term dependency risk.</p>
<p>The only case where none of these is the obvious winner is a Python team that needs deep workflow control with complex branching and human-in-the-loop checkpoints — in that case, LangGraph&rsquo;s state-machine model is more appropriate than any of these three.</p>
<h3 id="decision-matrix">Decision Matrix</h3>
<table>
  <thead>
      <tr>
          <th>Scenario</th>
          <th>Recommended SDK</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>AWS-native, Python team, fast prototype</td>
          <td>Strands Agents</td>
      </tr>
      <tr>
          <td>Python team, needs best docs + enterprise support</td>
          <td>OpenAI Agents SDK</td>
      </tr>
      <tr>
          <td>TypeScript team, production product</td>
          <td>Mastra</td>
      </tr>
      <tr>
          <td>Multi-cloud, model-agnostic, Python</td>
          <td>Strands or OpenAI SDK</td>
      </tr>
      <tr>
          <td>Need built-in RAG + evals + workflows</td>
          <td>Mastra</td>
      </tr>
      <tr>
          <td>Largest ecosystem and community</td>
          <td>OpenAI Agents SDK</td>
      </tr>
      <tr>
          <td>Apache 2.0 + AWS deployment</td>
          <td>Strands Agents</td>
      </tr>
  </tbody>
</table>
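<p>For teams that want the matrix in executable form, it condenses to a plain lookup. The scenario keys below are informal shorthand for the rows above, not terminology from any of the SDKs:</p>

```python
# The decision matrix above as a lookup keyed on (language, main priority).
# Keys are informal shorthand for the table rows, not SDK terminology.
def recommend_sdk(language: str, priority: str) -> str:
    """Return the matrix's recommendation, or a fallback for uncovered cases."""
    matrix = {
        ("python", "aws"): "Strands Agents",
        ("python", "docs"): "OpenAI Agents SDK",
        ("python", "community"): "OpenAI Agents SDK",
        ("python", "multi-cloud"): "Strands or OpenAI SDK",
        ("typescript", "production"): "Mastra",
        ("typescript", "rag-evals-workflows"): "Mastra",
    }
    return matrix.get((language.lower(), priority.lower()), "evaluate case by case")
```

<p>The fallback matters: a scenario outside the table (say, Python with heavy workflow branching) should trigger a fresh evaluation rather than a forced fit.</p>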
<h2 id="getting-started-hello-world-in-all-three-sdks">Getting Started: Hello World in All Three SDKs</h2>
<p>Getting a working agent takes under five minutes with any of the three SDKs. The installation paths, dependency counts, and environment requirements differ enough to be worth documenting side-by-side.</p>
<p>Strands requires Python 3.10+ and a single pip install; the most minimal working agent needs no API key if you use Ollama locally. The OpenAI Agents SDK requires Python 3.10+ or Node.js 18+ (for TypeScript), an OpenAI API key, and <code>pip install openai-agents</code> or <code>npm install @openai/agents</code>. Mastra requires Node.js 18+ and is scaffolded via <code>npm create mastra@latest</code>, which generates a complete project including TypeScript config, <code>.env</code> key stubs, and a starter agent — the scaffold takes 60 seconds from cold install to first response.</p>
<p>All three have free local development paths: Strands with Ollama, the OpenAI SDK with any local OpenAI-compatible server, and Mastra with Ollama via the <code>@ai-sdk/ollama</code> provider. Production deployments differ significantly: Strands targets AWS Bedrock AgentCore, the OpenAI SDK targets OpenAI&rsquo;s hosted infrastructure or any compliant server, and Mastra can deploy to Mastra Platform, Vercel, Cloudflare Workers, or any Node.js runtime.</p>
<h3 id="strands-installation-and-first-agent">Strands Installation and First Agent</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>pip install <span style="color:#e6db74">&#39;strands-agents[anthropic]&#39;</span> strands-agents-tools
</span></span><span style="display:flex;"><span>export ANTHROPIC_API_KEY<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;sk-ant-...&#34;</span>
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> strands <span style="color:#f92672">import</span> Agent, tool
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> strands.models.anthropic <span style="color:#f92672">import</span> AnthropicModel
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@tool</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">get_current_time</span>() <span style="color:#f92672">-&gt;</span> str:
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Return the current UTC time.&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">from</span> datetime <span style="color:#f92672">import</span> datetime, timezone
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> datetime<span style="color:#f92672">.</span>now(timezone<span style="color:#f92672">.</span>utc)<span style="color:#f92672">.</span>isoformat()
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>agent <span style="color:#f92672">=</span> Agent(
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span>AnthropicModel(model_id<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;claude-sonnet-4-6&#34;</span>),
</span></span><span style="display:flex;"><span>    tools<span style="color:#f92672">=</span>[get_current_time],
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>print(agent(<span style="color:#e6db74">&#34;What time is it right now?&#34;</span>))
</span></span></code></pre></div><h3 id="openai-agents-sdk-installation-and-first-agent">OpenAI Agents SDK Installation and First Agent</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>pip install openai-agents
</span></span><span style="display:flex;"><span>export OPENAI_API_KEY<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;sk-...&#34;</span>
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> agents <span style="color:#f92672">import</span> Agent, Runner, function_tool
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> asyncio
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@function_tool</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">get_current_time</span>() <span style="color:#f92672">-&gt;</span> str:
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Return the current UTC time.&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">from</span> datetime <span style="color:#f92672">import</span> datetime, timezone
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> datetime<span style="color:#f92672">.</span>now(timezone<span style="color:#f92672">.</span>utc)<span style="color:#f92672">.</span>isoformat()
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>agent <span style="color:#f92672">=</span> Agent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;time_agent&#34;</span>,
</span></span><span style="display:flex;"><span>    instructions<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Answer questions helpfully and concisely.&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gpt-4o&#34;</span>,
</span></span><span style="display:flex;"><span>    tools<span style="color:#f92672">=</span>[get_current_time],
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">async</span> <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">main</span>():
</span></span><span style="display:flex;"><span>    result <span style="color:#f92672">=</span> <span style="color:#66d9ef">await</span> Runner<span style="color:#f92672">.</span>run(agent, <span style="color:#e6db74">&#34;What time is it right now?&#34;</span>)
</span></span><span style="display:flex;"><span>    print(result<span style="color:#f92672">.</span>final_output)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>asyncio<span style="color:#f92672">.</span>run(main())
</span></span></code></pre></div><h3 id="mastra-installation-and-first-agent">Mastra Installation and First Agent</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>npm create mastra@latest
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Follow scaffold prompts: choose OpenAI or Anthropic, name your agent</span>
</span></span><span style="display:flex;"><span>cd my-mastra-app
</span></span><span style="display:flex;"><span>npm run dev
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-typescript" data-lang="typescript"><span style="display:flex;"><span><span style="color:#75715e">// src/mastra/agents/time-agent.ts
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">Agent</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@mastra/core/agent&#34;</span>;
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">openai</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@ai-sdk/openai&#34;</span>;
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">createTool</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@mastra/core/tools&#34;</span>;
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">z</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;zod&#34;</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">getCurrentTime</span> <span style="color:#f92672">=</span> <span style="color:#a6e22e">createTool</span>({
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">id</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;get_current_time&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">description</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;Return the current UTC time&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">inputSchema</span>: <span style="color:#66d9ef">z.object</span>({}),
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">outputSchema</span>: <span style="color:#66d9ef">z.object</span>({ <span style="color:#a6e22e">time</span>: <span style="color:#66d9ef">z.string</span>() }),
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">execute</span>: <span style="color:#66d9ef">async</span> () <span style="color:#f92672">=&gt;</span> ({ <span style="color:#a6e22e">time</span>: <span style="color:#66d9ef">new</span> Date().<span style="color:#a6e22e">toISOString</span>() }),
</span></span><span style="display:flex;"><span>});
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">export</span> <span style="color:#66d9ef">const</span> <span style="color:#a6e22e">timeAgent</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Agent</span>({
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">name</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;time_agent&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">instructions</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;Answer questions helpfully and concisely.&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">model</span>: <span style="color:#66d9ef">openai</span>(<span style="color:#e6db74">&#34;gpt-4o&#34;</span>),
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">tools</span><span style="color:#f92672">:</span> { <span style="color:#a6e22e">getCurrentTime</span> },
</span></span><span style="display:flex;"><span>});
</span></span></code></pre></div><p>Open <code>http://localhost:4111</code> in Mastra Studio to chat with your agent, inspect tool calls, and view traces — all with zero additional configuration.</p>
<hr>
<h2 id="faq">FAQ</h2>
<p><strong>Q: Can I use Strands Agents without an AWS account?</strong>
Yes. Strands supports Anthropic, OpenAI, Gemini, Ollama, and LiteLLM as model providers — no AWS account is required. The AWS-native features (Bedrock, AgentCore, X-Ray) are optional. You can run a fully functional Strands agent locally against Ollama with no cloud dependencies.</p>
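<p>As a minimal sketch of that zero-cloud path: the snippet below assumes the Strands Ollama extra (<code>pip install &#39;strands-agents[ollama]&#39;</code>) and a local Ollama daemon on its default port. The <code>OllamaModel</code> import path, parameter names, and model tag follow the Strands provider docs, but verify them against your installed version.</p>

```python
# Sketch: a Strands agent running fully locally against Ollama.
# Host, model tag, and import path are assumptions to verify locally;
# imports are deferred so the sketch loads without the SDK installed.
def make_local_agent(host: str = "http://localhost:11434",
                     model_id: str = "llama3.1"):
    from strands import Agent
    from strands.models.ollama import OllamaModel

    return Agent(model=OllamaModel(host=host, model_id=model_id))

# agent = make_local_agent()
# print(agent("Summarize what an AI agent framework does."))
```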
<p><strong>Q: Does the OpenAI Agents SDK work with non-OpenAI models?</strong>
Yes. The SDK supports any provider that implements the OpenAI chat completions API format. In practice, this covers Anthropic, Gemini, and dozens of open-source models via LiteLLM. The Responses API features require OpenAI&rsquo;s hosted API, but basic agent functionality works across compatible providers.</p>
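<p>A hedged sketch of that compatible-provider path: the endpoint URL, API key, and model name below are placeholders for whatever local or proxied server you run, and the <code>OpenAIChatCompletionsModel</code> wrapper should be checked against your installed <code>openai-agents</code> version.</p>

```python
# Sketch: pointing the OpenAI Agents SDK at any OpenAI-compatible endpoint
# (Ollama, vLLM, a LiteLLM proxy). URL, key, and model name are placeholders;
# imports are deferred so the sketch loads without the SDK installed.
def make_compatible_agent(base_url: str = "http://localhost:11434/v1",
                          model_name: str = "llama3.1",
                          api_key: str = "not-needed-locally"):
    from openai import AsyncOpenAI
    from agents import Agent, OpenAIChatCompletionsModel

    client = AsyncOpenAI(base_url=base_url, api_key=api_key)
    return Agent(
        name="compat_agent",
        instructions="Answer concisely.",
        model=OpenAIChatCompletionsModel(model=model_name, openai_client=client),
    )
```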
<p><strong>Q: Is Mastra stable enough for production in 2026?</strong>
Yes. Mastra is in production at Brex, Docker, Elastic, MongoDB, Salesforce, Replit, and SoftBank as of May 2026. The Marsh McLennan deployment serves 100,000+ daily users. The Apache 2.0 license, $35M funding, and active commercial roadmap make it a low-dependency-risk choice for production TypeScript projects.</p>
<p><strong>Q: How does MCP support compare across the three SDKs?</strong>
All three have first-party MCP support as of May 2026. Strands added MCP via <code>MCPClient</code> in its initial release; the OpenAI Agents SDK added MCP server support in version 0.13.4 (April 2026); Mastra shipped first-party MCP from the start. The connection pattern is similar in all three — point at an MCP server URL, list tools, pass to agent. The same MCP server fleet works with all three frameworks, so your integration investments are portable.</p>
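<p>That shared pattern looks roughly like this in the Strands flavor (the other two SDKs differ mainly in naming). The server URL is a placeholder, and the <code>MCPClient</code> and <code>streamablehttp_client</code> names follow the Strands and MCP Python SDK docs, so treat this as a sketch to verify rather than a drop-in:</p>

```python
# Sketch of the common MCP flow: connect to a server, list tools, pass to agent.
# Server URL is a placeholder; verify names against your installed versions.
# Imports are deferred so the sketch loads without the SDKs installed.
def run_with_mcp(server_url: str = "http://localhost:8000/mcp",
                 prompt: str = "What tools do you have?"):
    from mcp.client.streamable_http import streamablehttp_client
    from strands import Agent
    from strands.tools.mcp import MCPClient

    mcp_client = MCPClient(lambda: streamablehttp_client(server_url))
    with mcp_client:  # the MCP session must stay open for the agent run
        tools = mcp_client.list_tools_sync()
        return Agent(tools=tools)(prompt)
```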
<p><strong>Q: Which SDK has the best local development experience?</strong>
Mastra wins clearly. Mastra Studio (<code>npx mastra dev</code>) gives you a full-featured local UI with chat interface, trace viewer, workflow runner, memory inspector, and eval dashboard. Strands and the OpenAI Agents SDK both rely on terminal output and external observability tools (X-Ray, the OpenAI dashboard) for debugging. If development velocity and debugging experience matter, Mastra&rsquo;s Studio cuts investigation time significantly during the build phase.</p>
]]></content:encoded></item></channel></rss>