<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>AI Agent Framework on RockB</title><link>https://baeseokjae.github.io/tags/ai-agent-framework/</link><description>Recent content in AI Agent Framework on RockB</description><image><title>RockB</title><url>https://baeseokjae.github.io/images/og-default.png</url><link>https://baeseokjae.github.io/images/og-default.png</link></image><generator>Hugo</generator><language>en-us</language><lastBuildDate>Tue, 05 May 2026 12:05:41 +0000</lastBuildDate><atom:link href="https://baeseokjae.github.io/tags/ai-agent-framework/index.xml" rel="self" type="application/rss+xml"/><item><title>AWS Strands Agents SDK: Build Production AI Agents in 2026</title><link>https://baeseokjae.github.io/posts/strands-agents-aws-guide-2026/</link><pubDate>Tue, 05 May 2026 12:05:41 +0000</pubDate><guid>https://baeseokjae.github.io/posts/strands-agents-aws-guide-2026/</guid><description>Complete guide to AWS Strands Agents SDK — install, build, deploy, and monitor production AI agents with minimal code in 2026.</description><content:encoded><![CDATA[<p>AWS Strands Agents is an open-source Python and TypeScript SDK that lets you build production-ready AI agents in under 10 lines of code. Released by AWS in May 2025 and reaching 14 million+ downloads, it uses a model-driven loop where you describe a goal, attach tools, and the agent decides at runtime what to call and in what order.</p>
<h2 id="what-is-aws-strands-agents-sdk">What Is AWS Strands Agents SDK?</h2>
<p>AWS Strands Agents SDK is an open-source AI agent framework developed by Amazon Web Services that uses a model-driven paradigm — you describe what you want the agent to achieve, attach a set of tools, and the underlying LLM decides which tools to call, in which order, and when to stop. Unlike graph-based frameworks that require you to wire explicit nodes and edges, Strands agents reason dynamically at runtime, adapting their execution plan based on intermediate results. Since its preview launch in May 2025, Strands has accumulated 14 million+ downloads and powers internal AWS services including Amazon Q Developer, AWS Glue, and the VPC Reachability Analyzer. The SDK supports 9+ model providers — including Amazon Bedrock, Anthropic, OpenAI, Gemini, LiteLLM, Llama, Ollama, and Writer — through a unified API, so you can prototype locally with Ollama and deploy to Bedrock without touching your agent logic. Version 1.0 added Graph, Swarm, and Workflow multi-agent patterns and the A2A (Agent-to-Agent) protocol for cross-framework interoperability. The result is the lowest barrier to entry of any major agent framework available in 2026.</p>
<h2 id="why-strands-agents-key-advantages-over-competitors">Why Strands Agents? Key Advantages Over Competitors</h2>
<p>Strands Agents stands out from competing frameworks through three core design decisions: minimal API surface, native MCP support, and built-in observability. Where LangGraph requires you to define state schemas, build graph topologies, and wire conditional edges before writing any business logic, a functional Strands agent is four lines: import the SDK, declare tools with a decorator, instantiate an <code>Agent</code>, and call it with a string. Verisk Analytics deployed a RAG-backed Strands agent on Amazon Bedrock and reduced mean-time-to-resolution (MTTR) by 60% without hand-coding a resolution pipeline. That outcome is possible because Strands handles the entire agentic loop — tool selection, execution, result injection, and follow-up reasoning — automatically. The framework is also model-agnostic by design: switching from Claude Sonnet on Bedrock to GPT-4o on OpenAI is a one-line config change, not a framework migration. Finally, OpenTelemetry instrumentation is built in — every tool call, reasoning step, and model invocation emits traces and metrics without additional setup, routing natively to AWS X-Ray and CloudWatch. For teams already running workloads on AWS, this is a compelling stack: write minimal Python, deploy to Bedrock AgentCore, and get full observability for free.</p>
<h3 id="when-strands-beats-the-alternatives">When Strands Beats the Alternatives</h3>
<p>Strands is the right choice when you want fast iteration, MCP tool reuse, or an AWS-native deployment path. If your workflow is deterministic and requires strict execution ordering with human-in-the-loop checkpoints, LangGraph&rsquo;s state-machine model is stronger. If you&rsquo;re organizing a large team of specialized agents with a crew/role mental model, CrewAI&rsquo;s abstractions are more intuitive for product managers. For everything else — especially when MCP tool servers already exist for your data sources — Strands wins on productivity.</p>
<h2 id="installation-and-quick-start-python--typescript">Installation and Quick Start (Python &amp; TypeScript)</h2>
<p>Installing AWS Strands Agents takes under a minute and requires no AWS account for local development with Ollama. For Python, run <code>pip install strands-agents</code> and optionally <code>pip install strands-agents-tools</code> for the 30+ built-in tools. For TypeScript, run <code>npm install @strands/agents</code>. A minimal working agent in Python requires four lines of code: import <code>Agent</code> and <code>tool</code>, decorate a function with <code>@tool</code>, create <code>agent = Agent(tools=[your_tool])</code>, and call <code>agent(&quot;your prompt&quot;)</code>. The agent calls your tool as many times as needed, reasons about the results, and returns a final answer — no graph, no state schema, no chain. For production use with Amazon Bedrock, set <code>AWS_DEFAULT_REGION</code> and install <code>strands-agents[bedrock]</code>; for Anthropic, set <code>ANTHROPIC_API_KEY</code> and pass <code>model=AnthropicModel(&quot;claude-sonnet-4-6&quot;)</code> to the <code>Agent</code> constructor. The model-swap is the only change required; all tool definitions, agent logic, and multi-agent patterns remain identical across providers.</p>
<h3 id="your-first-agent-in-python">Your First Agent in Python</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> strands <span style="color:#f92672">import</span> Agent, tool
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@tool</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">get_weather</span>(city: str) <span style="color:#f92672">-&gt;</span> str:
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Return current weather for a city.&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> <span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;</span><span style="color:#e6db74">{</span>city<span style="color:#e6db74">}</span><span style="color:#e6db74">: 22°C, partly cloudy&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>agent <span style="color:#f92672">=</span> Agent(tools<span style="color:#f92672">=</span>[get_weather])
</span></span><span style="display:flex;"><span>response <span style="color:#f92672">=</span> agent(<span style="color:#e6db74">&#34;What&#39;s the weather in Berlin?&#34;</span>)
</span></span><span style="display:flex;"><span>print(response)
</span></span></code></pre></div><p>That is a complete, runnable agent. Add a <code>model=</code> parameter to switch providers; add more <code>@tool</code> functions to expand capabilities. The agent loop handles everything else.</p>
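<p>The one-line provider swap works because every model class implements the same small interface. The following pure-Python sketch illustrates that design choice with stand-in classes (<code>FakeOllamaModel</code> and <code>FakeBedrockModel</code> are illustrative only, not the SDK&rsquo;s real classes):</p>

```python
from typing import Protocol

class Model(Protocol):
    """Minimal stand-in for a shared model-provider interface."""
    def complete(self, prompt: str) -> str: ...

class FakeOllamaModel:
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

class FakeBedrockModel:
    def complete(self, prompt: str) -> str:
        return f"[bedrock] {prompt}"

def run_agent(model: Model, prompt: str) -> str:
    # Agent logic never branches on the provider type.
    return model.complete(prompt)

local = run_agent(FakeOllamaModel(), "hello")
prod = run_agent(FakeBedrockModel(), "hello")
```

<p>Because the agent code never inspects which provider it holds, moving from local Ollama to Bedrock is a constructor-argument change rather than a rewrite.</p>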
<h3 id="typescript-quick-start">TypeScript Quick Start</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-typescript" data-lang="typescript"><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">Agent</span>, <span style="color:#a6e22e">tool</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;@strands/agents&#34;</span>;
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">import</span> { <span style="color:#a6e22e">z</span> } <span style="color:#66d9ef">from</span> <span style="color:#e6db74">&#34;zod&#34;</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">getWeather</span> <span style="color:#f92672">=</span> <span style="color:#a6e22e">tool</span>({
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">name</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;get_weather&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">description</span><span style="color:#f92672">:</span> <span style="color:#e6db74">&#34;Return current weather for a city&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">parameters</span>: <span style="color:#66d9ef">z.object</span>({ <span style="color:#a6e22e">city</span>: <span style="color:#66d9ef">z.string</span>() }),
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">execute</span>: <span style="color:#66d9ef">async</span> ({ <span style="color:#a6e22e">city</span> }) <span style="color:#f92672">=&gt;</span> <span style="color:#e6db74">`</span><span style="color:#e6db74">${</span><span style="color:#a6e22e">city</span><span style="color:#e6db74">}</span><span style="color:#e6db74">: 22°C, partly cloudy`</span>,
</span></span><span style="display:flex;"><span>});
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">agent</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Agent</span>({ <span style="color:#a6e22e">tools</span><span style="color:#f92672">:</span> [<span style="color:#a6e22e">getWeather</span>] });
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">await</span> <span style="color:#a6e22e">agent</span>.<span style="color:#a6e22e">run</span>(<span style="color:#e6db74">&#34;What&#39;s the weather in Berlin?&#34;</span>);
</span></span></code></pre></div><h2 id="core-concepts-tools-model-providers-and-the-agent-loop">Core Concepts: Tools, Model Providers, and the Agent Loop</h2>
<p>The Strands agent loop is the runtime engine that drives all agent behavior: it feeds the user prompt and tool definitions to the model, receives tool-call responses, executes the designated tools, injects results back into the conversation context, and repeats until the model emits a final text response with no pending tool calls. This loop is fully managed — you never write it manually. Tools are the primary extension point: any Python function decorated with <code>@tool</code> automatically becomes callable by the model. The SDK introspects the function&rsquo;s type annotations and docstring to auto-generate the JSON Schema that Strands passes to the model, so documentation doubles as the tool spec. Model providers are swappable at <code>Agent</code> instantiation time: <code>BedrockModel</code>, <code>AnthropicModel</code>, <code>OpenAIModel</code>, <code>OllamaModel</code>, and others all implement the same interface. The agent loop itself does not care which model is running; only the <code>model=</code> parameter changes. For advanced control, you can set <code>max_parallel_tool_calls</code>, configure <code>system_prompt</code>, and attach <code>callbacks</code> for streaming output or custom logging — all without modifying the loop itself. This clean separation of concerns is what makes Strands agents easy to test and maintain at production scale.</p>
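<p>To make the loop concrete, here is a self-contained sketch of its control flow with a scripted stand-in for the model. Nothing below is the SDK&rsquo;s internal code; it only mirrors the feed, execute, inject, repeat cycle described above:</p>

```python
def get_weather(city: str) -> str:
    return f"{city}: 22°C, partly cloudy"

TOOLS = {"get_weather": get_weather}

def scripted_model(messages):
    # Stand-in for the LLM: request one tool call, then answer.
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool_call": {"name": "get_weather", "args": {"city": "Berlin"}}}
    return {"text": f"The weather is: {tool_msgs[0]['content']}"}

def agent_loop(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = scripted_model(messages)
        if "tool_call" not in reply:
            return reply["text"]          # final text response ends the loop
        call = reply["tool_call"]
        result = TOOLS[call["name"]](**call["args"])          # execute the tool
        messages.append({"role": "tool", "content": result})  # inject the result

answer = agent_loop("What's the weather in Berlin?")
```

<p>The real SDK runs this cycle for you; the point of the sketch is that your tools and your model choice are the only moving parts you supply.</p>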
<h3 id="understanding-tool-execution">Understanding Tool Execution</h3>
<p>When the model decides to call a tool, Strands validates the arguments against the tool&rsquo;s schema, executes the function (sync or async), and serializes the return value back into the conversation. If the model requests multiple tools simultaneously, Strands executes them in parallel by default, then batches the results into a single context injection. This parallelism is transparent — your tool functions do not need to know about concurrency.</p>
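<p>The concurrent tool execution described above can be sketched with plain <code>asyncio</code>. This is a conceptual illustration of the batching behavior, not the SDK&rsquo;s actual scheduler:</p>

```python
import asyncio
import time

async def slow_tool(name: str, delay: float) -> str:
    # Stand-in for a tool call that waits on I/O.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def run_batch(calls):
    # All requested tools run concurrently; results come back
    # as one batch, mirroring the single context injection.
    return await asyncio.gather(*(slow_tool(n, d) for n, d in calls))

start = time.monotonic()
results = asyncio.run(run_batch([("lookup", 0.1), ("fetch", 0.1), ("query", 0.1)]))
elapsed = time.monotonic() - start
# Three 0.1-second tools complete in roughly 0.1 seconds, not 0.3
```
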
<h2 id="built-in-tool-ecosystem-and-mcp-integration">Built-in Tool Ecosystem and MCP Integration</h2>
<p>Strands ships with 30+ production-ready tools in the <code>strands-agents-tools</code> package covering file operations, HTTP requests, shell execution, Python REPL, database queries, image analysis, and AWS service calls (S3, DynamoDB, Lambda invoke). Beyond built-ins, Strands has first-class Model Context Protocol (MCP) support — you can connect any MCP server to your agent with three lines of code and instantly access thousands of community-built tool servers covering GitHub, Slack, Postgres, Notion, and hundreds of other services. This MCP-first design means you rarely need to write custom tools for standard integrations. The pattern is: create an <code>MCPClient</code> pointing at your MCP server URL, call <code>.list_tools()</code> to get the tool list, and pass the list to <code>Agent(tools=mcp_tools)</code>. The agent treats MCP tools identically to native <code>@tool</code> functions — same loop, same observability, same parallelism. Practically, this means a team can build an agent that queries a Postgres database, creates GitHub issues, and sends Slack notifications using only MCP servers, with zero custom integration code and full OTEL tracing on every call.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> strands <span style="color:#f92672">import</span> Agent
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> strands.tools.mcp <span style="color:#f92672">import</span> MCPClient
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">with</span> MCPClient(<span style="color:#e6db74">&#34;http://localhost:8080&#34;</span>) <span style="color:#66d9ef">as</span> mcp:
</span></span><span style="display:flex;"><span>    tools <span style="color:#f92672">=</span> mcp<span style="color:#f92672">.</span>list_tools()
</span></span><span style="display:flex;"><span>    agent <span style="color:#f92672">=</span> Agent(tools<span style="color:#f92672">=</span>tools)
</span></span><span style="display:flex;"><span>    agent(<span style="color:#e6db74">&#34;Query the last 10 orders from the database&#34;</span>)
</span></span></code></pre></div><p>Connecting a new data source becomes a matter of running an MCP server — no bespoke integration code required.</p>
<h2 id="multi-agent-patterns-graph-swarm-and-workflow">Multi-Agent Patterns: Graph, Swarm, and Workflow</h2>
<p>Strands Agents 1.0 introduced three structured multi-agent patterns that cover the most common production topologies: Graph, Swarm, and Workflow. Graph mode lets you define explicit agent nodes and directed edges, giving you deterministic control over which agent hands off to which — useful when you need auditability or strict sequencing for compliance. Swarm mode spins up a pool of identical worker agents that pull tasks from a shared queue; each worker operates independently and reports back to a coordinator, making Swarm ideal for embarrassingly parallel workloads like bulk document processing or parallel API calls across accounts. Workflow mode is the simplest: a linear pipeline where each agent&rsquo;s output becomes the next agent&rsquo;s input, perfect for ETL-style tasks (extract → transform → load) or document processing pipelines (ingest → summarize → classify → store). All three patterns use the same underlying <code>Agent</code> primitive and the same tool and model interfaces — the pattern controls topology, not implementation. The A2A (Agent-to-Agent) protocol allows agents built with different frameworks (LangGraph, CrewAI, custom REST services) to be treated as tools by a Strands orchestrator, so you are not locked in even at the multi-agent layer. Each pattern ships with built-in session management, so state is durable across invocations.</p>
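<p>Workflow mode&rsquo;s pipe-each-output-forward topology reduces to a simple fold over stages. A minimal pure-Python sketch of that shape (the stage functions here are hypothetical, and the real SDK wires <code>Agent</code> instances rather than plain functions):</p>

```python
def extract(doc: str) -> list:
    # ingest: split the raw document into sentences
    return [s.strip() for s in doc.split(".") if s.strip()]

def summarize(sentences: list) -> str:
    # transform: keep only the first sentence as a naive "summary"
    return sentences[0]

def store(summary: str) -> dict:
    # load: wrap the result as a stored record
    return {"status": "stored", "summary": summary}

def workflow(stages, payload):
    # Each stage's output becomes the next stage's input.
    for stage in stages:
        payload = stage(payload)
    return payload

record = workflow([extract, summarize, store], "Agents are useful. They automate work.")
```
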
<h3 id="using-the-handoff-primitive">Using the Handoff Primitive</h3>
<p>The <code>handoff</code> primitive passes control to another agent or a human reviewer without terminating the session. This is the key building block for human-in-the-loop workflows: an agent completes as much work as possible autonomously, then hands off to a human when it encounters ambiguity, and resumes when the human responds. Handoffs are serializable — the session state is stored in the Session Manager (backed by DynamoDB or any key-value store) so the resuming agent picks up exactly where the previous one stopped.</p>
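<p>The mechanics behind a serializable handoff can be sketched with a plain dict standing in for the DynamoDB-backed Session Manager. The function names below are illustrative, not SDK APIs:</p>

```python
import json

SESSION_STORE = {}  # stand-in for a DynamoDB-backed Session Manager

def handoff(session_id: str, state: dict) -> None:
    # State must be JSON-serializable so another process, or a human
    # review UI, can pick it up later.
    SESSION_STORE[session_id] = json.dumps(state)

def resume(session_id: str) -> dict:
    return json.loads(SESSION_STORE[session_id])

# The agent works autonomously, hits ambiguity, and hands off:
handoff("sess-42", {"step": 3, "pending": "approve refund over $500"})

# Later, possibly in a different process, the session resumes
# exactly where it stopped:
state = resume("sess-42")
```
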
<h2 id="deploying-to-production-lambda-fargate-and-bedrock-agentcore">Deploying to Production: Lambda, Fargate, and Bedrock AgentCore</h2>
<p>Strands agents can be deployed to AWS Lambda for short-lived event-driven tasks, AWS Fargate for long-running streaming agents, or Amazon Bedrock AgentCore for a fully managed production path with built-in identity, memory, observability, and tool integration. Lambda deployments suit agents with sub-15-minute execution windows and sporadic invocation patterns — the pay-per-request model is cost-efficient at low to medium scale. Fargate is the right target when agents need to stream responses in real time or when execution windows exceed Lambda&rsquo;s limit; a recommended architecture uses API Gateway fronting a Fargate container running the agent loop, with Lambda functions backing individual tools. Bedrock AgentCore is AWS&rsquo;s managed runtime for Strands agents: it eliminates infrastructure provisioning, handles auto-scaling, provides persistent session storage, and integrates Strands observability with CloudWatch and X-Ray automatically. For enterprise teams, AgentCore is the fastest path from prototype to production because it removes the need to manage container registries, IAM roles for tool execution, or session storage backends. A typical Fargate-hosted streaming agent can be wired up in under 30 lines of FastAPI code and deployed as a Docker container with a standard ECS task definition.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># Fargate: expose agent as streaming HTTP endpoint</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> fastapi <span style="color:#f92672">import</span> FastAPI
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> fastapi.responses <span style="color:#f92672">import</span> StreamingResponse
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> strands <span style="color:#f92672">import</span> Agent
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> strands.models <span style="color:#f92672">import</span> BedrockModel
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>app <span style="color:#f92672">=</span> FastAPI()
</span></span><span style="display:flex;"><span>agent <span style="color:#f92672">=</span> Agent(model<span style="color:#f92672">=</span>BedrockModel(<span style="color:#e6db74">&#34;us.anthropic.claude-sonnet-4-6-v1&#34;</span>))
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@app.post</span>(<span style="color:#e6db74">&#34;/run&#34;</span>)
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">async</span> <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">run_agent</span>(prompt: str):
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Wrap the async chunk generator in a StreamingResponse;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># yielding directly from the endpoint would not stream.</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> StreamingResponse(agent<span style="color:#f92672">.</span>stream(prompt), media_type<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;text/plain&#34;</span>)
</span></span></code></pre></div><p>For Lambda, package the agent with a handler that instantiates <code>Agent</code> per invocation and returns the <code>agent(event[&quot;prompt&quot;])</code> response as JSON.</p>
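<p>A minimal sketch of that handler shape, with a stub standing in for the real <code>Agent</code> so the example stays self-contained. The stub and its output are illustrative only:</p>

```python
import json

def make_agent():
    # Stub standing in for Agent(tools=[...]); a real handler would
    # construct a strands Agent here, once per invocation.
    return lambda prompt: f"answer for: {prompt}"

def lambda_handler(event, context):
    agent = make_agent()               # instantiate per invocation
    response = agent(event["prompt"])
    return {"statusCode": 200, "body": json.dumps({"response": str(response)})}

out = lambda_handler({"prompt": "ping"}, None)
```
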
<h3 id="bedrock-agentcore-deployment">Bedrock AgentCore Deployment</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> strands.deploy <span style="color:#f92672">import</span> BedrockAgentCore
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>core <span style="color:#f92672">=</span> BedrockAgentCore(
</span></span><span style="display:flex;"><span>    agent<span style="color:#f92672">=</span>agent,
</span></span><span style="display:flex;"><span>    region<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;us-east-1&#34;</span>,
</span></span><span style="display:flex;"><span>    memory_backend<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;dynamodb&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>core<span style="color:#f92672">.</span>deploy()  <span style="color:#75715e"># provisions all infrastructure</span>
</span></span></code></pre></div><p>AgentCore handles IAM, VPC, scaling policies, and CloudWatch dashboards automatically.</p>
<h2 id="observability-and-monitoring-with-opentelemetry">Observability and Monitoring with OpenTelemetry</h2>
<p>Strands Agents ships with built-in OpenTelemetry instrumentation that requires zero configuration to activate — every agent invocation automatically emits spans for model calls, tool executions, and reasoning steps. This matters because AI agent debugging without traces is guesswork: you need to see which tools were called, in what order, with what arguments, and how long each step took. Strands traces integrate natively with AWS X-Ray (via the OTEL OTLP exporter pointing at the X-Ray collector), CloudWatch (via the CloudWatch OTEL Distro), and any third-party backend that accepts OTLP — Grafana, Jaeger, or Datadog. Fan-out routing is supported, so you can send traces to X-Ray for the AWS console while simultaneously forwarding to Grafana for team dashboards. Custom attributes added via standard OTEL APIs propagate through multi-agent chains, so a root trace from an orchestrator spans all downstream worker agent calls. Metrics include token counts per call, tool latency by name, and agent loop iteration counts — all available in CloudWatch without any custom metric publishing code. In practice, a Strands agent with three tools on Bedrock produces full end-to-end traces in X-Ray within the first invocation, with no instrumentation code written by the developer.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> strands <span style="color:#f92672">import</span> Agent
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> opentelemetry <span style="color:#f92672">import</span> trace
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> opentelemetry.sdk.trace <span style="color:#f92672">import</span> TracerProvider
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> opentelemetry.sdk.trace.export <span style="color:#f92672">import</span> SimpleSpanProcessor
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> opentelemetry.exporter.otlp.proto.grpc.trace_exporter <span style="color:#f92672">import</span> OTLPSpanExporter
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>provider <span style="color:#f92672">=</span> TracerProvider()
</span></span><span style="display:flex;"><span>provider<span style="color:#f92672">.</span>add_span_processor(SimpleSpanProcessor(OTLPSpanExporter()))
</span></span><span style="display:flex;"><span>trace<span style="color:#f92672">.</span>set_tracer_provider(provider)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Strands automatically picks up the configured OTEL provider</span>
</span></span><span style="display:flex;"><span>agent <span style="color:#f92672">=</span> Agent(tools<span style="color:#f92672">=</span>[<span style="color:#f92672">...</span>])
</span></span></code></pre></div><p>No Strands-specific configuration needed — standard OTEL SDK setup is sufficient.</p>
<h2 id="strands-agents-vs-langgraph-vs-crewai-which-should-you-use">Strands Agents vs LangGraph vs CrewAI: Which Should You Use?</h2>
<p>Strands Agents, LangGraph, and CrewAI each occupy a distinct position in the 2026 AI agent landscape, and the right choice depends on your workflow type, team composition, and deployment target. Strands is optimized for minimal code, fast iteration, and AWS-native deployment: it&rsquo;s the strongest choice for teams building on Bedrock, leveraging MCP tools, or needing to prototype and ship quickly. LangGraph is a production-grade state-machine framework — more verbose, but the most battle-tested option for complex deterministic workflows where execution order must be auditable and human-in-the-loop checkpoints are required at defined graph nodes. CrewAI uses a crew/role abstraction that maps well to organizational structures; product managers and non-engineers find it easier to reason about than graphs or loops. The critical differentiator for 2026: Strands is the only framework with a direct managed deployment target (Bedrock AgentCore) that eliminates infrastructure entirely, and it is the only one with MCP-first design that connects to community tool servers without custom code. Strands reached 1 million downloads and 3,000+ GitHub stars within just four months of its preview launch, a pace that signals strong developer adoption and active community investment.</p>
<table>
  <thead>
      <tr>
          <th>Feature</th>
          <th>Strands Agents</th>
          <th>LangGraph</th>
          <th>CrewAI</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Learning curve</td>
          <td>Low (4 lines to start)</td>
          <td>High (graph topology)</td>
          <td>Medium (crew/role model)</td>
      </tr>
      <tr>
          <td>Model agnostic</td>
          <td>Yes (9+ providers)</td>
          <td>Yes</td>
          <td>Yes</td>
      </tr>
      <tr>
          <td>MCP support</td>
          <td>Native, first-class</td>
          <td>Via LangChain tools</td>
          <td>Via custom adapters</td>
      </tr>
      <tr>
          <td>Managed deployment</td>
          <td>Bedrock AgentCore</td>
          <td>LangSmith Cloud</td>
          <td>CrewAI Cloud</td>
      </tr>
      <tr>
          <td>Multi-agent patterns</td>
          <td>Graph, Swarm, Workflow</td>
          <td>Graph, subgraphs</td>
          <td>Crew, pipeline</td>
      </tr>
      <tr>
          <td>Built-in observability</td>
          <td>OTEL, X-Ray, CloudWatch</td>
          <td>LangSmith tracing</td>
          <td>Built-in logs</td>
      </tr>
      <tr>
          <td>Best for</td>
          <td>AWS workloads, fast iteration</td>
          <td>Complex deterministic flows</td>
          <td>Role-based team simulations</td>
      </tr>
  </tbody>
</table>
<h3 id="when-to-stick-with-langgraph">When to Stick With LangGraph</h3>
<p>LangGraph remains the strongest choice when your workflow has complex conditional branching that must be fully deterministic — financial compliance pipelines, multi-step approval workflows, or any scenario where the execution graph needs to be audited after the fact. Its visual debugger in LangSmith is more mature than Strands tooling as of mid-2026.</p>
<h2 id="real-world-use-cases-and-enterprise-adoption">Real-World Use Cases and Enterprise Adoption</h2>
<p>AWS Strands Agents has moved beyond early-adopter experimentation into enterprise production deployments. Amazon internal teams run it at scale: Amazon Q Developer uses Strands for IDE assistant workflows, AWS Glue uses it for automated ETL pipeline generation, and the VPC Reachability Analyzer uses Strands agents to diagnose complex network configurations. Outside AWS, Verisk Analytics — a Fortune 500 data analytics company serving the insurance industry — deployed a Strands-based RAG agent on Amazon Bedrock that reduced mean-time-to-resolution for data engineering incidents by 60%, with no hand-coded pipelines. The pattern driving these results is consistent: Strands agents connect to existing data sources via MCP (replacing bespoke integrations), reason over retrieved context, and take actions through typed tools with full OTEL tracing on every step. Common enterprise patterns in production as of 2026 include DevOps agents that triage CloudWatch alarms and auto-remediate known failure modes, data engineering agents that generate and deploy Glue jobs from natural-language specs, and customer support agents that query CRM systems and escalate to humans when confidence is below threshold. The SDK&rsquo;s 14 million+ total downloads indicate these patterns are being replicated broadly across organizations already running on AWS infrastructure.</p>
<h3 id="patterns-worth-reusing">Patterns Worth Reusing</h3>
<p>The most reusable Strands pattern is the &ldquo;read-reason-act&rdquo; loop backed by a Session Manager: the agent reads current state (from DynamoDB, S3, or an MCP database tool), reasons about the delta between current and desired state, and calls action tools to close the gap. Session state is persisted after each loop iteration, so the agent can be interrupted and resumed across Lambda invocations without losing context.</p>
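<p>A compact sketch of that read-reason-act loop, with an in-memory dict standing in for the durable state store. All names below are illustrative:</p>

```python
STATE_STORE = {"replicas": 2}   # stand-in for DynamoDB/S3-backed state
DESIRED = {"replicas": 5}

def read_state() -> dict:
    return dict(STATE_STORE)

def reason(current: dict, desired: dict) -> list:
    # Compute the delta between current and desired state.
    return [(key, desired[key]) for key in desired if current.get(key) != desired[key]]

def act(key, value) -> None:
    # An action tool closes the gap; state is persisted immediately,
    # so the loop can be interrupted and resumed without losing work.
    STATE_STORE[key] = value

for key, value in reason(read_state(), DESIRED):
    act(key, value)
```
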
<h2 id="frequently-asked-questions">Frequently Asked Questions</h2>
<p><strong>Q: Is AWS Strands Agents free and open source?</strong>
Yes. Strands Agents SDK is Apache-2.0 licensed and available on GitHub at <code>strands-agents/sdk-python</code> and <code>strands-agents/sdk-js</code>. There is no cost to use the framework itself. You pay only for model inference (Bedrock, OpenAI, etc.) and any AWS infrastructure you deploy to (Lambda, Fargate, AgentCore).</p>
<p><strong>Q: Does Strands Agents require Amazon Bedrock?</strong>
No. Bedrock is one of 9+ supported model providers, but Strands works with Anthropic, OpenAI, Gemini, Ollama, and others out of the box. You can build and run agents entirely locally using Ollama without any AWS account or credentials.</p>
<p><strong>Q: How does Strands Agents handle long-running tasks that exceed Lambda&rsquo;s timeout?</strong>
Use Fargate for the agent loop (no timeout) and Lambda for individual tool executions. The recommended production architecture is API Gateway → Fargate (agent loop) → Lambda (tools), which lets you scale tool execution independently of the main loop. Alternatively, use the Session Manager to checkpoint state and continue across multiple Lambda invocations.</p>
<p><strong>Q: Can Strands Agents work with agents built on other frameworks like LangGraph or CrewAI?</strong>
Yes, via the A2A (Agent-to-Agent) protocol introduced in Strands 1.0. Any agent exposing an A2A-compliant endpoint can be invoked as a tool by a Strands orchestrator. This enables hybrid architectures where a Strands orchestrator coordinates LangGraph sub-agents for complex deterministic sub-tasks.</p>
<p><strong>Q: What&rsquo;s the difference between Strands Agents and Amazon Bedrock Agents (the managed console feature)?</strong>
Amazon Bedrock Agents is a no-code/low-code managed service configured through the AWS console. Strands Agents SDK is a code-first framework that you deploy yourself — it gives you full control over agent logic, tool definitions, and deployment targets. Strands can deploy to Bedrock AgentCore (the managed runtime), but it is a distinct product from the console-based Bedrock Agents service.</p>
]]></content:encoded></item></channel></rss>