<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Agent-Interoperability on RockB</title><link>https://baeseokjae.github.io/tags/agent-interoperability/</link><description>Recent content in Agent-Interoperability on RockB</description><image><title>RockB</title><url>https://baeseokjae.github.io/images/og-default.png</url><link>https://baeseokjae.github.io/images/og-default.png</link></image><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 23 Apr 2026 01:23:58 +0000</lastBuildDate><atom:link href="https://baeseokjae.github.io/tags/agent-interoperability/index.xml" rel="self" type="application/rss+xml"/><item><title>CrewAI A2A Protocol Tutorial: Build Interoperable Agents with Agent2Agent Support</title><link>https://baeseokjae.github.io/posts/crewai-a2a-protocol-tutorial-2026/</link><pubDate>Thu, 23 Apr 2026 01:23:58 +0000</pubDate><guid>https://baeseokjae.github.io/posts/crewai-a2a-protocol-tutorial-2026/</guid><description>Step-by-step tutorial on using the A2A protocol with CrewAI to build cross-framework multi-agent systems in 2026.</description><content:encoded><![CDATA[<p>The A2A (Agent2Agent) protocol lets you connect a CrewAI agent to a LangGraph agent — or any other compliant framework — over a standard HTTP interface, with no custom glue code. Setup takes about 15 minutes once your CrewAI environment is running.</p>
<h2 id="what-is-the-a2a-protocol">What Is the A2A Protocol?</h2>
<p>The A2A (Agent2Agent) protocol is an open HTTP-based standard that defines how AI agents from different frameworks discover each other, exchange tasks, and stream results — without requiring framework-specific integration code. Originally developed by Google and donated to the Linux Foundation in 2025, A2A is now a vendor-neutral specification backed by Anthropic, Microsoft, Salesforce, and over 50 other organizations. Think of it as the HTTP of multi-agent systems: just as HTTP lets any browser talk to any web server regardless of their underlying technology, A2A lets any compliant agent talk to any other. The protocol uses JSON-RPC 2.0 over HTTPS, supports server-sent events for streaming, and mandates an <code>/.well-known/agent.json</code> discovery endpoint so agents can advertise their capabilities. CrewAI adopted A2A as a first-class feature in version 0.80, making it possible to delegate tasks from a CrewAI crew to a LangGraph graph, a Semantic Kernel agent, or a custom Python service — all with a single configuration block. For teams building composite AI systems in 2026, A2A removes the biggest integration pain point: the need to write and maintain bespoke adapter layers every time you add a new agent framework.</p>
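<p>Discovery in practice means fetching that agent card and checking it before you delegate. A minimal client-side sketch (the card fields shown are illustrative, not the full schema):</p>

```python
import json
from urllib.parse import urljoin


def agent_card_url(base_url: str) -> str:
    """Build the well-known A2A discovery URL from a server's base URL."""
    return urljoin(base_url.rstrip("/") + "/", ".well-known/agent.json")


def summarize_card(raw_json: str) -> dict:
    """Pull out the fields a client typically checks before delegating."""
    card = json.loads(raw_json)
    return {
        "name": card.get("name"),
        "version": card.get("version"),
        "capabilities": card.get("capabilities", []),
    }


# A card a server might publish (illustrative fields, not the full schema):
sample = '{"name": "Research Agent", "version": "1.0.0", "capabilities": ["streaming"]}'
print(agent_card_url("https://my-agent.example.com"))
print(summarize_card(sample)["capabilities"])
```

In a real client you would fetch the card over HTTPS and reject the server if its capabilities or version do not match what your crew expects.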
<h2 id="why-crewai-adopted-a2a-as-a-first-class-primitive">Why CrewAI Adopted A2A as a First-Class Primitive</h2>
<p>CrewAI treats A2A as a delegation primitive, not a bolted-on afterthought. Rather than wrapping A2A in a special tool class or plugin, CrewAI exposes it directly as a configuration option on the <code>Agent</code> class — the same place you define an agent&rsquo;s role, backstory, and LLM. This design decision reflects a key insight: 85% of developers in 2026 regularly use AI tools across multiple frameworks, which means cross-framework agent communication is a mainstream problem, not an edge case. Before A2A, connecting a CrewAI research agent to a LangGraph execution agent meant writing custom REST adapters, managing serialization formats, and handling auth manually for each pair of frameworks. With A2A, CrewAI agents can delegate tasks to any compliant server agent by specifying a URL and auth config — the protocol handles capability negotiation, task lifecycle, and streaming. CrewAI also acts as an A2A server itself: you can expose any CrewAI crew as an A2A-compliant endpoint, making it accessible to agents built in other frameworks. This bidirectional support is what makes A2A genuinely useful in production: you can participate in a multi-framework ecosystem as both a consumer and a provider without rewriting your core crew logic.</p>
<h2 id="a2a-vs-direct-api-integration-key-differences">A2A vs Direct API Integration: Key Differences</h2>
<p>A2A protocol and direct API calls solve different problems, and picking the wrong one creates unnecessary complexity. A direct API call works best when you control both ends of the integration, both agents use the same framework version, and you only need request-response (no streaming). A2A is better when agents are built on different frameworks, owned by different teams, or need to stream results back incrementally.</p>
<table>
  <thead>
      <tr>
          <th>Dimension</th>
          <th>Direct API Call</th>
          <th>A2A Protocol</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Discovery</td>
          <td>Manual, hardcoded</td>
          <td>Automatic via <code>/.well-known/agent.json</code></td>
      </tr>
      <tr>
          <td>Auth</td>
          <td>Custom per integration</td>
          <td>Standardized: Bearer, OAuth2, API keys</td>
      </tr>
      <tr>
          <td>Streaming</td>
          <td>Manual SSE/WebSocket</td>
          <td>Built into the spec</td>
      </tr>
      <tr>
          <td>Framework coupling</td>
          <td>Tight</td>
          <td>None</td>
      </tr>
      <tr>
          <td>Schema validation</td>
          <td>Ad hoc</td>
          <td>JSON-RPC 2.0 enforced</td>
      </tr>
      <tr>
          <td>Task lifecycle</td>
          <td>Custom</td>
          <td>Standardized (<code>submitted</code>, <code>working</code>, <code>completed</code>)</td>
      </tr>
      <tr>
          <td>Interoperability</td>
          <td>Framework-specific</td>
          <td>Any A2A-compliant agent</td>
      </tr>
  </tbody>
</table>
<p>A direct API call has less overhead and fewer moving parts when you control both sides. But A2A wins in any scenario where you&rsquo;re connecting agents from different teams or frameworks — the standardization eliminates the negotiation cost of &ldquo;how do we send data between these two things?&rdquo; every time you add a new agent to the system.</p>
<p>A2A also defines a formal task lifecycle (<code>submitted</code> → <code>working</code> → <code>completed</code> / <code>failed</code>) that makes it easy to poll for status or stream progress without reinventing that machinery for each integration. This is particularly valuable for long-running tasks like research synthesis or code generation, where you want intermediate updates rather than a single blocking response.</p>
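<p>Consuming that lifecycle amounts to a polling loop over the task&rsquo;s state. The sketch below simulates the status sequence instead of calling a live server; <code>poll_task</code> and its status callback are illustrative names, not CrewAI API:</p>

```python
import time

TERMINAL_STATES = {"completed", "failed"}


def poll_task(get_status, interval: float = 0.0) -> str:
    """Poll an A2A task's lifecycle state until it reaches a terminal one."""
    while True:
        state = get_status()
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)


# Simulated remote agent walking through the standard lifecycle:
states = iter(["submitted", "working", "working", "completed"])
print(poll_task(lambda: next(states)))  # completed
```

With streaming enabled you would subscribe to server-sent events instead of polling, but the terminal-state logic is the same.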
<h2 id="setting-up-crewai-with-a2a-support">Setting Up CrewAI with A2A Support</h2>
<p>Installing A2A support in CrewAI requires the <code>crewai[a2a]</code> extra, which pulls in the <code>a2a-sdk</code> package alongside the base framework. Here is the full setup sequence:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Create and activate a virtual environment</span>
</span></span><span style="display:flex;"><span>python -m venv .venv
</span></span><span style="display:flex;"><span>source .venv/bin/activate
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Install CrewAI with A2A support</span>
</span></span><span style="display:flex;"><span>pip install <span style="color:#e6db74">&#34;crewai[a2a]&gt;=0.80.0&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Verify the installation</span>
</span></span><span style="display:flex;"><span>python -c <span style="color:#e6db74">&#34;import crewai; print(crewai.__version__)&#34;</span>
</span></span></code></pre></div><p>You also need an LLM API key. CrewAI defaults to OpenAI but works with any LiteLLM-compatible provider:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>export OPENAI_API_KEY<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;sk-...&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Or use a different provider</span>
</span></span><span style="display:flex;"><span>export ANTHROPIC_API_KEY<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;sk-ant-...&#34;</span>
</span></span></code></pre></div><p>If you are connecting to an existing A2A server agent (a LangGraph endpoint, for example), you need the server&rsquo;s base URL and its auth credentials. Collect these before writing any code — you will pass them in the <code>A2AClientConfig</code> object shown in the next section.</p>
<p>Your project structure for a minimal A2A-enabled crew looks like this:</p>
<pre><code>my_a2a_crew
├── agents.py
├── tasks.py
├── crew.py
└── .env
</code></pre>
<p>Keep credentials in <code>.env</code> and load them with <code>python-dotenv</code>. Never hardcode tokens directly in agent configuration files.</p>
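<p><code>load_dotenv()</code> handles this for you; as an illustration of what it does under the hood, here is a dependency-free sketch of the same idea:</p>

```python
import os


def load_env(path: str = ".env") -> None:
    """Read KEY=VALUE lines into os.environ without overwriting
    variables that are already set (mirroring load_dotenv's default)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))


# Call load_env() once at startup, then read os.environ["OPENAI_API_KEY"]
# (or your A2A bearer token) wherever you build the client config.
```

In practice, prefer the real <code>python-dotenv</code> package, which also handles quoting, multiline values, and variable interpolation.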
<h2 id="building-your-first-a2a-agent-in-crewai">Building Your First A2A Agent in CrewAI</h2>
<p>A2A agents in CrewAI work by wrapping a remote A2A server in an <code>A2AClientConfig</code> and attaching it to a standard <code>Agent</code> definition. The local agent automatically delegates work to the remote server whenever a task calls for its capabilities. Here is a complete working example:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai <span style="color:#f92672">import</span> Agent, Task, Crew
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai.a2a <span style="color:#f92672">import</span> A2AClientConfig, A2AAuthConfig
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Configure the A2A client — points to a remote compliant agent</span>
</span></span><span style="display:flex;"><span>a2a_config <span style="color:#f92672">=</span> A2AClientConfig(
</span></span><span style="display:flex;"><span>    server_url<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;https://my-langgraph-agent.example.com&#34;</span>,
</span></span><span style="display:flex;"><span>    auth<span style="color:#f92672">=</span>A2AAuthConfig(
</span></span><span style="display:flex;"><span>        type<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;bearer&#34;</span>,
</span></span><span style="display:flex;"><span>        token<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;your-bearer-token-here&#34;</span>
</span></span><span style="display:flex;"><span>    )
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Define a CrewAI agent that delegates to the A2A server</span>
</span></span><span style="display:flex;"><span>researcher <span style="color:#f92672">=</span> Agent(
</span></span><span style="display:flex;"><span>    role<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Research Specialist&#34;</span>,
</span></span><span style="display:flex;"><span>    goal<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Gather and summarize technical information on any topic&#34;</span>,
</span></span><span style="display:flex;"><span>    backstory<span style="color:#f92672">=</span>(
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;You are a senior researcher with access to specialized &#34;</span>
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;analysis tools hosted on external agent servers.&#34;</span>
</span></span><span style="display:flex;"><span>    ),
</span></span><span style="display:flex;"><span>    a2a_client<span style="color:#f92672">=</span>a2a_config,
</span></span><span style="display:flex;"><span>    verbose<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Task will be delegated through A2A to the remote agent</span>
</span></span><span style="display:flex;"><span>research_task <span style="color:#f92672">=</span> Task(
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Research the latest advances in transformer architecture pruning&#34;</span>,
</span></span><span style="display:flex;"><span>    expected_output<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;A 500-word technical summary with three key findings&#34;</span>,
</span></span><span style="display:flex;"><span>    agent<span style="color:#f92672">=</span>researcher
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>crew <span style="color:#f92672">=</span> Crew(agents<span style="color:#f92672">=</span>[researcher], tasks<span style="color:#f92672">=</span>[research_task])
</span></span><span style="display:flex;"><span>result <span style="color:#f92672">=</span> crew<span style="color:#f92672">.</span>kickoff()
</span></span><span style="display:flex;"><span>print(result)
</span></span></code></pre></div><p>When <code>crew.kickoff()</code> runs, CrewAI sends the task payload to the remote A2A server via JSON-RPC, polls for status updates, and streams back the result. From the LLM&rsquo;s perspective, it simply receives the completed output — the A2A transport layer is invisible.</p>
<p>The <code>A2AClientConfig</code> supports four auth types out of the box: <code>bearer</code> (token in Authorization header), <code>oauth2</code> (client credentials flow), <code>api_key</code> (custom header), and <code>http_basic</code>. Match the type to what the server agent expects.</p>
<h2 id="cross-framework-communication-crewai--langgraph">Cross-Framework Communication: CrewAI + LangGraph</h2>
<p>The most practical A2A use case in 2026 is connecting a CrewAI crew to a LangGraph workflow — the two most popular agent frameworks with overlapping but non-identical strengths. CrewAI excels at role-based multi-agent orchestration; LangGraph excels at complex stateful workflows with explicit graph logic. Combining them via A2A lets you use each where it fits best. Google&rsquo;s purchasing concierge Codelab demonstrates exactly this pattern: a CrewAI &ldquo;Burger agent&rdquo; and a LangGraph &ldquo;Pizza agent&rdquo; work together via A2A to fulfill a composite food order, with each framework handling the domain it knows best.</p>
<p>First, expose your LangGraph graph as an A2A server:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># langgraph_server.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> langgraph.a2a <span style="color:#f92672">import</span> A2AServer
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> my_graph <span style="color:#f92672">import</span> compiled_graph
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>app <span style="color:#f92672">=</span> A2AServer(
</span></span><span style="display:flex;"><span>    graph<span style="color:#f92672">=</span>compiled_graph,
</span></span><span style="display:flex;"><span>    agent_card<span style="color:#f92672">=</span>{
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;name&#34;</span>: <span style="color:#e6db74">&#34;LangGraph Execution Agent&#34;</span>,
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;description&#34;</span>: <span style="color:#e6db74">&#34;Executes structured workflows with state management&#34;</span>,
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;version&#34;</span>: <span style="color:#e6db74">&#34;1.0.0&#34;</span>,
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;capabilities&#34;</span>: [<span style="color:#e6db74">&#34;streaming&#34;</span>, <span style="color:#e6db74">&#34;stateful-execution&#34;</span>]
</span></span><span style="display:flex;"><span>    }
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Run with: uvicorn langgraph_server:app --port 8001</span>
</span></span></code></pre></div><p>Then point your CrewAI agent at that server:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai.a2a <span style="color:#f92672">import</span> A2AClientConfig, A2AAuthConfig
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>langgraph_config <span style="color:#f92672">=</span> A2AClientConfig(
</span></span><span style="display:flex;"><span>    server_url<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://localhost:8001&#34;</span>,
</span></span><span style="display:flex;"><span>    auth<span style="color:#f92672">=</span>A2AAuthConfig(type<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;bearer&#34;</span>, token<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;dev-token&#34;</span>)
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>executor_agent <span style="color:#f92672">=</span> Agent(
</span></span><span style="display:flex;"><span>    role<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Workflow Executor&#34;</span>,
</span></span><span style="display:flex;"><span>    goal<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Execute structured multi-step workflows via LangGraph&#34;</span>,
</span></span><span style="display:flex;"><span>    backstory<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Specialist in stateful task execution with rollback support&#34;</span>,
</span></span><span style="display:flex;"><span>    a2a_client<span style="color:#f92672">=</span>langgraph_config
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p>The two agents now communicate over A2A: CrewAI sends tasks as JSON-RPC messages, LangGraph processes them through its state graph, and results stream back to the CrewAI crew. Neither framework needs to know anything about the other&rsquo;s internals.</p>
<h2 id="authentication-and-security-for-a2a-agents">Authentication and Security for A2A Agents</h2>
<p>A2A&rsquo;s authentication model mirrors enterprise API security patterns, which makes it straightforward to integrate with existing identity infrastructure. CrewAI&rsquo;s <code>A2AAuthConfig</code> supports four auth strategies, each suited to different deployment scenarios. The bearer token approach is simplest and works for internal services; OAuth2 client credentials is the right choice for production B2B agent communication where you need short-lived, auditable tokens.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai.a2a <span style="color:#f92672">import</span> A2AAuthConfig
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Bearer token (simple, good for internal/dev)</span>
</span></span><span style="display:flex;"><span>bearer_auth <span style="color:#f92672">=</span> A2AAuthConfig(
</span></span><span style="display:flex;"><span>    type<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;bearer&#34;</span>,
</span></span><span style="display:flex;"><span>    token<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;eyJhbGciOiJSUzI1NiJ9...&#34;</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># OAuth2 client credentials (production cross-org communication)</span>
</span></span><span style="display:flex;"><span>oauth2_auth <span style="color:#f92672">=</span> A2AAuthConfig(
</span></span><span style="display:flex;"><span>    type<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;oauth2&#34;</span>,
</span></span><span style="display:flex;"><span>    client_id<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;crewai-crew-prod&#34;</span>,
</span></span><span style="display:flex;"><span>    client_secret<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;your-secret&#34;</span>,
</span></span><span style="display:flex;"><span>    token_url<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;https://auth.example.com/oauth/token&#34;</span>,
</span></span><span style="display:flex;"><span>    scopes<span style="color:#f92672">=</span>[<span style="color:#e6db74">&#34;agent:read&#34;</span>, <span style="color:#e6db74">&#34;agent:execute&#34;</span>]
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># API key via custom header (common for SaaS agent platforms)</span>
</span></span><span style="display:flex;"><span>api_key_auth <span style="color:#f92672">=</span> A2AAuthConfig(
</span></span><span style="display:flex;"><span>    type<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;api_key&#34;</span>,
</span></span><span style="display:flex;"><span>    header_name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;X-Agent-API-Key&#34;</span>,
</span></span><span style="display:flex;"><span>    api_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;ak_live_...&#34;</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># HTTP Basic (legacy systems or internal proxies)</span>
</span></span><span style="display:flex;"><span>basic_auth <span style="color:#f92672">=</span> A2AAuthConfig(
</span></span><span style="display:flex;"><span>    type<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http_basic&#34;</span>,
</span></span><span style="display:flex;"><span>    username<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;crew-service&#34;</span>,
</span></span><span style="display:flex;"><span>    password<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;internal-password&#34;</span>
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p>Beyond auth, enforce these security practices for production A2A deployments:</p>
<ul>
<li><strong>TLS everywhere</strong>: Only connect to <code>https://</code> endpoints in production. A2A over plain HTTP leaks task payloads.</li>
<li><strong>Rotate bearer tokens</strong>: Set a short TTL (1–24 hours) and automate rotation via your secrets manager.</li>
<li><strong>Scope OAuth2 tokens tightly</strong>: Request only the scopes an agent needs (<code>agent:execute</code> not <code>agent:admin</code>).</li>
<li><strong>Validate the agent card</strong>: Before delegating tasks to a discovered agent, verify its <code>/.well-known/agent.json</code> matches your expected capabilities and version.</li>
<li><strong>Rate-limit outbound requests</strong>: Wrap your <code>A2AClientConfig</code> in a circuit breaker to prevent a misbehaving remote agent from cascading failures into your crew.</li>
</ul>
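<p>The last point can be as simple as a small wrapper around whatever function performs the outbound call. A minimal sketch (the class and thresholds are illustrative, not part of CrewAI):</p>

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker for outbound A2A calls: open after
    `max_failures` consecutive errors, refuse calls for `reset_after`
    seconds, then let a single probe through (half-open)."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: remote agent unavailable")
            self.opened_at = None  # half-open: allow one probe
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit
        return result
```

Tune the failure threshold and reset window to the remote agent&rsquo;s expected latency and availability; a production setup would usually reach for a maintained library rather than hand-rolling this.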
<h2 id="debugging-with-a2a-inspector">Debugging with A2A Inspector</h2>
<p>The A2A Inspector is a browser-based tool for observing the JSON-RPC messages that flow between A2A agents in real time. It works by proxying traffic between your client and server, displaying each request and response in a structured UI. This is the fastest way to diagnose why a delegation is failing or why results are arriving malformed.</p>
<p>To use it during local development:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Install the inspector CLI</span>
</span></span><span style="display:flex;"><span>pip install a2a-inspector
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Start the inspector proxy (listens on 8080, forwards to your server on 8001)</span>
</span></span><span style="display:flex;"><span>a2a-inspector proxy --target http://localhost:8001 --port <span style="color:#ae81ff">8080</span>
</span></span></code></pre></div><p>Then point your <code>A2AClientConfig</code> at the inspector proxy instead of directly at the server:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span>a2a_config <span style="color:#f92672">=</span> A2AClientConfig(
</span></span><span style="display:flex;"><span>    server_url<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://localhost:8080&#34;</span>,  <span style="color:#75715e"># Inspector proxy</span>
</span></span><span style="display:flex;"><span>    auth<span style="color:#f92672">=</span>A2AAuthConfig(type<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;bearer&#34;</span>, token<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;dev-token&#34;</span>)
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p>Open <code>http://localhost:8080/inspector</code> in your browser. You will see each JSON-RPC call as it happens, including the task payload sent to the server, status update messages streamed back, and the final result envelope.</p>
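<p>The calls the Inspector displays follow the JSON-RPC 2.0 envelope the A2A spec mandates. As a rough sketch of what to expect in the request pane — the <code>method</code> name and <code>params</code> layout below are illustrative, not normative; the exact names come from the spec version your server implements:</p>

```python
import json

def make_rpc_request(req_id: int, method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 request envelope like the ones the Inspector shows."""
    return json.dumps({
        "jsonrpc": "2.0",   # fixed protocol marker required by JSON-RPC 2.0
        "id": req_id,       # correlates the response with this request
        "method": method,
        "params": params,
    })

# Hypothetical task-submission call for illustration only.
request = make_rpc_request(1, "message/send", {
    "message": {"role": "user", "parts": [{"type": "text", "text": "Summarize Q3 sales"}]},
})

envelope = json.loads(request)
```

<p>The response pane shows the mirror image: a <code>result</code> (or <code>error</code>) object carrying the same <code>id</code>, which is how the Inspector pairs requests with responses.</p>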
<p>Common issues the Inspector surfaces immediately:</p>
<table>
  <thead>
      <tr>
          <th>Symptom</th>
          <th>Root cause</th>
          <th>Fix</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>401 on every request</td>
          <td>Wrong auth type or expired token</td>
          <td>Check <code>A2AAuthConfig.type</code> matches server expectation</td>
      </tr>
      <tr>
          <td>Task stuck in <code>working</code></td>
          <td>Remote agent timeout or exception</td>
          <td>Check server logs; add timeout to <code>A2AClientConfig</code></td>
      </tr>
      <tr>
          <td>Empty result payload</td>
          <td>Schema mismatch in response</td>
          <td>Inspect raw JSON; check <code>expected_output</code> format</td>
      </tr>
      <tr>
          <td>404 on agent card</td>
          <td>Wrong <code>server_url</code> or server not started</td>
          <td>Curl <code>{server_url}/.well-known/agent.json</code> manually</td>
      </tr>
  </tbody>
</table>
<p>The Inspector also records a session log you can export as JSON — useful for sharing a debugging trace with a teammate or filing a bug report against a third-party agent service.</p>
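<p>For the 404 row in particular, it is worth sanity-checking the discovery document before digging through an Inspector trace. A minimal validator sketch — the required-field list is an assumption based on the card fields discussed in this post (<code>name</code>, <code>url</code>, <code>version</code>), not the full spec:</p>

```python
def validate_agent_card(card: dict) -> list[str]:
    """Return a list of problems found in an agent card dict (empty = looks OK)."""
    problems = []
    for field in ("name", "url", "version"):  # assumed minimal field set
        if field not in card:
            problems.append(f"missing required field: {field}")
    url = card.get("url", "")
    if url and not url.startswith("https://"):
        problems.append("url is not HTTPS (A2A requires TLS in production)")
    return problems

# A typical local-dev card: well-formed, but flagged for plain HTTP.
card = {"name": "research-crew", "url": "http://localhost:8001", "version": "1.0.0"}
issues = validate_agent_card(card)
```

<p>Run this against whatever <code>curl {server_url}/.well-known/agent.json</code> returns; an empty list means the card at least parses and carries the basics.</p>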
<h2 id="deploying-a2a-agents-to-production">Deploying A2A Agents to Production</h2>
<p>Production A2A deployments require three infrastructure decisions: where to host your A2A server agents, how to handle discovery, and how to manage agent card versioning. Google Cloud Run is the most common choice for A2A server agents in 2026 because it scales to zero between requests, supports streaming (required for A2A SSE), and has native integration with Google&rsquo;s Agent Engine for multi-agent orchestration at scale.</p>
<p>Here is a minimal <code>Dockerfile</code> for a CrewAI A2A server agent:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-dockerfile" data-lang="dockerfile"><span style="display:flex;"><span><span style="color:#66d9ef">FROM</span><span style="color:#e6db74"> python:3.12-slim</span><span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010"></span><span style="color:#66d9ef">WORKDIR</span><span style="color:#e6db74"> /app</span><span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010"></span><span style="color:#66d9ef">COPY</span> requirements.txt .<span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010"></span><span style="color:#66d9ef">RUN</span> pip install --no-cache-dir -r requirements.txt<span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010"></span><span style="color:#66d9ef">COPY</span> . .<span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010"></span><span style="color:#66d9ef">ENV</span> PORT<span style="color:#f92672">=</span><span style="color:#ae81ff">8080</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">EXPOSE</span><span style="color:#e6db74"> 8080</span><span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010">
</span></span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010"></span><span style="color:#66d9ef">CMD</span> [<span style="color:#e6db74">&#34;uvicorn&#34;</span>, <span style="color:#e6db74">&#34;main:app&#34;</span>, <span style="color:#e6db74">&#34;--host&#34;</span>, <span style="color:#e6db74">&#34;0.0.0.0&#34;</span>, <span style="color:#e6db74">&#34;--port&#34;</span>, <span style="color:#e6db74">&#34;8080&#34;</span>]<span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div><p>Deploy to Cloud Run:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>gcloud run deploy crewai-a2a-agent <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>  --source . <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>  --region us-central1 <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>  --allow-unauthenticated <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>  --set-env-vars OPENAI_API_KEY<span style="color:#f92672">=</span>$OPENAI_API_KEY
</span></span></code></pre></div><p>For the discovery endpoint, host your <code>agent.json</code> at the service URL&rsquo;s <code>/.well-known/agent.json</code> path. Cloud Run handles HTTPS automatically, satisfying A2A&rsquo;s TLS requirement.</p>
<p>For multi-region deployments, put a load balancer in front of your Cloud Run services and ensure the agent card&rsquo;s <code>url</code> field points to the load balancer endpoint, not individual regional endpoints. This keeps the A2A discovery address stable across deployments.</p>
<p>Agent card versioning matters when you update an agent&rsquo;s capabilities. Increment the <code>version</code> field in your agent card and maintain backward compatibility for at least one version cycle — client agents may cache the card and not pick up new capabilities immediately.</p>
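<p>The cache-staleness problem above amounts to a version comparison. A minimal sketch, assuming dotted numeric version strings in the card's <code>version</code> field:</p>

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '1.2.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def card_is_stale(cached_version: str, live_version: str) -> bool:
    """True when the live agent card advertises a newer version than the cache,
    meaning the client should re-fetch /.well-known/agent.json."""
    return parse_version(live_version) > parse_version(cached_version)

stale = card_is_stale("1.0.0", "1.1.0")
```

<p>A client can run this check opportunistically — say, on each delegation failure — rather than re-fetching the card on every request.</p>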
<h2 id="real-world-use-cases-for-interoperable-agents">Real-World Use Cases for Interoperable Agents</h2>
<p>A2A interoperability unlocks composite AI architectures that weren&rsquo;t practical before a common standard existed. The most valuable real-world patterns in 2026 fall into three categories: cross-team agent composition, cross-vendor delegation, and capability specialization. In enterprise settings, different teams often own different AI systems — the data team has LangGraph pipelines, the product team has CrewAI crews, and a vendor provides a specialized analysis agent. A2A lets all three participate in a single workflow without requiring any team to rewrite their agent stack. A cross-team research pipeline, for example, might have a CrewAI orchestrator crew dispatching subtasks to a LangGraph data retrieval graph and a third-party A2A-compliant summarization service, then assembling the results into a final report. Each component is independently deployed, versioned, and scaled. Cross-vendor delegation covers scenarios like routing complex financial analysis to a specialized vendor agent while keeping sensitive data processing on your own infrastructure — A2A&rsquo;s standardized auth makes this auditable and controllable. Capability specialization is the most granular pattern: one agent per specialized skill (web search, code execution, database query), all exposed as A2A servers, composed by a lightweight orchestrator that routes tasks to the right specialist. This &ldquo;skill bus&rdquo; architecture scales well because you add new capabilities by deploying a new A2A server — no changes to the orchestrator.</p>
<p>Concrete examples already running in production:</p>
<ul>
<li><strong>E-commerce fulfillment</strong>: CrewAI order management crew + LangGraph inventory agent + vendor shipping A2A service</li>
<li><strong>Legal document review</strong>: CrewAI case coordinator + specialized contract parsing A2A agent + compliance checker</li>
<li><strong>Software delivery</strong>: CrewAI project manager + GitHub Copilot Workspace A2A endpoint + test runner agent</li>
</ul>
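<p>The &ldquo;skill bus&rdquo; pattern described above reduces the orchestrator to little more than a routing table. A toy sketch — the skill names and endpoints are made up — to show how thin that routing layer can be:</p>

```python
# Hypothetical registry: skill name -> base URL of the A2A server exposing it.
SKILL_BUS = {
    "web_search": "https://search-agent.internal",
    "code_exec": "https://sandbox-agent.internal",
    "db_query": "https://db-agent.internal",
}

def route_task(skill: str) -> str:
    """Pick the A2A server responsible for a skill. Adding a capability means
    adding a registry entry (deploying a new server) — this function never changes."""
    try:
        return SKILL_BUS[skill]
    except KeyError:
        raise ValueError(f"no A2A server registered for skill {skill!r}")

endpoint = route_task("db_query")
```

<p>In practice the registry would be populated by fetching each server's agent card rather than hard-coded, but the orchestrator-side logic stays this small.</p>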
<h2 id="faq">FAQ</h2>
<p><strong>What Python version does CrewAI A2A support?</strong>
CrewAI with A2A support requires Python 3.10 or higher. Python 3.12 is recommended for production deployments because it includes performance improvements that reduce latency for SSE streaming in high-throughput agent pipelines.</p>
<p><strong>Can I use CrewAI as an A2A server, not just a client?</strong>
Yes. CrewAI 0.80+ includes an <code>A2AServer</code> class that wraps your crew and exposes it as an A2A-compliant HTTP endpoint, including the <code>/.well-known/agent.json</code> discovery file. Any A2A-compliant client — LangGraph, Semantic Kernel, or a custom agent — can then delegate tasks to your crew.</p>
<p><strong>Is A2A the same as MCP (Model Context Protocol)?</strong>
No. MCP (developed by Anthropic) standardizes how agents access tools and resources. A2A standardizes how agents communicate with other agents. They are complementary: an agent might use MCP to access a database tool and A2A to delegate a subtask to another agent. Many frameworks, including CrewAI, support both.</p>
<p><strong>How does A2A handle long-running tasks that take minutes to complete?</strong>
A2A defines a task lifecycle (<code>submitted</code> → <code>working</code> → <code>completed</code>) with optional server-sent events for streaming intermediate updates. For tasks that take more than a few seconds, configure your <code>A2AClientConfig</code> with a <code>streaming=True</code> flag and handle the event stream in your crew&rsquo;s task callback. This prevents HTTP timeouts and gives users incremental progress feedback.</p>
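<p>Under the hood those intermediate updates arrive as server-sent events: newline-delimited <code>data:</code> lines separated by blank lines. A framework-free sketch of the parsing a streaming client performs (the payload shape here is illustrative, not the spec's exact schema):</p>

```python
import json

def parse_sse(stream: str) -> list[dict]:
    """Parse an SSE body into JSON payloads; events are separated by blank lines."""
    events = []
    for block in stream.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                events.append(json.loads(line[len("data:"):].strip()))
    return events

raw = (
    'data: {"status": "working", "progress": "retrieving sources"}\n\n'
    'data: {"status": "completed", "result": "done"}\n\n'
)
updates = parse_sse(raw)
```
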
<p><strong>What happens when an A2A server agent is unavailable?</strong>
CrewAI surfaces an <code>A2AConnectionError</code> or <code>A2ATaskError</code> depending on whether the failure is at the connection or task execution layer. Wrap your crew&rsquo;s <code>kickoff()</code> in a try/except block and implement retry logic with exponential backoff. For production systems, add a circuit breaker — if the remote agent fails more than N times in a window, stop sending requests and alert the on-call engineer.</p>
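<p>The retry-plus-circuit-breaker advice can be sketched without any framework code. The exception names in the answer above are CrewAI's; the control flow below is generic and uses <code>ConnectionError</code> as a stand-in:</p>

```python
import time

class CircuitBreaker:
    """Stop calling a remote A2A agent after max_failures consecutive errors."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def call(self, fn, retries: int = 3, base_delay: float = 0.01):
        if self.open:
            # Fail fast instead of hammering an unavailable agent.
            raise RuntimeError("circuit open: remote agent marked unavailable")
        for attempt in range(retries):
            try:
                result = fn()
                self.failures = 0  # any success resets the failure window
                return result
            except ConnectionError:
                self.failures += 1
                if self.open or attempt == retries - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff

breaker = CircuitBreaker(max_failures=3)
result = breaker.call(lambda: "report text")  # a healthy call just returns
```

<p>In a real deployment, <code>fn</code> would wrap the crew's <code>kickoff()</code>, and tripping the breaker would also fire an alert to the on-call engineer.</p>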
]]></content:encoded></item></channel></rss>