<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Vertex AI on RockB</title><link>https://baeseokjae.github.io/tags/vertex-ai/</link><description>Recent content in Vertex AI on RockB</description><image><title>RockB</title><url>https://baeseokjae.github.io/images/og-default.png</url><link>https://baeseokjae.github.io/images/og-default.png</link></image><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sat, 09 May 2026 18:04:05 +0000</lastBuildDate><atom:link href="https://baeseokjae.github.io/tags/vertex-ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Google ADK Tutorial: Build Multi-Agent Systems with Python (2026)</title><link>https://baeseokjae.github.io/posts/google-adk-python-tutorial-2026/</link><pubDate>Sat, 09 May 2026 18:04:05 +0000</pubDate><guid>https://baeseokjae.github.io/posts/google-adk-python-tutorial-2026/</guid><description>Step-by-step Google ADK tutorial: install google-adk, build LlmAgent pipelines, run parallel agents, and deploy to Vertex AI in 2026.</description><content:encoded><![CDATA[<p>Google ADK (Agent Development Kit) lets you build a working multi-agent Python system in under 30 minutes — with LlmAgent for reasoning, SequentialAgent and ParallelAgent for orchestration, and a built-in dev UI for debugging. This tutorial walks you from zero to a deployed multi-agent pipeline.</p>
<h2 id="what-is-google-adk-and-why-it-matters-in-2026">What Is Google ADK and Why It Matters in 2026</h2>
<p>Google ADK (Agent Development Kit) is an open-source, code-first Python framework released by Google at Cloud Next 2025 for building, orchestrating, and deploying AI agents. Unlike drag-and-drop tools, ADK is built for developers who want full control over agent logic, tool integration, and multi-agent coordination. ADK is optimized for Gemini models but is genuinely model-agnostic through LiteLLM integration, meaning you can run the same agent code against GPT-4, Claude, or any OpenAI-compatible endpoint. The framework reached stable v1.0.0 in May 2025, and ADK Python 2.0 Beta with agent teams and advanced workflows shipped in early 2026. With 13 million developers already building on Google&rsquo;s generative models and Gemini API active developers up 118% year-over-year as of Q3 2025, ADK has become the default path for Google Cloud-native agent development. The AI agents market itself hit USD 7.63 billion in 2025 and is projected to grow at 49.6% CAGR through 2033 — choosing the right framework now has long-term career implications.</p>
<p>The three reasons to pick ADK over LangGraph or CrewAI in 2026: first, native Vertex AI Agent Engine deployment with one command; second, the A2A (Agent-to-Agent) protocol for cross-framework orchestration; third, a built-in dev web UI that shows you exactly what each agent said, which tools it called, and what the session state looks like at every step.</p>
<h2 id="prerequisites-and-installation-pip-install-google-adk">Prerequisites and Installation (pip install google-adk)</h2>
<p>Google ADK requires Python 3.10 or higher and a Google API key or Vertex AI credentials. For local development, a free Gemini API key from Google AI Studio is sufficient — no billing account required. For production Vertex AI deployment, you&rsquo;ll need a Google Cloud project with the Vertex AI API enabled. Install ADK with a single pip command:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>pip install google-adk
</span></span></code></pre></div><p>For the full tutorial, also install optional dependencies:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>pip install google-adk<span style="color:#f92672">[</span>vertexai<span style="color:#f92672">]</span> litellm python-dotenv
</span></span></code></pre></div><p>Create a project directory and set up your environment:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>mkdir my-adk-project <span style="color:#f92672">&amp;&amp;</span> cd my-adk-project
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;GOOGLE_API_KEY=your_key_here&#34;</span> &gt; .env
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;GOOGLE_GENAI_USE_VERTEXAI=FALSE&#34;</span> &gt;&gt; .env
</span></span></code></pre></div><p>ADK projects follow a package convention: each agent lives in its own Python package whose <code>agent.py</code> module defines a <code>root_agent</code> variable, re-exported from <code>__init__.py</code>. This structure is not optional: the <code>adk run</code> and <code>adk web</code> CLI commands discover agents by looking for this pattern. A minimal project looks like:</p>



<div class="goat svg-container ">
  
    <svg
      xmlns="http://www.w3.org/2000/svg"
      font-family="Menlo,Lucida Console,monospace"
      
        viewBox="0 0 128 105"
      >
      <g transform='translate(8,16)'>
<path d='M 32,56 L 40,56' fill='none' stroke='currentColor'></path>
<text text-anchor='middle' x='0' y='4' fill='currentColor' style='font-size:1em'>m</text>
<text text-anchor='middle' x='8' y='4' fill='currentColor' style='font-size:1em'>y</text>
<text text-anchor='middle' x='16' y='4' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='16' y='20' fill='currentColor' style='font-size:1em'>.</text>
<text text-anchor='middle' x='16' y='36' fill='currentColor' style='font-size:1em'>m</text>
<text text-anchor='middle' x='24' y='4' fill='currentColor' style='font-size:1em'>a</text>
<text text-anchor='middle' x='24' y='20' fill='currentColor' style='font-size:1em'>e</text>
<text text-anchor='middle' x='24' y='36' fill='currentColor' style='font-size:1em'>y</text>
<text text-anchor='middle' x='32' y='4' fill='currentColor' style='font-size:1em'>d</text>
<text text-anchor='middle' x='32' y='20' fill='currentColor' style='font-size:1em'>n</text>
<text text-anchor='middle' x='32' y='36' fill='currentColor' style='font-size:1em'>_</text>
<text text-anchor='middle' x='32' y='68' fill='currentColor' style='font-size:1em'>a</text>
<text text-anchor='middle' x='32' y='84' fill='currentColor' style='font-size:1em'>t</text>
<text text-anchor='middle' x='40' y='4' fill='currentColor' style='font-size:1em'>k</text>
<text text-anchor='middle' x='40' y='20' fill='currentColor' style='font-size:1em'>v</text>
<text text-anchor='middle' x='40' y='36' fill='currentColor' style='font-size:1em'>a</text>
<text text-anchor='middle' x='40' y='52' fill='currentColor' style='font-size:1em'>_</text>
<text text-anchor='middle' x='40' y='68' fill='currentColor' style='font-size:1em'>g</text>
<text text-anchor='middle' x='40' y='84' fill='currentColor' style='font-size:1em'>o</text>
<text text-anchor='middle' x='48' y='4' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='48' y='36' fill='currentColor' style='font-size:1em'>g</text>
<text text-anchor='middle' x='48' y='52' fill='currentColor' style='font-size:1em'>i</text>
<text text-anchor='middle' x='48' y='68' fill='currentColor' style='font-size:1em'>e</text>
<text text-anchor='middle' x='48' y='84' fill='currentColor' style='font-size:1em'>o</text>
<text text-anchor='middle' x='56' y='4' fill='currentColor' style='font-size:1em'>p</text>
<text text-anchor='middle' x='56' y='36' fill='currentColor' style='font-size:1em'>e</text>
<text text-anchor='middle' x='56' y='52' fill='currentColor' style='font-size:1em'>n</text>
<text text-anchor='middle' x='56' y='68' fill='currentColor' style='font-size:1em'>n</text>
<text text-anchor='middle' x='56' y='84' fill='currentColor' style='font-size:1em'>l</text>
<text text-anchor='middle' x='64' y='4' fill='currentColor' style='font-size:1em'>r</text>
<text text-anchor='middle' x='64' y='36' fill='currentColor' style='font-size:1em'>n</text>
<text text-anchor='middle' x='64' y='52' fill='currentColor' style='font-size:1em'>i</text>
<text text-anchor='middle' x='64' y='68' fill='currentColor' style='font-size:1em'>t</text>
<text text-anchor='middle' x='64' y='84' fill='currentColor' style='font-size:1em'>s</text>
<text text-anchor='middle' x='72' y='4' fill='currentColor' style='font-size:1em'>o</text>
<text text-anchor='middle' x='72' y='36' fill='currentColor' style='font-size:1em'>t</text>
<text text-anchor='middle' x='72' y='52' fill='currentColor' style='font-size:1em'>t</text>
<text text-anchor='middle' x='72' y='68' fill='currentColor' style='font-size:1em'>.</text>
<text text-anchor='middle' x='72' y='84' fill='currentColor' style='font-size:1em'>.</text>
<text text-anchor='middle' x='80' y='4' fill='currentColor' style='font-size:1em'>j</text>
<text text-anchor='middle' x='80' y='36' fill='currentColor' style='font-size:1em'>/</text>
<text text-anchor='middle' x='80' y='52' fill='currentColor' style='font-size:1em'>_</text>
<text text-anchor='middle' x='80' y='68' fill='currentColor' style='font-size:1em'>p</text>
<text text-anchor='middle' x='80' y='84' fill='currentColor' style='font-size:1em'>p</text>
<text text-anchor='middle' x='88' y='4' fill='currentColor' style='font-size:1em'>e</text>
<text text-anchor='middle' x='88' y='52' fill='currentColor' style='font-size:1em'>_</text>
<text text-anchor='middle' x='88' y='68' fill='currentColor' style='font-size:1em'>y</text>
<text text-anchor='middle' x='88' y='84' fill='currentColor' style='font-size:1em'>y</text>
<text text-anchor='middle' x='96' y='4' fill='currentColor' style='font-size:1em'>c</text>
<text text-anchor='middle' x='96' y='52' fill='currentColor' style='font-size:1em'>.</text>
<text text-anchor='middle' x='104' y='4' fill='currentColor' style='font-size:1em'>t</text>
<text text-anchor='middle' x='104' y='52' fill='currentColor' style='font-size:1em'>p</text>
<text text-anchor='middle' x='112' y='4' fill='currentColor' style='font-size:1em'>/</text>
<text text-anchor='middle' x='112' y='52' fill='currentColor' style='font-size:1em'>y</text>
</g>

    </svg>
  
</div>
<p>Verify installation with <code>adk --version</code>. If you see <code>google-adk 1.x.x</code> or later, you&rsquo;re ready.</p>
<h2 id="core-concepts-llmagent-sequentialagent-parallelagent-and-custom-agents">Core Concepts: LlmAgent, SequentialAgent, ParallelAgent, and Custom Agents</h2>
<p>Google ADK organizes agent logic into four primitive types that compose into hierarchical agent trees. The LlmAgent (also aliased as <code>Agent</code>) is the reasoning primitive — it receives a system prompt, a set of tools, and a Gemini model name, then autonomously decides which tools to call and what to return. The SequentialAgent is a workflow primitive: it runs a list of child agents one after another, passing session state between them. The ParallelAgent runs child agents concurrently, collecting results via separate output keys to avoid write conflicts. Custom agents extend <code>BaseAgent</code> directly and let you implement arbitrary logic — database lookups, if/else branching, loops — without an LLM in the loop.</p>
<p>The key architectural insight is that these types compose freely. A SequentialAgent can contain LlmAgents and ParallelAgents. A ParallelAgent&rsquo;s children can themselves be mini-pipelines with their own SequentialAgent structure. This hierarchical tree is how ADK scales from a single-agent chatbot to a 20-agent enterprise pipeline without changing the core pattern.</p>
<p>All state flows through <code>session.state</code>, a dictionary attached to each conversation session. An agent writes to <code>session.state[&quot;key&quot;]</code> via the <code>output_key</code> field or tool return values; downstream agents read from it. This shared-whiteboard design means agents don&rsquo;t need direct references to each other — they communicate through state, which is inspectable, serializable, and debuggable.</p>
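<p>The whiteboard pattern is easy to see in plain Python. The sketch below is an illustration of the idea, not ADK code: each step is a function that reads earlier keys from one shared dict and writes its own, just as agents read and write <code>session.state</code> keys:</p>

```python
# Plain-Python sketch of the shared-whiteboard pattern (not ADK code).
# Each "agent" reads earlier keys from the state dict and writes its own.

def analyzer(state: dict) -> None:
    # Reads input seeded by the caller, writes under its output key.
    state["analysis"] = f"analysis of {state['file_path']}"

def reporter(state: dict) -> None:
    # Reads the upstream key; no direct reference to analyzer needed.
    state["final_report"] = f"report based on {state['analysis']}"

def run_sequential(steps, state: dict) -> dict:
    for step in steps:  # SequentialAgent analogue: strict ordering
        step(state)
    return state

state = run_sequential([analyzer, reporter], {"file_path": "my_module.py"})
print(state["final_report"])  # report based on analysis of my_module.py
```

<p>ADK's real implementation adds events, sessions, and LLM calls, but the data-flow contract between agents is the same dictionary hand-off.</p>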
<h2 id="building-your-first-agent-a-simple-llmagent-with-custom-tools">Building Your First Agent: A Simple LlmAgent with Custom Tools</h2>
<p>An LlmAgent with custom tools is the starting point for every ADK project. Below is a complete working example of a code analysis agent that uses two Python functions as tools:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># my_agent/tools.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> subprocess
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">run_pylint</span>(file_path: str) <span style="color:#f92672">-&gt;</span> str:
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Run pylint on a Python file and return the output.&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    result <span style="color:#f92672">=</span> subprocess<span style="color:#f92672">.</span>run(
</span></span><span style="display:flex;"><span>        [<span style="color:#e6db74">&#34;python&#34;</span>, <span style="color:#e6db74">&#34;-m&#34;</span>, <span style="color:#e6db74">&#34;pylint&#34;</span>, file_path, <span style="color:#e6db74">&#34;--output-format=text&#34;</span>],
</span></span><span style="display:flex;"><span>        capture_output<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>, text<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>, timeout<span style="color:#f92672">=</span><span style="color:#ae81ff">30</span>
</span></span><span style="display:flex;"><span>    )
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> result<span style="color:#f92672">.</span>stdout <span style="color:#f92672">or</span> result<span style="color:#f92672">.</span>stderr
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">read_file</span>(file_path: str) <span style="color:#f92672">-&gt;</span> str:
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Read a file and return its contents.&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">with</span> open(file_path) <span style="color:#66d9ef">as</span> f:
</span></span><span style="display:flex;"><span>            <span style="color:#66d9ef">return</span> f<span style="color:#f92672">.</span>read()
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">FileNotFoundError</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> <span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;File not found: </span><span style="color:#e6db74">{</span>file_path<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># my_agent/agent.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.agents <span style="color:#f92672">import</span> LlmAgent
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> .tools <span style="color:#f92672">import</span> run_pylint, read_file
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>root_agent <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;code_reviewer&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Reviews Python code for quality and style issues&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&#34;&#34;You are a senior Python developer doing code review.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    When given a file path, read the file and run pylint on it.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Summarize the issues found, grouping them by severity.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Always end with an overall quality score from 1-10.&#34;&#34;&#34;</span>,
</span></span><span style="display:flex;"><span>    tools<span style="color:#f92672">=</span>[read_file, run_pylint],
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># my_agent/__init__.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> .agent <span style="color:#f92672">import</span> root_agent
</span></span></code></pre></div><p>ADK automatically converts plain Python functions into tools — it reads the docstring as the tool description and the type annotations as parameter schemas. This means no decorator boilerplate. Run the agent interactively with <code>adk run my_agent</code> or open the dev UI with <code>adk web</code> to test it visually.</p>
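<p>You can see what ADK has to work with using standard introspection. The sketch below mirrors the idea rather than ADK's actual implementation: it derives the same two pieces of information (docstring and parameter types) that ADK turns into a tool description and schema:</p>

```python
# Rough illustration of the metadata ADK reads from a plain function
# (docstring -> tool description, type hints -> parameter schema).
# This mirrors the idea, not ADK's internal code.
import inspect
from typing import get_type_hints

def read_file(file_path: str) -> str:
    """Read a file and return its contents."""
    ...

description = inspect.getdoc(read_file)
params = {
    name: hint.__name__
    for name, hint in get_type_hints(read_file).items()
    if name != "return"  # the return annotation is not a parameter
}

print(description)  # Read a file and return its contents.
print(params)       # {'file_path': 'str'}
```

<p>This is also why docstrings and type annotations are mandatory in practice: a tool without them gives the model nothing to reason about.</p>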
<h2 id="building-a-multi-agent-pipeline-sequential-workflow-example">Building a Multi-Agent Pipeline: Sequential Workflow Example</h2>
<p>A SequentialAgent chains multiple specialized agents into a pipeline where each agent&rsquo;s output becomes the next agent&rsquo;s input. This pattern is ideal for code review → test generation → documentation workflows. Here&rsquo;s a three-agent code review pipeline:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># pipeline/agent.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.agents <span style="color:#f92672">import</span> LlmAgent, SequentialAgent
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Agent 1: reads and analyzes code quality</span>
</span></span><span style="display:flex;"><span>analyzer <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;code_analyzer&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Analyzes Python code and extracts quality metrics&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&#34;&#34;Read the file at the path in session state key &#39;file_path&#39;.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Analyze it for: complexity, naming, documentation, error handling.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Store your structured analysis in output_key=&#39;analysis&#39;.&#34;&#34;&#34;</span>,
</span></span><span style="display:flex;"><span>    output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;analysis&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Agent 2: generates improvement suggestions based on analysis</span>
</span></span><span style="display:flex;"><span>suggester <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;improvement_suggester&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Generates specific improvement suggestions from code analysis&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&#34;&#34;Read the code analysis from session state key &#39;analysis&#39;.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Generate 5 specific, actionable improvement suggestions with code examples.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Store suggestions in output_key=&#39;suggestions&#39;.&#34;&#34;&#34;</span>,
</span></span><span style="display:flex;"><span>    output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;suggestions&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Agent 3: writes the final review report</span>
</span></span><span style="display:flex;"><span>reporter <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;report_writer&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Writes a structured code review report&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&#34;&#34;Using &#39;analysis&#39; and &#39;suggestions&#39; from session state,
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    write a professional code review report in Markdown format.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Include: executive summary, detailed findings, and action items.&#34;&#34;&#34;</span>,
</span></span><span style="display:flex;"><span>    output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;final_report&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>root_agent <span style="color:#f92672">=</span> SequentialAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;code_review_pipeline&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Complete code review pipeline: analyze, suggest, report&#34;</span>,
</span></span><span style="display:flex;"><span>    sub_agents<span style="color:#f92672">=</span>[analyzer, suggester, reporter],
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p>The <code>output_key</code> field is the key mechanism here: each LlmAgent writes its response text into <code>session.state[output_key]</code> automatically. Downstream agents reference these keys in their instruction prompts. The SequentialAgent guarantees execution order — analyzer finishes before suggester starts, suggester finishes before reporter starts.</p>
<p>To trigger the pipeline, pass the initial state when starting a session:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.runners <span style="color:#f92672">import</span> Runner
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.sessions <span style="color:#f92672">import</span> InMemorySessionService
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> google.genai.types <span style="color:#f92672">import</span> Content, Part
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>session_service <span style="color:#f92672">=</span> InMemorySessionService()
</span></span><span style="display:flex;"><span>runner <span style="color:#f92672">=</span> Runner(agent<span style="color:#f92672">=</span>root_agent, app_name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;code_review&#34;</span>, session_service<span style="color:#f92672">=</span>session_service)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>session <span style="color:#f92672">=</span> session_service<span style="color:#f92672">.</span>create_session(
</span></span><span style="display:flex;"><span>    app_name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;code_review&#34;</span>,
</span></span><span style="display:flex;"><span>    user_id<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;dev_user&#34;</span>,
</span></span><span style="display:flex;"><span>    state<span style="color:#f92672">=</span>{<span style="color:#e6db74">&#34;file_path&#34;</span>: <span style="color:#e6db74">&#34;my_module.py&#34;</span>}
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> event <span style="color:#f92672">in</span> runner<span style="color:#f92672">.</span>run(
</span></span><span style="display:flex;"><span>    user_id<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;dev_user&#34;</span>,
</span></span><span style="display:flex;"><span>    session_id<span style="color:#f92672">=</span>session<span style="color:#f92672">.</span>id,
</span></span><span style="display:flex;"><span>    new_message<span style="color:#f92672">=</span>Content(parts<span style="color:#f92672">=</span>[Part(text<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Review the file.&#34;</span>)])
</span></span><span style="display:flex;"><span>):
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> event<span style="color:#f92672">.</span>is_final_response():
</span></span><span style="display:flex;"><span>        print(event<span style="color:#f92672">.</span>content<span style="color:#f92672">.</span>parts[<span style="color:#ae81ff">0</span>]<span style="color:#f92672">.</span>text)
</span></span></code></pre></div><h2 id="running-agents-in-parallel-speeding-up-complex-workflows">Running Agents in Parallel: Speeding Up Complex Workflows</h2>
<p>ParallelAgent runs multiple child agents concurrently, which dramatically reduces latency for independent subtasks. The canonical use case is a research pipeline where you want to gather information from multiple sources simultaneously rather than waiting for each one sequentially. The critical constraint: each parallel child agent must write to a unique <code>output_key</code> — writing to the same key causes a race condition where the last-finishing agent overwrites earlier results.</p>
<p>Here&rsquo;s a parallel research agent that simultaneously searches documentation, checks GitHub issues, and scans blog posts:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.agents <span style="color:#f92672">import</span> LlmAgent, ParallelAgent, SequentialAgent
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># These three agents run concurrently</span>
</span></span><span style="display:flex;"><span>docs_researcher <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;docs_researcher&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Searches official documentation&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Search the official ADK documentation for information about: </span><span style="color:#e6db74">{topic}</span><span style="color:#e6db74">. &#34;</span>
</span></span><span style="display:flex;"><span>                <span style="color:#e6db74">&#34;Return key facts and code examples. Store in output_key=&#39;docs_findings&#39;.&#34;</span>,
</span></span><span style="display:flex;"><span>    output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;docs_findings&#34;</span>,
</span></span><span style="display:flex;"><span>    tools<span style="color:#f92672">=</span>[search_docs_tool],  <span style="color:#75715e"># your search tool</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>issues_researcher <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;issues_researcher&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Searches GitHub issues for known problems and solutions&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Search GitHub issues for: </span><span style="color:#e6db74">{topic}</span><span style="color:#e6db74">. &#34;</span>
</span></span><span style="display:flex;"><span>                <span style="color:#e6db74">&#34;Return relevant issues and their solutions. Store in output_key=&#39;issues_findings&#39;.&#34;</span>,
</span></span><span style="display:flex;"><span>    output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;issues_findings&#34;</span>,
</span></span><span style="display:flex;"><span>    tools<span style="color:#f92672">=</span>[search_github_tool],
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>blog_researcher <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;blog_researcher&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Searches developer blog posts and tutorials&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Find blog posts and tutorials about: </span><span style="color:#e6db74">{topic}</span><span style="color:#e6db74">. &#34;</span>
</span></span><span style="display:flex;"><span>                <span style="color:#e6db74">&#34;Summarize key insights. Store in output_key=&#39;blog_findings&#39;.&#34;</span>,
</span></span><span style="display:flex;"><span>    output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;blog_findings&#34;</span>,
</span></span><span style="display:flex;"><span>    tools<span style="color:#f92672">=</span>[search_web_tool],
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># ParallelAgent runs all three simultaneously</span>
</span></span><span style="display:flex;"><span>parallel_research <span style="color:#f92672">=</span> ParallelAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;parallel_researcher&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Gathers information from multiple sources concurrently&#34;</span>,
</span></span><span style="display:flex;"><span>    sub_agents<span style="color:#f92672">=</span>[docs_researcher, issues_researcher, blog_researcher],
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># SequentialAgent wraps the parallel step, then synthesizes</span>
</span></span><span style="display:flex;"><span>synthesizer <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;synthesizer&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Synthesizes research from all sources into a final answer&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&#34;&#34;Synthesize the research from these session state keys:
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    - docs_findings: official documentation results
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    - issues_findings: GitHub issues findings
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    - blog_findings: blog post findings
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Write a comprehensive, well-sourced answer.&#34;&#34;&#34;</span>,
</span></span><span style="display:flex;"><span>    output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;final_answer&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>root_agent <span style="color:#f92672">=</span> SequentialAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;research_pipeline&#34;</span>,
</span></span><span style="display:flex;"><span>    sub_agents<span style="color:#f92672">=</span>[parallel_research, synthesizer],
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p>In practice, parallel ADK agents cut latency to the duration of the slowest subtask rather than the sum of all subtasks. A three-way parallel research step that takes 8 seconds total beats its 24-second sequential equivalent.</p>
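<p>The max-versus-sum behavior is easy to verify with plain <code>asyncio</code>; this sketch needs no ADK, and the subtask names and durations are purely illustrative:</p>

```python
import asyncio
import time

async def subtask(name: str, seconds: float) -> str:
    # Stand-in for one researcher agent's model and tool calls.
    await asyncio.sleep(seconds)
    return name

async def research() -> float:
    start = time.perf_counter()
    # Like ParallelAgent: all three run concurrently, so wall time
    # tracks the slowest subtask (0.15s), not the sum (0.30s).
    await asyncio.gather(
        subtask("docs", 0.05),
        subtask("issues", 0.10),
        subtask("blogs", 0.15),
    )
    return time.perf_counter() - start

elapsed = asyncio.run(research())
print(f"parallel wall time: {elapsed:.2f}s")  # ~0.15s
```

<p>Swap <code>asyncio.gather</code> for a sequential loop and the wall time roughly triples, which is exactly the gap the 8-second versus 24-second comparison above describes.</p>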
<h2 id="agent-communication-session-state-and-the-shared-whiteboard-pattern">Agent Communication: Session State and the Shared Whiteboard Pattern</h2>
<p>Session state is the backbone of multi-agent communication in ADK. It functions as a shared whiteboard: any agent can read from or write to it, it persists for the lifetime of a session, and the ADK web UI shows you its full contents at every step. Understanding how state flows prevents the most common multi-agent bugs — race conditions, overwritten keys, and agents that reference state keys that haven&rsquo;t been populated yet.</p>
<p>The three ways agents interact with session state are: automatic output key writing (when you set <code>output_key=&quot;key&quot;</code>, the agent&rsquo;s final response text is stored there automatically); tool return values (tools can return dictionaries that get merged into state); and explicit state manipulation in custom agents (by accessing <code>ctx.session.state</code> directly in <code>BaseAgent</code> implementations).</p>
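<p>Because session state is at heart a per-session dictionary, the three write paths can be modeled without ADK. In this framework-free sketch, <code>run_agent</code> and <code>run_tool</code> are illustrative stand-ins, not ADK API:</p>

```python
# Shared whiteboard: one dict per session, readable and writable by every agent.
state: dict = {}

def run_agent(output_key: str, response: str) -> None:
    # Path 1: with output_key set, the agent's final response text
    # is stored under that key automatically.
    state[output_key] = response

def run_tool() -> dict:
    # Path 2: a tool returns a dict that is merged into state.
    return {"docs_url_count": 12}

run_agent("docs_findings", "LlmAgent wraps a model call plus tools.")
state.update(run_tool())

# Path 3: a custom BaseAgent touches ctx.session.state directly;
# here that is just another dict write.
state["final_answer"] = f"Summary of {len(state)} findings."

print(sorted(state))  # ['docs_findings', 'docs_url_count', 'final_answer']
```

<p>The live state panel in the dev UI is showing you exactly this dictionary after each write.</p>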
<p>A common mistake is having two ParallelAgent children write to the same key. The fix is to always use distinct output keys, then have a downstream agent merge them:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># WRONG: race condition</span>
</span></span><span style="display:flex;"><span>agent_a <span style="color:#f92672">=</span> LlmAgent(name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;a&#34;</span>, output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;result&#34;</span>, <span style="color:#f92672">...</span>)
</span></span><span style="display:flex;"><span>agent_b <span style="color:#f92672">=</span> LlmAgent(name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;b&#34;</span>, output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;result&#34;</span>, <span style="color:#f92672">...</span>)  <span style="color:#75715e"># overwrites agent_a!</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># CORRECT: unique keys, merged downstream</span>
</span></span><span style="display:flex;"><span>agent_a <span style="color:#f92672">=</span> LlmAgent(name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;a&#34;</span>, output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;result_a&#34;</span>, <span style="color:#f92672">...</span>)
</span></span><span style="display:flex;"><span>agent_b <span style="color:#f92672">=</span> LlmAgent(name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;b&#34;</span>, output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;result_b&#34;</span>, <span style="color:#f92672">...</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>merger <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;merger&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Combine &#39;result_a&#39; and &#39;result_b&#39; from session state into a unified output.&#34;</span>,
</span></span><span style="display:flex;"><span>    output_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;result&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p>For complex pipelines, prefix output keys with the agent name: <code>&quot;analyzer_summary&quot;</code>, <code>&quot;validator_report&quot;</code>, <code>&quot;formatter_output&quot;</code>. This makes state inspection in the dev UI immediately readable and prevents accidental collisions across pipeline stages.</p>
<h2 id="integrating-a2a-protocol-for-cross-framework-agent-communication">Integrating A2A Protocol for Cross-Framework Agent Communication</h2>
<p>The Agent-to-Agent (A2A) protocol is ADK&rsquo;s answer to framework fragmentation: a standard HTTP-based protocol for agents built in different frameworks — ADK, LangGraph, CrewAI, AutoGen — to delegate tasks to each other. An ADK agent can call a LangGraph agent as if it were a local tool, and vice versa. A2A was introduced alongside ADK at Google Cloud Next 2025 and is now supported by over 50 enterprise software vendors including Salesforce, SAP, and Atlassian.</p>
<p>To expose an ADK agent as an A2A server, wrap it in an <code>A2AServer</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.a2a <span style="color:#f92672">import</span> A2AServer
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>server <span style="color:#f92672">=</span> A2AServer(agent<span style="color:#f92672">=</span>root_agent, port<span style="color:#f92672">=</span><span style="color:#ae81ff">8080</span>)
</span></span><span style="display:flex;"><span>server<span style="color:#f92672">.</span>start()
</span></span></code></pre></div><p>To call a remote A2A agent from within another ADK agent, use the <code>A2AClient</code> as a tool:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.a2a <span style="color:#f92672">import</span> A2AClient
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>remote_agent_tool <span style="color:#f92672">=</span> A2AClient(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;remote_validator&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Validates code against company style guidelines using the remote validator agent&#34;</span>,
</span></span><span style="display:flex;"><span>    url<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;http://validator-service:8080&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>orchestrator <span style="color:#f92672">=</span> LlmAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;orchestrator&#34;</span>,
</span></span><span style="display:flex;"><span>    model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gemini-2.0-flash&#34;</span>,
</span></span><span style="display:flex;"><span>    instruction<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Analyze the code, then use the remote_validator tool to check style compliance.&#34;</span>,
</span></span><span style="display:flex;"><span>    tools<span style="color:#f92672">=</span>[remote_agent_tool],
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p>The A2A protocol handles authentication, streaming, and error propagation transparently. From the orchestrator&rsquo;s perspective, the remote agent looks identical to a local Python function tool. This is the key architectural benefit: you can start with all agents local, then split specific high-load agents into microservices without changing any orchestration code.</p>
<h2 id="testing-and-debugging-with-adks-built-in-dev-ui">Testing and Debugging with ADK&rsquo;s Built-in Dev UI</h2>
<p>ADK ships with a local web UI that is the fastest way to test and debug multi-agent systems. Run <code>adk web</code> in your project directory and open <code>http://localhost:8000</code> in a browser. The UI shows real-time agent execution: which agent is active, what messages it sent to the model, which tools were called with what arguments, and the complete session state at every step.</p>
<p>The dev UI is far more useful than print statements for multi-agent debugging because it shows the agent tree visually. When a SequentialAgent runs, you see each child agent activate in sequence. When a ParallelAgent runs, you see all children fire simultaneously. Session state appears as a live JSON panel that updates after each agent writes its output key.</p>
<p>For automated testing, ADK provides <code>InMemorySessionService</code> and a synchronous <code>Runner</code> that makes unit testing individual agents straightforward:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">import</span> asyncio
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> pytest
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.runners <span style="color:#f92672">import</span> Runner
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> google.adk.sessions <span style="color:#f92672">import</span> InMemorySessionService
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> google.genai.types <span style="color:#f92672">import</span> Content, Part
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> my_agent <span style="color:#f92672">import</span> root_agent
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">test_code_reviewer_returns_score</span>():
</span></span><span style="display:flex;"><span>    session_service <span style="color:#f92672">=</span> InMemorySessionService()
</span></span><span style="display:flex;"><span>    runner <span style="color:#f92672">=</span> Runner(
</span></span><span style="display:flex;"><span>        agent<span style="color:#f92672">=</span>root_agent,
</span></span><span style="display:flex;"><span>        app_name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;test_app&#34;</span>,
</span></span><span style="display:flex;"><span>        session_service<span style="color:#f92672">=</span>session_service
</span></span><span style="display:flex;"><span>    )
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># create_session is async in ADK 1.x, so resolve it synchronously here</span>
</span></span><span style="display:flex;"><span>    session <span style="color:#f92672">=</span> asyncio<span style="color:#f92672">.</span>run(session_service<span style="color:#f92672">.</span>create_session(
</span></span><span style="display:flex;"><span>        app_name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;test_app&#34;</span>, user_id<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;test_user&#34;</span>
</span></span><span style="display:flex;"><span>    ))
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    events <span style="color:#f92672">=</span> list(runner<span style="color:#f92672">.</span>run(
</span></span><span style="display:flex;"><span>        user_id<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;test_user&#34;</span>,
</span></span><span style="display:flex;"><span>        session_id<span style="color:#f92672">=</span>session<span style="color:#f92672">.</span>id,
</span></span><span style="display:flex;"><span>        new_message<span style="color:#f92672">=</span>Content(parts<span style="color:#f92672">=</span>[Part(text<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Review my_module.py&#34;</span>)])
</span></span><span style="display:flex;"><span>    ))
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    final <span style="color:#f92672">=</span> next(e <span style="color:#66d9ef">for</span> e <span style="color:#f92672">in</span> events <span style="color:#66d9ef">if</span> e<span style="color:#f92672">.</span>is_final_response())
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">assert</span> <span style="color:#e6db74">&#34;score&#34;</span> <span style="color:#f92672">in</span> final<span style="color:#f92672">.</span>content<span style="color:#f92672">.</span>parts[<span style="color:#ae81ff">0</span>]<span style="color:#f92672">.</span>text<span style="color:#f92672">.</span>lower()
</span></span></code></pre></div><p>For integration testing multi-agent pipelines, assert on <code>session.state</code> after the run completes — this validates that each agent wrote its expected output key correctly.</p>
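<p>A small helper keeps those state assertions readable in integration tests. The helper below is illustrative rather than part of ADK; it simply checks that every stage populated its output key:</p>

```python
def assert_pipeline_state(state: dict, expected_keys: list[str]) -> None:
    """Fail loudly if any pipeline stage skipped or emptied its output key."""
    missing = [k for k in expected_keys if k not in state]
    assert not missing, f"missing output keys: {missing}"
    empty = [k for k in expected_keys if not state.get(k)]
    assert not empty, f"empty output keys: {empty}"

# After runner.run(...) finishes, fetch the session again and validate the
# whole pipeline in one call (a sample state dict stands in for session.state):
final_state = {
    "docs_findings": "...",
    "issues_findings": "...",
    "blog_findings": "...",
    "final_answer": "...",
}
assert_pipeline_state(
    final_state,
    ["docs_findings", "issues_findings", "blog_findings", "final_answer"],
)
```

<p>A failure message naming the exact missing key tells you immediately which pipeline stage to debug in the dev UI.</p>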
<h2 id="deploying-your-multi-agent-system-to-vertex-ai-agent-engine">Deploying Your Multi-Agent System to Vertex AI Agent Engine</h2>
<p>Vertex AI Agent Engine is the managed runtime for ADK agents in Google Cloud. It handles auto-scaling, session persistence, monitoring, and IAM integration — you ship the agent code; Google manages the infrastructure. The deployment path is a single CLI command after authenticating with Google Cloud:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>gcloud auth application-default login
</span></span><span style="display:flex;"><span>pip install <span style="color:#e6db74">&#34;google-cloud-aiplatform[adk,reasoningengine]&#34;</span>  <span style="color:#75715e"># quote the extras so shells like zsh don&#39;t glob the brackets</span>
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># deploy.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> vertexai
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> vertexai.preview <span style="color:#f92672">import</span> reasoning_engines
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>vertexai<span style="color:#f92672">.</span>init(project<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;your-project-id&#34;</span>, location<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;us-central1&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>app <span style="color:#f92672">=</span> reasoning_engines<span style="color:#f92672">.</span>AdkApp(
</span></span><span style="display:flex;"><span>    agent<span style="color:#f92672">=</span>root_agent,
</span></span><span style="display:flex;"><span>    enable_tracing<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>remote_app <span style="color:#f92672">=</span> reasoning_engines<span style="color:#f92672">.</span>ReasoningEngine<span style="color:#f92672">.</span>create(
</span></span><span style="display:flex;"><span>    app,
</span></span><span style="display:flex;"><span>    requirements<span style="color:#f92672">=</span>[<span style="color:#e6db74">&#34;google-adk&gt;=1.0.0&#34;</span>, <span style="color:#e6db74">&#34;google-cloud-aiplatform&#34;</span>],
</span></span><span style="display:flex;"><span>    display_name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Code Review Multi-Agent System&#34;</span>,
</span></span><span style="display:flex;"><span>    description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Multi-agent pipeline for automated code review&#34;</span>,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>print(<span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;Deployed: </span><span style="color:#e6db74">{</span>remote_app<span style="color:#f92672">.</span>resource_name<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>)
</span></span></code></pre></div><p>Running <code>python deploy.py</code> packages your agent code, uploads it to Vertex AI, and returns a resource name you use for all future invocations. The deployed agent gets a REST endpoint that accepts the same message format as the local runner — no code changes between local dev and production.</p>
<p>For production deployments, enable Cloud Trace integration by setting <code>enable_tracing=True</code> (as above). This gives you Gemini model latency breakdowns, tool call timing, and agent step traces in the Google Cloud Console — critical for diagnosing where pipeline latency comes from at scale.</p>
<h2 id="google-adk-vs-langgraph-vs-crewai-when-to-choose-each">Google ADK vs LangGraph vs CrewAI: When to Choose Each</h2>
<p>Google ADK, LangGraph, and CrewAI serve overlapping but distinct use cases. ADK is the right choice when you&rsquo;re building on Google Cloud, need Vertex AI deployment, or want the A2A protocol for cross-framework orchestration. LangGraph is the right choice when you need fine-grained control over agent execution graphs with custom branching, loops, and human-in-the-loop checkpoints — its graph-based model is more flexible than ADK&rsquo;s tree model for complex conditional workflows. CrewAI is the right choice for rapid prototyping with a role-based team metaphor — its higher-level abstractions are faster to write but less controllable.</p>
<table>
  <thead>
      <tr>
          <th>Dimension</th>
          <th>Google ADK</th>
          <th>LangGraph</th>
          <th>CrewAI</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Primary abstraction</strong></td>
          <td>Agent tree (hierarchical)</td>
          <td>Directed graph</td>
          <td>Role-based crew</td>
      </tr>
      <tr>
          <td><strong>Gemini optimization</strong></td>
          <td>Native</td>
          <td>Via LangChain</td>
          <td>Via LangChain</td>
      </tr>
      <tr>
          <td><strong>Vertex AI deployment</strong></td>
          <td>One command</td>
          <td>Manual</td>
          <td>Manual</td>
      </tr>
      <tr>
          <td><strong>Dev UI built-in</strong></td>
          <td>Yes (<code>adk web</code>)</td>
          <td>No</td>
          <td>No</td>
      </tr>
      <tr>
          <td><strong>A2A protocol</strong></td>
          <td>Native</td>
          <td>Via adapter</td>
          <td>Via adapter</td>
      </tr>
      <tr>
          <td><strong>State management</strong></td>
          <td><code>session.state</code> dict</td>
          <td>State schema + reducers</td>
          <td>Context passing</td>
      </tr>
      <tr>
          <td><strong>Learning curve</strong></td>
          <td>Low (Python functions)</td>
          <td>Medium (graph concepts)</td>
          <td>Low (role prompts)</td>
      </tr>
      <tr>
          <td><strong>Best for</strong></td>
          <td>GCP-native, Gemini, enterprise</td>
          <td>Complex branching, stateful loops</td>
          <td>Quick prototypes, team simulation</td>
      </tr>
  </tbody>
</table>
<p>The practical decision rule: if your agents need to call each other across frameworks or you&rsquo;re deploying to GCP, use ADK. If your workflow has complex conditional branching or cycles, use LangGraph. If you want something running in a weekend, use CrewAI.</p>
<h2 id="real-world-use-cases-code-review-customer-support-and-research-pipelines">Real-World Use Cases: Code Review, Customer Support, and Research Pipelines</h2>
<p>Three production patterns demonstrate where multi-agent ADK systems deliver the most value.</p>
<p><strong>Code Review Pipeline</strong> — The most natural fit for a SequentialAgent: analyzer reads code and extracts metrics, security scanner checks for OWASP vulnerabilities, style checker validates against company conventions, and a report writer synthesizes all findings. Companies like Sourcegraph have reported 40% reduction in human code review time when AI pre-review runs on every PR. The ADK implementation handles this with four LlmAgents in a SequentialAgent, each writing to a distinct output key, with the final report agent reading all three upstream keys.</p>
<p><strong>Customer Support Escalation</strong> — A router LlmAgent classifies incoming tickets by type (billing, technical, account), then routes to a specialized agent for that category. Technical tickets can spawn a ParallelAgent that simultaneously searches the knowledge base, checks system status, and retrieves the customer&rsquo;s history. The specialized agents resolve or escalate based on their findings. ADK&rsquo;s session state persists conversation history, so escalated tickets carry full context to the human agent who picks them up.</p>
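<p>Stripped of the model call, the router pattern is classify-then-dispatch. This plain-Python sketch (the categories, keywords, and handlers are illustrative) shows the control flow that the router LlmAgent implements with a model call instead of keyword matching:</p>

```python
def classify(ticket: str) -> str:
    # Stand-in for the router LlmAgent's classification step.
    text = ticket.lower()
    if "invoice" in text or "charge" in text:
        return "billing"
    if "error" in text or "crash" in text:
        return "technical"
    return "account"

# One handler per category, mirroring one specialized sub-agent each.
HANDLERS = {
    "billing": lambda t: f"billing specialist: {t}",
    "technical": lambda t: f"technical specialist (KB + status + history): {t}",
    "account": lambda t: f"account specialist: {t}",
}

def route(ticket: str) -> str:
    return HANDLERS[classify(ticket)](ticket)

print(route("App crash on login"))  # dispatched to the technical specialist
```

<p>In the ADK version, each handler becomes a sub-agent and the technical branch expands into the ParallelAgent described above, but the dispatch shape is the same.</p>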
<p><strong>Research Summarization Workflow</strong> — A parallel research step fetches from arXiv, GitHub, and blog sources simultaneously. A deduplication agent removes redundant findings. A synthesis agent writes the final summary. This pattern compresses 45 minutes of manual research into under 3 minutes when combined with Gemini&rsquo;s 1M-token context window and web search tools. The A2A protocol enables adding specialized external agents — a statistics analyzer, a domain-specific validator — without refactoring the core pipeline.</p>
<hr>
<h2 id="faq">FAQ</h2>
<p>The questions below address the most common sticking points developers hit when starting with Google ADK in 2026. Google ADK (Agent Development Kit) is a production-ready, open-source Python framework for building multi-agent AI systems that reached stable v1.0.0 in May 2025. It is optimized for Gemini models and Google Cloud infrastructure, but works with any OpenAI-compatible model via LiteLLM. The framework uses a hierarchical agent tree model where LlmAgents handle reasoning, SequentialAgents and ParallelAgents orchestrate workflows, and session state acts as the shared communication layer. Deployment to Vertex AI Agent Engine requires minimal configuration and provides enterprise-grade scaling, monitoring, and IAM integration. With 13 million developers already on Google&rsquo;s generative model platform and the AI agents market growing at 49.6% CAGR, understanding ADK&rsquo;s core patterns — installation, agent composition, state management, and A2A cross-framework communication — is increasingly a baseline skill for Python developers building AI-powered applications in 2026.</p>
<h3 id="how-do-i-install-google-adk">How do I install Google ADK?</h3>
<p>Run <code>pip install google-adk</code> in a Python 3.10+ environment. For production Vertex AI deployment, use <code>pip install google-adk[vertexai]</code>. Set your <code>GOOGLE_API_KEY</code> environment variable or configure Application Default Credentials for Google Cloud. Verify with <code>adk --version</code>.</p>
<h3 id="what-is-the-difference-between-llmagent-and-sequentialagent-in-google-adk">What is the difference between LlmAgent and SequentialAgent in Google ADK?</h3>
<p>LlmAgent is a reasoning primitive — it calls a Gemini model to decide what to do, which tools to invoke, and what to return. SequentialAgent is a workflow primitive — it has no model of its own, it just runs a list of child agents one after another in order. You compose them: a SequentialAgent contains LlmAgents as children.</p>
<h3 id="how-does-session-state-work-in-google-adk-multi-agent-systems">How does session state work in Google ADK multi-agent systems?</h3>
<p>Session state (<code>session.state</code>) is a Python dictionary that persists for the lifetime of a conversation session. Agents write to it via <code>output_key</code> (the agent&rsquo;s response is automatically stored at that key) or by returning dictionaries from tool functions. All agents in the same pipeline share the same state object, so downstream agents read what upstream agents wrote.</p>
<h3 id="can-google-adk-work-with-models-other-than-gemini">Can Google ADK work with models other than Gemini?</h3>
<p>Yes. ADK supports non-Gemini models through LiteLLM integration. Install <code>litellm</code> and configure the model string as <code>&quot;litellm/gpt-4o&quot;</code> or <code>&quot;litellm/claude-opus-4-7&quot;</code> instead of a Gemini model name. This lets you run the same ADK agent code against any OpenAI-compatible model endpoint.</p>
<h3 id="how-do-i-deploy-a-google-adk-agent-to-production">How do I deploy a Google ADK agent to production?</h3>
<p>For Vertex AI deployment, use <code>reasoning_engines.AdkApp</code> with <code>reasoning_engines.ReasoningEngine.create()</code>. This packages your agent code and deploys it to Google Cloud with auto-scaling and session management. Alternatively, export ADK agents as FastAPI apps using <code>adk api_server</code> for deployment to Cloud Run or any container infrastructure.</p>
]]></content:encoded></item></channel></rss>