<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Multi-Agent on RockB</title><link>https://baeseokjae.github.io/tags/multi-agent/</link><description>Recent content in Multi-Agent on RockB</description><image><title>RockB</title><url>https://baeseokjae.github.io/images/og-default.png</url><link>https://baeseokjae.github.io/images/og-default.png</link></image><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sun, 19 Apr 2026 16:31:58 +0000</lastBuildDate><atom:link href="https://baeseokjae.github.io/tags/multi-agent/index.xml" rel="self" type="application/rss+xml"/><item><title>AG2 (AutoGen v0.4) Guide: Event-Driven Multi-Agent Framework for Python Developers</title><link>https://baeseokjae.github.io/posts/ag2-autogen-v0-4-guide-2026/</link><pubDate>Sun, 19 Apr 2026 16:31:58 +0000</pubDate><guid>https://baeseokjae.github.io/posts/ag2-autogen-v0-4-guide-2026/</guid><description>Complete guide to AG2 (AutoGen v0.4): architecture, ConversableAgent, GroupChat, async messaging, and production best practices for Python developers.</description><content:encoded><![CDATA[<p>AG2 (formerly Microsoft AutoGen, now maintained by the ag2ai community) is a Python framework for building multi-agent AI systems where multiple LLM-powered agents collaborate, debate, and execute tasks autonomously. The v0.4 rewrite introduced an async-first, event-driven architecture that makes AG2 one of the most capable frameworks for complex conversational agent pipelines in 2026.</p>
<h2 id="what-is-ag2-autogen-v04-and-why-it-matters-in-2026">What Is AG2 (AutoGen v0.4) and Why It Matters in 2026</h2>
<p>AG2 is an open-source Python framework that enables developers to build networks of LLM-powered agents that communicate with each other through structured message passing to solve complex tasks collaboratively. Originally released as Microsoft AutoGen, the project transitioned to the independent ag2ai organization in November 2024 with over 54,000 GitHub stars and millions of cumulative downloads. The v0.4 release was a complete architectural redesign — not an incremental update — focused on async-first execution, improved code quality, robustness, and scalability for production workloads. In 2026, AG2 powers document review pipelines at enterprise scale, code generation workflows in CI/CD systems, and research automation for data teams. The framework supports Python 3.10 through 3.13 and integrates with OpenAI, Anthropic, Google Gemini, Alibaba DashScope, and local models via Ollama. What makes AG2 distinctive is its conversation-centric model: agents don&rsquo;t just call tools — they argue, critique, refine, and reach consensus through structured dialogue, which is fundamentally different from how LangGraph or CrewAI approach orchestration.</p>
<p>The shift from v0.2 to v0.4 wasn&rsquo;t just about adding features. The v0.2 API was synchronous by default and relied heavily on <code>initiate_chat()</code> as the entry point for everything. AG2 v0.4 separates concerns into three distinct layers — Core, AgentChat, and Extensions — and makes asynchronous execution the primary pattern. If you&rsquo;re running AutoGen in production on v0.2, migration requires meaningful refactoring. If you&rsquo;re starting fresh in 2026, use AG2 v0.4 from the beginning.</p>
<h3 id="why-the-community-fork-happened">Why the Community Fork Happened</h3>
<p>Microsoft Research originally developed AutoGen as a research project. When the ag2ai community took over maintenance, it signaled a shift toward production stability over research experimentation. The AG2 team committed to semantic versioning, a stable public API, and a clear deprecation policy — things the research-focused AutoGen lacked. The GitHub community responded: the ag2ai/ag2 repo accumulated 20,000+ Discord members and 3,000+ GitHub forks within months of the transition.</p>
<h2 id="ag2-architecture-deep-dive-core-agentchat-and-extensions-layers">AG2 Architecture Deep Dive: Core, AgentChat, and Extensions Layers</h2>
<p>AG2&rsquo;s v0.4 architecture is organized into three layers that each serve a distinct purpose, allowing developers to work at the abstraction level that fits their use case — from low-level message control to high-level team orchestration. The <strong>Core layer</strong> (<code>autogen_core</code>) provides the fundamental runtime: the actor model, message routing, async event loop, and subscription system. The <strong>AgentChat layer</strong> (<code>autogen_agentchat</code>) builds on Core with pre-built agent types — <code>AssistantAgent</code>, <code>UserProxyAgent</code>, <code>ConversableAgent</code> — and team coordination patterns like <code>RoundRobinGroupChat</code> and <code>SelectorGroupChat</code>. The <strong>Extensions layer</strong> (<code>autogen_ext</code>) provides integrations with external systems: vector databases, code executors, LLM clients for different providers, and tool adapters.</p>
<p>This layered design matters practically: if you need custom routing logic or want to implement a novel agent communication pattern, you work at the Core layer. If you&rsquo;re building a standard multi-agent pipeline, AgentChat has everything you need. If you&rsquo;re integrating with Qdrant, running code in Docker, or using Azure OpenAI, Extensions handles it. Most developers will work entirely within AgentChat with occasional dips into Extensions.</p>
<p>The Core layer implements the <strong>actor model</strong>: each agent is an independent actor with its own message inbox, local state, and processing loop. Agents don&rsquo;t call each other directly — they publish messages to a runtime that routes them based on topic subscriptions. This is what makes AG2&rsquo;s event-driven pattern different from simple function chaining. An agent can subscribe to multiple message types, emit messages that trigger other agents asynchronously, and handle failures without blocking the entire pipeline.</p>
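<p>This routing model can be illustrated without AG2 at all. Below is a minimal, framework-free sketch of topic-based publish/subscribe between actors; every class and function name in it is illustrative, not part of the AG2 API:</p>

```python
import asyncio
from collections import defaultdict
from dataclasses import dataclass

# Framework-free sketch of topic-based actor routing. Names are
# illustrative -- this shows the pattern, not the AG2 API itself.
@dataclass
class Message:
    topic: str
    body: str

class Runtime:
    """Routes each published message to every inbox subscribed to its topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of inboxes

    def subscribe(self, topic: str, inbox: asyncio.Queue):
        self.subscribers[topic].append(inbox)

    async def publish(self, msg: Message):
        for inbox in self.subscribers[msg.topic]:
            await inbox.put(msg)

async def agent(name: str, inbox: asyncio.Queue, log: list):
    # Each agent drains its own inbox independently; no agent calls
    # another directly, so a slow agent cannot block the others.
    while True:
        msg = await inbox.get()
        if msg.body == "STOP":
            return
        log.append(f"{name} handled {msg.body}")

async def demo():
    runtime, log = Runtime(), []
    inboxes = {name: asyncio.Queue() for name in ("planner", "executor")}
    for inbox in inboxes.values():
        runtime.subscribe("tasks", inbox)

    workers = [asyncio.create_task(agent(n, q, log)) for n, q in inboxes.items()]
    await runtime.publish(Message("tasks", "review design doc"))
    await runtime.publish(Message("tasks", "STOP"))
    await asyncio.gather(*workers)
    return log

print(asyncio.run(demo()))
```

<p>Both agents subscribed to <code>tasks</code> receive the same message concurrently, which is the property that function chaining cannot give you.</p>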
<h3 id="understanding-the-runtime">Understanding the Runtime</h3>
<p>The <code>SingleThreadedAgentRuntime</code> is the default for local development. For production distributed systems, the Extensions layer provides a gRPC-based distributed runtime. The runtime manages agent lifecycle, handles message queuing, and enforces the subscription model. You register agents with the runtime, define their topic subscriptions, and then publish events — the runtime handles the rest.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_core <span style="color:#f92672">import</span> SingleThreadedAgentRuntime
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>runtime <span style="color:#f92672">=</span> SingleThreadedAgentRuntime()
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">await</span> runtime<span style="color:#f92672">.</span>start()
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Register agents and publish messages</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">await</span> runtime<span style="color:#f92672">.</span>stop_when_idle()
</span></span></code></pre></div><h2 id="key-concepts-conversableagent-assistantagent-and-event-driven-messaging">Key Concepts: ConversableAgent, AssistantAgent, and Event-Driven Messaging</h2>
<p>AG2&rsquo;s agent model centers on <code>ConversableAgent</code> — the base class that every agent in the AgentChat layer inherits from — which implements the core protocol for sending, receiving, and responding to messages within a multi-agent conversation. Every agent in AG2 can initiate a conversation, respond to messages, call tools, and delegate subtasks to other agents. <code>AssistantAgent</code> extends <code>ConversableAgent</code> with LLM-backed reasoning: it takes messages, constructs prompts, calls the configured LLM, and returns structured responses. <code>UserProxyAgent</code> acts as a human-in-the-loop stand-in: it can execute code, request human input, or auto-reply based on configured rules.</p>
<p>The event-driven messaging model in v0.4 works differently from the synchronous <code>initiate_chat()</code> pattern in v0.2. Instead of one agent kicking off a blocking conversation, agents publish messages to typed topics. Other agents that have subscribed to those topic types receive the messages and process them in their own async loops. This enables genuinely parallel agent execution — multiple agents can process messages simultaneously without waiting for each other.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.agents <span style="color:#f92672">import</span> AssistantAgent
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.teams <span style="color:#f92672">import</span> RoundRobinGroupChat
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_ext.models.openai <span style="color:#f92672">import</span> OpenAIChatCompletionClient
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>model_client <span style="color:#f92672">=</span> OpenAIChatCompletionClient(model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gpt-4o&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>planner <span style="color:#f92672">=</span> AssistantAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;planner&#34;</span>,
</span></span><span style="display:flex;"><span>    model_client<span style="color:#f92672">=</span>model_client,
</span></span><span style="display:flex;"><span>    system_message<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;You break complex tasks into actionable steps.&#34;</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>executor <span style="color:#f92672">=</span> AssistantAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;executor&#34;</span>,
</span></span><span style="display:flex;"><span>    model_client<span style="color:#f92672">=</span>model_client,
</span></span><span style="display:flex;"><span>    system_message<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;You implement the steps provided by the planner.&#34;</span>
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><h3 id="tools-and-function-calling">Tools and Function Calling</h3>
<p>AG2 agents call Python functions as tools through the standard function-calling API. You define tools as regular Python functions with type annotations, register them with an agent, and the agent decides when to call them based on conversation context. AG2 supports OpenAI&rsquo;s function calling format and automatically generates the JSON schema from Python type hints.</p>
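<p>To see what that schema generation involves, here is a framework-free sketch of deriving an OpenAI-style function schema from type hints; it mirrors the idea, not AG2&rsquo;s internal implementation:</p>

```python
import inspect
from typing import get_type_hints

# Sketch of deriving a function-calling schema from Python type hints.
# This illustrates the mechanism; AG2's real implementation differs.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters go into the schema
    params = {name: {"type": PY_TO_JSON.get(tp, "string")}
              for name, tp in hints.items()}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

def search_docs(query: str, limit: int = 5) -> str:
    """Search internal documentation for the given query."""
    return f"top {limit} results for {query}"

print(tool_schema(search_docs)["parameters"]["properties"])
```

<p>The docstring becomes the tool description the LLM sees, which is why descriptive docstrings on tool functions directly affect call quality.</p>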
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">search_docs</span>(query: str) <span style="color:#f92672">-&gt;</span> str:
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Search internal documentation for the given query.&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># Placeholder: replace with a real search backend</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> <span style="color:#e6db74">&#34;No results found for: &#34;</span> <span style="color:#f92672">+</span> query
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>agent <span style="color:#f92672">=</span> AssistantAgent(
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;researcher&#34;</span>,
</span></span><span style="display:flex;"><span>    model_client<span style="color:#f92672">=</span>model_client,
</span></span><span style="display:flex;"><span>    tools<span style="color:#f92672">=</span>[search_docs]
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><h2 id="getting-started-installing-ag2-and-your-first-multi-agent-system">Getting Started: Installing AG2 and Your First Multi-Agent System</h2>
<p>Installing AG2 and running your first multi-agent conversation requires Python 3.10+ and three pip packages — <code>autogen-agentchat</code> for the high-level agent API, <code>autogen-ext</code> for LLM provider clients, and optionally <code>autogen-core</code> if you need direct runtime access. The separation into multiple packages is intentional: it keeps dependency footprints small. A project that only needs OpenAI doesn&rsquo;t pull in Anthropic or Gemini client libraries.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>pip install autogen-agentchat <span style="color:#e6db74">&#34;autogen-ext[openai]&#34;</span>
</span></span></code></pre></div><p>For Anthropic Claude or Google Gemini:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>pip install <span style="color:#e6db74">&#34;autogen-ext[anthropic]&#34;</span>
</span></span><span style="display:flex;"><span>pip install <span style="color:#e6db74">&#34;autogen-ext[gemini]&#34;</span>
</span></span></code></pre></div><p>For local models via Ollama:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>pip install <span style="color:#e6db74">&#34;autogen-ext[ollama]&#34;</span>
</span></span></code></pre></div><p>Here&rsquo;s a minimal two-agent system that solves a coding task:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">import</span> asyncio
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.agents <span style="color:#f92672">import</span> AssistantAgent, UserProxyAgent
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.teams <span style="color:#f92672">import</span> RoundRobinGroupChat
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.conditions <span style="color:#f92672">import</span> TextMentionTermination
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_ext.models.openai <span style="color:#f92672">import</span> OpenAIChatCompletionClient
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">async</span> <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">main</span>():
</span></span><span style="display:flex;"><span>    model_client <span style="color:#f92672">=</span> OpenAIChatCompletionClient(model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gpt-4o-mini&#34;</span>)
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    assistant <span style="color:#f92672">=</span> AssistantAgent(
</span></span><span style="display:flex;"><span>        name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;assistant&#34;</span>,
</span></span><span style="display:flex;"><span>        model_client<span style="color:#f92672">=</span>model_client,
</span></span><span style="display:flex;"><span>        system_message<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;You are a helpful Python developer. Solve the task and say TERMINATE when done.&#34;</span>
</span></span><span style="display:flex;"><span>    )
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># v0.4 UserProxyAgent no longer accepts the v0.2 keyword arguments</span>
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># human_input_mode and code_execution_config; pass input_func instead</span>
</span></span><span style="display:flex;"><span>    user_proxy <span style="color:#f92672">=</span> UserProxyAgent(name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;user_proxy&#34;</span>)
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    termination <span style="color:#f92672">=</span> TextMentionTermination(<span style="color:#e6db74">&#34;TERMINATE&#34;</span>)
</span></span><span style="display:flex;"><span>    team <span style="color:#f92672">=</span> RoundRobinGroupChat([assistant, user_proxy], termination_condition<span style="color:#f92672">=</span>termination)
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    result <span style="color:#f92672">=</span> <span style="color:#66d9ef">await</span> team<span style="color:#f92672">.</span>run(task<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Write a Python function that finds all prime numbers up to N using the Sieve of Eratosthenes.&#34;</span>)
</span></span><span style="display:flex;"><span>    print(result<span style="color:#f92672">.</span>messages[<span style="color:#f92672">-</span><span style="color:#ae81ff">1</span>]<span style="color:#f92672">.</span>content)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>asyncio<span style="color:#f92672">.</span>run(main())
</span></span></code></pre></div><h3 id="configuring-llm-providers">Configuring LLM Providers</h3>
<p>AG2 uses provider-specific client classes from <code>autogen_ext.models</code>. This is different from v0.2&rsquo;s config list approach. You instantiate a client for your provider and pass it to agents directly:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># OpenAI</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_ext.models.openai <span style="color:#f92672">import</span> OpenAIChatCompletionClient
</span></span><span style="display:flex;"><span>client <span style="color:#f92672">=</span> OpenAIChatCompletionClient(model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;gpt-4o&#34;</span>, api_key<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;sk-...&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Anthropic</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_ext.models.anthropic <span style="color:#f92672">import</span> AnthropicChatCompletionClient
</span></span><span style="display:flex;"><span>client <span style="color:#f92672">=</span> AnthropicChatCompletionClient(model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;claude-sonnet-4-6&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Ollama (local)</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_ext.models.ollama <span style="color:#f92672">import</span> OllamaChatCompletionClient
</span></span><span style="display:flex;"><span>client <span style="color:#f92672">=</span> OllamaChatCompletionClient(model<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;llama3.2&#34;</span>)
</span></span></code></pre></div><h2 id="building-real-world-pipelines-groupchat-swarms-and-nested-chats">Building Real-World Pipelines: GroupChat, Swarms, and Nested Chats</h2>
<p>AG2&rsquo;s power emerges in multi-agent orchestration patterns — GroupChat for turn-based collaboration, Swarms for dynamic handoffs, and nested chats for hierarchical task decomposition. These patterns let you build pipelines where agents specialize, delegate, and verify each other&rsquo;s work rather than relying on a single LLM to do everything. A 4-agent GroupChat with 5 rounds generates at least 20 LLM calls, so pattern selection has direct cost implications. Choosing the right orchestration pattern for your task type is one of the most important architectural decisions in an AG2 system.</p>
<p><strong>RoundRobinGroupChat</strong> cycles through agents in fixed order — simple, predictable, good for sequential workflows where each agent has a distinct phase:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.teams <span style="color:#f92672">import</span> RoundRobinGroupChat
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>team <span style="color:#f92672">=</span> RoundRobinGroupChat(
</span></span><span style="display:flex;"><span>    participants<span style="color:#f92672">=</span>[researcher, writer, reviewer],
</span></span><span style="display:flex;"><span>    termination_condition<span style="color:#f92672">=</span>TextMentionTermination(<span style="color:#e6db74">&#34;APPROVED&#34;</span>)
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p><strong>SelectorGroupChat</strong> uses an LLM to dynamically select the next speaker based on conversation context — better for complex workflows where the optimal next step depends on what&rsquo;s happened so far:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.teams <span style="color:#f92672">import</span> SelectorGroupChat
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>team <span style="color:#f92672">=</span> SelectorGroupChat(
</span></span><span style="display:flex;"><span>    participants<span style="color:#f92672">=</span>[planner, coder, tester, debugger],
</span></span><span style="display:flex;"><span>    model_client<span style="color:#f92672">=</span>model_client,
</span></span><span style="display:flex;"><span>    selector_prompt<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Based on the conversation, select the most appropriate next agent.&#34;</span>
</span></span><span style="display:flex;"><span>)
</span></span></code></pre></div><p><strong>Swarm</strong> implements handoff-based routing: agents pass control to each other explicitly using <code>HandoffMessage</code>. This is the pattern for customer service bots, triage systems, or any workflow where each agent knows when to escalate or delegate:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.teams <span style="color:#f92672">import</span> Swarm
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> autogen_agentchat.messages <span style="color:#f92672">import</span> HandoffMessage
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Agents use HandoffMessage to transfer control</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Swarm routes to the specified agent automatically</span>
</span></span><span style="display:flex;"><span>team <span style="color:#f92672">=</span> Swarm(participants<span style="color:#f92672">=</span>[triage_agent, billing_agent, support_agent])
</span></span></code></pre></div><h3 id="nested-chats-for-complex-decomposition">Nested Chats for Complex Decomposition</h3>
<p>Nested chats let a parent agent kick off an entire sub-conversation as part of its own reasoning. This is powerful for research tasks where an agent needs to gather information from multiple specialized sub-agents before synthesizing a response. In v0.4, where <code>initiate_chat()</code> is deprecated, you implement nested chats by having an agent&rsquo;s tool run an inner team with <code>await team.run(task=...)</code>, creating a new conversation context.</p>
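<p>The overall shape is easy to sketch without the framework: a parent flow whose tool awaits a complete inner exchange before synthesizing. All names below are illustrative stand-ins, not AG2 API:</p>

```python
import asyncio

# Illustrative nested-conversation shape: the parent's "tool" runs a
# complete inner dialogue and returns its result to the outer flow.
async def sub_agent(role: str, question: str) -> str:
    # Stand-in for an LLM-backed specialist answering inside the sub-chat.
    return f"[{role}] findings on: {question}"

async def research_tool(question: str) -> str:
    # The nested chat: specialists answer concurrently, and their
    # messages form a sub-conversation transcript.
    transcript = await asyncio.gather(
        sub_agent("market", question),
        sub_agent("technical", question),
    )
    return " | ".join(transcript)

async def parent_agent(task: str) -> str:
    inner = await research_tool(task)  # kick off the sub-conversation
    return f"Synthesis of ({inner})"

print(asyncio.run(parent_agent("adopt AG2?")))
```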
<h2 id="ag2-vs-langgraph-vs-crewai-choosing-the-right-framework-in-2026">AG2 vs LangGraph vs CrewAI: Choosing the Right Framework in 2026</h2>
<p>AG2 excels at multi-party conversational workflows, consensus-building, and scenarios where agents need to debate or critique each other — LangGraph is better for deterministic state machines with complex branching logic, and CrewAI is better for simple role-based pipelines where ease of setup matters more than flexibility. This is the practical decision guide based on actual production use patterns in 2026. All three frameworks are mature enough for production, but they optimize for fundamentally different problem shapes. The wrong choice means fighting the framework; the right choice means the framework amplifies your design.</p>
<table>
  <thead>
      <tr>
          <th>Criteria</th>
          <th>AG2</th>
          <th>LangGraph</th>
          <th>CrewAI</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Primary pattern</strong></td>
          <td>Conversational agents</td>
          <td>State machine graphs</td>
          <td>Role-based crews</td>
      </tr>
      <tr>
          <td><strong>Learning curve</strong></td>
          <td>Medium</td>
          <td>High</td>
          <td>Low</td>
      </tr>
      <tr>
          <td><strong>Async support</strong></td>
          <td>Native (v0.4)</td>
          <td>Yes</td>
          <td>Limited</td>
      </tr>
      <tr>
          <td><strong>Human-in-loop</strong></td>
          <td>Built-in</td>
          <td>Manual</td>
          <td>Basic</td>
      </tr>
      <tr>
          <td><strong>Debugging</strong></td>
          <td>Conversation logs</td>
          <td>Graph visualization</td>
          <td>Simple logs</td>
      </tr>
      <tr>
          <td><strong>Best for</strong></td>
          <td>Group debates, consensus</td>
          <td>Complex branching workflows</td>
          <td>Simple automation</td>
      </tr>
      <tr>
          <td><strong>Python skill needed</strong></td>
          <td>Intermediate</td>
          <td>Advanced</td>
          <td>Beginner-friendly</td>
      </tr>
      <tr>
          <td><strong>Cost per run</strong></td>
          <td>High (many LLM calls)</td>
          <td>Controllable</td>
          <td>Medium</td>
      </tr>
  </tbody>
</table>
<p><strong>Choose AG2 when:</strong></p>
<ul>
<li>Your task benefits from agents critiquing each other&rsquo;s work (code review, document editing, research validation)</li>
<li>You need flexible conversation routing that depends on semantic content</li>
<li>You&rsquo;re building customer service, tutoring, or debate-style applications</li>
<li>You want native async with multi-provider LLM support</li>
</ul>
<p><strong>Choose LangGraph when:</strong></p>
<ul>
<li>Your workflow has predictable branches with clear state transitions</li>
<li>You need fine-grained control over every execution step</li>
<li>You&rsquo;re building workflows where correctness is more important than flexibility</li>
<li>Your team has strong Python and graph-theory background</li>
</ul>
<p><strong>Choose CrewAI when:</strong></p>
<ul>
<li>You need to ship fast and the workflow is straightforward</li>
<li>Non-engineers are defining the agent roles and tasks</li>
<li>The task doesn&rsquo;t require complex inter-agent negotiation</li>
</ul>
<h3 id="migration-from-autogen-v02-to-ag2-v04">Migration from AutoGen v0.2 to AG2 v0.4</h3>
<p>The v0.2 to v0.4 migration involves breaking changes at every level. Key changes:</p>
<ol>
<li><strong>Import paths changed</strong>: <code>from autogen import AssistantAgent</code> → <code>from autogen_agentchat.agents import AssistantAgent</code></li>
<li><strong>Config list removed</strong>: Replace <code>llm_config={&quot;config_list&quot;: [...]}</code> with provider-specific client objects</li>
<li><strong><code>initiate_chat()</code> deprecated</strong>: Use team-based APIs with <code>await team.run(task=...)</code></li>
<li><strong>Synchronous code won&rsquo;t work</strong>: Everything is async — wrap entry points with <code>asyncio.run()</code> or await the coroutines from an existing event loop</li>
</ol>
<h2 id="production-best-practices-cost-control-state-management-and-observability">Production Best Practices: Cost Control, State Management, and Observability</h2>
<p>Running AG2 in production requires explicit strategies for controlling LLM costs, persisting conversation state across sessions, and observing agent behavior — because the default configuration optimizes for flexibility, not cost or reliability. A 4-agent GroupChat with 5 rounds generates at least 20 LLM calls, each sending the full conversation history as context. Without cost controls, a single complex task can consume $5–$20 in API calls. With the right patterns, you can cut that by 60–80% while maintaining output quality.</p>
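<p>The cost dynamics are easy to quantify: because each turn resends the accumulated history, input tokens grow roughly quadratically with turn count. A back-of-envelope estimator (the per-token prices below are illustrative placeholders, not current provider rates):</p>

```python
# Back-of-envelope GroupChat cost model: every turn resends the full
# history so far, so input tokens grow roughly quadratically with the
# number of turns. Prices are illustrative placeholders only.
def estimate_cost(agents, rounds, tokens_per_msg=500,
                  usd_per_1k_input=0.0025, usd_per_1k_output=0.01):
    turns = agents * rounds
    # Turn t sends the t messages already in the history as input context.
    input_tokens = sum(t * tokens_per_msg for t in range(turns))
    output_tokens = turns * tokens_per_msg
    cost = (input_tokens / 1000 * usd_per_1k_input
            + output_tokens / 1000 * usd_per_1k_output)
    return turns, cost

turns, cost = estimate_cost(agents=4, rounds=5)
print(f"{turns} LLM calls, ~${cost:.2f} per run")
```

<p>Doubling <code>rounds</code> roughly quadruples the input-token bill, which is why capping turns is the single highest-leverage cost control.</p>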
<p><strong>Cost Control Strategies:</strong></p>
<ol>
<li>
<p><strong>Use cheaper models for simple agents</strong>: Route tool-calling agents to <code>gpt-4o-mini</code> or <code>claude-haiku-4-5</code> and reserve expensive models for reasoning-heavy agents</p>
</li>
<li>
<p><strong>Set max_turns explicitly</strong>: Always cap GroupChat rounds:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span>team <span style="color:#f92672">=</span> RoundRobinGroupChat(participants<span style="color:#f92672">=</span>[<span style="color:#f92672">...</span>], max_turns<span style="color:#f92672">=</span><span style="color:#ae81ff">5</span>)
</span></span></code></pre></div></li>
<li>
<p><strong>Cache LLM responses</strong>: For deterministic subtasks (document classification, entity extraction), cache results to avoid redundant LLM calls</p>
</li>
<li>
<p><strong>Use selective context</strong>: AG2 v0.4 supports message filtering — don&rsquo;t send the entire conversation history to every agent for every turn</p>
</li>
</ol>
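<p>Strategy 3 can be as simple as a hash-keyed dictionary in front of the model client. A minimal sketch, where <code>fake_llm</code> is a hypothetical stand-in for a real (and expensive) client call:</p>
```python
import hashlib

# Cache keyed on a hash of the prompt: only safe for deterministic subtasks
# (document classification, entity extraction) where the same prompt recurs.
_cache: dict[str, str] = {}

def cached_llm_call(prompt: str, call_llm) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt)  # pay only for the first occurrence
    return _cache[key]

calls = 0
def fake_llm(prompt: str) -> str:  # stand-in for a real model client
    global calls
    calls += 1
    return "invoice"

cached_llm_call("classify: doc-17", fake_llm)
cached_llm_call("classify: doc-17", fake_llm)  # second call served from cache
print(calls)  # 1
```
<p>In production, swap the in-memory dictionary for Redis or an on-disk store so cached results survive process restarts.</p>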
<p><strong>State Persistence:</strong></p>
<p>AG2 v0.4 introduces <code>save_state()</code> and <code>load_state()</code> on team objects, enabling conversation checkpointing:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># Save after completion</span>
</span></span><span style="display:flex;"><span>state <span style="color:#f92672">=</span> <span style="color:#66d9ef">await</span> team<span style="color:#f92672">.</span>save_state()
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">with</span> open(<span style="color:#e6db74">&#34;checkpoint.json&#34;</span>, <span style="color:#e6db74">&#34;w&#34;</span>) <span style="color:#66d9ef">as</span> f:
</span></span><span style="display:flex;"><span>    json<span style="color:#f92672">.</span>dump(state, f)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Resume from checkpoint</span>
</span></span><span style="display:flex;"><span>new_team <span style="color:#f92672">=</span> RoundRobinGroupChat(participants<span style="color:#f92672">=</span>[<span style="color:#f92672">...</span>])
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">with</span> open(<span style="color:#e6db74">&#34;checkpoint.json&#34;</span>) <span style="color:#66d9ef">as</span> f:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">await</span> new_team<span style="color:#f92672">.</span>load_state(json<span style="color:#f92672">.</span>load(f))
</span></span><span style="display:flex;"><span>result <span style="color:#f92672">=</span> <span style="color:#66d9ef">await</span> new_team<span style="color:#f92672">.</span>run(task<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Continue from where we left off&#34;</span>)
</span></span></code></pre></div><p><strong>Observability:</strong></p>
<p>AG2 integrates with OpenTelemetry for distributed tracing. Each LLM call, tool invocation, and agent message is a traceable span. For production systems, connect to Jaeger, Datadog, or Honeycomb:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> opentelemetry <span style="color:#f92672">import</span> trace
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> opentelemetry.sdk.trace <span style="color:#f92672">import</span> TracerProvider
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> opentelemetry.sdk.trace.export <span style="color:#f92672">import</span> BatchSpanProcessor
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> opentelemetry.exporter.otlp.proto.grpc.trace_exporter <span style="color:#f92672">import</span> OTLPSpanExporter
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>tracer_provider <span style="color:#f92672">=</span> TracerProvider()
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Export spans to your OTLP-compatible backend (Jaeger, Datadog, Honeycomb)</span>
</span></span><span style="display:flex;"><span>tracer_provider<span style="color:#f92672">.</span>add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
</span></span><span style="display:flex;"><span>trace<span style="color:#f92672">.</span>set_tracer_provider(tracer_provider)
</span></span><span style="display:flex;"><span><span style="color:#75715e"># AG2 automatically instruments LLM calls and agent messages</span>
</span></span></code></pre></div><h3 id="error-handling-and-retries">Error Handling and Retries</h3>
<p>AG2 agents can fail silently if LLM calls time out or return malformed responses. Implement explicit retry logic at the team level and validate agent outputs before passing them downstream. The <code>on_messages_stream()</code> method lets you inspect messages in real-time and terminate early if an agent enters a failure loop.</p>
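<p>A sketch of that retry pattern, using generic asyncio code around the team call (<code>flaky_run</code> is a hypothetical stand-in for the real <code>team.run(...)</code> coroutine):</p>
```python
import asyncio

# Retry an async run with exponential backoff; re-raise after the final
# attempt so failures surface instead of disappearing silently.
async def run_with_retries(run_once, max_attempts: int = 3, base_delay: float = 0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return await asyncio.wait_for(run_once(), timeout=120)
        except (asyncio.TimeoutError, RuntimeError):
            if attempt == max_attempts:
                raise  # surface the failure after the last attempt
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))

attempts = 0
async def flaky_run():  # stand-in for `await team.run(task=...)`
    global attempts
    attempts += 1
    if attempts < 3:
        raise RuntimeError("malformed response")
    return "task complete"

print(asyncio.run(run_with_retries(flaky_run)))  # task complete
```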
<h2 id="ag2-beta-and-the-road-to-v10-what-python-developers-need-to-know">AG2 Beta and the Road to v1.0: What Python Developers Need to Know</h2>
<p>AG2 Beta (<code>autogen.beta</code>) previews the v1.0 architecture, which introduces streaming-first agent responses, improved memory systems, and a unified tool registry that works across all agent types — changes that will affect how you build production systems starting in late 2026. The Beta track is importable today as <code>from autogen.beta import ...</code> alongside the stable v0.4 API. The ag2ai team has committed to not breaking stable v0.4 APIs before a 6-month deprecation window, but Beta APIs can change without notice. The most significant v1.0 changes for Python developers are:</p>
<p><strong>Streaming responses</strong>: V1.0 makes streaming the default for all LLM calls, enabling real-time output for user-facing applications. In v0.4, streaming requires explicit configuration per agent. In v1.0, it&rsquo;s automatic with a unified <code>on_token()</code> callback.</p>
<p><strong>Memory architecture</strong>: V1.0 introduces pluggable memory backends. Agents can store and retrieve context from vector databases (Qdrant, Pinecone, Chroma) without custom tool implementations. This replaces the manual retrieval patterns required in v0.4.</p>
<p><strong>Unified tool registry</strong>: In v0.4, each agent has its own tool list. V1.0 introduces a shared registry where tools can be discovered and used by any agent in the system, reducing code duplication in large multi-agent pipelines.</p>
<p><strong>What to do now</strong>: Build on stable v0.4 APIs for production systems. Experiment with <code>autogen.beta</code> in development to prepare for migration. Watch the ag2ai/ag2 GitHub releases for the v1.0 roadmap — the community is active and the release cadence is roughly quarterly.</p>
<hr>
<h2 id="faq">FAQ</h2>
<p><strong>Q: Is AG2 the same as AutoGen?</strong>
AG2 is the community continuation of Microsoft AutoGen. After the ag2ai organization took over in November 2024, they published the package as <code>ag2</code> on PyPI while maintaining the <code>autogen</code> namespace for backward compatibility. The codebase is the same project, now with community governance instead of Microsoft Research ownership.</p>
<p><strong>Q: Can I use AG2 with local LLMs?</strong>
Yes. AG2 v0.4 supports Ollama via <code>autogen_ext.models.ollama.OllamaChatCompletionClient</code>. Install <code>pip install autogen-ext[ollama]</code>, start Ollama locally with <code>ollama serve</code>, and configure an <code>OllamaChatCompletionClient</code> pointing to <code>http://localhost:11434</code>. This enables fully offline multi-agent systems with models like Llama 3.2 or Mistral.</p>
<p><strong>Q: How does AG2 v0.4 differ from v0.2 in practice?</strong>
V0.4 requires async code everywhere — you can&rsquo;t run <code>initiate_chat()</code> synchronously. The import paths changed (now <code>autogen_agentchat</code>, <code>autogen_core</code>, <code>autogen_ext</code> instead of just <code>autogen</code>). LLM configuration moved from config lists to provider-specific client objects. Team-based APIs replaced the direct <code>initiate_chat()</code> pattern. Plan for a meaningful refactoring effort when migrating from v0.2.</p>
<p><strong>Q: How much does running AG2 cost in production?</strong>
Cost depends heavily on model choice and GroupChat configuration. A 4-agent GroupChat with 5 rounds generates at least 20 LLM calls. Using <code>gpt-4o-mini</code> ($0.15/1M input tokens) instead of <code>gpt-4o</code> ($2.50/1M input tokens) can reduce costs by 94% for agents that don&rsquo;t require advanced reasoning. Budget for 50–200 tokens of conversation history per message multiplied by the number of agents and rounds.</p>
<p><strong>Q: Is AG2 ready for production in 2026?</strong>
Yes, with caveats. The stable v0.4 API is production-ready. The ag2ai community has implemented semantic versioning, a deprecation policy, and a stable public API contract. Large-scale enterprise deployment requires custom work for state persistence, observability, and cost management — AG2 provides the building blocks but doesn&rsquo;t solve these problems out of the box. For most teams building internal tools, automation pipelines, or customer-facing agents, v0.4 is stable enough to ship.</p>
]]></content:encoded></item><item><title>CrewAI Tutorial 2026: Build Multi-Agent Systems in Python Step by Step</title><link>https://baeseokjae.github.io/posts/crewai-tutorial-2026/</link><pubDate>Sun, 19 Apr 2026 11:46:58 +0000</pubDate><guid>https://baeseokjae.github.io/posts/crewai-tutorial-2026/</guid><description>Complete CrewAI tutorial for 2026: install, configure agents, add tools, implement memory, and deploy multi-agent Python systems to production.</description><content:encoded><![CDATA[<p>CrewAI is a Python framework for building multi-agent AI systems where each agent has a defined role, goal, and backstory — and agents collaborate to complete complex tasks. Install it with <code>pip install crewai</code>, define agents and tasks in YAML files, then wire them together with a Python class. As of April 2026, CrewAI has 49k GitHub stars and over 14,800 monthly searches, making it the fastest-growing multi-agent framework available.</p>
<h2 id="why-crewai-is-the-go-to-framework-for-multi-agent-systems-in-2026">Why CrewAI Is the Go-To Framework for Multi-Agent Systems in 2026</h2>
<p>CrewAI is a purpose-built multi-agent orchestration framework — not a wrapper around LangChain, but an independent implementation written from scratch. It models agents as collaborative team members with distinct roles (Researcher, Writer, Analyst), each equipped with specific tools and a clear goal. Unlike graph-based alternatives such as LangGraph, CrewAI uses a role-playing paradigm that maps closely to how real teams divide work. The GitHub repository hit 49,000 stars with 6,700+ forks as of April 2026, and version 1.14.2 ships with built-in support for OpenAI, Anthropic, Google Gemini, Azure OpenAI, and Ollama via LiteLLM. Teams running production workloads report 40-60% reduction in prompt engineering time compared to single-agent setups, because each agent only needs instructions relevant to its narrow specialization. The framework ships with two primary abstractions: <strong>Crews</strong> for collaborative single-workflow agent teams, and <strong>Flows</strong> for multi-stage orchestration pipelines with conditional branching and state management. This tutorial walks through all 13 steps from installation to production deployment.</p>
<h2 id="prerequisites-what-you-need-before-installing-crewai">Prerequisites: What You Need Before Installing CrewAI</h2>
<p>CrewAI requires Python 3.10 or higher (3.12 or 3.13 recommended), at least one LLM provider API key, and <code>pip</code> or <code>uv</code> for package management. No GPU or specialized hardware is needed — all model inference happens via remote API calls to your chosen provider. A development machine with 4GB RAM and a modern CPU handles everything from installation through local testing. For following the web-search tool examples, a free-tier Serper API or Tavily API key is sufficient. The project scaffold generates a <code>.env</code> template listing every environment variable you need before you write a single line of code. For production deployments, Docker and a cloud provider account (AWS, GCP, Railway, Fly.io) are helpful for containerizing and running your crew as a long-lived service, but neither is required to complete this tutorial. If you&rsquo;re new to Python environments, use <code>python -m venv .venv &amp;&amp; source .venv/bin/activate</code> before installing packages to keep your global Python installation clean.</p>
<h2 id="step-1-how-to-install-crewai-and-create-your-first-project">Step 1: How to Install CrewAI and Create Your First Project</h2>
<p>Installing CrewAI follows the same pattern as any modern Python CLI tool — you install the core package, optionally add tools, then scaffold a project directory. The CLI generates the complete project structure including YAML configuration files, so you don&rsquo;t need to hand-write boilerplate.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Install with uv (recommended) or pip</span>
</span></span><span style="display:flex;"><span>pip install crewai crewai-tools
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Scaffold a new project</span>
</span></span><span style="display:flex;"><span>crewai create crew my_research_crew
</span></span><span style="display:flex;"><span>cd my_research_crew
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Install project dependencies (the scaffold ships a pyproject.toml)</span>
</span></span><span style="display:flex;"><span>crewai install
</span></span></code></pre></div><p>The <code>crewai create crew</code> command generates this structure:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text">my_research_crew/
├── src/
│   └── my_research_crew/
│       ├── config/
│       │   ├── agents.yaml
│       │   └── tasks.yaml
│       ├── tools/
│       │   └── custom_tool.py
│       ├── crew.py
│       └── main.py
├── pyproject.toml
└── .env
</code></pre></div>
<p>The <code>config/</code> directory holds your agent and task definitions in YAML — this is where most of your day-to-day configuration lives. The <code>crew.py</code> file wires everything together using Python decorators.</p>
<h2 id="step-2-how-to-configure-llm-providers-with-environment-variables">Step 2: How to Configure LLM Providers with Environment Variables</h2>
<p>CrewAI configures LLM providers entirely through environment variables, using LiteLLM as a universal adapter layer. This means switching from OpenAI GPT-4o to Anthropic Claude or Google Gemini requires changing two environment variables — no code changes. The default model is <code>gpt-4o</code> when <code>OPENAI_API_KEY</code> is set, but you can override this globally via <code>OPENAI_MODEL_NAME</code> or per-agent using the <code>llm</code> field in <code>agents.yaml</code>. In 2026, most production teams run mixed-model setups: a cheaper model like Claude Haiku or Gemini Flash for research and summarization agents, and a premium model like Claude Sonnet or GPT-4o only for the final synthesis or writing step. This hybrid approach reduces per-run cost by 50–70% without measurable quality loss for structured research workflows. The <code>.env</code> file pattern also makes it easy to rotate API keys or switch providers without touching agent or task definitions.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># .env file</span>
</span></span><span style="display:flex;"><span>OPENAI_API_KEY<span style="color:#f92672">=</span>sk-...              <span style="color:#75715e"># For OpenAI (default provider)</span>
</span></span><span style="display:flex;"><span>ANTHROPIC_API_KEY<span style="color:#f92672">=</span>sk-ant-...       <span style="color:#75715e"># For Claude models</span>
</span></span><span style="display:flex;"><span>GOOGLE_API_KEY<span style="color:#f92672">=</span>AIza...             <span style="color:#75715e"># For Gemini models</span>
</span></span><span style="display:flex;"><span>SERPER_API_KEY<span style="color:#f92672">=</span>...                 <span style="color:#75715e"># For web search tool</span>
</span></span></code></pre></div><p>To use Claude Sonnet as your default model:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>OPENAI_API_KEY<span style="color:#f92672">=</span>sk-ant-...
</span></span><span style="display:flex;"><span>OPENAI_MODEL_NAME<span style="color:#f92672">=</span>claude-sonnet-4-6
</span></span><span style="display:flex;"><span>OPENAI_API_BASE<span style="color:#f92672">=</span>https://api.anthropic.com/v1
</span></span></code></pre></div><p>Or configure it per-agent in YAML (shown in Step 3). For local models via Ollama, set <code>OPENAI_API_BASE=http://localhost:11434/v1</code> and use model names like <code>ollama/llama3.2</code>.</p>
<h2 id="step-3-how-to-define-agents-in-yaml-with-roles-goals-and-backstories">Step 3: How to Define Agents in YAML with Roles, Goals, and Backstories</h2>
<p>CrewAI agent definitions live in <code>config/agents.yaml</code>. Each agent has three required fields — <code>role</code>, <code>goal</code>, and <code>backstory</code> — plus optional fields for the LLM, tools, memory settings, and verbosity. The backstory is more important than it sounds: it provides the context that shapes how the LLM interprets ambiguous instructions within that agent&rsquo;s scope.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#75715e"># config/agents.yaml</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">researcher</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">role</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Senior Research Analyst</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">goal</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Gather comprehensive, accurate information about {topic}
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    from reliable sources. Identify key trends and data points.</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">backstory</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    You are a veteran research analyst with 10 years of experience
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    in technology trends. You never cite sources you haven&#39;t verified,
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    and you flag uncertainty explicitly when data is incomplete.</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">verbose</span>: <span style="color:#66d9ef">true</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">memory</span>: <span style="color:#66d9ef">true</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">writer</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">role</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Technical Content Writer</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">goal</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Transform research findings into clear, engaging articles
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    that developers can immediately apply to their work.</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">backstory</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    You write for senior developers who value precision over prose.
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    You use concrete examples, avoid marketing language, and always
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    include working code snippets when relevant.</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">verbose</span>: <span style="color:#66d9ef">true</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">llm</span>: <span style="color:#ae81ff">claude-sonnet-4-6 </span> <span style="color:#75715e"># Override global LLM for this agent</span>
</span></span></code></pre></div><p>The <code>{topic}</code> placeholder is filled at runtime when you kick off the crew — this is CrewAI&rsquo;s template interpolation syntax.</p>
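<p>Conceptually, the interpolation behaves like Python&#39;s brace-style string formatting applied to the YAML strings before they reach the LLM. Here is a stand-alone illustration in plain Python; it mimics the mechanism rather than calling CrewAI internals:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"># Illustration of {placeholder} interpolation: the inputs dict passed to
# kickoff() fills the braces in role, goal, and task description strings.
description = (
    "Gather comprehensive, accurate information about {topic} "
    "from reliable sources."
)
inputs = {"topic": "CrewAI multi-agent framework in 2026"}

rendered = description.format(**inputs)
print(rendered)
</code></pre></div>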
<h2 id="step-4-how-to-create-tasks-in-yaml-with-dependencies-and-context">Step 4: How to Create Tasks in YAML with Dependencies and Context</h2>
<p>Tasks in CrewAI map to discrete pieces of work assigned to specific agents. The key design decision is <code>context</code> — tasks can declare dependencies on other tasks, and their output gets injected into the dependent task&rsquo;s prompt automatically. This is how multi-agent collaboration actually happens in CrewAI.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#75715e"># config/tasks.yaml</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">research_task</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">description</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Research the following topic thoroughly: {topic}
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Focus on:
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    1. Current state and key statistics (with dates)
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    2. Leading tools, frameworks, or companies in this space
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    3. Common use cases and real-world examples
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    4. Known limitations or caveats
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Produce a structured research report with sources.</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">expected_output</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    A detailed research report in markdown format with:
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    - Executive summary (100 words)
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    - Key findings with citations
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    - Data tables where relevant
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    - Source list</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">agent</span>: <span style="color:#ae81ff">researcher</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">writing_task</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">description</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    Using the research report provided, write a technical blog post
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    targeting senior developers. The article should be 1,500+ words,
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    include code examples, and have a practical focus.</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">expected_output</span>: &gt;<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    A complete blog post in markdown format ready for publication.</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">agent</span>: <span style="color:#ae81ff">writer</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">context</span>:
</span></span><span style="display:flex;"><span>    - <span style="color:#ae81ff">research_task </span> <span style="color:#75715e"># Output from research_task is injected here</span>
</span></span></code></pre></div><p>The <code>context</code> field is the most powerful feature in task design. When <code>writing_task</code> runs, CrewAI automatically prepends the output of <code>research_task</code> to the writer agent&rsquo;s prompt.</p>
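<p>The same dependency can be declared without YAML by passing <code>context</code> directly to the <code>Task</code> constructor. A hedged sketch with shortened descriptions, where <code>researcher</code> and <code>writer</code> are assumed to be <code>Agent</code> instances defined elsewhere:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"># Sketch: the YAML context dependency expressed programmatically
from crewai import Task

research_task = Task(
    description="Research the topic thoroughly and cite sources.",
    expected_output="A structured research report with sources.",
    agent=researcher,  # Agent instance defined elsewhere
)

writing_task = Task(
    description="Write a technical blog post from the research.",
    expected_output="A publication-ready markdown article.",
    agent=writer,
    context=[research_task],  # research output is injected into this prompt
)
</code></pre></div>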
<h2 id="step-5-how-to-build-the-crew-definition-with-python-decorators">Step 5: How to Build the Crew Definition with Python Decorators</h2>
<p>The <code>crew.py</code> file uses Python decorators to bind YAML configurations to agent and task instances. The <code>@CrewBase</code> class decorator loads YAML files automatically. <code>@agent</code> methods return configured <code>Agent</code> instances. <code>@task</code> methods return configured <code>Task</code> instances. The <code>@crew</code> method assembles everything into a <code>Crew</code> object.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># src/my_research_crew/crew.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai <span style="color:#f92672">import</span> Agent, Crew, Process, Task
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai.project <span style="color:#f92672">import</span> CrewBase, agent, crew, task
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai_tools <span style="color:#f92672">import</span> SerperDevTool
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@CrewBase</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">MyResearchCrew</span>():
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Multi-agent research and writing crew&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    agents_config <span style="color:#f92672">=</span> <span style="color:#e6db74">&#39;config/agents.yaml&#39;</span>
</span></span><span style="display:flex;"><span>    tasks_config <span style="color:#f92672">=</span> <span style="color:#e6db74">&#39;config/tasks.yaml&#39;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@agent</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">researcher</span>(self) <span style="color:#f92672">-&gt;</span> Agent:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> Agent(
</span></span><span style="display:flex;"><span>            config<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>agents_config[<span style="color:#e6db74">&#39;researcher&#39;</span>],
</span></span><span style="display:flex;"><span>            tools<span style="color:#f92672">=</span>[SerperDevTool()],
</span></span><span style="display:flex;"><span>            verbose<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>
</span></span><span style="display:flex;"><span>        )
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@agent</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">writer</span>(self) <span style="color:#f92672">-&gt;</span> Agent:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> Agent(
</span></span><span style="display:flex;"><span>            config<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>agents_config[<span style="color:#e6db74">&#39;writer&#39;</span>],
</span></span><span style="display:flex;"><span>            verbose<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>
</span></span><span style="display:flex;"><span>        )
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@task</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">research_task</span>(self) <span style="color:#f92672">-&gt;</span> Task:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> Task(
</span></span><span style="display:flex;"><span>            config<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>tasks_config[<span style="color:#e6db74">&#39;research_task&#39;</span>],
</span></span><span style="display:flex;"><span>        )
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@task</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">writing_task</span>(self) <span style="color:#f92672">-&gt;</span> Task:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> Task(
</span></span><span style="display:flex;"><span>            config<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>tasks_config[<span style="color:#e6db74">&#39;writing_task&#39;</span>],
</span></span><span style="display:flex;"><span>            output_file<span style="color:#f92672">=</span><span style="color:#e6db74">&#39;output/article.md&#39;</span>  <span style="color:#75715e"># Saves result to file</span>
</span></span><span style="display:flex;"><span>        )
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@crew</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">crew</span>(self) <span style="color:#f92672">-&gt;</span> Crew:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> Crew(
</span></span><span style="display:flex;"><span>            agents<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>agents,  <span style="color:#75715e"># Auto-collected from @agent methods</span>
</span></span><span style="display:flex;"><span>            tasks<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>tasks,    <span style="color:#75715e"># Auto-collected from @task methods</span>
</span></span><span style="display:flex;"><span>            process<span style="color:#f92672">=</span>Process<span style="color:#f92672">.</span>sequential,
</span></span><span style="display:flex;"><span>            verbose<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>
</span></span><span style="display:flex;"><span>        )
</span></span></code></pre></div><p>The <code>Process.sequential</code> setting means tasks execute in the order they are defined, each receiving its declared context. Note that <code>Process.hierarchical</code> is not a parallel mode: it introduces a manager (configured via <code>manager_llm</code> or a custom <code>manager_agent</code>) that plans, delegates, and reviews tasks dynamically. For genuine concurrency, mark independent tasks with <code>async_execution=True</code> instead.</p>
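<p>A hedged sketch of the hierarchical alternative, in which a manager model plans and delegates rather than running tasks in a fixed order. Parameter names follow recent CrewAI releases; verify them against your installed version, and assume <code>researcher</code>, <code>writer</code>, and the tasks are defined as in the earlier steps:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"># Sketch: hierarchical process with a manager model (requires crewai)
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],          # Agent instances defined elsewhere
    tasks=[research_task, writing_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",                 # required when process is hierarchical
    verbose=True,
)
</code></pre></div>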
<h2 id="step-6-how-to-run-the-crew-and-monitor-execution">Step 6: How to Run the Crew and Monitor Execution</h2>
<p>Running a CrewAI crew is a single <code>kickoff()</code> call that blocks until all tasks complete and returns a <code>CrewOutput</code> object containing the raw text result, structured Pydantic output (if configured), and a token usage summary. Pass a dictionary of inputs to <code>kickoff()</code> — the keys must match the <code>{placeholder}</code> variables in your YAML task descriptions. When <code>verbose=True</code> is set on agents, you&rsquo;ll see real-time output showing each agent&rsquo;s current task, reasoning steps, tool calls and results, and final output. This verbose output is essential during development for catching misconfigured tasks — for instance, you&rsquo;ll quickly spot if an agent is querying the wrong source or if a task description is too vague and producing off-topic results. In production, set <code>verbose=False</code> to suppress the console output and log <code>result.token_usage</code> to your metrics system to track costs per run. The <code>CrewOutput.token_usage</code> field includes total tokens, prompt tokens, and completion tokens broken down by model — critical for cost attribution in multi-model setups.</p>
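<p>For the production path, a minimal cost-logging sketch is shown below. The <code>token_usage</code> attribute names are assumptions based on the fields described above; confirm them against your CrewAI version before relying on them:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"># Sketch: quiet production run that records token costs per kickoff
import logging

from my_research_crew.crew import MyResearchCrew

logger = logging.getLogger("crew.metrics")

result = MyResearchCrew().crew().kickoff(inputs={"topic": "..."})
usage = result.token_usage  # per-run usage metrics (attribute names assumed)
logger.info("tokens total=%s prompt=%s completion=%s",
            usage.total_tokens, usage.prompt_tokens, usage.completion_tokens)
</code></pre></div>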
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># src/my_research_crew/main.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> my_research_crew.crew <span style="color:#f92672">import</span> MyResearchCrew
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">run</span>():
</span></span><span style="display:flex;"><span>    inputs <span style="color:#f92672">=</span> {
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#39;topic&#39;</span>: <span style="color:#e6db74">&#39;CrewAI multi-agent framework in 2026&#39;</span>
</span></span><span style="display:flex;"><span>    }
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span>    result <span style="color:#f92672">=</span> MyResearchCrew()<span style="color:#f92672">.</span>crew()<span style="color:#f92672">.</span>kickoff(inputs<span style="color:#f92672">=</span>inputs)
</span></span><span style="display:flex;"><span>    print(result<span style="color:#f92672">.</span>raw)
</span></span><span style="display:flex;"><span>    print(<span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;</span><span style="color:#ae81ff">\n</span><span style="color:#e6db74">Token usage: </span><span style="color:#e6db74">{</span>result<span style="color:#f92672">.</span>token_usage<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> __name__ <span style="color:#f92672">==</span> <span style="color:#e6db74">&#34;__main__&#34;</span>:
</span></span><span style="display:flex;"><span>    run()
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>crewai run
</span></span></code></pre></div><p>You&rsquo;ll see output like:</p>



<div class="goat svg-container ">
  
    <svg
      xmlns="http://www.w3.org/2000/svg"
      font-family="Menlo,Lucida Console,monospace"
      
        viewBox="0 0 608 73"
      >
      <g transform='translate(8,16)'>
<text text-anchor='middle' x='0' y='4' fill='currentColor' style='font-size:1em'>[</text>
<text text-anchor='middle' x='0' y='20' fill='currentColor' style='font-size:1em'>[</text>
<text text-anchor='middle' x='0' y='36' fill='currentColor' style='font-size:1em'>[</text>
<text text-anchor='middle' x='0' y='52' fill='currentColor' style='font-size:1em'>[</text>
<text text-anchor='middle' x='8' y='4' fill='currentColor' style='font-size:1em'>2</text>
<text text-anchor='middle' x='8' y='20' fill='currentColor' style='font-size:1em'>2</text>
<text text-anchor='middle' x='8' y='36' fill='currentColor' style='font-size:1em'>2</text>
<text text-anchor='middle' x='8' y='52' fill='currentColor' style='font-size:1em'>2</text>
<text text-anchor='middle' x='16' y='4' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='16' y='20' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='16' y='36' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='16' y='52' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='24' y='4' fill='currentColor' style='font-size:1em'>2</text>
<text text-anchor='middle' x='24' y='20' fill='currentColor' style='font-size:1em'>2</text>
<text text-anchor='middle' x='24' y='36' fill='currentColor' style='font-size:1em'>2</text>
<text text-anchor='middle' x='24' y='52' fill='currentColor' style='font-size:1em'>2</text>
<text text-anchor='middle' x='32' y='4' fill='currentColor' style='font-size:1em'>6</text>
<text text-anchor='middle' x='32' y='20' fill='currentColor' style='font-size:1em'>6</text>
<text text-anchor='middle' x='32' y='36' fill='currentColor' style='font-size:1em'>6</text>
<text text-anchor='middle' x='32' y='52' fill='currentColor' style='font-size:1em'>6</text>
<text text-anchor='middle' x='40' y='4' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='40' y='20' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='40' y='36' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='40' y='52' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='48' y='4' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='48' y='20' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='48' y='36' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='48' y='52' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='56' y='4' fill='currentColor' style='font-size:1em'>4</text>
<text text-anchor='middle' x='56' y='20' fill='currentColor' style='font-size:1em'>4</text>
<text text-anchor='middle' x='56' y='36' fill='currentColor' style='font-size:1em'>4</text>
<text text-anchor='middle' x='56' y='52' fill='currentColor' style='font-size:1em'>4</text>
<text text-anchor='middle' x='64' y='4' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='64' y='20' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='64' y='36' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='64' y='52' fill='currentColor' style='font-size:1em'>-</text>
<text text-anchor='middle' x='72' y='4' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='72' y='20' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='72' y='36' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='72' y='52' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='80' y='4' fill='currentColor' style='font-size:1em'>9</text>
<text text-anchor='middle' x='80' y='20' fill='currentColor' style='font-size:1em'>9</text>
<text text-anchor='middle' x='80' y='36' fill='currentColor' style='font-size:1em'>9</text>
<text text-anchor='middle' x='80' y='52' fill='currentColor' style='font-size:1em'>9</text>
<text text-anchor='middle' x='96' y='4' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='96' y='20' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='96' y='36' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='96' y='52' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='104' y='4' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='104' y='20' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='104' y='36' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='104' y='52' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='112' y='4' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='112' y='20' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='112' y='36' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='112' y='52' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='120' y='4' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='120' y='20' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='120' y='36' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='120' y='52' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='128' y='4' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='128' y='20' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='128' y='36' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='128' y='52' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='136' y='4' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='136' y='20' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='136' y='36' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='136' y='52' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='144' y='4' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='144' y='20' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='144' y='36' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='144' y='52' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='152' y='4' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='152' y='20' fill='currentColor' style='font-size:1em'>0</text>
<text text-anchor='middle' x='152' y='36' fill='currentColor' style='font-size:1em'>1</text>
<text text-anchor='middle' x='152' y='52' fill='currentColor' style='font-size:1em'>3</text>
<text text-anchor='middle' x='160' y='4' fill='currentColor' style='font-size:1em'>]</text>
<text text-anchor='middle' x='160' y='20' fill='currentColor' style='font-size:1em'>]</text>
<text text-anchor='middle' x='160' y='36' fill='currentColor' style='font-size:1em'>]</text>
<text text-anchor='middle' x='160' y='52' fill='currentColor' style='font-size:1em'>]</text>
<text text-anchor='middle' x='168' y='4' fill='currentColor' style='font-size:1em'>[</text>
<text text-anchor='middle' x='168' y='20' fill='currentColor' style='font-size:1em'>[</text>
<text text-anchor='middle' x='168' y='36' fill='currentColor' style='font-size:1em'>[</text>
<text text-anchor='middle' x='168' y='52' fill='currentColor' style='font-size:1em'>[</text>
<text text-anchor='middle' x='176' y='4' fill='currentColor' style='font-size:1em'>I</text>
<text text-anchor='middle' x='176' y='20' fill='currentColor' style='font-size:1em'>I</text>
<text text-anchor='middle' x='176' y='36' fill='currentColor' style='font-size:1em'>I</text>
<text text-anchor='middle' x='176' y='52' fill='currentColor' style='font-size:1em'>I</text>
<text text-anchor='middle' x='184' y='4' fill='currentColor' style='font-size:1em'>N</text>
<text text-anchor='middle' x='184' y='20' fill='currentColor' style='font-size:1em'>N</text>
<text text-anchor='middle' x='184' y='36' fill='currentColor' style='font-size:1em'>N</text>
<text text-anchor='middle' x='184' y='52' fill='currentColor' style='font-size:1em'>N</text>
<text text-anchor='middle' x='192' y='4' fill='currentColor' style='font-size:1em'>F</text>
<text text-anchor='middle' x='192' y='20' fill='currentColor' style='font-size:1em'>F</text>
<text text-anchor='middle' x='192' y='36' fill='currentColor' style='font-size:1em'>F</text>
<text text-anchor='middle' x='192' y='52' fill='currentColor' style='font-size:1em'>F</text>
<text text-anchor='middle' x='200' y='4' fill='currentColor' style='font-size:1em'>O</text>
<text text-anchor='middle' x='200' y='20' fill='currentColor' style='font-size:1em'>O</text>
<text text-anchor='middle' x='200' y='36' fill='currentColor' style='font-size:1em'>O</text>
<text text-anchor='middle' x='200' y='52' fill='currentColor' style='font-size:1em'>O</text>
<text text-anchor='middle' x='208' y='4' fill='currentColor' style='font-size:1em'>]</text>
<text text-anchor='middle' x='208' y='20' fill='currentColor' style='font-size:1em'>]</text>
<text text-anchor='middle' x='208' y='36' fill='currentColor' style='font-size:1em'>]</text>
<text text-anchor='middle' x='208' y='52' fill='currentColor' style='font-size:1em'>]</text>
<text text-anchor='middle' x='216' y='4' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='216' y='20' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='216' y='36' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='216' y='52' fill='currentColor' style='font-size:1em'>:</text>
<text text-anchor='middle' x='232' y='4' fill='currentColor' style='font-size:1em'>W</text>
<text text-anchor='middle' x='232' y='20' fill='currentColor' style='font-size:1em'>S</text>
<text text-anchor='middle' x='232' y='36' fill='currentColor' style='font-size:1em'>U</text>
<text text-anchor='middle' x='232' y='52' fill='currentColor' style='font-size:1em'>T</text>
<text text-anchor='middle' x='240' y='4' fill='currentColor' style='font-size:1em'>o</text>
<text text-anchor='middle' x='240' y='20' fill='currentColor' style='font-size:1em'>t</text>
<text text-anchor='middle' x='240' y='36' fill='currentColor' style='font-size:1em'>s</text>
<text text-anchor='start' x='244' y='4' fill='currentColor' style='font-size:1em'>rking Agent: Senior Research Analyst</text>
<text text-anchor='start' x='244' y='20' fill='currentColor' style='font-size:1em'>arting Task: Research the following topic...</text>
<text text-anchor='start' x='244' y='36' fill='currentColor' style='font-size:1em'>ing tool: Search the internet</text>
<text text-anchor='start' x='236' y='52' fill='currentColor' style='font-size:1em'>ool result: CrewAI has 49k stars...</text>
</g>

    </svg>
  
</div>
<p>The <code>result</code> object contains <code>.raw</code> (string output), <code>.pydantic</code> (structured output if configured), and <code>.token_usage</code> (cost tracking).</p>
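<p>A defensive access pattern is useful because <code>.pydantic</code> is populated only when structured output is configured. The sketch below is stdlib-only and uses a stand-in <code>TaskResult</code> dataclass that mirrors the attribute names above — it is not the CrewAI class, just an illustration of the access pattern.</p>

```python
# Stdlib-only sketch: TaskResult is a stand-in mirroring the attribute
# names above (.raw, .pydantic, .token_usage); it is NOT the CrewAI class.
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    raw: str
    pydantic: object = None          # set only if output_pydantic was configured
    token_usage: dict = field(default_factory=dict)

def consume(result):
    # Prefer the validated structured payload; fall back to raw text
    body = result.pydantic if result.pydantic is not None else result.raw
    return body, result.token_usage.get("total_tokens", 0)

body, tokens = consume(TaskResult(raw="report text", token_usage={"total_tokens": 1234}))
print(body, tokens)  # → report text 1234
```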
<h2 id="step-7-how-to-add-tools-for-web-search-scraping-and-file-operations">Step 7: How to Add Tools for Web Search, Scraping, and File Operations</h2>
<p>CrewAI ships with a rich tool library in <code>crewai-tools</code>. Tools are Python classes that agents can invoke during task execution. The most commonly used tools in production are <code>SerperDevTool</code> for web search, <code>ScrapeWebsiteTool</code> for scraping specific pages, <code>FileReadTool</code>/<code>FileWriteTool</code> for file I/O, and <code>PDFSearchTool</code> for document analysis.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai_tools <span style="color:#f92672">import</span> (
</span></span><span style="display:flex;"><span>    SerperDevTool,
</span></span><span style="display:flex;"><span>    ScrapeWebsiteTool,
</span></span><span style="display:flex;"><span>    FileReadTool,
</span></span><span style="display:flex;"><span>    FileWriteTool,
</span></span><span style="display:flex;"><span>    PDFSearchTool,
</span></span><span style="display:flex;"><span>    DirectoryReadTool,
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Assign tools to specific agents in crew.py</span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@agent</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">researcher</span>(self) <span style="color:#f92672">-&gt;</span> Agent:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> Agent(
</span></span><span style="display:flex;"><span>        config<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>agents_config[<span style="color:#e6db74">&#39;researcher&#39;</span>],
</span></span><span style="display:flex;"><span>        tools<span style="color:#f92672">=</span>[
</span></span><span style="display:flex;"><span>            SerperDevTool(),           <span style="color:#75715e"># Web search via Serper API</span>
</span></span><span style="display:flex;"><span>            ScrapeWebsiteTool(),       <span style="color:#75715e"># Scrape any URL</span>
</span></span><span style="display:flex;"><span>            PDFSearchTool(),           <span style="color:#75715e"># Semantic search in PDFs</span>
</span></span><span style="display:flex;"><span>        ]
</span></span><span style="display:flex;"><span>    )
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@agent</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">data_analyst</span>(self) <span style="color:#f92672">-&gt;</span> Agent:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> Agent(
</span></span><span style="display:flex;"><span>        config<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>agents_config[<span style="color:#e6db74">&#39;data_analyst&#39;</span>],
</span></span><span style="display:flex;"><span>        tools<span style="color:#f92672">=</span>[
</span></span><span style="display:flex;"><span>            FileReadTool(file_path<span style="color:#f92672">=</span><span style="color:#e6db74">&#39;./data/metrics.csv&#39;</span>),
</span></span><span style="display:flex;"><span>            DirectoryReadTool(directory<span style="color:#f92672">=</span><span style="color:#e6db74">&#39;./reports/&#39;</span>),
</span></span><span style="display:flex;"><span>        ]
</span></span><span style="display:flex;"><span>    )
</span></span></code></pre></div><p>Built-in tools implement their own error handling and retry logic. If a tool call fails, the agent receives the error message as an observation and can try an alternative approach or report the failure.</p>
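<p>The convention worth internalizing here is that tools return errors as strings rather than raising, so the failure text reaches the LLM as an observation it can react to. A stdlib-only sketch of that convention (function names are illustrative, not CrewAI internals):</p>

```python
# Stdlib-only sketch of the error-as-string convention: retry a tool a few
# times, then surface the failure text to the agent instead of raising.
# Function names are illustrative, not CrewAI internals.
def run_tool(tool, *args, retries=2):
    last_error = ""
    for _ in range(retries + 1):
        try:
            return tool(*args)
        except Exception as e:
            last_error = f"Error: {e}"
    return last_error  # the agent reads this and can pick a fallback tool

def flaky_search(query):
    raise TimeoutError("search backend unreachable")

print(run_tool(flaky_search, "crewai"))  # → Error: search backend unreachable
```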
<h2 id="step-8-how-to-create-custom-tools-for-business-specific-operations">Step 8: How to Create Custom Tools for Business-Specific Operations</h2>
<p>Custom tools in CrewAI are Python classes that inherit from <code>BaseTool</code>. You define the tool&rsquo;s name, description (what agents see when deciding to use it), and <code>_run</code> method (the actual logic). The description is critical — agents use it to decide which tool to use for a given subtask.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># src/my_research_crew/tools/database_tool.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai.tools <span style="color:#f92672">import</span> BaseTool
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> typing <span style="color:#f92672">import</span> Type
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> pydantic <span style="color:#f92672">import</span> BaseModel, Field
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> psycopg2
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">DatabaseQueryInput</span>(BaseModel):
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Input schema for DatabaseQueryTool&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    query: str <span style="color:#f92672">=</span> Field(description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;SQL SELECT query to execute (read-only)&#34;</span>)
</span></span><span style="display:flex;"><span>    database: str <span style="color:#f92672">=</span> Field(description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Database name to query&#34;</span>, default<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;analytics&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">DatabaseQueryTool</span>(BaseTool):
</span></span><span style="display:flex;"><span>    name: str <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;Query Analytics Database&#34;</span>
</span></span><span style="display:flex;"><span>    description: str <span style="color:#f92672">=</span> (
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;Execute read-only SQL queries against the analytics database. &#34;</span>
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;Use this to retrieve user metrics, conversion data, or event counts. &#34;</span>
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#34;Only SELECT statements are permitted.&#34;</span>
</span></span><span style="display:flex;"><span>    )
</span></span><span style="display:flex;"><span>    args_schema: Type[BaseModel] <span style="color:#f92672">=</span> DatabaseQueryInput
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">_run</span>(self, query: str, database: str <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;analytics&#34;</span>) <span style="color:#f92672">-&gt;</span> str:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">if</span> <span style="color:#f92672">not</span> query<span style="color:#f92672">.</span>strip()<span style="color:#f92672">.</span>upper()<span style="color:#f92672">.</span>startswith(<span style="color:#e6db74">&#34;SELECT&#34;</span>):
</span></span><span style="display:flex;"><span>            <span style="color:#66d9ef">return</span> <span style="color:#e6db74">&#34;Error: Only SELECT queries are permitted.&#34;</span>
</span></span><span style="display:flex;"><span>        
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>            conn <span style="color:#f92672">=</span> psycopg2<span style="color:#f92672">.</span>connect(<span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;dbname=</span><span style="color:#e6db74">{</span>database<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>)
</span></span><span style="display:flex;"><span>            cursor <span style="color:#f92672">=</span> conn<span style="color:#f92672">.</span>cursor()
</span></span><span style="display:flex;"><span>            cursor<span style="color:#f92672">.</span>execute(query)
</span></span><span style="display:flex;"><span>            results <span style="color:#f92672">=</span> cursor<span style="color:#f92672">.</span>fetchall()
</span></span><span style="display:flex;"><span>            columns <span style="color:#f92672">=</span> [desc[<span style="color:#ae81ff">0</span>] <span style="color:#66d9ef">for</span> desc <span style="color:#f92672">in</span> cursor<span style="color:#f92672">.</span>description]
</span></span><span style="display:flex;"><span>            conn<span style="color:#f92672">.</span>close()
</span></span><span style="display:flex;"><span>            
</span></span><span style="display:flex;"><span>            <span style="color:#75715e"># Format as markdown table</span>
</span></span><span style="display:flex;"><span>            header <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;| &#34;</span> <span style="color:#f92672">+</span> <span style="color:#e6db74">&#34; | &#34;</span><span style="color:#f92672">.</span>join(columns) <span style="color:#f92672">+</span> <span style="color:#e6db74">&#34; |&#34;</span>
</span></span><span style="display:flex;"><span>            separator <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;| &#34;</span> <span style="color:#f92672">+</span> <span style="color:#e6db74">&#34; | &#34;</span><span style="color:#f92672">.</span>join([<span style="color:#e6db74">&#34;---&#34;</span>] <span style="color:#f92672">*</span> len(columns)) <span style="color:#f92672">+</span> <span style="color:#e6db74">&#34; |&#34;</span>
</span></span><span style="display:flex;"><span>            rows <span style="color:#f92672">=</span> [<span style="color:#e6db74">&#34;| &#34;</span> <span style="color:#f92672">+</span> <span style="color:#e6db74">&#34; | &#34;</span><span style="color:#f92672">.</span>join(str(v) <span style="color:#66d9ef">for</span> v <span style="color:#f92672">in</span> row) <span style="color:#f92672">+</span> <span style="color:#e6db74">&#34; |&#34;</span> <span style="color:#66d9ef">for</span> row <span style="color:#f92672">in</span> results[:<span style="color:#ae81ff">20</span>]]
</span></span><span style="display:flex;"><span>            <span style="color:#66d9ef">return</span> <span style="color:#e6db74">&#34;</span><span style="color:#ae81ff">\n</span><span style="color:#e6db74">&#34;</span><span style="color:#f92672">.</span>join([header, separator] <span style="color:#f92672">+</span> rows)
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">Exception</span> <span style="color:#66d9ef">as</span> e:
</span></span><span style="display:flex;"><span>            <span style="color:#66d9ef">return</span> <span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;Database error: </span><span style="color:#e6db74">{</span>str(e)<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span></code></pre></div><p>Attach the custom tool to an agent the same way as built-in tools: <code>tools=[DatabaseQueryTool()]</code>.</p>
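<p>Because agents choose among tools by reading their descriptions, a vague description directly degrades tool selection. A stdlib-only toy of the kind of matching involved — scoring each description against the subtask by word overlap; the real mechanism is the LLM's judgment, not this heuristic, and the tool entries below are illustrative:</p>

```python
# Toy tool selection by word overlap between the subtask and each tool's
# description. Illustrative only: a real agent's LLM does this by reasoning
# over the descriptions, which is why writing them well matters.
def pick_tool(subtask, tools):
    words = set(subtask.lower().split())
    def score(tool):
        return len(words & set(tool["description"].lower().split()))
    return max(tools, key=score)

tools = [
    {"name": "Query Analytics Database",
     "description": "Execute read-only SQL queries to retrieve user metrics"},
    {"name": "Scrape Website",
     "description": "Download and extract text from a web page URL"},
]
best = pick_tool("retrieve last week's user metrics", tools)
print(best["name"])  # → Query Analytics Database
```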
<h2 id="step-9-how-to-implement-structured-outputs-with-pydantic-models">Step 9: How to Implement Structured Outputs with Pydantic Models</h2>
<p>By default, CrewAI tasks return plain text. For production pipelines where downstream code needs to parse the output, you can enforce structured outputs using Pydantic models. When a task has <code>output_pydantic</code> set, CrewAI instructs the LLM to return JSON matching the schema and validates the result.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># src/my_research_crew/models.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> pydantic <span style="color:#f92672">import</span> BaseModel, Field
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> typing <span style="color:#f92672">import</span> List, Optional
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">ResearchFinding</span>(BaseModel):
</span></span><span style="display:flex;"><span>    category: str <span style="color:#f92672">=</span> Field(description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Category: trend, statistic, tool, limitation&#34;</span>)
</span></span><span style="display:flex;"><span>    content: str <span style="color:#f92672">=</span> Field(description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;The finding itself&#34;</span>)
</span></span><span style="display:flex;"><span>    source: Optional[str] <span style="color:#f92672">=</span> Field(description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;URL or publication name&#34;</span>)
</span></span><span style="display:flex;"><span>    confidence: str <span style="color:#f92672">=</span> Field(description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;high, medium, or low&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">ResearchReport</span>(BaseModel):
</span></span><span style="display:flex;"><span>    topic: str
</span></span><span style="display:flex;"><span>    summary: str <span style="color:#f92672">=</span> Field(description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Executive summary in 100 words&#34;</span>)
</span></span><span style="display:flex;"><span>    findings: List[ResearchFinding]
</span></span><span style="display:flex;"><span>    limitations: List[str] <span style="color:#f92672">=</span> Field(description<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Known gaps or caveats in the research&#34;</span>)
</span></span><span style="display:flex;"><span>    sources: List[str]
</span></span></code></pre></div><p>Apply it to a task in <code>crew.py</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> my_research_crew.models <span style="color:#f92672">import</span> ResearchReport
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@task</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">research_task</span>(self) <span style="color:#f92672">-&gt;</span> Task:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> Task(
</span></span><span style="display:flex;"><span>        config<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>tasks_config[<span style="color:#e6db74">&#39;research_task&#39;</span>],
</span></span><span style="display:flex;"><span>        output_pydantic<span style="color:#f92672">=</span>ResearchReport  <span style="color:#75715e"># Enforce structured output</span>
</span></span><span style="display:flex;"><span>    )
</span></span></code></pre></div><p>Access the result in Python:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span>result <span style="color:#f92672">=</span> MyResearchCrew()<span style="color:#f92672">.</span>crew()<span style="color:#f92672">.</span>kickoff(inputs<span style="color:#f92672">=</span>inputs)
</span></span><span style="display:flex;"><span>report: ResearchReport <span style="color:#f92672">=</span> result<span style="color:#f92672">.</span>pydantic
</span></span><span style="display:flex;"><span>print(<span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;Found </span><span style="color:#e6db74">{</span>len(report<span style="color:#f92672">.</span>findings)<span style="color:#e6db74">}</span><span style="color:#e6db74"> findings&#34;</span>)
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> finding <span style="color:#f92672">in</span> report<span style="color:#f92672">.</span>findings:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> finding<span style="color:#f92672">.</span>confidence <span style="color:#f92672">==</span> <span style="color:#e6db74">&#34;high&#34;</span>:
</span></span><span style="display:flex;"><span>        print(<span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;[</span><span style="color:#e6db74">{</span>finding<span style="color:#f92672">.</span>category<span style="color:#e6db74">}</span><span style="color:#e6db74">] </span><span style="color:#e6db74">{</span>finding<span style="color:#f92672">.</span>content<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>)
</span></span></code></pre></div><h2 id="step-10-how-to-use-crewai-flows-for-complex-multi-stage-orchestration">Step 10: How to Use CrewAI Flows for Complex Multi-Stage Orchestration</h2>
<p>CrewAI Flows extend the framework beyond single crews. A Flow coordinates multiple crews in sequence or parallel, passing state between them and enabling conditional branching. Flows are Python classes that subclass <code>Flow</code>, in which methods decorated with <code>@start</code>, <code>@listen</code>, and <code>@router</code> define the execution graph.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai.flow.flow <span style="color:#f92672">import</span> Flow, listen, start, router
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> pydantic <span style="color:#f92672">import</span> BaseModel
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">ContentPipelineState</span>(BaseModel):
</span></span><span style="display:flex;"><span>    topic: str <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    research: str <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    article: str <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    approved: bool <span style="color:#f92672">=</span> <span style="color:#66d9ef">False</span>
</span></span><span style="display:flex;"><span>    revision_count: int <span style="color:#f92672">=</span> <span style="color:#ae81ff">0</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">ContentPipelineFlow</span>(Flow[ContentPipelineState]):
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@start</span>()
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">gather_topic</span>(self):
</span></span><span style="display:flex;"><span>        self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>topic <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;AI agent frameworks in 2026&#34;</span>
</span></span><span style="display:flex;"><span>        print(<span style="color:#e6db74">f</span><span style="color:#e6db74">&#34;Starting pipeline for: </span><span style="color:#e6db74">{</span>self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>topic<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@listen</span>(gather_topic)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">run_research</span>(self):
</span></span><span style="display:flex;"><span>        result <span style="color:#f92672">=</span> ResearchCrew()<span style="color:#f92672">.</span>crew()<span style="color:#f92672">.</span>kickoff(
</span></span><span style="display:flex;"><span>            inputs<span style="color:#f92672">=</span>{<span style="color:#e6db74">&#34;topic&#34;</span>: self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>topic}
</span></span><span style="display:flex;"><span>        )
</span></span><span style="display:flex;"><span>        self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>research <span style="color:#f92672">=</span> result<span style="color:#f92672">.</span>raw
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@listen</span>(run_research)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">write_article</span>(self):
</span></span><span style="display:flex;"><span>        result <span style="color:#f92672">=</span> WritingCrew()<span style="color:#f92672">.</span>crew()<span style="color:#f92672">.</span>kickoff(
</span></span><span style="display:flex;"><span>            inputs<span style="color:#f92672">=</span>{
</span></span><span style="display:flex;"><span>                <span style="color:#e6db74">&#34;topic&#34;</span>: self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>topic,
</span></span><span style="display:flex;"><span>                <span style="color:#e6db74">&#34;research&#34;</span>: self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>research
</span></span><span style="display:flex;"><span>            }
</span></span><span style="display:flex;"><span>        )
</span></span><span style="display:flex;"><span>        self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>article <span style="color:#f92672">=</span> result<span style="color:#f92672">.</span>raw
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@router</span>(write_article)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">review_article</span>(self):
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># Approve long-enough articles, or stop revising after two attempts</span>
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">if</span> len(self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>article) <span style="color:#f92672">&gt;</span> <span style="color:#ae81ff">2000</span> <span style="color:#f92672">or</span> self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>revision_count <span style="color:#f92672">&gt;=</span> <span style="color:#ae81ff">2</span>:
</span></span><span style="display:flex;"><span>            <span style="color:#66d9ef">return</span> <span style="color:#e6db74">&#34;approved&#34;</span>
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> <span style="color:#e6db74">&#34;needs_revision&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@listen</span>(<span style="color:#e6db74">&#34;approved&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">publish</span>(self):
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">with</span> open(<span style="color:#e6db74">&#34;output/final_article.md&#34;</span>, <span style="color:#e6db74">&#34;w&#34;</span>) <span style="color:#66d9ef">as</span> f:
</span></span><span style="display:flex;"><span>            f<span style="color:#f92672">.</span>write(self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>article)
</span></span><span style="display:flex;"><span>        print(<span style="color:#e6db74">&#34;Article published!&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">@listen</span>(<span style="color:#e6db74">&#34;needs_revision&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">revise</span>(self):
</span></span><span style="display:flex;"><span>        self<span style="color:#f92672">.</span>state<span style="color:#f92672">.</span>revision_count <span style="color:#f92672">+=</span> <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>        self<span style="color:#f92672">.</span>write_article()  <span style="color:#75715e"># Re-run writing with accumulated state</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Run the flow</span>
</span></span><span style="display:flex;"><span>flow <span style="color:#f92672">=</span> ContentPipelineFlow()
</span></span><span style="display:flex;"><span>flow<span style="color:#f92672">.</span>kickoff()
</span></span></code></pre></div><p>The <code>@router</code> decorator enables branching — returning different string values routes execution to different <code>@listen</code> methods.</p>
<h2 id="step-11-how-to-add-memory-and-context-for-agent-persistence">Step 11: How to Add Memory and Context for Agent Persistence</h2>
<p>CrewAI supports three memory layers that persist agent context across task executions. Short-term memory stores recent conversation context within a session. Long-term memory persists key facts and outcomes across sessions using SQLite. Entity memory tracks people, organizations, and concepts mentioned across tasks. All three are enabled with a single flag.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#a6e22e">@crew</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">crew</span>(self) <span style="color:#f92672">-&gt;</span> Crew:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> Crew(
</span></span><span style="display:flex;"><span>        agents<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>agents,
</span></span><span style="display:flex;"><span>        tasks<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>tasks,
</span></span><span style="display:flex;"><span>        process<span style="color:#f92672">=</span>Process<span style="color:#f92672">.</span>sequential,
</span></span><span style="display:flex;"><span>        memory<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>,              <span style="color:#75715e"># Enables all memory layers</span>
</span></span><span style="display:flex;"><span>        verbose<span style="color:#f92672">=</span><span style="color:#66d9ef">True</span>,
</span></span><span style="display:flex;"><span>        <span style="color:#75715e"># Optional: customize memory storage</span>
</span></span><span style="display:flex;"><span>        memory_config<span style="color:#f92672">=</span>{
</span></span><span style="display:flex;"><span>            <span style="color:#e6db74">&#34;provider&#34;</span>: <span style="color:#e6db74">&#34;mem0&#34;</span>,  <span style="color:#75715e"># Default: built-in SQLite</span>
</span></span><span style="display:flex;"><span>        }
</span></span><span style="display:flex;"><span>    )
</span></span></code></pre></div><p>With memory enabled, a researcher agent that found &ldquo;CrewAI has 49k stars&rdquo; in one task will recall that fact in subsequent tasks without re-querying. For multi-session workflows (e.g., a daily report generator), long-term memory ensures agents build on previous runs rather than starting from scratch. The default SQLite storage is sufficient for development; for production, configure an external store like Mem0 or a PostgreSQL-backed solution.</p>
<h2 id="step-12-error-handling-guardrails-and-production-considerations">Step 12: Error Handling, Guardrails, and Production Considerations</h2>
<p>Production CrewAI deployments need guardrails around three failure modes: LLM API errors (rate limits, timeouts), tool failures (network errors, invalid responses), and agent loops (agents that cycle without making progress). CrewAI provides built-in retry logic for API errors, but tool errors and agent loops require explicit handling.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai <span style="color:#f92672">import</span> Task
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> crewai.project <span style="color:#f92672">import</span> task
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> typing <span style="color:#f92672">import</span> Tuple
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">validate_research_output</span>(result) <span style="color:#f92672">-&gt;</span> Tuple[bool, str]:
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">&#34;&#34;&#34;Task guardrail - runs before output is passed to next task&#34;&#34;&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> len(result<span style="color:#f92672">.</span>raw) <span style="color:#f92672">&lt;</span> <span style="color:#ae81ff">500</span>:
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> (<span style="color:#66d9ef">False</span>, <span style="color:#e6db74">&#34;Research output too short — retry with broader search terms&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> <span style="color:#e6db74">&#34;error&#34;</span> <span style="color:#f92672">in</span> result<span style="color:#f92672">.</span>raw<span style="color:#f92672">.</span>lower() <span style="color:#f92672">and</span> <span style="color:#e6db74">&#34;source&#34;</span> <span style="color:#f92672">not</span> <span style="color:#f92672">in</span> result<span style="color:#f92672">.</span>raw<span style="color:#f92672">.</span>lower():
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> (<span style="color:#66d9ef">False</span>, <span style="color:#e6db74">&#34;Output appears to be an error message — retry&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> (<span style="color:#66d9ef">True</span>, <span style="color:#e6db74">&#34;&#34;</span>)  <span style="color:#75715e"># (valid, error_message_if_invalid)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@task</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">def</span> <span style="color:#a6e22e">research_task</span>(self) <span style="color:#f92672">-&gt;</span> Task:
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> Task(
</span></span><span style="display:flex;"><span>        config<span style="color:#f92672">=</span>self<span style="color:#f92672">.</span>tasks_config[<span style="color:#e6db74">&#39;research_task&#39;</span>],
</span></span><span style="display:flex;"><span>        guardrail<span style="color:#f92672">=</span>validate_research_output,  <span style="color:#75715e"># Validates output before proceeding</span>
</span></span><span style="display:flex;"><span>        max_retries<span style="color:#f92672">=</span><span style="color:#ae81ff">3</span>                         <span style="color:#75715e"># Retry up to 3 times if guardrail fails</span>
</span></span><span style="display:flex;"><span>    )
</span></span></code></pre></div><p>Additional production patterns:</p>
<ul>
<li>Set <code>max_iter=10</code> on agents to prevent infinite reasoning loops (default is 25)</li>
<li>Use <code>cache=True</code> on agents to cache identical tool calls within a session</li>
<li>Set task <code>async_execution=True</code> for tasks that can run in parallel</li>
<li>Log <code>result.token_usage</code> to a database to track costs per crew run</li>
</ul>
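<p>Combined on a single agent and task, those flags look like the sketch below. This is an illustrative wiring, not the tutorial&rsquo;s project code; the agent and task definitions are placeholders that show where each setting lives:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python">from crewai import Agent, Crew, Process, Task

analyst = Agent(
    role="Analyst",
    goal="Summarize research findings",
    backstory="A careful, skeptical analyst.",
    max_iter=10,   # cap reasoning iterations (default is 25)
    cache=True,    # reuse identical tool-call results within a session
)
summary_task = Task(
    description="Summarize the findings in three paragraphs.",
    expected_output="A three-paragraph summary.",
    agent=analyst,
    async_execution=True,  # may run in parallel with other independent tasks
)

crew = Crew(agents=[analyst], tasks=[summary_task], process=Process.sequential)
result = crew.kickoff()
print(result.token_usage)  # persist this value to track cost per run
</code></pre></div>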
<h2 id="step-13-how-to-deploy-crewai-to-production-with-fastapi">Step 13: How to Deploy CrewAI to Production with FastAPI</h2>
<p>The simplest production deployment wraps the crew in a FastAPI endpoint. This gives you an HTTP API that downstream services can call, with async support for long-running crew executions.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># api.py</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> fastapi <span style="color:#f92672">import</span> FastAPI, BackgroundTasks
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> pydantic <span style="color:#f92672">import</span> BaseModel
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> asyncio
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> uuid
</span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> my_research_crew.crew <span style="color:#f92672">import</span> MyResearchCrew
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>app <span style="color:#f92672">=</span> FastAPI(title<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;Research Crew API&#34;</span>)
</span></span><span style="display:flex;"><span>jobs <span style="color:#f92672">=</span> {}  <span style="color:#75715e"># In production, use Redis or a database</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">ResearchRequest</span>(BaseModel):
</span></span><span style="display:flex;"><span>    topic: str
</span></span><span style="display:flex;"><span>    
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">class</span> <span style="color:#a6e22e">JobStatus</span>(BaseModel):
</span></span><span style="display:flex;"><span>    job_id: str
</span></span><span style="display:flex;"><span>    status: str
</span></span><span style="display:flex;"><span>    result: str <span style="color:#f92672">|</span> <span style="color:#66d9ef">None</span> <span style="color:#f92672">=</span> <span style="color:#66d9ef">None</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@app.post</span>(<span style="color:#e6db74">&#34;/research&#34;</span>, response_model<span style="color:#f92672">=</span>JobStatus)
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">async</span> <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">start_research</span>(request: ResearchRequest, background_tasks: BackgroundTasks):
</span></span><span style="display:flex;"><span>    job_id <span style="color:#f92672">=</span> str(uuid<span style="color:#f92672">.</span>uuid4())
</span></span><span style="display:flex;"><span>    jobs[job_id] <span style="color:#f92672">=</span> {<span style="color:#e6db74">&#34;status&#34;</span>: <span style="color:#e6db74">&#34;running&#34;</span>, <span style="color:#e6db74">&#34;result&#34;</span>: <span style="color:#66d9ef">None</span>}
</span></span><span style="display:flex;"><span>    background_tasks<span style="color:#f92672">.</span>add_task(run_crew, job_id, request<span style="color:#f92672">.</span>topic)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> JobStatus(job_id<span style="color:#f92672">=</span>job_id, status<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;running&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">async</span> <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">run_crew</span>(job_id: str, topic: str):
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>        result <span style="color:#f92672">=</span> <span style="color:#66d9ef">await</span> asyncio<span style="color:#f92672">.</span>to_thread(
</span></span><span style="display:flex;"><span>            MyResearchCrew()<span style="color:#f92672">.</span>crew()<span style="color:#f92672">.</span>kickoff,
</span></span><span style="display:flex;"><span>            inputs<span style="color:#f92672">=</span>{<span style="color:#e6db74">&#34;topic&#34;</span>: topic}
</span></span><span style="display:flex;"><span>        )
</span></span><span style="display:flex;"><span>        jobs[job_id] <span style="color:#f92672">=</span> {<span style="color:#e6db74">&#34;status&#34;</span>: <span style="color:#e6db74">&#34;complete&#34;</span>, <span style="color:#e6db74">&#34;result&#34;</span>: result<span style="color:#f92672">.</span>raw}
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">Exception</span> <span style="color:#66d9ef">as</span> e:
</span></span><span style="display:flex;"><span>        jobs[job_id] <span style="color:#f92672">=</span> {<span style="color:#e6db74">&#34;status&#34;</span>: <span style="color:#e6db74">&#34;failed&#34;</span>, <span style="color:#e6db74">&#34;result&#34;</span>: str(e)}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">@app.get</span>(<span style="color:#e6db74">&#34;/research/</span><span style="color:#e6db74">{job_id}</span><span style="color:#e6db74">&#34;</span>, response_model<span style="color:#f92672">=</span>JobStatus)
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">async</span> <span style="color:#66d9ef">def</span> <span style="color:#a6e22e">get_status</span>(job_id: str):
</span></span><span style="display:flex;"><span>    job <span style="color:#f92672">=</span> jobs<span style="color:#f92672">.</span>get(job_id, {<span style="color:#e6db74">&#34;status&#34;</span>: <span style="color:#e6db74">&#34;not_found&#34;</span>, <span style="color:#e6db74">&#34;result&#34;</span>: <span style="color:#66d9ef">None</span>})
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">return</span> JobStatus(job_id<span style="color:#f92672">=</span>job_id, <span style="color:#f92672">**</span>job)
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># Deploy with Docker</span>
</span></span><span style="display:flex;"><span>docker build -t research-crew .
</span></span><span style="display:flex;"><span>docker run -p 8000:8000 --env-file .env research-crew
</span></span></code></pre></div><p>Use <code>asyncio.to_thread</code> to run the synchronous <code>kickoff()</code> call without blocking the FastAPI event loop.</p>
<h2 id="crewai-vs-langgraph-vs-autogen-which-framework-should-you-use">CrewAI vs LangGraph vs AutoGen: Which Framework Should You Use?</h2>
<p>CrewAI, LangGraph, and AutoGen are the three dominant multi-agent frameworks in 2026, and they serve different use cases. The right choice depends on whether you prioritize simplicity, control, or conversation patterns.</p>
<table>
  <thead>
      <tr>
          <th>Feature</th>
          <th>CrewAI</th>
          <th>LangGraph</th>
          <th>AutoGen</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Paradigm</td>
          <td>Role-based teams</td>
          <td>Stateful graphs</td>
          <td>Conversation agents</td>
      </tr>
      <tr>
          <td>Config style</td>
          <td>YAML + decorators</td>
          <td>Python code graph</td>
          <td>Python class hierarchy</td>
      </tr>
      <tr>
          <td>Learning curve</td>
          <td>Low</td>
          <td>High</td>
          <td>Medium</td>
      </tr>
      <tr>
          <td>Flexibility</td>
          <td>Medium</td>
          <td>High</td>
          <td>High</td>
      </tr>
      <tr>
          <td>Built-in tools</td>
          <td>Yes (crewai-tools)</td>
          <td>Via LangChain</td>
          <td>Via AutoGen tools</td>
      </tr>
      <tr>
          <td>Memory</td>
          <td>Built-in (3 layers)</td>
          <td>Manual via state</td>
          <td>Built-in (basic)</td>
      </tr>
      <tr>
          <td>Flows/orchestration</td>
          <td>Yes (Flows)</td>
          <td>Native (it&rsquo;s a graph)</td>
          <td>Nested chats</td>
      </tr>
      <tr>
          <td>Best for</td>
          <td>Structured workflows</td>
          <td>Complex logic trees</td>
          <td>R&amp;D, conversational</td>
      </tr>
      <tr>
          <td>GitHub stars (Apr 2026)</td>
          <td>49k</td>
          <td>11k</td>
          <td>38k</td>
      </tr>
  </tbody>
</table>
<p><strong>Choose CrewAI</strong> when you have a well-defined workflow where different roles map to different agents — content pipelines, research automation, data analysis workflows. The YAML configuration and role-playing metaphor make it the easiest to onboard a team to.</p>
<p><strong>Choose LangGraph</strong> when your workflow has complex conditional logic, cycles, or requires fine-grained control over state transitions. LangGraph is pure Python graph definition — no magic, no YAML — which makes debugging easier in complex scenarios.</p>
<p><strong>Choose AutoGen</strong> when your use case involves ongoing conversations between agents (like a code review loop between a coder and reviewer) or when you&rsquo;re doing R&amp;D that requires frequent configuration changes.</p>
<h2 id="troubleshooting-common-crewai-issues">Troubleshooting Common CrewAI Issues</h2>
<p><strong>Agent produces empty or very short output:</strong> Increase the <code>max_iter</code> parameter and add explicit length requirements to the task&rsquo;s <code>expected_output</code> field. Agents sometimes converge prematurely on simple answers.</p>
<p><strong>Tool not being used:</strong> Check the tool description — the LLM decides which tool to use based on the description text alone. If the description doesn&rsquo;t match the task wording, the agent won&rsquo;t select it. Make descriptions specific.</p>
<p><strong>High token costs:</strong> Enable agent-level caching (<code>cache=True</code>) to avoid repeat tool calls. Use a cheaper model for early research steps and a higher-quality model only for final synthesis.</p>
<p><strong>Rate limit errors:</strong> Wrap your <code>kickoff()</code> call in a retry loop with exponential backoff, or configure <code>max_rpm</code> on the Crew object to throttle requests.</p>
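A retry wrapper for this needs nothing CrewAI-specific. A plain-Python sketch with exponential backoff and jitter (the commented <code>crew.kickoff</code> usage at the bottom is illustrative):
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python">import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, retryable=(Exception,)):
    # Call fn(); on a retryable error, wait base_delay * 2**attempt plus jitter,
    # then try again, re-raising once max_attempts is exhausted.
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt + 1 == max_attempts:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Illustrative usage around a crew run:
# result = retry_with_backoff(lambda: crew.kickoff(inputs={"topic": topic}))
</code></pre></div>
<p>Which exceptions are worth retrying varies by provider; in production, narrow <code>retryable</code> to your SDK&rsquo;s rate-limit error class rather than catching every <code>Exception</code>.</p>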
<p><strong>&ldquo;Agent stopped due to iteration limit&rdquo;:</strong> The agent hit <code>max_iter</code> without completing the task. Increase <code>max_iter</code>, simplify the task description, or break one large task into two smaller tasks.</p>
<h2 id="advanced-tips-for-production-crewai-systems">Advanced Tips for Production CrewAI Systems</h2>
<p>The following patterns separate proof-of-concept crews from production-grade systems:</p>
<p><strong>Use hierarchical process for parallel tasks.</strong> When independent tasks can run simultaneously, switch from <code>Process.sequential</code> to <code>Process.hierarchical</code> and add a manager agent. CrewAI delegates tasks in parallel automatically.</p>
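The switch itself is two constructor arguments. A minimal sketch, assuming the same decorator-based crew class used earlier (the <code>manager_llm</code> model choice is an example, not a requirement):
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python">@crew
def crew(self) -> Crew:
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        process=Process.hierarchical,  # a manager agent delegates and coordinates
        manager_llm="gpt-4o",          # the manager needs its own capable model
        verbose=True,
    )
</code></pre></div>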
<p><strong>Version your YAML configs.</strong> Treat <code>agents.yaml</code> and <code>tasks.yaml</code> like code — commit them to git, review changes in PRs. A one-word change in an agent backstory can significantly alter output quality.</p>
<p><strong>Build evaluation harnesses.</strong> Create a small test dataset of <code>(input, expected_output_characteristics)</code> pairs and run your crew against it after every config change. CrewAI has no built-in eval tooling, so this is manual — but essential for production.</p>
<p><strong>Use structured outputs everywhere.</strong> Even if you don&rsquo;t need machine-readable output, Pydantic models act as self-documenting contracts and catch model hallucinations early (e.g., an agent returning a list when your code expects a string).</p>
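A small example of that contract in action; the model and field names here are illustrative, and the trailing comment shows how a CrewAI task accepts such a model via its <code>output_pydantic</code> parameter:
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python">from pydantic import BaseModel, ValidationError

class ArticleSummary(BaseModel):
    title: str
    key_points: list[str]  # a single hallucinated string here fails validation
    word_count: int

# A well-formed agent output parses cleanly
ok = ArticleSummary(title="AI agents", key_points=["stars", "downloads"], word_count=1200)
print(ok.word_count)

# A type mismatch raises immediately instead of corrupting downstream tasks
try:
    ArticleSummary(title="AI agents", key_points="just one string", word_count="n/a")
except ValidationError as err:
    print(f"caught bad agent output: {len(err.errors())} invalid fields")

# In a CrewAI task definition (illustrative):
# Task(config=..., output_pydantic=ArticleSummary)
</code></pre></div>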
<p><strong>Instrument with LangSmith or Langfuse.</strong> Add <code>LANGCHAIN_TRACING_V2=true</code> and <code>LANGCHAIN_API_KEY</code> to your <code>.env</code> to get full trace visibility into every agent decision and tool call. This is invaluable for debugging production issues.</p>
<h2 id="faq-common-questions-about-building-with-crewai">FAQ: Common Questions About Building with CrewAI</h2>
<p>The following questions cover the most frequent issues developers encounter when learning CrewAI — from environment setup and cost management to async deployment and local model support. Each answer is self-contained so you can jump directly to the question relevant to your situation without reading the full tutorial. If your question isn&rsquo;t covered here, the official documentation at docs.crewai.com has comprehensive API references, and the GitHub Discussions tab has an active community answering framework-specific questions. CrewAI releases updates frequently (version 1.14.2 shipped in April 2026), so always check the changelog when upgrading to understand breaking changes in agent configuration or task output handling. Common stumbling points for new users include missing environment variables, agent outputs that are too short due to vague <code>expected_output</code> definitions, and rate limit errors when running multiple tool-heavy agents simultaneously. The answers below address each of these scenarios with concrete fixes you can apply immediately.</p>
<h3 id="what-is-crewai-and-how-does-it-differ-from-langchain">What is CrewAI and how does it differ from LangChain?</h3>
<p>CrewAI is a multi-agent orchestration framework where agents have defined roles, goals, and backstories and collaborate to complete tasks. It is built from scratch — not on top of LangChain — making it lighter and simpler to configure. LangChain is a lower-level toolkit for building LLM applications; CrewAI operates at a higher abstraction level specifically optimized for agent coordination.</p>
<h3 id="what-python-version-does-crewai-require">What Python version does CrewAI require?</h3>
<p>CrewAI requires Python 3.10 or higher. The recommended versions are Python 3.12 or 3.13 for best compatibility with the latest package dependencies. Python 3.9 and below are not supported.</p>
<h3 id="can-i-use-crewai-with-local-llms-like-ollama">Can I use CrewAI with local LLMs like Ollama?</h3>
<p>Yes. Set <code>OPENAI_API_BASE=http://localhost:11434/v1</code> in your <code>.env</code> file and use model names prefixed with <code>ollama/</code> (e.g., <code>ollama/llama3.2</code>). CrewAI uses LiteLLM internally, so any provider supported by LiteLLM works. For local models, expect slower execution and potentially lower-quality reasoning compared to GPT-4o or Claude Sonnet.</p>
<h3 id="how-much-does-running-a-crewai-system-cost">How much does running a CrewAI system cost?</h3>
<p>Cost depends on the number of agents, task complexity, and chosen LLM. A simple two-agent research-and-write crew using GPT-4o typically costs $0.05–$0.20 per run. Using Claude Haiku for research and Claude Sonnet only for final writing can reduce costs by 60–70%. Enable agent caching to avoid paying for repeated tool call summaries within a session.</p>
<h3 id="does-crewai-support-async-execution">Does CrewAI support async execution?</h3>
<p>CrewAI&rsquo;s <code>kickoff()</code> method is synchronous. For async usage, wrap it in <code>asyncio.to_thread()</code> as shown in the FastAPI deployment step. CrewAI does support <code>async_execution=True</code> on individual tasks, which enables tasks without dependencies to run in parallel within a sequential process — but the overall <code>kickoff()</code> call still blocks until all tasks complete.</p>
]]></content:encoded></item><item><title>LangGraph vs CrewAI vs AutoGen 2026: Which AI Agent Framework Should You Use?</title><link>https://baeseokjae.github.io/posts/langgraph-vs-crewai-vs-autogen-2026/</link><pubDate>Sun, 19 Apr 2026 02:37:14 +0000</pubDate><guid>https://baeseokjae.github.io/posts/langgraph-vs-crewai-vs-autogen-2026/</guid><description>LangGraph, CrewAI, AutoGen: a 2026 comparison of AI agent frameworks, with data-driven guidance on the best choice for each project type.</description><content:encoded><![CDATA[<p>In 2026, choosing an AI agent framework is one of the most consequential architectural decisions you can make. LangGraph dominates stateful production systems; CrewAI ships faster for role-based business workflows; and AutoGen — effectively deprecated by Microsoft — has fractured into AG2 and the new Microsoft Agent Framework, leaving developers to pick up the pieces.</p>
<h2 id="tldr--which-framework-should-you-use-in-2026">TL;DR — Which Framework Should You Use in 2026?</h2>
<p>The right AI agent framework in 2026 depends on one question: how much control do you actually need? LangGraph is best for production-grade stateful pipelines where precision matters — think fraud detection, multi-step legal workflows, or retrieval systems that need time-travel debugging. It has 29,500 GitHub stars and is trusted in production by Klarna, Replit, and Elastic. CrewAI is the fastest path from idea to working prototype: its role-based model (a &ldquo;Researcher Agent,&rdquo; a &ldquo;Writer Agent,&rdquo; a &ldquo;QA Agent&rdquo;) maps naturally to how non-ML engineers think about business processes, and it enables 40% faster time-to-production vs LangGraph for standard workflows. AutoGen is in a genuinely confusing state: Microsoft placed it in maintenance mode in October 2025 and merged its future into the Microsoft Agent Framework (public preview Oct 2025, GA Q1 2026), while the original creators forked it into AG2 (November 2024). If you&rsquo;re starting a new project today, avoid vanilla AutoGen. Pick LangGraph for control, CrewAI for speed, or AG2 if you&rsquo;re already invested in the AutoGen ecosystem.</p>
<table>
  <thead>
      <tr>
          <th>Framework</th>
          <th>Best For</th>
          <th>Learning Curve</th>
          <th>Monthly Downloads</th>
          <th>Stars (Jan 2026)</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>LangGraph</td>
          <td>Stateful production pipelines</td>
          <td>Steep</td>
          <td>—</td>
          <td>29,500</td>
      </tr>
      <tr>
          <td>CrewAI</td>
          <td>Business workflow automation</td>
          <td>Easy</td>
          <td>~1.3M PyPI installs</td>
          <td>15,200</td>
      </tr>
      <tr>
          <td>AutoGen / AG2</td>
          <td>Conversational multi-agent</td>
          <td>Medium</td>
          <td>~450K (AutoGen) / ~100K (AG2)</td>
          <td>28,400</td>
      </tr>
  </tbody>
</table>
<h2 id="the-state-of-ai-agent-frameworks-in-2026-what-changed">The State of AI Agent Frameworks in 2026 (What Changed)</h2>
<p>The AI agent framework landscape shifted dramatically between mid-2025 and early 2026, making any comparison article from 2024 effectively obsolete. Three major events reshaped the field: Microsoft deprecated AutoGen and launched the Microsoft Agent Framework as its successor (October 2025); AutoGen&rsquo;s original founding team left Microsoft and forked the project as AG2 (November 2024), creating a community-maintained alternative with its own roadmap; and the rise of the Model Context Protocol (MCP) as a standardized tool interface changed how frameworks integrate with external systems. CrewAI emerged as the PyPI download leader with approximately 1.3 million monthly installs — vastly outpacing AG2&rsquo;s 100,000 — despite having fewer GitHub stars (15,200) than AutoGen (28,400) or LangGraph (29,500). The key takeaway: raw star counts no longer predict adoption. Download volume, active community maintenance, and MCP compatibility are now the metrics that matter most when evaluating a framework for production use in 2026.</p>
<h3 id="why-the-autogen-situation-matters">Why the AutoGen Situation Matters</h3>
<p>AutoGen&rsquo;s deprecation is the most disruptive event of the past 18 months. Microsoft&rsquo;s decision to merge AutoGen into the broader Microsoft Agent Framework — which bundles Semantic Kernel, Copilot Studio connectors, and Azure AI integrations — leaves existing AutoGen users with three choices: migrate to AG2 (the community fork, backward-compatible with most existing code), adopt the Microsoft Agent Framework (more locked-in, better Azure integration), or switch frameworks entirely. For teams without Azure dependencies, AG2 is the safer migration path. For teams already running on Azure AI services, the Microsoft Agent Framework offers tighter toolchain integration that may justify the migration cost.</p>
<h2 id="langgraph--graph-based-orchestration-for-production">LangGraph — Graph-Based Orchestration for Production</h2>
<p>LangGraph is the most mature AI agent framework for stateful, complex workflows in 2026, built on top of LangChain and designed around a directed graph model where each node is an agent or tool and edges represent conditional transitions between them; unlike a strict DAG, the graph can contain cycles, which is what lets agents loop, retry, and iterate on intermediate results. This graph-first architecture gives you precise control over execution order, branching logic, and state management — capabilities that role-based frameworks like CrewAI deliberately abstract away. In production, Klarna uses LangGraph for customer service automation across millions of interactions; Replit integrates it for AI coding assistant workflows; Elastic runs it for security analytics pipelines. The framework&rsquo;s signature features — time-travel debugging (replay any prior state), checkpointing (persist and resume long-running workflows), and built-in human-in-the-loop support — address real production pain points that simpler frameworks ignore. LangGraph has 29,500 GitHub stars and is the default choice for engineering teams that need auditability, fault tolerance, and deterministic behavior at scale.</p>
<h3 id="what-langgraph-does-well">What LangGraph Does Well</h3>
<p>LangGraph excels at three specific scenarios: workflows with complex branching logic that changes based on intermediate results, systems requiring state persistence across sessions or across human review checkpoints, and pipelines where you need to debug and replay specific execution paths. Its conditional edge system lets you write logic like &ldquo;if the research agent returns low-confidence results, route to a secondary verification agent before proceeding&rdquo; — the kind of nuanced control that CrewAI&rsquo;s role-based model can&rsquo;t express cleanly. The tradeoff is real: LangGraph has the steepest learning curve of the three frameworks. Expect 2–3x more boilerplate compared to CrewAI for equivalent workflows, and plan for a longer onboarding ramp for engineers who aren&rsquo;t already comfortable with graph abstractions and the LangChain ecosystem.</p>
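<p>The confidence-based routing described above can be sketched without any dependencies. This is a toy illustration of the conditional-edge pattern, not LangGraph&rsquo;s actual API (real code would use <code>StateGraph</code> and <code>add_conditional_edges</code>); the node names, the canned agent outputs, and the 0.7 threshold are all illustrative:</p>

```python
# Dependency-free sketch of LangGraph-style conditional routing.
# Node names, canned outputs, and the confidence threshold are illustrative.

def research_node(state):
    # Stand-in for an LLM research agent; returns findings plus a confidence score.
    state["findings"] = f"results for: {state['query']}"
    state["confidence"] = 0.4 if "obscure" in state["query"] else 0.9
    return state

def verify_node(state):
    # Secondary verification agent, reached only on low-confidence results.
    state["findings"] += " (verified)"
    return state

def route_after_research(state):
    # Conditional edge: pick the next node based on an intermediate result.
    return "verify" if state["confidence"] < 0.7 else "END"

NODES = {"research": research_node, "verify": verify_node}
EDGES = {"research": route_after_research, "verify": lambda s: "END"}

def run(state, entry="research"):
    node = entry
    while node != "END":
        state = NODES[node](state)
        node = EDGES[node](state)
    return state

# Low-confidence query gets routed through the verification node.
print(run({"query": "an obscure protocol"})["findings"])
```

<p>The point of the sketch is the explicit router function: the transition is ordinary code operating on state, which is what makes this style auditable and replayable in a way role-based delegation is not.</p>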
<h3 id="langgraphs-limitations">LangGraph&rsquo;s Limitations</h3>
<p>LangGraph is not the right tool when speed of iteration is more important than control. The graph abstraction that makes production deployments so reliable also makes rapid prototyping slower. For standard business automation tasks — report generation, content pipelines, data enrichment — you&rsquo;ll write significantly more code to achieve the same result as CrewAI. LangGraph also inherits LangChain&rsquo;s complexity, which can feel like fighting the framework when your use case is straightforward. If you don&rsquo;t need time-travel debugging or fine-grained state control, you&rsquo;re paying an abstraction tax you don&rsquo;t need to pay.</p>
<h2 id="crewai--role-based-simplicity-for-fast-delivery">CrewAI — Role-Based Simplicity for Fast Delivery</h2>
<p>CrewAI is an AI agent framework that organizes agents around roles, goals, and tasks — deliberately modeled on how human teams work rather than how distributed systems engineers think. You define a &ldquo;crew&rdquo; of agents (each with a role like &ldquo;Senior Researcher,&rdquo; &ldquo;Data Analyst,&rdquo; or &ldquo;Content Writer&rdquo;), assign them tasks, and let the framework manage coordination. This role-based model is CrewAI&rsquo;s core insight: non-ML engineers immediately understand what a &ldquo;Researcher Agent&rdquo; does, which dramatically reduces the barrier to building useful multi-agent systems. In 2026, CrewAI leads on adoption metrics that actually matter — approximately 1.3 million monthly PyPI installs, compared to AG2&rsquo;s 100,000 — and benchmarks show it is 48% faster and uses 34% fewer tokens than AutoGen on structured tasks. For teams optimizing for time-to-delivery, CrewAI enables 40% faster time-to-production versus LangGraph for standard business workflows.</p>
<h3 id="where-crewai-wins">Where CrewAI Wins</h3>
<p>CrewAI&rsquo;s sweet spot is any workflow that maps naturally to &ldquo;assign this job to this type of expert.&rdquo; Content generation pipelines (research → draft → review → edit), competitive analysis workflows, report automation, customer support triage, and sales intelligence gathering all fit the role-based model well. The framework&rsquo;s Flow feature (introduced in 2025) added structured, event-driven orchestration for more complex scenarios — narrowing the gap with LangGraph for certain use cases. CrewAI also integrates directly with MCP, meaning you get access to a standardized ecosystem of tools without custom connector code. For teams that want to ship something working in a day rather than a week, CrewAI is the default answer.</p>
<h3 id="where-crewai-falls-short">Where CrewAI Falls Short</h3>
<p>CrewAI abstracts away control — and sometimes you need that control back. When workflows have complex conditional logic that depends on intermediate outputs, when you need deterministic state management across long-running operations, or when auditability requires replaying specific execution paths, CrewAI&rsquo;s role-based model becomes a constraint rather than a convenience. You can work around many limitations with CrewAI&rsquo;s Flow system, but at some point you&rsquo;re fighting the framework to get LangGraph-style behavior out of it. The other limitation is cost predictability: while CrewAI is more token-efficient than AutoGen, complex crews can still accumulate significant LLM call costs in production if tasks aren&rsquo;t scoped carefully.</p>
<h2 id="autogen--ag2--conversational-agents-and-the-microsoft-split">AutoGen / AG2 — Conversational Agents and the Microsoft Split</h2>
<p>AutoGen is an AI agent framework originally developed by Microsoft Research that pioneered conversational multi-agent systems — where agents coordinate by passing messages to each other in natural language, rather than through explicit state graphs or role assignments. The framework&rsquo;s conversational architecture made it easy to build systems where agents debate, critique each other&rsquo;s outputs, and iteratively refine results. AutoGen has approximately 28,400 GitHub stars as of January 2026 and averaged 450,000 downloads per month in late 2025. However, the framework is now functionally deprecated: Microsoft moved AutoGen into maintenance mode in October 2025 and redirected resources toward the Microsoft Agent Framework, a broader platform that integrates AutoGen&rsquo;s ideas with Semantic Kernel and Azure AI services. The original AutoGen research team left Microsoft and forked the project as AG2 in November 2024, creating a backward-compatible community alternative with active development. For new projects in 2026, the choice between these AutoGen successors matters more than the original framework.</p>
<h3 id="ag2-vs-microsoft-agent-framework">AG2 vs Microsoft Agent Framework</h3>
<p>AG2 is the continuity choice for existing AutoGen users: it maintains backward compatibility with most AutoGen code, has an active open-source community, and continues the original research direction around conversational multi-agent systems. AG2 downloads (~100,000/month) are significantly lower than AutoGen&rsquo;s historical peak, reflecting the fragmentation of the community post-fork. The Microsoft Agent Framework (MAF) is the enterprise choice for teams already invested in Azure: it integrates with Azure AI Foundry, Copilot Studio, and the Microsoft 365 ecosystem, offering managed infrastructure for deploying agent workflows at scale. If your organization runs on Azure, MAF&rsquo;s toolchain integration may justify the migration cost. If you&rsquo;re cloud-agnostic or open-source-first, AG2 is the safer path. One concrete warning: AutoGen&rsquo;s conversational architecture uses 20+ LLM calls per task by design, making it significantly more expensive at scale than LangGraph or CrewAI for equivalent workflows. This cost profile is inherited by both AG2 and MAF, and is a real consideration before committing to the AutoGen approach.</p>
<h2 id="head-to-head-comparison-performance-cost-and-developer-experience">Head-to-Head Comparison: Performance, Cost, and Developer Experience</h2>
<p>A direct comparison across LangGraph, CrewAI, and AutoGen/AG2 reveals distinct trade-offs across every dimension that matters in production environments. On raw performance, CrewAI runs 48% faster than AutoGen on structured tasks and uses 34% fewer tokens — a meaningful cost difference at scale. LangGraph sits between them: more overhead than CrewAI due to graph state management, but significantly more token-efficient than AutoGen&rsquo;s conversational loop (which averages 20+ LLM calls per task). On developer experience, CrewAI wins on time-to-first-working-agent, LangGraph wins on debuggability and long-term maintainability, and AutoGen/AG2 wins for teams that want agents to reason in natural language without writing explicit coordination logic. On MCP integration (the standardized tool protocol that&rsquo;s increasingly table-stakes in 2026), all three frameworks have added support, but CrewAI&rsquo;s integration is most mature and production-tested.</p>
<table>
  <thead>
      <tr>
          <th>Dimension</th>
          <th>LangGraph</th>
          <th>CrewAI</th>
          <th>AutoGen / AG2</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Time to prototype</td>
          <td>Slow (steep curve)</td>
          <td>Fast</td>
          <td>Medium</td>
      </tr>
      <tr>
          <td>Token efficiency</td>
          <td>High</td>
          <td>High</td>
          <td>Low (20+ calls/task)</td>
      </tr>
      <tr>
          <td>Production reliability</td>
          <td>Highest</td>
          <td>High</td>
          <td>Medium</td>
      </tr>
      <tr>
          <td>Debugging tools</td>
          <td>Best (time-travel)</td>
          <td>Basic</td>
          <td>Limited</td>
      </tr>
      <tr>
          <td>MCP support</td>
          <td>Yes</td>
          <td>Yes (mature)</td>
          <td>Yes</td>
      </tr>
      <tr>
          <td>LLM provider flexibility</td>
          <td>Any</td>
          <td>Any</td>
          <td>Any</td>
      </tr>
      <tr>
          <td>Azure integration</td>
          <td>None built-in</td>
          <td>None built-in</td>
          <td>Deep (MAF)</td>
      </tr>
      <tr>
          <td>Learning curve</td>
          <td>Steepest</td>
          <td>Easiest</td>
          <td>Medium</td>
      </tr>
      <tr>
          <td>Active maintenance</td>
          <td>Active</td>
          <td>Active</td>
          <td>AG2 active; AutoGen maintenance-only</td>
      </tr>
  </tbody>
</table>
<h3 id="cost-reality-at-scale">Cost Reality at Scale</h3>
<p>The cost difference between frameworks is significant and often underestimated during prototyping. AutoGen&rsquo;s conversational architecture, where agents discuss and debate results across 20+ LLM calls per task, can cost 5–10x more per workflow than a well-designed LangGraph or CrewAI pipeline accomplishing the same outcome. At 10,000 workflow executions per month, this difference is the gap between a $500 LLM bill and a $5,000 LLM bill. CrewAI&rsquo;s token efficiency advantage (34% fewer tokens than AutoGen on structured tasks) compounds at scale. LangGraph&rsquo;s overhead is primarily in graph state management, not in redundant LLM calls — making it cost-competitive with CrewAI for most workloads.</p>
<h2 id="use-case-decision-guide--which-framework-fits-your-project">Use Case Decision Guide — Which Framework Fits Your Project?</h2>
<p>Selecting the right AI agent framework in 2026 requires matching the framework&rsquo;s core abstraction to your project&rsquo;s core constraint. If your primary constraint is speed of delivery — getting a working system in front of stakeholders in days, not weeks — CrewAI is your answer. Its role-based model, large community of pre-built agents, and extensive documentation make it the fastest path to a functional multi-agent system for business process automation, content pipelines, and data enrichment workflows. If your primary constraint is reliability and control — you&rsquo;re building something that runs in production at scale, needs auditability, or handles errors that require human review — LangGraph is your answer. Its graph-based state management, time-travel debugging, and explicit control flow are precisely what production systems need. If you&rsquo;re already invested in AutoGen and need to migrate, AG2 is the continuity path; if you&rsquo;re on Azure and want managed infrastructure, the Microsoft Agent Framework is the enterprise path.</p>
<h3 id="decision-tree-by-use-case">Decision Tree by Use Case</h3>
<p><strong>Use LangGraph when:</strong></p>
<ul>
<li>Building pipelines where failure recovery and state replay matter (financial workflows, legal document processing, security operations)</li>
<li>Your workflow has complex conditional branching that depends on intermediate agent outputs</li>
<li>You need human-in-the-loop review at specific checkpoints</li>
<li>You&rsquo;re running on LangChain already and want the tightest integration</li>
</ul>
<p><strong>Use CrewAI when:</strong></p>
<ul>
<li>You need a working prototype in 1–2 days</li>
<li>Your team includes non-ML engineers who need to understand and maintain the agent logic</li>
<li>The workflow maps to a team of specialized roles (researcher, analyst, writer, reviewer)</li>
<li>You want the largest ecosystem of pre-built agent templates and tools</li>
</ul>
<p><strong>Use AG2 when:</strong></p>
<ul>
<li>You have existing AutoGen code that you need to migrate with minimal changes</li>
<li>Your use case genuinely benefits from conversational agent coordination (agents debating to improve output quality)</li>
<li>You want an open-source framework with active community maintenance and no vendor lock-in</li>
</ul>
<p><strong>Use Microsoft Agent Framework when:</strong></p>
<ul>
<li>You&rsquo;re deploying on Azure and want native integration with Azure AI Foundry, Copilot Studio, and Microsoft 365</li>
<li>Enterprise SLAs and managed infrastructure are priorities over open-source flexibility</li>
</ul>
<h2 id="verdict-langgraph-vs-crewai-vs-autogen-in-2026">Verdict: LangGraph vs CrewAI vs AutoGen in 2026</h2>
<p>The landscape has consolidated around a clear hierarchy by use case: LangGraph for production, CrewAI for prototyping and business workflows, and AG2 (not AutoGen) for conversational multi-agent systems. The AutoGen situation is the biggest change from a year ago — Microsoft&rsquo;s deprecation and the AG2 fork mean that choosing &ldquo;AutoGen&rdquo; in 2026 requires first deciding which AutoGen successor you&rsquo;re actually adopting. For most teams starting fresh, the decision is binary: start with CrewAI to validate your use case quickly, then evaluate whether LangGraph&rsquo;s control is worth the migration cost once you hit the limits of role-based orchestration. The frameworks serve different masters — speed vs control — and the right choice is the one that matches your project&rsquo;s actual constraint, not the one with the most GitHub stars.</p>
<h2 id="faq">FAQ</h2>
<p>The five most common questions developers ask when choosing between LangGraph, CrewAI, and AutoGen in 2026, answered against the post-October 2025 landscape: Microsoft has deprecated AutoGen, and the AG2 fork has matured into a viable alternative. Short answer: use LangGraph for stateful production systems (29,500 GitHub stars, trusted by Klarna and Elastic), CrewAI for fast delivery and business automation (1.3M monthly PyPI installs, 48% faster than AutoGen on structured tasks), and AG2 if you&rsquo;re migrating from AutoGen and want open-source continuity. Avoid starting new projects on vanilla AutoGen: it is in maintenance mode and its community has split between AG2 and the Microsoft Agent Framework. Beyond architecture fit, MCP compatibility, token cost at scale, and your team&rsquo;s existing expertise are the three factors that should drive the final decision.</p>
<h3 id="is-autogen-still-being-actively-developed-in-2026">Is AutoGen still being actively developed in 2026?</h3>
<p>AutoGen itself is in maintenance mode as of October 2025 — Microsoft placed it there when launching the Microsoft Agent Framework. Active development has split into two paths: AG2 (the open-source community fork by AutoGen&rsquo;s original creators, launched November 2024) and the Microsoft Agent Framework (Microsoft&rsquo;s enterprise platform successor). For new projects, don&rsquo;t start with vanilla AutoGen. Choose AG2 for open-source continuity or the Microsoft Agent Framework for Azure-integrated enterprise deployments.</p>
<h3 id="which-ai-agent-framework-is-easiest-to-learn-in-2026">Which AI agent framework is easiest to learn in 2026?</h3>
<p>CrewAI has the lowest learning curve of the three frameworks. Its role-based model (agents with explicit roles, goals, and tasks) maps to how non-technical stakeholders already think about workflows, making it accessible to product managers and business engineers — not just ML specialists. Most developers can build a working multi-agent system with CrewAI in under a day. LangGraph has the steepest learning curve, requiring familiarity with graph abstractions and the LangChain ecosystem. AutoGen/AG2 falls in between.</p>
<h3 id="can-langgraph-and-crewai-be-used-together">Can LangGraph and CrewAI be used together?</h3>
<p>Yes — and the combination is increasingly common in production systems. CrewAI can be used for the higher-level role orchestration layer, with LangGraph managing specific subgraphs that require complex conditional logic or stateful execution. Both frameworks also support integration via MCP (Model Context Protocol), meaning tool ecosystems can be shared between them. That said, most teams choose one primary framework and use the other only for specific components where it&rsquo;s clearly superior.</p>
<h3 id="how-does-mcp-model-context-protocol-affect-framework-choice-in-2026">How does MCP (Model Context Protocol) affect framework choice in 2026?</h3>
<p>MCP has become the standard interface for connecting AI agents to external tools — databases, APIs, file systems, and SaaS platforms. All three frameworks (LangGraph, CrewAI, and AG2) have added MCP support, reducing one historical differentiator: you no longer need to pick a framework based on which one has connectors for your specific toolset. CrewAI&rsquo;s MCP integration is currently the most mature and production-tested. LangGraph&rsquo;s MCP support is solid but newer. AG2&rsquo;s MCP integration is in active development. The choice between frameworks should now be driven by orchestration model and control requirements, not tool ecosystem coverage.</p>
<h3 id="whats-the-token-cost-difference-between-langgraph-crewai-and-autogen">What&rsquo;s the token cost difference between LangGraph, CrewAI, and AutoGen?</h3>
<p>Benchmarks show CrewAI uses 34% fewer tokens than AutoGen on structured tasks, and LangGraph is generally competitive with CrewAI. AutoGen&rsquo;s conversational architecture, where agents coordinate by passing natural language messages, averages 20+ LLM calls per task — making it the most expensive option at scale. At high volumes (10,000+ workflow executions per month), the cost difference between AutoGen-style systems and LangGraph/CrewAI pipelines can be 5–10x. For cost-sensitive production deployments, AutoGen&rsquo;s architecture requires careful scoping to avoid runaway token costs.</p>
]]></content:encoded></item></channel></rss>