Implementing AI agents with AI agent frameworks
IBM Developer

Article


Practical examples with CrewAI, LangGraph, and BeeAI

By Nishit Dembla

A previous article explored the fundamental differences between three leading AI agent frameworks: CrewAI, LangGraph, and BeeAI. It covered their architectural approaches, strengths, and limitations at a conceptual level. In this article, I demonstrate how to implement the same AI agent in each of these frameworks.

A quick comparison of the three AI agent frameworks

Before diving into implementations, let's briefly recap the key characteristics of each framework:

  • CrewAI

    • Pros: Human-like role-based collaboration, intuitive API for multi-agent systems, built-in process management
    • Cons: Less flexible for complex workflows, higher latency with multiple agents, steeper learning curve for custom tools
  • LangGraph

    • Pros: Precise control over execution flow, excellent state management, strong traceability and debugging
    • Cons: More complex setup, requires more boilerplate code, steeper learning curve for beginners
  • BeeAI

    • Pros: Highly declarative approach, excellent scalability, strong support for parallel processing
    • Cons: Newer framework with a smaller community, fewer integrations, documentation still developing

Implementing an AI agent

In this article, we'll implement a market research agent in each framework. This agent will:

  • Research a given company
  • Analyze its competitors
  • Summarize market opportunities
  • Generate a concise report
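Stripped of any framework, this agent is a four-step pipeline that threads a shared state dictionary through research, analysis, opportunity identification, and reporting. A minimal framework-agnostic sketch, with `search` and `llm` as stand-in stubs (not real APIs), shows the shape each implementation below fills in:

```python
# Framework-agnostic sketch of the four-step market research pipeline.
# `search` and `llm` are placeholder stubs, not real tool or model calls.
from typing import Dict

def search(query: str) -> str:
    return f"results for: {query}"          # stub search tool

def llm(prompt: str) -> str:
    return f"summary of: {prompt[:40]}..."  # stub model call

def research(state: Dict) -> Dict:
    state["company_data"] = llm(search(f"{state['company_name']} overview"))
    return state

def analyze_competitors(state: Dict) -> Dict:
    state["competitor_data"] = llm(search(f"{state['company_name']} competitors"))
    return state

def find_opportunities(state: Dict) -> Dict:
    state["opportunities"] = llm(
        f"opportunities from {state['company_data']} and {state['competitor_data']}"
    )
    return state

def write_report(state: Dict) -> Dict:
    state["final_report"] = llm(f"report: {state['opportunities']}")
    return state

def run_pipeline(company_name: str) -> Dict:
    # Each step reads from and writes to the same state dictionary.
    state = {"company_name": company_name}
    for step in (research, analyze_competitors, find_opportunities, write_report):
        state = step(state)
    return state
```

Each framework below differs mainly in how it expresses this state threading: CrewAI through task context, LangGraph through an explicit state graph, and BeeAI through a declarative flow map.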

Let's see how each framework approaches this task differently.

CrewAI implementation example

CrewAI excels at creating collaborative agent systems where different roles interact to solve complex problems. Let's implement our market research system:

from crewai import Agent, Task, Crew, Process
from langchain_community.tools import DuckDuckGoSearchRun

# Initialize search tool
search_tool = DuckDuckGoSearchRun()

# Define agents with specific roles
researcher = Agent(
    role="Market Researcher",
    goal="Find comprehensive information about the target company and its market",
    backstory="You're an expert market researcher with 15 years of experience analyzing companies across various industries.",
    verbose=True,
    tools=[search_tool],
    allow_delegation=True
)

analyst = Agent(
    role="Competitive Analyst",
    goal="Analyze the competitive landscape and identify key differentiators",
    backstory="You specialize in competitive analysis with a deep understanding of market positioning and competitive advantages.",
    verbose=True,
    tools=[search_tool],
    allow_delegation=True
)

strategist = Agent(
    role="Market Strategist",
    goal="Identify market opportunities based on research and analysis",
    backstory="You're a veteran strategist who can spot market opportunities others miss.",
    verbose=True,
    allow_delegation=True
)

report_writer = Agent(
    role="Report Writer",
    goal="Create concise, actionable market reports",
    backstory="You excel at distilling complex information into clear, compelling reports for business decision-makers.",
    verbose=True,
    allow_delegation=True
)

# Define tasks
research_task = Task(
    description="Research {company_name} thoroughly. Identify their products/services, target market, business model, recent news, and financial performance if available.",
    expected_output="Comprehensive research document on the company",
    agent=researcher
)

competitor_task = Task(
    description="Analyze the top 3-5 competitors of {company_name}. Identify their strengths, weaknesses, and market positioning compared to {company_name}.",
    expected_output="Competitive analysis document",
    agent=analyst,
    context=[research_task]
)

opportunity_task = Task(
    description="Based on the research and competitive analysis, identify 3-5 key market opportunities for {company_name}.",
    expected_output="List of market opportunities with justification",
    agent=strategist,
    context=[research_task, competitor_task]
)

report_task = Task(
    description="Create a concise 1-page market report for {company_name} that summarizes the research, competitive analysis, and highlights the market opportunities.",
    expected_output="1-page market report in markdown format",
    agent=report_writer,
    context=[research_task, competitor_task, opportunity_task]
)

# Create crew
market_research_crew = Crew(
    agents=[researcher, analyst, strategist, report_writer],
    tasks=[research_task, competitor_task, opportunity_task, report_task],
    verbose=True,
    process=Process.sequential
)

# Execute the crew
result = market_research_crew.kickoff(inputs={"company_name": "Tesla"})
print(result)

Key Implementation Details:

  • Each agent has a clearly defined role, goal, and backstory
  • Tasks are sequentially dependent, with each task building on previous outputs
  • The context parameter ensures each agent has access to the results of prerequisite tasks
  • The process is set to sequential to ensure proper order of execution

This implementation demonstrates CrewAI's strength in role-based collaboration and straightforward task management.

LangGraph implementation example

LangGraph takes a more control-flow-oriented approach, allowing for precise state management and conditional branches. Here's how we'd implement the same market research agent:

from typing import Dict, List, TypedDict
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_community.tools import DuckDuckGoSearchRun
from langgraph.graph import StateGraph, END

# State definition
class AgentState(TypedDict):
    company_name: str
    company_data: Dict
    competitor_data: Dict
    opportunities: Dict
    final_report: str
    current_step: str
    messages: List

# Initialize tools
search = DuckDuckGoSearchRun()

# Define model
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Research function
@tool
def research_company(company_name: str) -> Dict:
    """Research a company and return key information"""
    query = f"comprehensive information about {company_name} company, products, services, market"
    search_results = search.run(query)

    prompt = f"""
    Based on the following information about {company_name}, extract key details about:
    1. Products/services
    2. Target market
    3. Business model
    4. Recent news
    5. Financial performance (if available)

    Information: {search_results}

    Format as a structured JSON dictionary.
    """

    response = llm.invoke([HumanMessage(content=prompt)])
    # In a real implementation, we would parse this into proper JSON
    return {"company_name": company_name, "research": response.content}

# Competitor analysis function
@tool
def analyze_competitors(company_name: str, company_data: Dict) -> Dict:
    """Analyze competitors for a company"""
    query = f"top competitors of {company_name} company market comparison"
    search_results = search.run(query)

    prompt = f"""
    Based on the following information and what you know about {company_name}, identify and analyze 3-5 top competitors:

    Company information: {company_data}
    Search results: {search_results}

    For each competitor provide:
    1. Name
    2. Strengths
    3. Weaknesses
    4. Market positioning compared to {company_name}

    Format as a structured JSON array of competitors.
    """

    response = llm.invoke([HumanMessage(content=prompt)])
    return {"competitors": response.content}

# Opportunity identification function
@tool
def identify_opportunities(company_data: Dict, competitor_data: Dict) -> Dict:
    """Identify market opportunities based on research and competitive analysis"""
    prompt = f"""
    Based on the following company information and competitive analysis, identify 3-5 key market opportunities:

    Company information: {company_data}
    Competitor analysis: {competitor_data}

    For each opportunity provide:
    1. Title
    2. Description
    3. Justification based on the research and competitive analysis

    Format as a structured JSON array of opportunities.
    """

    response = llm.invoke([HumanMessage(content=prompt)])
    return {"opportunities": response.content}

# Report generation function
@tool
def generate_report(company_data: Dict, competitor_data: Dict, opportunities: Dict) -> str:
    """Generate a concise market report"""
    prompt = f"""
    Create a concise 1-page market report based on the following information:

    Company information: {company_data}
    Competitor analysis: {competitor_data}
    Market opportunities: {opportunities}

    The report should include:
    1. Brief company overview
    2. Summary of competitive landscape
    3. Key market opportunities
    4. Strategic recommendations

    Format the report in Markdown.
    """

    response = llm.invoke([HumanMessage(content=prompt)])
    return response.content

# Define the state graph
def build_market_research_graph():
    workflow = StateGraph(AgentState)

    # Define nodes for each step (tool-decorated functions are called via .invoke with dict inputs)
    workflow.add_node("research", lambda state: {"company_data": research_company.invoke({"company_name": state["company_name"]}), "current_step": "research_complete"})
    workflow.add_node("competitors", lambda state: {"competitor_data": analyze_competitors.invoke({"company_name": state["company_name"], "company_data": state["company_data"]}), "current_step": "competitors_complete"})
    workflow.add_node("opportunities", lambda state: {"opportunities": identify_opportunities.invoke({"company_data": state["company_data"], "competitor_data": state["competitor_data"]}), "current_step": "opportunities_complete"})
    workflow.add_node("report", lambda state: {"final_report": generate_report.invoke({"company_data": state["company_data"], "competitor_data": state["competitor_data"], "opportunities": state["opportunities"]}), "current_step": "report_complete"})

    # Define edges
    workflow.add_edge("research", "competitors")
    workflow.add_edge("competitors", "opportunities")
    workflow.add_edge("opportunities", "report")
    workflow.add_edge("report", END)

    # Set entry point
    workflow.set_entry_point("research")

    return workflow

# Initialize and run the graph
graph = build_market_research_graph()
app = graph.compile()

# Run the workflow
result = app.invoke({
    "company_name": "Tesla",
    "company_data": {},
    "competitor_data": {},
    "opportunities": {},
    "final_report": "",
    "current_step": "start",
    "messages": []
})

print(result["final_report"])
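The tool functions above return raw model text, with a comment noting that a real implementation would parse it into proper JSON. A hedged sketch of that parsing step, using only the standard library (models often wrap JSON in markdown fences or surrounding prose, so a plain `json.loads` is not enough):

```python
import json
import re

def parse_llm_json(raw: str):
    """Extract the first JSON object or array from an LLM response.

    Strips markdown code fences if present, tries a direct parse,
    then falls back to scanning for the outermost bracketed span.
    """
    text = raw.strip()
    # Remove ```json ... ``` fences if present
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Fall back: take the outermost {...} or [...] span
    for open_ch, close_ch in (("{", "}"), ("[", "]")):
        start, end = text.find(open_ch), text.rfind(close_ch)
        if start != -1 and end > start:
            try:
                return json.loads(text[start:end + 1])
            except json.JSONDecodeError:
                continue
    raise ValueError("no JSON found in LLM response")
```

With a helper like this, `research_company` could return `parse_llm_json(response.content)` instead of the raw string, making downstream nodes type-safe.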

Key Implementation Details:

  • Clearly defined state management with the AgentState class
  • Each function is implemented as a tool with precise input/output types
  • The graph explicitly defines the flow between steps
  • State transitions are explicit, which makes error handling easy to attach at specific nodes
  • Each node in the graph processes one specific part of the workflow

This implementation showcases LangGraph's strength in structured workflow management and explicit state handling.

BeeAI implementation example

BeeAI takes a declarative approach, focusing on modularity and scalability. Here's how we'd implement our market research agent:

from beeai import Bee, BeeHive, Tool
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

# Initialize search tool and LLM
search = DuckDuckGoSearchRun()
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Define tools
class CompanyResearchTool(Tool):
    name = "company_research"
    description = "Researches a company and provides comprehensive information"

    def run(self, company_name: str) -> dict:
        query = f"comprehensive information about {company_name} company, products, services, market"
        search_results = search.run(query)

        prompt = f"""
        Based on the following information about {company_name}, extract key details about:
        1. Products/services
        2. Target market
        3. Business model
        4. Recent news
        5. Financial performance (if available)

        Information: {search_results}

        Format as a structured JSON dictionary.
        """

        response = llm.invoke(prompt)
        # In a real implementation, we would validate this JSON
        return {"company_name": company_name, "research": response.content}

class CompetitorAnalysisTool(Tool):
    name = "competitor_analysis"
    description = "Analyzes competitors for a given company"

    def run(self, company_name: str, company_data: dict) -> dict:
        query = f"top competitors of {company_name} company market comparison"
        search_results = search.run(query)

        prompt = f"""
        Based on the following information and what you know about {company_name}, identify and analyze 3-5 top competitors:

        Company information: {company_data}
        Search results: {search_results}

        For each competitor provide:
        1. Name
        2. Strengths
        3. Weaknesses
        4. Market positioning compared to {company_name}

        Format as a structured JSON array of competitors.
        """

        response = llm.invoke(prompt)
        return {"competitors": response.content}

class OpportunityIdentificationTool(Tool):
    name = "identify_opportunities"
    description = "Identifies market opportunities based on research and competitive analysis"

    def run(self, company_data: dict, competitor_data: dict) -> dict:
        prompt = f"""
        Based on the following company information and competitive analysis, identify 3-5 key market opportunities:

        Company information: {company_data}
        Competitor analysis: {competitor_data}

        For each opportunity provide:
        1. Title
        2. Description
        3. Justification based on the research and competitive analysis

        Format as a structured JSON array of opportunities.
        """

        response = llm.invoke(prompt)
        return {"opportunities": response.content}

class ReportGenerationTool(Tool):
    name = "generate_report"
    description = "Generates a concise market report"

    def run(self, company_data: dict, competitor_data: dict, opportunities: dict) -> str:
        prompt = f"""
        Create a concise 1-page market report based on the following information:

        Company information: {company_data}
        Competitor analysis: {competitor_data}
        Market opportunities: {opportunities}

        The report should include:
        1. Brief company overview
        2. Summary of competitive landscape
        3. Key market opportunities
        4. Strategic recommendations

        Format the report in Markdown.
        """

        response = llm.invoke(prompt)
        return response.content

# Define Bees (Agents)
class ResearcherBee(Bee):
    name = "Market Researcher"
    description = "Researches companies and markets to gather comprehensive information"
    tools = [CompanyResearchTool()]

    def process(self, inputs):
        company_name = inputs["company_name"]
        return self.use_tool("company_research", company_name)

class AnalystBee(Bee):
    name = "Competitive Analyst"
    description = "Analyzes competitors to identify strengths, weaknesses, and market positioning"
    tools = [CompetitorAnalysisTool()]

    def process(self, inputs):
        company_name = inputs["company_name"]
        company_data = inputs["company_data"]
        return self.use_tool("competitor_analysis", company_name, company_data)

class StrategistBee(Bee):
    name = "Market Strategist"
    description = "Identifies market opportunities based on research and competitive analysis"
    tools = [OpportunityIdentificationTool()]

    def process(self, inputs):
        company_data = inputs["company_data"]
        competitor_data = inputs["competitor_data"]
        return self.use_tool("identify_opportunities", company_data, competitor_data)

class ReportWriterBee(Bee):
    name = "Report Writer"
    description = "Creates concise, actionable market reports"
    tools = [ReportGenerationTool()]

    def process(self, inputs):
        company_data = inputs["company_data"]
        competitor_data = inputs["competitor_data"]
        opportunities = inputs["opportunities"]
        return self.use_tool("generate_report", company_data, competitor_data, opportunities)

# Define the BeeHive (workflow)
class MarketResearchHive(BeeHive):
    name = "Market Research Workflow"
    description = "Performs comprehensive market research and generates actionable reports"

    flow = {
        "researcher": {
            "bee": ResearcherBee(),
            "next": "analyst"
        },
        "analyst": {
            "bee": AnalystBee(),
            "next": "strategist"
        },
        "strategist": {
            "bee": StrategistBee(),
            "next": "report_writer"
        },
        "report_writer": {
            "bee": ReportWriterBee(),
            "next": None
        }
    }

    input_schema = {
        "company_name": str
    }

    def initialize_state(self, inputs):
        return {
            "company_name": inputs["company_name"],
            "company_data": {},
            "competitor_data": {},
            "opportunities": {}
        }

    def process_outputs(self, outputs):
        # Format the final output
        return {
            "market_report": outputs["report_writer"]
        }

# Run the workflow
hive = MarketResearchHive()
result = hive.run({"company_name": "Tesla"})
print(result["market_report"])

Key Implementation Details:

  • BeeAI uses a declarative approach with clearly defined tools and bees
  • Each bee has a specific responsibility with minimal overlap
  • The flow is defined separately from the implementation logic
  • State management is handled automatically based on the defined flow
  • Tools are modular and can be reused across different bees
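The automatic state management can be pictured as a small executor that walks the declarative flow map and merges each bee's output into shared state. The sketch below illustrates that pattern only; it is not BeeAI's actual internals, and the lambdas stand in for real bees:

```python
# Illustrative executor for a declarative flow map like MarketResearchHive.flow.
# This mimics the pattern, not BeeAI's real implementation; "bees" here are
# plain callables that take state and return a partial update.
def run_flow(flow: dict, state: dict, start: str) -> dict:
    step = start
    while step is not None:
        node = flow[step]
        output = node["bee"](state)  # run this step against shared state
        state.update(output)         # merge its output back into the state
        step = node["next"]          # follow the declared flow
    return state

flow = {
    "research": {"bee": lambda s: {"company_data": f"data on {s['company_name']}"},
                 "next": "report"},
    "report":   {"bee": lambda s: {"final_report": f"report: {s['company_data']}"},
                 "next": None},
}
state = run_flow(flow, {"company_name": "Tesla"}, "research")
```

Because each step only reads and writes named state keys, steps can be reordered or swapped by editing the flow map without touching the step implementations.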

This implementation highlights BeeAI's strengths in modularity, declarative programming, and scalability.

Implementation considerations

Having implemented the same market research agent in all three frameworks, let's discuss the key considerations for each.

CrewAI implementation considerations

Development speed:

  • Fastest development time due to intuitive role-based API
  • Less boilerplate code required
  • Natural language task descriptions accelerate development

Customization:

  • Limited fine-grained control over execution flow
  • Best for systems that mimic human team dynamics
  • Consider custom tools for domain-specific functionality

Deployment:

  • Straightforward deployment with minimal dependencies
  • Memory usage scales with the number of agents
  • Consider process type (sequential vs hierarchical) based on task complexity

When to choose:

  • When simulating human team dynamics is important
  • For rapid prototyping of multi-agent systems
  • When task delegation and agent autonomy are priorities

LangGraph implementation considerations

Development speed:

  • More verbose implementation requiring explicit state definitions
  • Higher initial setup time
  • Strong typing improves long-term maintainability

Customization:

  • Excellent fine-grained control over execution flow
  • Supports complex conditional branching
  • Well-suited for stateful applications with complex logic
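Conditional branching in LangGraph is expressed as a routing function that inspects the state and returns the name of the next node. A minimal sketch, assuming the `workflow` graph from the example above (the retry condition itself is illustrative):

```python
# A routing function of the kind passed to LangGraph's add_conditional_edges:
# inspect the state, return the name of the next node to run.
def route_after_research(state: dict) -> str:
    # Retry the research step if it produced no usable data,
    # otherwise continue to competitor analysis.
    if not state.get("company_data"):
        return "research"
    return "competitors"

# Wiring sketch (assumes a StateGraph named `workflow` as in the example above):
# workflow.add_conditional_edges("research", route_after_research,
#                                {"research": "research", "competitors": "competitors"})
```

This replaces the fixed `workflow.add_edge("research", "competitors")` edge with a data-dependent one, which is the kind of control CrewAI and BeeAI do not expose as directly.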

Deployment:

  • Efficient memory management
  • Strong traceability aids debugging in production
  • Consider separate nodes for computation-heavy tasks

When to choose:

  • For complex workflows with conditional branching
  • When precise state management is critical
  • When debugging and observability are priorities
  • For systems that need fine-grained control over execution flow

BeeAI implementation considerations

Development speed:

  • Declarative approach reduces cognitive load
  • Modular components improve reusability
  • Clear separation of concerns speeds development

Customization:

  • Highly modular architecture enables custom components
  • Strong support for parallel processing
  • Excellent for systems that need to scale horizontally

Deployment:

  • Excellent scalability for distributed workloads
  • Clear separation of concerns simplifies maintenance
  • Consider modular deployment for resource optimization

When to choose:

  • For systems that need to scale horizontally
  • When modularity and component reuse are priorities
  • For applications with clear, separate processing steps
  • When declarative programming approach is preferred

Conclusion

The choice between CrewAI, LangGraph, and BeeAI depends largely on your specific use case and priorities:

  • CrewAI excels at simulating human team dynamics with intuitive role-based collaboration, making it ideal for scenarios where agent autonomy and natural delegation are important.
  • LangGraph provides precise control over execution flow and state management, making it the best choice for complex workflows with conditional logic and situations requiring detailed traceability.
  • BeeAI takes a highly declarative approach with excellent modularity and scalability, making it well-suited for distributed systems and applications that need to scale horizontally.

By understanding the implementation considerations for each framework, you can make an informed decision about which one best suits your specific AI agent needs. The examples provided in this article serve as starting points that you can adapt and extend for your own use cases.

What's most important is aligning your choice of framework with your specific requirements, development team expertise, and long-term maintenance considerations. Each framework has its strengths, and the right choice depends on finding the best match for your particular use case.