This is a cache of https://developer.ibm.com/articles/build-an-agentic-framework-crewai/. It is a snapshot of the page as it appeared on 2025-11-14T12:39:28.990+0000.
Build an agentic framework with CrewAI memory, i18n, and IBM watsonx.ai - IBM Developer
In an era where autonomy and adaptability are paramount, the agentic framework has emerged as a transformative approach to technology and human-centric domains. Rooted in the concept of agency (the ability to act independently), this framework empowers entities, whether AI or human, to make proactive decisions. With advancements in AI and a shift towards decentralized organizational structures, understanding agentic frameworks is crucial for innovation and efficiency.
Agentic framework key components
Following are the main characteristics of an agentic framework:
Autonomy: Agents operate independently, making real-time decisions without constant oversight. For example, self-driving cars navigating traffic.
Goal-oriented design: Agents pursue specific objectives using strategies like machine learning or human intuition.
Adaptability: Continuous learning from environments ensures relevance. For example, chatbots refining responses through user interactions.
Interactivity: Agents collaborate with users, systems, or other agents, enhancing outcomes through teamwork.
Industry applications
An agentic framework can be applied across a wide variety of industries.
For example:
AI and robotics: Autonomous drones for delivery, predictive maintenance in manufacturing.
Business: Agile teams empowered to innovate rapidly, enhancing responsiveness.
Healthcare: AI systems managing personalized treatment plans, adjusting as patient needs evolve.
Education: Adaptive learning platforms tailoring content to student progress.
Benefits and challenges
There are numerous benefits to adopting an agentic framework: agents act without constant human oversight, adapt continuously to changing conditions, and coordinate to handle complex workflows at scale. However, adoption also comes with multiple challenges. These include:
Ethical concerns: Bias in AI decisions and accountability for outcomes.
Control dynamics: Balancing autonomy with oversight to prevent misuse.
Technical hurdles: High computational demands and unpredictability in dynamic environments.
Future trends
What are the big issues and changes coming in the agentic framework
space? If you're considering adopting one, you need to be aware of and
prepared for the following trends:
Ethical AI: Development of frameworks ensuring transparency and fairness.
Multi-agent systems: Collaborative agents tackling global challenges like climate change.
Human-AI synergy: Enhanced tools augmenting human decision-making in fields such as finance.
CrewAI overview
CrewAI is an agentic framework that orchestrates autonomous AI agents to collaborate as a cohesive unit to accomplish complex tasks. It differentiates itself through several key advantages over other frameworks:
Hierarchical team structure: CrewAI implements a human-like organizational structure with roles such as manager, executor, and reviewer, allowing for specialized responsibilities and clear chains of command.
Process-oriented workflow: CrewAI focuses on defining processes rather than just agent capabilities, enabling sequential execution paths that mirror how humans accomplish complex tasks.
Built-in collaboration patterns: The framework provides native support for common collaboration patterns (sequential, parallel, consensus-based) that streamline agent coordination.
Tool integration simplicity: CrewAI offers straightforward integration with external tools and APIs through a unified interface, reducing development complexity.
Flexible language model support: While optimized for OpenAI models, CrewAI can work with various LLM providers, giving developers flexibility.
Framework comparison
Unlike LangChain or AutoGPT, which focus primarily on chaining prompts
or creating single autonomous agents, CrewAI specializes in multi-agent
coordination with defined social dynamics and workflow management.
CrewAI also differs from frameworks like BabyAGI by prioritizing
parallel execution and role specialization rather than recursive task
decomposition alone.
The framework is particularly effective for business process automation,
research tasks requiring multiple perspectives, and creative
collaboration scenarios where diverse expertise needs coordination.
IBM watsonx.ai offers powerful large language models (LLMs) tailored for
enterprise use cases. When combined with the CrewAI agentic framework,
you can build sophisticated workflows.
Goals
In this tutorial, you'll learn how to:
Integrate LLMs from the watsonx provider with CrewAI efficiently
Implement internationalization and customization in the CrewAI architecture
Implement CrewAI memory by embedding a watsonx model into CrewAI using a customized Python file named watsonx_llm.py
Integrate an embedding model and LLM from the watsonx provider with the CrewAI framework
Step 1: Configure the watsonx LLM and embedding model in .env file
Define your watsonx.ai credentials and model settings in a .env
file:
To configure the watsonx.ai URL and API key, you can use any of the following naming conventions:
Naming convention for watsonx api_key
WATSONX_APIKEY
WATSONX_API_KEY
WX_API_KEY
Naming convention for watsonx url
WATSONX_API_BASE
WATSONX_URL
WX_URL
WML_URL
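For example, a minimal .env file might look like the following. The values here are placeholders, and the model identifiers are only examples of watsonx.ai model IDs; substitute your own credentials and models. Note the watsonx/ prefix on MODEL, which the Step 2 code uses to decide whether API credentials are required.

```env
# watsonx.ai credentials (placeholder values)
WATSONX_API_KEY=your-ibm-cloud-api-key
WATSONX_URL=https://us-south.ml.cloud.ibm.com
WATSONX_PROJECT_ID=your-project-id

# Model and generation parameters
MODEL=watsonx/ibm/granite-3-8b-instruct
MAX_TOKENS=2000
TEMPERATURE=0.7
TOP_P=1.0
SEED=42

# Embedding model for CrewAI memory
WATSONX_EMBEDDER_MODEL_ID=ibm/slate-125m-english-rtrvr

# Optional: custom CrewAI memory storage location
CREWAI_STORAGE_DIR=/custom/path
```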
Step 2: Configure the watsonx LLM
The watsonx_llm.py file will help you to dynamically configure the LLM
based on environment variables. This code features dynamic model
selection, where the model prefix (watsonx/ or watsonx_text/)
determines whether API credentials are required.
Notes:
When you are using watsonx as a provider, api_key and url are required by the LLM configuration in the llm_config object.
When you are using watsonx_text as a provider, api_key and url are not required by the LLM configuration in the llm_config object.
The watsonx_llm.py file helps you configure the LLM (the LLM class from the crewai package) and gives you flexibility whether you're using watsonx or watsonx_text.
import os
from crewai import LLM
from dotenv import load_dotenv

load_dotenv()

class watsonxConfig():
    # Load environment variables
    apikey = os.getenv("WATSONX_API_KEY")
    base_url = os.getenv("WATSONX_URL")
    projId = os.getenv("WATSONX_PROJECT_ID")
    model = os.getenv("MODEL")
    max_tokens = int(os.getenv("MAX_TOKENS"))
    temperature = float(os.getenv("TEMPERATURE"))
    top_p = float(os.getenv("TOP_P"))
    seed = int(os.getenv("SEED"))

    # Embedding model configuration
    embedding_model_watsonx = os.getenv("WATSONX_EMBEDDER_MODEL_ID")

    # CrewAI memory configuration
    crewai_storage_dir = os.getenv("CREWAI_STORAGE_DIR")

    # Determine the provider from the model prefix
    if model.startswith("watsonx/"):
        # Configuration for the watsonx provider (API credentials required)
        llm_config = LLM(
            model=model,
            api_key=apikey,
            base_url=base_url,
            max_tokens=max_tokens,
            temperature=temperature,
            top_p=top_p,
            seed=seed
        )
    elif model.startswith("watsonx_text/"):
        # Configuration for the watsonx_text provider (no API credentials)
        llm_config = LLM(
            model=model,
            max_tokens=max_tokens,
            temperature=temperature,
            top_p=top_p,
            seed=seed
        )
    else:
        raise ValueError(f"Unsupported model provider for: {model}")
Step 3: Using watsonx.ai in CrewAI agents
In your crew.py file, assign the configured LLM to agents:
import os
from crewai import Agent, Crew, Process, Task, LLM
from crewai.project import CrewBase, agent, crew, task
from dotenv import load_dotenv
from wx_crewai_embedd.watsonx_llm import watsonxConfig

load_dotenv()

@CrewBase
class WxCrewaiEmbedd():
    """WxCrewaiEmbedd crew"""

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            llm=watsonxConfig.llm_config,
            verbose=True
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            llm=watsonxConfig.llm_config,
            verbose=True
        )
Import llm_config from the watsonxConfig class and assign watsonxConfig.llm_config to the llm attribute in the @agent methods.
Leverage CrewAI's memory systems using an embedding model from the watsonx provider
To take advantage of CrewAI's memory system with the watsonx embedding
model, you first need to initialize the memory and configure the storage
locations.
How does CrewAI create its crew's memory?
When you initialize a Crew with memory=True, CrewAI automatically sets up three memory systems. This process is handled by the create_crew_memory method in the Crew class. Although not called directly in your code, the create_crew_memory method is triggered automatically by Pydantic during object initialization due to the @model_validator(mode="after") decorator. This validator ensures memory systems are instantiated after the Crew object is created.
Notes:
The memory=True flag activates all three memory types.
Pydantic's validator ensures seamless initialization without manual intervention.
The embedder_config ties memory systems to your chosen embeddings (for example, watsonx).
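The validator mechanism can be illustrated with a minimal, self-contained sketch. This is a toy stand-in, not CrewAI's actual class; it only shows how a Pydantic @model_validator(mode="after") hook fires automatically during construction:

```python
from pydantic import BaseModel, model_validator

class ToyCrew(BaseModel):
    """Toy stand-in for CrewAI's Crew, showing the validator pattern."""
    memory: bool = False
    memories: list = []

    @model_validator(mode="after")
    def create_crew_memory(self):
        # Runs automatically after construction; no explicit call needed,
        # which is how CrewAI wires up its memory systems.
        if self.memory:
            self.memories = ["short_term", "long_term", "entity"]
        return self

print(ToyCrew(memory=True).memories)  # ['short_term', 'long_term', 'entity']
```

Because Pydantic runs the validator itself, your code never needs to call create_crew_memory directly, exactly as described above.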
How does CrewAI configure its storage locations?
CrewAI uses the db_storage_path() function to determine where memories are stored. This function leverages the appdirs library to follow platform-specific conventions:
The default storage location depends on your operating system. It is set using db_storage_path(), which respects the CREWAI_STORAGE_DIR environment variable.
Entity memory
Purpose: Tracks entities (for example, people and organizations) mentioned in tasks.
Storage: Like short-term memory, entity memory uses ChromaDB.
By configuring memory in CrewAI, you gain several advantages:
Flexibility: You can override defaults with CREWAI_STORAGE_DIR for centralized storage.
Platform compliance: CrewAI memory uses OS-recommended paths for clean, maintainable data management.
Persistence: Long-term memory survives across sessions, while short-term and entity memory resets with workflows.
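The path-resolution logic can be sketched as follows. This is a simplified, hypothetical version of db_storage_path(), not CrewAI's actual implementation; the real function delegates the platform-specific fallback to the appdirs library, while this sketch hard-codes a Linux-style default.

```python
import os
from pathlib import Path

def db_storage_path() -> str:
    """Hypothetical sketch of how CrewAI resolves its storage directory.

    An explicit CREWAI_STORAGE_DIR always wins; otherwise fall back to a
    platform-specific application-data directory (the real implementation
    uses appdirs to pick the right path per OS).
    """
    custom = os.getenv("CREWAI_STORAGE_DIR")
    if custom:
        return custom
    # Simplified Linux-style fallback for illustration only.
    return str(Path.home() / ".local" / "share" / "CrewAI")

# With CREWAI_STORAGE_DIR set, all memories land under the custom path:
os.environ["CREWAI_STORAGE_DIR"] = "/custom/path"
print(db_storage_path())  # /custom/path
```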
Steps to configure memory in a CrewAI agentic framework
To configure CrewAI memory, complete the following steps:
In the crew.py file, enable memory during crew initialization and configure watsonx embeddings for semantic search in memory operations.
Reset the memory from the command line. The CrewAI CLI provides the crewai reset-memories command for this; run crewai reset-memories --help to see the options for resetting individual memory types or all of them at once.
Set the custom storage path in the CREWAI_STORAGE_DIR variable in the .env file to override the default paths.
Note: If CREWAI_STORAGE_DIR="/custom/path" is set in your .env file, all memories will be stored under that /custom/path/.
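Putting the steps above together, a crew initialization along the following lines enables memory backed by watsonx embeddings. This is a sketch only: the agents and tasks placeholders stand in for your own @agent/@task definitions, the wx_crewai_embedd import path assumes the project layout from Step 3, and the embedder keys follow CrewAI's watson provider convention, which may vary by CrewAI version.

```python
from crewai import Crew, Process
from wx_crewai_embedd.watsonx_llm import watsonxConfig

crew = Crew(
    agents=[...],   # your @agent definitions
    tasks=[...],    # your @task definitions
    process=Process.sequential,
    memory=True,    # activates short-term, long-term, and entity memory
    embedder={
        "provider": "watson",
        "config": {
            "model": watsonxConfig.embedding_model_watsonx,
            "api_url": watsonxConfig.base_url,
            "api_key": watsonxConfig.apikey,
            "project_id": watsonxConfig.projId,
        },
    },
)
```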
CrewAI memory workflow
The embedder configuration can be customized when initializing the Crew.
The RAG storage used for short-term and entity memory uses vector embeddings for efficient semantic search capabilities.
The LTMSQLiteStorage for long-term memory uses an SQLite database, which can be queried using standard SQLite tools if needed.
The storage location for short-term and entity memory (using embedchain) can be customized through configuration.
Short-term and entity memory storage is not a single file, but a directory structure containing multiple files managed by Chroma.
Notes
Long-term memory stores persistent information across sessions.
Short-term and entity memories handle more dynamic, context-specific information during task execution.
These memory systems are automatically used by the Crew to provide context for tasks and store information from task executions.
Examining memory workflow for the current CrewAI implementation
Research task: The researcher agent uses short-term memory to store bullet points about "AI LLMs".
Report generation: The reporting analyst retrieves this data from memory and expands it into a detailed report.
Long-term storage: The final report is saved to long-term memory for future reference.
When memory=True, CrewAI uses all types of memory: long-term, short-term, and entity memory. The following image shows the CrewAI storage that has been created.
Implement internationalization and customization in CrewAI
CrewAI enables you to internationalize and customize the language used
in the prompts.
To implement this language customization in CrewAI, you need to
configure the prompt_file.
The prompt_file will look like the following example. You can download the prompt file for the English language from this GitHub repository.
{
"hierarchical_manager_agent": {
"role": "Crew Manager",
"goal": "Manage the team to complete the task in the best way possible.",
"backstory": "You are a seasoned manager with a knack for getting the best out of your team.\nYou are also known for your ability to delegate work to the right people, and to ask the right questions to get the best out of your team.\nEven though you don't perform tasks by yourself, you have a lot of experience in the field, which allows you to properly evaluate the work of your team members."
},
"slices": {
"observation": "\nObservation:",
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:",
"memory": "\n\n# Useful context: \n{memory}",
"role_playing": "You are {role}. {backstory}\nYour personal goal is: {goal}",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```",
"no_tools": "\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. When responding, I must use the following format:\n\n```\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action Input/Result can repeat N times. Once I know the final answer, I must return the following format:\n\n```\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n```",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\n```\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n```",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nHere is the expected format I must follow:\n\n```\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n This Thought/Action/Action Input/Result process can repeat N times. Once I know the final answer, I must return the following format:\n\n```\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n```",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
"human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
"getting_input": "This is the agent's final answer: {final_answer}\n\n",
"summarizer_system_message": "You are a helpful assistant that summarizes text.",
"summarize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
"summary": "This is a summary of our conversation so far:\n{merged_summary}",
"manager_request": "Your best answer to your coworker asking you this, accounting for the context shared.",
"formatted_task_instructions": "Ensure your final answer contains only the content in the following format: {output_format}\n\nEnsure the final output does not include any code block markers like ```json or ```python.",
"human_feedback_classification": "Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with 'True' if further changes are needed, or 'False' if the user is satisfied. **Important** Do not include any additional commentary outside of your 'True' or 'False' response.\n\nFeedback: \"{feedback}\"",
"conversation_history_instruction": "You are a member of a crew collaborating to achieve a common goal. Your task is a specific action that contributes to this larger objective. For additional context, please review the conversation history between you and the user that led to the initiation of this crew. Use any relevant information or feedback from the conversation to inform your task execution and ensure your response aligns with both the immediate task and the crew's overall goals.",
"feedback_instructions": "User feedback: {feedback}\nInstructions: Use this feedback to enhance the next output iteration.\nNote: Do not respond or add commentary."
},
"errors": {
"force_final_answer_error": "You can't keep going, here is the best final answer you generated:\n\n {formatted_answer}",
"force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexisting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
"wrong_tool_name": "You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.",
"tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}",
"agent_tool_execution_error": "Error executing task with agent '{agent_role}'. Error: {error}",
"validation_error": "### Previous attempt failed validation: {guardrail_result_error}\n\n\n### Previous result:\n{task_output}\n\n\nTry again, making sure to address the validation error."
},
"tools": {
"delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.",
"add_image": {
"name": "Add image to content",
"description": "See image to understand its content, you can optionally ask a question about the image",
"default_action": "Please provide a detailed description of this image, including all visual elements, context, and any notable details you can observe."
}
}
}
The default prompt file created in CrewAI is en.json. If you decide to change the language configuration in the CrewAI architecture, you are required to translate the en.json file into the language you are switching to. For example, if you want to change the language configuration to Japanese, you must translate the en.json file to ja.json.
Note: You must not translate or change the .json tags and interpolation placeholders; they should remain as English language only.
For example: In the following ja.json file, the .json tags such as "role", "goal", and "delegate_work", and interpolation placeholders such as {input},{context},{coworkers} must remain in English and should not be translated to Japanese.
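To illustrate, a ja.json fragment might look like the following. This is my own sample translation for illustration, not the repository file; notice that the keys and the {role}, {backstory}, and {goal} placeholders stay in English while only the values are translated.

```json
{
  "slices": {
    "observation": "\n観察:",
    "role_playing": "あなたは {role} です。{backstory}\nあなたの個人的な目標は: {goal}"
  }
}
```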
When you have converted the en.json file to your desired language, you need to integrate the translated prompt file into your CrewAI architecture:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_internationalization.watsonx_llm import watsonxConfig
from crewai.utilities import I18N
# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators:
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators

# Initialize I18N with Japanese translations
prompt_file = watsonxConfig.prompt_file_japanese
i18n = I18N(prompt_file=prompt_file)
The attributes prompt_file and i18n must be updated to customize the language preferences. When you initialize these two attributes, you need to assign them to the @agent methods:
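For example, extending the Step 3 agent definition, the assignment might look like the following sketch. It assumes the Agent class accepts an i18n attribute and that the i18n instance was initialized from the translated prompt file as shown above; check your CrewAI version for the exact attribute.

```python
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        llm=watsonxConfig.llm_config,
        i18n=i18n,  # the I18N instance initialized with ja.json
        verbose=True
    )
```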
You now need to write the agents and tasks in the language you have translated to. For example, to customize CrewAI with the Japanese language, you need to write the agents.yaml and tasks.yaml files in Japanese. In the agents, you need to instruct the LLM to generate the results in the specific language that you want to customize.
Optional: You can also add the language's character specification in the agent's backstory. For example, in the Japanese language, there is a character specification such as the following:
Examples: Following are sample agents.yaml and tasks.yaml files:
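As a hedged sketch (my own wording, not the repository sample), a Japanese agents.yaml entry might look like this. The keys stay in English; only the values are written in Japanese, and the backstory instructs the LLM to answer in Japanese.

```yaml
researcher:
  role: >
    {topic} シニアリサーチャー
  goal: >
    {topic} に関する最新の動向を明らかにする
  backstory: >
    あなたは {topic} の調査を専門とする経験豊富な研究者です。
    最終的な回答は必ず日本語で作成してください。
```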
Note: You will find additional prompt files for Russian, Portuguese,
and German languages in the GitHub repository.
Summary
In this article, you've explored the process of building a robust agentic AI framework using cutting-edge tools like CrewAI Memory, IBM watsonx LLM, and IBM watsonx Embedding Models, with a focus on internationalization and customization. By integrating CrewAI Memory, you empowered AI agents with dynamic, context-aware memory retention, enabling them to learn from interactions and refine responses over time. By including a watsonx LLM, you built a powerful foundation for natural language understanding and generation. The watsonx Embedding Model ensured precise semantic analysis for tasks like retrieval-augmented generation (RAG) and data clustering.
By focusing on internationalization and customization, you learned how to tailor AI workflows for global audiences—from multilingual support to region-specific adaptations—ensuring scalability and accessibility. Through practical examples and code snippets, you learned how to harmonize these components into a cohesive framework capable of tackling complex, real-world challenges, from enterprise automation to personalized user experiences.
This article not only equips developers with actionable steps to build intelligent, adaptive systems but also highlights the transformative potential of combining memory-driven learning, advanced language models, and customization strategies. Whether for customer support, content creation, or data analysis, the fusion of CrewAI Memory and the IBM watsonx ecosystem paves the way for AI solutions that are both intelligent and globally inclusive. Ready to innovate? Dive in and start building your own agentic framework today!
Next steps
You can continue to expand your skills with the following resources: