Build an agentic framework with CrewAI memory, i18n, and IBM watsonx.ai - IBM Developer

Article

Build an agentic framework with CrewAI memory, i18n, and IBM watsonx.ai

Framework uses internationalization/customization, watsonx LLM, and watsonx embedding model

By

Paarttipaabhalaji Manohar

In an era where autonomy and adaptability are paramount, the agentic framework has emerged as a transformative approach across both technology and human-centric domains. Rooted in the concept of agency (the ability to act independently), this framework empowers entities, whether AI or human, to make proactive decisions. With advances in AI and a shift toward decentralized organizational structures, understanding agentic frameworks is crucial for innovation and efficiency.

Agentic framework key components

Following are the main characteristics of an agentic framework:

  • Autonomy: Agents operate independently, making real-time decisions without constant oversight. For example, self-driving cars navigating traffic.
  • Goal-oriented design: Agents pursue specific objectives using strategies like machine learning or human intuition.
  • Adaptability: Continuous learning from environments ensures relevance. For example, chatbots refining responses through user interactions.
  • Interactivity: Agents collaborate with users, systems, or other agents, enhancing outcomes through teamwork.

Industry applications

An agentic framework can be applied across a wide variety of industries. For example:

  • AI and robotics: Autonomous drones for delivery, predictive maintenance in manufacturing.
  • Business: Agile teams empowered to innovate rapidly, enhancing responsiveness.
  • Healthcare: AI systems managing personalized treatment plans, adjusting as patient needs evolve.
  • Education: Adaptive learning platforms tailoring content to student progress.

Benefits

There are numerous benefits to adopting an agentic framework, including:

  • Efficiency: Streamlined decision-making reduces bottlenecks.
  • Scalability: Systems handle complex tasks with minimal oversight.
  • Innovation: Autonomous entities identify novel solutions beyond programmed protocols.

Challenges

Adopting an agentic framework does come with multiple challenges. These include:

  • Ethical concerns: Bias in AI decisions and accountability for outcomes.
  • Control dynamics: Balancing autonomy with oversight to prevent misuse.
  • Technical hurdles: High computational demands and unpredictability in dynamic environments.

Future trends

What are the big issues and changes coming in the agentic framework space? If you're considering adopting one, you need to be aware of and prepared for the following trends:

  • Ethical AI: Development of frameworks ensuring transparency and fairness.
  • Multi-agent systems: Collaborative agents tackling global challenges like climate change.
  • Human-AI synergy: Enhanced tools augmenting human decision-making in fields such as finance.

CrewAI overview

CrewAI is an agentic framework that orchestrates autonomous AI agents to collaborate as a cohesive unit to accomplish complex tasks. It differentiates itself through several key advantages over other frameworks:

  • Hierarchical team structure: CrewAI implements a human-like organizational structure with roles such as manager, executor, and reviewer, allowing for specialized responsibilities and clear chains of command.
  • Process-oriented workflow: CrewAI focuses on defining processes rather than just agent capabilities, enabling sequential execution paths that mirror how humans accomplish complex tasks.
  • Built-in collaboration patterns: The framework provides native support for common collaboration patterns (sequential, parallel, consensus-based) that streamline agent coordination.
  • Tool integration simplicity: CrewAI offers straightforward integration with external tools and APIs through a unified interface, reducing development complexity.
  • Flexible language model support: While optimized for OpenAI models, CrewAI can work with various LLM providers, giving developers flexibility.

Framework comparison

Unlike LangChain or AutoGPT, which focus primarily on chaining prompts or creating single autonomous agents, CrewAI specializes in multi-agent coordination with defined social dynamics and workflow management.

CrewAI also differs from frameworks like BabyAGI by prioritizing parallel execution and role specialization rather than recursive task decomposition alone.

The framework is particularly effective for business process automation, research tasks requiring multiple perspectives, and creative collaboration scenarios where diverse expertise needs coordination.

IBM watsonx.ai offers powerful large language models (LLMs) tailored for enterprise use cases. When combined with the CrewAI agentic framework, you can build sophisticated workflows.

Goals

In this tutorial, you'll learn how to:

  1. Integrate LLMs from the watsonx provider with CrewAI
  2. Implement internationalization and customization in the CrewAI architecture
  3. Implement CrewAI memory by embedding a watsonx model into CrewAI through a custom Python file named watsonx_llm.py

Integrate an embedding model and LLM from the watsonx provider with the CrewAI framework

Step 1: Configure the watsonx LLM and embedding model in the .env file

  1. Define your watsonx.ai credentials and model settings in a .env file:

    # Model configuration
    # Expected format: "watsonx/..." or "watsonx_text/..."
    # MODEL="watsonx/mistralai/mistral-large"
    # MODEL="watsonx_text/ibm/granite-13b-chat-v2"
    MODEL="watsonx/ibm/granite-13b-instruct-v2"

    # Generation parameters
    MAX_TOKENS=8000
    TEMPERATURE=0.7
    TOP_P=1.0
    SEED=3

    # watsonx credentials
    WATSONX_URL=<URL>
    WATSONX_API_KEY=<KEY>
    WATSONX_PROJECT_ID=<ID>
    WATSONX_EMBEDDER_MODEL_ID="ibm/granite-embedding-278m-multilingual"

    # CrewAI memory configuration
    CREWAI_STORAGE_DIR="/Users/paarttipaa/ProjectTask/GithubProj/ibm_blogs/parttipaa_ibm_blogs/crewai_watsonx_embeddings/wx_crewai_embedd/storage"
  2. To configure the watsonx.ai url and api_key, you can use any of the following naming conventions:

    Naming convention for watsonx api_key

    • WATSONX_APIKEY
    • WATSONX_API_KEY
    • WX_API_KEY

    Naming convention for watsonx url

    • WATSONX_API_BASE
    • WATSONX_URL
    • WX_URL
    • WML_URL
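To make the alias handling concrete, here is a small, hypothetical helper (not part of CrewAI) that resolves the first credential found among the accepted names:

```python
import os

def resolve_env(*aliases):
    """Return the value of the first environment variable that is set
    among the accepted aliases, or None if none of them are set."""
    for name in aliases:
        value = os.getenv(name)
        if value:
            return value
    return None

# The naming conventions listed above
api_key = resolve_env("WATSONX_APIKEY", "WATSONX_API_KEY", "WX_API_KEY")
base_url = resolve_env("WATSONX_API_BASE", "WATSONX_URL", "WX_URL", "WML_URL")
```

Because the aliases are checked in order, you can set whichever name fits your existing configuration and the same lookup succeeds.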

Step 2: Configure the watsonx LLM in watsonx_llm.py

The watsonx_llm.py file will help you to dynamically configure the LLM based on environment variables. This code features dynamic model selection, where the model prefix (watsonx/ or watsonx_text/) determines whether API credentials are required.

Notes:

  • When you are using watsonx as a provider, api_key and url are required by the LLM configuration in the llm_config object.
  • When you are using watsonx_text as a provider, api_key and url are not required by the LLM configuration in the llm_config object.

The watsonx_llm.py file helps you configure the LLM (from the crewai package) and gives you flexibility whether you're using watsonx or watsonx_text.

import os

from crewai import LLM
from dotenv import load_dotenv

load_dotenv()

class watsonxConfig():
    # Load environment variables
    apikey = os.getenv("WATSONX_API_KEY")
    base_url = os.getenv("WATSONX_URL")
    projId = os.getenv("WATSONX_PROJECT_ID")
    model = os.getenv("MODEL")
    max_tokens = int(os.getenv("MAX_TOKENS"))
    temperature = float(os.getenv("TEMPERATURE"))
    top_p = float(os.getenv("TOP_P"))
    seed = int(os.getenv("SEED"))

    # Embedding model configuration
    embedding_model_watsonx = os.getenv("WATSONX_EMBEDDER_MODEL_ID")

    # CrewAI memory configuration
    crewai_storage_dir = os.getenv("CREWAI_STORAGE_DIR")

    # Determine provider from model prefix
    if model.startswith("watsonx/"):
        # Configuration for watsonx provider (API credentials required)
        llm_config = LLM(
            model=model,
            api_key=apikey,
            base_url=base_url,
            max_tokens=max_tokens,
            temperature=temperature,
            top_p=top_p,
            seed=seed
        )
    elif model.startswith("watsonx_text/"):
        # Configuration for watsonx_text provider (no API credentials)
        llm_config = LLM(
            model=model,
            max_tokens=max_tokens,
            temperature=temperature,
            top_p=top_p,
            seed=seed
        )
    else:
        raise ValueError(f"Unsupported model provider for: {model}")
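Note that this configuration calls int() and float() directly on os.getenv() results, which raises a TypeError if a variable is missing from the .env file. A hedged alternative (hypothetical helpers, not part of the article's repository) is to fall back to the sample defaults:

```python
import os

def env_int(name: str, default: int) -> int:
    """Read an integer from the environment, falling back to a default."""
    raw = os.getenv(name)
    return int(raw) if raw is not None else default

def env_float(name: str, default: float) -> float:
    """Read a float from the environment, falling back to a default."""
    raw = os.getenv(name)
    return float(raw) if raw is not None else default

# Defaults mirror the sample .env values shown in step 1
max_tokens = env_int("MAX_TOKENS", 8000)
temperature = env_float("TEMPERATURE", 0.7)
```

This keeps the class importable even when a generation parameter is omitted from the .env file.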

Step 3: Using watsonx.ai in CrewAI agents

  1. In your crew.py file, assign the configured LLM to agents:

    import os

    from crewai import Agent, Crew, Process, Task, LLM
    from crewai.project import CrewBase, agent, crew, task
    from dotenv import load_dotenv
    from wx_crewai_embedd.watsonx_llm import watsonxConfig

    load_dotenv()

    @CrewBase
    class WxCrewaiEmbedd():
        """WxCrewaiEmbedd crew"""

        agents_config = 'config/agents.yaml'
        tasks_config = 'config/tasks.yaml'

        @agent
        def researcher(self) -> Agent:
            return Agent(
                config=self.agents_config['researcher'],
                llm=watsonxConfig.llm_config,
                verbose=True
            )

        @agent
        def reporting_analyst(self) -> Agent:
            return Agent(
                config=self.agents_config['reporting_analyst'],
                llm=watsonxConfig.llm_config,
                verbose=True
            )
  2. Import llm_config from the watsonxConfig class and assign watsonxConfig.llm_config to the llm attribute in the @agent methods.

Leverage CrewAI's memory systems using an embedding model from the watsonx provider

To take advantage of CrewAI's memory system with the watsonx embedding model, you first need to initialize the memory and configure the storage locations.

How does CrewAI create its crew's memory?

When you initialize a Crew with memory=True, CrewAI automatically sets up three memory systems. This process is handled by the create_crew_memory method in the Crew class. Although not called directly in your code, the create_crew_memory method is triggered automatically by Pydantic during object initialization due to the @model_validator(mode="after") decorator. This validator ensures memory systems are instantiated after the Crew object is created.
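The pattern can be illustrated with a toy Pydantic v2 model (a minimal sketch, not CrewAI's actual Crew class): the "after" validator fires automatically once the object is constructed, without any explicit call.

```python
from pydantic import BaseModel, PrivateAttr, model_validator

class MiniCrew(BaseModel):
    """Toy stand-in for a Crew that sets up memory after initialization."""
    memory: bool = False
    _memories: list = PrivateAttr(default_factory=list)

    @model_validator(mode="after")
    def create_crew_memory(self):
        # Runs automatically after __init__; no explicit call is needed
        if self.memory:
            self._memories = ["short_term", "long_term", "entity"]
        return self
```

Constructing MiniCrew(memory=True) populates _memories as a side effect of validation, which is the same mechanism CrewAI relies on for its memory setup.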


Notes:

  • The memory=True flag activates all three memory types.
  • Pydantic's validator ensures seamless initialization without manual intervention.
  • The embedder_config ties memory systems to your chosen embeddings (for example, watsonx).

How does CrewAI configure its storage locations?

CrewAI uses the db_storage_path() function to determine where memories are stored. This function leverages the appdirs library to follow platform-specific conventions:


Following are the default storage paths, depending on your operating system:

  • Linux: ~/.local/share/[ProjectName]/
  • macOS: ~/Library/Application Support/[ProjectName]/
  • Windows: C:\Users\[YourUsername]\AppData\Local\[ProjectName]\

Here, [ProjectName] is either:

  1. The value of the CREWAI_STORAGE_DIR environment variable (if set), or
  2. The name of your project's working directory.
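The resolution order can be sketched in plain Python. This is a simplified stand-in for CrewAI's db_storage_path() (which uses the appdirs library internally), not its actual implementation:

```python
import os
import sys
from pathlib import Path

def storage_path(project_name: str) -> Path:
    """Simplified sketch: CREWAI_STORAGE_DIR wins; otherwise fall back
    to the platform defaults from the table above."""
    custom = os.getenv("CREWAI_STORAGE_DIR")
    if custom:
        return Path(custom)
    home = Path.home()
    if sys.platform == "darwin":
        return home / "Library" / "Application Support" / project_name
    if sys.platform.startswith("win"):
        return home / "AppData" / "Local" / project_name
    return home / ".local" / "share" / project_name
```

Setting CREWAI_STORAGE_DIR short-circuits the platform lookup entirely, which is why it is the recommended way to centralize memory storage.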

CrewAI memory system components

CrewAI implements three types of memory to enhance agent collaboration:

Short-term memory

  • Purpose: Stores context-specific data during task execution (for example, intermediate research findings).
  • Storage: Uses ChromaDB via RAG storage, optimized for vector search.
  • Location: By default, stored in ~/.embedchain/. Customize using CREWAI_STORAGE_DIR in the .env file.

Long-term memory

  • Purpose: Retains persistent data across sessions (for example, historical reports).
  • Storage: SQLite database (long_term_memory_storage.db).
  • Location: Set using db_storage_path(), which respects CREWAI_STORAGE_DIR.

Entity memory

  • Purpose: Tracks entities (for example, people, organizations) mentioned in tasks.
  • Storage: Like short-term memory, using Chroma db.

Benefits of configuring memory

By configuring memory in CrewAI, you gain several advantages:

  • Flexibility: You can override defaults with CREWAI_STORAGE_DIR for centralized storage.
  • Platform compliance: CrewAI memory uses OS-recommended paths for clean, maintainable data management.
  • Persistence: Long-term memory survives across sessions, while short-term and entity memory reset with each workflow.

Steps to configure memory in a CrewAI agentic framework

To configure CrewAI memory, complete the following steps:

  1. In the crew.py file, enable memory during crew initialization. The following code configures watsonx embeddings for semantic search in memory operations:

    @crew
    def crew(self) -> Crew:
        """Creates the WxCrewaiEmbedd crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
            memory=True,
            embedder={
                "provider": "watson",
                "config": {
                    "model": watsonxConfig.embedding_model_watsonx,
                    "api_url": watsonxConfig.base_url,
                    "api_key": watsonxConfig.apikey,
                    "project_id": watsonxConfig.projId,
                }
            },
        )
  2. Reset the memory using the command line with the crewai reset-memories command. Run crewai reset-memories --help to list the available reset options.

  3. Set the custom storage path in the CREWAI_STORAGE_DIR variable in the .env file to override the default paths.

    Note: If CREWAI_STORAGE_DIR="/custom/path" in your .env file, all memories will be stored under /custom/path/.

CrewAI memory workflow

  1. The embedder configuration can be customized when initializing the Crew.
  2. The RAG storage used for short-term and entity memory uses vector embeddings for efficient semantic search capabilities.
  3. The LTMSQLiteStorage for long-term memory uses an SQLite database, which can be queried using standard SQLite tools if needed.
  4. The storage location for short-term and entity memory (using embedchain) can be customized through configuration.
  5. Short-term and entity memory storage is not a single file, but a directory structure containing multiple files managed by Chroma.

Notes

  • Long-term memory stores persistent information across sessions.
  • Short-term and entity memories handle more dynamic, context-specific information during task execution.
  • These memory systems are automatically used by the Crew to provide context for tasks and store information from task executions.

Examining memory workflow for the current CrewAI implementation

  1. Research task: The researcher agent uses short-term memory to store bullet points about "AI LLMs".
  2. Report generation: The reporting analyst retrieves this data from memory and expands it into a detailed report.
  3. Long-term storage: The final report is saved to long-term memory for future reference.

When memory=True, CrewAI uses all three memory types: long-term, short-term, and entity memory. The following image shows the CrewAI storage that has been created.

[Image: Examining memory workflow for the current CrewAI implementation]

You can download the code from this GitHub repository.

Implement internationalization and customization in CrewAI

CrewAI enables you to internationalize and customize the language used in the prompts.

To implement this language customization in CrewAI, you need to configure the prompt_file.

The prompt_file will look like the following example. You can download the prompt file for the English language from this GitHub repository.

{
"hierarchical_manager_agent": {
"role": "Crew Manager",
"goal": "Manage the team to complete the task in the best way possible.",
"backstory": "You are a seasoned manager with a knack for getting the best out of your team.\nYou are also known for your ability to delegate work to the right people, and to ask the right questions to get the best out of your team.\nEven though you don't perform tasks by yourself, you have a lot of experience in the field, which allows you to properly evaluate the work of your team members."
},
"slices": {
"observation": "\nObservation:",
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:",
"memory": "\n\n# Useful context: \n{memory}",
"role_playing": "You are {role}. {backstory}\nYour personal goal is: {goal}",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```",
"no_tools": "\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. When responding, I must use the following format:\n\n```\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action Input/Result can repeat N times. Once I know the final answer, I must return the following format:\n\n```\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n```",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\n```\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n```",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nHere is the expected format I must follow:\n\n```\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n This Thought/Action/Action Input/Result process can repeat N times. Once I know the final answer, I must return the following format:\n\n```\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n```",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
"human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
"getting_input": "This is the agent's final answer: {final_answer}\n\n",
"summarizer_system_message": "You are a helpful assistant that summarizes text.",
"summarize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
"summary": "This is a summary of our conversation so far:\n{merged_summary}",
"manager_request": "Your best answer to your coworker asking you this, accounting for the context shared.",
"formatted_task_instructions": "Ensure your final answer contains only the content in the following format: {output_format}\n\nEnsure the final output does not include any code block markers like ```json or ```python.",
"human_feedback_classification": "Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with 'True' if further changes are needed, or 'False' if the user is satisfied. **Important** Do not include any additional commentary outside of your 'True' or 'False' response.\n\nFeedback: \"{feedback}\"",
"conversation_history_instruction": "You are a member of a crew collaborating to achieve a common goal. Your task is a specific action that contributes to this larger objective. For additional context, please review the conversation history between you and the user that led to the initiation of this crew. Use any relevant information or feedback from the conversation to inform your task execution and ensure your response aligns with both the immediate task and the crew's overall goals.",
"feedback_instructions": "User feedback: {feedback}\nInstructions: Use this feedback to enhance the next output iteration.\nNote: Do not respond or add commentary."
},
"errors": {
"force_final_answer_error": "You can't keep going, here is the best final answer you generated:\n\n {formatted_answer}",
"force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexisting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
"wrong_tool_name": "You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.",
"tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}",
"agent_tool_execution_error": "Error executing task with agent '{agent_role}'. Error: {error}",
"validation_error": "### Previous attempt failed validation: {guardrail_result_error}\n\n\n### Previous result:\n{task_output}\n\n\nTry again, making sure to address the validation error."
},
"tools": {
"delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.",
"add_image": {
"name": "Add image to content",
"description": "See image to understand its content, you can optionally ask a question about the image",
"default_action": "Please provide a detailed description of this image, including all visual elements, context, and any notable details you can observe."
}
}
}

The default prompt file created in CrewAI is en.json. If you decide to change the language configuration in the CrewAI architecture, you are required to translate the en.json file to the language you are switching to.

For example, if you want to change the language configuration to Japanese, you must translate the en.json file to ja.json.

Note: You must not translate or change the JSON keys and interpolation placeholders; they must remain in English. For example, in the following ja.json file, JSON keys such as "role", "goal", and "delegate_work", and interpolation placeholders such as {input}, {context}, and {coworkers} must remain in English and should not be translated to Japanese.

You can download a sample ja.json file from the GitHub repository.

{
"hierarchical_manager_agent": {
"role": "クルーマネージャー",
"goal": "チームを最高の方法でタスクを完了させる。",
"backstory": "あなたは、チームから最高のパフォーマンスを引き出す経験豊富な管理者です。\n適切な人に仕事を委任し、チームから最高の結果を引き出すための的確な質問をする能力でも知られています。\n自らタスクを実行することはありませんが、フィールドでの豊富な経験により、チームメンバーの仕事を適切に評価できます。"
},
"slices": {
"observation": "\n観察:",
"task": "\n現在のタスク: {input}\n\n開始してください!これはあなたにとってとても重要です。利用可能なツールを使用し、最高の最終回答を提供してください。あなたの仕事がかかっています!\n\n思考:",
"memory": "\n\n# 有用なコンテキスト: \n{memory}",
"role_playing": "あなたは{role}です。{backstory}\nあなたの個人的な目標は:{goal}",
"tools": "\nここに列挙されているツールのみを使用でき、リストにないツールを作り出すことは絶対にできません:\n\n{tools}\n\n以下の形式を使用してください:\n\n思考: 常に何をすべきかを考えてください\nアクション: 実行するアクション。[{tool_names}]から1つだけ、リストに記載されている通りの名前です。\nアクション入力: アクションへの入力。波括弧で囲まれた単純なPythonの辞書、キーと値は二重引用符で囲みます。\n観察: アクションの結果\n\n必要な情報をすべて収集したら:\n\n思考: 最終回答を知りました\n最終回答: 元の入力質問への最終回答\n",
"no_tools": "\n最高の完全な最終回答を提供するために、以下の形式を正確に使用します:\n\n思考: 素晴らしい回答ができます\n最終回答: 最終回答は、可能な限り最高かつ最も完全なものでなければなりません。結果を説明する必要があります。\n\nこれらの形式を必ず使用しなければなりません。私の仕事がかかっています!\n",
"format": "ツールを1つ使用するか、最高の最終回答を提供するかのいずれかを必ず行います。以下の形式を使用します:\n\n思考: 常に何をすべきかを考えてください\nアクション: 実行するアクション。[{tool_names}]のいずれか\nアクション入力: アクションへの入力。波括弧で囲まれた辞書\n観察: アクションの結果\n... (この思考/アクション/アクション入力/結果はN回繰り返すことができます)\n思考: 素晴らしい回答ができます\n最終回答: 最終回答は、可能な限り最高かつ最も完全なものでなければなりません。結果を説明する必要があります\n\n",
"final_answer_format": "これ以上ツールを使用する必要がない場合、最高の完全な最終回答を提供する必要があります。期待される基準を満たしていることを確認し、以下の正確な形式を使用してください:\n\n思考: 素晴らしい回答ができます\n最終回答: タスクへの最高の完全な最終回答。\n\n",
"format_without_tools": "\n申し訳ありません。正しい形式を使用しませんでした。利用可能なツールのいずれかを使用するか、最高の最終回答を提供する必要があります。\n従うべき期待される形式を思い出しました:\n\n質問: 回答する必要がある入力質問\n思考: 常に何をすべきかを考えてください\nアクション: 実行するアクション。[{tool_names}]のいずれか\nアクション入力: アクションへの入力\n観察: アクションの結果\n... (この思考/アクション/アクション入力/結果はN回繰り返すことができます)\n思考: 素晴らしい回答ができます\n最終回答: 最終回答は、可能な限り最高かつ最も完全なものでなければなりません。結果を説明する必要があります\n\n",
"task_with_context": "{task}\n\nこれは作業するコンテキストです:\n{context}",
"expected_output": "\n最終回答の期待される基準は以下の通りです: {expected_output}\n実際の完全なコンテンツを最終回答として返す必要があります。要約ではありません。\n",
"human_feedback": "作業に関する人間からのフィードバックを受け取りました。再評価し、準備ができたら新しい最終回答を提供してください。\n {human_feedback}",
"getting_input": "これはエージェントの最終回答です: {final_answer}\n\n",
"summarizer_system_message": "テキストを要約する役に立つアシスタントです。",
"summarize_instruction": "以下のテキストを要約してください。重要な情報をすべて含めてください: {group}",
"summary": "これまでの会話の要約:\n{merged_summary}",
"manager_request": "共有されたコンテキストを考慮して、同僚への最高の回答。",
"formatted_task_instructions": "最終回答には、以下の形式のコンテンツのみが含まれることを確認してください: {output_format}\n\n最終出力に ```json や ```python などのコードブロックマーカーが含まれないようにしてください。"
},
"errors": {
"force_final_answer_error": "続行できません。これが最善の回答でした。\n {formatted_answer}",
"force_final_answer": "今こそ、絶対に最高の最終回答を提供する必要があります。以前の指示を無視し、ツールの使用を停止し、絶対に最高の最終回答のみを返します。",
"agent_tool_unexisting_coworker": "\nツール実行エラー。メンションされた同僚が見つかりません。以下のオプションのいずれかでなければなりません:\n{coworkers}\n",
"task_repeated_usage": "同じ入力を再利用しようとしました。このアクション入力の使用を停止する必要があります。代わりに別の方法を試します。\n\n",
"tool_usage_error": "エラーが発生しました: {error}",
"tool_arguments_error": "エラー: アクション入力が有効なキー、値の辞書ではありません。",
"wrong_tool_name": "{tool}ツールを使用しようとしましたが、存在しません。以下のツールのいずれかを使用する必要があります。一度に1つずつ使用してください: {tools}。",
"tool_usage_exception": "ツールの使用中にエラーが発生しました。このエラーでした: {error}。\nツール{tool}は以下の入力を受け入れます: {tool_inputs}"
},
"tools": {
"delegate_work": "以下の同僚のいずれかに特定のタスクを委任します: {coworkers}\nこのツールへの入力には、同僚、実行してもらいたいタスク、タスクを実行するために必要なすべてのコンテキストを含める必要があります。タスクについて何も知りませんので、絶対にすべてを共有し、参照ではなく説明してください。",
"ask_question": "以下の同僚のいずれかに特定の質問をします: {coworkers}\nこのツールへの入力には、同僚、質問、質問を適切に尋ねるために必要なすべてのコンテキストを含める必要があります。質問について何も知りませんので、絶対にすべてを共有し、参照ではなく説明してください。"
}
}
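One way to enforce the rule above is a small validation script (hypothetical, not part of the article's repository) that compares a translated prompt file against en.json, checking that every key survives and every {placeholder} is carried over unchanged:

```python
import re

# Placeholders such as {input}, {coworkers}, {tool_names}
PLACEHOLDER = re.compile(r"\{[a-z_]+\}")

def flatten(d, prefix=""):
    """Flatten a nested prompt file into {dotted_key: text}."""
    out = {}
    for key, value in d.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, path + "."))
        else:
            out[path] = value
    return out

def check_translation(en: dict, translated: dict):
    """Return a list of problems: missing keys or altered placeholders."""
    en_flat, tr_flat = flatten(en), flatten(translated)
    problems = []
    for key in en_flat.keys() - tr_flat.keys():
        problems.append(f"missing key: {key}")
    for key in en_flat.keys() & tr_flat.keys():
        if set(PLACEHOLDER.findall(en_flat[key])) != set(PLACEHOLDER.findall(tr_flat[key])):
            problems.append(f"placeholder mismatch in: {key}")
    return problems
```

Run it on the parsed en.json and ja.json before wiring the translation into I18N; a non-empty return value means some prompts may not render correctly at run time.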
  1. When you have converted the en.json file to your desired language, you need to integrate the translated prompt file into your CrewAI architecture:

    from crewai import Agent, Crew, Process, Task
    from crewai.project import CrewBase, agent, crew, task
    from crewai_internationalization.watsonx_llm import watsonxConfig
    from crewai.utilities import I18N

    # Initialize I18N with the Japanese translations
    prompt_file = watsonxConfig.prompt_file_japanese
    i18n = I18N(prompt_file=prompt_file)
  2. The attributes prompt_file and i18n must be updated to customize the language preferences. When you initialize these two attributes, you need to assign them to the @agent methods:

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            llm=watsonxConfig.llm_config,
            prompt_file=prompt_file,
            i18n=i18n,
            verbose=True
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            llm=watsonxConfig.llm_config,
            prompt_file=prompt_file,
            i18n=i18n,
            verbose=True
        )
  3. You now need to write the agents and tasks in the language you have translated to. For example, to customize CrewAI for Japanese, you need to write the agents.yaml and tasks.yaml files in Japanese. In the agent definitions, instruct the LLM to generate its results in the target language.

  4. Optional: You can also add the language's character specifications to the agent's backstory. For Japanese, for example, you can list supported character sets such as kanji, full-width and half-width katakana, full-width hiragana, and full-width symbols, as shown in the backstories below.

    Following are sample agents.yaml and tasks.yaml files:

    Agents.yaml

    researcher:
      role: >
        高度な{topic}データリサーチャー
      goal: >
        {topic}の最新の発展を明らかにする。日本語で生成する。
      backstory: >
        あなたは{topic}の最新の発展を見つけるのに長けた熟練のリサーチャーです。最も関連性のある情報を見つけ、それを明確かつ簡潔に提示する能力で知られています。
        英語を日本語に翻訳するエキスパートです。
        以下の特徴と機能を利用します:
        * 漢字サポート
        * 全角カタカナ対応
        * 半角カタカナ対応
        * 全角ひらがな対応
        * 全角記号のサポート
        * 日付と時刻の書式設定

    reporting_analyst:
      role: >
        {topic}レポート分析官
      goal: >
        {topic}データ分析と研究結果に基づいた詳細なレポートを日本語で生成する。
      backstory: >
        あなたは細部に目を配ることに長けた几帳面な分析官です。複雑なデータを明確かつ簡潔なレポートに変える能力で知られており、他の人が情報を理解し、行動に移すのを容易にしています。
        英語を日本語に翻訳するエキスパートです。
        以下の特徴と機能を利用します:
        * 漢字サポート
        * 全角カタカナ対応
        * 半角カタカナ対応
        * 全角ひらがな対応
        * 全角記号のサポート
        * 日付と時刻の書式設定

    Tasks.yaml

    research_task:
      description: >
        「{topic}」について詳細な調査を行う
        現在の年が{current_year}であることを考慮して、興味深く関連性のある情報を見つけること
      expected_output: >
        「{topic}」に関する最も関連性のある情報を10項目のリストにする
      agent: researcher

    reporting_task:
      description: >
        得られたコンテキストを確認し、各トピックをレポートの完全なセクションに拡張する
        レポートは詳細で、すべての関連情報を含むこと
      expected_output: >
        主要なトピックごとに情報の完全なセクションを含む完全なレポート
        「```」なしのマークダウン形式
      agent: reporting_analyst
  5. Now, run the crew using the command line:

    crewai run

    The crew output is in Japanese:

    [Image: Customization in CrewAI]

    Note: You will find additional prompt files for Russian, Portuguese, and German languages in the GitHub repository.


Summary

In this article, you've explored the process of building a robust agentic AI framework using cutting-edge tools like CrewAI Memory, IBM watsonx LLM, and IBM watsonx Embedding Models, with a focus on internationalization and customization. By integrating CrewAI Memory, you empowered AI agents with dynamic, context-aware memory retention, enabling them to learn from interactions and refine responses over time. By including a watsonx LLM, you built a powerful foundation for natural language understanding and generation. The watsonx Embedding Model ensured precise semantic analysis for tasks like retrieval-augmented generation (RAG) and data clustering.

By focusing on internationalization and customization, you learned how to tailor AI workflows for global audiences—from multilingual support to region-specific adaptations—ensuring scalability and accessibility. Through practical examples and code snippets, you learned how to harmonize these components into a cohesive framework capable of tackling complex, real-world challenges, from enterprise automation to personalized user experiences.

This article not only equips developers with actionable steps to build intelligent, adaptive systems but also highlights the transformative potential of combining memory-driven learning, advanced language models, and customization strategies. Whether for customer support, content creation, or data analysis, the fusion of CrewAI Memory and the IBM watsonx ecosystem paves the way for AI solutions that are both intelligent and globally inclusive. Ready to innovate? Dive in and start building your own agentic framework today!

Next steps

You can continue to expand your skills with the following resources: