Create a LangChain AI Agent in Python using watsonx

Discover how to create an AI agent with IBM Granite models that can answer questions

In this tutorial, we will use the LangChain Python package to build an AI agent that uses custom tools to return a URL pointing to NASA's Astronomy Picture of the Day.

An artificial intelligence (AI) agent is a system that performs tasks on behalf of a user or another system by designing its own workflow and utilizing available tools.

Overview of AI agents

Chatbots are one of the most common applications of agentic AI. However, agentic technology encompasses a much wider range of functions, including planning, problem-solving, interacting with external environments, and executing actions. These agents can be deployed to solve complex tasks in various enterprise contexts. From software design and IT automation to code-generation tools and conversational AI assistants, AI agents leverage the capability of large language models (LLMs) to work step by step. For this reason, they are also known as LLM agents.

Key processes that make AI agents unique in their autonomy are:

  • Goal initialization and planning. Although AI agents are autonomous in their planning of future actions, they require goals and environments defined by humans. For simple tasks, planning is not a necessary step. Instead, an agent can iteratively reflect on its responses and improve them without planning its next steps.

  • Reasoning using available tools. An AI agent’s plan of action is based on the information it perceives. Often, AI agents do not have the full knowledge base that is needed for tackling all subtasks within a complex goal. To remedy this, AI agents use their available tools. These tools can include external datasets, algorithms, search tools, APIs and even other agents. We can instruct agents to "think" slowly, plan ahead and display each "thought" by aligning the prompt structure with a ReAct (Reasoning and Action) framework. These loops of thinking, acting and responding are used to solve problems step by step. The verbal reasoning produced gives insight into how responses are being formulated.

  • Learning and reflection. AI agents use feedback mechanisms, such as other AI agents and human-in-the-loop (HITL), to improve the accuracy of their responses.

Traditional LLMs, like OpenAI's GPT-3 (Generative Pre-trained Transformer) model, Meta's Llama models, and the IBM Granite models that we will use in this tutorial, are limited in their knowledge and reasoning. They produce responses based on the data used to train them, which can often include out-of-date information. In contrast, agentic technology uses tool calling on the backend to obtain up-to-date information, optimize workflows, and create specific tasks autonomously to achieve complex goals. In this process, the autonomous agent learns to adapt to user expectations over time, providing a personalized experience and comprehensive responses. This tool calling can be achieved without human intervention and broadens the possibilities for real-world applications of these AI systems.

We encourage you to check out our AI Agents article for more in-depth information on the various AI agent types and their abstractions.

Prerequisites

You need an IBM Cloud account to create a watsonx.ai project.

Steps

Step 1. Set up your environment

While you can choose from several tools, this tutorial walks you through how to set up an IBM account to use a Jupyter Notebook. Jupyter Notebooks are widely used within data science to combine code, text, images, and data visualizations into a well-formed analysis.

  1. Log in to watsonx.ai using your IBM Cloud account.

  2. Create a watsonx.ai project.

    Take note of the project ID in project > Manage > General > Project ID. You’ll need this ID for this tutorial.

  3. Create a Jupyter Notebook.

This step will open a Notebook environment where you can copy the code from this tutorial to implement an AI agent of your own. Alternatively, you can download this notebook to your local system and upload it to your watsonx.ai project as an asset.

Step 2. Set up a watsonx.ai Runtime instance and API key

  1. Create a watsonx.ai Runtime service instance (select your appropriate region and choose the Lite plan, which is a free instance).

  2. Generate an API Key.

  3. Associate the watsonx.ai Runtime service instance to the project that you created in watsonx.ai.

Step 3. Install and import relevant libraries and set up your credentials

We'll need a few libraries and modules for this tutorial. Make sure to import the ones below, and if they're not installed, you can resolve this with a quick pip install. LangChain will be the framework and developer toolkit used.

#installations
%pip install langchain
%pip install langchain_ibm
%pip install langchain_core
%pip install IPython
%pip install nasapy
#imports
import nasapy
import getpass

from langchain_core.tools import tool
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import JSONAgentOutputParser
from langchain.memory import ConversationBufferMemory
from langchain.tools.render import render_text_description_and_args
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_ibm import WatsonxLLM
from datetime import datetime

Set up your credentials. Input your API key and project ID.

credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": getpass.getpass("Please enter your watsonx.ai Runtime API key (hit enter): "),
    "project_id": getpass.getpass("Please enter your project ID (hit enter): ")
}

Let's establish our connection with the NASA API that we will use later in this tutorial. You can get started without registering by using 'DEMO_KEY' as your key, although it is subject to a low rate limit. To avoid that limit, you can register for your own NASA API key and substitute it as the key value below. Registering for a personal key is quick, free, and simple.

n = nasapy.Nasa(key='DEMO_KEY')
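
Before moving on, you can optionally sanity-check the connection. The following sketch assumes that nasapy's picture_of_the_day() defaults to today's date when no date is passed and returns a dictionary containing 'title' and 'url' keys, which the tool we build in Step 5 relies on.

# Optional sanity check: fetch today's APOD metadata directly
apod = n.picture_of_the_day(hd=True)  # date defaults to today
print(apod.get("title"))
print(apod.get("url"))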

Step 4. Initialize a basic agent with no tools

This step is important as it will produce a clear example of an agent's behavior with and without external data sources. Let's start by setting our parameters.

The model parameters available can be found here. We experimented with various model parameters, including temperature, minimum and maximum new tokens, and stop sequences. Learn more about model parameters and what they mean in the watsonx docs. It is important to set stop_sequences here in order to limit agent hallucinations. This tells the model to stop producing further output upon encountering particular substrings. In our case, we want the agent to end its response upon reaching an observation rather than hallucinating one. Hence, our stop sequence is '\n\n', as a blank line usually indicates the final output.

param = {
    "decoding_method": "greedy",
    "temperature": 0,
    "min_new_tokens": 5,
    "max_new_tokens": 250,
    "stop_sequences": ["\n\n"]
}

For this tutorial, we suggest using IBM's Granite 13B Chat model as the LLM to achieve similar results. You are free to use any AI model of your choice. The foundation models available through watsonx can be found here. The purpose of these models in LLM applications is to serve as the reasoning engine that decides which actions to take.

model = WatsonxLLM(
  model_id = "ibm/granite-13b-chat-v2",
  url = credentials.get("url"),
  apikey = credentials.get("apikey"),
  project_id = credentials.get("project_id"),
  params = param
)

In the next step of this tutorial, we will be creating a tool that retrieves today's date. As we have covered, traditional LLMs cannot obtain the current date on their own. Let's verify this.

model.invoke("What is today's date?")

Output:

\n\nA: Today is [display current date].\n\n

Evidently, the LLM is unable to provide us with the current date. The model's training data only contains information from before today's date, and without the appropriate tools, the model does not have access to real-time information.

Step 5. Define the agent's tools

Unlike traditional LLMs, AI agents can provide more comprehensive responses to diverse tasks through their tool usage, memory, and planning. Agents can use built-in tools such as the Wikipedia API tool available through the langchain_community package. We can also build custom agents that load personalized tools. Our agent in this tutorial will have two custom tools available to use:

  • get_todays_date() - uses the Python datetime package to return today's date in YYYY-MM-DD format.
  • get_astronomy_image() - utilizes the NASA API to obtain the Astronomy Picture of the Day. After the tool acquires the image, its URL is returned.

@tool
def get_todays_date() -> str:
    """Get today's date in YYYY-MM-DD format."""
    date = datetime.now().strftime("%Y-%m-%d")
    return date


@tool(return_direct=True)
def get_astronomy_image(date: str):
    """Get NASA's Astronomy Picture of the Day on given date. The date is formatted as YYYY-MM-DD."""
    apod = n.picture_of_the_day(date, hd=True)
    return apod['url']


tools = [get_todays_date, get_astronomy_image]
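
Before wiring these tools into a prompt, it can be helpful to see what the @tool decorator generated. As a minimal sketch using the standard name, description, and args attributes that LangChain tools expose, you can inspect each tool's metadata (which Step 6 renders into the prompt) and even invoke a tool directly:

# Inspect the metadata that the prompt template will be built from
for t in tools:
    print(t.name, "-", t.description)
    print("  args:", t.args)

# Tools can also be called directly for a quick test
print(get_todays_date.invoke({}))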

Step 6. Establish the prompt template

Next, we will set up a new prompt template to ask multiple questions. This template is more complex. It is referred to as a structured chat prompt and is used for creating agents that have multiple tools available. It will be made up of a system_prompt, a human_prompt, and the tools we defined in Step 5.

First, we will set up the system_prompt. This prompt instructs the agent to print its "thought process," which involves the agent's subtasks, the tools that were used and the final output. This gives us insight into the agent's function calling. The prompt also instructs the agent to return its responses in JSON Blob format and to consider information it has stored in its memory.

system_prompt = """Respond to the human as helpfully and accurately as possible. You have access to the following tools: {tools}
Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
Valid "action" values: "Final Answer" or {tool_names}
Provide only ONE action per $JSON_BLOB, as shown:
```
{{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}}
```
Follow this format:
Question: input question to answer
Thought: consider previous and subsequent steps
Action:
```
$JSON_BLOB
```
Observation: action result
... (repeat Thought/Action/Observation N times)
Thought: I know what to respond
Action:
```
{{
  "action": "Final Answer",
  "action_input": "Final response to human"
}}
```
Begin! Reminder to ALWAYS respond with a valid json blob of a single action.
Respond directly if appropriate. Format is Action:```$JSON_BLOB``` then Observation"""

In the following code, we are establishing the human_prompt. This prompt tells the agent to display the user input followed by the intermediate steps taken by the agent as part of the agent_scratchpad.

human_prompt = """{input}
{agent_scratchpad}
(reminder to always respond in a JSON blob)"""

Next, we establish the order of our newly defined prompts. We create this new template to feature the system_prompt followed by an optional list of messages collected in the agent's memory, if any, and finally, the human_prompt which includes both the human input and agent_scratchpad.

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", human_prompt),
    ]
)

Now, let's finalize our prompt template by adding the tool names, descriptions, and arguments using a partial prompt template. This allows the agent to access the information pertaining to each tool, including the intended use cases. It also means we can add and remove tools without altering our entire prompt template.

prompt = prompt.partial(
    tools=render_text_description_and_args(list(tools)),
    tool_names=", ".join([t.name for t in tools]),
)
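
If you would like to verify the result, here is an optional sketch that previews the fully rendered prompt using ChatPromptTemplate's standard format_messages() method. The optional chat_history placeholder can simply be omitted, and agent_scratchpad starts out empty.

# Preview the rendered prompt the LLM will receive (optional)
preview = prompt.format_messages(
    input="What is today's date?",
    agent_scratchpad="",
)
print(preview[0].content[:500])  # first 500 characters of the system message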

Step 7. Set up the agent's memory and chain

An important feature of AI agents is their memory. Agents are able to store past conversations and past findings in their memory to improve the accuracy and relevance of their responses going forward. In our case, we will use LangChain's ConversationBufferMemory() as a means of memory storage.

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
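
To see the shape of what gets stored, the following optional sketch adds a throwaway exchange, prints the buffered history, and then clears it so the agent starts fresh. The add_user_message, add_ai_message, load_memory_variables, and clear calls are standard ConversationBufferMemory APIs.

# Optional: demonstrate how the buffer stores turns, then reset it
memory.chat_memory.add_user_message("Hello")
memory.chat_memory.add_ai_message("Hi! How can I help?")
print(memory.load_memory_variables({})["chat_history"])
memory.clear()  # start the agent with an empty history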

And now we can set up a chain with our LangChain agent's scratchpad, memory, prompt, and the LLM. We then use the AgentExecutor class to combine the agent chain with the tools it can use, its memory, an error-handling approach for parsing failures, and a verbose parameter.

chain = (
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_log_to_str(x["intermediate_steps"]),
        chat_history=lambda x: memory.chat_memory.messages,
    )
    | prompt
    | model
    | JSONAgentOutputParser()
)

agent_executor = AgentExecutor(agent=chain, tools=tools, verbose=True, memory=memory, handle_parsing_errors=True)

Step 8. Generate responses with the AI agent

We are now able to ask the agent questions. Recall the agent's previous inability to provide us with the current date. Now that the agent has its tools available to use, let's try asking the same question again.

agent_executor.invoke({"input": "What is today's date?"})

Output:

> Entering new AgentExecutor chain...
Action:
```
{
  "action": "get_todays_date",
  "action_input": {}
}
```
2024-09-27
Action:
```
{
  "action": "Final Answer",
  "action_input": "Today's date is 2024-09-27."
}
```
> Finished chain.

{'input': "What is today's date?",
 'history': '',
 'output': "Today's date is 2024-09-27."}

Let's also test whether the agent can perform basic date arithmetic in addition to using the date tool to retrieve previous dates.

agent_executor.invoke({"input": "What day was it 4 days ago?"})

Output:

> Entering new AgentExecutor chain...
```
{
  "action": "get_todays_date",
  "action_input": {}
}
```
2024-09-27
Action:
```
{
  "action": "get_todays_date",
  "action_input": {}
}
```
2024-09-27
Action:
```
{
  "action": "Final Answer",
  "action_input": "Four days ago was on 2024-09-23."
}
```
> Finished chain.

{'input': 'What day was it 4 days ago?',
 'history': "Human: What is today's date?\nAI: Today's date is 2024-09-27.",
 'output': 'Four days ago was on 2024-09-23.'}

Great! The agent is now able to tell us the current date and previous dates.

Now, let's try asking a more complex question.

agent_executor.invoke({"input": "What is NASA's Astronomy Picture of the Day for today?"})

Output:

> Entering new AgentExecutor chain...
{
  "action": "get_astronomy_image",
  "action_input": {
    "date": "2024-09-27"
  }
}

https://apod.nasa.gov/apod/image/2409/SSSGreatestHits1024.png

> Finished chain.

{'input': "What is NASA's Astronomy Picture of the Day for today?",
 'history': "Human: What is today's date?\nAI: Today's date is 2024-09-27.\nHuman: What day was it 4 days ago?\nAI: Four days ago was on 2024-09-23.",
 'output': 'https://apod.nasa.gov/apod/image/2409/SSSGreatestHits1024.png'}

Neat! The agent used both of its available tools to return a URL that leads to today's Astronomy Picture of the Day via NASA's API. Because we asked for today's image, the agent used the current date stored in memory as input for the get_astronomy_image tool. Additionally, as the history output shows, the agent successfully updates its conversation memory with each interaction.

To check out the image, click the URL your agent produces or copy and paste it into a browser. Please note that your agent will generate a different link than the one shown above since the dates will differ.
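
Alternatively, since we installed IPython in Step 3, you can render the result inline in your notebook. This is a sketch under one assumption: the URL your agent returned points to an image (on days when the Astronomy Picture of the Day is a video, the URL will not render this way, and clicking the link is the safer route). Substitute the URL your own agent produced; the one below is taken from the sample output above.

from IPython.display import Image, display

# Paste the URL your agent returned in place of this sample
display(Image(url="https://apod.nasa.gov/apod/image/2409/SSSGreatestHits1024.png"))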

Let's also test whether the agent is able to produce the image using NASA's API from 2 days ago.

agent_executor.invoke({"input": "Show me NASA's Astronomy Picture using the date from 2 days ago."})

Output:

> Entering new AgentExecutor chain...
{
  "action": "get_astronomy_image",
  "action_input": {"date": "2024-09-23"}
}

https://apod.nasa.gov/apod/image/2409/Comet23A3_Valente_960.jpg

> Finished chain.

{'input': "Show me NASA's Astronomy Picture using the date from 2 days ago.",
 'history': "Human: What is today's date?\nAI: Today's date is 2024-09-27.\nHuman: What day was it 4 days ago?\nAI: Four days ago was on 2024-09-23.\nHuman: What is NASA's Astronomy Picture of the Day for today?\nAI: https://apod.nasa.gov/apod/image/2409/SSSGreatestHits1024.png",
 'output': 'https://apod.nasa.gov/apod/image/2409/Comet23A3_Valente_960.jpg'}

Success!

Summary and next steps

In this tutorial, you created an AI agent using LangChain in Python with watsonx. You created a tool to return today's date and another tool to return NASA's Astronomy Picture of the Day for a given date using NASA's open API.

The sample output is important as it shows the steps the agent took in creating its own agent workflow using available tools. In our case, the LLM on its own was not able to achieve the first subtask of the problem: finding the current date. Hence, the tools granted to the agent were vital for achieving the goal.

We encourage you to check out the LangChain documentation page for more information and tutorials on AI agents.

Try watsonx for free

Build an AI strategy for your business on one collaborative AI and data platform called IBM watsonx, which brings together new generative AI capabilities, powered by foundation models, and traditional machine learning into a powerful platform spanning the AI lifecycle. With watsonx.ai, you can train, validate, tune, and deploy models with ease and build AI applications in a fraction of the time with a fraction of the data.

Try watsonx.ai, the next-generation studio for AI builders.

Next steps

Explore more articles and tutorials about watsonx on IBM Developer.