Build an AI agent with Langflow, Granite 4.0 models, and watsonx Orchestrate
A hands-on guide for creating Langflow tools, consuming them with watsonx Orchestrate Agent Development Kit, and using IBM Granite 4.0 Micro model for inference
In this tutorial, learn how to design an AI tool and workflow visually using Langflow in the watsonx Orchestrate Agent Development Kit (ADK), run a compact open model (Granite-4.0-H-Micro, hereafter simply called Granite 4.0 Micro) locally using Ollama for efficient inference, and then orchestrate the agents with watsonx Orchestrate. You will learn how all of these parts fit together to turn a simple idea into an agent that is fast, efficient, and easy to operate.
Architecture of the AI agent system
In this tutorial, you will build the Coding Language Advisor Agent, an agent that accepts a code snippet, detects the programming language, routes to a fallback when detection fails, searches the web for language-specific best practices, summarizes JSON search results into a concise set of recommendations, and prints a clean final message. You will wire the components in Langflow, run inference on the Granite model using Ollama, add a simple If-Else gate, package the flow as a watsonx Orchestrate tool, and then create the agent on watsonx Orchestrate.
The user interacts with the user interface (UI), providing a code snippet as a query. This request is sent to the Language Advisor Agent in watsonx Orchestrate, which uses an LLM for tasks such as interpreting instructions, reasoning through the problem, planning the next steps, and deciding whether a tool call is necessary. When the Language Advisor Agent determines that the request requires best practices for a programming language, it calls the Langflow tool.
The Langflow tool performs the following (a rough Python sketch of this logic appears after the list):
It receives a code snippet as input.
It detects the programming language with the Granite 4.0 Micro model through Ollama.
It then checks whether a language was detected.
If no language is detected, it responds to the user with "Unable to detect language. Try again."
If a language is detected, it performs a web search using DuckDuckGo for best practices for that language.
It then summarizes the best practices using the Granite 4.0 Micro model through Ollama.
It sends the output back to the Language Advisor Agent.
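Before you build this visually, it can help to see the same control flow as plain code. The following is a minimal Python sketch of the tool's logic, assuming the official ollama Python client and the duckduckgo_search package are installed; the packages and helper names are illustrative stand-ins, because the Langflow components that you wire up in Steps 2 through 6 do this work without any code.

# Rough Python equivalent of the Langflow tool's control flow (illustration only).
# Assumes: pip install ollama duckduckgo_search
import ollama
from duckduckgo_search import DDGS

MODEL = "granite4:micro-h"  # the Granite 4.0 Micro tag pulled in Step 2

def detect_language(snippet: str) -> str:
    # Mirrors the System Message configured on the Ollama component in Step 3.
    response = ollama.chat(model=MODEL, messages=[
        {"role": "system", "content": "You are a code language detector, reply only "
            "with the programming language name as one word if the input clearly "
            "contains source code syntax, otherwise reply exactly with Other."},
        {"role": "user", "content": snippet},
    ])
    return response["message"]["content"].strip()

def advise(snippet: str) -> str:
    language = detect_language(snippet)
    if language == "Other":  # the If-Else gate added in Step 4
        return "Unable to detect the language. Please provide a clearer code snippet."
    # Search the web for best practices (Step 5), then summarize them (Step 6).
    results = DDGS().text(f"Best practices for {language}", max_results=5)
    corpus = "\n\n".join(r["body"] for r in results)
    summary = ollama.chat(model=MODEL, messages=[
        {"role": "system", "content": f"Summarize the key best practices for "
            f"{language} as a short, actionable bullet-point list."},
        {"role": "user", "content": corpus},
    ])
    return summary["message"]["content"]

print(advise("print('hello')"))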
Prerequisites
This tutorial assumes you have a running local environment of the watsonx Orchestrate Agent Development Kit (ADK), version 1.12 or later. Check out the getting started with ADK tutorial if you don't have an active instance. This tutorial has been tested and verified with ADK version 1.12.2, which comes with Langflow version 1.5.
Python 3.11. Later versions should work, but this tutorial has been tested with Python 3.11.
Ollama installed locally. Download Ollama from the Ollama site.
Steps
These are the steps you are going to follow in this tutorial:
Start watsonx Orchestrate ADK with Langflow
Run Granite 4.0 Micro locally using Ollama
Create the flow in Langflow to call the Granite 4.0 Micro model
Configure the conditional logic in Langflow
Configure web search in Langflow
Summarize the results using the Granite 4.0 Micro model
Import the Langflow flow into watsonx Orchestrate as a tool
Create an agent using watsonx Orchestrate that consumes the Langflow tool
Test the watsonx Orchestrate agent
Step 1. Start watsonx Orchestrate ADK with Langflow
IBM watsonx Orchestrate Agent Development Kit includes Langflow. In this step, you are going to start watsonx Orchestrate ADK locally and access the Langflow user interface.
Make sure that you stop your watsonx Orchestrate server.
orchestrate server stop
Start the server. Append the argument --with-langflow to the server start command. The first time you run the command, it pulls the required images for Langflow and performs the initial configuration so that you can run Langflow locally.
orchestrate server start -e {ENV_FILE} --with-langflow
Open the Langflow UI in your browser at the URL shown at the end of the previous command's output: http://localhost:7861
Step 2. Run Granite 4.0 Micro locally using Ollama
In this step, you are going to run the Granite 4.0 Micro model (granite-4.0-h-micro) locally using Ollama, and optimize the prompt to detect the programming language.
Pull the Granite 4.0 Micro model.
ollama pull granite4:micro-h
Verify that the model is installed locally.
ollama list
Verify that the model can detect the programming language.
ollama run granite4:micro-h "Detect the language of: print('hello')"
ollama run granite4:micro-h "Detect the language of: cout<<'hello'"
Optimize the prompt to only output the programming language.
ollama run granite4:micro-h "You are a language detector. Reply only with the language name. Detect the language of: print('hello')"
ollama run granite4:micro-h "You are a language detector. Reply only with the language name. Detect the language of: cout<<'hello'"
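The same check can be scripted against Ollama's local REST API, which is the endpoint that the Langflow Ollama component calls in the next step. A minimal sketch, assuming the requests package is installed:

# Query Ollama's REST API directly (the same local server the CLI talks to).
import requests

payload = {
    "model": "granite4:micro-h",
    "prompt": "You are a language detector. Reply only with the language name. "
              "Detect the language of: print('hello')",
    "stream": False,  # return a single JSON object instead of a token stream
}
r = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
print(r.json()["response"])  # expected output: Python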
Step 3. Create the flow in Langflow to call the Granite 4.0 Micro model
You are going to create your first flow in Langflow locally by connecting the input and output components to the Granite 4.0 Micro model to make the flow interactive. This is important because it allows users to dynamically input code snippets and receive real-time language detection results. You do this by linking the Chat Input to the Ollama component’s input, and the Ollama component’s output to the Chat Output, enabling a seamless end-to-end user experience.
In the Langflow UI, click Create first flow.
In the Get Started window, click the Blank Flow button.
Click the Pencil icon at the top of the dialog, and name the flow “Coding_Language_Advisor” and give it a description, such as “Detects the programming language of any code snippet, searches the web for best practices, and summarizes actionable guidance using the Granite 4.0 Micro model running locally through Ollama.” Then, click Save.
Important: Make sure that your tool name complies with the following rules: Only alphanumeric characters and underscores are allowed, and the name cannot start with a number or underscore. This naming convention is required so that you can import it later as a watsonx Orchestrate tool.
Drag the “Chat Input” and “Chat Output” components from the Components panel on the left into the Canvas.
Search for Ollama in the Components panel and drag it into the Canvas.
Click the Controls button above the Ollama component to configure it.
Specify the Base URL as http://host.docker.internal:11434, then check and uncheck the Model Name toggle to load the models, and then choose “Granite 4:micro-h” from the Model Name list. Then, click Close.
Select the model name again if it is not selected, and then configure the System Message as follows: “You are a code language detector, reply only with the programming language name as one word if the input clearly contains source code syntax, otherwise reply exactly with Other.”
To test this flow, write this input: “Detect the language of: print('hello')”, and click the play button (the Run Component button).
Click the Inspect Output icon next to Model Response, to check the outcome. It shows that the output is Python.
Change the input to make it dynamic by connecting the Chat Input component to the Input of the Ollama component, and the Model Response output of the Ollama component to the Chat Output component, as shown in the following image.
The Playground feature lets you test your flow and validate all the integrations and logic. Click Playground to test it with inputs such as the following:
print('hello')
cout<<'hello'
Hello World
Step 4. Configure the conditional logic in Langflow
In this step, you are going to add conditional logic to handle cases where the programming language cannot be detected. This is important because it allows the flow to provide user-friendly feedback when the input is unclear, and to proceed with further processing only when a valid language is identified. You do this by using an If-Else component to check if the model’s response is “Other,” then routing the output accordingly.
Drag the If-Else component into the Canvas, connect the Model Response output from the Ollama component to the If-Else component's input, and in Match Text, write “Other”.
Click Controls for the If-Else component in the Canvas and enable Case True and Case False.
For Case True, configure it with this message: “Unable to detect the language. Please provide a clearer code snippet.”
Remove the connection between Model Response and Chat Output, and connect Model Response to Case False.
Drag another Chat Output component.
Drag the Text Output component, connect its input to the False branch, then click the Code button and update line 26 to
text= "Best practices for " + self.input_value
This changes the output into a ready-made search query. Then, click Check & Save.
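For orientation, after the edit the component's response method looks roughly like the following excerpt (the exact code and line numbering vary by Langflow version, and the Message type is already imported inside the component):

# Sketch of the Text Output component's response method after the edit.
def text_response(self) -> Message:
    # Prefix the detected language so that the Web Search component in Step 5
    # receives a ready-made query such as "Best practices for Python".
    message = Message(
        text="Best practices for " + self.input_value,
    )
    self.status = message.text
    return message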
Connect the True branch and the Text Output component each to a different Chat Output component.
Click Playground to test the flow.
Step 5. Configure web search in Langflow
In this step, you are going to enrich the flow by integrating a web search and data extraction pipeline to retrieve best practices for the detected programming language. This is important because it allows the assistant to go beyond detection and provide actionable, real-world guidance by scraping relevant content from the web. You do this by connecting the Text Output to a Web Search component, filtering the results using DataFrame Operations, converting them with Type Convert, and finally displaying them through a Chat Output component.
Drag the Web Search component into the Canvas. This component uses DuckDuckGo search and provides HTML scraping.
Remove the connection between Text Output and Chat Output.
Connect Text Output: Output Text to Web Search: Search Query.
Drag the DataFrame Operations component into the Canvas.
Connect Web Search: Search Results into DataFrame Operations input.
In the DataFrame Operations component, configure it to narrow the search results down to only the actual text content (that is, remove the title and link columns from the output because they are not needed):
Operation: Select Columns
Columns to Select: content.
Drag the Type Convert component into the Canvas. This component converts the output from DataFrame format to message format.
Connect the DataFrame Operations component output to the Type Convert component input, and the output of Type Convert to Chat Output.
Click Playground to test the flow. It should extract all the search results.
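Conceptually, the DataFrame Operations and Type Convert components perform the equivalent of the following pandas sketch; the rows here are made-up placeholders shaped like the Web Search component's output.

# What "Select Columns" plus "Type Convert" do, approximated with pandas.
import pandas as pd

# Hypothetical search results with the columns the Web Search component produces.
results = pd.DataFrame([
    {"title": "PEP 8", "link": "https://peps.python.org/pep-0008/",
     "content": "Use 4-space indentation and descriptive names..."},
    {"title": "Python tips", "link": "https://example.com/tips",
     "content": "Prefer list comprehensions for simple transformations..."},
])

selected = results[["content"]]                   # Operation: Select Columns
message_text = "\n\n".join(selected["content"])   # DataFrame converted to message text
print(message_text)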
Step 6. Summarize the results using the Granite 4.0 Micro model
In this step, you are going to summarize the extracted best practices using a second Ollama component configured with Granite 4.0 Micro for summarization. This is important because it transforms raw search results into a concise, actionable list tailored to the detected programming language, enhancing the value of the agent’s response. You do this by connecting the Type Convert output to a new Ollama component with a summarization prompt and routing its Model Response to the Chat Output for final display.
Remove the connection between the Type Convert and Chat Output components.
Duplicate the Ollama component in the Canvas and replace the System Message in the new one with:
“You are an expert in programming languages and summarization. Your task is to analyze the provided input text, which discusses best practices for Language X from various sources. Extract the key best practices for Language X. Summarize them concisely, starting exactly with the phrase "The Best Practices for Language X is:" followed by a bullet-point list. Each bullet should be brief, actionable, and cover one main practice. Focus on recurring themes like coding style, safety, performance, and structure. Ignore unrelated content like ads, navigation, or non-relevant topics. Do not add any introductory or concluding text beyond the specified format.”
Connect the Type Convert: Message Output to the new Ollama component's Input.
Connect the new Ollama component's Model Response to the Chat Output component.
This is the final view of the flow in the Canvas.
Click Playground to test the full flow.
First, test whether it detects the Python programming language and outputs its best practices.
Then, check the same for the Java programming language.
Then, enter input that is not code to confirm that the flow returns the undetected-language message.
Step 7. Import the Langflow flow into watsonx Orchestrate as a tool
In this step, you are going to export your Langflow flow and import it into watsonx Orchestrate as a tool that can be used programmatically. The Langflow runtime is bundled as part of watsonx Orchestrate. This is important because it enables the flow to be reused and invoked by agents within the watsonx Orchestrate ecosystem. You do this by exporting the flow as a JSON file, defining the required dependencies in a requirements.txt file, and using a watsonx Orchestrate ADK command to register the tool with the watsonx Orchestrate runtime.
Click the Langflow logo in the upper left corner.
Click on the three dots next to your flow, and then click Export.
Confirm the flow name and description, and then click Export. Save the JSON file containing the flow details locally.
Create a file named requirements.txt.
vi requirements.txt
Then, add the additional packages that the Langflow components require.
# Runs and serves Langflow flows headless via CLI for programmatic or service use.
lfx-nightly==0.1.12.dev40
# Provides LangChain's Ollama integration, enables Ollama model nodes in Langflow.
langchain-ollama==0.2.1
# High performance XML and HTML parser, used by parsing and scraping components in flows.
lxml==5.2.1
Import the Langflow flow as a watsonx Orchestrate tool.
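The exact command syntax depends on your ADK release, so verify it with orchestrate tools import --help. With ADK 1.12 and later, the import typically looks like the following, where the JSON file is the flow that you exported earlier; the -k langflow kind and the flags shown here are assumptions to adapt to your environment.

orchestrate tools import -k langflow -f Coding_Language_Advisor.json -r requirements.txt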
Step 8. Create an agent using watsonx Orchestrate that consumes the Langflow tool
In this step, you are going to create an agent in watsonx Orchestrate that integrates your Langflow tool. This is important because it enables seamless interaction between users and the Langflow-powered coding language advisor through a conversational interface, making it accessible within enterprise workflows. You do this by creating a new agent in Orchestrate, adding your Langflow tool to its toolset, and defining the agent behavior to route all input directly to the Coding_Language_Advisor tool.
Launch watsonx Orchestrate locally using the ADK. This opens a browser session with watsonx Orchestrate.
orchestrate chat start
Click Create new agent to access the Agent Builder.
Fill in the name and description of the agent as follows. The description is not optional: it outlines the scope of the agent and makes it easy for other agents and users to know when to interact with this agent. Then, click Create.
Name: Language Advisor Agent
Description: You are a Language Advisor that extracts programming best practices. Using the Langflow tool, you detect the programming language of any given code snippet, search the web for the most relevant and up-to-date best practices, and summarize actionable guidance powered by IBM Granite 4.0 H-Micro running locally through Ollama.
Go to the Toolset section, and then click Add tool to add the Langflow tool that you imported in the previous step.
Click Add from local instance.
Select the Langflow tool that you created earlier, and then click Add to agent.
Go to the Behavior section to define how the agent should react to requests and respond to users. Write the following in the Behavior section:
Pass any input directly to the tool **Coding_Language_Advisor**
Step 9. Test the watsonx Orchestrate agent
Next, you are going to test the agent calling the Langflow tool. You can test either from the Preview pane on the right or by selecting the agent on the watsonx Orchestrate home page and chatting with it.
In the Preview pane, enter this inquiry:
print('hello')
Summary and next steps
Langflow, IBM Granite 4.0, and watsonx Orchestrate fit together to turn AI ideas into production-ready agents with speed and governance. In this tutorial, you built a complete example that shows how these components work together to turn a small idea into a governed, reusable tool.
You started the watsonx Orchestrate ADK with Langflow, ran Granite 4.0 Micro locally through Ollama for fast and private inference, then designed a visual flow that detects a code language, routes cleanly when detection fails, searches the web for best practices, and summarizes JSON results into concise guidance. Along the way, you saw how tight prompts prevent the model from treating plain English as code, how a simple If-Else gate returns a static message when the input is not valid code, and how a short summarizer prompt converts raw search JSON into actionable best practices.
You then brought this flow into watsonx Orchestrate as a tool and created an agent that can call it on demand. The agent now uses Granite 4.0 Micro for deterministic local reasoning when it should, uses a web search tool when grounding is needed, and returns a single, clean response. This separation of concerns keeps reasoning fast and private on your machine, lets tools handle retrieval and formatting reliably, and gives you enterprise controls through Orchestrate.