Implement Elasticsearch with retrieval augmented generation - IBM Developer

Tutorial

Implement Elasticsearch with retrieval augmented generation

Retrieval augmented generation with watsonx Discovery, watsonx.ai, and watsonx Assistant

By

Simran Gupta

Retrieval augmented generation (RAG) is a hybrid technique in natural language processing that combines the strengths of both retrieval-based and generative models to improve the quality and relevance of generated text. RAG has two main benefits:

  • It ensures that the large language model (LLM) that is generating an answer to a question has access to the most current, reliable facts relevant to that question.
  • It ensures that users (both end users and builders) have access to the LLM’s sources, ensuring that its answers can be traced, checked for accuracy, and ultimately trusted.
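The retrieve-then-generate flow behind RAG can be sketched in a few lines of Python. The retriever and prompt builder below are toy stand-ins, not the watsonx Discovery or watsonx.ai APIs:

```python
# Minimal sketch of the RAG flow: retrieve relevant passages, then
# ground the generator's prompt in them. The keyword retriever here
# is a toy stand-in for the semantic search that watsonx Discovery
# performs.

def retrieve(query, corpus, top_n=2):
    """Toy keyword retriever: rank passages by query-term overlap."""
    terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(terms & set(p.lower().split())))
    return scored[:top_n]

def build_prompt(query, passages):
    """Assemble a content-grounded prompt for the LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "watsonx Discovery indexes documents for semantic search.",
    "ELSER is a sparse retrieval model built into Elasticsearch.",
    "Bananas are rich in potassium.",
]
query = "What does watsonx Discovery index?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
print(prompt)
```

In the real setup, `retrieve` is replaced by an ELSER query against the Elasticsearch index and the prompt is sent to an LLM on watsonx.ai.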

The use case shown in this example uses IBM watsonx Discovery for semantic search using an ELSER model and RAG using LLMs on top of it (running on watsonx.ai) to generate content-grounded responses. It's integrated it with IBM watsonx Assistant to implement conversational search.

Overview

The value of watsonx Discovery

Watsonx Discovery is a search and analytics engine that is designed for real-time search based on Elasticsearch technology. It indexes and enriches many data types, including text, numeric, geospatial, pictures, and video. It's an add-on to watsonx Assistant that helps resolve informational tasks through a generative AI assistant.

The term indexing refers to the ability to store information in the watsonx Discovery repository. These indexes can then be analyzed by using vector search, machine learning, observability, and security techniques.
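As a toy illustration of the vector-search analysis mentioned above (not watsonx Discovery's actual ANN implementation), ranking documents by cosine similarity over small made-up embeddings shows how the closest match is found:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up 3-dimensional embeddings; in practice these come from an
# embedding model and have hundreds of dimensions.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "return an item": [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]  # embedding of "how do I get my money back?"

# Rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: -cosine(query, docs[d]))
print(ranked[0])  # the semantically closest document
```

Real vector stores avoid this exhaustive comparison by using approximate nearest neighbor (ANN) indexes, which trade a little accuracy for much faster lookups at scale.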

At the core of watsonx Discovery is the Elasticsearch Relevance Engine (ESRE). Two of its key components include:

  1. Native vector search: Vector search is a method of information retrieval in which multimodal data (text, images, video, and so on) and queries are represented as numerical vectors instead of plain text. Frequently used for semantic search, vector search finds similar data by using approximate nearest neighbor (ANN) algorithms. Compared to traditional keyword search, vector search often yields more relevant results and can execute faster at scale.

  2. ELSER (Elastic Learned Sparse EncodeR): ELSER is a proprietary retrieval model that specializes in semantic search. Semantic search denotes search with meaning, as distinguished from keyword search, where the search engine looks for literal matches of the query words (or variants of them) without understanding the overall meaning of the query. ELSER is a generic model that generalizes across domains. While you might eventually want to build a custom model on your own data, ELSER is a great way to start and performs well in proofs of concept and similar pilot engagements (PoXs). This tutorial deploys the ELSER model.
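An ELSER semantic-search request in Elasticsearch 8.x is expressed as a `text_expansion` query. The field name and model ID below are illustrative assumptions; check your deployment for the actual values. This sketch only builds the request body rather than calling a live cluster:

```python
# Sketch of the request body for an ELSER semantic search.
# The field name ("ml.tokens") and model ID (".elser_model_2") are
# assumptions based on common Elasticsearch defaults; verify them
# against your own index mapping and deployed model.

def elser_query(question, field="ml.tokens", model_id=".elser_model_2"):
    """Build a text_expansion query body for an ELSER search."""
    return {
        "query": {
            "text_expansion": {
                field: {
                    "model_id": model_id,
                    "model_text": question,
                }
            }
        }
    }

body = elser_query("How do I reset my password?")
print(body["query"]["text_expansion"]["ml.tokens"]["model_text"])
```

In the notebooks, this kind of query is issued through the Python Elasticsearch client (or via LlamaIndex) against the index created during ingestion.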

Prompt engineering

Prompt engineering is the practice of designing and refining prompts that are used to interact with language models where prompts are input queries or instructions. The various aspects of prompt engineering include:

  • Decoding: The process of finding the output sequence when given the input sequence. Decoding can be greedy or sampling. In greedy mode, the model selects the highest probability tokens at every step of decoding, and a model is less creative in that mode. An advantage of greedy decoding is that you see reproducible results.

    In sampling mode, the model chooses the next token from a pool of the most probable next tokens. There is more creativity, but also a larger risk that the output might be nonsensical. The following parameters are only visible when in sampling mode.

    • Temperature: A floating point number ranging from 0.0 (greedy) to 2.0 (maximum creativity). It refers to selecting high- or low-probability words. Higher temperature values lead to more variability.
    • Top P (nucleus sampling): Selecting the smallest set of words whose cumulative probability exceeds p.
    • Top K: Selecting k words with the highest probabilities at each step. Higher values lead to more variability.
    • Random seed: An integer in the range 1 to 4,294,967,295. In sampling mode, random seed is helpful for the replicability of experiments. With everything else remaining the same, updating the random seed yields different outputs.
  • Repetition penalty: A value between 1 and 2 (a setting of 1 allows repetition, and 2 strongly penalizes it). This is used to counteract a model's tendency to repeat the prompt text verbatim or get stuck in a loop generating the same output.

  • Stop sequences: Sequences of characters (text, special characters, and carriage return) that are used as a stop indicator by the model. The stop sequence itself is still included in the model output, but that is the last piece of output.

  • Min tokens: An integer that specifies the minimum number of tokens in the model's output.

  • Max tokens: An integer that specifies the maximum number of tokens in the model’s output. Sometimes, when the generated output looks incomplete, your Max tokens value might be too low.
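The decoding parameters described above are typically collected into a parameter dictionary when calling the model programmatically. The key names below follow the ibm-watsonx-ai SDK convention but are shown here as an illustrative sketch; check the SDK documentation for your version:

```python
# Illustrative parameter dictionaries for greedy and sampling decoding.
# Key names follow the ibm-watsonx-ai SDK convention (an assumption;
# verify against your SDK version).

greedy_params = {
    "decoding_method": "greedy",
    "max_new_tokens": 200,      # cap on output length
    "min_new_tokens": 1,
    "repetition_penalty": 1.1,  # >1 discourages verbatim repetition
    "stop_sequences": ["\n\n"], # generation stops after this sequence
}

sampling_params = {
    "decoding_method": "sample",
    "temperature": 0.7,    # 0.0 ~ greedy; higher = more variability
    "top_p": 0.9,          # nucleus sampling threshold
    "top_k": 50,           # consider only the 50 most probable tokens
    "random_seed": 42,     # fixed seed for reproducible sampling
    "max_new_tokens": 200,
}

# The sampling-only knobs (temperature, top_p, top_k, random_seed)
# have no effect in greedy mode, so they are omitted from greedy_params.
print(sampling_params["decoding_method"])
```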

Tutorial overview

In this tutorial, learn how to implement RAG using watsonx.ai with Elasticsearch by using watsonx Discovery, and integrate it with watsonx Assistant. You learn:

  • How to do document ingestion from IBM Cloud Object Storage to Databases for Elasticsearch (using watsonx Discovery) and chunk and index the documents for Elasticsearch.
  • How to deploy a Python function that performs RAG using the Databases for Elasticsearch database and watsonx.ai.

After this implementation, you also learn how you can use this setup with watsonx Assistant by integrating RAG with Elasticsearch. You can further use prompt engineering to tailor the responses to your format.

Estimated time

It should take you approximately 3-5 hours to complete this tutorial, depending on the time to provision the TechZone environment and your other environment setups.

Prerequisites

To follow this tutorial, you need:

  • An IBM TechZone environment.
  • An IBM Cloud account. If you don’t have an IBM Cloud account, you can create one.
  • Python skills and familiarity with notebooks.

This TechZone reservation provisions all of the required IBM Cloud services when configured with watsonx Discovery installed.

Setting up a TechZone environment

To follow this tutorial, you must have an IBM TechZone environment set up. To set up the environment:

  1. Log in to the TechZone environment by using your IBM ID.

    Log in screen

  2. Click Reserve now to make a TechZone reservation. This provisions all of the required IBM Cloud services when configured with watsonx Discovery installed.

    Reserve now selection

  3. Change the name of the reservation to WA, WxD, and wx.ai.

  4. Select the Practice/Self-Education tile.
  5. For the Purpose description, enter Learning integration of WA, WxD, and wx.ai.
  6. Select your preferred Geography. In this example, AMERICAS is chosen.
  7. Select Install watsonx Discovery.

    Create a reservation screen

    For now, continue to the bottom section of the reservation page.

  8. Select Yes to install the container registry.
  9. Optionally, you can provide some additional notes.
  10. On the lower-right section of the page, accept the Terms and Conditions, then click Submit.
  11. You receive an email that is sent to your IBM email address letting you know that your reservation is being provisioned (this should happen within approximately 10-15 minutes).
  12. After the watsonx.ai provisioning is complete, you receive a second email telling you that it is ready for use. This email looks like the following example.

    Technology zone status

Setting up an IBM Cloud account

  1. If you do not have an IBM Cloud account associated with this instance, you must click the notifications page link, click the item, and accept the invitation by clicking Join now.

    Notifications

    Action required

  2. You are returned to the IBM Cloud login page. Use your IBMid to log in. You are asked to join the TechZone account. Accept the Terms and Conditions, then click Join Account.

    Join account

  3. When you log in, make sure that you’re using the right account. Assuming that you have accepted the invitation, you should have an itz-watsonx-n account. Go to the menu bar and click the highlighted icon to display the list of accounts available to you. Make sure that you select the account that was assigned to you from your current TechZone reservation.

    Catalog

  4. You can always check your reservation to find the right account. To do so, log on to https://techzone.ibm.com.

  5. Select the My Library pull-down menu, and click My reservations. The My reservations page opens.

  6. Select the correct reservation tile (you might have more than one).

Now that you have your environment set up, it’s time to continue.

Steps

Step 1. Create a project in watsonx.ai

  1. Download the WatsonStudioProjectTemplate.zip file from GitHub.

  2. Open watsonx.ai. You see the homescreen. Open the hamburger menu at the upper-left, and select View all projects under Projects.

    Viewing projects

  3. Create a new project in watsonx.ai by clicking New project.

    New project

  4. Select Local file from the menu on the left, and upload the zip file that you downloaded in the first step. Then, click Create.

    Selecting local file

  5. After creating the project, go to the Assets tab, where you can find the connections, prompt templates, and Python notebooks.

    Project assets

Step 2. Add documents to Cloud Object Storage

To add documents that you want to be included in the knowledge base to Cloud Object Storage:

  1. Navigate to your Cloud Object Storage resource in IBM Cloud.

    Resource list

  2. Create a bucket in Cloud Object Storage by clicking Create bucket, then Create a custom bucket.

    Cloud object storage Create bucket

  3. Enter a name for your custom bucket. Select Resiliency as Regional, Storage class as Standard, and click Create bucket.

    Create custom bucket

  4. Upload your files to the bucket (for example, PDF, DOCX, PPTX, HTML, or TXT file types) by clicking Upload under the Objects tab in the created bucket, selecting Standard transfer, and dragging in the files to upload.

    Uploading files

Step 3. Complete connection to Cloud Object Storage

Create a service credential for the Cloud Object Storage instance

  1. Open the Cloud Object Storage instance.
  2. Go to the Service credential tab, and click New credential.

    Copying the endpoint

  3. Provide a Service credential name, select Manager as the Role, and select Auto-generated as the Service ID.

    Creating service credential

  4. Click Add.

You have successfully added the service credential that will be used in the next step.

Configure the connection to Cloud Object Storage

  1. Click the CloudObjectStorage connection in the watsonx.ai project.
  2. Set the bucket name to the name of the bucket in which the documents are present.
  3. In the Configuration tab of the bucket in Cloud Object Storage, copy the private or public endpoint and paste it to the Login URL field.
  4. Go to the Cloud Object Storage instance, open the Service Credentials tab, expand your Cloud Object Storage credential, then copy the service credentials. Paste the Cloud Object Storage service credential into the Service credentials field.
  5. Test the connection by clicking Test connection. The test should be successful.

    Pasting cloud object storage credential

  6. Save the connection.

Step 4. Complete connection to Databases for Elasticsearch

Copy the Databases for Elasticsearch credentials

  1. Open IBM Cloud, open the hamburger menu at the upper left, then click Resource list. Expand the Databases list.

    watsonx Discovery connection Resource list

  2. In a separate tab, navigate to the Databases for Elasticsearch service under databases.

  3. Open the Overview tab.
  4. Under the Endpoints section, go to the HTTPS tab.

    HTTPS connection

  5. Copy the hostname, port, and TLS certificate, and save them somewhere.

  6. Go to the Service credentials tab of the service and copy the username and password under connection.https.authentication in the Service credential JSON.

    Service credentials
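With the hostname, port, TLS certificate, username, and password copied in the steps above, the Python `elasticsearch` client can be configured roughly as follows. All concrete values are placeholders, and this sketch only builds the client arguments; it does not open a live connection:

```python
# Assemble connection settings for the Python `elasticsearch` client
# from the values copied out of the service credentials and the
# Overview tab. All values below are placeholders.

hostname = "xxxx.databases.appdomain.cloud"  # from the HTTPS endpoint
port = 30123                                 # from the HTTPS endpoint
username = "ibm_cloud_xxxx"                  # connection.https.authentication
password = "REDACTED"

es_url = f"https://{hostname}:{port}"

client_kwargs = {
    "hosts": [es_url],
    "basic_auth": (username, password),
    # Path to the TLS certificate copied from the Overview tab
    # (decode it from base64 and save it as a .pem file first).
    "ca_certs": "path/to/cert.pem",
}
print(es_url)
```

In the project, these same values are stored once in the WatsonxDiscovery connection asset, so the notebooks read them from there instead of hardcoding them.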

Configure the connection to watsonx Discovery and Databases for Elasticsearch

  1. Open your watsonx.ai project created in Step 1.
  2. Go to the Assets tab, and open the WatsonxDiscovery connection.

    watsonx Discovery connection

  3. Set the details that you copied in the previous steps.

    Set details

  4. Click Test connection. A Test successful message appears on the screen.

  5. Save the connection details.

Step 5. Update PARAMETER SET

Create the Watson Machine Learning deployment space

To create the Watson Machine Learning deployment space:

  1. Open the watsonx.ai project, then open the hamburger menu from upper left.
  2. Navigate to the Deployments section.

    Deployments tab

  3. Click New deployment space to create a new deployment space.

    New deployment space

  4. The storage service should automatically be assigned to your Cloud Object Storage service.

  5. Assign your Watson Machine Learning service to the Machine Learning Service.

    Create deployment space Space is ready

  6. After the deployment space is created, navigate to the space's Manage tab and copy the Space GUID. Save it for now.

    Space GUID

Create IBM API key

In this step, create an IBM Cloud API key in IBM Cloud and save it.

  1. Go to the IBM Cloud account.
  2. Under the Manage drop-down menu at the top, select Access IAM.
  3. Select API keys from the left pane.

    API keys window

  4. Click Create.

  5. Provide a name for the api_key, and click Create.
  6. Copy the key, save it, and download it.

Populate parameter set

  1. Open the watsonx.ai project.
  2. Go to the Assets tab, and open the Notebook_and_Deployment_Parameters parameter set in the project.
  3. Click the pencil icon to edit the parameter, set the wml_space_id to the Space GUID copied from the earlier step, and click Save.
  4. Click the pencil icon to set the ibm_cloud_apikey to the IBM Cloud API key copied from the earlier step. Click Save. Optionally, you can also update other parameters, including ingestion_chunk_size and ingestion_chunk_overlap, per your requirements.

    Populating parameter sets

Step 6. Associate the Watson Machine Learning service with your project

  1. Open the watsonx.ai project.
  2. Go to Manage tab.
  3. Go to Service and integrations.

    Services and integration

  4. Click Associate Service, then select the Watson Machine Learning type service.

    Associating the service

  5. Click Associate.

Try it out (Optional)

In this project, you can open the Mixtral-RAG-Template (Prompt Template). You are given a sample prompt and you can update your prompt as needed along with the model and model parameters. After you’ve made your updates, save the prompt template.

Project

Notebooks

After the setup is complete, the notebooks in the project can be run without errors. For each of the notebooks, make sure to insert the project token by using the upper-right menu (three dots symbol) in the notebook UI before running any cells.

Insert project token

This creates a cell that connects your notebook to the project and its assets. Now, run the following notebooks one by one.

  • 1-file-ingestion-from-cos
  • 2-deploy-rag-function-in-wml
  • 3-test-rag-deployment (optional)

Ingest documents to Databases for Elasticsearch

The 1-file-ingestion-from-cos notebook in the project handles document ingestion from Cloud Object Storage to Databases for Elasticsearch. In this notebook, the following steps are implemented:

  1. Configure the notebook by importing the necessary packages and adding the values configured in your project's Parameter Sets to the namespace.
  2. Connect to Cloud Object Storage.
  3. Read and prepare files from Cloud Object Storage by connecting to your Cloud Object Storage bucket. Then, the files inside the bucket are read, chunked, and formatted into JSON objects that can be ingested by watsonx Discovery. This project uses the LlamaIndex framework and LlamaIndex file readers to perform the file ingestion.
  4. Connect to the Databases for Elasticsearch service by using the connection in the project.
  5. Set up the embedding model, index, and ingestion pipeline.
  6. Ingest the chunked and formatted documents into watsonx Discovery.
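The chunking in step 3 can be sketched with a simple character-window splitter. The notebook itself uses LlamaIndex node parsers, so this function is only an illustration of what the ingestion_chunk_size and ingestion_chunk_overlap parameters control:

```python
def chunk_text(text, chunk_size=200, chunk_overlap=50):
    """Split text into overlapping character windows.

    Illustrates what ingestion_chunk_size and ingestion_chunk_overlap
    control; the notebook uses LlamaIndex chunkers, not this function.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far each window advances
    return [
        text[i:i + chunk_size]
        for i in range(0, max(len(text) - chunk_overlap, 1), step)
    ]

doc = "A" * 450
chunks = chunk_text(doc, chunk_size=200, chunk_overlap=50)
print([len(c) for c in chunks])  # → [200, 200, 150]
```

Smaller chunks give more precise retrieval hits but less context per hit; the overlap keeps sentences that straddle a boundary retrievable from either side.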

Deploy RAG function in Watson Machine Learning

The 2-deploy-rag-function-in-wml notebook in the project handles the deployment of a Python function that performs RAG using the Databases for Elasticsearch database and watsonx.ai. The following steps are implemented in this notebook:

  1. Configure the notebook by importing the necessary packages, connecting to the Watson Machine Learning client, and adding the values configured in your project's Parameter Sets to the namespace.
  2. To deploy the Python function, a few assets in the project also must be promoted to the deployment space in addition to the function itself. These include those shown in the following image.

    File names

  3. Connect to the index in the Databases for Elasticsearch service that was set up in 1-file-ingestion-from-cos and retrieve relevant documents to the query by using Elasticsearch's embedded ELSER model.

  4. Connect to watsonx.ai and generate a response by using the retrieved documents based on the Mixtral-RAG-Template prompt template.
  5. If run_evaluator is set to True in the code, check against potential hallucination using LlamaIndex's FaithfulnessEvaluator.
  6. Test the scoring function locally before deployment.
  7. Deploy the function to space.
  8. Test the deployed endpoint by scoring some sample data.
  9. Update the OpenAPI JSON with the new wml_deployment_id generated for watsonx Assistant. (This step is optional and is needed only if you are integrating this RAG approach in watsonx Assistant).
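Watson Machine Learning deploys a Python closure: an outer function that captures configuration and returns an inner `score(payload)` function. The skeleton below shows that shape with the retrieval and generation calls stubbed out; the stub names and payload fields are illustrative, not the notebook's exact code:

```python
# Skeleton of a WML-deployable Python function. The outer function
# captures configuration; the inner score(payload) handles requests.
# _retrieve and _generate are stubs standing in for the ELSER query
# and the watsonx.ai call.

def deployable_rag_function(params={"index_name": "my-index"}):
    # Real code would connect to Elasticsearch and watsonx.ai here,
    # using credentials promoted to the deployment space.

    def _retrieve(query):
        return ["stub passage about " + query]  # stand-in for ELSER retrieval

    def _generate(query, passages):
        return f"Answer to '{query}' based on {len(passages)} passage(s)."

    def score(payload):
        # WML scoring payloads use the input_data/fields/values shape.
        query = payload["input_data"][0]["values"][0][0]
        passages = _retrieve(query)
        answer = _generate(query, passages)
        return {"predictions": [{"fields": ["answer"], "values": [[answer]]}]}

    return score

# Local test before deployment, mirroring step 6 above:
scorer = deployable_rag_function()
result = scorer({"input_data": [{"fields": ["query"], "values": [["What is RAG?"]]}]})
print(result["predictions"][0]["values"][0][0])
```

Testing the closure locally like this, before handing it to the deployment client, catches most payload-shape mistakes cheaply.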

Test your deployment (optional)

Optionally, you can test your deployment by using the third notebook, 3-test-rag-deployment, in the project. This notebook calls the deployment endpoint and reformats the deployment responses for better readability.

After running the notebooks, you have implemented RAG using Elasticsearch and deployed it successfully. You have also tested the RAG deployment using the notebook.

Integrate with watsonx Assistant

Watsonx Assistant provides the query interface, using either the custom RAG extension deployed just now or the native Elasticsearch search extension. The assets for setting up watsonx Assistant are located in the assistant folder of this repository and are also included in the Watson Studio project for convenience.

To use watsonx Assistant with the deployed Python function through a custom extension, you must download two JSON files from the Assets tab of the watsonx.ai project:

  • configured-watsonx-assistant-extension-openapi.json
  • watsonx-assistant-actions.json

This must be done after running 2-deploy-rag-function-in-wml because the JSON file requires the deployment_id of the RAG function.
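What the notebook does here amounts to substituting the new deployment_id into the OpenAPI file's endpoint path. The path template and ID below are illustrative assumptions about the file's structure; inspect your copy for the real placeholder:

```python
import json

# Illustrative fragment of the extension OpenAPI spec; the real file
# contains the full schema. The path template and deployment ID are
# assumptions for this sketch.
openapi = {
    "paths": {
        "/ml/v4/deployments/{deployment_id}/predictions": {}
    }
}

deployment_id = "1234abcd-0000-0000-0000-000000000000"  # from notebook 2's output

# Rewrite each path, replacing the placeholder with the real ID.
patched = {
    "paths": {
        path.replace("{deployment_id}", deployment_id): spec
        for path, spec in openapi["paths"].items()
    }
}
print(json.dumps(list(patched["paths"]), indent=2))
```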

Now, continue with the following steps:

Custom extension using RAG function deployment

To set up the custom extension:

  1. Open watsonx Assistant.
  2. From the hamburger menu at the upper left, select Integrations.

    Integrations

  3. Select Build custom extensions.

    Extensions Custom extensions

  4. In the Import OpenAPI section, upload the configured-watsonx-assistant-extension-openapi.json file that you downloaded, and finish creating the extension.

    Finish extension

  5. Add the extension to the assistant by clicking Add under the newly created extension.

    Adding extension Clicking Add

  6. In the authentication section, set the Authentication type to OAuth 2.0, add your IBM Cloud API key to the Apikey field under Custom Secrets, and finish adding the extension.

    Custom extension

Configure Actions to use the extension

  1. Download the watsonx-assistant-actions.json file from the Assets tab of the watsonx.ai project used in the Deploy RAG function step.
  2. Navigate to Actions -> Global Settings.

    Actions, Global settings

  3. Under Global Settings, scroll to the right, navigate to the Upload/Download tab, and upload the JSON file.

    Uploading

  4. Go back to Actions -> Variables -> Created by you -> New variable, and create the wml_deployment_id and wml_deployment_version variables. Initialize these variables with your deployment_id and version.

    Session variables

  5. Go to Actions -> Set by assistant -> No action matches, configure the extension, and select the extension that you added earlier.

  6. Set the input_data parameter in the extension configuration to:
    session variable -> extension_input_data
  7. Confirm that the wml_deployment_id and version parameters are set to your respective values.
  8. Save your action and navigate to the preview tab.

Your assistant is now configured.

Test

  1. Go to the Preview tab.
  2. Open the assistant from the bottom right.
  3. Enter your query and submit.
  4. You see a response. For example:

    Configured assistant

Summary

In this tutorial, you learned how to implement RAG using watsonx.ai with Elasticsearch by using watsonx Discovery. You can experiment with it in your own use case. Prompt engineering lets you tailor the model responses to your specific format and expectations by experimenting with the Prompt Template available under the assets of the watsonx.ai project. You can also experiment with the chunk size and model parameters in the prompt template.

If you have any questions about the tutorial, please contact Simran Gupta.

References

This GitHub repository contains the necessary assets to implement a comprehensive RAG solution using watsonx Discovery and Databases for Elasticsearch, watsonx.ai, and watsonx Assistant.

This TechZone reservation provisions all of the required IBM Cloud services when configured with watsonx Discovery installed.

A more general repository is skol-assets, which contains Python modules and scripts for ingesting various file types into watsonx Discovery: skol-assets/watsonx-wxd-setup-and-ingestion