Automating User Journeys for Synthetic Monitoring with MCP in Elastic

Jessica Garson

This post explores how you can automatically create user journeys with Synthetic Monitoring in Elastic Observability, TypeScript, and FastMCP, and walks through the app and its workflow.

Synthetic Monitoring in Elastic Observability enables you to track user pathways using a global testing infrastructure, emulating the full user path to measure the impact of web applications. It also provides comprehensive insight into your website's performance, functionality, and availability from development to production, allowing you to identify and resolve issues before they affect your customers.

One of the main components of Elastic's Synthetic Monitoring is the ability to create user journeys, which can be done with or without code. The Synthetics agent, a CLI tool, guides you through the process of creating both heartbeat monitors and user journeys and deploying your code to Elastic Observability. If you use code to create user journeys, you are using Playwright under the hood, with some additional configuration that makes it easier to work with Elastic Observability.

To automatically create user journeys in TypeScript, you can generate Playwright tests from a prompt using Warp, an AI-assisted terminal, Gemini 2.5 Pro, and MCP. The application was built using Python and FastMCP, and it wraps the Synthetics agent to deploy browser tests to Elastic automatically. This blog post will guide you through how the application works, how to use it, and its development process. You can find the complete code on GitHub.

Solution overview

Currently, this solution is set up to run inside Warp as an MCP server; however, you can also use another client, such as Claude Desktop or Cursor. From there, you create a Python script using FastMCP, which lets you define functions that are callable by an LLM. Within Warp, you create a JSON configuration file that points to your Python script and passes in all the environment variables you are working with. From there, you toggle agent mode. There are many options for which LLM you can select; be sure to check out Warp's documentation to learn more about the options available.

After that, you can ask a question about creating synthetic testing or call the MCP function you are looking for directly. The following three functions can be used:

  • diagnose_warp_mcp_config: used for debugging environment variable issues that may arise. This function likely won't be needed unless there is a problem with your configuration.

  • create_and_deploy_browser_test: automatically creates Playwright tests when given the test name, the URL you want to test, and a schedule. This approach uses a template-based method rather than a machine-learning-based method, so all the tests it outputs will look similar.

  • llm_create_and_deploy_test_from_prompt: similar to create_and_deploy_browser_test, but it uses an LLM to create tests based on a prompt you give it, so the tests should reflect the prompt you provided. To run this function, you provide a test name, URL, prompt, and schedule.
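
For illustration, here is roughly what calling the prompt-based tool looks like from code, assuming FastMCP 2.0's client API; the parameter values below are hypothetical examples, and the exact tool signature lives in the repository:

import asyncio
from fastmcp import Client

async def main():
    # Points at the server script from the repository; Warp does the equivalent via its JSON config.
    async with Client("elastic_synthetics_server.py") as client:
        result = await client.call_tool(
            "llm_create_and_deploy_test_from_prompt",
            {
                "test_name": "homepage-check",  # hypothetical example values
                "url": "https://example.com",
                "prompt": "Load the homepage and verify the page title is visible",
                "schedule": 10,  # assumed to be minutes between runs
            },
        )
        print(result)

asyncio.run(main())

In practice, you'll usually let the agent in Warp make this call for you; the sketch just makes the expected arguments concrete.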

Why create this solution as an MCP server?

The reason this was developed as an MCP server, as opposed to just a standalone script or a standard CLI, is that it can be structured and interacted with in a more conversational manner. It enables an LLM to generate dynamic Playwright testing while maintaining consistent arguments, environment variables, and responses to ensure accuracy and reliability. Thus, it becomes a reliable workflow that other agents or developers can compose with additional tools. In other words, the MCP layer turns your LLM-based test authoring into a standardized, reusable capability instead of a one-off script. To learn more about the direction of MCP, be sure to check out our article on the topic.
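
As a minimal sketch of that structure, here is how a FastMCP 2.0 server exposes a tool; the tool body below is a placeholder rather than the repository's implementation:

from fastmcp import FastMCP

mcp = FastMCP("elastic-synthetics")

@mcp.tool()
def create_and_deploy_browser_test(test_name: str, url: str, schedule: int = 10) -> dict:
    """Placeholder: the real tool renders a Playwright test and pushes it to Kibana."""
    return {"status": "ok", "test_name": test_name, "url": url, "schedule": schedule}

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio so clients like Warp can call them

Because the tool's arguments are typed and named, every client, whether Warp, Claude Desktop, or another agent, calls it the same way, which is what makes the workflow composable.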

Implementation considerations

When creating a solution like this one, one thing to be mindful of is your use of tokens. An early version of this solution took approximately twenty minutes to create synthetic tests and ultimately led to severe rate-limiting.

Another issue faced during the building process was striking a balance between a template that makes creating a Playwright script straightforward and an LLM that writes Playwright scripts from prompts without feeling cookie-cutter. With a more LLM-driven approach, the generated scripts often didn't work or referenced parameters that didn't exist; a more templated approach was more reliable but felt repetitive. The final version of this solution balances the two by reusing elements of the template while adjusting the LLM's temperature parameter, which controls the randomness or creativity of a large language model's output.
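
As a rough illustration of that temperature adjustment, here is a minimal sketch using the openai Python package; the model name, temperature value, and prompts are assumptions rather than the repository's exact settings:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.3,  # lower values keep generated steps closer to the template's style
    messages=[
        {"role": "system", "content": "You write Playwright steps for @elastic/synthetics journeys."},
        {"role": "user", "content": "Write steps that load https://example.com and verify the page title."},
    ],
)
print(response.choices[0].message.content)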

While testing this solution, a failing test also emerged that required navigating past a pop-up. In more complex cases, this may serve as a building block that requires additional domain knowledge to create a complete passing Playwright test.

How to get started

Prerequisites

  • This application was built with Python 3.12.1, but you can use any version of Python higher than 3.10.
  • This application uses Elastic Observability version 9.1.2, but you can use any version of Elastic Observability higher than 8.10, including Elastic Cloud Serverless.
  • You will also need an OpenAI API key to use the LLM capabilities of this application. Configure an environment variable for your OpenAI API key, which you can find on the API keys page in OpenAI's developer portal.

Step 1: Install the packages and clone the repository

In order for this MCP server to run locally, you will need to install the following packages:

pip install fastmcp openai
npm install -g playwright @elastic/synthetics

You will use FastMCP 2.0 to create the MCP server, and OpenAI to generate tests based on prompts that you provide. Additionally, you will want to clone the repository to obtain a local copy of the server.

Step 2: Set up a configuration file in Warp

Inside of Warp, go to the side panel, find the section that says MCP servers, and click “add”.

After that, you will be prompted to add a JSON configuration file that should resemble the following. Be sure to add your own Kibana URL, update the correct path, and include your own keys and tokens.

{
  "elastic-synthetics": {
    "command": "python",
    "args": ["elastic_synthetics_server.py"],
    "env": {
      "PYTHONPATH": ".",
      "ELASTIC_KIBANA_URL": "https://your-kibana-url.elastic-cloud.com",
      "ELASTIC_API_KEY": "your-api-key-here",
      "ELASTIC_PROJECT_ID": "mcp-synthetics-demo",
      "ELASTIC_SPACE": "default",
      "ELASTIC_AUTO_PUSH": "true",
      "ELASTIC_USE_JAVASCRIPT": "false",
      "ELASTIC_INSTALL_DEPENDENCIES": "true",
      "OPENAI_API_KEY": "sk-your-openai-key",
      "LLM_MODEL": "gpt-4o"
    },
    "working_directory": "/path/to/your/file",
    "start_on_launch": true
  }
}

Step 3: Ask a question or call the tools directly

Now that you're set up locally, toggle agent mode and select the LLM you wish to use. Gemini 2.5 Pro was chosen for this blog post because it provided a straightforward answer, while other LLMs tested returned very lengthy responses.

To start using the MCP tools, you can ask a question that contains the test name, URL, prompt, and schedule, for example: “Create a synthetic test called homepage-check for https://example.com that verifies the page title, running every 10 minutes.”

You can also call the tool directly by typing llm_create_and_deploy_test_from_prompt() and the program will prompt you for the relevant details.

Inside Kibana, you should see your monitor listed if you click Applications and select Monitors under Synthetics. You can also find a link to your monitor in the response from your MCP tool.

What's going on inside

This code sample consists of three primary functions, which are MCP tools that you can call from your MCP client: diagnose_warp_mcp_config, create_and_deploy_browser_test, and llm_create_and_deploy_test_from_prompt.

Debugging environment issues

Various issues around environment variable loading came up while creating this application, so there was a need for an MCP tool that could be called to diagnose whatever errors may be present.

The tool diagnose_warp_mcp_config kicks off with the decorator @mcp.tool(), which registers it so it can be called and listed among the available tools. This tool is designed to help debug issues with Elastic-specific environment variables for troubleshooting purposes. First, it loads the environment variables and looks for the Elastic-specific ones. It then applies security masking so that sensitive values like API keys are hidden in the output, showing only the first eight characters followed by "...". Finally, it determines whether the minimum required credentials (Kibana URL and API key) are present to proceed with deployment and provides a report letting you know about any issues that need to be addressed.

@mcp.tool()
def diagnose_warp_mcp_config() -> Dict[str, Any]:
    """Diagnose Warp MCP environment configuration for Elastic Synthetics"""
    try:
        env_vars = load_env_from_warp_mcp()

        # Check for required variables
        kibana_url = env_vars.get('ELASTIC_KIBANA_URL') or env_vars.get('KIBANA_URL')
        api_key = env_vars.get('ELASTIC_API_KEY') or env_vars.get('API_KEY')
        project_id = env_vars.get('ELASTIC_PROJECT_ID') or env_vars.get('PROJECT_ID')
        space = env_vars.get('ELASTIC_SPACE') or env_vars.get('SPACE', 'default')

        # Mask sensitive values for display
        masked_vars = {}
        for key, value in env_vars.items():
            if 'API_KEY' in key or 'TOKEN' in key:
                masked_vars[key] = f"{value[:8]}..." if value and len(value) > 8 else "***"
            else:
                masked_vars[key] = value

        deployment_ready = bool(kibana_url and api_key)

        return safe_json_response({
            "status": "success",
            "environment_variables": masked_vars,
            "required_check": {
                "kibana_url": bool(kibana_url),
                "api_key": bool(api_key),
                "project_id": bool(project_id),
                "space": bool(space)
            },
            "deployment_ready": deployment_ready,
            "recommendations": [
                "Environment variables detected" if env_vars else "No environment variables found",
                "Kibana URL configured" if kibana_url else "Missing ELASTIC_KIBANA_URL or KIBANA_URL",
                "API Key configured" if api_key else "Missing ELASTIC_API_KEY or API_KEY",
                "Ready for deployment" if deployment_ready else "Missing required credentials"
            ]
        })

    except Exception as e:
        return safe_json_response({
            "status": "error",
            "error": str(e),
            "error_type": type(e).__name__
        })

Creating synthetic tests based on a template

While developing this solution to generate tests based on a prompt, the process wasn't always smooth. Early versions ran into issues with accuracy, hallucinations, and loops. To make progress, a logical next step was a version that relied on a test template, which made it possible to verify the mechanics of the solution, such as whether the test could pass and be deployed to Elastic correctly.

This solution automates the entire process of creating a synthetic browser test that regularly checks whether a website is working correctly, then deploys it to Elastic Observability Synthetics. Similar to diagnose_warp_mcp_config, the MCP tool create_and_deploy_browser_test starts with the decorator @mcp.tool() and checks that the proper environment variables are loaded.

From there, it creates a TypeScript test file based on templates and generates dynamic test steps from the target website's characteristics: navigating to the website, verifying the page title exists, checking page load performance, taking a screenshot, and verifying page content is visible. Finally, it saves the test file in a synthetic_tests directory.
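
As a minimal sketch of that template idea, the following renders a simple @elastic/synthetics journey from a Python string template and saves it; the journey, step, monitor, and expect imports come from the real @elastic/synthetics package, while the step wording and file naming here are assumptions, simpler than the repository's template:

from pathlib import Path

def write_template_test(test_name: str, url: str, schedule: int) -> Path:
    """Render a simple @elastic/synthetics journey from a template and save it."""
    test_code = f"""\
import {{ journey, step, monitor, expect }} from '@elastic/synthetics';

journey('{test_name}', ({{ page }}) => {{
  monitor.use({{ schedule: {schedule} }});  // minutes between runs
  step('Navigate to the target URL', async () => {{
    await page.goto('{url}');
  }});
  step('Verify the page title exists', async () => {{
    expect(await page.title()).toBeTruthy();
  }});
}});
"""
    out_dir = Path("synthetic_tests")
    out_dir.mkdir(exist_ok=True)
    out_file = out_dir / f"{test_name}.journey.ts"
    out_file.write_text(test_code)
    return out_file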

Finally, it wraps Elastic's CLI tool, @elastic/synthetics, to push the test to Kibana, allowing you to set which geographic locations to run tests from, how often to run the test, and the project and space settings.
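
The push itself can be sketched as a subprocess call to the @elastic/synthetics CLI; the push command and its --auth, --url, --id, and --space flags are real, but the error handling here is a simplified assumption:

import os
import subprocess

def push_project(project_dir: str) -> None:
    """Push the synthetics project to Kibana using the @elastic/synthetics CLI."""
    cmd = [
        "npx", "@elastic/synthetics", "push",
        "--auth", os.environ["ELASTIC_API_KEY"],
        "--url", os.environ["ELASTIC_KIBANA_URL"],
        "--id", os.environ.get("ELASTIC_PROJECT_ID", "mcp-synthetics-demo"),
        "--space", os.environ.get("ELASTIC_SPACE", "default"),
    ]
    # Note: some versions of the CLI prompt for confirmation when run interactively.
    subprocess.run(cmd, cwd=project_dir, check=True)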

You can check out the full code for this MCP tool here.

Creating synthetic tests based on a prompt

While creating browser tests based on a templated approach is a good starting point, the results felt generic and cookie-cutter. It did, however, provide a helpful structure to build an LLM-based function on top of.

The MCP tool llm_create_and_deploy_test_from_prompt begins by ensuring that basic parameters, including locations, schedule, and directories, are set. Additionally, it gathers information about the target website to inform the AI and initializes the OpenAI client and model, which is GPT-4o.

After setting up the LLM, it converts natural language requests into actual Playwright test code, then cleans and validates the AI-generated code to prevent issues like injection attacks or malformed syntax. It draws inspiration from the templated approach, wrapping AI-generated steps within a proven, reliable test framework template. Finally, it deploys the test to Elastic in a similar manner to the previous tool.
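
The cleaning step can be sketched as a small validation pass over the model's output; the checks below are illustrative stand-ins for the repository's rules rather than its actual code:

def sanitize_generated_steps(steps: str) -> str:
    """Reject obviously unsafe or malformed LLM output before wrapping it in the journey template."""
    forbidden = ["require(", "import ", "process.", "child_process", "eval("]
    for token in forbidden:
        if token in steps:
            raise ValueError(f"Generated code contains a disallowed token: {token!r}")
    if "step(" not in steps:
        raise ValueError("Generated code does not contain any step() calls")
    # Strip the markdown fences that models sometimes wrap around code.
    return steps.replace("```typescript", "").replace("```", "").strip()

The sanitized steps are then dropped into the same journey template used by the template-based tool before being pushed to Elastic.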

You can find the code for this tool here.

Conclusion and next steps

Synthetic monitoring in Elastic Observability makes it easy to test complete user journeys and keep your site reliable, with simple setup and a Playwright integration. A tool like this can provide a starting point for tests that you can iterate on after.

A solution like this is just the start of an MCP implementation that automatically generates Playwright tests for you. It can be expanded in the future to include heartbeat monitors, utilize the Playwright MCP server, or experiment with Claude for Chrome to create synthetic tests.

Check out more articles on Synthetic Monitoring on Observability Labs.
