
ECK made simple: Deploying Elasticsearch on GCP GKE Autopilot

Learn how to deploy an Elasticsearch cluster on GCP using GKE Autopilot and ECK.



In this article, we are going to learn how to deploy Elasticsearch on Google Kubernetes Engine (GKE) using Autopilot.

For Elasticsearch, we are going to use Elastic Cloud on Kubernetes (ECK), the official Kubernetes operator that simplifies deploying and orchestrating all of the Elastic Stack components on Kubernetes.

[Figure: Elasticsearch deployment effort]

What is GKE Autopilot?

Google Kubernetes Engine (GKE) Autopilot provides a fully managed Kubernetes experience where Google handles cluster configuration, node management, security, and scaling while developers focus on deploying applications, allowing teams to go from code to production in minutes with built-in best practices.

When to use ECK in Google Cloud?

Elastic Cloud on Kubernetes (ECK) is best suited for organizations with existing Kubernetes infrastructure that want to deploy Elasticsearch with advanced features like dedicated node roles, high availability, and automation.

How to set up

1. Log in to the Google Cloud Console.

2. In the top right, click the Cloud Shell button to open the console, and deploy the GKE cluster from there. Alternatively, you can use the gcloud CLI locally.

Remember to replace the project ID with your own throughout the tutorial.

3. Enable the Google Kubernetes Engine API.

Click Next.

Kubernetes Engine API should now show as enabled when you search for it.

4. In Cloud Shell, create an Autopilot cluster. We will name it autopilot-cluster-1; replace autopilot-test with the ID of your project.
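A command along these lines creates the cluster (the region shown is an example; pick whichever suits you):

```shell
# Create a GKE Autopilot cluster named autopilot-cluster-1.
# Replace autopilot-test with your project ID and us-central1 with your region.
gcloud container clusters create-auto autopilot-cluster-1 \
    --project=autopilot-test \
    --region=us-central1
```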

5. Wait until it is ready. It takes around 10 minutes to create.

A confirmation message will be displayed once the cluster is set up correctly.

6. Configure kubectl command line access.
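This fetches the cluster credentials and adds them to your kubeconfig (region and project are the same example values used when creating the cluster):

```shell
# Point kubectl at the new Autopilot cluster.
gcloud container clusters get-credentials autopilot-cluster-1 \
    --region=us-central1 \
    --project=autopilot-test
```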

You should see:

kubeconfig entry generated for autopilot-cluster-1.

7. Install the Elastic Cloud on Kubernetes (ECK) operator.
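The operator ships as two manifests, the CRDs and the operator itself (2.16.1 is an example version; check the ECK release notes for the latest):

```shell
# Install the ECK custom resource definitions and the operator.
kubectl create -f https://download.elastic.co/downloads/eck/2.16.1/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.16.1/operator.yaml

# Optionally, watch the operator start up.
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
```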

8. Let’s create a single-node Elasticsearch instance with the default values.
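A minimal manifest looks like this (the resource name quickstart and the stack version are example values):

```shell
# Apply a single-node Elasticsearch resource with default settings.
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.17.0
  nodeSets:
  - name: default
    count: 1
    config:
      # Disable memory mapping; see the note on vm.max_map_count below.
      node.store.allow_mmap: false
EOF
```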

If you want to check some recipes for different setups, you can visit this link.

Keep in mind that if you don’t specify a storageClass, ECK will use the cluster default, which on GKE is standard-rwo. It uses the Compute Engine Persistent Disk CSI driver and creates a 1GB volume with it.

We disabled mmap (node.store.allow_mmap: false) because the default GKE machine has too low a vm.max_map_count value. Disabling it is not recommended for production; instead, increase vm.max_map_count. You can read more about how to do this here.

9. Let’s also deploy a single-node Kibana instance. For Kibana, we will add a LoadBalancer service, which gives us an external IP we can use to reach Kibana from our device.
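A manifest along these lines does it (again, quickstart and the version are example values; elasticsearchRef must match the name of your Elasticsearch resource):

```shell
# Deploy Kibana connected to the quickstart Elasticsearch cluster,
# exposed through a public LoadBalancer.
cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.17.0
  count: 1
  elasticsearchRef:
    name: quickstart
  http:
    service:
      metadata:
        annotations:
          cloud.google.com/l4-rbs: "enabled"
      spec:
        type: LoadBalancer
EOF
```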

Note the annotation:

cloud.google.com/l4-rbs: "enabled"

It is very important because it tells Autopilot to provide a public-facing LoadBalancer. If not set, the LoadBalancer will be internal.

10. Check that your pods are running.
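A plain pod listing is enough here:

```shell
# Both the Elasticsearch and Kibana pods should reach the Running state.
kubectl get pods
```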

11. You can also run kubectl get elasticsearch and kubectl get kibana for more specific stats like Elasticsearch version, node count, and health.

12. Access your services.
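ECK exposes Kibana through a service named after the resource, so assuming the example name quickstart from earlier:

```shell
# The Kibana HTTP service is named <kibana-name>-kb-http.
kubectl get service quickstart-kb-http
```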

This will show you the external URL for Kibana under EXTERNAL-IP. It might take a few minutes for the LoadBalancer to provision. Copy the value of EXTERNAL-IP.

13. Get the Elasticsearch password for the ‘elastic’ user:
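The operator stores it in a secret named after the Elasticsearch resource (quickstart is the example name used above):

```shell
# Decode the elastic user's password from the <es-name>-es-elastic-user secret.
kubectl get secret quickstart-es-elastic-user \
    -o go-template='{{.data.elastic | base64decode}}'
```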

14. Access Kibana through your browser:

  • URL: https://<EXTERNAL_IP>:5601
  • Username: elastic
  • Password: 28Pao50lr2GpyguX470L2uj5 (from the previous step)

15. When you access Kibana from your browser, you will see the welcome screen.

If you want to change the Elasticsearch cluster specifications, such as adding or resizing nodes, you can apply the YAML manifest again with the new settings:

In this example, we are going to add one more node and modify the RAM and CPU. As you can see, kubectl get elasticsearch now shows 2 nodes:
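Reusing the example quickstart manifest from earlier, the change is a sketch like this (the resource amounts are illustrative, not recommendations):

```shell
# Scale to two nodes and set explicit CPU/memory on the container.
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.17.0
  nodeSets:
  - name: default
    count: 2                     # one more node than before
    config:
      node.store.allow_mmap: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            requests:
              cpu: 2
              memory: 4Gi
            limits:
              memory: 4Gi
EOF
```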

The same applies for Kibana:

We can adjust the container CPU/RAM and also the Node.js memory usage (max-old-space-size).
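For the example quickstart Kibana resource, that combination can be sketched as follows (the values shown are illustrative; keep max-old-space-size below the container memory limit):

```shell
# Resize the Kibana container and cap the Node.js heap via NODE_OPTIONS.
cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.17.0
  count: 1
  elasticsearchRef:
    name: quickstart
  http:
    service:
      metadata:
        annotations:
          cloud.google.com/l4-rbs: "enabled"
      spec:
        type: LoadBalancer
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: NODE_OPTIONS
          value: "--max-old-space-size=2048"
        resources:
          requests:
            cpu: 1
            memory: 2.5Gi
          limits:
            memory: 2.5Gi
EOF
```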

Keep in mind that existing volume claims cannot be downsized. After applying the update, the operator will make the changes with minimal disruption.

Remember to delete the cluster when you're done testing to avoid unnecessary costs.
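A single command removes the whole Autopilot cluster (same example region and project as before):

```shell
# Tear down the cluster and all workloads deployed on it.
gcloud container clusters delete autopilot-cluster-1 \
    --region=us-central1 \
    --project=autopilot-test
```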

Next steps

If you want to learn more about Kubernetes and the Google Kubernetes Engine, check these articles:
