Get ready for production

Many teams rely on Elasticsearch to run their key services. To keep these services running, you can design your Elasticsearch deployment to keep Elasticsearch available, even in case of large-scale outages. To keep it running fast, you can also design your deployment to be responsive to production workloads.

Elasticsearch is built to be always available and to scale with your needs. It does this using a distributed architecture. By distributing your cluster, you can keep Elasticsearch online and responsive to requests.

In case of failure, Elasticsearch offers tools for cross-cluster replication and cluster snapshots that can help you fall back or recover quickly. You can also use cross-cluster replication to serve requests based on the geographic location of your users and your resources.

Elasticsearch also offers security and monitoring tools to help you keep your cluster highly available.

Use multiple nodes and shards

Nodes and shards are what make Elasticsearch distributed and scalable.

These concepts aren’t essential if you’re just getting started. How you deploy Elasticsearch in production determines what you need to know:

  • Self-managed Elasticsearch: You are responsible for setting up and managing nodes, clusters, shards, and replicas. This includes managing the underlying infrastructure, scaling, and ensuring high availability through failover and backup strategies.
  • Elastic Cloud: Elastic can autoscale resources in response to workload changes. Choose from different deployment types to apply sensible defaults for your use case. A basic understanding of nodes, shards, and replicas is still important.
  • Elastic Cloud Serverless: You don’t need to worry about nodes, shards, or replicas. These resources are 100% automated on the serverless platform, which is designed to scale with your workload.

You can add servers (nodes) to a cluster to increase capacity, and Elasticsearch automatically distributes your data and query load across all of the available nodes.

Elasticsearch distributes your data across nodes by subdividing an index into shards. Each index in Elasticsearch is a grouping of one or more physical shards, where each shard is a self-contained Lucene index containing a subset of the documents in the index. By distributing the documents in an index across multiple shards, and distributing those shards across multiple nodes, Elasticsearch increases indexing and query capacity.

There are two types of shards: primaries and replicas. Each document in an index belongs to one primary shard. A replica shard is a copy of a primary shard. Replicas maintain redundant copies of your data across the nodes in your cluster. This protects against hardware failure and increases capacity to serve read requests like searching or retrieving a document.

The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can be changed at any time, without interrupting indexing or query operations.
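As an illustration, here is a minimal sketch using the index APIs (the index name my-index-000001 is just a placeholder): the first request creates an index with three primary shards and one replica of each; the second raises the replica count later, which is allowed at any time, whereas the primary shard count is fixed once the index exists.

# Create an index with 3 primary shards and 1 replica per primary
PUT /my-index-000001
{
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 1
    }
  }
}

# Increase the number of replicas without interrupting indexing or queries
PUT /my-index-000001/_settings
{
  "index": {
    "number_of_replicas": 2
  }
}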

Shard copies in your cluster are automatically balanced across nodes to provide scale and high availability. All nodes are aware of all the other nodes in the cluster and can forward client requests to the appropriate node. This allows Elasticsearch to distribute indexing and query load across the cluster.

If you’re exploring Elasticsearch for the first time or working in a development environment, then you can use a cluster with a single node and create indices with only one shard. However, in a production environment, you should build a cluster with multiple nodes and indices with multiple shards to increase performance and resilience.
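If you want to check how shard copies are distributed, the cat APIs list the nodes in the cluster and show which node holds each shard copy of an index (my-index-000001 is the placeholder index from the sketch above):

# List the nodes in the cluster
GET /_cat/nodes?v

# List each primary and replica shard of the index and the node it is on
GET /_cat/shards/my-index-000001?v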

To learn about optimizing the number and size of shards in your cluster, refer to Size your shards. To learn about how read and write operations are replicated across shards and shard copies, refer to Reading and writing documents. To adjust how shards are allocated and balanced across nodes, refer to Shard allocation, relocation, and recovery.

CCR for disaster recovery and geo-proximity

To effectively distribute read and write operations across nodes, the nodes in a cluster need good, reliable connections to each other. To provide better connections, you typically co-locate the nodes in the same data center or nearby data centers.

Co-locating nodes in a single location exposes you to the risk of a single outage taking your entire cluster offline. To maintain high availability, you can prepare a second cluster that can take over in case of disaster by implementing cross-cluster replication (CCR).

CCR provides a way to automatically synchronize indices from your primary cluster to a secondary remote cluster that can serve as a hot backup. If the primary cluster fails, the secondary cluster can take over.

You can also use CCR to create secondary clusters to serve read requests in geo-proximity to your users.
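As a rough sketch of the CCR setup (the cluster alias clusterA, the host address, and the index names are placeholders, and CCR requires an appropriate license): on the secondary cluster you first register the primary cluster as a remote cluster, then create a follower index that replicates a leader index from it.

# On the secondary cluster: register the primary cluster as a remote cluster
PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "clusterA": {
          "seeds": ["192.168.1.1:9300"]
        }
      }
    }
  }
}

# Create a follower index that replicates leader-index from clusterA
PUT /follower-index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster": "clusterA",
  "leader_index": "leader-index"
}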

Learn more about cross-cluster replication and about designing for resilience.

You can also take snapshots of your cluster that can be restored in case of failure.
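A minimal snapshot sketch (the repository name, filesystem location, snapshot name, and index name are placeholders; a shared filesystem repository also requires its location to be listed in path.repo on every node):

# Register a shared filesystem snapshot repository
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_repository"
  }
}

# Take a snapshot and wait for it to complete
PUT /_snapshot/my_repository/snapshot_1?wait_for_completion=true

# Restore an index from the snapshot after a failure
POST /_snapshot/my_repository/snapshot_1/_restore
{
  "indices": "my-index-000001"
}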

Security and monitoring

As with any enterprise system, you need tools to secure, manage, and monitor your Elasticsearch clusters. Security, monitoring, and administrative features that are integrated into Elasticsearch enable you to use Kibana as a control center for managing a cluster.

Learn about securing an Elasticsearch cluster.

Learn about monitoring your cluster.
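For example, on a self-managed cluster one way to start collecting monitoring data is to enable the built-in collection through the cluster settings API (a sketch of one option; shipping metrics with Elastic Agent or Metricbeat is another common approach):

# Enable collection of monitoring data by the cluster itself
PUT /_cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}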

Cluster design

Elasticsearch offers many options that allow you to configure your cluster to meet your organization’s goals, requirements, and restrictions. The cluster design guides in this documentation explain how to tune your cluster to meet your needs.

Many Elasticsearch options come with different performance considerations and trade-offs. The best way to determine the optimal configuration for your use case is through testing with your own data and queries.