Vector Database - Elasticsearch Labs

Vector Database

Diversifying search results with Maximum Marginal Relevance

Implementing the Maximum Marginal Relevance (MMR) algorithm with Elasticsearch and Python. This blog includes code examples for vector search reranking.
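For context, the core of MMR is a greedy loop that trades query relevance against similarity to the results already selected. A minimal sketch in plain Python/NumPy (the function name and the λ default are illustrative, not taken from the post):

```python
import numpy as np

def mmr(query_vec, doc_vecs, lambda_param=0.7, top_k=3):
    """Greedy MMR: pick top_k docs balancing relevance and diversity.

    score(d) = lambda * sim(q, d) - (1 - lambda) * max_{s in selected} sim(d, s)
    """
    query_vec = np.asarray(query_vec, dtype=float)
    doc_vecs = np.asarray(doc_vecs, dtype=float)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    relevance = np.array([cosine(query_vec, d) for d in doc_vecs])
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < top_k:
        best, best_score = None, -np.inf
        for i in remaining:
            # Penalty: similarity to the closest already-selected document.
            diversity = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lambda_param * relevance[i] - (1 - lambda_param) * diversity
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a low λ the second pick favors a dissimilar document even when a near-duplicate of the first result scores higher on pure relevance.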

Semantic text is all that and a bag of (BBQ) chips! With configurable chunking settings and index options

Semantic text search is now customizable: configurable chunking settings and index options let you tune vector quantization, making semantic_text more powerful for expert use cases.
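As a rough sketch, a mapping using these options might look like the following. The option names (`chunking_settings`, `index_options`) and their fields are assumptions based on recent Elasticsearch releases and should be verified against the docs for your version; the inference endpoint name is hypothetical:

```python
# Hedged sketch: a semantic_text field with custom chunking and quantized
# index options. Option names are assumptions, not verified settings.
index_body = {
    "mappings": {
        "properties": {
            "body": {
                "type": "semantic_text",
                "inference_id": "my-inference-endpoint",  # hypothetical endpoint
                "chunking_settings": {
                    "strategy": "sentence",   # sentence-based chunking
                    "max_chunk_size": 250,    # words per chunk
                    "sentence_overlap": 1,    # sentences shared between chunks
                },
                "index_options": {
                    # Quantization choice for the backing dense vectors.
                    "dense_vector": {"type": "bbq_hnsw"}
                },
            }
        }
    }
}
```

This body would be passed to an index-creation call (e.g. `es.indices.create(index="docs", body=index_body)` with the Python client).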

Elasticsearch open inference API adds support for IBM watsonx.ai rerank models

Exploring how to use IBM watsonx™ reranking when building search experiences in the Elasticsearch vector database.
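Creating such an endpoint goes through the inference API's rerank task type. Below is a request-body sketch for something like `PUT _inference/rerank/my-watsonx-rerank`; the service identifier, settings keys, and all values are assumptions chosen to illustrate the shape, so consult the inference API docs before use:

```python
# Hypothetical request body for registering a watsonx.ai rerank endpoint.
# Every value below is a placeholder/assumption, not a verified setting.
rerank_endpoint = {
    "service": "watsonxai",  # assumed service identifier
    "service_settings": {
        "url": "https://us-south.ml.cloud.ibm.com",  # example region URL
        "api_key": "<IBM_CLOUD_API_KEY>",
        "project_id": "<WATSONX_PROJECT_ID>",
        "model_id": "<RERANK_MODEL_ID>",
        "api_version": "2024-05-02",  # example date-versioned API string
    },
}
```

Once registered, the endpoint can be referenced from a search request's reranking stage by its inference ID.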

Mapping embeddings to Elasticsearch field types: semantic_text, dense_vector, sparse_vector

May 13, 2025

Discussing how and when to use semantic_text, dense_vector, or sparse_vector, and how they relate to embedding generation.
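To make the distinction concrete, here are minimal mappings for the three field types. The `dims` value and the inference endpoint name are illustrative; `semantic_text` generates embeddings for you, `dense_vector` expects you to supply them, and `sparse_vector` stores token/weight pairs:

```python
# Minimal sketches of the three embedding-related field types.
field_mappings = {
    "properties": {
        # semantic_text: embeddings generated at index time via an
        # inference endpoint (endpoint name here is hypothetical).
        "title_semantic": {
            "type": "semantic_text",
            "inference_id": "my-inference-endpoint",
        },
        # dense_vector: you bring your own embeddings.
        "title_dense": {
            "type": "dense_vector",
            "dims": 384,             # must match your embedding model
            "index": True,
            "similarity": "cosine",
        },
        # sparse_vector: weighted tokens, e.g. produced by ELSER.
        "title_sparse": {"type": "sparse_vector"},
    }
}
```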

How to implement Better Binary Quantization (BBQ) into your use case and why you should

April 23, 2025

Exploring why you would implement Better Binary Quantization (BBQ) in your use case and how to do it.
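As an illustration, enabling BBQ in recent releases amounts to choosing a BBQ index type on a dense_vector field. The snippet below is a sketch; the `bbq_hnsw`/`bbq_flat` option names should be checked against the docs for your Elasticsearch version:

```python
# Sketch of a dense_vector field indexed with Better Binary Quantization.
# BBQ targets higher-dimensional vectors; option names are assumptions.
bbq_field = {
    "type": "dense_vector",
    "dims": 1024,
    "index": True,
    "similarity": "dot_product",            # assumes normalized vectors
    "index_options": {"type": "bbq_hnsw"},  # or "bbq_flat" for brute force
}
```

The trade-off is the usual quantization one: a large reduction in memory for the vector index in exchange for a small, typically rescoring-recoverable loss in recall.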

Elasticsearch BBQ vs. OpenSearch FAISS: Vector search performance comparison

April 15, 2025

A performance comparison between Elasticsearch BBQ and OpenSearch FAISS.

Speeding up merging of HNSW graphs

Explore the work we’ve been doing to reduce the overhead of building multiple HNSW graphs, particularly reducing the cost of merging graphs.

Scaling late interaction models in Elasticsearch - part 2

This article explores techniques for making late interaction vectors ready for large-scale production workloads, such as reducing disk space usage and improving computation efficiency.

Exploring GPU-accelerated Vector Search in Elasticsearch with NVIDIA

Powered by NVIDIA cuVS, the collaboration aims to provide developers with GPU acceleration for vector search in Elasticsearch.

Ready to build state of the art search experiences?

Sufficiently advanced search isn't achieved through the efforts of one. Elasticsearch is powered by data scientists, ML ops, engineers, and many more who are just as passionate about search as you are. Let's connect and work together to build the magical search experience that will get you the results you want.

Try it yourself