
Vector Database

MCP for intelligent search

Building an intelligent search system by integrating Elastic's query layer with MCP to improve the quality of answers generated by LLMs.
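The post builds on the idea of exposing Elasticsearch queries to an LLM as MCP tools. As a rough sketch of that pattern (not the post's actual implementation), the official MCP Python SDK can wrap a search call in a tool; the cluster URL, index, and field names below are placeholders:

```python
from elasticsearch import Elasticsearch
from mcp.server.fastmcp import FastMCP

es = Elasticsearch("http://localhost:9200")   # placeholder cluster
mcp = FastMCP("elasticsearch-search")

@mcp.tool()
def search_articles(query: str, size: int = 5) -> list[dict]:
    """Full-text search over a hypothetical 'articles' index, exposed as an MCP tool."""
    resp = es.search(
        index="articles",
        query={"match": {"content": query}},
        size=size,
        source=["title", "content"],
    )
    return [hit["_source"] for hit in resp["hits"]["hits"]]

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to an MCP-capable LLM client
```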

Vector search filtering: Keep it relevant

September 3, 2025

Performing a vector search to find the results most similar to a query is often not enough; filtering is needed to narrow down the results. This article explains how filtering works for vector search in Elasticsearch and Apache Lucene.
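For orientation, a filtered kNN request in the Python client looks roughly like the sketch below; the index, field names, and the toy 4-dimensional vector are illustrative only:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Pre-filtering: the filter restricts which documents the graph search may return,
# so all k results satisfy the filter.
resp = es.search(
    index="products",                           # hypothetical index
    knn={
        "field": "embedding",                   # dense_vector field
        "query_vector": [0.1, 0.2, 0.3, 0.4],   # toy vector; dims must match the mapping
        "k": 10,
        "num_candidates": 100,
        "filter": {"term": {"category": "outdoor"}},
    },
    source=["name", "category"],
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"])
```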

Lighter by default: Excluding vectors from source

Elasticsearch now excludes vectors from source by default, saving space and improving performance while keeping vectors accessible when needed.
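Because `_source` no longer carries the vectors by default, callers that still need the raw values can request them explicitly. A minimal sketch, assuming a dense_vector field named `embedding` on a hypothetical `products` index:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# The vector is excluded from _source by default, so ask for it via `fields`.
resp = es.search(
    index="products",
    query={"match_all": {}},
    fields=["embedding"],   # returns vector values under hit["fields"]
    size=1,
)
hit = resp["hits"]["hits"][0]
print(hit["fields"]["embedding"])
```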

Beyond similar names: How Elasticsearch semantic text exceeds OpenSearch semantic field in simplicity, efficiency, and integration

August 12, 2025

Comparing Elasticsearch semantic text and OpenSearch semantic field in terms of simplicity, configurability, and efficiency.
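As a rough illustration of the simplicity being compared, a `semantic_text` mapping needs little more than the field type; the index name, document, and query below are placeholders, and the field falls back to the default managed inference endpoint when no `inference_id` is set:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One-line mapping: chunking, embedding, and indexing are handled for you.
es.indices.create(
    index="support-articles",
    mappings={"properties": {"body": {"type": "semantic_text"}}},
)

es.index(
    index="support-articles",
    document={"body": "Reset your password from the account settings page."},
)
es.indices.refresh(index="support-articles")

# Query the same field semantically.
resp = es.search(
    index="support-articles",
    query={"semantic": {"field": "body", "query": "how do I change my password?"}},
)
print(resp["hits"]["hits"][0]["_source"]["body"])
```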

Using Direct IO for vector searches

August 8, 2025

Using rescoring for kNN vector searches improves search recall, but can increase latency. Learn how to reduce this impact by leveraging direct IO.
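Direct IO itself is an engine-level optimization, but the rescoring pattern it speeds up looks roughly like this: oversample candidates from the quantized index, then rescore them against full-precision vectors. The index, field, vector, and availability of the `rescore_vector` option are assumptions here, so treat this as a sketch rather than the post's exact setup:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="products",                           # hypothetical quantized (e.g. BBQ) index
    knn={
        "field": "embedding",
        "query_vector": [0.1, 0.2, 0.3, 0.4],   # toy vector; dims must match the mapping
        "k": 10,
        "num_candidates": 100,
        # Fetch ~3x more candidates from the quantized graph, then rescore them
        # with the full-precision vectors to recover recall.
        "rescore_vector": {"oversample": 3.0},
    },
)
```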

Elasticsearch now with BBQ by default & ACORN for filtered vector search

Explore how Elasticsearch's vector search now delivers better results faster, and at a lower cost.
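On recent releases BBQ applies to new dense_vector fields automatically; on earlier versions it can be opted into through `index_options`. A minimal sketch with placeholder names and dimensions:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="docs-bbq",
    mappings={
        "properties": {
            "embedding": {
                "type": "dense_vector",
                "dims": 384,                     # must match your embedding model
                "index": True,
                "similarity": "cosine",
                # Explicit opt-in to Better Binary Quantization + HNSW
                # (only needed where BBQ isn't yet the default).
                "index_options": {"type": "bbq_hnsw"},
            }
        }
    },
)
```

Filtered kNN queries against an index like this then pick up the ACORN-style graph traversal the post describes, with no query-side changes.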

Diversifying search results with Maximum Marginal Relevance

Implementing the Maximum Marginal Relevance (MMR) algorithm with Elasticsearch and Python. This blog includes code examples for vector search reranking.
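The post's full code is behind the link, but the core of MMR is compact enough to sketch here: greedily pick the candidate that balances relevance to the query against similarity to what has already been selected, with the trade-off controlled by a lambda parameter. A minimal NumPy version, independent of Elasticsearch:

```python
import numpy as np

def mmr_rerank(query_vec, doc_vecs, lambda_param=0.7, top_k=5):
    """Return indices of doc_vecs in Maximum Marginal Relevance order."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < top_k:
        best_i, best_score = None, float("-inf")
        for i in candidates:
            relevance = cos(query_vec, doc_vecs[i])
            # Penalize candidates similar to documents already selected.
            redundancy = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0)
            score = lambda_param * relevance - (1 - lambda_param) * redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
        candidates.remove(best_i)
    return selected

# Toy usage: with a low lambda, the near-duplicate second vector is demoted
# in favour of the more diverse third one.
query = np.array([0.1, 0.2, 0.3, 0.4])
docs = [np.array(v) for v in ([0.1, 0.2, 0.3, 0.4],
                              [0.1, 0.2, 0.3, 0.41],
                              [0.9, 0.1, 0.0, 0.2])]
print(mmr_rerank(query, docs, lambda_param=0.3, top_k=2))
```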

Semantic text is all that and a bag of (BBQ) chips! With configurable chunking settings and index options

Semantic text search is now customizable: configurable chunking settings and index options let you control how text is split into chunks and how vectors are quantized, making semantic_text more powerful for expert use cases.
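A rough sketch of what a customized mapping might look like, assuming `semantic_text` accepts the same `chunking_settings` object used by the inference APIs (the field names and values here are illustrative rather than taken from the post):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="docs-chunked",
    mappings={
        "properties": {
            "body": {
                "type": "semantic_text",
                # Assumed shape: sentence-based chunks of at most 250 words,
                # with one sentence of overlap between neighbouring chunks.
                "chunking_settings": {
                    "strategy": "sentence",
                    "max_chunk_size": 250,
                    "sentence_overlap": 1,
                },
            }
        }
    },
)
```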

Elasticsearch open inference API adds support for IBM watsonx.ai rerank models

Exploring how to use IBM watsonx™ reranking when building search experiences in the Elasticsearch vector database.
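At a high level, the integration lets a watsonx.ai rerank model be registered as an inference endpoint and then referenced from search requests. The service name, settings keys, and placeholder values below are assumptions for illustration; the post has the exact configuration:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Register a rerank inference endpoint backed by watsonx.ai (values are placeholders).
es.inference.put(
    task_type="rerank",
    inference_id="my-watsonx-rerank",
    inference_config={
        "service": "watsonxai",
        "service_settings": {
            "api_key": "<IBM_CLOUD_API_KEY>",
            "url": "<WATSONX_ENDPOINT_URL>",
            "project_id": "<WATSONX_PROJECT_ID>",
            "model_id": "<RERANK_MODEL_ID>",
            "api_version": "2024-05-02",
        },
    },
)
```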

Ready to build state-of-the-art search experiences?

Sufficiently advanced search isn't achieved by one person alone. Elasticsearch is powered by data scientists, ML ops practitioners, engineers, and many others who are just as passionate about search as you are. Let's connect and work together to build the magical search experience that gets you the results you want.

Try it yourself