
Better Binary Quantization (BBQ) in Lucene and Elasticsearch

How Better Binary Quantization works in Lucene and Elasticsearch.

Embedding models output float32 vectors that are often too large for efficient processing and practical applications. Elasticsearch supports int8 scalar quantization to reduce vector size while preserving performance; more aggressive quantization methods tend to reduce retrieval quality too much to be practical for real-world use. In Elasticsearch 8.16 and Lucene, we introduced Better Binary Quantization (BBQ), a new approach developed from insights drawn from a recent technique - dubbed "RaBitQ" - proposed by researchers from Nanyang Technological University, Singapore.

BBQ is a leap forward in quantization for Lucene and Elasticsearch: it reduces float32 dimensions to single bits, delivering ~95% memory reduction while maintaining high ranking quality. BBQ outperforms traditional approaches like Product Quantization (PQ) in indexing speed (20-30x less quantization time) and query speed (2-5x faster queries), with no additional loss in accuracy.

In this blog, we will explore BBQ in Lucene and Elasticsearch, focusing on recall, efficient bitwise operations, and optimized storage for fast, accurate vector search.

What does the "better" in Better Binary Quantization mean?

In Elasticsearch 8.16 and in Lucene, we have introduced what we call "Better Binary Quantization". Naive binary quantization is exceptionally lossy and achieving adequate recall requires gathering 10x or 100x additional neighbors to rerank. This just doesn't cut it.

In comes Better Binary Quantization! Here are some of the significant differences between Better Binary Quantization and naive binary quantization:

  • All vectors are normalized around a centroid. This unlocks some nice properties in quantization.
  • Multiple error correction values are stored. Some of these corrections are for the centroid normalization, some are for the quantization.
  • Asymmetric quantization. Here, while the vectors themselves are stored as single bit values, queries are only quantized down to int4. This significantly increases search quality at no additional cost in storage.
  • Bit-wise operations for fast search. The query vectors are quantized and transformed in such a way that allows for efficient bit-wise operations.

Indexing with better binary quantization

Indexing is simple. Remember, Lucene builds individual read-only segments. As vectors come in for a new segment, the centroid is incrementally calculated. Then, once the segment is flushed, each vector is normalized around the centroid and quantized.

Here is a small example:

$$
v_{1} = [0.56, 0.85, 0.53, 0.25, 0.46, 0.01, 0.63, 0.73] \newline
c = [0.65, 0.65, 0.52, 0.35, 0.69, 0.30, 0.60, 0.76] \newline
v_{c1}' = v_{1} - c = [-0.09, 0.19, 0.01, -0.10, -0.23, -0.38, -0.05, -0.03] \newline
bin(v_{c1}') = \left\{ \begin{cases} 1 & x \gt 0 \\ 0 & otherwise \end{cases} : x \in v_{c1}' \right\} \newline
bin(v_{c1}') = [0, 1, 1, 0, 0, 0, 0, 0] \newline
0b00000110 = 6
$$

When quantizing down to the bit level, every 8 floating point values are transformed into a single byte.

Then, each of the bits are packed into a byte and stored in the segment along with any error correction values required for the vector similarity chosen.

For each vector, we store dims/8 bytes plus any error correction values: 2 floating point values for Euclidean, or 3 for dot product.
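To make this concrete, here is a minimal Java sketch of binarizing a vector around its centroid and packing the bits into dims/8 bytes. The method name and the LSB-first bit order are illustrative assumptions, not the actual Lucene implementation (which also computes the error correction values):

// Illustrative sketch only: binarize around the centroid and pack bits, LSB first.
static byte[] binarizeAndPack(float[] vector, float[] centroid) {
  byte[] packed = new byte[(vector.length + 7) / 8]; // one bit per dimension
  for (int dim = 0; dim < vector.length; dim++) {
    // Center the component around the centroid and keep only its sign.
    if (vector[dim] - centroid[dim] > 0) {
      packed[dim >> 3] |= 1 << (dim & 7); // dimension 0 -> bit 0 of byte 0, etc.
    }
  }
  return packed;
}

Running this on the example vector and centroid above yields a single byte with value 6 (0b00000110), matching the worked example.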

A quick side quest to talk about how we handle merging

When segments are merged, we can take advantage of the previously calculated centroids: we simply take a weighted average of the centroids and then re-quantize the vectors around the new centroid.
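Here is a minimal sketch of that centroid merge, assuming we know each segment's centroid and vector count (the names are illustrative, not Lucene's):

// Illustrative sketch: weighted average of per-segment centroids.
static float[] mergeCentroids(float[][] centroids, int[] vectorCounts) {
  float[] merged = new float[centroids[0].length];
  long totalCount = 0;
  for (int count : vectorCounts) {
    totalCount += count;
  }
  for (int seg = 0; seg < centroids.length; seg++) {
    for (int dim = 0; dim < merged.length; dim++) {
      // Each segment contributes proportionally to how many vectors it holds.
      merged[dim] += centroids[seg][dim] * ((float) vectorCounts[seg] / totalCount);
    }
  }
  return merged;
}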

What gets tricky is ensuring HNSW graph quality and allowing the graph to be built with the quantized vectors. What's the point of quantizing if you still need all the memory to build the index?!

In addition to appending vectors to the existing largest HNSW graph, we need to ensure vector scoring can take advantage of asymmetric quantization. HNSW has multiple scoring steps: one for the initial collection of neighbors, and another for ensuring only diverse neighbors are connected. In order to efficiently use asymmetric quantization, we create a temporary file of all vectors quantized as 4bit query vectors.

So, as a vector is added to the graph, we:

  • Get the already quantized query vector that is stored in the temporary file.
  • Search the graph as normal using the already existing bit vectors.
  • Once we have the neighbors, diversity and reverse-link scoring can be done with the previously int4 quantized values.

After the merge is complete, the temporary file is removed leaving only the bit quantized vectors.

The temporary file stores each query vector as an int4 byte array, which takes dims/2 bytes, plus some floating point error correction values (3 for Euclidean, 4 for dot product) and a short value for the sum of the vector's dimensions.
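As a rough worked example (assuming 4-byte floats and a 2-byte short), a 1024-dimensional query vector stored this way for dot product takes:

$$
\frac{1024}{2} + 4 \times 4 + 2 = 530 \text{ bytes}
$$

versus 1024 × 4 = 4096 bytes for the raw float32 vector, so the temporary file stays small relative to the original data.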

Asymmetric quantization, the interesting bits

I have mentioned asymmetric quantization and how we lay out the queries for graph building. But, how are the vectors actually transformed? How does it work?

The "asymmetric" part is straightforward. We quantize the query vectors to a higher fidelity: doc values are bit quantized, while query vectors are int4 quantized. What gets a bit more interesting is how these quantized vectors are transformed for fast queries.

Taking our example vector from above, we can quantize it to int4 centered around the centroid.

$$
v_{c1}' = v_{1} - c = [-0.09, 0.19, 0.01, -0.10, -0.23, -0.38, -0.05, -0.03] \newline
max_{v_{c1}'} = max(v_{c1}') = 0.19, \quad min_{v_{c1}'} = min(v_{c1}') = -0.38 \newline
Q(x_{s}) = \left\{ (x - min_{v_{c1}'}) \times \frac{15}{max_{v_{c1}'} - min_{v_{c1}'}} : x \in x_{s} \right\} \newline
Q(v_{c1}') = \left\{ (x - (-0.38)) \times \frac{15}{0.19 - (-0.38)} : x \in v_{c1}' \right\} \newline
= \{ (x + 0.38) \times 26.32 : x \in v_{c1}' \} \newline
= [8, 15, 10, 7, 4, 0, 9, 9]
$$
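Here is a minimal Java sketch of that int4 query quantization (the name and rounding behavior are illustrative assumptions; the real implementation also tracks error correction terms):

// Illustrative sketch: quantize a query vector to int4 (values 0-15) around the centroid.
static byte[] quantizeQueryInt4(float[] query, float[] centroid) {
  float[] centered = new float[query.length];
  float min = Float.POSITIVE_INFINITY;
  float max = Float.NEGATIVE_INFINITY;
  for (int dim = 0; dim < query.length; dim++) {
    centered[dim] = query[dim] - centroid[dim];
    min = Math.min(min, centered[dim]);
    max = Math.max(max, centered[dim]);
  }
  float scale = 15f / (max - min);
  byte[] quantized = new byte[query.length];
  for (int dim = 0; dim < query.length; dim++) {
    // Map [min, max] linearly onto the 16 int4 buckets.
    quantized[dim] = (byte) Math.round((centered[dim] - min) * scale);
  }
  return quantized;
}

On the example above this reproduces [8, 15, 10, 7, 4, 0, 9, 9].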

With the quantized vector in hand, this is where the fun begins. So that we can translate the vector comparisons into a bitwise dot product, the bits are shifted around.

It's probably better to just visualize what is happening:

Here, each int4 quantized value has its relative positional bits shifted to a single byte. Note how all the first bits are packed together first, then the second bits, and so on.
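Here is a minimal Java sketch of that transposition for a single group of 8 int4 values (real vectors transpose every group of 8 dimensions the same way; the exact layout Lucene uses is an implementation detail, but this ordering matches the worked example below):

// Illustrative sketch: transpose 8 int4 values into 4 "bit-plane" bytes.
// Byte b collects bit b of every value; value j lands in bit j of each byte.
static byte[] transposeInt4(byte[] int4Values) { // int4Values.length == 8 here
  byte[] bitPlanes = new byte[4];
  for (int b = 0; b < 4; b++) {
    for (int j = 0; j < int4Values.length; j++) {
      if (((int4Values[j] >> b) & 1) == 1) {
        bitPlanes[b] |= 1 << j;
      }
    }
  }
  return bitPlanes;
}

For the example query [8, 15, 10, 7, 4, 0, 9, 9] this produces the four bytes 0b11001010, 0b00001110, 0b00011010, 0b11000111 used below.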

But how does this actually translate to dot product? Remember, dot product is the sum of the component products. For the above example, let's write this fully out:

$$
bin(v_{c1}') \cdot Q(v_{c1}') = [0, 1, 1, 0, 0, 0, 0, 0] \cdot [8, 15, 10, 7, 4, 0, 9, 9] \newline
= 0 \times 8 + 1 \times 15 + 1 \times 10 + 0 \times 7 + 0 \times 4 + 0 \times 0 + 0 \times 9 + 0 \times 9 \newline
= 15 + 10 = 25
$$

We can see that it's simply the summation of the query components where the stored vector bits are 1. And since every number is just bits when expressed as a binary expansion, we can move things around a bit to take advantage of bitwise operations.

The bits that remain set after the bitwise AND (&) are the individual bits of the numbers that contribute to the dot product; in this case, 15 and 10.

$$
\text{Remember our originally stored vector:} \newline
storedVecBits = bin(v_{c1}') = [0, 1, 1, 0, 0, 0, 0, 0] \newline
\text{Packing those bits into a byte gives:} \newline
storedVecBits = 0b00000110 \newline
\text{The query vector, int4 quantized:} \newline
Q(v_{c1}') = [8, 15, 10, 7, 4, 0, 9, 9] \newline
\text{The binary values of each dimension:} \newline
bits(Q(v_{c1}')) = [0b1000, 0b1111, 0b1010, 0b0111, 0b0100, 0b0000, 0b1001, 0b1001] \newline
\text{We shift and align the bits as shown in the visualization above:} \newline
qVecBits = align(bits(Q(v_{c1}'))) = [0b11001010, 0b00001110, 0b00011010, 0b11000111] \newline
qVecBits \, \& \, storedVecBits = \{ bits \, \& \, storedVecBits : bits \in qVecBits \} \newline
= [0b00000010, 0b00000110, 0b00000010, 0b00000110]
$$

Now we can count the bits, shift and sum back together. We can see that all the bits that are left over are the positional bits for 15 and 10.

$$
= (bitCount(0b00000010) << 0) + (bitCount(0b00000110) << 1) + (bitCount(0b00000010) << 2) + (bitCount(0b00000110) << 3) \newline
= (1 << 0) + (2 << 1) + (1 << 2) + (2 << 3) \newline
= 1 + 4 + 4 + 16 = 25
$$

Same answer as summing the dimensions directly.

Here is the same example as simplified Java code:

byte[] bits = new byte[]{6};                                    // stored bit-quantized vector (0b00000110)
byte[] queryBits = new byte[]{(byte) 202, 14, 26, (byte) 199};  // int4 query transposed into bit-planes
int sum = 0;
for (int i = 0; i < 4; i++) {
  // AND against bit-plane i, count the surviving bits, and weight them by 2^i.
  sum += Integer.bitCount(bits[0] & queryBits[i] & 0xFF) << i;
}
// sum == 25

Alright, show me the numbers

We have done extensive testing with BBQ both in Lucene and Elasticsearch directly. Here are some of the results:

Lucene benchmarking

The benchmarking here is done over three datasets: E5-small, CohereV3, and CohereV2. The recall figures quoted are recall@100 with oversampling by [1, 1.5, 2, 3, 4, 5]; oversampling by n means gathering n × 100 candidates with the quantized vectors and rescoring them with the raw float32 vectors to pick the top 100.

E5-small

This is 500k vectors for E5-small built from the quora dataset.

quantization | Index Time | Force Merge Time | Mem Required
bbq | 161.84 | 42.37 | 57.6MB
4 bit | 215.16 | 59.98 | 123.2MB
7 bit | 267.13 | 89.99 | 219.6MB
raw | 249.26 | 77.81 | 793.5MB

It's sort of mind blowing that we get recall of 74% with only a single bit of precision. Since the number of dimensions is small, the BBQ distance calculation isn't that much faster than our optimized int4.

CohereV3

This is 1M 1024 dimensional vectors, using the CohereV3 model.

quantization | Index Time | Force Merge Time | Mem Required
bbq | 338.97 | 342.61 | 208MB
4 bit | 398.71 | 480.78 | 578MB
7 bit | 437.63 | 744.12 | 1094MB
raw | 408.75 | 798.11 | 4162MB

Here, 1-bit quantization with HNSW gets above 90% recall with only 3x oversampling.

CohereV2

This is 1M 768 dimensional vectors, using the CohereV2 model and max inner product similarity.

quantization | Index Time | Force Merge Time | Mem Required
bbq | 395.18 | 411.67 | 175.9MB
4 bit | 463.43 | 573.63 | 439.7MB
7 bit | 500.59 | 820.53 | 833.9MB
raw | 493.44 | 792.04 | 3132.8MB

It's really interesting to see how closely BBQ and int4 stay in lock-step in this benchmark. It's neat that BBQ can get such high recall with inner-product similarity with only 3x oversampling.

Larger scale Elasticsearch benchmarking

As referenced in our larger scale vector search blog, we have a rally track for larger scale vector search benchmarking.

This data set has 138M floating point vectors of 1024 dimensions. Without any quantization, this would require around 535 GB of memory with HNSW. With better-binary-quantization, the estimate drops to around 19GB.
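As a rough back-of-the-envelope check on those numbers (assuming 4-byte floats, ignoring HNSW graph overhead, and taking 3 correction floats per vector):

$$
\text{raw: } 138 \times 10^6 \times 1024 \times 4 \text{ bytes} \approx 565 \text{ GB} \newline
\text{BBQ: } 138 \times 10^6 \times \left( \frac{1024}{8} + 3 \times 4 \right) \text{ bytes} \approx 19 \text{ GB}
$$

which is roughly in line with the figures quoted above.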

For this test, we used a single 64GB node in Elastic cloud with the following track parameters:

{
        "mapping_type": "vectors-only",
        "vector_index_type": "bbq_hnsw",
        "number_of_shards": 2,
        "initial_indexing_bulk_indexing_clients": 12,
        "standalone_search_clients": 8,
        "aggressive_merge_policy": true,
        "search_ops": [[10, 20, 0], [10, 20, 20], [10, 50, 0], [10, 50, 20], [10, 50, 50], [10, 100, 0], [10, 100, 20], [10, 100, 50], [10, 100, 100], [10, 200, 0], [10, 200, 20], [10, 200, 50], [10, 200, 100], [10, 200, 200], [10, 500, 0], [10, 500, 20], [10, 500, 50],[10, 500, 100],[10, 500, 200],[10, 500, 500],[10, 1000, 0], [10, 1000, 20], [10, 1000, 50], [10, 1000, 100], [10, 1000, 200], [10, 1000, 500], [10, 1000, 1000]]
}

Important note: if you want to replicate this, it will take significant time to download all the data and requires over 4TB of disk space. The reason for all the additional disk space is that this dataset also contains text fields, and you need disk space for both the compressed files and their inflated size.

The parameters are as follows:

  • k is the number of neighbors to search for
  • num_candidates is the number of candidates used to explore per shard in HNSW
  • rerank is the number of candidates to rerank: we gather that many values per shard, collect the total rerank size, and then rescore the top k values with the raw float32 vectors.

Indexing took around 12 hours. Instead of showing all the results, here are three interesting ones:

k-num_candidates-rerank | Avg Nodes Visited | % Of Best NDCG | Recall | Single Query Latency | Multi-Client QPS
knn-recall-10-100-50 | 36,079.801 | 90% | 70% | 18ms | 451.596
knn-recall-10-20 | 15,915.211 | 78% | 45% | 9ms | 1,134.649
knn-recall-10-1000-200 | 115,598.117 | 97% | 90% | 42.534ms | 167.806

This shows the importance of balancing recall, oversampling, reranking and latency. Obviously, each needs to be tuned for your specific use case, but considering this was impossible before and we now have 138M vectors on a single node, it's pretty cool.

Conclusion

Thank you for taking a bit of time and reading about Better Binary Quantization. Originally being from Alabama and now living in South Carolina, BBQ already held a special place in my life. Now, I have more reason to love BBQ!

We will release this as a tech preview in 8.16; it is available in serverless right now. To use it, just set your dense_vector field's index_options.type to bbq_hnsw or bbq_flat in Elasticsearch.
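For example, a minimal mapping sketch might look like this (the index name, field name, dimension count, and similarity are placeholders for your own setup):

PUT bbq-example
{
  "mappings": {
    "properties": {
      "embedding": {
        "type": "dense_vector",
        "dims": 1024,
        "index": true,
        "similarity": "dot_product",
        "index_options": {
          "type": "bbq_hnsw"
        }
      }
    }
  }
}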

Elasticsearch is packed with new features to help you build the best search solutions for your use case. Dive into our sample notebooks to learn more, start a free cloud trial, or try Elastic on your local machine now.
