Could we encode postings the way we encode monotonic long doc values? #12477
Comments
Note that Tantivy uses binary search to locate the target docid in the block of docs -- somehow Tantivy uses SIMD to decode (docid-delta encoded) postings into absolute docids first, and then binary searches within that block.
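As a rough illustration of the Tantivy approach described above (hypothetical code, not Tantivy's or Lucene's actual implementation), decoding amounts to a prefix sum over the deltas, after which `advance(target)` within the block becomes a binary search instead of a linear scan:

```java
import java.util.Arrays;

public class BlockSearchSketch {

    // Turn a block of docid deltas into absolute docids via a prefix sum.
    // Written as a scalar loop this is a chained dependency; SIMD decoders
    // compute the same result with vectorized prefix-sum tricks.
    static int[] toAbsolute(int[] deltas, int base) {
        int[] docs = new int[deltas.length];
        int doc = base;
        for (int i = 0; i < deltas.length; i++) {
            doc += deltas[i];
            docs[i] = doc;
        }
        return docs;
    }

    // Once docids are absolute, advancing to the first doc >= target
    // is a binary search within the block.
    static int advance(int[] docs, int target) {
        int idx = Arrays.binarySearch(docs, target);
        if (idx < 0) {
            idx = -idx - 1; // insertion point: first docid >= target
        }
        // MAX_VALUE stands in for a NO_MORE_DOCS sentinel here
        return idx < docs.length ? docs[idx] : Integer.MAX_VALUE;
    }
}
```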
I have attempted to encode/decode the postings block using SIMD instructions. However, I believe it may not be the opportune moment to vectorize it, because we are currently unable to generate scalar code that matches the performance of the existing code. You can find more details in this pull request: #12417
I'm confused by this: don't we already have the scalar code today (our current gen'd FOR implementation that Hotspot autovectorizes well) that we could fall back to? Or is the problem that the Panama APIs don't make it easy for us to determine that we should fall back to our impl? I'll try to catch up on that long discussion. Thanks for the pointer @tang-hi!
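For reference, the kind of scalar code HotSpot autovectorizes well is a loop with no loop-carried dependency, e.g. this hypothetical 4-bit unpacker (illustrative only, not Lucene's actual `ForUtil` code):

```java
// Illustrative only, not Lucene's actual ForUtil: unpack eight 4-bit
// values from each packed int. Every output element depends only on its
// own input word and lane position, so there is no loop-carried
// dependency and HotSpot can widen the loop into SIMD lanes.
public class ScalarUnpackSketch {
    static void unpack4(int[] packed, int[] out) {
        for (int i = 0; i < packed.length; i++) {
            int w = packed[i];
            for (int j = 0; j < 8; j++) {
                out[i * 8 + j] = (w >>> (j * 4)) & 0xF;
            }
        }
    }
}
```

Delta-decoding to absolute docids is the opposite shape: each iteration needs the previous iteration's result, which is exactly what blocks this kind of autovectorization.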
Daniel Lemire's awesome paper "Decoding billions of integers per second through vectorization" should surely be helpful, once we can wrestle Panama into being honest/transparent about the SIMD capabilities of the underlying CPU. |
This is because in the current version, we have implemented some tricks to enable automatic vectorization in the JVM, which makes the existing file format not very friendly for SIMD usage. Therefore, we have two options:
Thanks @tang-hi -- I think this is a great place to leverage Lucene's
Description
Lucene has an efficient (storage and CPU) compressor for monotonic long values: it fits a "best fit" (ish?) linear model to the N monotonic values, and then encodes the precise error signal (positive and negative) of each value against that line. I think we use it for doc values in certain cases? Or maybe only for in-memory data structures?
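A minimal sketch of that scheme (assuming a simple average-delta line and zig-zag encoding of the residuals; hypothetical code, not Lucene's actual monotonic writer) might look like:

```java
// Hypothetical linear-fit encoding for monotonic values: model
// expected[i] = first + i * avgDelta, and store only each value's
// (small, signed) error against that line, zig-zag encoded so positive
// and negative errors both pack into few bits.
public class MonotonicSketch {

    static long[] encode(long[] values) {
        int n = values.length;
        long first = values[0];
        long avgDelta = n == 1 ? 0 : (values[n - 1] - first) / (n - 1);
        long[] errors = new long[n];
        for (int i = 0; i < n; i++) {
            long expected = first + i * avgDelta;
            long err = values[i] - expected;      // may be negative
            errors[i] = (err << 1) ^ (err >> 63); // zig-zag: sign into low bit
        }
        return errors; // (first, avgDelta) would be stored as block metadata
    }

    static long decode(long[] errors, long first, long avgDelta, int i) {
        long z = errors[i];
        long err = (z >>> 1) ^ -(z & 1);          // undo zig-zag
        return first + i * avgDelta + err;        // depends only on i
    }
}
```

The point for SIMD is that `decode` depends only on `i` and per-index data, so all 128 values of a block could be reconstructed in parallel, with no chained prefix sum.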
Lucene's block postings format is different: it encodes the docid delta between each document and the next. Because postings are encoded this way, we have a linear/chained data dependency and must sequentially add up all the docid deltas to recover the true docid of each of the 128 postings in the block.
Could we change postings to instead encode with the linear fit? We'd maybe lose some compression (having to store negative and positive -- `ZInt`), but then decoding could be done concurrently with simple SIMD math, and then skipping might be able to do a binary search within the block?

I know this (efficient block `int[]` encoding for SIMD decode) is a well/heavily studied area in the literature :) I'm sure there is already a good option on Daniel Lemire's blog somewhere!

It's not so simple, though, because we also accumulate the `freq` of each posting so we can know where we are in the positions/payloads block space.
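To make that freq constraint concrete: the offset of posting i into the positions/payloads stream is the running sum of the preceding freqs, so random access within the block would also require the freqs to be stored as (or cheaply reducible to) a prefix sum. A hypothetical sketch:

```java
// Hypothetical illustration of why freq accumulation defeats random
// access within a block: the offset of posting i into the positions
// stream is the sum of all preceding freqs, so jumping straight to
// posting i still needs this running sum even if docids decode
// independently (unless freqs were themselves stored prefix-summed).
public class FreqOffsetSketch {
    static long positionOffset(int[] freqs, int i) {
        long offset = 0;
        for (int j = 0; j < i; j++) {
            offset += freqs[j];
        }
        return offset;
    }
}
```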