
Parallelize Legacy Indexing Service Lookup #122

Closed
hannahhoward opened this issue Feb 12, 2025 · 4 comments · Fixed by #125

@hannahhoward (Member)

What

At present, the legacy "IndexingService" wrapper (https://github.com/storacha/indexing-service/blob/main/pkg/service/legacy/service.go#L49-L117) executes queries sequentially:

  1. Query the normal version of the indexing service and return its results if found.
  2. Query the block index store and return synthesized location claims if found; otherwise return not found.

The problem? Step 1 has to finish before Step 2 starts. If we don't have results through the normal flow, but do have them in the block index store, we waste time waiting on the normal flow.
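
For concreteness, here is a minimal sketch of that sequential flow in Go. The `queryer` interface, `QueryResult` type, and function names are hypothetical stand-ins, not the real signatures in pkg/service/legacy/service.go:

```go
package legacy

import "context"

// Hypothetical stand-ins for the real types in pkg/service/legacy/service.go.
type QueryResult interface {
	Empty() bool // true when the lookup produced no claims
}

type queryer interface {
	Query(ctx context.Context, q any) (QueryResult, error)
}

// sequentialQuery mirrors the current behavior: the block index store is
// only consulted after the primary lookup has fully completed.
func sequentialQuery(ctx context.Context, primary, blockIndex queryer, q any) (QueryResult, error) {
	// Step 1: query the normal indexing service and wait for it to finish.
	res, err := primary.Query(ctx, q)
	if err != nil {
		return nil, err
	}
	if !res.Empty() {
		return res, nil
	}
	// Step 2: only now query the block index store, which synthesizes
	// location claims when it has the block.
	return blockIndex.Query(ctx, q)
}
```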

Proposal

Run the standard indexing service query and the block index store query in parallel to speed things up. The key rule:

  • Prioritize normal indexing service results → if the indexing service has results, use them.
  • Only use block index store results if normal results are not available → no mixing of results.

This keeps the normal service as the primary source but avoids unnecessary delays when it can't find anything.
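
A minimal sketch of the proposed parallel version, reusing the hypothetical `queryer` and `QueryResult` stand-ins from the sequential sketch above (this illustrates the rule, not the implementation that landed in #125):

```go
type result struct {
	res QueryResult
	err error
}

// parallelQuery starts both lookups at once. Primary results always win;
// block index store results are used only when the primary comes back empty.
func parallelQuery(ctx context.Context, primary, blockIndex queryer, q any) (QueryResult, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // always release the derived context

	primaryCh := make(chan result, 1) // buffered so neither goroutine leaks
	legacyCh := make(chan result, 1)

	go func() {
		res, err := primary.Query(ctx, q)
		primaryCh <- result{res, err}
	}()
	go func() {
		res, err := blockIndex.Query(ctx, q)
		legacyCh <- result{res, err}
	}()

	// The normal service stays authoritative: wait for its answer first.
	if pr := <-primaryCh; pr.err == nil && !pr.res.Empty() {
		cancel() // the fallback lookup is no longer needed
		return pr.res, nil
	}

	// No primary results: fall back to the block index store. The two
	// result sets are never mixed.
	lr := <-legacyCh
	if lr.err != nil {
		return nil, lr.err
	}
	return lr.res, nil
}
```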

Why?

Right now, Query forces a strict step-by-step lookup, which slows things down. Running both queries in parallel cuts down on wait time while keeping the existing behavior intact.

This change makes lookups faster and doesn't mess with the logic; it just makes it more efficient.

@Khwahish29

Hey @hannahhoward!
I'd like to take this up :)

@hannahhoward (Member, Author)

hi @Khwahish29, this ticket is very time-dependent, so I've already assigned it to @frrist

let me take a look at what else might be available -- are you a Go developer primarily?

@frrist moved this from Sprint Backlog to In Progress in Storacha Project Planning, Feb 12, 2025
@hannahhoward (Member, Author)

@Khwahish29 a good issue to pick up: storacha/go-w3up#8

frrist pushed a commit that referenced this issue Feb 12, 2025
…kups

Previously, the Query method in IndexingService executed lookups sequentially:
1. Query the normal indexing service.
2. If no results were found, query the block index store.

This forced an unnecessary delay when results were only available in the block index store.

Changes:
- Run the primary (indexing service) and legacy (block index store) queries in parallel.
- If the primary query returns results, the legacy query is immediately canceled.
- Maintain the original behavior: prioritize primary results and only fall back to legacy if needed.

- closes #122
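
As an illustration of the behavior this commit describes (primary results immediately cancel the legacy lookup), here is a quick test against the hypothetical `parallelQuery` sketch above; it is not a test of the actual #125 code:

```go
package legacy

import (
	"context"
	"testing"
	"time"
)

type stubResult bool

func (r stubResult) Empty() bool { return !bool(r) }

// stubQueryer returns res after delay, and signals on canceled if its
// context is cancelled first.
type stubQueryer struct {
	res      QueryResult
	delay    time.Duration
	canceled chan struct{}
}

func (s *stubQueryer) Query(ctx context.Context, q any) (QueryResult, error) {
	select {
	case <-time.After(s.delay):
		return s.res, nil
	case <-ctx.Done():
		close(s.canceled)
		return nil, ctx.Err()
	}
}

func TestPrimaryResultsCancelLegacyLookup(t *testing.T) {
	primary := &stubQueryer{res: stubResult(true)} // answers immediately
	legacy := &stubQueryer{res: stubResult(true), delay: time.Second, canceled: make(chan struct{})}

	res, err := parallelQuery(context.Background(), primary, legacy, nil)
	if err != nil || res.Empty() {
		t.Fatalf("expected primary results, got %v (err: %v)", res, err)
	}

	// The slow legacy query should observe cancellation almost immediately.
	select {
	case <-legacy.canceled:
	case <-time.After(100 * time.Millisecond):
		t.Fatal("legacy query was not canceled")
	}
}
```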
@Khwahish29

> @Khwahish29 a good issue to pick up: storacha/go-w3up#8

Sure thing, @hannahhoward! I can work on storacha/go-w3up#8. Thanks!

@github-project-automation bot moved this from In Progress to Done in Storacha Project Planning, Feb 13, 2025
volmedo added a commit that referenced this issue Feb 13, 2025
Thinking specifically of enabling cancellation of the request by
cancelling the provided context.

`BucketFallbackMapper` is used by the legacy claims logic, and now that #122 makes that logic run in parallel with the query to IPNI (and thus potentially subject to cancellation), I took a look at the context propagation chain and noticed these calls were not using the provided context.
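
As an illustration of the kind of fix this commit describes, here is a minimal Go sketch of propagating the caller's context into an outgoing call. The `BucketFallbackMapper` shape and `fetch` helper here are hypothetical, not the real storacha code:

```go
package legacy

import (
	"context"
	"net/http"
)

// Hypothetical shape; the real BucketFallbackMapper lives in the
// indexing-service repo.
type BucketFallbackMapper struct {
	client *http.Client
	url    string
}

// fetch threads the caller's ctx into the HTTP request, so cancelling the
// overall query (as the parallel lookup from #122 may now do) also aborts
// this in-flight bucket call.
func (m *BucketFallbackMapper) fetch(ctx context.Context) (*http.Response, error) {
	// Before the fix, code like this would build the request without ctx
	// (e.g. via http.NewRequest), so cancellation never reached it.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, m.url, nil)
	if err != nil {
		return nil, err
	}
	return m.client.Do(req)
}
```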