Implement Delete-by-Query operation after Reshard #125519
base: main
Conversation
Pinging @elastic/es-distributed-indexing (Team:Distributed Indexing)
public void deleteByQuery(ShardSplittingQuery query) throws Exception {
    // Delete every document matched by the shard-splitting query (i.e. documents
    // this shard no longer owns after the reshard), then flush and commit so the
    // deletes are written out and made durable.
    indexWriter.deleteDocuments(query);
    indexWriter.flush();
    indexWriter.commit();
}
I think we can probably do this in stateless and avoid the need to publish this API here while we're incubating.
Also, I'd probably move the flush/commit out of the delete operation itself and let something higher up decide when it needs to schedule those.
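A minimal sketch of the separation being suggested, not the PR's actual code: the delete method only stages deletions, and a hypothetical higher-level caller decides when to flush and commit. The class name, the finalizeUnownedDocCleanup method, and the use of plain Lucene Query in place of the internal ShardSplittingQuery are all illustrative assumptions.

import java.io.IOException;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.Query;

class ReshardCleanup {
    private final IndexWriter indexWriter;

    ReshardCleanup(IndexWriter indexWriter) {
        this.indexWriter = indexWriter;
    }

    // Delete-only: stages the deletes against the writer; no flush/commit here.
    void deleteByQuery(Query query) throws IOException {
        indexWriter.deleteDocuments(query);
    }

    // Hypothetical higher-level step that decides when the staged deletes are
    // written out and made durable, e.g. once per reshard step rather than per call.
    void finalizeUnownedDocCleanup(Query query) throws IOException {
        deleteByQuery(query);
        indexWriter.flush();
        indexWriter.commit();
    }
}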
By the way, I don't know if the flush and commit at this level is sufficient to allow us to move the split on this shard to DONE. When we do that, we're also going to drop the search filter for unowned documents, which means the search nodes need to be using the state we've just flushed. I think that implies we should be doing a refresh.
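To make that distinction concrete: a commit makes the deletes durable, but open searchers keep serving the point-in-time view from before the delete; only a refresh (reopening the searcher) makes the deletes visible, which is what would allow the unowned-documents filter to be dropped safely. Below is a hedged sketch of that ordering using plain Lucene primitives rather than Elasticsearch's internal engine/refresh path; the class and method names are illustrative only.

import java.io.IOException;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SearcherManager;

class ReshardDeleteVisibility {
    private final IndexWriter indexWriter;
    private final SearcherManager searcherManager;

    ReshardDeleteVisibility(IndexWriter indexWriter, SearcherManager searcherManager) {
        this.indexWriter = indexWriter;
        this.searcherManager = searcherManager;
    }

    void deleteUnownedAndExpose(Query unownedDocsQuery) throws IOException {
        indexWriter.deleteDocuments(unownedDocsQuery); // stage the deletes
        indexWriter.commit();                          // durable, but not yet visible to searches
        // Until a refresh happens, existing searchers still see the unowned docs,
        // so dropping the search-time filter now could surface them again.
        searcherManager.maybeRefreshBlocking();        // new searchers now see the deletes
    }
}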