analyzer/block: Hardcode pre-Eden stale accounts #654
Merged
This PR adds lists of accounts that had stale native runtime balances soon(ish) after the start of Eden. Those stale balances are considered unavoidable; see the in-code comments for the reasoning and more details.
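As a rough sketch of the approach only (the package, names, and addresses below are illustrative placeholders, not the actual code added here), the hardcoded lists amount to a set of known-stale addresses the analyzer can consult when a balance discrepancy shows up:

```go
// Hypothetical sketch of a hardcoded stale-accounts list; names, structure,
// and addresses are illustrative, not the code added by this PR.
package block

// preEdenStaleAccounts holds oasis1 addresses whose native runtime balances
// were already stale around the start of Eden. Addresses are placeholders.
var preEdenStaleAccounts = map[string]struct{}{
	"oasis1qexampleaddress0000000000000000000000000001": {},
	"oasis1qexampleaddress0000000000000000000000000002": {},
}

// isPreEdenStale reports whether a balance discrepancy for addr is one of
// the known, unavoidable pre-Eden cases and can therefore be ignored.
func isPreEdenStale(addr string) bool {
	_, ok := preEdenStaleAccounts[addr]
	return ok
}
```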
The PR also fixes statecheck so it can run against testnet.
This PR is a continuation of #617; I just created a new PR because I'll be pushing it to completion, and it's easier to do that if I own the PR rather than ptrus.
Testing:
Tested by running with NEXUS_FORCE_MARK_STALE_ACCOUNTS=1. Even emerald testnet with 120k stale accounts was enqueued without problems in <10s.
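For context on that test switch, a minimal sketch of how an env-var gate like this might be read (only the variable name comes from this PR; the function and where it would be called are assumptions):

```go
package block

import "os"

// forceMarkStaleAccounts reports whether the analyzer should enqueue the
// hardcoded stale-accounts lists unconditionally. Only the env var name is
// taken from this PR; the gating function itself is an illustrative sketch.
func forceMarkStaleAccounts() bool {
	return os.Getenv("NEXUS_FORCE_MARK_STALE_ACCOUNTS") == "1"
}
```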
Stats
Output of statechecks:
Number of stale accounts extracted from logs, before and after ignoring the ones with balance 0. This PR uses the "after" version.
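The zero-balance filtering could look roughly like the sketch below; the staleAccount type and function are assumptions for illustration, not the PR's actual code:

```go
package block

import "math/big"

// staleAccount pairs an address with the stale balance extracted from the
// statecheck logs. The type is illustrative, not the PR's actual structure.
type staleAccount struct {
	Address string
	Balance *big.Int
}

// dropZeroBalances keeps only accounts whose stale balance is non-zero,
// matching the "after" counts used by this PR.
func dropZeroBalances(accounts []staleAccount) []staleAccount {
	kept := accounts[:0]
	for _, a := range accounts {
		if a.Balance != nil && a.Balance.Sign() != 0 {
			kept = append(kept, a)
		}
	}
	return kept
}
```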
Methodology of collecting addresses
This is for my records/reproducibility more so than anything else. I obtained the list of stale accounts by manually running the statecheck (in k8s). I ran commands like
manual_cronjob mainnet emerald
Then I fetched the logs with
fetch_logs mainnet emerald oneoff
For testnet emerald, the logs were too large and were truncated by k8s, so I had to fetch them from GCP instead of using kubectl logs:
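The exact fetch command isn't reproduced above; purely as a hedged illustration, container logs for a statecheck pod could also be pulled via the Cloud Logging API from Go (the project ID and filter are placeholders, and the gcloud CLI is an equally valid route):

```go
// Illustrative sketch: read k8s container logs from GCP Cloud Logging.
// Project ID and filter are placeholders, not the values used for this PR.
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/logging/logadmin"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()

	client, err := logadmin.NewClient(ctx, "my-gcp-project")
	if err != nil {
		log.Fatalf("creating logadmin client: %v", err)
	}
	defer client.Close()

	// Match container logs from pods whose name contains "statecheck".
	filter := `resource.type="k8s_container" AND resource.labels.pod_name:"statecheck"`
	it := client.Entries(ctx, logadmin.Filter(filter))
	for {
		entry, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatalf("reading log entries: %v", err)
		}
		fmt.Println(entry.Payload)
	}
}
```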