Atomic VM Refactor #885
base: master
Conversation
Signed-off-by: Ceyhun Onur <[email protected]>
Co-authored-by: Quentin McGaw <[email protected]> Signed-off-by: Ceyhun Onur <[email protected]>
…move-atomic-gossip
The e2e test failure seems unrelated: #928. I'm marking this ready for review.
Some readability questions, but non-blocking
    return nil
}

func (utx *UnsignedExportTx) Visit(v Visitor) error { return v.ExportTx(utx) }
This name is a little strange to me - why call it `Visit` and `Visitor`?
I believe this is the standard naming for the "Visitor" pattern (we also use the same pattern in avalanchego). While we don't have many other operations for the visitor to support yet, I think it fits well here to keep things as separate as possible.
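For context, a minimal sketch of the pattern; the `UnsignedImportTx` stub and the `verifier` visitor below are illustrative stand-ins, not the actual implementation:

```go
// Illustrative stand-ins for the real unsigned atomic txs.
type UnsignedImportTx struct{}
type UnsignedExportTx struct{}

// Visitor declares one method per unsigned atomic tx type.
type Visitor interface {
	ImportTx(*UnsignedImportTx) error
	ExportTx(*UnsignedExportTx) error
}

// Each tx type dispatches to the Visitor method matching its own type,
// mirroring the Visit method in the snippet above.
func (utx *UnsignedImportTx) Visit(v Visitor) error { return v.ImportTx(utx) }
func (utx *UnsignedExportTx) Visit(v Visitor) error { return v.ExportTx(utx) }

// A concrete visitor implements one operation (here a no-op verifier) across
// all tx types without those types knowing about the operation.
type verifier struct{}

func (verifier) ImportTx(*UnsignedImportTx) error { return nil }
func (verifier) ExportTx(*UnsignedExportTx) error { return nil }
```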
"github.com/ava-labs/libevm/trie" | ||
) | ||
|
||
const atomicTrieKeyLen = wrappers.LongLen + common.HashLength | ||
|
||
// atomicTrieIterator is an implementation of types.AtomicTrieIterator that serves |
This isn't a type anymore; we should probably delete the comment.
plugin/evm/atomic/vm/vm.go (outdated)
case errors.Is(err, vmerrors.ErrMakeNewBlockFailed):
    log.Debug("discarding txs due to error making new block", "err", err)
    vm.mempool.DiscardCurrentTxs()
case err != nil:
This seems confusing - if there is an unidentified error, then we issue the atomic txs?
Good catch. I also realized we were not using `ErrMakeNewBlockFailed`. I refactored how we handle atomic tx extraction, and it should be fixed now.
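For illustration, the kind of shape this error handling converges on, assuming the mempool exposes `CancelCurrentTxs`/`IssueCurrentTxs` alongside the `DiscardCurrentTxs` call shown above (a sketch, not the final code in this PR):

```go
switch {
case errors.Is(err, vmerrors.ErrMakeNewBlockFailed):
	// Block building itself failed: drop the txs from the current working set.
	log.Debug("discarding txs due to error making new block", "err", err)
	vm.mempool.DiscardCurrentTxs()
case err != nil:
	// Any other error: return the txs to the pending pool instead of
	// treating them as issued.
	vm.mempool.CancelCurrentTxs()
default:
	// Only when the block was built successfully are the atomic txs
	// considered issued.
	vm.mempool.IssueCurrentTxs()
}
```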
    acceptImpl func(SyncSummary) (block.StateSyncMode, error)

type Syncable interface {
    block.StateSummary
    GetBlockHash() common.Hash
The other getters don't define it like this; can we just call it `Hash()` or `BlockHash()`?
They conflict with field names, and I think we can be a little more explicit here. `Syncable` is a bit generic and does not immediately tell you anything about the block.
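Concretely, Go does not allow a field and a method with the same name on one type, which is the conflict being referred to. A minimal illustration (the type and field names here are hypothetical):

```go
package atomicsync

import "github.com/ava-labs/libevm/common"

// syncSummary stands in for a syncable summary that stores the hash in a field.
type syncSummary struct {
	BlockHash common.Hash
}

// A method named BlockHash() could not coexist with the BlockHash field:
//
//	func (s syncSummary) BlockHash() common.Hash { return s.BlockHash } // compile error
//
// A Get-prefixed getter sidesteps the clash.
func (s syncSummary) GetBlockHash() common.Hash { return s.BlockHash }
```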
plugin/evm/vmerrors/errors.go (outdated)
Didn't we just get rid of this file in libevm 2.5?
This is required as a shared package between the atomic pkg and the evm pkg (or we can remove it if we let the atomic pkg import the evm pkg).
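For reference, such a shared package only needs to hold the sentinels both sides match on. A minimal sketch, where the error message text is illustrative:

```go
// Package vmerrors holds error sentinels shared by the evm and atomic packages,
// so the atomic package can match on them with errors.Is without importing
// plugin/evm (and without creating an import cycle).
package vmerrors

import "errors"

var ErrMakeNewBlockFailed = errors.New("failed to make new block")
```

Either package can then check `errors.Is(err, vmerrors.ErrMakeNewBlockFailed)` without depending on the other.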
plugin/evm/atomic/vm/vm.go (outdated)
networkCodec, err := message.NewCodec(atomicsync.AtomicSyncSummary{})
if err != nil {
    return fmt.Errorf("failed to create codec manager: %w", err)
}

// Create the atomic extension structs
// some of them need to be initialized after the inner VM is initialized
blockExtension := newBlockExtension(extDataHashes, vm)
syncExtender := &atomicsync.AtomicSyncExtender{}
syncProvider := &atomicsync.AtomicSummaryProvider{}
// Create and pass the leaf handler to the atomic extension
// it will be initialized after the inner VM is initialized
leafHandler := NewAtomicLeafHandler()
atomicLeafTypeConfig := &extension.LeafRequestConfig{
    LeafType:   atomicsync.AtomicTrieNode,
    MetricName: "sync_atomic_trie_leaves",
    Handler:    leafHandler,
}
vm.mempool = &txpool.Mempool{}

extensionConfig := &extension.Config{
    NetworkCodec:               networkCodec,
    ConsensusCallbacks:         vm.createConsensusCallbacks(),
    BlockExtension:             blockExtension,
    SyncableParser:             atomicsync.NewAtomicSyncSummaryParser(),
    SyncExtender:               syncExtender,
    SyncSummaryProvider:        syncProvider,
    ExtraSyncLeafHandlerConfig: atomicLeafTypeConfig,
    ExtraMempool:               vm.mempool,
    Clock:                      &vm.clock,
}
if err := innerVM.SetExtensionConfig(extensionConfig); err != nil {
    return fmt.Errorf("failed to set extension config: %w", err)
}

// Initialize inner vm with the provided parameters
if err := innerVM.Initialize(
    ctx,
    chainCtx,
    db,
    genesisBytes,
    upgradeBytes,
    configBytes,
    toEngine,
    fxs,
    appSender,
); err != nil {
    return fmt.Errorf("failed to initialize inner VM: %w", err)
}

// Now we can initialize the mempool and so
err = vm.mempool.Initialize(chainCtx, innerVM.MetricRegistry(), defaultMempoolSize, vm.verifyTxAtTip)
if err != nil {
    return fmt.Errorf("failed to initialize mempool: %w", err)
}

// initialize bonus blocks on mainnet
var (
    bonusBlockHeights map[uint64]ids.ID
)
if vm.ctx.NetworkID == constants.MainnetID {
    bonusBlockHeights, err = readMainnetBonusBlocks()
    if err != nil {
        return fmt.Errorf("failed to read mainnet bonus blocks: %w", err)
    }
}

// initialize atomic repository
lastAcceptedHash, lastAcceptedHeight, err := innerVM.ReadLastAccepted()
if err != nil {
    return fmt.Errorf("failed to read last accepted block: %w", err)
}
vm.atomicTxRepository, err = atomicstate.NewAtomicTxRepository(innerVM.VersionDB(), atomic.Codec, lastAcceptedHeight)
if err != nil {
    return fmt.Errorf("failed to create atomic repository: %w", err)
}
vm.atomicBackend, err = atomicstate.NewAtomicBackend(
    vm.ctx.SharedMemory, bonusBlockHeights,
    vm.atomicTxRepository, lastAcceptedHeight, lastAcceptedHash,
    innerVM.Config().CommitInterval,
)
if err != nil {
    return fmt.Errorf("failed to create atomic backend: %w", err)
}

// Atomic backend is available now, we can initialize structs that depend on it
syncProvider.Initialize(vm.atomicBackend.AtomicTrie())
syncExtender.Initialize(vm.atomicBackend, vm.atomicBackend.AtomicTrie(), innerVM.Config().StateSyncRequestSize)
leafHandler.Initialize(vm.atomicBackend.AtomicTrie().TrieDB(), atomicstate.AtomicTrieKeyLength, networkCodec)
vm.secpCache = secp256k1.RecoverCache{
    LRU: cache.LRU[ids.ID, *secp256k1.PublicKey]{
        Size: secpCacheSize,
    },
}

// so [vm.baseCodec] is a dummy codec use to fulfill the secp256k1fx VM
// interface. The fx will register all of its types, which can be safely
// ignored by the VM's codec.
vm.baseCodec = linearcodec.NewDefault()
return vm.fx.Initialize(vm)
The suggestion replaces the explicit `innerVM` calls in the snippet above with the promoted methods on `vm` (only `Initialize` still goes through `vm.InnerVM` explicitly):

vm.ctx = chainCtx

var extDataHashes map[common.Hash]common.Hash
// Set the chain config for mainnet/fuji chain IDs
switch chainCtx.NetworkID {
case constants.MainnetID:
    extDataHashes = mainnetExtDataHashes
case constants.FujiID:
    extDataHashes = fujiExtDataHashes
}
// Free the memory of the extDataHash map
fujiExtDataHashes = nil
mainnetExtDataHashes = nil

networkCodec, err := message.NewCodec(atomicsync.AtomicSyncSummary{})
if err != nil {
    return fmt.Errorf("failed to create codec manager: %w", err)
}

// Create the atomic extension structs
// some of them need to be initialized after the inner VM is initialized
blockExtension := newBlockExtension(extDataHashes, vm)
syncExtender := &atomicsync.AtomicSyncExtender{}
syncProvider := &atomicsync.AtomicSummaryProvider{}
// Create and pass the leaf handler to the atomic extension
// it will be initialized after the inner VM is initialized
leafHandler := NewAtomicLeafHandler()
atomicLeafTypeConfig := &extension.LeafRequestConfig{
    LeafType:   atomicsync.AtomicTrieNode,
    MetricName: "sync_atomic_trie_leaves",
    Handler:    leafHandler,
}
vm.mempool = &txpool.Mempool{}

extensionConfig := &extension.Config{
    NetworkCodec:               networkCodec,
    ConsensusCallbacks:         vm.createConsensusCallbacks(),
    BlockExtension:             blockExtension,
    SyncableParser:             atomicsync.NewAtomicSyncSummaryParser(),
    SyncExtender:               syncExtender,
    SyncSummaryProvider:        syncProvider,
    ExtraSyncLeafHandlerConfig: atomicLeafTypeConfig,
    ExtraMempool:               vm.mempool,
    Clock:                      &vm.clock,
}
if err := vm.SetExtensionConfig(extensionConfig); err != nil {
    return fmt.Errorf("failed to set extension config: %w", err)
}

// Initialize inner vm with the provided parameters
if err := vm.InnerVM.Initialize(
    ctx,
    chainCtx,
    db,
    genesisBytes,
    upgradeBytes,
    configBytes,
    toEngine,
    fxs,
    appSender,
); err != nil {
    return fmt.Errorf("failed to initialize inner VM: %w", err)
}

// Now we can initialize the mempool and so
err = vm.mempool.Initialize(chainCtx, vm.MetricRegistry(), defaultMempoolSize, vm.verifyTxAtTip)
if err != nil {
    return fmt.Errorf("failed to initialize mempool: %w", err)
}

// initialize bonus blocks on mainnet
var (
    bonusBlockHeights map[uint64]ids.ID
)
if vm.ctx.NetworkID == constants.MainnetID {
    bonusBlockHeights, err = readMainnetBonusBlocks()
    if err != nil {
        return fmt.Errorf("failed to read mainnet bonus blocks: %w", err)
    }
}

// initialize atomic repository
lastAcceptedHash, lastAcceptedHeight, err := vm.ReadLastAccepted()
if err != nil {
    return fmt.Errorf("failed to read last accepted block: %w", err)
}
vm.atomicTxRepository, err = atomicstate.NewAtomicTxRepository(vm.VersionDB(), atomic.Codec, lastAcceptedHeight)
if err != nil {
    return fmt.Errorf("failed to create atomic repository: %w", err)
}
vm.atomicBackend, err = atomicstate.NewAtomicBackend(
    vm.ctx.SharedMemory, bonusBlockHeights,
    vm.atomicTxRepository, lastAcceptedHeight, lastAcceptedHash,
    vm.Config().CommitInterval,
)
if err != nil {
    return fmt.Errorf("failed to create atomic backend: %w", err)
}

// Atomic backend is available now, we can initialize structs that depend on it
syncProvider.Initialize(vm.atomicBackend.AtomicTrie())
syncExtender.Initialize(vm.atomicBackend, vm.atomicBackend.AtomicTrie(), vm.Config().StateSyncRequestSize)
leafHandler.Initialize(vm.atomicBackend.AtomicTrie().TrieDB(), atomicstate.AtomicTrieKeyLength, networkCodec)
vm.secpCache = secp256k1.RecoverCache{
    LRU: cache.LRU[ids.ID, *secp256k1.PublicKey]{
        Size: secpCacheSize,
    },
}

// so [vm.baseCodec] is a dummy codec use to fulfill the secp256k1fx VM
// interface. The fx will register all of its types, which can be safely
// ignored by the VM's codec.
vm.baseCodec = linearcodec.NewDefault()
return vm.fx.Initialize(vm)
I don't think we need to explicitly call the `innerVM` methods, and it could be more error-prone if we override/extend them for the atomic VM in the future. This is also the case for other methods below that use `innerVM`, which I think we should clean up.
I agree - I think this is more intuitive to read as well.
I changed them to implicit calls, but we should still be careful that we don't introduce any unwanted recursions.
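A self-contained sketch of the method promotion and the recursion pitfall being referred to; the names are illustrative, not the PR's actual types:

```go
package main

import "fmt"

// InnerVM stands in for the wrapped evm VM.
type InnerVM struct{}

func (InnerVM) Initialize() error { fmt.Println("inner init"); return nil }
func (InnerVM) Config() string    { return "inner config" }

// VM wraps the inner VM; embedding promotes InnerVM's methods onto VM.
type VM struct {
	InnerVM
}

// Initialize shadows the promoted method. Calling vm.Initialize() inside it
// would call this method again and recurse forever, so the embedded
// implementation has to be named explicitly.
func (vm *VM) Initialize() error {
	// ... atomic-specific setup would go here ...
	return vm.InnerVM.Initialize()
}

func main() {
	vm := &VM{}
	_ = vm.Initialize()      // wrapper setup, then inner init
	fmt.Println(vm.Config()) // promoted method, no override needed
}
```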
Co-authored-by: Austin Larson <[email protected]> Signed-off-by: Ceyhun Onur <[email protected]>
… into libevm-atomic-refactor-2
)

type VM struct {
    // TODO: decide if we want to directly import the evm package and VM struct
We can still do this, but I cannot decide between keeping the evm pkg as separate as possible, so that it's clear what is needed to wrap the VM (InnerVM/evm.VM), or keeping it cleaner by removing any unnecessary interfaces and directly importing plugin/evm.
Why this should be merged
This PR moves the atomic logic into the atomic pkg and adds an atomic VM that wraps the evm VM.
AvalancheGo PR: ava-labs/avalanchego#3702
How this works
Defines InnerVM/ExtensibleVM interfaces to extend and wrap plugin/evm with atomic logic and capabilities, and implements InnerVM/ExtensibleVM in evm.VM.
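Roughly, the InnerVM side can be pictured as the snowman VM lifecycle plus the accessors the atomic VM calls during initialization. The sketch below infers the method set from the Initialize snippet in this thread; the return types are illustrative guesses rather than the actual definitions, and the ExtensibleVM side is not shown:

```go
// InnerVM is (approximately) what the atomic VM requires from plugin/evm's VM.
type InnerVM interface {
	block.ChainVM // the regular AvalancheGo chain VM lifecycle (Initialize, BuildBlock, ...)

	// Hooks used while the atomic VM wires itself up:
	SetExtensionConfig(*extension.Config) error
	ReadLastAccepted() (lastAcceptedHash common.Hash, lastAcceptedHeight uint64, err error)
	VersionDB() *versiondb.Database
	MetricRegistry() *prometheus.Registry
	Config() Config // exposes CommitInterval, StateSyncRequestSize, ...
}
```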
How this was tested
Need to be documented?
No
Need to update RELEASES.md?
No