Atomic VM Refactor #885

Open
wants to merge 151 commits into master

Conversation

@ceyonur (Collaborator) commented Mar 24, 2025

Why this should be merged

This PR moves the atomic logic into the atomic pkg and adds an atomic VM that wraps evm/vm.

AvalancheGo PR: ava-labs/avalanchego#3702

How this works

Defines InnerVM/ExtensibleVM interfaces to extend and wrap plugin/vm with atomic logic and capabilities, and implements InnerVM/ExtensibleVM in evm.VM.
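
As a rough sketch of that shape (the interface and struct definitions below are illustrative assumptions, not the exact ones in this PR):

package sketch

import "context"

// InnerVM is the subset of the inner evm VM behavior the wrapper needs.
type InnerVM interface {
    Initialize(ctx context.Context /* plus chainCtx, db, genesis bytes, ... */) error
    Shutdown(ctx context.Context) error
}

// ExtensibleVM additionally lets a wrapper inject extensions before init.
type ExtensibleVM interface {
    InnerVM
    SetExtensionConfig(cfg *ExtensionConfig) error
}

// ExtensionConfig stands in for the PR's extension.Config.
type ExtensionConfig struct{}

// AtomicVM wraps the inner EVM and layers the atomic logic on top.
type AtomicVM struct {
    ExtensibleVM // promoted methods forward to the inner VM
}

func (vm *AtomicVM) Initialize(ctx context.Context) error {
    // Register the atomic extensions, then delegate to the inner VM.
    if err := vm.SetExtensionConfig(&ExtensionConfig{}); err != nil {
        return err
    }
    return vm.ExtensibleVM.Initialize(ctx)
}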

How this was tested

  • Extended unit tests
  • Moved existing tests to the atomic pkg
  • e2e tests should cover the changes
  • Bootstrapped Fuji

Need to be documented?

No

Need to update RELEASES.md?

No

ceyonur and others added 30 commits December 11, 2024 16:57
Co-authored-by: Quentin McGaw <[email protected]>
Signed-off-by: Ceyhun Onur <[email protected]>
@ceyonur ceyonur mentioned this pull request Mar 27, 2025
@ceyonur ceyonur changed the base branch from libevm to master April 18, 2025 16:18
@ceyonur ceyonur changed the title from Libevm atomic refactor 2 to Atomic VM Refactor Apr 19, 2025
@ceyonur (Collaborator, Author) commented Apr 22, 2025

The e2e test seems unrelated: #928

I'm marking this r4r (ready for review).

@ceyonur ceyonur marked this pull request as ready for review April 22, 2025 14:39
@ceyonur ceyonur requested a review from a team as a code owner April 22, 2025 14:39
@alarso16 (Contributor) left a comment

Some readability questions, but non-blocking


return nil
}
func (utx *UnsignedExportTx) Visit(v Visitor) error { return v.ExportTx(utx) }
Contributor

This name is a little strange to me - why call it Visit and Visitor?

Collaborator Author

I believe this is the standard naming for the "Visitor" pattern (we also use the same pattern in avalanchego). While we don't have many other operations that would use the visitor pattern yet, I think it fits well here to keep things as separate as possible.
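
For context, a minimal sketch of the double dispatch this naming refers to; UnsignedImportTx and the verifier below are hypothetical, only UnsignedExportTx/ExportTx appear in the snippet above:

package sketch

// Visitor has one method per atomic tx type; new operations are added
// as Visitor implementations rather than as methods on every tx type.
type Visitor interface {
    ImportTx(*UnsignedImportTx) error
    ExportTx(*UnsignedExportTx) error
}

type UnsignedImportTx struct{}
type UnsignedExportTx struct{}

// Each tx dispatches to the matching Visitor method (double dispatch).
func (utx *UnsignedImportTx) Visit(v Visitor) error { return v.ImportTx(utx) }
func (utx *UnsignedExportTx) Visit(v Visitor) error { return v.ExportTx(utx) }

// Example operation: a semantic verifier implemented as a Visitor.
type verifier struct{}

func (verifier) ImportTx(*UnsignedImportTx) error { return nil } // verify import here
func (verifier) ExportTx(*UnsignedExportTx) error { return nil } // verify export here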

"github.com/ava-labs/libevm/trie"
)

const atomicTrieKeyLen = wrappers.LongLen + common.HashLength

// atomicTrieIterator is an implementation of types.AtomicTrieIterator that serves
Contributor

This isn't a type anymore; we should probably delete the comment.
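
As an aside on the const above: atomicTrieKeyLen suggests keys are an 8-byte value followed by a 32-byte hash. A hypothetical sketch of building such a key (the helper name and exact layout are assumptions):

package sketch

import (
    "encoding/binary"

    "github.com/ava-labs/libevm/common"
)

// atomicTrieKey packs a block height (8 bytes, big-endian) followed by a
// 32-byte hash, matching atomicTrieKeyLen = wrappers.LongLen + common.HashLength.
func atomicTrieKey(height uint64, id common.Hash) []byte {
    key := make([]byte, 8+common.HashLength)
    binary.BigEndian.PutUint64(key[:8], height)
    copy(key[8:], id[:])
    return key
}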

case errors.Is(err, vmerrors.ErrMakeNewBlockFailed):
log.Debug("discarding txs due to error making new block", "err", err)
vm.mempool.DiscardCurrentTxs()
case err != nil:
Contributor

This seems confusing - if there is an unidentified error, then we issue the atomic txs?

Collaborator Author

Good catch. I also realized we were not using ErrMakeNewBlockFailed. I refactored how we handle atomic tx extraction, and it should be fixed now.
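
A hedged sketch of the intended control flow after that fix, with standalone stand-ins for vmerrors.ErrMakeNewBlockFailed and the mempool (the actual refactor may differ):

package sketch

import (
    "errors"
    "log"
)

// errMakeNewBlockFailed stands in for vmerrors.ErrMakeNewBlockFailed.
var errMakeNewBlockFailed = errors.New("failed to make new block")

type mempool struct{}

func (*mempool) DiscardCurrentTxs() {}

// handleBuildError shows the ordering discussed above: the known error
// discards the pending txs, and any other non-nil error also aborts
// instead of falling through to issuing the atomic txs.
func handleBuildError(mp *mempool, err error) error {
    switch {
    case errors.Is(err, errMakeNewBlockFailed):
        log.Printf("discarding txs due to error making new block: %v", err)
        mp.DiscardCurrentTxs()
        return err
    case err != nil:
        return err // unidentified error: do not issue the atomic txs
    }
    return nil // success: safe to issue the atomic txs
}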

acceptImpl func(SyncSummary) (block.StateSyncMode, error)
type Syncable interface {
block.StateSummary
GetBlockHash() common.Hash
Contributor

The other getters don't define it like this, can we just call it Hash() or BlockHash()?

Collaborator Author

They conflict with field names, and I think we can be a little more explicit here. Syncable is a bit generic and does not immediately tell you anything about Block.
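
A small illustration of that conflict (the summary type is hypothetical): Go rejects a type that has a field and a method with the same name, so a struct with a Hash field cannot also implement Hash().

package sketch

import "github.com/ava-labs/libevm/common"

type summary struct {
    Hash common.Hash // field named Hash
}

// func (s summary) Hash() common.Hash { return s.Hash } // compile error:
// field and method with the same name Hash

// GetBlockHash sidesteps the clash and states explicitly that this is a
// block hash, which the generic name Syncable does not convey.
func (s summary) GetBlockHash() common.Hash { return s.Hash }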

Contributor

Didn't we just get rid of this file in libevm 2.5?

Collaborator Author

This is required between the atomic pkg and the evm pkg (or we can remove it if we let the atomic pkg import the evm pkg).
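
A minimal sketch of that dependency-breaking pattern, with illustrative names: the atomic pkg declares the narrow interface it needs, the concrete evm VM satisfies it implicitly, and the atomic pkg never imports the evm pkg.

package atomic

// ChainVM is a narrow, consumer-side view of the evm VM. Go interfaces
// are satisfied implicitly, so this avoids the direct import.
type ChainVM interface {
    ReadLastAccepted() (lastHash [32]byte, lastHeight uint64, err error)
}

// processAtomicOps is a stand-in consumer showing the interface in use.
func processAtomicOps(vm ChainVM) error {
    _, _, err := vm.ReadLastAccepted()
    return err
}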

Comment on lines 122 to 237
networkCodec, err := message.NewCodec(atomicsync.AtomicSyncSummary{})
if err != nil {
return fmt.Errorf("failed to create codec manager: %w", err)
}

// Create the atomic extension structs
// some of them need to be initialized after the inner VM is initialized
blockExtension := newBlockExtension(extDataHashes, vm)
syncExtender := &atomicsync.AtomicSyncExtender{}
syncProvider := &atomicsync.AtomicSummaryProvider{}
// Create and pass the leaf handler to the atomic extension
// it will be initialized after the inner VM is initialized
leafHandler := NewAtomicLeafHandler()
atomicLeafTypeConfig := &extension.LeafRequestConfig{
LeafType: atomicsync.AtomicTrieNode,
MetricName: "sync_atomic_trie_leaves",
Handler: leafHandler,
}
vm.mempool = &txpool.Mempool{}

extensionConfig := &extension.Config{
NetworkCodec: networkCodec,
ConsensusCallbacks: vm.createConsensusCallbacks(),
BlockExtension: blockExtension,
SyncableParser: atomicsync.NewAtomicSyncSummaryParser(),
SyncExtender: syncExtender,
SyncSummaryProvider: syncProvider,
ExtraSyncLeafHandlerConfig: atomicLeafTypeConfig,
ExtraMempool: vm.mempool,
Clock: &vm.clock,
}
if err := innerVM.SetExtensionConfig(extensionConfig); err != nil {
return fmt.Errorf("failed to set extension config: %w", err)
}

// Initialize inner vm with the provided parameters
if err := innerVM.Initialize(
ctx,
chainCtx,
db,
genesisBytes,
upgradeBytes,
configBytes,
toEngine,
fxs,
appSender,
); err != nil {
return fmt.Errorf("failed to initialize inner VM: %w", err)
}

// Now that the inner VM is initialized, we can initialize the mempool
err = vm.mempool.Initialize(chainCtx, innerVM.MetricRegistry(), defaultMempoolSize, vm.verifyTxAtTip)
if err != nil {
return fmt.Errorf("failed to initialize mempool: %w", err)
}

// initialize bonus blocks on mainnet
var bonusBlockHeights map[uint64]ids.ID
if vm.ctx.NetworkID == constants.MainnetID {
bonusBlockHeights, err = readMainnetBonusBlocks()
if err != nil {
return fmt.Errorf("failed to read mainnet bonus blocks: %w", err)
}
}

// initialize atomic repository
lastAcceptedHash, lastAcceptedHeight, err := innerVM.ReadLastAccepted()
if err != nil {
return fmt.Errorf("failed to read last accepted block: %w", err)
}
vm.atomicTxRepository, err = atomicstate.NewAtomicTxRepository(innerVM.VersionDB(), atomic.Codec, lastAcceptedHeight)
if err != nil {
return fmt.Errorf("failed to create atomic repository: %w", err)
}
vm.atomicBackend, err = atomicstate.NewAtomicBackend(
vm.ctx.SharedMemory, bonusBlockHeights,
vm.atomicTxRepository, lastAcceptedHeight, lastAcceptedHash,
innerVM.Config().CommitInterval,
)
if err != nil {
return fmt.Errorf("failed to create atomic backend: %w", err)
}

// Atomic backend is available now, we can initialize structs that depend on it
syncProvider.Initialize(vm.atomicBackend.AtomicTrie())
syncExtender.Initialize(vm.atomicBackend, vm.atomicBackend.AtomicTrie(), innerVM.Config().StateSyncRequestSize)
leafHandler.Initialize(vm.atomicBackend.AtomicTrie().TrieDB(), atomicstate.AtomicTrieKeyLength, networkCodec)
vm.secpCache = secp256k1.RecoverCache{
LRU: cache.LRU[ids.ID, *secp256k1.PublicKey]{
Size: secpCacheSize,
},
}

// [vm.baseCodec] is a dummy codec used to fulfill the secp256k1fx VM
// interface. The fx will register all of its types, which can be safely
// ignored by the VM's codec.
vm.baseCodec = linearcodec.NewDefault()
return vm.fx.Initialize(vm)

Suggested change (drop the innerVM := vm.InnerVM alias and call the promoted methods directly on vm; Initialize stays explicit as vm.InnerVM.Initialize):
vm.ctx = chainCtx
var extDataHashes map[common.Hash]common.Hash
// Set the chain config for mainnet/fuji chain IDs
switch chainCtx.NetworkID {
case constants.MainnetID:
extDataHashes = mainnetExtDataHashes
case constants.FujiID:
extDataHashes = fujiExtDataHashes
}
// Free the memory of the extDataHash map
fujiExtDataHashes = nil
mainnetExtDataHashes = nil
networkCodec, err := message.NewCodec(atomicsync.AtomicSyncSummary{})
if err != nil {
return fmt.Errorf("failed to create codec manager: %w", err)
}
// Create the atomic extension structs
// some of them need to be initialized after the inner VM is initialized
blockExtension := newBlockExtension(extDataHashes, vm)
syncExtender := &atomicsync.AtomicSyncExtender{}
syncProvider := &atomicsync.AtomicSummaryProvider{}
// Create and pass the leaf handler to the atomic extension
// it will be initialized after the inner VM is initialized
leafHandler := NewAtomicLeafHandler()
atomicLeafTypeConfig := &extension.LeafRequestConfig{
LeafType: atomicsync.AtomicTrieNode,
MetricName: "sync_atomic_trie_leaves",
Handler: leafHandler,
}
vm.mempool = &txpool.Mempool{}
extensionConfig := &extension.Config{
NetworkCodec: networkCodec,
ConsensusCallbacks: vm.createConsensusCallbacks(),
BlockExtension: blockExtension,
SyncableParser: atomicsync.NewAtomicSyncSummaryParser(),
SyncExtender: syncExtender,
SyncSummaryProvider: syncProvider,
ExtraSyncLeafHandlerConfig: atomicLeafTypeConfig,
ExtraMempool: vm.mempool,
Clock: &vm.clock,
}
if err := vm.SetExtensionConfig(extensionConfig); err != nil {
return fmt.Errorf("failed to set extension config: %w", err)
}
// Initialize inner vm with the provided parameters
if err := vm.InnerVM.Initialize(
ctx,
chainCtx,
db,
genesisBytes,
upgradeBytes,
configBytes,
toEngine,
fxs,
appSender,
); err != nil {
return fmt.Errorf("failed to initialize inner VM: %w", err)
}
// Now that the inner VM is initialized, we can initialize the mempool
err = vm.mempool.Initialize(chainCtx, vm.MetricRegistry(), defaultMempoolSize, vm.verifyTxAtTip)
if err != nil {
return fmt.Errorf("failed to initialize mempool: %w", err)
}
// initialize bonus blocks on mainnet
var bonusBlockHeights map[uint64]ids.ID
if vm.ctx.NetworkID == constants.MainnetID {
bonusBlockHeights, err = readMainnetBonusBlocks()
if err != nil {
return fmt.Errorf("failed to read mainnet bonus blocks: %w", err)
}
}
// initialize atomic repository
lastAcceptedHash, lastAcceptedHeight, err := vm.ReadLastAccepted()
if err != nil {
return fmt.Errorf("failed to read last accepted block: %w", err)
}
vm.atomicTxRepository, err = atomicstate.NewAtomicTxRepository(vm.VersionDB(), atomic.Codec, lastAcceptedHeight)
if err != nil {
return fmt.Errorf("failed to create atomic repository: %w", err)
}
vm.atomicBackend, err = atomicstate.NewAtomicBackend(
vm.ctx.SharedMemory, bonusBlockHeights,
vm.atomicTxRepository, lastAcceptedHeight, lastAcceptedHash,
vm.Config().CommitInterval,
)
if err != nil {
return fmt.Errorf("failed to create atomic backend: %w", err)
}
// Atomic backend is available now, we can initialize structs that depend on it
syncProvider.Initialize(vm.atomicBackend.AtomicTrie())
syncExtender.Initialize(vm.atomicBackend, vm.atomicBackend.AtomicTrie(), vm.Config().StateSyncRequestSize)
leafHandler.Initialize(vm.atomicBackend.AtomicTrie().TrieDB(), atomicstate.AtomicTrieKeyLength, networkCodec)
vm.secpCache = secp256k1.RecoverCache{
LRU: cache.LRU[ids.ID, *secp256k1.PublicKey]{
Size: secpCacheSize,
},
}
// [vm.baseCodec] is a dummy codec used to fulfill the secp256k1fx VM
// interface. The fx will register all of its types, which can be safely
// ignored by the VM's codec.
vm.baseCodec = linearcodec.NewDefault()
return vm.fx.Initialize(vm)

I don't think we need to explicitly call the innerVM methods, and it could be more error-prone if we override/extend those for the atomic.VM in the future.

This is also the case for other methods below that use innerVM, which I think we should clean up.

Contributor

I agree - I think this is more intuitive to read as well.

Collaborator Author

I changed them to implicit calls, but we should still be careful not to introduce any unwanted recursion.
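
A minimal sketch (hypothetical types) of that hazard: with Go struct embedding, an overridden method must delegate through the embedded value, or it calls itself.

package sketch

type inner struct{}

func (inner) Shutdown() error { return nil }

type wrapper struct {
    inner // methods are promoted: wrapper gets Shutdown for free
}

func (w wrapper) Shutdown() error {
    // return w.Shutdown() // BUG: infinite recursion into this override
    return w.inner.Shutdown() // correct: delegate to the inner implementation
}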

)

type VM struct {
// TODO: decide if we want to directly import the evm package and VM struct
Collaborator Author

We can still do this, but I can't decide whether to keep the evm pkg as separate as possible, so that it's clear what is needed to wrap the VM (InnerVM/evm.VM),

or whether it's better to keep it cleaner by removing any unnecessary interfaces and directly importing plugin/evm.
