feat(blob-uploader): add arweave sender #1683
base: develop
Conversation
Walkthrough

This update introduces Arweave as a new blob storage platform alongside S3, adding transaction hash tracking for blob uploads. It extends configuration files and internal data structures, implements an Arweave uploader with background status checks and reupload logic, updates metrics, and modifies the database schema and ORM methods to support the new functionality.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App
    participant BlobUploader
    participant S3Uploader
    participant ArweaveUploader
    participant Arweave
    participant DB

    App->>BlobUploader: UploadBlobToS3 (periodic)
    BlobUploader->>DB: Fetch next unuploaded batch
    alt S3 upload
        BlobUploader->>S3Uploader: Upload blob
        S3Uploader-->>BlobUploader: Success/Failure
        BlobUploader->>DB: Update blob_upload status
    end

    App->>BlobUploader: UploadBlobToArweave (periodic/triggered)
    BlobUploader->>DB: Fetch next unuploaded batch
    alt Arweave upload
        BlobUploader->>ArweaveUploader: Upload blob
        ArweaveUploader-->>BlobUploader: Returns tx_hash
        BlobUploader->>DB: Insert/Update blob_upload (status: pending, tx_hash)
        loop every 10s
            ArweaveUploader->>DB: Get pending blob uploads
            ArweaveUploader->>Arweave: Check tx status
            alt Confirmed
                ArweaveUploader->>DB: Update status to uploaded
            else Pending > 10min
                ArweaveUploader->>BlobUploader: Trigger reupload with speed factor
            else Dropped/Not found
                ArweaveUploader->>BlobUploader: Trigger reupload without speed factor
            end
        end
    end
```
Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure.

🔧 golangci-lint (1.64.8)

```
level=warning msg="[runner] Can't run linter goanalysis_metalinter: buildir: failed to load package: could not load export data: no export data for \"github.com/btcsuite/btcd/btcec\""
```
Actionable comments posted: 9
🧹 Nitpick comments (9)
database/migrate/migrations/00028_add_transaction_hash_to_blob_upload.sql (1)
`4-8`: Consider `UNIQUE` or `CONCURRENTLY` depending on usage

If every blob upload is expected to have a distinct `tx_hash`, declaring the index as `UNIQUE` prevents accidental duplicates. For large tables, adding `CONCURRENTLY` avoids a full-table lock during creation:

```sql
CREATE UNIQUE INDEX CONCURRENTLY idx_blob_upload_transaction_hash ON blob_upload(tx_hash);
```

Not required, but worth evaluating against production traffic.
rollup/go.mod (1)
`59-63`: Two `go-ethereum` module paths may bloat binaries & complicate upgrades

The project now depends on both `github.com/scroll-tech/go-ethereum` (fork) and `github.com/ethereum/go-ethereum` (upstream) via indirect deps. This results in two copies of essentially the same codebase, increasing build size and the risk of incompatible types crossing package boundaries.

If possible, exclude the upstream module by adding:

```sh
go mod edit -dropreplace github.com/ethereum/go-ethereum
go mod tidy
```

or vendor a single fork that satisfies all imports.
rollup/internal/controller/blob_uploader/arweave_sender_test.go (1)
`1-1`: Placeholder test file provides no coverage

`arweave_sender_test.go` is empty, so the new `ArweaveUploader` code path remains untested. At minimum, add unit tests that:

- Mock an Arweave client and assert a successful upload updates status/txHash.
- Cover retry and failure paths.

Need help scaffolding tests with `goar` mocks? Let me know; a minimal sketch follows.
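For example, a table-driven test of the confirmation-threshold logic quoted later in this review (`status.NumberOfConfirmations >= int(u.confirmations)`). `isConfirmed` is a hypothetical extraction of that inline comparison made for testability, not how the PR structures the code; a fuller suite would mock the `goar` client itself:

```go
package blob_uploader

import "testing"

// isConfirmed mirrors the confirmation-threshold check in
// checkPendingBlobUploads; extracting it as a function is an
// assumption for testability, not code from this PR.
func isConfirmed(numConfirmations, required int) bool {
	return numConfirmations >= required
}

func TestConfirmationThreshold(t *testing.T) {
	cases := []struct {
		name     string
		got      int
		required int
		want     bool
	}{
		{"below threshold stays pending", 2, 3, false},
		{"exact threshold confirms", 3, 3, true},
		{"above threshold confirms", 5, 3, true},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := isConfirmed(c.got, c.required); got != c.want {
				t.Fatalf("isConfirmed(%d, %d) = %v, want %v", c.got, c.required, got, c.want)
			}
		})
	}
}
```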
rollup/cmd/blob_uploader/app/app.go (1)

`73-74`: Function name & scheduling may no longer reflect multi-backend reality

Only `blobUploader.UploadBlobToS3` is looped every second, yet the PR introduces Arweave support. If Arweave logic is now embedded inside this S3-named method, rename it to something generic (e.g., `ProcessUploads`) to avoid confusion. If Arweave requires a separate loop, add it here with an appropriate interval:

```diff
- go utils.Loop(subCtx, 1*time.Second, blobUploader.UploadBlobToS3)
+ go utils.Loop(subCtx, 1*time.Second, blobUploader.ProcessUploads)
```

(or start a second goroutine for Arweave, as sketched below). This will keep the entrypoint aligned with the new feature.
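A runnable sketch of the two-goroutine option; `loop` approximates the repo's `utils.Loop` helper, whose exact signature this review does not show (assumption), and the tick bodies are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// loop invokes f every period until ctx is cancelled, approximating
// the utils.Loop helper used in the entrypoint (signature assumed).
func loop(ctx context.Context, period time.Duration, f func(context.Context)) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			f(ctx)
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	// One goroutine per backend, each with its own interval.
	go loop(ctx, 1*time.Second, func(context.Context) { fmt.Println("s3 upload tick") })
	go loop(ctx, 2*time.Second, func(context.Context) { fmt.Println("arweave upload tick") })
	<-ctx.Done()
}
```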
rollup/internal/controller/blob_uploader/s3_sender.go (1)
`39-41`: Clarify implicit region resolution

Great catch making `WithRegion` conditional. When `cfg.Region` is empty, the SDK now falls back to the usual resolution chain (env vars, shared config, instance metadata, …). Please add a short comment noting this fallback so future readers don't wonder why the struct's `region` can still be empty.
rollup/internal/controller/blob_uploader/blob_uploader_metrics.go (1)

`33-40`: Metric names: keep platform spelling consistent

Existing S3 metrics use "s3". The new ones use lowercase "arweave" (good), but their wording differs slightly from the S3 counterparts. For parity, consider matching the naming pattern exactly (e.g., `upload_to_arweave_success_total`), as in the sketch below.
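A small sketch of what structurally parallel counters look like with the Prometheus client; the metric names and help strings here are illustrative, not the ones in this PR:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

func main() {
	reg := prometheus.NewRegistry()
	// Identical naming shape for both backends; only the platform differs.
	s3Success := promauto.With(reg).NewCounter(prometheus.CounterOpts{
		Name: "blob_uploader_upload_to_s3_success_total",
		Help: "Total successful blob uploads to S3.",
	})
	arweaveSuccess := promauto.With(reg).NewCounter(prometheus.CounterOpts{
		Name: "blob_uploader_upload_to_arweave_success_total",
		Help: "Total successful blob uploads to Arweave.",
	})
	s3Success.Inc()
	arweaveSuccess.Inc()
	fmt.Println("registered parallel counters")
}
```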
rollup/internal/controller/blob_uploader/arweave_sender.go (1)

`17-18`: Update comment – wrong platform

The header comment still says "uploading data to AWS S3". Please change it to "Arweave".
rollup/internal/controller/blob_uploader/blob_uploader.go (2)

`52-57`: Replace Chinese comments with English.

The comments should be in English for consistency and international collaboration.

```diff
-	// 先创建 blobUploader 实例
+	// Create blobUploader instance first
 	blobUploader := &BlobUploader{
 		ctx:             ctx,
 		cfg:             cfg,
 		s3Uploader:      s3Uploader,
-		arweaveUploader: nil, // 稍后设置
+		arweaveUploader: nil, // Set later
```
`208-214`: Document the temporary nature of the disabled check.

The commented-out code indicates this is a temporary workaround. Consider adding a TODO with a plan for re-enabling this check:

```diff
-	// temporarily disable this check because the codec_version field for chunk was added later.
-	// check codec version
+	// TODO: Re-enable this check once all chunks have been migrated to include the codec_version field.
+	// Temporarily disabled because the codec_version field for chunk was added later.
+	// Original check:
 	// for _, dbChunk := range dbChunks {
 	// 	if dbBatch.CodecVersion != dbChunk.CodecVersion {
 	// 		return nil, fmt.Errorf("batch codec version is different from chunk codec version, batch index: %d, chunk index: %d, batch codec version: %d, chunk codec version: %d", dbBatch.Index, dbChunk.Index, dbBatch.CodecVersion, dbChunk.CodecVersion)
 	// 	}
 	// }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)

- `go.work.sum` is excluded by `!**/*.sum`
- `rollup/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (12)

- `common/version/version.go` (1 hunks)
- `database/migrate/migrations/00028_add_transaction_hash_to_blob_upload.sql` (1 hunks)
- `rollup/cmd/blob_uploader/app/app.go` (1 hunks)
- `rollup/conf/config.json` (2 hunks)
- `rollup/go.mod` (10 hunks)
- `rollup/internal/config/l2.go` (2 hunks)
- `rollup/internal/controller/blob_uploader/arweave_sender.go` (1 hunks)
- `rollup/internal/controller/blob_uploader/arweave_sender_test.go` (1 hunks)
- `rollup/internal/controller/blob_uploader/blob_uploader.go` (6 hunks)
- `rollup/internal/controller/blob_uploader/blob_uploader_metrics.go` (2 hunks)
- `rollup/internal/controller/blob_uploader/s3_sender.go` (1 hunks)
- `rollup/internal/orm/blob_upload.go` (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
🔇 Additional comments (5)
common/version/version.go (1)
`8-8`: Bump version tag to v4.5.24

The version tag is updated correctly to reflect the new release. Please verify that any remaining references to v4.5.23 (e.g., changelog entries, CI pipelines, Docker tags) have been updated as well.
rollup/conf/config.json (1)
`115-121`: `confirmations` field type & empty-value placeholders warrant follow-up

The new `arweave_config` block looks good syntactically, but two small details merit verification:

- `confirmations` is a JSON string, whereas all other `*_config.confirmations` fields in the file use a hex-encoded quantity (e.g. `"0x0"`). Double-check that the loader interprets an empty string correctly and doesn't panic on a non-hex value; a defensive parsing sketch follows this comment.
- All four fields are committed with empty strings. That is fine for a template, but make sure real deployments override them through environment-specific configs or secrets management; otherwise the uploader will fail at runtime.

No code change required if this is intentional, but please confirm.
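A sketch of parsing that field defensively, assuming it may arrive as `""`, `"0x2"`, or `"2"`; `parseConfirmations` is a hypothetical helper, since the PR's actual loader is not shown, and treating the empty string as zero is an assumption:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseConfirmations tolerates empty, hex-quantity, and decimal inputs.
func parseConfirmations(s string) (uint64, error) {
	if s == "" {
		return 0, nil // assumption: empty means zero confirmations
	}
	if strings.HasPrefix(s, "0x") || strings.HasPrefix(s, "0X") {
		return strconv.ParseUint(s[2:], 16, 64)
	}
	return strconv.ParseUint(s, 10, 64)
}

func main() {
	for _, in := range []string{"", "0x2", "7"} {
		n, err := parseConfirmations(in)
		fmt.Printf("%q -> %d (err=%v)\n", in, n, err)
	}
}
```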
rollup/internal/config/l2.go (1)
`54-57`: Nil-check requirement

`ArweaveConfig` is a pointer – callers must nil-check before dereferencing. Make sure the blob-uploader constructor guards against `nil` to avoid panics when only S3 is configured (see the sketch below).
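A minimal sketch of such a guard. The struct mirrors the `ArweaveConfig` fields added in this PR, but `newArweaveUploader` and its wiring are hypothetical:

```go
package main

import "fmt"

// Mirrors the ArweaveConfig fields added in this PR.
type ArweaveConfig struct {
	Endpoint       string
	PrivateKeyPath string
	TxTag          string
	Confirmations  uint64
}

// newArweaveUploader is a hypothetical constructor demonstrating the
// guard: a nil config means "feature disabled", not a panic.
func newArweaveUploader(cfg *ArweaveConfig) (string, error) {
	if cfg == nil {
		return "", fmt.Errorf("arweave config missing: running in S3-only mode")
	}
	return cfg.Endpoint, nil
}

func main() {
	if _, err := newArweaveUploader(nil); err != nil {
		fmt.Println("disabled:", err) // caller treats this as feature-off, not fatal
	}
}
```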
rollup/internal/controller/blob_uploader/blob_uploader.go (2)

`151-198`: Well-structured unified upload logic.

The method effectively handles uploads to both S3 and Arweave platforms with appropriate error handling and status updates.

`279-330`: Robust batch retrieval logic with proper parent validation.

The method correctly handles batch reverts by checking the parent batch's upload status and ensures only committed batches are considered for upload.
```sql
DROP INDEX IF EXISTS idx_blob_upload_tx_hash;
```
Mismatch between up/down migration breaks rollback

The UP section creates the index as `idx_blob_upload_transaction_hash`, but the DOWN section tries to drop `idx_blob_upload_tx_hash`. A rollback will therefore error out, leaving the column and index in place.

```diff
-DROP INDEX IF EXISTS idx_blob_upload_tx_hash;
+DROP INDEX IF EXISTS idx_blob_upload_transaction_hash;
```
🤖 Prompt for AI Agents
In database/migrate/migrations/00028_add_transaction_hash_to_blob_upload.sql at
lines 14-15, the DOWN migration attempts to drop an index named
idx_blob_upload_tx_hash, but the UP migration creates
idx_blob_upload_transaction_hash. To fix this, update the DROP INDEX statement
in the DOWN migration to use the exact index name
idx_blob_upload_transaction_hash to ensure rollback works correctly.
```go
// ArweaveConfig loads arweave_uploader configuration items.
type ArweaveConfig struct {
	Endpoint       string `json:"endpoint"`
	PrivateKeyPath string `json:"private_key_path"`
	TxTag          string `json:"tx_tag"`
	Confirmations  uint64 `json:"confirmations"`
}
```
🛠️ Refactor suggestion

Confirmations width mismatch

`Confirmations` is `uint64`, but `ArweaveUploader` casts it to `int`. On 32-bit builds this silently truncates. Consider storing it as `int` or sanitising the value before the cast (see the sketch below).
🤖 Prompt for AI Agents
In rollup/internal/config/l2.go around lines 67 to 73, the Confirmations field
is defined as uint64 but is cast to int in ArweaveUploader, causing potential
truncation on 32-bit systems. To fix this, change the Confirmations field type
to int to match its usage or add validation to ensure the uint64 value fits
within int range before casting.
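A sketch of the sanitising option; `confirmationsAsInt` is a hypothetical helper, not code from this PR:

```go
package main

import (
	"fmt"
	"math"
)

// confirmationsAsInt rejects values that would overflow int on the
// current platform instead of truncating them silently.
func confirmationsAsInt(c uint64) (int, error) {
	if c > uint64(math.MaxInt) {
		return 0, fmt.Errorf("confirmations %d overflows int on this platform", c)
	}
	return int(c), nil
}

func main() {
	n, err := confirmationsAsInt(3)
	fmt.Println(n, err) // 3 <nil>
}
```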
```go
func (u *ArweaveUploader) checkPendingBlobUploads(ctx context.Context) {
	pendingBlobUploads, err := u.blobUploadOrm.GetPendingBlobUploads(ctx, 100)
	if err != nil {
		log.Error("failed to load pending blob uploads", "err", err)
		return
	}

	for _, blobUpload := range pendingBlobUploads {
		status, err := u.client.GetTransactionStatus(blobUpload.TxHash)
		if err != nil {
			if err == schema.ErrPendingTx {
				if time.Since(blobUpload.UpdatedAt) > 10*time.Minute {
					// transaction pending too long, we need to bump the gas price
					if u.onReuploadNeeded != nil {
						// get batch from database
						dbBatch, err := u.batchOrm.GetBatchByIndex(u.ctx, blobUpload.BatchIndex)
						if err != nil {
							log.Error("failed to get batch by index %d: %w", blobUpload.BatchIndex, err)
							continue
						}
						if dbBatch.Hash != blobUpload.BatchHash {
							log.Error("found unmatched batch hash when reupload blob data", "batch index", blobUpload.BatchIndex, "dbBatch hash", dbBatch.Hash, "blobUpload batch hash", blobUpload.BatchHash, "err", err)
							continue
						}
						if _, err := u.onReuploadNeeded(dbBatch, types.BlobStoragePlatformArweave, 50); err != nil {
							log.Error("failed to reupload blob", "batch index", blobUpload.BatchIndex, "batch hash", blobUpload.BatchHash, "err", err)
						} else {
							log.Info("successfully reuploaded blob with higher gas price", "batch index", blobUpload.BatchIndex, "batch hash", blobUpload.BatchHash)
						}
					}
				}
				log.Debug("got pending arweave transaction, waiting for confirmation")
			}
			if err == schema.ErrNotFound || err == schema.ErrInvalidId {
				// resend transaction if it's dropped
				if u.onReuploadNeeded != nil {
					// get batch from database
					dbBatch, err := u.batchOrm.GetBatchByIndex(u.ctx, blobUpload.BatchIndex)
					if err != nil {
						log.Error("failed to get batch by index %d: %w", blobUpload.BatchIndex, err)
						continue
					}
					if dbBatch.Hash != blobUpload.BatchHash {
						log.Error("found unmatched batch hash when reupload blob data", "batch index", blobUpload.BatchIndex, "dbBatch hash", dbBatch.Hash, "blobUpload batch hash", blobUpload.BatchHash, "err", err)
						continue
					}
					if _, err := u.onReuploadNeeded(dbBatch, types.BlobStoragePlatformArweave, 0); err != nil {
						log.Error("failed to reupload blob", "batch index", blobUpload.BatchIndex, "batch hash", blobUpload.BatchHash, "err", err)
					} else {
						log.Info("successfully reuploaded blob", "batch index", blobUpload.BatchIndex, "batch hash", blobUpload.BatchHash)
					}
				}
			}
			log.Error("failed to get arweave transaction status", "err", err)
			return
		}

		if status.NumberOfConfirmations >= int(u.confirmations) {
			if err := u.blobUploadOrm.UpdateUploadStatus(u.ctx, blobUpload.TxHash, types.BlobUploadStatusUploaded); err != nil {
				log.Warn("UpdateUploadStatus failed", "transaction hash", blobUpload.TxHash, "upload status", types.BlobUploadStatusUploaded, "err", err)
				return
			}
		}
	}
}
```
🛠️ Refactor suggestion

Error-handling aborts entire scan

`checkPendingBlobUploads` returns immediately on the first error (`return` at lines 144/151). This aborts processing of the remaining pending uploads and can stall progress if a single record is malformed. Prefer logging the error and continuing the loop:

```diff
- log.Error("failed to get arweave transaction status", "err", err)
- return
+ log.Error("failed to get arweave transaction status", "err", err)
+ continue
```
🤖 Prompt for AI Agents
In rollup/internal/controller/blob_uploader/arweave_sender.go lines 90 to 156,
the function checkPendingBlobUploads aborts the entire processing loop and
returns immediately upon encountering an error when checking a single blob
upload's transaction status. To fix this, replace the return statements inside
the loop that handle errors with continue statements so that the function logs
the error but continues processing the remaining pending blob uploads without
stopping prematurely.
db = db.Where("platform = ?", platform, types.BlobUploadStatusUploaded) | ||
db = db.Where("status = ? OR status = ?", types.BlobUploadStatusUploaded, types.BatchProofsStatusPending) | ||
db = db.Order("batch_index DESC") |
Wrong status constant

The status filter compares against `types.BatchProofsStatusPending`; it should be `types.BlobUploadStatusPending`. Using the batch-proof constant will miss pending uploads and break the scheduler logic.
🤖 Prompt for AI Agents
In rollup/internal/orm/blob_upload.go around lines 45 to 47, the code
incorrectly uses the constant types.BatchProofsStatusPending when filtering by
status. Replace types.BatchProofsStatusPending with
types.BlobUploadStatusPending to correctly filter for pending blob uploads and
ensure the scheduler logic works as intended.
SQL placeholder mismatch breaks query

`Where("platform = ?", platform, types.BlobUploadStatusUploaded)` supplies two args for one placeholder – GORM silently ignores the second, so the status filter never applies. Drop the extra arg or add a placeholder:

```diff
-db = db.Where("platform = ?", platform, types.BlobUploadStatusUploaded)
+db = db.Where("platform = ?", platform)
```
📝 Committable suggestion

```go
db = db.Where("platform = ?", platform)
db = db.Where("status = ? OR status = ?", types.BlobUploadStatusUploaded, types.BatchProofsStatusPending)
db = db.Order("batch_index DESC")
```
🤖 Prompt for AI Agents
In rollup/internal/orm/blob_upload.go around lines 45 to 47, the first Where
clause has a mismatch between placeholders and arguments, passing two arguments
for one placeholder which causes the second argument to be ignored. Fix this by
either removing the extra argument or adding a second placeholder to match both
arguments properly in the query.
db = db.Where("status = ?", types.BatchProofsStatusPending) | ||
db = db.Order("batch_index asc") | ||
db = db.Limit(limit) |
Pending upload query uses batch-proof status

Same constant mix-up here – replace with `types.BlobUploadStatusPending`.
🤖 Prompt for AI Agents
In rollup/internal/orm/blob_upload.go around lines 97 to 99, the query filters
by the wrong status constant `types.BatchProofsStatusPending`. Replace this with
the correct constant `types.BlobUploadStatusPending` to ensure the query
correctly targets pending blob uploads.
db = db.Where("tx_hash", txHash) | ||
|
||
updateFields := map[string]interface{}{ | ||
"status": int16(status), | ||
} | ||
if err := db.Updates(updateFields).Error; err != nil { |
Missing equality operator in WHERE clause

`db.Where("tx_hash", txHash)` lacks `= ?`, leading GORM to interpret `tx_hash` as a boolean column. Use:

```diff
-db = db.Where("tx_hash", txHash)
+db = db.Where("tx_hash = ?", txHash)
```
📝 Committable suggestion

```go
db = db.Where("tx_hash = ?", txHash)

updateFields := map[string]interface{}{
	"status": int16(status),
}
if err := db.Updates(updateFields).Error; err != nil {
```
🤖 Prompt for AI Agents
In rollup/internal/orm/blob_upload.go around lines 155 to 160, the WHERE clause
in the GORM query is missing the equality operator and placeholder. Change
db.Where("tx_hash", txHash) to db.Where("tx_hash = ?", txHash) to correctly
filter by the txHash value.
```go
key := common.BytesToHash(versionedBlobHash[:]).Hex()
err = b.s3Uploader.UploadData(b.ctx, blob[:], key)
// get un-uploaded batches from database in ascending order by their index.
dbBatch, err := b.GetFirstUnuploadedBatchByPlatform(b.ctx, b.cfg.StartBatch, types.BlobStoragePlatformS3)
```
Fix incorrect platform type in Arweave uploader.

The method is fetching batches for the S3 platform instead of the Arweave platform. This appears to be a copy-paste error.

```diff
- dbBatch, err := b.GetFirstUnuploadedBatchByPlatform(b.ctx, b.cfg.StartBatch, types.BlobStoragePlatformS3)
+ dbBatch, err := b.GetFirstUnuploadedBatchByPlatform(b.ctx, b.cfg.StartBatch, types.BlobStoragePlatformArweave)
```
📝 Committable suggestion

```go
dbBatch, err := b.GetFirstUnuploadedBatchByPlatform(b.ctx, b.cfg.StartBatch, types.BlobStoragePlatformArweave)
```
🤖 Prompt for AI Agents
In rollup/internal/controller/blob_uploader/blob_uploader.go at line 129, the
method call to GetFirstUnuploadedBatchByPlatform incorrectly uses the S3
platform constant instead of the Arweave platform constant. Replace
types.BlobStoragePlatformS3 with the correct constant representing the Arweave
platform to ensure the method fetches batches for Arweave.
```go
// temporarily add this check because the codec_version field for chunk was added later.
if codecVersion == encoding.CodecV0 && dbBatch.Index < 73224 {
	codecVersion = encoding.CodecV1
}
if codecVersion == encoding.CodecV0 && dbBatch.Index >= 73224 { // for sepolia
	codecVersion = encoding.CodecV2
}
```
🛠️ Refactor suggestion

Extract magic numbers to named constants.

The hardcoded batch index 73224 should be extracted to a named constant for better maintainability and clarity.

```diff
+const (
+	// sepoliaBatchIndexV2Transition represents the batch index on the Sepolia network where CodecV2 was introduced
+	sepoliaBatchIndexV2Transition = 73224
+)
+
 // temporarily add this check because the codec_version field for chunk was added later.
 if codecVersion == encoding.CodecV0 && dbBatch.Index < 73224 {
 	codecVersion = encoding.CodecV1
 }
-if codecVersion == encoding.CodecV0 && dbBatch.Index >= 73224 { // for sepolia
+if codecVersion == encoding.CodecV0 && dbBatch.Index >= sepoliaBatchIndexV2Transition { // for sepolia
 	codecVersion = encoding.CodecV2
 }
```
🤖 Prompt for AI Agents
In rollup/internal/controller/blob_uploader/blob_uploader.go around lines 230 to
237, the hardcoded batch index value 73224 should be extracted into a named
constant at the top of the file or in a relevant constants section. Define a
descriptive constant name for this value and replace all occurrences of 73224
with this constant to improve code clarity and maintainability.
Purpose or design rationale of this PR
Describe your change. Make sure to answer these three questions: What does this PR do? Why does it do it? How does it do it?
PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the conventional commit types.
Deployment tag versioning

Has `tag` in `common/version.go` been updated, or have you added the `bump-version` label to this PR?

Breaking change label

Does this PR have the `breaking-change` label?

Summary by CodeRabbit
New Features
Improvements
Database
Dependency Updates