feat(blob-uploader): add arweave sender #1683

Open · wants to merge 9 commits into base: develop

Conversation

@yiweichi (Member) commented Jun 18, 2025

Purpose or design rationale of this PR

Describe your change. Make sure to answer these three questions: What does this PR do? Why does it do it? How does it do it?

PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • build: Changes that affect the build system or external dependencies (example scopes: yarn, eslint, typescript)
  • ci: Changes to our CI configuration files and scripts (example scopes: vercel, github, cypress)
  • docs: Documentation-only changes
  • feat: A new feature
  • fix: A bug fix
  • perf: A code change that improves performance
  • refactor: A code change that neither fixes a bug, adds a feature, nor improves performance
  • style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
  • test: Adding missing tests or correcting existing tests

Deployment tag versioning

Has the tag in common/version.go been updated, or have you added the bump-version label to this PR?

  • No, this PR doesn't involve a new deployment, git tag, or docker image tag
  • Yes

Breaking change label

Does this PR have the breaking-change label?

  • No, this PR is not a breaking change
  • Yes

Summary by CodeRabbit

  • New Features

    • Introduced support for uploading blobs to Arweave as an additional storage platform.
    • Added configuration options for Arweave integration, including endpoint, private key path, transaction tag, and confirmations.
    • Implemented transaction hash tracking for blob uploads and enhanced status management.
    • Added new Prometheus metrics to monitor Arweave upload successes and failures.
  • Improvements

    • Increased the frequency of blob uploads to S3.
    • Enhanced handling of legacy data and improved error logging.
    • Improved configuration flexibility for AWS region settings.
  • Database

    • Added a new column and index for transaction hashes in the blob upload table.
  • Dependency Updates

    • Updated Go version and several dependencies for improved stability and compatibility.

coderabbitai bot commented Jun 18, 2025

Walkthrough

This update introduces Arweave as a new blob storage platform alongside S3, adding transaction hash tracking for blob uploads. It extends configuration files and internal data structures, implements an Arweave uploader with background status checks and reupload logic, updates metrics, and modifies database schema and ORM methods to support the new functionality.

Changes

| File(s) | Change Summary |
| --- | --- |
| common/version/version.go | Updated version tag from v4.5.23 to v4.5.24. |
| rollup/go.mod | Updated Go version, changed go-ethereum dependency, added/updated several indirect dependencies. No direct code changes. |
| rollup/conf/config.json, rollup/internal/config/l2.go | Added Arweave-specific configuration fields to the blob uploader config (endpoint, private key path, tx tag, confirmations). Introduced ArweaveConfig struct. |
| database/migrate/migrations/00028_add_transaction_hash_to_blob_upload.sql | Added tx_hash column and index to blob_upload table; migration includes up/down scripts. |
| rollup/internal/orm/blob_upload.go | Added TxHash field to BlobUpload struct; updated queries and methods to support transaction hashes, pending-status queries, and atomic status/hash updates. Added methods for pending uploads and status updates by tx hash. |
| rollup/internal/controller/blob_uploader/arweave_sender.go | New: implements ArweaveUploader for uploading blobs to Arweave, background status checking, and reupload logic with configurable confirmations and transaction tags. |
| rollup/internal/controller/blob_uploader/arweave_sender_test.go | New: empty placeholder test file for the Arweave uploader (no tests yet). |
| rollup/internal/controller/blob_uploader/blob_uploader.go | Extended BlobUploader to support Arweave uploads, unified upload logic, added Arweave uploader initialization and a new public method for Arweave uploads, refactored codec version handling, and enhanced error handling and metrics. |
| rollup/internal/controller/blob_uploader/blob_uploader_metrics.go | Added Prometheus counters for Arweave upload successes and failures. |
| rollup/internal/controller/blob_uploader/s3_sender.go | Appends the AWS region option during S3 uploader initialization only when a region is specified. |
| rollup/cmd/blob_uploader/app/app.go | Reduced the interval for S3 blob upload attempts from 2 seconds to 1 second. |

Sequence Diagram(s)

sequenceDiagram
    participant App
    participant BlobUploader
    participant S3Uploader
    participant ArweaveUploader
    participant DB

    App->>BlobUploader: UploadBlobToS3 (periodic)
    BlobUploader->>DB: Fetch next unuploaded batch
    alt S3 upload
        BlobUploader->>S3Uploader: Upload blob
        S3Uploader-->>BlobUploader: Success/Failure
        BlobUploader->>DB: Update blob_upload status
    end

    App->>BlobUploader: UploadBlobToArweave (periodic/triggered)
    BlobUploader->>DB: Fetch next unuploaded batch
    alt Arweave upload
        BlobUploader->>ArweaveUploader: Upload blob
        ArweaveUploader-->>BlobUploader: Returns tx_hash
        BlobUploader->>DB: Insert/Update blob_upload (status: pending, tx_hash)
        loop every 10s
            ArweaveUploader->>DB: Get pending blob uploads
            ArweaveUploader->>Arweave: Check tx status
            alt Confirmed
                ArweaveUploader->>DB: Update status to uploaded
            else Pending > 10min
                ArweaveUploader->>BlobUploader: Trigger reupload with speed factor
            else Dropped/Not found
                ArweaveUploader->>BlobUploader: Trigger reupload without speed factor
            end
        end
    end

Possibly related PRs

Suggested labels

bump-version

Suggested reviewers

  • georgehao
  • colinlyguo

Poem

In the meadow of code where the data flows free,
We’ve woven Arweave in with S3.
Now blobs can travel, with hashes in tow,
Through blockchains and clouds, wherever they go.
Metrics are counting, migrations complete—
This bunny is hopping to a decentralized beat! 🐇✨

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (1.64.8)

level=warning msg="[runner] Can't run linter goanalysis_metalinter: buildir: failed to load package : could not load export data: no export data for "github.com/btcsuite/btcd/btcec""
level=error msg="Running error: can't run linter goanalysis_metalinter\nbuildir: failed to load package : could not load export data: no export data for "github.com/btcsuite/btcd/btcec""

@coderabbitai bot left a comment

Actionable comments posted: 9

🧹 Nitpick comments (9)
database/migrate/migrations/00028_add_transaction_hash_to_blob_upload.sql (1)

4-8: Consider UNIQUE or CONCURRENTLY depending on usage

If every blob upload is expected to have a distinct tx_hash, declaring the index as UNIQUE prevents accidental duplicates.
For large tables, adding CONCURRENTLY avoids a full-table lock during creation. Note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so the migration would also need goose's -- +goose NO TRANSACTION annotation:

CREATE UNIQUE INDEX CONCURRENTLY idx_blob_upload_transaction_hash ON blob_upload(tx_hash);

Not required, but worth evaluating against production traffic.

rollup/go.mod (1)

59-63: Two go-ethereum module paths may bloat binaries & complicate upgrades

The project now depends on both
github.com/scroll-tech/go-ethereum (fork) and github.com/ethereum/go-ethereum (upstream) via indirect deps. This results in two copies of essentially the same codebase, increasing build size and risk of incompatible types crossing package boundaries.

If possible, unify on the fork by pointing the upstream module path at it via a replace directive (the version placeholder is illustrative):

go mod edit -replace github.com/ethereum/go-ethereum=github.com/scroll-tech/go-ethereum@<version>
go mod tidy

or vendor a single fork that satisfies all imports.

rollup/internal/controller/blob_uploader/arweave_sender_test.go (1)

1-1: Placeholder test file provides no coverage

arweave_sender_test.go is empty, so the new ArweaveUploader code path remains untested.
At minimum, add unit tests that:

  • Mock an Arweave client and assert successful upload updates status/txHash.
  • Cover retry and failure paths.

Need help scaffolding tests with goar mocks? Let me know.
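
For example, a minimal sketch of that first case. The arweaveClient interface, uploadBlob helper, and fake client below are hypothetical stand-ins for the real ArweaveUploader/goar API, shown only to illustrate the mocking pattern:

package blobuploader

import (
	"errors"
	"testing"
)

// arweaveClient is a hypothetical seam over the goar client so the
// upload path can be tested without network access.
type arweaveClient interface {
	SendData(data []byte) (txHash string, err error)
}

// uploadBlob stands in for the ArweaveUploader upload path under test.
func uploadBlob(c arweaveClient, data []byte) (string, error) {
	if len(data) == 0 {
		return "", errors.New("empty blob")
	}
	return c.SendData(data)
}

type fakeClient struct {
	txHash string
	err    error
}

func (f *fakeClient) SendData(_ []byte) (string, error) { return f.txHash, f.err }

func TestUploadBlobSuccess(t *testing.T) {
	c := &fakeClient{txHash: "fake-tx"}
	got, err := uploadBlob(c, []byte("blob"))
	if err != nil || got != "fake-tx" {
		t.Fatalf("got (%q, %v), want (%q, nil)", got, err, "fake-tx")
	}
}

func TestUploadBlobFailure(t *testing.T) {
	c := &fakeClient{err: errors.New("gateway timeout")}
	if _, err := uploadBlob(c, []byte("blob")); err == nil {
		t.Fatal("expected error, got nil")
	}
}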

rollup/cmd/blob_uploader/app/app.go (1)

73-74: Function name & scheduling may no longer reflect multi-backend reality

Only blobUploader.UploadBlobToS3 is looped every second, yet the PR introduces Arweave support.
If Arweave logic is now embedded inside this S3-named method, rename it to something generic (e.g., ProcessUploads) to avoid confusion.
If Arweave requires a separate loop, add it here with an appropriate interval.

- go utils.Loop(subCtx, 1*time.Second, blobUploader.UploadBlobToS3)
+ go utils.Loop(subCtx, 1*time.Second, blobUploader.ProcessUploads)

(or start a second goroutine for Arweave).

This will keep the entrypoint aligned with the new feature.
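
A sketch of the two-loop variant (the Arweave interval is illustrative; UploadBlobToArweave is the new public method named in the walkthrough):

  go utils.Loop(subCtx, 1*time.Second, blobUploader.UploadBlobToS3)
+ go utils.Loop(subCtx, 10*time.Second, blobUploader.UploadBlobToArweave)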

rollup/internal/controller/blob_uploader/s3_sender.go (1)

39-41: Clarify implicit region resolution

Great catch making WithRegion conditional. When cfg.Region is empty the SDK now falls back to the usual resolution chain (env vars, shared config, instance metadata, …). Please add a short comment noting this fallback so future readers don’t wonder why the struct’s region can still be empty.
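
A sketch of the commented, conditional option handling (assuming aws-sdk-go-v2; the function and variable names are illustrative):

package blobuploader

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
)

func loadAWSConfig(ctx context.Context, region string) (aws.Config, error) {
	var opts []func(*config.LoadOptions) error
	// Only pin the region when explicitly configured; otherwise the SDK
	// falls back to its usual resolution chain (AWS_REGION env vars,
	// shared config files, instance metadata).
	if region != "" {
		opts = append(opts, config.WithRegion(region))
	}
	return config.LoadDefaultConfig(ctx, opts...)
}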

rollup/internal/controller/blob_uploader/blob_uploader_metrics.go (1)

33-40: Metric names: keep platform spelling consistent

Existing S3 metrics use "s3" in their names, and the new ones correctly use lowercase "arweave", but the wording of the new counter names does not exactly mirror the S3 counterparts. For parity, consider matching the S3 naming pattern exactly (e.g. upload_to_arweave_success_total).
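
A hedged sketch of counters mirroring that pattern (registry wiring and the exact names used in the PR may differ):

package blobuploader

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	uploadToArweaveSuccessTotal = promauto.NewCounter(prometheus.CounterOpts{
		Name: "rollup_blob_uploader_upload_to_arweave_success_total", // illustrative name
		Help: "Total number of successful blob uploads to Arweave.",
	})
	uploadToArweaveFailureTotal = promauto.NewCounter(prometheus.CounterOpts{
		Name: "rollup_blob_uploader_upload_to_arweave_failure_total", // illustrative name
		Help: "Total number of failed blob uploads to Arweave.",
	})
)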

rollup/internal/controller/blob_uploader/arweave_sender.go (1)

17-18: Update comment – wrong platform

The header still says “uploading data to AWS S3”. Please change to “Arweave”.

rollup/internal/controller/blob_uploader/blob_uploader.go (2)

52-57: Replace Chinese comments with English.

The comments should be in English for consistency and international collaboration.

-		// 先创建 blobUploader 实例
+		// Create blobUploader instance first
		blobUploader := &BlobUploader{
			ctx:             ctx,
			cfg:             cfg,
			s3Uploader:      s3Uploader,
-			arweaveUploader: nil, // 稍后设置
+			arweaveUploader: nil, // Set later

208-214: Document the temporary nature of the disabled check.

The commented-out code indicates this is a temporary workaround. Consider adding a TODO with a plan for re-enabling this check.

-	// temporarily disable this check because the codec_version field for chunk was added later.
-	// check codec version
+	// TODO: Re-enable this check once all chunks have been migrated to include codec_version field.
+	// Temporarily disabled because the codec_version field for chunk was added later.
+	// Original check:
	// for _, dbChunk := range dbChunks {
	// 	if dbBatch.CodecVersion != dbChunk.CodecVersion {
	// 		return nil, fmt.Errorf("batch codec version is different from chunk codec version, batch index: %d, chunk index: %d, batch codec version: %d, chunk codec version: %d", dbBatch.Index, dbChunk.Index, dbBatch.CodecVersion, dbChunk.CodecVersion)
	// 	}
	// }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5d6b5a8 and fe58a30.

⛔ Files ignored due to path filters (2)
  • go.work.sum is excluded by !**/*.sum
  • rollup/go.sum is excluded by !**/*.sum
📒 Files selected for processing (12)
  • common/version/version.go (1 hunks)
  • database/migrate/migrations/00028_add_transaction_hash_to_blob_upload.sql (1 hunks)
  • rollup/cmd/blob_uploader/app/app.go (1 hunks)
  • rollup/conf/config.json (2 hunks)
  • rollup/go.mod (10 hunks)
  • rollup/internal/config/l2.go (2 hunks)
  • rollup/internal/controller/blob_uploader/arweave_sender.go (1 hunks)
  • rollup/internal/controller/blob_uploader/arweave_sender_test.go (1 hunks)
  • rollup/internal/controller/blob_uploader/blob_uploader.go (6 hunks)
  • rollup/internal/controller/blob_uploader/blob_uploader_metrics.go (2 hunks)
  • rollup/internal/controller/blob_uploader/s3_sender.go (1 hunks)
  • rollup/internal/orm/blob_upload.go (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (7)
  • GitHub Check: tests
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: check
🔇 Additional comments (5)
common/version/version.go (1)

8-8: Bump version tag to v4.5.24

Version tag updated correctly to reflect the new release. Please verify that any remaining references to v4.5.23 (e.g., changelog entries, CI pipelines, Docker tags) have been updated as well.

rollup/conf/config.json (1)

115-121: confirmations field type & empty-value placeholders warrant follow-up

The new arweave_config block looks good syntactically, but two small details merit verification:

  1. confirmations is a JSON string, whereas all other *_config.confirmations fields in the file use a hex-encoded quantity (e.g. "0x0"). Double-check that the loader interprets an empty string correctly and doesn’t panic on a non-hex value.

  2. All four fields are committed with empty strings. That is fine for a template, but make sure real deployments override them through environment-specific configs or secrets management; otherwise the uploader will fail at runtime.

No code change required if this is intentional, but please confirm.
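
One detail worth noting: since ArweaveConfig declares Confirmations as uint64, an empty JSON string fails to decode rather than silently defaulting to zero. A standalone check:

package main

import (
	"encoding/json"
	"fmt"
)

type arweaveConfig struct {
	Confirmations uint64 `json:"confirmations"`
}

func main() {
	var c arweaveConfig
	// "" is a JSON string, not a number, so Unmarshal returns an
	// *json.UnmarshalTypeError instead of leaving Confirmations at 0.
	err := json.Unmarshal([]byte(`{"confirmations": ""}`), &c)
	fmt.Println(err)
}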

rollup/internal/config/l2.go (1)

54-57: Nil-check requirement

ArweaveConfig is a pointer – callers must nil-check before dereferencing. Make sure the blob-uploader constructor guards against nil to avoid panics when only S3 is configured.
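
An illustrative guard (the constructor name is an assumption, not necessarily the PR's actual API):

// newArweaveUploaderIfConfigured returns (nil, nil) when Arweave is not
// configured, so callers must nil-check before using the uploader.
func newArweaveUploaderIfConfigured(ctx context.Context, cfg *config.ArweaveConfig) (*ArweaveUploader, error) {
	if cfg == nil {
		return nil, nil // Arweave disabled; skip Arweave upload paths.
	}
	return NewArweaveUploader(ctx, cfg)
}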

rollup/internal/controller/blob_uploader/blob_uploader.go (2)

151-198: Well-structured unified upload logic.

The method effectively handles uploads to both S3 and Arweave platforms with appropriate error handling and status updates.


279-330: Robust batch retrieval logic with proper parent validation.

The method correctly handles batch reverts by checking parent batch upload status and ensures only committed batches are considered for upload.

Comment on lines +14 to +15
DROP INDEX IF EXISTS idx_blob_upload_tx_hash;


⚠️ Potential issue

Mismatch between up/down migration breaks rollback

UP creates the index as idx_blob_upload_transaction_hash, but the DOWN section tries to drop idx_blob_upload_tx_hash. A rollback will therefore error out, leaving the column and index in place.

-DROP INDEX IF EXISTS idx_blob_upload_tx_hash;
+DROP INDEX IF EXISTS idx_blob_upload_transaction_hash;

Comment on lines +67 to +73
// ArweaveConfig loads arweave_uploader configuration items.
type ArweaveConfig struct {
	Endpoint       string `json:"endpoint"`
	PrivateKeyPath string `json:"private_key_path"`
	TxTag          string `json:"tx_tag"`
	Confirmations  uint64 `json:"confirmations"`
}

🛠️ Refactor suggestion

Confirmations width mismatch

Confirmations is uint64, but ArweaveUploader casts it to int. On 32-bit builds this silently truncates. Consider storing it as int or sanitising the value before the cast.
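
A guarded conversion sketch (math.MaxInt requires Go 1.17+; the helper name is illustrative):

package config

import (
	"fmt"
	"math"
)

// confirmationsAsInt rejects values that would overflow int on 32-bit builds.
func confirmationsAsInt(c uint64) (int, error) {
	if c > uint64(math.MaxInt) {
		return 0, fmt.Errorf("confirmations %d exceeds platform int range", c)
	}
	return int(c), nil
}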


Comment on lines +90 to +156
func (u *ArweaveUploader) checkPendingBlobUploads(ctx context.Context) {
	pendingBlobUploads, err := u.blobUploadOrm.GetPendingBlobUploads(ctx, 100)
	if err != nil {
		log.Error("failed to load pending blob uploads", "err", err)
		return
	}

	for _, blobUpload := range pendingBlobUploads {
		status, err := u.client.GetTransactionStatus(blobUpload.TxHash)
		if err != nil {
			if err == schema.ErrPendingTx {
				if time.Since(blobUpload.UpdatedAt) > 10*time.Minute {
					// transaction pending too long, we need to bump the gas price
					if u.onReuploadNeeded != nil {
						// get batch from database
						dbBatch, err := u.batchOrm.GetBatchByIndex(u.ctx, blobUpload.BatchIndex)
						if err != nil {
							log.Error("failed to get batch by index %d: %w", blobUpload.BatchIndex, err)
							continue
						}
						if dbBatch.Hash != blobUpload.BatchHash {
							log.Error("found unmatched batch hash when reupload blob data", "batch index", blobUpload.BatchIndex, "dbBatch hash", dbBatch.Hash, "blobUpload batch hash", blobUpload.BatchHash, "err", err)
							continue
						}
						if _, err := u.onReuploadNeeded(dbBatch, types.BlobStoragePlatformArweave, 50); err != nil {
							log.Error("failed to reupload blob", "batch index", blobUpload.BatchIndex, "batch hash", blobUpload.BatchHash, "err", err)
						} else {
							log.Info("successfully reuploaded blob with higher gas price", "batch index", blobUpload.BatchIndex, "batch hash", blobUpload.BatchHash)
						}
					}
				}
				log.Debug("got pending arweave transaction, waiting for confirmation")
			}
			if err == schema.ErrNotFound || err == schema.ErrInvalidId {
				// resend transaction if it's dropped
				if u.onReuploadNeeded != nil {
					// get batch from database
					dbBatch, err := u.batchOrm.GetBatchByIndex(u.ctx, blobUpload.BatchIndex)
					if err != nil {
						log.Error("failed to get batch by index %d: %w", blobUpload.BatchIndex, err)
						continue
					}
					if dbBatch.Hash != blobUpload.BatchHash {
						log.Error("found unmatched batch hash when reupload blob data", "batch index", blobUpload.BatchIndex, "dbBatch hash", dbBatch.Hash, "blobUpload batch hash", blobUpload.BatchHash, "err", err)
						continue
					}
					if _, err := u.onReuploadNeeded(dbBatch, types.BlobStoragePlatformArweave, 0); err != nil {
						log.Error("failed to reupload blob", "batch index", blobUpload.BatchIndex, "batch hash", blobUpload.BatchHash, "err", err)
					} else {
						log.Info("successfully reuploaded blob", "batch index", blobUpload.BatchIndex, "batch hash", blobUpload.BatchHash)
					}
				}
			}
			log.Error("failed to get arweave transaction status", "err", err)
			return
		}

		if status.NumberOfConfirmations >= int(u.confirmations) {
			if err := u.blobUploadOrm.UpdateUploadStatus(u.ctx, blobUpload.TxHash, types.BlobUploadStatusUploaded); err != nil {
				log.Warn("UpdateUploadStatus failed", "transaction hash", blobUpload.TxHash, "upload status", types.BlobUploadStatusUploaded, "err", err)
				return
			}
		}
	}
}

🛠️ Refactor suggestion

Error-handling aborts entire scan

checkPendingBlobUploads returns immediately on the first error (return at line 144/151).
This aborts processing of the remaining pending uploads and can stall progress if a single record is malformed. Prefer logging the error and continuing the loop:

-            log.Error("failed to get arweave transaction status", "err", err)
-            return
+            log.Error("failed to get arweave transaction status", "err", err)
+            continue

Comment on lines +45 to 47
db = db.Where("platform = ?", platform, types.BlobUploadStatusUploaded)
db = db.Where("status = ? OR status = ?", types.BlobUploadStatusUploaded, types.BatchProofsStatusPending)
db = db.Order("batch_index DESC")

⚠️ Potential issue

Wrong status constant

The next line compares against types.BatchProofsStatusPending; should be types.BlobUploadStatusPending. Using the batch-proof constant will miss pending uploads and break scheduler logic.


⚠️ Potential issue

SQL placeholder mismatch breaks query

Where("platform = ?", platform, types.BlobUploadStatusUploaded) supplies two args for one placeholder – GORM silently ignores the second, so the status filter never applies. Drop the extra arg or add a placeholder:

-db = db.Where("platform = ?", platform, types.BlobUploadStatusUploaded)
+db = db.Where("platform = ?", platform)
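
Combined with the status-constant fix above, the filter would read:

db = db.Where("platform = ?", platform)
db = db.Where("status = ? OR status = ?", types.BlobUploadStatusUploaded, types.BlobUploadStatusPending)
db = db.Order("batch_index DESC")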

Comment on lines +97 to +99
db = db.Where("status = ?", types.BatchProofsStatusPending)
db = db.Order("batch_index asc")
db = db.Limit(limit)

⚠️ Potential issue

Pending upload query uses batch-proof status

Same constant mix-up here – replace with types.BlobUploadStatusPending.
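
As a diff:

-db = db.Where("status = ?", types.BatchProofsStatusPending)
+db = db.Where("status = ?", types.BlobUploadStatusPending)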


Comment on lines +155 to +160
db = db.Where("tx_hash", txHash)

updateFields := map[string]interface{}{
"status": int16(status),
}
if err := db.Updates(updateFields).Error; err != nil {

⚠️ Potential issue

Missing equality operator in WHERE clause

db.Where("tx_hash", txHash) lacks = ?, leading GORM to interpret tx_hash as a boolean column. Use:

-db = db.Where("tx_hash", txHash)
+db = db.Where("tx_hash = ?", txHash)

key := common.BytesToHash(versionedBlobHash[:]).Hex()
err = b.s3Uploader.UploadData(b.ctx, blob[:], key)
// get un-uploaded batches from database in ascending order by their index.
dbBatch, err := b.GetFirstUnuploadedBatchByPlatform(b.ctx, b.cfg.StartBatch, types.BlobStoragePlatformS3)

⚠️ Potential issue

Fix incorrect platform type in Arweave uploader.

The method is fetching batches for S3 platform instead of Arweave platform. This appears to be a copy-paste error.

-	dbBatch, err := b.GetFirstUnuploadedBatchByPlatform(b.ctx, b.cfg.StartBatch, types.BlobStoragePlatformS3)
+	dbBatch, err := b.GetFirstUnuploadedBatchByPlatform(b.ctx, b.cfg.StartBatch, types.BlobStoragePlatformArweave)

Comment on lines +230 to +237
// temporarily add this check because the codec_version field for chunk was added later.
if codecVersion == encoding.CodecV0 && dbBatch.Index < 73224 {
	codecVersion = encoding.CodecV1
}
if codecVersion == encoding.CodecV0 && dbBatch.Index >= 73224 { // for sepolia
	codecVersion = encoding.CodecV2
}


🛠️ Refactor suggestion

Extract magic numbers to named constants.

The hardcoded batch index 73224 should be extracted to a named constant for better maintainability and clarity.

+const (
+	// sepoliaBatchIndexV2Transition represents the batch index on Sepolia network where CodecV2 was introduced
+	sepoliaBatchIndexV2Transition = 73224
+)
+
	// temporarily add this check because the codec_version field for chunk was added later.
	if codecVersion == encoding.CodecV0 && dbBatch.Index < 73224 {
		codecVersion = encoding.CodecV1
	}
-	if codecVersion == encoding.CodecV0 && dbBatch.Index >= 73224 { // for sepolia
+	if codecVersion == encoding.CodecV0 && dbBatch.Index >= sepoliaBatchIndexV2Transition { // for sepolia
		codecVersion = encoding.CodecV2
	}

Labels: none yet
Projects: none yet
1 participant