feat(common) Operation nonces #147

Merged
merged 11 commits into main from operation-nonces on Oct 30, 2024

Conversation

Contributor

@jns-ps jns-ps commented Oct 24, 2024

Summary by CodeRabbit

  • New Features

    • Introduced a Digest struct for encapsulating a 32-byte array with hashing functionalities.
    • Added a Hasher struct for SHA256 hashing, supporting serialization and deserialization.
    • Implemented an OpsBuilder for streamlined operation creation in testing.
    • Added a last_hash method to Hashchain for retrieving the last hash.
    • Enhanced Operation structures to include a prev_hash field for maintaining operation integrity.
  • Bug Fixes

    • Enhanced error handling in operation creation processes.
  • Chores

    • Updated import paths for Digest across various modules to reflect structural changes.
    • Added a dependency for testing in Cargo.toml.
  • Style

    • Updated formatting configuration in rustfmt.toml to set maximum line and chain widths.

Contributor

coderabbitai bot commented Oct 24, 2024

📥 Commits

Files that changed between commits dcc8e03 and 4ca07bd.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (22)
  • crates/common/src/digest.rs (1 hunks)
  • crates/common/src/hashchain.rs (5 hunks)
  • crates/common/src/hasher.rs (1 hunks)
  • crates/common/src/keys.rs (1 hunks)
  • crates/common/src/lib.rs (1 hunks)
  • crates/common/src/operation.rs (16 hunks)
  • crates/common/src/test_ops.rs (1 hunks)
  • crates/common/src/test_utils.rs (3 hunks)
  • crates/common/src/tree.rs (3 hunks)
  • crates/da/src/lib.rs (1 hunks)
  • crates/node_types/lightclient/src/lightclient.rs (1 hunks)
  • crates/node_types/prover/src/prover/mod.rs (1 hunks)
  • crates/node_types/prover/src/prover/tests.rs (4 hunks)
  • crates/node_types/prover/src/webserver.rs (1 hunks)
  • crates/storage/src/database.rs (1 hunks)
  • crates/storage/src/inmemory.rs (1 hunks)
  • crates/storage/src/redis.rs (1 hunks)
  • crates/storage/src/rocksdb.rs (1 hunks)
  • crates/tests/Cargo.toml (1 hunks)
  • crates/tests/src/lib.rs (2 hunks)
  • crates/zk/sp1/src/main.rs (1 hunks)
  • rustfmt.toml (1 hunks)

Walkthrough

The pull request introduces a new Digest struct and a Hasher struct, centralizing how hashing and digests are handled across the codebase. The Digest struct encapsulates a 32-byte array and provides various conversion methods, while the Hasher struct wraps the sha2::Sha256 hasher and implements serialization and deserialization. Hashchain, Operation, and the various database implementations are updated to integrate these new types; operations additionally carry a prev_hash field that links each operation to the previous one.
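For orientation, here is a minimal sketch of what the two new types could look like, based only on the descriptions in this PR. Method names such as new, hash, zero, and to_bytes are taken from the review comments below; the exact signatures are assumptions, and the real Hasher also implements the SimpleHasher trait, which is omitted here.

use serde::{Deserialize, Serialize};
use sha2::{Digest as _, Sha256};

/// 32-byte digest wrapper, as described in the walkthrough.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct Digest([u8; 32]);

impl Digest {
    pub fn new(bytes: [u8; 32]) -> Self {
        Digest(bytes)
    }

    /// Hash arbitrary data into a Digest using SHA-256.
    pub fn hash(data: impl AsRef<[u8]>) -> Self {
        Digest(Sha256::digest(data.as_ref()).into())
    }

    /// All-zero digest, used as the prev_hash of a chain-starting operation.
    pub fn zero() -> Self {
        Digest([0u8; 32])
    }

    pub fn to_bytes(self) -> [u8; 32] {
        self.0
    }
}

/// Thin wrapper around sha2::Sha256, as described in the walkthrough.
#[derive(Default, Clone)]
pub struct Hasher(Sha256);

impl Hasher {
    pub fn update(&mut self, data: &[u8]) {
        // Delegate to the inner SHA-256 state.
        self.0.update(data);
    }

    pub fn finalize(self) -> Digest {
        Digest(self.0.finalize().into())
    }
}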

Changes

  • crates/common/src/digest.rs: Introduced Digest struct with methods for hashing, conversion, and serialization. Implemented traits like Serialize, Deserialize, and Copy.
  • crates/common/src/hashchain.rs: Updated create_account and register_service methods to include prev_hash: Digest. Refactored push method to use last_hash. Added last_hash method to retrieve last entry hash.
  • crates/common/src/hasher.rs: Introduced Hasher struct wrapping sha2::Sha256 with methods for hashing and serialization. Implemented SimpleHasher trait.
  • crates/common/src/keys.rs: Modified import statement for Digest from crate::tree::Digest to crate::digest::Digest.
  • crates/common/src/lib.rs: Added new public modules: digest, hasher, and conditionally included test_ops.
  • crates/common/src/operation.rs: Updated structures to include prev_hash: Digest. Modified constructors to initialize prev_hash.
  • crates/common/src/test_ops.rs: Introduced UncommittedOperation and OpsBuilder structs for managing operations in a hashchain context. Provided various methods for service and account management.
  • crates/common/src/test_utils.rs: Updated methods to include prev_hash parameter for enhanced operation creation.
  • crates/common/src/tree.rs: Removed Hasher struct and its implementations. Updated Operation::CreateAccount to include prev_hash.
  • crates/da/src/lib.rs: Changed import of Digest from prism_common::tree to prism_common::digest.
  • crates/node_types/lightclient/src/lightclient.rs: Updated import for Digest from prism_common::tree::Digest to prism_common::digest::Digest.
  • crates/node_types/prover/src/prover/mod.rs: Reorganized import statements for clarity. No changes to Config or Prover functionality.
  • crates/node_types/prover/src/prover/tests.rs: Transitioned to using OpsBuilder for creating operations in tests, improving readability.
  • crates/node_types/prover/src/webserver.rs: Organized import statements without altering functionality.
  • crates/storage/src/database.rs: Updated import for Digest from prism_common::tree::Digest to prism_common::digest::Digest.
  • crates/storage/src/inmemory.rs: Changed import for Digest from prism_common::tree::Digest to prism_common::digest::Digest.
  • crates/storage/src/redis.rs: Updated import for Digest from prism_common::tree::Digest to prism_common::digest::Digest and modified methods to use the new Digest type.
  • crates/storage/src/rocksdb.rs: Changed import for Digest from prism_common::tree::Digest to prism_common::digest::Digest and updated methods accordingly.
  • crates/tests/Cargo.toml: Added dependency jmt = { workspace = true }.
  • crates/tests/src/lib.rs: Enhanced error handling in add_key function and updated imports for Digest and Hasher.
  • crates/zk/sp1/src/main.rs: Modified import for Digest from prism_common::tree to prism_common::digest.
  • rustfmt.toml: Added settings for max_width and chain_width, both set to 100.

Possibly related PRs

  • feat(common) Allow adding signed data #141: The changes in this PR involve the addition of a new AddData operation that utilizes the Digest type, which is directly related to the new Digest struct and its functionalities introduced in the main PR.

Suggested reviewers

  • distractedm1nd
  • sebasti810

Poem

In the code where digests play,
A rabbit hops and finds the way.
With hashes strong and structs anew,
We build a path, both safe and true.
So let us cheer, with joy and glee,
For every change, a victory! 🐇✨



Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🧹 Outside diff range and nitpick comments (15)
crates/storage/src/rocksdb.rs (1)

Based on the implementation details found in both the Digest type and its usage in rocksdb.rs, I can now provide the final response:

The current implementation can be improved using Digest's built-in methods

The Digest type provides several useful methods that can make the code more maintainable:

  • Use Digest::new([u8; 32]) instead of Digest(value) for creating a new digest
  • Use commitment.to_bytes() instead of commitment.0 when storing the digest

Here's how the methods could be used:

  • In get_commitment: Replace Digest(value) with Digest::new(value)
  • In set_commitment: Replace commitment.0 with commitment.to_bytes()
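As a minimal, hedged illustration of those two call patterns (assuming Digest::new and to_bytes have the shapes implied above):

use prism_common::digest::Digest;

// Construct and read back a digest through its public API instead of the tuple field.
fn commitment_roundtrip(raw: [u8; 32]) -> [u8; 32] {
    let commitment = Digest::new(raw); // rather than Digest(raw)
    commitment.to_bytes()              // rather than commitment.0
}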
🔗 Analysis chain

Line range hint 31-42: Consider using Digest's built-in methods if available.

The commitment methods currently handle raw byte conversion manually. If the new Digest implementation in prism_common::digest provides methods for serialization/deserialization or database encoding/decoding, consider using those for better maintainability and consistency.

Let's check if the new Digest implementation offers such methods:

Also applies to: 44-49


Let me analyze the code further to understand the current usage context better before making the final recommendation.


Let me try a different pattern to get the context of how the Digest is being used in the storage implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Look for available Digest methods that could improve the implementation
# Test: Search for Digest implementation
ast-grep --pattern 'impl Digest {
  $$$
}'

Length of output: 1726


Script:

#!/bin/bash
# Look for the complete implementation of the function containing the reviewed code
ast-grep --pattern 'fn $_(self, epoch: $_) -> Result<Digest$_> {
  $$$
}'

Length of output: 75


Script:

#!/bin/bash
# Look for the complete implementation in rocksdb.rs
rg -A 10 "fn.*epoch.*Digest" crates/storage/src/rocksdb.rs

Length of output: 925

crates/node_types/prover/src/prover/tests.rs (1)

87-98: Consider adding more detailed test documentation.

While the test logic is clear, it would be helpful to document:

  1. Why these specific operations were chosen to demonstrate the invalid case
  2. What makes these operations valid individually but invalid when processed together
  3. The significance of the key index in this scenario

Add a detailed comment block explaining the test case:

     let mut ops_builder = OpsBuilder::new();
 
     let new_key_1 = create_mock_signing_key();
 
+    // This test demonstrates a key revocation edge case:
+    // 1. Add a new key (becomes index 1)
+    // 2. Revoke that key
+    // 3. Try to use the revoked key (index 1) to add another key
+    // The last operation fails because it uses a revoked key
     let operations = vec![
         ops_builder.register_service_with_random_key("service_id").ex(),
         ops_builder.create_account_with_random_key("account_id", "service_id").ex(),
crates/storage/src/redis.rs (2)

Line range hint 156-167: Consider improving error handling and type conversion.

The current implementation could be more robust in handling potential error cases during string-to-bytes conversion.

Consider this safer implementation:

     fn get_commitment(&self, epoch: &u64) -> Result<Digest> {
         let mut con = self.lock_connection()?;
         let redis_value = con
             .get::<&str, String>(&format!("commitments:epoch_{}", epoch))
             .map_err(|_| {
                 DatabaseError::NotFoundError(format!("commitment from epoch_{}", epoch))
             })?;
 
-        // storing hashes into
-        let value = redis_value.trim_matches('"').as_bytes();
-        Ok(Digest(value.try_into().unwrap()))
+        let value = redis_value.trim_matches('"').as_bytes();
+        value.try_into()
+            .map(Digest)
+            .map_err(|_| anyhow!(DatabaseError::DeserializationError(
+                format!("Invalid digest length for epoch {}", epoch)
+            )))

Based on the output, I can now generate the final response since I have found that the Digest struct already has a proper conversion method to_bytes() that can be used instead of directly accessing the internal field.

Use to_bytes() method instead of accessing Digest's internal field

The Digest struct already provides a proper conversion method to_bytes() that returns [u8; 32]. Here's the corrected suggestion:

     fn set_commitment(&self, epoch: &u64, commitment: &Digest) -> Result<()> {
         let mut con = self.lock_connection()?;
-        con.set::<&String, &[u8; 32], ()>(&format!("commitments:epoch_{}", epoch), &commitment.0)
+        con.set::<&String, &[u8; 32], ()>(
+            &format!("commitments:epoch_{}", epoch),
+            &commitment.to_bytes()
+        )
             .map_err(|_| {
                 anyhow!(DatabaseError::WriteError(format!(
                     "commitment for epoch: {}",
                     epoch
                 )))
             })
     }
🔗 Analysis chain

Line range hint 209-218: Avoid direct access to Digest's internal representation.

The current implementation directly accesses Digest's internal field (.0), which creates tight coupling. Consider using a proper conversion method instead.

Consider adding a method to convert Digest to bytes:

     fn set_commitment(&self, epoch: &u64, commitment: &Digest) -> Result<()> {
         let mut con = self.lock_connection()?;
-        con.set::<&String, &[u8; 32], ()>(&format!("commitments:epoch_{}", epoch), &commitment.0)
+        con.set::<&String, &[u8], ()>(
+            &format!("commitments:epoch_{}", epoch),
+            commitment.as_bytes()
+        )
             .map_err(|_| {
                 anyhow!(DatabaseError::WriteError(format!(
                     "commitment for epoch: {}",
                     epoch
                 )))
             })
     }

This assumes the existence of an as_bytes() method in the new Digest implementation. If it doesn't exist, consider adding it to the Digest struct.

Let's verify if the Digest struct has proper conversion methods:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for Digest implementation to verify available methods
ast-grep --pattern 'impl Digest {
  $$$
}'

Length of output: 1726

crates/common/src/digest.rs (4)

46-57: Simplify error message construction in TryFrom implementation

In the error handling of the conversions in lines 46-57, you're using anyhow!(format!(...)), which can be simplified by passing the formatted string directly to anyhow!. This reduces unnecessary nesting of format! and improves readability.

You may apply the following code changes:

-                    u64::from_le_bytes(byte_array[0..8].try_into().map_err(|_| {
-                        anyhow!(format!("slice to array: [0..8] for digest: {value:?}"))
-                    })?),
+                    u64::from_le_bytes(byte_array[0..8].try_into().map_err(|_| {
+                        anyhow!("slice to array: [0..8] for digest: {:?}", value)
+                    })?),

Apply similar changes to the other occurrences in lines 49-51, 52-54, and 55-57:

-                    u64::from_le_bytes(byte_array[8..16].try_into().map_err(|_| {
-                        anyhow!(format!("slice to array: [8..16] for digest: {value:?}"))
-                    })?),
+                    u64::from_le_bytes(byte_array[8..16].try_into().map_err(|_| {
+                        anyhow!("slice to array: [8..16] for digest: {:?}", value)
+                    })?),

-                    u64::from_le_bytes(byte_array[16..24].try_into().map_err(|_| {
-                        anyhow!(format!("slice to array: [16..24] for digest: {value:?}"))
-                    })?),
+                    u64::from_le_bytes(byte_array[16..24].try_into().map_err(|_| {
+                        anyhow!("slice to array: [16..24] for digest: {:?}", value)
+                    })?),

-                    u64::from_le_bytes(byte_array[24..32].try_into().map_err(|_| {
-                        anyhow!(format!("slice to array: [24..32] for digest: {value:?}"))
-                    })?),
+                    u64::from_le_bytes(byte_array[24..32].try_into().map_err(|_| {
+                        anyhow!("slice to array: [24..32] for digest: {:?}", value)
+                    })?),

42-43: Add comment explaining the rationale for reversing the byte array

Reversing the byte array before converting to u64 segments (line 42) may not be immediately clear to readers. Adding a comment to explain the reason for this step can improve code readability and maintainability.


35-35: Mark temporary implementation with TODO for future updates

The comment on line 35 indicates that this implementation is temporary and subject to change once the curve choice is made. Consider adding a TODO comment or creating a tracking issue to ensure this is revisited in the future.

Would you like me to create a GitHub issue to track this task?


93-96: Simplify error message construction in from_hex method

In line 96, you're using anyhow!(format!(...)), which can be simplified by passing the formatted string directly to anyhow!. This helps to reduce unnecessary nesting and improves code clarity.

Apply this change:

-            .map_err(|e| anyhow!(format!("Invalid Format: {e}")))?;
+            .map_err(|e| anyhow!("Invalid Format: {}", e))?;
crates/common/src/test_ops.rs (1)

219-219: Correct the error message in expect

The error message in the expect call refers to "Creating account operation" instead of "Creating revoke key operation." Updating this message will improve clarity during debugging.

Apply this diff to correct the message:

-                    .expect("Creating account operation should work");
+                    .expect("Creating revoke key operation should work");
crates/common/src/operation.rs (2)

88-89: Add Documentation for prev_hash Field

Consider adding a comment explaining the purpose of the prev_hash field in RegisterServiceArgs for better code clarity.

Apply this diff to include documentation:

 pub struct RegisterServiceArgs {
     /// Service ID
     pub id: String,
     /// Challenge gate for access control
     pub creation_gate: ServiceChallenge,
+    /// The hash of the previous operation
     pub prev_hash: Digest,
 }

125-126: Include Comments for New Fields in KeyOperationArgs

Adding comments for the new prev_hash field enhances code readability and maintainability.

Apply this diff to add the missing documentation:

 pub struct KeyOperationArgs {
     /// Account ID
     pub id: String,
     /// Public key being added or revoked
     pub value: VerifyingKey,
+    /// The hash of the previous operation
     pub prev_hash: Digest,
     /// Signature to authorize the action
     pub signature: HashchainSignatureBundle,
 }
crates/common/src/tree.rs (4)

Line range hint 263-302: Consider more informative error messages for better debugging.

In the error handling when retrieving the service_hashchain:

let Found(service_hashchain, _) = self.get(service_key_hash)? else {
    bail!("Failed to get hashchain for service ID {}", service_id);
};

Provide more context in the error message to assist with debugging. For example, specify whether the service is unregistered or if there's a database access issue.

Apply this diff to enhance the error message:

- bail!("Failed to get hashchain for service ID {}", service_id);
+ bail!("Service hashchain not found for service ID '{}'. Ensure the service is registered before creating an account.", service_id);

9-9: Verify that both serde imports are necessary.

You have imported serde::{Deserialize, Serialize}. Verify whether both traits are required for this file. If only one is needed, import that specific trait to keep the code clean.


Line range hint 230-243: Update KeyDirectoryTree to accommodate the removal of Hasher.

Since Hasher is removed, the type parameter in JellyfishMerkleTree<Arc<S>, Hasher> needs to be updated. Determine the appropriate hasher to use or adjust the JellyfishMerkleTree implementation accordingly.


Line range hint 338-344: Handle the case when old_value is None in update.

In the update function:

if old_value.is_none() {
    bail!("Key does not exist");
}

Consider providing more detailed information or handling the case differently if appropriate.
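A hedged sketch of a more descriptive failure path; the helper name and the string key parameter are placeholders, not the actual tree.rs signature:

use anyhow::{bail, Result};

// Fail with enough context to identify which key was missing.
fn ensure_existing_value(old_value: Option<Vec<u8>>, key_hash: &str) -> Result<Vec<u8>> {
    match old_value {
        Some(value) => Ok(value),
        None => bail!("Cannot update key {key_hash}: no existing entry in the tree"),
    }
}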

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed between commits fa6fd7b and ca8af94.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (22)
  • crates/common/src/digest.rs (1 hunks)
  • crates/common/src/hashchain.rs (4 hunks)
  • crates/common/src/hasher.rs (1 hunks)
  • crates/common/src/keys.rs (1 hunks)
  • crates/common/src/lib.rs (1 hunks)
  • crates/common/src/operation.rs (16 hunks)
  • crates/common/src/test_ops.rs (1 hunks)
  • crates/common/src/test_utils.rs (3 hunks)
  • crates/common/src/tree.rs (3 hunks)
  • crates/da/src/lib.rs (1 hunks)
  • crates/node_types/lightclient/src/lightclient.rs (1 hunks)
  • crates/node_types/prover/src/prover/mod.rs (1 hunks)
  • crates/node_types/prover/src/prover/tests.rs (5 hunks)
  • crates/node_types/prover/src/webserver.rs (1 hunks)
  • crates/storage/src/database.rs (1 hunks)
  • crates/storage/src/inmemory.rs (1 hunks)
  • crates/storage/src/redis.rs (1 hunks)
  • crates/storage/src/rocksdb.rs (1 hunks)
  • crates/tests/Cargo.toml (1 hunks)
  • crates/tests/src/lib.rs (2 hunks)
  • crates/zk/sp1/src/main.rs (1 hunks)
  • rustfmt.toml (1 hunks)
✅ Files skipped from review due to trivial changes (8)
  • crates/common/src/keys.rs
  • crates/da/src/lib.rs
  • crates/node_types/lightclient/src/lightclient.rs
  • crates/node_types/prover/src/prover/mod.rs
  • crates/storage/src/database.rs
  • crates/storage/src/inmemory.rs
  • crates/zk/sp1/src/main.rs
  • rustfmt.toml
🔇 Additional comments (38)
crates/common/src/lib.rs (2)

1-1: LGTM! Well-organized module structure.

The new digest and hasher modules are appropriately placed and follow Rust's module organization conventions. Their public visibility is correct as they provide core functionality for hash operations.

Also applies to: 3-3


11-12: Verify test coverage for the new test_ops module.

The new test_ops module is correctly feature-gated with "test_utils". Let's verify it has corresponding tests.

✅ Verification successful

Let me gather more information about the test_ops module to better understand its purpose and test coverage requirements.


Let me check the actual test file that uses the test_ops module to understand how it's being tested.


I can now generate the final response based on the gathered information.

The test_ops module is well-tested through integration tests

The test_ops module is a test utility module that provides an OpsBuilder for creating test operations. It is actively used in multiple test cases in crates/node_types/prover/src/prover/tests.rs, including:

  • test_validate_and_queue_update
  • test_process_operation
  • test_execute_block_with_invalid_tx
  • test_execute_block
  • test_finalize_new_epoch
  • test_restart_sync_from_scratch
  • test_load_persisted_state

Each test exercises different aspects of the OpsBuilder, effectively testing the functionality of the test_ops module through its usage in these integration tests.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for test coverage of the test_ops module

# Look for test modules or functions related to test_ops
rg -g '*.rs' -l 'mod test(_)?ops.*tests' || echo "No test module found"

# Look for actual test functions
rg -g '*.rs' '#\[test\].*fn.*test_ops' -A 2 || echo "No test functions found"

Length of output: 196


Script:

#!/bin/bash
# Check the content of test_ops module to understand its functionality
fd test_ops.rs | xargs cat || echo "test_ops.rs not found"

# Look for any tests in the same directory
fd -e rs -p '.*/tests/.*' --exec rg -l 'test_ops' {} || echo "No references in test files"

# Check if test_ops is used in integration tests
rg -g '*.rs' 'use.*test_ops' -A 1 || echo "No usages found"

Length of output: 9460


Script:

#!/bin/bash
# Check the content of the test file that uses test_ops
rg -g '*.rs' 'use.*test_ops' -B 5 -A 20

# Look for actual test functions in that file
cat crates/node_types/prover/src/prover/tests.rs || echo "Test file not found"

Length of output: 9361

crates/tests/Cargo.toml (1)

20-20: LGTM! Clean addition of JMT dependency.

The addition of the jmt dependency with workspace inheritance is well-structured and aligns with the PR's objectives for operation nonces and hashing functionality.

crates/common/src/hasher.rs (4)

1-5: LGTM! Clean struct design with appropriate trait derivations.

The struct is well-designed as a thin wrapper around sha2::Sha256 with necessary trait implementations.


30-54: LGTM! Correct implementation of the visitor pattern.

The deserialization implementation is well-structured and consistent with the serialization approach.


12-14: ⚠️ Potential issue

Fix infinite recursion in update method.

The current implementation calls itself recursively, which will cause a stack overflow.

Fix by calling the inner hasher's update method:

pub fn update(&mut self, data: &[u8]) {
-    self.update(data);
+    self.0.update(data);
}

Likely invalid or redundant comment.


21-28: Verify serialization requirements for Hasher state.

The current implementation serializes the Hasher as an empty tuple struct, discarding its internal state. This means that deserializing and continuing to use the hasher will effectively reset its state.

Please confirm if this is the intended behavior. If the hasher's state needs to be preserved across serialization:

✅ Verification successful

Let me gather more information about the Hasher usage and implementation to better understand the serialization requirements.


Let me gather one more piece of information about the usage patterns of this hasher.


Based on the gathered information, I can now provide a final response:

Serialization behavior for Hasher is correct and intentional

The current serialization implementation that discards the internal state is the correct approach because:

  1. The Hasher is used primarily for creating hashes in a Merkle tree implementation (JellyfishMerkleTree) and computing key hashes
  2. The hasher is always used in a pattern where it's created, immediately used to hash data, and finalized
  3. There are no instances in the codebase where a hasher's intermediate state needs to be preserved
  4. The matching Deserialize implementation also creates a fresh default hasher, maintaining consistency with the serialization behavior

The implementation follows a common pattern for cryptographic hashers where preserving intermediate state across serialization boundaries is typically avoided for security and simplicity.
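For reference, a minimal sketch of the stateless round-trip described above, assuming Hasher is a tuple struct around sha2::Sha256 that implements Default; this mirrors the review's description, not the exact source:

use std::fmt;

use serde::de::{self, Visitor};
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use sha2::Sha256;

#[derive(Default, Clone)]
pub struct Hasher(Sha256);

impl Serialize for Hasher {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // The internal SHA-256 state is intentionally discarded.
        serializer.serialize_unit_struct("Hasher")
    }
}

impl<'de> Deserialize<'de> for Hasher {
    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        struct HasherVisitor;

        impl<'de> Visitor<'de> for HasherVisitor {
            type Value = Hasher;

            fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result {
                f.write_str("a unit struct Hasher")
            }

            fn visit_unit<E: de::Error>(self) -> Result<Hasher, E> {
                // Deserialization always yields a fresh, default hasher.
                Ok(Hasher::default())
            }
        }

        deserializer.deserialize_unit_struct("Hasher", HasherVisitor)
    }
}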

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any tests or usage patterns that might rely on hasher state preservation
rg -l "serde|serialize|deserialize" | xargs rg "Hasher.*update.*finalize"

Length of output: 75


Script:

#!/bin/bash
# Look for the Hasher implementation and its usage patterns
ast-grep --pattern 'impl $_ for Hasher { $$$ }'

# Also check for any tests involving Hasher
rg -A 5 "Hasher" "*test*.rs"

# Look for any documentation comments on Hasher
rg -B 2 "struct Hasher"

Length of output: 1399


Script:

#!/bin/bash
# Look for any usage patterns of Hasher, particularly around serialization
rg -B 2 -A 2 "Hasher" --type rust

# Check if there's any deserialization implementation
ast-grep --pattern 'impl Deserialize for Hasher { $$$ }'

Length of output: 11440

crates/storage/src/rocksdb.rs (1)

7-7: LGTM! Verify digest compatibility.

The import path change to prism_common::digest::Digest reflects a better organization of types.

Let's verify the new Digest implementation maintains the same interface:

✅ Verification successful

Let me gather more information about the Digest implementation to ensure complete verification.


Based on the gathered information, I can now generate the final response:

Import change is safe and maintains compatibility

The verification confirms that:

  • The new Digest struct in prism_common::digest maintains the same core interface with [u8; 32] internal representation
  • All storage implementations (RocksDB, Redis, InMemory) consistently use the new import path
  • The implementation provides the same functionality plus additional methods like hash(), zero(), and format conversions
  • The type maintains compatibility with RootHash through From implementations
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the Digest type's interface and usage
# Test 1: Check if Digest is used consistently across the codebase
rg --type rust "use.*digest::Digest"

# Test 2: Look for any potential breaking changes in Digest usage
ast-grep --pattern 'Digest($_)'

Length of output: 900


Script:

#!/bin/bash
# Check the implementation of Digest in the new location
rg -A 10 "^pub struct Digest" crates/common/src/digest.rs

# Check if there are any breaking changes in the Digest API
rg -A 10 "impl.*Digest" crates/common/src/digest.rs

# Look for any test cases that verify Digest behavior
rg -B 2 -A 5 "#\[test\].*Digest" crates/common/src/digest.rs

Length of output: 2200

crates/tests/src/lib.rs (5)

6-14: LGTM! Import changes are well-organized

The new imports for bail, Digest, and Hasher are properly organized and align with the PR's objective of implementing operation nonces.


35-36: LGTM! Robust hashing implementation

The new hashing logic using Digest::hash and KeyHash::with<Hasher> provides a more structured and type-safe approach.


38-44: LGTM! Improved error handling

The use of pattern matching with clear error messages improves the code's robustness and debuggability.


Line range hint 67-186: LGTM! Comprehensive integration test

The test provides thorough coverage of the new functionality:

  • Tests both account creation and key updates
  • Includes error handling for failed operations
  • Uses realistic scenarios with multiple concurrent operations

51-54: Verify the hardcoded key index

The key index is hardcoded to 0 without explanation. Consider documenting why this is acceptable in a test context or if it should be derived from the actual key count.

✅ Verification successful

Let me search for more context about the key index usage and implementation.


Based on the search results, I can now provide a conclusive response about the key index usage.

The hardcoded key index 0 is appropriate in test context

The key index is used as part of the HashchainSignatureBundle to identify which key in the hashchain was used to sign an operation. In test contexts, using index 0 is appropriate because:

  1. Test utilities consistently use index 0 across all test operations (as seen in test_utils.rs and test_ops.rs)
  2. The index 0 represents the first/root key in a new hashchain, which is the typical scenario in tests
  3. The validation logic in hashchain.rs shows that the key index is properly validated in production code through verify_signature_at_key_idx
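A hedged sketch of the bundle shape implied above; the key_idx name appears elsewhere in this review, but the exact field types are assumptions:

/// Identifies which hashchain key signed an operation, plus the signature itself.
pub struct HashchainSignatureBundle {
    /// Index into the account's hashchain keys; 0 is the first/root key.
    pub key_idx: usize,
    /// Raw signature bytes over the operation payload.
    pub signature: Vec<u8>,
}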
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other occurrences of key index usage
rg -A 5 "new_add_key" 

Length of output: 1928


Script:

#!/bin/bash
# Search for key_idx usage in operation implementations and tests
rg -A 10 "key_idx" 

# Search for any validation or checks related to key indices
ast-grep --pattern 'fn validate($$$) { $$$ }'

Length of output: 17585

crates/node_types/prover/src/webserver.rs (2)

16-16: LGTM! Clean import organization.

The consolidated imports improve readability by grouping related components from prism_common.


Line range hint 71-75: Verify compatibility with Operation and Hasher changes.

Since this PR involves changes to Operation and hash-related functionality, let's verify the compatibility of the web endpoints:

  1. The /update-entry endpoint handling Operation inputs
  2. The response types using Hasher

Also applies to: 146-152

crates/node_types/prover/src/prover/tests.rs (3)

2-2: LGTM! Clean import of the new OpsBuilder.

The import is correctly placed and properly scoped.


20-26: LGTM! Well-structured test operation creation.

The refactoring to use OpsBuilder provides a more maintainable and readable way to create test operations. The sequence of operations follows a logical flow: service registration → account creation → key addition.


85-98: Verify the test's assumption about key indices.

The test creates an interesting edge case where operations are valid individually but fail when processed together. The comment suggests a dependency on key index=1, but this assumption should be verified.

Let's verify the key indexing logic:

✅ Verification successful

Let me gather more information about the key indexing implementation to better understand the test's assumptions.

The first pattern will help us understand how add_random_key handles the key index parameter, and the second search will reveal any validation logic around key indices. This will help verify if the test's assumption about index=1 is correct and whether the failure scenario is properly testing the intended edge case.


Let me search for one more crucial piece of information about how revoked keys are handled.

This will help us understand the interaction between key revocation and subsequent key addition at the same index, which is central to the test case's scenario.


Based on the gathered evidence, I can now provide a final response about the test's assumptions.

The test correctly verifies key index handling and revocation logic

The test's assumption about key indices is valid, and the failure scenario is correctly constructed because:

  1. The code in hashchain.rs shows that key revocation state is tracked by scanning the operation history (is_key_revoked method)
  2. When a key is used, it's validated against both its index (get_key_at_index) and revocation status
  3. The test creates a deliberate conflict by:
    • Adding a key (becoming index 1)
    • Revoking that key
    • Attempting to add a new key specifically at index 1 in the same block

This sequence correctly tests the edge case where operations that are individually valid become invalid when processed together, as the revocation of a key should prevent its index from being reused immediately.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Search for key index handling in the codebase
# to verify the assumption about index=1 in the test.

# Look for key index related code
rg -A 5 'key.*index|index.*key' --type rust

# Look for related test cases
rg -A 5 'add.*key.*index|index.*add.*key' --type rust

Length of output: 5061


Script:

#!/bin/bash
# Look for the implementation of add_random_key and related methods in OpsBuilder
ast-grep --pattern 'impl OpsBuilder {
  $$$
  add_random_key($$$) {
    $$$
  }
  $$$
}'

# Search for key index validation logic
rg -B 2 -A 5 'validate.*key.*index|verify.*key.*index' --type rust

Length of output: 739


Script:

#!/bin/bash
# Look for revoked key handling logic
rg -B 2 -A 5 'is_key_revoked|revoked.*key' --type rust

# Look for the implementation of revoke_key_verified_with_root
ast-grep --pattern 'revoke_key_verified_with_root($$$) {
  $$$
}'

Length of output: 1248

crates/storage/src/redis.rs (1)

7-7: LGTM: Import change aligns with the new Digest implementation.

The import change correctly reflects the architectural shift from tree-based Digest to the new digest module implementation.

crates/common/src/test_utils.rs (3)

119-119: LGTM: Correctly includes previous hash in key addition operation.

The change properly maintains operation chain integrity by including the last hash from the account's hashchain.


158-159: LGTM: Properly handles previous hash in data addition operation.

The changes correctly maintain operation chain integrity by:

  1. Extracting the last hash before operation creation
  2. Including it in the new operation

Also applies to: 164-164


236-243: LGTM: Operation creation properly includes last hash.

The changes maintain consistency with other operation modifications. However, let's verify the indentation style.

crates/common/src/hashchain.rs (5)

10-11: LGTM! Clean module organization.

The separation of Digest and Hasher into their own modules improves code organization and modularity.


276-278: LGTM! Clean abstraction for last hash retrieval.

The method is well-implemented using idiomatic Rust and properly handles the empty case.


285-285: LGTM! Good use of abstraction.

The use of last_hash method here follows the DRY principle and improves code maintainability.


78-79: LGTM! Enhanced chain of trust with prev_hash.

The addition of the prev_hash parameter strengthens the chain of trust by linking operations. Consider verifying that prev_hash matches the actual previous hash.

#!/bin/bash
# Search for potential mismatches between prev_hash assignments and actual previous hashes
ast-grep --pattern 'prev_hash = $expr' -l rust

Also applies to: 87-88


99-99: 🛠️ Refactor suggestion

Consider adding prev_hash validation.

While using Digest::zero() for the first operation is fine, consider adding explicit validation in validate_new_operation to ensure that prev_hash matches the actual previous hash for subsequent operations.

Consider adding validation like this:

fn validate_new_operation(&self, operation: &Operation) -> Result<()> {
    // Validate prev_hash matches the last hash
    if let Some(prev_hash) = operation.prev_hash() {
        if prev_hash != self.last_hash() {
            bail!("Previous hash mismatch");
        }
    }
    // ... rest of the validation
}

Also applies to: 285-285

crates/common/src/operation.rs (10)

8-11: Ensure Correct Module Imports

The added imports for Digest, SigningKey, and VerifyingKey are appropriate for the new functionality introduced.


163-167: Initialize prev_hash in new_register_service

The inclusion of prev_hash: Digest::zero() in the new_register_service method ensures the operation chain starts correctly.


230-238: Validate prev_hash Handling in new_add_signed_data

Confirm that the prev_hash field is appropriately managed in the new_add_signed_data method and that its introduction does not disrupt existing functionality.


295-296: Preserve prev_hash in without_challenge Method

In the without_challenge method, maintaining the prev_hash ensures the continuity of the operation chain when challenges are omitted.


307-309: Consistent Preservation of prev_hash in without_signature

The prev_hash field should remain unchanged in the without_signature method across all operation types to maintain chain integrity.

Also applies to: 316-318, 325-328, 337-338, 343-344


173-180: Handle prev_hash in new_add_key Function

The prev_hash parameter is added to new_add_key. Verify that all calls to this function supply the correct prev_hash to maintain operation chain integrity.

Run the following script to inspect function calls:

#!/bin/bash
# Find all usages of `new_add_key` and check `prev_hash` parameter

rg 'new_add_key\(' -A2

143-146: ⚠️ Potential issue

Correct Initialization in new_create_account

Ensure that both prev_hash and signature are properly initialized in the new_create_account method to prevent potential errors.

Apply this diff for accurate initialization:

 let mut op = Operation::CreateAccount(CreateAccountArgs {
     id: id.to_string(),
     value: signing_key.clone().verifying_key(),
     service_id,
     challenge: ServiceChallengeInput::Signed(Vec::new()),
+    prev_hash: Digest::zero(),
+    signature: Vec::new(),
 });

Likely invalid or redundant comment.


201-208: Update new_revoke_key with prev_hash Parameter

Ensure that the prev_hash is correctly passed and utilized in the new_revoke_key function to preserve the operation sequence.

Run the following script to verify updates:

✅ Verification successful

Let me gather more context about the implementation of new_revoke_key to verify if prev_hash is being correctly utilized.


Let me try a different approach to get the implementation details using ripgrep with more context lines.


Based on the code inspection, I can now provide the final response:

prev_hash Parameter is Correctly Implemented in new_revoke_key

The verification shows that prev_hash is properly:

  • Accepted as a parameter in the function signature
  • Passed through to the KeyOperationArgs struct
  • Used in test code with hc.last_hash()
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Locate all calls to `new_revoke_key` and confirm `prev_hash` usage

rg 'new_revoke_key\(' -A2

Length of output: 454


Script:

#!/bin/bash
# Get the implementation of new_revoke_key function
ast-grep --pattern 'fn new_revoke_key($$$) { $$$ }'

Length of output: 53


Script:

#!/bin/bash
# Get the implementation of new_revoke_key function with more context
rg -U 'pub fn new_revoke_key' -B2 -A15 crates/common/src/operation.rs

Length of output: 607


75-78: ⚠️ Potential issue

Initialize the signature Field Properly

The signature field has been added to CreateAccountArgs. Ensure that this field is correctly initialized and handled in all relevant methods to prevent uninitialized value issues.

Apply this diff to confirm proper initialization:

 pub struct CreateAccountArgs {
     /// Account ID
     pub id: String,
     /// Public key being added
     pub value: VerifyingKey,
     /// Associated service ID
     pub service_id: String,
     /// Challenge input for verification
     pub challenge: ServiceChallengeInput,
     /// The hash of the previous operation
     pub prev_hash: Digest,
+    /// The signature that signed the operation
     pub signature: Vec<u8>,
 }

Run the following script to check initialization:

✅ Verification successful

Let me gather more information to see how the signature field is being set and used.


Based on the gathered information, I can now provide the final response:

signature Field is Properly Initialized and Handled

The signature field in CreateAccountArgs is correctly initialized and handled throughout the codebase:

  • It's initialized as an empty vector in new_create_account
  • The signature is properly set via insert_signature method
  • Signature verification is implemented in verify_user_signature
  • Empty signatures are validated in the validate method
  • The field is properly handled in signature-less operations via without_signature
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify that `signature` is properly initialized in all instances

rg 'CreateAccountArgs \{' -A5

Length of output: 3564


Script:

#!/bin/bash
# Check for signature field initialization and usage
rg "signature.*=|signature:" -B2 -A2 crates/common/src/operation.rs

Length of output: 4836


Script:

#!/bin/bash
# Look for methods that might be setting the signature
ast-grep --pattern 'fn $FUNC($ARGS) {
  $$$
  signature
  $$$
}'

Length of output: 66


Script:

#!/bin/bash
# Check for any sign/verify methods that might be related
rg "sign|verify" -B2 -A2 crates/common/src/operation.rs

Length of output: 9236


112-113: Verify Integration of prev_hash in AddDataArgs

Ensure that the prev_hash field in AddDataArgs is correctly managed in all related operations and that it maintains the integrity of the operation chain.

Run the following script to confirm usage:

✅ Verification successful

Let me gather more information about the AddDataArgs structure and its usage.


Let me search for the implementation of new_add_data to see how prev_hash is handled in the operation creation.


Let me try one final search to find the complete structure and usage of AddDataArgs.


Based on the gathered information, I can now provide a final response:

prev_hash Field is Properly Integrated in AddDataArgs

The prev_hash field in AddDataArgs is correctly integrated and managed throughout the codebase:

  • The field is properly defined in the AddDataArgs struct with appropriate documentation
  • It's used consistently in operation chain validation through HashchainSignatureBundle
  • The field is properly handled in operation creation and verification flows
  • The integrity of the operation chain is maintained through proper signature verification and hash chain validation
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check all instantiations of `AddDataArgs` for `prev_hash` handling

rg 'AddDataArgs \{' -A5

Length of output: 1092


Script:

#!/bin/bash
# Search for AddDataArgs struct definition and its usage
ast-grep --pattern 'struct AddDataArgs {
  $$$
}'

# Also search for any direct mentions of AddDataArgs
rg "AddDataArgs" -B2 -A2

Length of output: 3569


Script:

#!/bin/bash
# Search for new_add_data implementation
ast-grep --pattern 'fn new_add_data(
  $$$
)'

# Also search for any test cases involving AddData operations
rg "new_add_data" -B2 -A5

Length of output: 73


Script:

#!/bin/bash
# Search for the complete AddDataArgs struct definition
rg -A 10 "pub struct AddDataArgs \{"

# Also search for any tests involving AddData operations
rg "Operation::AddData" -B2 -A2

Length of output: 5778

crates/common/src/tree.rs (1)

Line range hint 263-302: Ensure proper handling of prev_hash in CreateAccount.

The prev_hash parameter has been added to the CreateAccount operation. Ensure that:

  • prev_hash is properly validated to prevent potential security vulnerabilities like replay attacks.
  • The use of *prev_hash correctly dereferences the value without causing errors.
  • All downstream functions that use prev_hash, such as Hashchain::create_account, are updated to handle the new parameter appropriately.

Inline review comment threads:
  • crates/common/src/hasher.rs
  • crates/node_types/prover/src/prover/tests.rs (outdated)
  • crates/common/src/digest.rs
  • crates/common/src/test_ops.rs (2 threads)
  • crates/common/src/tree.rs (2 threads)
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed between commits ca8af94 and 99fea23.

📒 Files selected for processing (4)
  • crates/common/src/lib.rs (1 hunks)
  • crates/common/src/test_ops.rs (1 hunks)
  • crates/node_types/prover/src/prover/tests.rs (5 hunks)
  • rustfmt.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • crates/common/src/lib.rs
  • rustfmt.toml
🔇 Additional comments (4)
crates/node_types/prover/src/prover/tests.rs (4)

2-2: LGTM: Clean import addition for OpsBuilder.

The new import is properly scoped and consistently used throughout the test file.


20-26: LGTM: Well-structured operation creation using builder pattern.

The refactoring to use OpsBuilder improves code readability and maintainability. The operations are created in a logical sequence that represents a typical workflow.


90-98: LGTM: Well-designed test case for invalid operation handling.

The test effectively verifies the system's behavior when processing operations that are individually valid but collectively invalid (revoking and re-adding a key in the same block). The comments clearly explain the test scenario.


120-129: Remove debug print statements as previously discussed.

These debug print statements were previously flagged for removal in the earlier review discussion.

Inline review comment threads:
  • crates/common/src/test_ops.rs (7 threads)
Contributor

@distractedm1nd distractedm1nd left a comment

only requested change is removing println

Inline review comment threads:
  • crates/common/src/test_ops.rs
  • crates/node_types/prover/src/prover/tests.rs (2 threads, one outdated)
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (2)
crates/common/src/hashchain.rs (1)

275-277: Add documentation for the last_hash method.

Consider adding documentation to explain the method's purpose and return values, especially the zero digest case for empty chains.

+    /// Returns the hash of the last entry in the hashchain, or a zero digest if the chain is empty.
+    ///
+    /// # Returns
+    /// - The hash of the last entry if the chain is not empty
+    /// - A zero digest (Digest::zero()) if the chain is empty
     pub fn last_hash(&self) -> Digest {
         self.last().map_or(Digest::zero(), |entry| entry.hash)
     }
crates/common/src/operation.rs (1)

Line range hint 348-385: LGTM: Comprehensive signature verification implementation.

The verify_user_signature method provides thorough verification:

  • Handles all operation types appropriately
  • Separates operation and value signature verification
  • Provides good error context

Consider adding debug logging for signature verification steps to aid in troubleshooting.
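For example, a hedged sketch of such logging, assuming the log crate is available; the helper and its parameters are placeholders, not the actual verify_user_signature internals:

use log::debug;

// Hypothetical helper: record which operation and key index are about to be verified,
// so a failed verification can be traced back to a specific step.
fn log_verification_step(operation_id: &str, key_idx: usize) {
    debug!("verifying user signature: operation_id={operation_id}, key_idx={key_idx}");
}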

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed between commits 99fea23 and dcc8e03.

📒 Files selected for processing (4)
  • crates/common/src/hashchain.rs (5 hunks)
  • crates/common/src/operation.rs (16 hunks)
  • crates/node_types/prover/src/prover/tests.rs (4 hunks)
  • rustfmt.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • crates/node_types/prover/src/prover/tests.rs
  • rustfmt.toml
🔇 Additional comments (7)
crates/common/src/hashchain.rs (4)

10-11: LGTM! Import refactoring improves modularity.

The separation of Digest and Hasher into distinct modules enhances code organization and maintainability.


98-98: LGTM! Proper initialization of prev_hash for service registration.

Setting prev_hash to Digest::zero() for the initial service registration operation maintains chain integrity.


284-284: LGTM! Good use of the last_hash method.

Reusing last_hash() follows the DRY principle and improves code maintainability.


77-87: Verify impact of parameter reordering in create_account.

While adding prev_hash enhances chain integrity, reordering parameters is a breaking change that could affect existing callers.

crates/common/src/operation.rs (3)

295-296: LGTM: Helper methods correctly preserve prev_hash.

The without_challenge and without_signature methods properly maintain chain integrity by preserving the prev_hash field when creating new instances.

Also applies to: 307-307, 316-316, 325-325, 337-338, 343-343


8-11: Verify the operation chaining implementation across the codebase.

The addition of prev_hash fields implements operation chaining, which helps prevent replay attacks and ensures operation order. However, we should verify that this is consistently implemented across the system.

Let's verify the implementation:

Consider documenting the following in the codebase:

  1. How operation chaining works
  2. How to verify the chain's integrity
  3. What happens when the chain is broken

Also applies to: 75-76, 88-89, 112-113, 125-126
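To make the chaining rules described above concrete, here is a hedged sketch of whole-chain verification; the HashchainEntry fields and the standalone function are assumptions based on this review, not the actual API:

use anyhow::{bail, Result};

// Placeholder types mirroring the review's description of the hashchain.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Digest(pub [u8; 32]);

impl Digest {
    pub fn zero() -> Self {
        Digest([0u8; 32])
    }
}

pub struct HashchainEntry {
    /// Hash of this entry.
    pub hash: Digest,
    /// Hash of the previous entry; Digest::zero() for chain-starting operations.
    pub prev_hash: Digest,
}

/// Walk the chain and check that every entry links to the one before it.
pub fn verify_chain(entries: &[HashchainEntry]) -> Result<()> {
    let mut expected = Digest::zero();
    for (i, entry) in entries.iter().enumerate() {
        if entry.prev_hash != expected {
            bail!("operation chain broken at index {i}: prev_hash does not match previous entry");
        }
        expected = entry.hash;
    }
    Ok(())
}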


143-143: Verify prev_hash initialization and validation.

The constructors handle prev_hash differently:

  • new_create_account and new_register_service initialize it to zero
  • Other constructors accept it as a parameter

This suggests that account creation and service registration are considered chain-starting operations.

Let's verify this assumption:

Would you like me to help implement prev_hash validation to ensure chain integrity?

Also applies to: 166-166, 173-173, 201-201, 230-230

✅ Verification successful

The previous results showed zero hash validation but let's get more context about the validation flow in the hashchain implementation to verify our assumptions about chain integrity.


Prev_hash initialization and validation is correctly implemented

The codebase shows proper implementation of prev_hash validation:

  • CreateAccount and RegisterService operations must have prev_hash set to zero and can only be the first entry in the chain
  • Other operations must have prev_hash matching the last hash in the chain
  • Validation is enforced in validate_new_operation() method of the hashchain

The different constructor behaviors align with the hashchain validation rules, ensuring chain integrity.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for prev_hash validation in the codebase

# Test 1: Look for zero hash validation
rg -A 5 "Digest::zero"

# Test 2: Look for prev_hash validation logic
rg -A 5 "validate.*prev_hash"

Length of output: 2262


Script:

#!/bin/bash
# Let's check for validation logic in hashchain operations
ast-grep --pattern 'fn push($$$) {
  $$$
}'

# Also search for any validation patterns in operation handling
rg -A 10 "validate|verify" crates/common/src/hashchain.rs

Length of output: 4594

Inline review comment threads:
  • crates/common/src/hashchain.rs (2 threads)
@jns-ps jns-ps self-assigned this Oct 29, 2024
Contributor

@distractedm1nd distractedm1nd left a comment

lfg

Contributor

@sebasti810 sebasti810 left a comment

beautiful

@jns-ps jns-ps merged commit 6344658 into main Oct 30, 2024
5 checks passed
@jns-ps jns-ps deleted the operation-nonces branch October 30, 2024 09:10