feat(common) Operation nonces #147
Conversation
Actionable comments posted: 7
🧹 Outside diff range and nitpick comments (15)
crates/storage/src/rocksdb.rs (1)
Based on the implementation details found in both the `Digest` type and its usage in `rocksdb.rs`, I can now provide the final response:

The current implementation can be improved using Digest's built-in methods

The `Digest` type provides several useful methods that can make the code more maintainable:

- Use `Digest::new([u8; 32])` instead of `Digest(value)` for creating a new digest
- Use `commitment.to_bytes()` instead of `commitment.0` when storing the digest

Here's how the methods could be used:

- In `get_commitment`: Replace `Digest(value)` with `Digest::new(value)`
- In `set_commitment`: Replace `commitment.0` with `commitment.to_bytes()`
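For concreteness, here is a rough sketch of what the two methods might look like after the change. This is an illustration only: it assumes a store struct holding a `rocksdb::DB` handle in a field named `connection` (a placeholder name) and the `Digest::new`/`to_bytes` methods described above.

```rust
// Sketch only: field names and error messages are placeholders.
fn get_commitment(&self, epoch: &u64) -> Result<Digest> {
    let key = format!("commitments:epoch_{}", epoch);
    let raw = self
        .connection
        .get(key.as_bytes())?
        .ok_or_else(|| anyhow!("commitment from epoch_{} not found", epoch))?;
    let value: [u8; 32] = raw
        .as_slice()
        .try_into()
        .map_err(|_| anyhow!("invalid commitment length for epoch_{}", epoch))?;
    // Digest::new(value) instead of the tuple constructor Digest(value)
    Ok(Digest::new(value))
}

fn set_commitment(&self, epoch: &u64, commitment: &Digest) -> Result<()> {
    let key = format!("commitments:epoch_{}", epoch);
    // to_bytes() keeps the internal [u8; 32] representation encapsulated
    self.connection.put(key.as_bytes(), commitment.to_bytes())?;
    Ok(())
}
```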
🔗 Analysis chain
Line range hint `31-42`: Consider using Digest's built-in methods if available.

The commitment methods currently handle raw byte conversion manually. If the new `Digest` implementation in `prism_common::digest` provides methods for serialization/deserialization or database encoding/decoding, consider using those for better maintainability and consistency.

Let's check if the new Digest implementation offers such methods:

Also applies to: 44-49
Let me analyze the code further to understand the current usage context better before making the final recommendation.
Let me try a different pattern to get the context of how the Digest is being used in the storage implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Look for available Digest methods that could improve the implementation
# Test: Search for Digest implementation
ast-grep --pattern 'impl Digest { $$$ }'
```

Length of output: 1726

Script:

```bash
#!/bin/bash
# Look for the complete implementation of the function containing the reviewed code
ast-grep --pattern 'fn $_(self, epoch: $_) -> Result<Digest$_> { $$$ }'
```

Length of output: 75

Script:

```bash
#!/bin/bash
# Look for the complete implementation in rocksdb.rs
rg -A 10 "fn.*epoch.*Digest" crates/storage/src/rocksdb.rs
```

Length of output: 925
crates/node_types/prover/src/prover/tests.rs (1)
`87-98`: Consider adding more detailed test documentation.

While the test logic is clear, it would be helpful to document:

- Why these specific operations were chosen to demonstrate the invalid case
- What makes these operations valid individually but invalid when processed together
- The significance of the key index in this scenario

Add a detailed comment block explaining the test case:

```diff
 let mut ops_builder = OpsBuilder::new();
 let new_key_1 = create_mock_signing_key();
+// This test demonstrates a key revocation edge case:
+// 1. Add a new key (becomes index 1)
+// 2. Revoke that key
+// 3. Try to use the revoked key (index 1) to add another key
+// The last operation fails because it uses a revoked key
 let operations = vec![
     ops_builder.register_service_with_random_key("service_id").ex(),
     ops_builder.create_account_with_random_key("account_id", "service_id").ex(),
```

crates/storage/src/redis.rs (2)
Line range hint `156-167`: Consider improving error handling and type conversion.

The current implementation could be more robust in handling potential error cases during string-to-bytes conversion.

Consider this safer implementation:

```diff
 fn get_commitment(&self, epoch: &u64) -> Result<Digest> {
     let mut con = self.lock_connection()?;
     let redis_value = con
         .get::<&str, String>(&format!("commitments:epoch_{}", epoch))
         .map_err(|_| {
             DatabaseError::NotFoundError(format!("commitment from epoch_{}", epoch))
         })?;
-    // storing hashes into
-    let value = redis_value.trim_matches('"').as_bytes();
-    Ok(Digest(value.try_into().unwrap()))
+    let value = redis_value.trim_matches('"').as_bytes();
+    value.try_into()
+        .map(Digest)
+        .map_err(|_| anyhow!(DatabaseError::DeserializationError(
+            format!("Invalid digest length for epoch {}", epoch)
+        )))
 }
```
Based on the output, I can now generate the final response since I have found that the `Digest` struct already has a proper conversion method `to_bytes()` that can be used instead of directly accessing the internal field.

Use `to_bytes()` method instead of accessing Digest's internal field

The `Digest` struct already provides a proper conversion method `to_bytes()` that returns `[u8; 32]`. Here's the corrected suggestion:

```diff
 fn set_commitment(&self, epoch: &u64, commitment: &Digest) -> Result<()> {
     let mut con = self.lock_connection()?;
-    con.set::<&String, &[u8; 32], ()>(&format!("commitments:epoch_{}", epoch), &commitment.0)
+    con.set::<&String, &[u8; 32], ()>(
+        &format!("commitments:epoch_{}", epoch),
+        &commitment.to_bytes()
+    )
     .map_err(|_| {
         anyhow!(DatabaseError::WriteError(format!(
             "commitment for epoch: {}",
             epoch
         )))
     })
 }
```

🔗 Analysis chain
Line range hint `209-218`: Avoid direct access to Digest's internal representation.

The current implementation directly accesses Digest's internal field (`.0`), which creates tight coupling. Consider using a proper conversion method instead.

Consider adding a method to convert Digest to bytes:

```diff
 fn set_commitment(&self, epoch: &u64, commitment: &Digest) -> Result<()> {
     let mut con = self.lock_connection()?;
-    con.set::<&String, &[u8; 32], ()>(&format!("commitments:epoch_{}", epoch), &commitment.0)
+    con.set::<&String, &[u8], ()>(
+        &format!("commitments:epoch_{}", epoch),
+        commitment.as_bytes()
+    )
     .map_err(|_| {
         anyhow!(DatabaseError::WriteError(format!(
             "commitment for epoch: {}",
             epoch
         )))
     })
 }
```

This assumes the existence of an `as_bytes()` method in the new Digest implementation. If it doesn't exist, consider adding it to the Digest struct.

Let's verify if the Digest struct has proper conversion methods:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for Digest implementation to verify available methods
ast-grep --pattern 'impl Digest { $$$ }'
```

Length of output: 1726
crates/common/src/digest.rs (4)
`46-57`: Simplify error message construction in `TryFrom` implementation

In the error handling of the conversions in lines 46-57, you're using `anyhow!(format!(...))`, which can be simplified by passing the formatted string directly to `anyhow!`. This reduces unnecessary nesting of `format!` and improves readability.

You may apply the following code changes:

```diff
- u64::from_le_bytes(byte_array[0..8].try_into().map_err(|_| {
-     anyhow!(format!("slice to array: [0..8] for digest: {value:?}"))
- })?),
+ u64::from_le_bytes(byte_array[0..8].try_into().map_err(|_| {
+     anyhow!("slice to array: [0..8] for digest: {:?}", value)
+ })?),
```

Apply similar changes to the other occurrences in lines 49-51, 52-54, and 55-57:

```diff
- u64::from_le_bytes(byte_array[8..16].try_into().map_err(|_| {
-     anyhow!(format!("slice to array: [8..16] for digest: {value:?}"))
- })?),
+ u64::from_le_bytes(byte_array[8..16].try_into().map_err(|_| {
+     anyhow!("slice to array: [8..16] for digest: {:?}", value)
+ })?),
- u64::from_le_bytes(byte_array[16..24].try_into().map_err(|_| {
-     anyhow!(format!("slice to array: [16..24] for digest: {value:?}"))
- })?),
+ u64::from_le_bytes(byte_array[16..24].try_into().map_err(|_| {
+     anyhow!("slice to array: [16..24] for digest: {:?}", value)
+ })?),
- u64::from_le_bytes(byte_array[24..32].try_into().map_err(|_| {
-     anyhow!(format!("slice to array: [24..32] for digest: {value:?}"))
- })?),
+ u64::from_le_bytes(byte_array[24..32].try_into().map_err(|_| {
+     anyhow!("slice to array: [24..32] for digest: {:?}", value)
+ })?),
```
`42-43`: Add comment explaining the rationale for reversing the byte array

Reversing the byte array before converting to u64 segments (line 42) may not be immediately clear to readers. Adding a comment to explain the reason for this step can improve code readability and maintainability.
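To make the intent concrete, here is a sketch of the pattern in question (variable and function names are illustrative, not the actual source):

```rust
use anyhow::Result;

// Sketch: split a 32-byte digest into four little-endian u64 limbs.
// Reversing first means the limbs are read from the far end of the digest,
// flipping the overall byte order of the resulting value.
fn digest_to_limbs(mut byte_array: [u8; 32]) -> Result<[u64; 4]> {
    byte_array.reverse();
    Ok([
        u64::from_le_bytes(byte_array[0..8].try_into()?),
        u64::from_le_bytes(byte_array[8..16].try_into()?),
        u64::from_le_bytes(byte_array[16..24].try_into()?),
        u64::from_le_bytes(byte_array[24..32].try_into()?),
    ])
}
```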
`35-35`: Mark temporary implementation with `TODO` for future updates

The comment on line 35 indicates that this implementation is temporary and subject to change once the curve choice is made. Consider adding a `TODO` comment or creating a tracking issue to ensure this is revisited in the future.

Would you like me to create a GitHub issue to track this task?
`93-96`: Simplify error message construction in `from_hex` method

In line 96, you're using `anyhow!(format!(...))`, which can be simplified by passing the formatted string directly to `anyhow!`. This helps to reduce unnecessary nesting and improves code clarity.

Apply this change:

```diff
- .map_err(|e| anyhow!(format!("Invalid Format: {e}")))?;
+ .map_err(|e| anyhow!("Invalid Format: {}", e))?;
```

crates/common/src/test_ops.rs (1)
- .map_err(|e| anyhow!(format!("Invalid Format: {e}")))?; + .map_err(|e| anyhow!("Invalid Format: {}", e))?;crates/common/src/test_ops.rs (1)
219-219
: Correct the error message inexpect
The error message in the
expect
call refers to "Creating account operation" instead of "Creating revoke key operation." Updating this message will improve clarity during debugging.Apply this diff to correct the message:
- .expect("Creating account operation should work"); + .expect("Creating revoke key operation should work");crates/common/src/operation.rs (2)
`88-89`: Add Documentation for `prev_hash` Field

Consider adding a comment explaining the purpose of the `prev_hash` field in `RegisterServiceArgs` for better code clarity.

Apply this diff to include documentation:

```diff
 pub struct RegisterServiceArgs {
     /// Service ID
     pub id: String,
     /// Challenge gate for access control
     pub creation_gate: ServiceChallenge,
+    /// The hash of the previous operation
     pub prev_hash: Digest,
 }
```
`125-126`: Include Comments for New Fields in `KeyOperationArgs`

Adding comments for the new `prev_hash` field enhances code readability and maintainability.

Apply this diff to add the missing documentation:

```diff
 pub struct KeyOperationArgs {
     /// Account ID
     pub id: String,
     /// Public key being added or revoked
     pub value: VerifyingKey,
+    /// The hash of the previous operation
     pub prev_hash: Digest,
     /// Signature to authorize the action
     pub signature: HashchainSignatureBundle,
 }
```
crates/common/src/tree.rs (4)
Line range hint `263-302`: Consider more informative error messages for better debugging.

In the error handling when retrieving the service hashchain:

```rust
let Found(service_hashchain, _) = self.get(service_key_hash)? else {
    bail!("Failed to get hashchain for service ID {}", service_id);
};
```

Provide more context in the error message to assist with debugging. For example, specify whether the service is unregistered or if there's a database access issue.

Apply this diff to enhance the error message:

```diff
- bail!("Failed to get hashchain for service ID {}", service_id);
+ bail!("Service hashchain not found for service ID '{}'. Ensure the service is registered before creating an account.", service_id);
```
`9-9`: Ensure `serde` imports are necessary.

You have imported `serde::{Deserialize, Serialize}`. Verify whether both traits are required for this file. If only one is needed, import that specific trait to keep the code clean.
Line range hint `230-243`: Update `KeyDirectoryTree` to accommodate the removal of `Hasher`.

Since `Hasher` is removed, the type parameter in `JellyfishMerkleTree<Arc<S>, Hasher>` needs to be updated. Determine the appropriate hasher to use or adjust the `JellyfishMerkleTree` implementation accordingly.
Line range hint `338-344`: Handle the case when `old_value` is `None` in `update`.

In the `update` function:

```rust
if old_value.is_none() {
    bail!("Key does not exist");
}
```

Consider providing more detailed information or handling the case differently if appropriate.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (22)
- crates/common/src/digest.rs (1 hunks)
- crates/common/src/hashchain.rs (4 hunks)
- crates/common/src/hasher.rs (1 hunks)
- crates/common/src/keys.rs (1 hunks)
- crates/common/src/lib.rs (1 hunks)
- crates/common/src/operation.rs (16 hunks)
- crates/common/src/test_ops.rs (1 hunks)
- crates/common/src/test_utils.rs (3 hunks)
- crates/common/src/tree.rs (3 hunks)
- crates/da/src/lib.rs (1 hunks)
- crates/node_types/lightclient/src/lightclient.rs (1 hunks)
- crates/node_types/prover/src/prover/mod.rs (1 hunks)
- crates/node_types/prover/src/prover/tests.rs (5 hunks)
- crates/node_types/prover/src/webserver.rs (1 hunks)
- crates/storage/src/database.rs (1 hunks)
- crates/storage/src/inmemory.rs (1 hunks)
- crates/storage/src/redis.rs (1 hunks)
- crates/storage/src/rocksdb.rs (1 hunks)
- crates/tests/Cargo.toml (1 hunks)
- crates/tests/src/lib.rs (2 hunks)
- crates/zk/sp1/src/main.rs (1 hunks)
- rustfmt.toml (1 hunks)
✅ Files skipped from review due to trivial changes (8)
- crates/common/src/keys.rs
- crates/da/src/lib.rs
- crates/node_types/lightclient/src/lightclient.rs
- crates/node_types/prover/src/prover/mod.rs
- crates/storage/src/database.rs
- crates/storage/src/inmemory.rs
- crates/zk/sp1/src/main.rs
- rustfmt.toml
🔇 Additional comments (38)
crates/common/src/lib.rs (2)
`1-1`: LGTM! Well-organized module structure.

The new `digest` and `hasher` modules are appropriately placed and follow Rust's module organization conventions. Their public visibility is correct as they provide core functionality for hash operations.

Also applies to: 3-3
`11-12`: Verify test coverage for the new test_ops module.

The new `test_ops` module is correctly feature-gated with "test_utils". Let's verify it has corresponding tests.

✅ Verification successful
Let me gather more information about the `test_ops` module to better understand its purpose and test coverage requirements.

Let me check the actual test file that uses the `test_ops` module to understand how it's being tested.

I can now generate the final response based on the gathered information.
The test_ops module is well-tested through integration tests
The `test_ops` module is a test utility module that provides an `OpsBuilder` for creating test operations. It is actively used in multiple test cases in `crates/node_types/prover/src/prover/tests.rs`, including:

- `test_validate_and_queue_update`
- `test_process_operation`
- `test_execute_block_with_invalid_tx`
- `test_execute_block`
- `test_finalize_new_epoch`
- `test_restart_sync_from_scratch`
- `test_load_persisted_state`

Each test exercises different aspects of the `OpsBuilder`, effectively testing the functionality of the `test_ops` module through its usage in these integration tests.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Check for test coverage of the test_ops module
# Look for test modules or functions related to test_ops
rg -g '*.rs' -l 'mod test(_)?ops.*tests' || echo "No test module found"
# Look for actual test functions
rg -g '*.rs' '#\[test\].*fn.*test_ops' -A 2 || echo "No test functions found"
```

Length of output: 196

Script:

```bash
#!/bin/bash
# Check the content of test_ops module to understand its functionality
fd test_ops.rs | xargs cat || echo "test_ops.rs not found"
# Look for any tests in the same directory
fd -e rs -p '.*/tests/.*' --exec rg -l 'test_ops' {} || echo "No references in test files"
# Check if test_ops is used in integration tests
rg -g '*.rs' 'use.*test_ops' -A 1 || echo "No usages found"
```

Length of output: 9460

Script:

```bash
#!/bin/bash
# Check the content of the test file that uses test_ops
rg -g '*.rs' 'use.*test_ops' -B 5 -A 20
# Look for actual test functions in that file
cat crates/node_types/prover/src/prover/tests.rs || echo "Test file not found"
```

Length of output: 9361
crates/tests/Cargo.toml (1)
`20-20`: LGTM! Clean addition of JMT dependency.

The addition of the `jmt` dependency with workspace inheritance is well-structured and aligns with the PR's objectives for operation nonces and hashing functionality.

crates/common/src/hasher.rs (4)
`1-5`: LGTM! Clean struct design with appropriate trait derivations.

The struct is well-designed as a thin wrapper around `sha2::Sha256` with necessary trait implementations.
`30-54`: LGTM! Correct implementation of the visitor pattern.

The deserialization implementation is well-structured and consistent with the serialization approach.
`12-14`: ⚠️ Potential issue: Fix infinite recursion in update method.

The current implementation calls itself recursively, which will cause a stack overflow.

Fix by calling the inner hasher's update method:

```diff
 pub fn update(&mut self, data: &[u8]) {
-    self.update(data);
+    self.0.update(data);
 }
```

Likely invalid or redundant comment.
`21-28`: Verify serialization requirements for Hasher state.

The current implementation serializes the Hasher as an empty tuple struct, discarding its internal state. This means that deserializing and continuing to use the hasher will effectively reset its state.

Please confirm if this is the intended behavior. If the hasher's state needs to be preserved across serialization:
✅ Verification successful
Let me gather more information about the Hasher usage and implementation to better understand the serialization requirements.
Let me gather one more piece of information about the usage patterns of this hasher.
Based on the gathered information, I can now provide a final response:
Serialization behavior for Hasher is correct and intentional
The current serialization implementation that discards the internal state is the correct approach because:
- The Hasher is used primarily for creating hashes in a Merkle tree implementation (JellyfishMerkleTree) and computing key hashes
- The hasher is always used in a pattern where it's created, immediately used to hash data, and finalized
- There are no instances in the codebase where a hasher's intermediate state needs to be preserved
- The matching Deserialize implementation also creates a fresh default hasher, maintaining consistency with the serialization behavior
The implementation follows a common pattern for cryptographic hashers where preserving intermediate state across serialization boundaries is typically avoided for security and simplicity.
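For reference, a stateless round-trip like the one described can be written as below. This is a sketch assuming `Hasher` is a thin wrapper around `sha2::Sha256`; it is not the exact source.

```rust
use serde::{de, Deserialize, Deserializer, Serialize, Serializer};
use sha2::Sha256;

#[derive(Clone, Default)]
pub struct Hasher(Sha256);

impl Serialize for Hasher {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Intentionally discard the internal state: emit a unit struct.
        serializer.serialize_unit_struct("Hasher")
    }
}

impl<'de> Deserialize<'de> for Hasher {
    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        struct HasherVisitor;

        impl<'de> de::Visitor<'de> for HasherVisitor {
            type Value = Hasher;

            fn expecting(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
                f.write_str("unit struct Hasher")
            }

            fn visit_unit<E: de::Error>(self) -> Result<Hasher, E> {
                // A fresh, default hasher: state is not preserved.
                Ok(Hasher::default())
            }
        }

        deserializer.deserialize_unit_struct("Hasher", HasherVisitor)
    }
}
```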
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for any tests or usage patterns that might rely on hasher state preservation
rg -l "serde|serialize|deserialize" | xargs rg "Hasher.*update.*finalize"
```

Length of output: 75

Script:

```bash
#!/bin/bash
# Look for the Hasher implementation and its usage patterns
ast-grep --pattern 'impl $_ for Hasher { $$$ }'
# Also check for any tests involving Hasher
rg -A 5 "Hasher" "*test*.rs"
# Look for any documentation comments on Hasher
rg -B 2 "struct Hasher"
```

Length of output: 1399

Script:

```bash
#!/bin/bash
# Look for any usage patterns of Hasher, particularly around serialization
rg -B 2 -A 2 "Hasher" --type rust
# Check if there's any deserialization implementation
ast-grep --pattern 'impl Deserialize for Hasher { $$$ }'
```

Length of output: 11440
crates/storage/src/rocksdb.rs (1)
`7-7`: LGTM! Verify digest compatibility.

The import path change to `prism_common::digest::Digest` reflects a better organization of types.

Let's verify the new Digest implementation maintains the same interface:
✅ Verification successful
Let me gather more information about the Digest implementation to ensure complete verification.
Based on the gathered information, I can now generate the final response:
Import change is safe and maintains compatibility
The verification confirms that:

- The new `Digest` struct in `prism_common::digest` maintains the same core interface with `[u8; 32]` internal representation
- All storage implementations (RocksDB, Redis, InMemory) consistently use the new import path
- The implementation provides the same functionality plus additional methods like `hash()`, `zero()`, and format conversions
- The type maintains compatibility with `RootHash` through From implementations

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Verify the Digest type's interface and usage
# Test 1: Check if Digest is used consistently across the codebase
rg --type rust "use.*digest::Digest"
# Test 2: Look for any potential breaking changes in Digest usage
ast-grep --pattern 'Digest($_)'
```

Length of output: 900

Script:

```bash
#!/bin/bash
# Check the implementation of Digest in the new location
rg -A 10 "^pub struct Digest" crates/common/src/digest.rs
# Check if there are any breaking changes in the Digest API
rg -A 10 "impl.*Digest" crates/common/src/digest.rs
# Look for any test cases that verify Digest behavior
rg -B 2 -A 5 "#\[test\].*Digest" crates/common/src/digest.rs
```

Length of output: 2200
crates/tests/src/lib.rs (5)
`6-14`: LGTM! Import changes are well-organized.

The new imports for `bail`, `Digest`, and `Hasher` are properly organized and align with the PR's objective of implementing operation nonces.
`35-36`: LGTM! Robust hashing implementation.

The new hashing logic using `Digest::hash` and `KeyHash::with::<Hasher>` provides a more structured and type-safe approach.
`38-44`: LGTM! Improved error handling.

The use of pattern matching with clear error messages improves the code's robustness and debuggability.
Line range hint `67-186`: LGTM! Comprehensive integration test.

The test provides thorough coverage of the new functionality:

- Tests both account creation and key updates
- Includes error handling for failed operations
- Uses realistic scenarios with multiple concurrent operations
`51-54`: Verify the hardcoded key index.

The key index is hardcoded to 0 without explanation. Consider documenting why this is acceptable in a test context or if it should be derived from the actual key count.
✅ Verification successful
Let me search for more context about the key index usage and implementation.
Based on the search results, I can now provide a conclusive response about the key index usage.
The hardcoded key index 0 is appropriate in test context
The key index is used as part of the `HashchainSignatureBundle` to identify which key in the hashchain was used to sign an operation. In test contexts, using index 0 is appropriate because:

- Test utilities consistently use index 0 across all test operations (as seen in `test_utils.rs` and `test_ops.rs`)
- The index 0 represents the first/root key in a new hashchain, which is the typical scenario in tests
- The validation logic in `hashchain.rs` shows that the key index is properly validated in production code through `verify_signature_at_key_idx`
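For context, a signature bundle of the kind referenced above might look roughly like this (field names are assumptions inferred from the review, not the verified source):

```rust
/// Sketch only: a signature plus the index of the hashchain key that produced it.
pub struct HashchainSignatureBundle {
    /// Index into the account's key list; 0 is the first/root key of a new hashchain.
    pub key_idx: usize,
    /// Signature bytes over the operation.
    pub signature: Vec<u8>,
}
```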
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for other occurrences of key index usage
rg -A 5 "new_add_key"
```

Length of output: 1928

Script:

```bash
#!/bin/bash
# Search for key_idx usage in operation implementations and tests
rg -A 10 "key_idx"
# Search for any validation or checks related to key indices
ast-grep --pattern 'fn validate($$$) { $$$ }'
```

Length of output: 17585
crates/node_types/prover/src/webserver.rs (2)
`16-16`: LGTM! Clean import organization.

The consolidated imports improve readability by grouping related components from `prism_common`.
Line range hint `71-75`: Verify compatibility with Operation and Hasher changes.

Since this PR involves changes to `Operation` and hash-related functionality, let's verify the compatibility of the web endpoints:

- The `/update-entry` endpoint handling `Operation` inputs
- The response types using `Hasher`

Also applies to: 146-152
crates/node_types/prover/src/prover/tests.rs (3)
`2-2`: LGTM! Clean import of the new OpsBuilder.

The import is correctly placed and properly scoped.
`20-26`: LGTM! Well-structured test operation creation.

The refactoring to use OpsBuilder provides a more maintainable and readable way to create test operations. The sequence of operations follows a logical flow: service registration → account creation → key addition.
`85-98`: Verify the test's assumption about key indices.

The test creates an interesting edge case where operations are valid individually but fail when processed together. The comment suggests a dependency on key index=1, but this assumption should be verified.

Let's verify the key indexing logic:
✅ Verification successful
Let me gather more information about the key indexing implementation to better understand the test's assumptions.
The first pattern will help us understand how `add_random_key` handles the key index parameter, and the second search will reveal any validation logic around key indices. This will help verify if the test's assumption about index=1 is correct and whether the failure scenario is properly testing the intended edge case.
Let me search for one more crucial piece of information about how revoked keys are handled.
This will help us understand the interaction between key revocation and subsequent key addition at the same index, which is central to the test case's scenario.
Based on the gathered evidence, I can now provide a final response about the test's assumptions.
The test correctly verifies key index handling and revocation logic
The test's assumption about key indices is valid, and the failure scenario is correctly constructed because:

- The code in `hashchain.rs` shows that key revocation state is tracked by scanning the operation history (the `is_key_revoked` method)
- When a key is used, it's validated against both its index (`get_key_at_index`) and revocation status
- The test creates a deliberate conflict by:
  - Adding a key (becoming index 1)
  - Revoking that key
  - Attempting to add a new key specifically at index 1 in the same block
This sequence correctly tests the edge case where operations that are individually valid become invalid when processed together, as the revocation of a key should prevent its index from being reused immediately.
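A minimal sketch of the revocation scan described above (operation variant and field names are assumptions inferred from the review, not the verified source):

```rust
// Sketch: a key counts as revoked if any RevokeKey operation in the
// history targets it, so reusing it in a later operation is rejected.
fn is_key_revoked(history: &[Operation], key: &VerifyingKey) -> bool {
    history.iter().any(|op| {
        matches!(op, Operation::RevokeKey(args) if &args.value == key)
    })
}
```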
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Search for key index handling in the codebase
# to verify the assumption about index=1 in the test.
# Look for key index related code
rg -A 5 'key.*index|index.*key' --type rust
# Look for related test cases
rg -A 5 'add.*key.*index|index.*add.*key' --type rust
```

Length of output: 5061

Script:

```bash
#!/bin/bash
# Look for the implementation of add_random_key and related methods in OpsBuilder
ast-grep --pattern 'impl OpsBuilder { $$$ add_random_key($$$) { $$$ } $$$ }'
# Search for key index validation logic
rg -B 2 -A 5 'validate.*key.*index|verify.*key.*index' --type rust
```

Length of output: 739

Script:

```bash
#!/bin/bash
# Look for revoked key handling logic
rg -B 2 -A 5 'is_key_revoked|revoked.*key' --type rust
# Look for the implementation of revoke_key_verified_with_root
ast-grep --pattern 'revoke_key_verified_with_root($$$) { $$$ }'
```

Length of output: 1248
crates/storage/src/redis.rs (1)
`7-7`: LGTM: Import change aligns with the new Digest implementation.

The import change correctly reflects the architectural shift from tree-based Digest to the new digest module implementation.
crates/common/src/test_utils.rs (3)
`119-119`: LGTM: Correctly includes previous hash in key addition operation.

The change properly maintains operation chain integrity by including the last hash from the account's hashchain.
`158-159`: LGTM: Properly handles previous hash in data addition operation.

The changes correctly maintain operation chain integrity by:

- Extracting the last hash before operation creation
- Including it in the new operation

Also applies to: 164-164
`236-243`: LGTM: Operation creation properly includes last hash.

The changes maintain consistency with other operation modifications. However, let's verify the indentation style.
crates/common/src/hashchain.rs (5)
`10-11`: LGTM! Clean module organization.

The separation of `Digest` and `Hasher` into their own modules improves code organization and modularity.
`276-278`: LGTM! Clean abstraction for last hash retrieval.

The method is well-implemented using idiomatic Rust and properly handles the empty case.
`285-285`: LGTM! Good use of abstraction.

The use of the `last_hash` method here follows the DRY principle and improves code maintainability.
`78-79`: LGTM! Enhanced chain of trust with prev_hash.

The addition of the `prev_hash` parameter strengthens the chain of trust by linking operations. Consider verifying that the `prev_hash` matches the actual previous hash.

```bash
#!/bin/bash
# Search for potential mismatches between prev_hash assignments and actual previous hashes
ast-grep --pattern 'prev_hash = $expr' -l rust
```

Also applies to: 87-88
`99-99`: 🛠️ Refactor suggestion: Consider adding prev_hash validation.

While using `Digest::zero()` for the first operation is fine, consider adding explicit validation in `validate_new_operation` to ensure that `prev_hash` matches the actual previous hash for subsequent operations.

Consider adding validation like this:

```rust
fn validate_new_operation(&self, operation: &Operation) -> Result<()> {
    // Validate prev_hash matches the last hash
    if let Some(prev_hash) = operation.prev_hash() {
        if prev_hash != self.last_hash() {
            bail!("Previous hash mismatch");
        }
    }
    // ... rest of the validation
}
```

Also applies to: 285-285
crates/common/src/operation.rs (10)
`8-11`: Ensure Correct Module Imports

The added imports for `Digest`, `SigningKey`, and `VerifyingKey` are appropriate for the new functionality introduced.
`163-167`: Initialize `prev_hash` in `new_register_service`

The inclusion of `prev_hash: Digest::zero()` in the `new_register_service` method ensures the operation chain starts correctly.
`230-238`: Validate `prev_hash` Handling in `new_add_signed_data`

Confirm that the `prev_hash` field is appropriately managed in the `new_add_signed_data` method and that its introduction does not disrupt existing functionality.
`295-296`: Preserve `prev_hash` in `without_challenge` Method

In the `without_challenge` method, maintaining the `prev_hash` ensures the continuity of the operation chain when challenges are omitted.
`307-309`: Consistent Preservation of `prev_hash` in `without_signature`

The `prev_hash` field should remain unchanged in the `without_signature` method across all operation types to maintain chain integrity.

Also applies to: 316-318, 325-328, 337-338, 343-344
`173-180`: Handle `prev_hash` in `new_add_key` Function

The `prev_hash` parameter is added to `new_add_key`. Verify that all calls to this function supply the correct `prev_hash` to maintain operation chain integrity.

Run the following script to inspect function calls:

```bash
#!/bin/bash
# Find all usages of `new_add_key` and check `prev_hash` parameter
rg 'new_add_key\(' -A2
```
`143-146`: ⚠️ Potential issue: Correct Initialization in `new_create_account`

Ensure that both `prev_hash` and `signature` are properly initialized in the `new_create_account` method to prevent potential errors.

Apply this diff for accurate initialization:

```diff
 let mut op = Operation::CreateAccount(CreateAccountArgs {
     id: id.to_string(),
     value: signing_key.clone().verifying_key(),
     service_id,
     challenge: ServiceChallengeInput::Signed(Vec::new()),
+    prev_hash: Digest::zero(),
+    signature: Vec::new(),
 });
```

Likely invalid or redundant comment.
`201-208`: Update `new_revoke_key` with `prev_hash` Parameter

Ensure that the `prev_hash` is correctly passed and utilized in the `new_revoke_key` function to preserve the operation sequence.

Run the following script to verify updates:
✅ Verification successful
Let me gather more context about the implementation of `new_revoke_key` to verify if `prev_hash` is being correctly utilized.

Let me try a different approach to get the implementation details using ripgrep with more context lines.

Based on the code inspection, I can now provide the final response:
`prev_hash` Parameter is Correctly Implemented in `new_revoke_key`

The verification shows that `prev_hash` is properly:

- Accepted as a parameter in the function signature
- Passed through to the `KeyOperationArgs` struct
- Used in test code with `hc.last_hash()`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Locate all calls to `new_revoke_key` and confirm `prev_hash` usage
rg 'new_revoke_key\(' -A2
```

Length of output: 454

Script:

```bash
#!/bin/bash
# Get the implementation of new_revoke_key function
ast-grep --pattern 'fn new_revoke_key($$$) { $$$ }'
```

Length of output: 53

Script:

```bash
#!/bin/bash
# Get the implementation of new_revoke_key function with more context
rg -U 'pub fn new_revoke_key' -B2 -A15 crates/common/src/operation.rs
```

Length of output: 607
`75-78`: ⚠️ Potential issue: Initialize the `signature` Field Properly

The `signature` field has been added to `CreateAccountArgs`. Ensure that this field is correctly initialized and handled in all relevant methods to prevent uninitialized value issues.

Apply this diff to confirm proper initialization:

```diff
 pub struct CreateAccountArgs {
     /// Account ID
     pub id: String,
     /// Public key being added
     pub value: VerifyingKey,
     /// Associated service ID
     pub service_id: String,
     /// Challenge input for verification
     pub challenge: ServiceChallengeInput,
     /// The hash of the previous operation
     pub prev_hash: Digest,
+    /// The signature that signed the operation
     pub signature: Vec<u8>,
 }
```
Run the following script to check initialization:
✅ Verification successful
Let me gather more information to see how the signature field is being set and used.
Based on the gathered information, I can now provide the final response:

`signature` Field is Properly Initialized and Handled

The `signature` field in `CreateAccountArgs` is correctly initialized and handled throughout the codebase:

- It's initialized as an empty vector in `new_create_account`
- The signature is properly set via the `insert_signature` method
- Signature verification is implemented in `verify_user_signature`
- Empty signatures are validated in the `validate` method
- The field is properly handled in signature-less operations via `without_signature`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Verify that `signature` is properly initialized in all instances
rg 'CreateAccountArgs \{' -A5
```

Length of output: 3564

Script:

```bash
#!/bin/bash
# Check for signature field initialization and usage
rg "signature.*=|signature:" -B2 -A2 crates/common/src/operation.rs
```

Length of output: 4836

Script:

```bash
#!/bin/bash
# Look for methods that might be setting the signature
ast-grep --pattern 'fn $FUNC($ARGS) { $$$ signature $$$ }'
```

Length of output: 66

Script:

```bash
#!/bin/bash
# Check for any sign/verify methods that might be related
rg "sign|verify" -B2 -A2 crates/common/src/operation.rs
```

Length of output: 9236
`112-113`: Verify Integration of `prev_hash` in `AddDataArgs`

Ensure that the `prev_hash` field in `AddDataArgs` is correctly managed in all related operations and that it maintains the integrity of the operation chain.

Run the following script to confirm usage:
✅ Verification successful
Let me gather more information about the `AddDataArgs` structure and its usage.

Let me search for the implementation of `new_add_data` to see how `prev_hash` is handled in the operation creation.

Let me try one final search to find the complete structure and usage of AddDataArgs.
Based on the gathered information, I can now provide a final response:
`prev_hash` Field is Properly Integrated in `AddDataArgs`

The `prev_hash` field in `AddDataArgs` is correctly integrated and managed throughout the codebase:

- The field is properly defined in the `AddDataArgs` struct with appropriate documentation
- It's used consistently in operation chain validation through `HashchainSignatureBundle`
- The field is properly handled in operation creation and verification flows
- The integrity of the operation chain is maintained through proper signature verification and hash chain validation
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Check all instantiations of `AddDataArgs` for `prev_hash` handling
rg 'AddDataArgs \{' -A5
```

Length of output: 1092

Script:

```bash
#!/bin/bash
# Search for AddDataArgs struct definition and its usage
ast-grep --pattern 'struct AddDataArgs { $$$ }'
# Also search for any direct mentions of AddDataArgs
rg "AddDataArgs" -B2 -A2
```

Length of output: 3569

Script:

```bash
#!/bin/bash
# Search for new_add_data implementation
ast-grep --pattern 'fn new_add_data( $$$ )'
# Also search for any test cases involving AddData operations
rg "new_add_data" -B2 -A5
```

Length of output: 73

Script:

```bash
#!/bin/bash
# Search for the complete AddDataArgs struct definition
rg -A 10 "pub struct AddDataArgs \{"
# Also search for any tests involving AddData operations
rg "Operation::AddData" -B2 -A2
```

Length of output: 5778
crates/common/src/tree.rs (1)
Line range hint `263-302`: Ensure proper handling of `prev_hash` in `CreateAccount`.

The `prev_hash` parameter has been added to the `CreateAccount` operation. Ensure that:

- `prev_hash` is properly validated to prevent potential security vulnerabilities like replay attacks.
- The use of `*prev_hash` correctly dereferences the value without causing errors.
- All downstream functions that use `prev_hash`, such as `Hashchain::create_account`, are updated to handle the new parameter appropriately.
Force-pushed from ca8af94 to 99fea23 (Compare)
Actionable comments posted: 7
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- crates/common/src/lib.rs (1 hunks)
- crates/common/src/test_ops.rs (1 hunks)
- crates/node_types/prover/src/prover/tests.rs (5 hunks)
- rustfmt.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/common/src/lib.rs
- rustfmt.toml
🔇 Additional comments (4)
crates/node_types/prover/src/prover/tests.rs (4)
`2-2`: LGTM: Clean import addition for OpsBuilder.

The new import is properly scoped and consistently used throughout the test file.
`20-26`: LGTM: Well-structured operation creation using builder pattern.

The refactoring to use OpsBuilder improves code readability and maintainability. The operations are created in a logical sequence that represents a typical workflow.
`90-98`: LGTM: Well-designed test case for invalid operation handling.

The test effectively verifies the system's behavior when processing operations that are individually valid but collectively invalid (revoking and re-adding a key in the same block). The comments clearly explain the test scenario.
`120-129`: Remove debug print statements as previously discussed.

These debug print statements were previously flagged for removal in the earlier review discussion.
only requested change is removing println
Force-pushed from 99fea23 to dcc8e03 (Compare)
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)
crates/common/src/hashchain.rs (1)
`275-277`: Add documentation for the last_hash method.

Consider adding documentation to explain the method's purpose and return values, especially the zero digest case for empty chains.

```diff
+/// Returns the hash of the last entry in the hashchain, or a zero digest if the chain is empty.
+///
+/// # Returns
+/// - The hash of the last entry if the chain is not empty
+/// - A zero digest (Digest::zero()) if the chain is empty
 pub fn last_hash(&self) -> Digest {
     self.last().map_or(Digest::zero(), |entry| entry.hash)
 }
```
crates/common/src/operation.rs (1)
Line range hint `348-385`: LGTM: Comprehensive signature verification implementation.

The `verify_user_signature` method provides thorough verification:
- Handles all operation types appropriately
- Separates operation and value signature verification
- Provides good error context
Consider adding debug logging for signature verification steps to aid in troubleshooting.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- crates/common/src/hashchain.rs (5 hunks)
- crates/common/src/operation.rs (16 hunks)
- crates/node_types/prover/src/prover/tests.rs (4 hunks)
- rustfmt.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/node_types/prover/src/prover/tests.rs
- rustfmt.toml
🔇 Additional comments (7)
crates/common/src/hashchain.rs (4)
`10-11`: LGTM! Import refactoring improves modularity.

The separation of `Digest` and `Hasher` into distinct modules enhances code organization and maintainability.
`98-98`: LGTM! Proper initialization of prev_hash for service registration.

Setting `prev_hash` to `Digest::zero()` for the initial service registration operation maintains chain integrity.
`284-284`: LGTM! Good use of the last_hash method.

Reusing `last_hash()` follows the DRY principle and improves code maintainability.
`77-87`: Verify impact of parameter reordering in create_account.

While adding `prev_hash` enhances chain integrity, reordering parameters is a breaking change that could affect existing callers.
crates/common/src/operation.rs (3)
`295-296`: LGTM: Helper methods correctly preserve prev_hash.

The `without_challenge` and `without_signature` methods properly maintain chain integrity by preserving the `prev_hash` field when creating new instances.
Also applies to: 307-307, 316-316, 325-325, 337-338, 343-343
`8-11`: Verify the operation chaining implementation across the codebase.

The addition of `prev_hash` fields implements operation chaining, which helps prevent replay attacks and ensures operation order. However, we should verify that this is consistently implemented across the system.
Let's verify the implementation:
Consider documenting the following in the codebase (see the sketch after this list):
- How operation chaining works
- How to verify the chain's integrity
- What happens when the chain is broken
Also applies to: 75-76, 88-89, 112-113, 125-126
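As an illustration of what such documentation might look like (a sketch, not the project's actual docs):

```rust
//! # Operation chaining
//!
//! Every operation carries a `prev_hash` linking it to the previous entry:
//!
//! - Chain-starting operations (account creation, service registration)
//!   set `prev_hash` to `Digest::zero()`.
//! - Every later operation must set `prev_hash` to the hash of the current
//!   chain head; this acts as a nonce against replay attacks.
//! - An operation whose `prev_hash` does not match the chain head breaks
//!   the chain and is rejected during validation.
```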
`143-143`: Verify prev_hash initialization and validation.

The constructors handle `prev_hash` differently:

- `new_create_account` and `new_register_service` initialize it to zero
- Other constructors accept it as a parameter
This suggests that account creation and service registration are considered chain-starting operations.
Let's verify this assumption:
Would you like me to help implement prev_hash validation to ensure chain integrity?
Also applies to: 166-166, 173-173, 201-201, 230-230
✅ Verification successful
The previous results showed zero hash validation but let's get more context about the validation flow in the hashchain implementation to verify our assumptions about chain integrity.
Prev_hash initialization and validation is correctly implemented
The codebase shows proper implementation of prev_hash validation:
- `CreateAccount` and `RegisterService` operations must have `prev_hash` set to zero and can only be the first entry in the chain
- Other operations must have `prev_hash` matching the last hash in the chain
- Validation is enforced in the `validate_new_operation()` method of the hashchain
The different constructor behaviors align with the hashchain validation rules, ensuring chain integrity.
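A condensed sketch of the rules just described (names follow the review's references to `validate_new_operation`, `Digest::zero`, and `last_hash`, and a `prev_hash()` accessor returning a `Digest` is assumed; treat it as an illustration, not the exact source):

```rust
// Sketch: enforce the hashchain's prev_hash rules.
fn validate_new_operation(&self, operation: &Operation) -> Result<()> {
    let is_chain_start = matches!(
        operation,
        Operation::CreateAccount(_) | Operation::RegisterService(_)
    );
    if is_chain_start {
        // Chain-starting operations must carry a zero prev_hash and
        // may only appear as the first entry.
        if operation.prev_hash() != Digest::zero() || !self.entries.is_empty() {
            bail!("chain-starting operation must be the first entry with a zero prev_hash");
        }
    } else if operation.prev_hash() != self.last_hash() {
        // Every other operation must reference the current chain head.
        bail!("prev_hash does not match the last hash in the chain");
    }
    Ok(())
}
```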
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Check for prev_hash validation in the codebase
# Test 1: Look for zero hash validation
rg -A 5 "Digest::zero"
# Test 2: Look for prev_hash validation logic
rg -A 5 "validate.*prev_hash"
```

Length of output: 2262

Script:

```bash
#!/bin/bash
# Let's check for validation logic in hashchain operations
ast-grep --pattern 'fn push($$$) {
  $$$
}'
# Also search for any validation patterns in operation handling
rg -A 10 "validate|verify" crates/common/src/hashchain.rs
```

Length of output: 4594
lfg
beautiful
This serves as a nonce to protect against replay attacks.
Force-pushed from dcc8e03 to 4ca07bd (Compare)
Summary by CodeRabbit
New Features
- `Digest` struct for encapsulating a 32-byte array with hashing functionalities.
- `Hasher` struct for SHA256 hashing, supporting serialization and deserialization.
- `OpsBuilder` for streamlined operation creation in testing.
- `last_hash` method added to `Hashchain` for retrieving the last hash.
- `Operation` structures extended to include a `prev_hash` field for maintaining operation integrity.

Bug Fixes

Chores

- `Digest` import paths updated across various modules to reflect structural changes.
- `Cargo.toml` updated.

Style

- `rustfmt.toml` introduced to set maximum line and chain widths.