feat(CodecV7): add CodecV7 to support upgrade 5.2 Euclid phase2 #33
Walkthrough
The changes introduce new methods for codec selection and L1 message handling: a new DACodecV7 is returned when the EuclidV2 configuration is active, and helper functions compute a rolling hash over L1 messages (see the diagrams below).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant CodecSelector as CodecFromConfig/CodecFromVersion
    participant Codec as {DACodecV7, DACodecV6}
    Client->>CodecSelector: Request codec based on version/config
    CodecSelector-->>Client: Check if configuration is EuclidV2
    alt EuclidV2 enabled
        CodecSelector->>Codec: Return DACodecV7 instance
    else
        CodecSelector->>Codec: Return DACodecV6 instance (or others based on version)
    end
```
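As a rough sketch of the selection flow above (the type and function names mirror the diagram, but the config fields and signatures here are assumptions, not the exact da-codec API):

```go
package encoding

import "fmt"

// Stubs standing in for the real codec types.
type Codec interface{}
type DACodecV6 struct{}
type DACodecV7 struct{}

// ChainConfig is a hypothetical stub; the real chain config carries
// fork activation times for each upgrade.
type ChainConfig struct {
	EuclidTime   *uint64
	EuclidV2Time *uint64
}

func forkActive(forkTime *uint64, blockTime uint64) bool {
	return forkTime != nil && *forkTime <= blockTime
}

// CodecFromConfig returns DACodecV7 when EuclidV2 is enabled and falls
// back to DACodecV6 (or earlier codecs, elided here) otherwise.
func CodecFromConfig(cfg *ChainConfig, blockTime uint64) (Codec, error) {
	switch {
	case forkActive(cfg.EuclidV2Time, blockTime):
		return &DACodecV7{}, nil
	case forkActive(cfg.EuclidTime, blockTime):
		return &DACodecV6{}, nil
	default:
		return nil, fmt.Errorf("no codec active at block time %d", blockTime)
	}
}
```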
```mermaid
sequenceDiagram
    participant Block
    participant MessageProcessor as MessageQueueV2ApplyL1MessagesFromBlocks
    participant HashHelper as {messageQueueV2ApplyL1Message, messageQueueV2EncodeRollingHash}
    Block->>MessageProcessor: Provide initial queue hash and blocks
    loop For each block
        MessageProcessor->>HashHelper: Process individual L1MessageTx
        HashHelper-->>MessageProcessor: Updated hash for message
    end
    MessageProcessor->>HashHelper: Encode final rolling hash
    HashHelper-->>MessageProcessor: Final updated hash
    MessageProcessor-->>Block: Return computed rolling hash
```
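The second diagram translates to a fold over all L1 messages. The sketch below uses the diagram's helper names, but the hashing scheme (keccak over prevHash || msgHash) and the final zero-padding normalization are assumptions for illustration, not the canonical rule:

```go
package encoding

import "golang.org/x/crypto/sha3"

type Hash [32]byte

// messageQueueV2ApplyL1Message folds one message hash into the rolling hash.
// Assumed scheme: keccak256(prevHash || msgHash).
func messageQueueV2ApplyL1Message(prev, msgHash Hash) Hash {
	h := sha3.NewLegacyKeccak256()
	h.Write(prev[:])
	h.Write(msgHash[:])
	var out Hash
	copy(out[:], h.Sum(nil))
	return out
}

// messageQueueV2EncodeRollingHash produces the final stored form. Zeroing
// the trailing bytes is a placeholder for the real normalization step.
func messageQueueV2EncodeRollingHash(h Hash) Hash {
	for i := 28; i < 32; i++ {
		h[i] = 0
	}
	return h
}

// MessageQueueV2ApplyL1MessagesFromBlocks mirrors the loop in the diagram:
// fold every L1 message from every block, then encode the result.
func MessageQueueV2ApplyL1MessagesFromBlocks(initial Hash, blocks [][]Hash) Hash {
	rolling := initial
	for _, msgs := range blocks {
		for _, m := range msgs {
			rolling = messageQueueV2ApplyL1Message(rolling, m)
		}
	}
	return messageQueueV2EncodeRollingHash(rolling)
}
```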
…1 message hash from a given set of blocks
Actionable comments posted: 2
🧹 Nitpick comments (3)
encoding/codecv7_test.go (3)
370-385: Be mindful of global patching when running tests in parallel.
Using gomonkey patches at the package level can cause conflicts if tests are run concurrently. Consider applying patches in an isolated way (e.g., within each sub-test or using a narrower scope) to avoid potential race conditions.
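One way to keep patches scoped, assuming the tests use gomonkey v2 (the patched function here is a hypothetical stand-in):

```go
package encoding

import (
	"testing"

	"github.com/agiledragon/gomonkey/v2"
)

// encodePayload is a hypothetical stand-in for whatever package-level
// function the tests actually patch.
func encodePayload(data []byte) ([]byte, error) { return data, nil }

func TestWithScopedPatch(t *testing.T) {
	t.Run("isolated patch", func(t *testing.T) {
		// Patch only for the duration of this sub-test; Reset restores the
		// original even if the sub-test fails. Note gomonkey requires
		// building tests with -gcflags=all=-l to prevent inlining.
		patches := gomonkey.ApplyFunc(encodePayload, func(data []byte) ([]byte, error) {
			return []byte("stub"), nil
		})
		defer patches.Reset()
		// ... assertions against the stubbed behavior ...
	})
	// After the sub-test returns, the original function is back in place.
}
```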
441-445: Potential performance bottleneck with 10k test blocks.
Creating tens of thousands of blocks in a single test can be time-consuming and might exceed resource limits on certain CI environments. Consider splitting these scenarios into smaller test subsets or using a stress test suite separate from normal test runs.
Also applies to: 585-589
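If the heavy cases stay in the main suite, the standard -short gate keeps them out of routine CI runs (a minimal sketch):

```go
package encoding

import "testing"

func TestManyBlocksStress(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping 10k-block stress case in -short mode")
	}
	// ... build the large block set and run the expensive assertions ...
}
```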
953-966: Consider randomizing the test seed or documenting the fixed seed usage.
A fixed seed is excellent for reproducibility, but might hide random edge cases that a rotating seed could discover. If the seed is intentionally fixed, add a comment explaining the rationale.
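A common pattern that keeps the fixed default but makes the choice explicit and reproducible (sketch; the ROTATE_SEED knob is illustrative):

```go
package encoding

import (
	"math/rand"
	"os"
	"testing"
	"time"
)

func TestWithLoggedSeed(t *testing.T) {
	// Fixed seed by default for reproducibility; set ROTATE_SEED locally
	// to hunt for new edge cases with a fresh seed.
	seed := int64(42)
	if os.Getenv("ROTATE_SEED") != "" {
		seed = time.Now().UnixNano()
	}
	t.Logf("random seed: %d", seed) // re-run with this seed to reproduce a failure
	rng := rand.New(rand.NewSource(seed))
	_ = rng // ... generate test inputs from rng ...
}
```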
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
encoding/codecv7_test.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (1)
encoding/codecv7_test.go (1)
933-935: Consider validating the `From` field in the transaction data.
Currently, lines 933-935 comment out the check for `expected.From` vs. `actual.From`. If the validity of the `From` field is critical, you may want to add an explicit test or note why it's omitted.
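If the check is reinstated, it reduces to a single assertion; the sketch below uses testify and a placeholder transaction type, since the real decoded type is defined elsewhere in the tests:

```go
package encoding

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// txData is a placeholder for the decoded transaction type used in the tests.
type txData struct{ From string }

// assertTxSender re-enables the commented-out sender comparison.
func assertTxSender(t *testing.T, expected, actual txData) {
	t.Helper()
	require.Equal(t, expected.From, actual.From, "transaction sender mismatch")
}
```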
Actionable comments posted: 2
🧹 Nitpick comments (7)
encoding/codecv7.go (2)
73-80: Consider clarifying chunks vs. batch usage.
Though the docstring notes that in DACodecV7 "there is no notion of chunks," the code still calls `NewDABlock` and does chunk-like logic in other places. It might be helpful to document how or why chunk-related code paths remain relevant in a no-chunk environment.
333-345: Finalize gas estimation logic post-contract implementation.
There is a TODO indicating that the gas cost breakdown might change. Ensure these fixed cost additions align with contract state once the contract is finalized, and remove the placeholder after verification.
encoding/codecv7_test.go (2)
387-393: Deduplicate therepeatfunction.
The `repeat` helper is also declared at lines 533–539 with identical logic. Extract it into a shared test utility to avoid duplication and improve maintainability.
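Extracted into a shared helper, the duplication disappears; the signature below is assumed from how the tests appear to use it:

```go
package encoding

import "bytes"

// repeat builds a slice of count copies of element, replacing the two
// identical local declarations in the test file.
func repeat(element byte, count int) []byte {
	return bytes.Repeat([]byte{element}, count)
}
```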
650-658: Clarify minimal batch validation in compression test.
Passing a single-byte array to `checkCompressedDataCompatibility` bypasses deeper batch validation. Add an explanatory comment to avoid confusion for future maintainers.
encoding/codecv7_types.go (3)
122-123: Return early when constructing empty batch.
This code ensures that the data length matches `daBatchV7EncodedLength`; consider clarifying the expected usage for an empty batch scenario or adding logs for easier debugging if the length check fails.
218-223: Document usage in decoding stage fields.
`blocks` are used for encoding, whereas `daBlocks` is for decoding. Consider adding doc comments clarifying the difference, to avoid confusion about which fields are relevant after a payload is decoded.
579-605: Sporadic chunk checks.
This consistency check is crucial for bridging older versions of the relayer with V7. Document that it’s a transitional approach and clarify when it becomes obsolete, so it can be removed in future.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
encoding/codecv7.go (1 hunks)
encoding/codecv7_test.go (1 hunks)
encoding/codecv7_types.go (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
encoding/codecv7_types.go (2)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:152-154
Timestamp: 2024-11-12T12:17:31.140Z
Learning: In the `daBatchV1` struct, the `BlobBytes()` method is intentionally returning `nil`.
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
encoding/codecv7.go (3)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1.go:152-239
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The `constructBlobPayload` method in `encoding/codecv1.go` should remain as is; preferences are to keep it without refactoring or additional optimizations.
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv2.go:0-0
Timestamp: 2024-11-12T12:17:31.140Z
Learning: In `encoding/codecv2.go`, the `constructBlobPayload` function should remain as is; prefer to keep it without refactoring or additional optimizations.
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (8)
encoding/codecv7.go (2)
218-238: Confirm non-beneficial compression approach.
When compressed payloads are the same size or larger than the original data, the code silently proceeds without compression, returning `nil` for compressed data. This approach seems correct, but ensure that an explicit log or comment clarifies that defaulting to uncompressed is intended.
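Made explicit, the intended fallback might look like this (sketch; the compressor is passed in to avoid assuming a particular library API):

```go
package encoding

// compressIfBeneficial returns the compressed payload only when it is
// strictly smaller than the input; a nil result signals that the caller
// should intentionally fall back to the uncompressed bytes.
func compressIfBeneficial(data []byte, compress func([]byte) ([]byte, error)) ([]byte, error) {
	compressed, err := compress(data)
	if err != nil {
		return nil, err
	}
	if len(compressed) >= len(data) {
		// Compression is not beneficial; use the raw data instead.
		return nil, nil
	}
	return compressed, nil
}
```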
349-361: Validate JSON output for unusual edge cases.
While `JSONFromBytes` handles typical batch data properly, consider adding explicit tests for partially filled or corrupted batch bytes to confirm that JSON output either fails gracefully or rejects malformed data.
encoding/codecv7_test.go (4)
41-41: Handle missing test fixture.
Tests rely on `testdata/blockTrace_02.json`, but if the file is absent or incorrectly named, the test will fail. Ensure the fixture file is present and correctly placed.
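A small guard turns a missing fixture into an immediate, descriptive failure (standard library only; the helper name is illustrative):

```go
package encoding

import (
	"os"
	"path/filepath"
	"testing"
)

// readFixture fails fast with a clear message if a testdata file is missing.
func readFixture(t *testing.T, name string) []byte {
	t.Helper()
	path := filepath.Join("testdata", name)
	data, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("missing or unreadable test fixture %s: %v", path, err)
	}
	return data
}
```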
473-475: Test large input edge case.
This test confirms behavior when data is 1 byte too big. Consider additional validation or a more direct check on the boundary to ensure the error path remains stable across future refactors.
585-589: Validate scaling with thousands of blocks.
The test attempts 10,000 blocks and expects an error if uncompressed data exceeds blob size. This is good coverage but be mindful of test execution time in CI. If it grows more, consider a faster approach or sampling.
[performance]
809-895: Confirm correctness of point evaluation proofs.
The tests for `BlobDataProofForPointEvaluation` are thorough, but any mismatch in the KZG constants or changes to the modulus could break them. Keep an eye on library updates and confirm that the tested results remain valid.encoding/codecv7_types.go (2)
139-165: Validate presence of blob and blobBytes in proof generation.
When calling `BlobDataProofForPointEvaluation`, the code expects `b.blob` and `b.blobBytes` to be non-nil. If either is missing, a nil-pointer panic or invalid proof may result. Add defensive checks or error returns.
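A defensive version could guard both fields up front (sketch; the struct is a minimal stub mirroring only the fields named in the comment):

```go
package encoding

import "errors"

// daBatchV7Stub mirrors only the fields relevant to the nil checks.
type daBatchV7Stub struct {
	blob      *[131072]byte // stands in for *kzg4844.Blob
	blobBytes []byte
}

// checkBlobFields returns an error instead of letting ComputeProof or
// Keccak256 panic on nil inputs.
func (b *daBatchV7Stub) checkBlobFields() error {
	if b.blob == nil {
		return errors.New("blob is nil")
	}
	if b.blobBytes == nil {
		return errors.New("blobBytes is nil")
	}
	return nil
}
```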
310-314: Verify negative transaction counts.
This safeguard ensures L1 messages do not exceed total transactions for the block. Good check. Confirm that upstream usage never artificially sets `NumL1Messages > NumTransactions` for normal flows.
roynalnaruto left a comment
Looks good. However, I think it's best to add some compatibility tests between the circuit side and da-codec, just to avoid any implementation mismatch.
5fd8356
Actionable comments posted: 0
♻️ Duplicate comments (1)
encoding/codecv7_types.go (1)
502-508: 🛠️ Refactor suggestion: Add input validation for size encoding.
Ensuring that `data` does not exceed `0xFFFFFF` helps prevent overflow. For example:

```diff
 func encodeSize3Bytes(data uint32) []byte {
+	if data > 0xFFFFFF {
+		panic("data exceeds 3-byte limit")
+	}
 	return []byte{byte(data >> 16), byte(data >> 8), byte(data)}
 }
```
🧹 Nitpick comments (2)
encoding/codecv7_types.go (2)
98-109: Consider minimal validation for constructor inputs.
Currently, `newDABatchV7` does not validate parameters like `version` or `batchIndex`. Adding at least basic validation (e.g., ensuring `version` is a known enum) can help catch configuration errors early.
139-165: Handle nil blob references to avoid potential runtime errors.
If `b.blob` or `b.blobBytes` is `nil`, calls to `ComputeProof` or `crypto.Keccak256(b.blobBytes)` could panic. Consider returning an error if these fields are unexpectedly nil.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
encoding/codecv7_test.go (1 hunks)
encoding/codecv7_types.go (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
encoding/codecv7_types.go (2)
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:152-154
Timestamp: 2024-11-12T12:17:31.140Z
Learning: In the `daBatchV1` struct, the `BlobBytes()` method is intentionally returning `nil`.
Learnt from: colinlyguo
PR: scroll-tech/da-codec#25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-11-12T12:17:31.140Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (4)
encoding/codecv7_types.go (1)
240-273: Beware of large memory usage during encoding.
Appending all block data and L2 transactions into a single buffer can lead to high memory usage for large batches. A chunk-based or streamed approach may improve scalability.
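One shape a streamed approach could take (sketch; the per-block serializer is hypothetical and the real payload layout is defined by the codec):

```go
package encoding

import (
	"bufio"
	"io"
)

// blockEncoder is a hypothetical per-block serializer.
type blockEncoder interface {
	encodeTo(w io.Writer) error
}

// encodeBlocksStreamed writes each block's payload directly to w instead of
// accumulating the entire batch in one in-memory buffer.
func encodeBlocksStreamed(w io.Writer, blocks []blockEncoder) error {
	bw := bufio.NewWriter(w) // amortize many small writes
	for _, b := range blocks {
		if err := b.encodeTo(bw); err != nil {
			return err
		}
	}
	return bw.Flush()
}
```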
encoding/codecv7_test.go (3)
41-41: Action required: missing JSON fixture detected.
The references to `testdata/blockTrace_0X.json` could fail if the fixtures are absent. Ensure the specified files exist or remove/update the tests accordingly.
Also applies to: 47-47, 53-53, 59-59, 67-67, 73-73, 87-87, 136-136, 145-145, 146-146, 147-147, 241-241, 253-253, 265-265, 278-278, 279-279, 289-289, 296-296, 311-311
387-393: Deduplicate the repeated `repeat` function.
The same helper is declared in multiple places. Extracting it into a shared test utility promotes reuse and consistency.
Also applies to: 533-539
1-967: Overall test suite is thorough.
Coverage appears comprehensive, and the test cases handle an array of edge conditions and expected errors. Good job!
Purpose or design rationale of this PR
This PR adds support for `CodecV7` for the `EuclidV2` hardfork.

PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Breaking change label
Does this PR have the `breaking-change` label?

Summary by CodeRabbit
New Features
• Introduced a new data processing version that improves how blobs and batches are created and decoded.
• Enhanced support for L1 message handling with optimized processing and stronger error detection.
• Added new methods for encoding and decoding functionalities in the latest codec version.
• Implemented a new interface for blob payload management.
• Added additional methods for estimating calldata sizes and gas costs.
Tests
• Expanded test coverage to validate encoding/decoding, rolling hash computations, and configuration changes.
• Introduced new test cases for codec versions and configurations.
Chores
• Updated key dependencies to further boost performance and stability.