fix: consider RLE blocks in zstd compatibility check #64
Conversation
Walkthrough
Adds blkType derivation in checkCompressedDataCompatibilityV7, branching on block type: forces blkSize = 1 for RLE (type 1), errors on reserved (type 3), and otherwise computes blkSize as before. Control flow adjusts data-length validation and block advancement based on blkType.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Caller
    participant F as checkCompressedDataCompatibilityV7
    participant H as Header Parser
    C->>F: validate compressed data (v7)
    F->>H: read header
    H-->>F: blkType, header fields
    alt blkType == 1 (RLE)
        note right of F: Force blkSize = 1
        F->>F: validate data length
        F->>F: advance to next block / check last-block
    else blkType == 3 (reserved)
        F-->>C: return error (reserved block type)
    else other types
        F->>F: compute blkSize from header
        F->>F: validate data length
        F->>F: advance / check last-block
    end
```
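The flow above can be sketched in Go. This is a simplified, hypothetical `checkBlocks` helper, not the codec's actual implementation; the real `checkCompressedDataCompatibilityV7` also parses the zstd frame header and handles more cases:

```go
package main

import (
	"errors"
	"fmt"
)

// checkBlocks is a hypothetical sketch of the block-scanning loop described
// in the walkthrough. Block headers follow RFC 8878: 3 bytes little-endian,
// bit 0 = last-block flag, bits 1-2 = block type, bits 3-23 = block size.
func checkBlocks(data []byte) error {
	for {
		if len(data) < 3 {
			return errors.New("truncated block header")
		}
		hdr := uint32(data[0]) | uint32(data[1])<<8 | uint32(data[2])<<16
		lastBlock := hdr&1 == 1
		blkType := (hdr >> 1) & 3
		blkSize := hdr >> 3
		data = data[3:]
		switch blkType {
		case 1:
			// RLE: the stream carries exactly one payload byte, regardless
			// of the regenerated size encoded in blkSize.
			blkSize = 1
		case 3:
			return errors.New("encounter reserved block type")
		}
		if uint32(len(data)) < blkSize {
			return errors.New("truncated block data")
		}
		data = data[blkSize:]
		if lastBlock {
			if len(data) != 0 {
				return errors.New("trailing data after last block")
			}
			return nil
		}
	}
}

func main() {
	// Last RLE block: regenerated size 100, but only one payload byte (0xAA).
	fmt.Println(checkBlocks([]byte{0x23, 0x03, 0x00, 0xAA})) // <nil>
}
```

Note how the RLE case advances by one byte rather than by the encoded block size; that mismatch is exactly what the original check missed.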
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 0
🧹 Nitpick comments (1)
encoding/da.go (1)
498-507: LGTM! RLE and reserved block handling added correctly.
The logic correctly handles RLE blocks (type 1) by setting blkSize = 1, and properly rejects reserved blocks (type 3). The block type extraction and branching logic are accurate. However, consider this minor improvement to the error message at line 504:

```diff
- return fmt.Errorf("encounter reserved block type at %v", data)
+ return fmt.Errorf("encounter reserved block type (3) at offset %d", len(originalData)-len(data))
```

This would show the position rather than dumping a potentially large data array. To implement this, you'd need to track the original data length before the loop.
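The suggested message could be produced by a small helper like the following sketch; `reservedBlockErr` and its parameters are hypothetical names, with the original length assumed to be captured before the scanning loop:

```go
package main

import "fmt"

// reservedBlockErr reports the byte offset of the offending block header
// instead of dumping the remaining data. originalLen is the input length
// before the loop; remaining is len(data) when the reserved type is hit.
func reservedBlockErr(originalLen, remaining int) error {
	return fmt.Errorf("encounter reserved block type (3) at offset %d", originalLen-remaining)
}

func main() {
	// E.g. a 100-byte input with 40 bytes left puts the bad header at offset 60.
	fmt.Println(reservedBlockErr(100, 40))
}
```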
📒 Files selected for processing (1)
encoding/da.go (1 hunks)
🔇 Additional comments (1)
encoding/da.go (1)
500-507: Raw blocks (type 0) are valid per the ZSTD v7 frame spec.
Raw (uncompressed) blocks are explicitly defined in the format and must be supported, so accepting type 0 here is correct.
jonastheis left a comment
I think this change should be relatively low risk, as we decode batches before submitting them in the rollup relayer anyway.
As discussed in Slack, we should probably replace this check long-term with decompression using the underlying library, to avoid such compatibility issues and improve maintainability.
This reverts commit ef47e96.
* fix: consider RLE blocks in zstd compatibility check (#64)
  * fix compability
  * fmt
  * override behavior
  * Apply suggestions from code review

  Co-authored-by: Ho <[email protected]>
  Co-authored-by: georgehao <[email protected]>
  Co-authored-by: Copilot <[email protected]>
* add galileo CodecV9
* add missing switch cases and tests
* feat: consider RLE blocks in compatibility check starting from V9 (#67), incorporating the #64 fix above

  Co-authored-by: Péter Garamvölgyi <[email protected]>
  Co-authored-by: Ho <[email protected]>
  Co-authored-by: Copilot <[email protected]>
Fix an issue where the new `checkCompressedDataCompatibilityV7` does not consider RLE-type blocks, which become possible once we switch to official zstd for data compression.

After this change, `checkCompressedDataCompatibilityV7` will serve as a sanity check: it will only return an error if there is some issue with the compressed data. Examples include: the data is corrupted, the data is truncated, or the compressed data uses unsupported features (e.g. a dictionary). If the compression module is configured correctly, `checkCompressedDataCompatibilityV7` will never fail.

See the spec for more details: https://datatracker.ietf.org/doc/html/rfc8878
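As an aside, the dictionary case mentioned above can be detected from the frame header descriptor (RFC 8878 §3.1.1.1). This hypothetical `hasDictionaryID` helper is a sketch of that one check, not the codec's actual implementation:

```go
package main

import "fmt"

// hasDictionaryID reports whether a zstd frame declares a dictionary ID.
// Per RFC 8878, the frame header descriptor byte follows the 4-byte magic
// number, and its two low bits form the Dictionary_ID_flag.
func hasDictionaryID(frame []byte) bool {
	if len(frame) < 5 {
		return false
	}
	fhd := frame[4]
	return fhd&0x03 != 0
}

func main() {
	// Magic number 0xFD2FB528 (little-endian) + descriptor with dict flag set.
	fmt.Println(hasDictionaryID([]byte{0x28, 0xB5, 0x2F, 0xFD, 0x01})) // true
	fmt.Println(hasDictionaryID([]byte{0x28, 0xB5, 0x2F, 0xFD, 0x00})) // false
}
```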