
Conversation

@jonastheis (Contributor) commented Nov 13, 2025

Reverts #64

Summary by CodeRabbit

Release Notes

  • Refactor
    • Simplified the internal data-block handling logic in the compression compatibility check for more maintainable, linear processing.

@coderabbitai bot commented Nov 13, 2025

Walkthrough

The checkCompressedDataCompatibilityV7 function in encoding/da.go is simplified by removing conditional block-type handling for RLE and reserved types. Block size is now always derived directly from the 3-byte header without branching, and explicit blkType validation is dropped, relying instead on blkSize and length checks for block traversal.
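
For context, a minimal sketch of what such a branch-free traversal looks like. This is an illustration based on the zstd block-header layout (RFC 8878), not the repository's actual code; the function and error names are hypothetical.

```go
package encoding

import "errors"

// traverseBlocks walks a sequence of zstd blocks linearly, mirroring the
// simplified control flow described above: the block size is always taken
// from the 3-byte little-endian block header, with no per-type branching.
func traverseBlocks(data []byte) error {
	for {
		if len(data) < 3 {
			return errors.New("data ended before the final block")
		}
		// RFC 8878 block header: bit 0 = Last_Block, bits 1-2 = Block_Type,
		// bits 3-23 = Block_Size. Block_Type is deliberately not inspected.
		hdr := uint32(data[0]) | uint32(data[1])<<8 | uint32(data[2])<<16
		lastBlock := hdr&1 == 1
		blkSize := int(hdr >> 3)
		data = data[3:]
		if len(data) < blkSize {
			return errors.New("block size exceeds remaining data")
		}
		data = data[blkSize:]
		if lastBlock {
			return nil
		}
	}
}
```

Note that for an RLE block the header's size field is the regenerated size while the stored payload is a single byte, which is precisely the distinction that #64 introduced and this revert drops.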

Changes

Cohort / File(s): Block-type handling simplification (encoding/da.go)
Summary: Removed the conditional logic distinguishing RLE (blkType == 1) and reserved (blkType == 3) blocks. blkSize is now always computed directly from the 3-byte header, without branching, and the explicit blkType validation against reserved types is gone; control flow is linear, with a single blkSize computation path and a basic length check.
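
For contrast, the removed branching looked roughly like the following. This is a hedged reconstruction from the summary above, not the actual diff; it reuses the header layout from the earlier sketch and assumes the variable names (blkType, blkSize) that the summary mentions.

```go
// blockPayloadSize returns how many bytes of the frame the block occupies.
// For RLE blocks only a single byte is stored, and block type 3 is reserved
// and invalid per RFC 8878.
func blockPayloadSize(hdr uint32) (int, error) {
	switch blkType := (hdr >> 1) & 0x3; blkType {
	case 1: // RLE: header's size field is the regenerated size, payload is 1 byte
		return 1, nil
	case 3: // reserved: must be rejected
		return 0, errors.New("reserved block type")
	default: // raw or compressed: header's size field is the stored size
		return int(hdr >> 3), nil
	}
}
```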

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Critical concern: removing the blkType validation (reserved-type rejection) may affect correctness and error detection; verify that the blkSize and length checks alone are sufficient safeguards against malformed or malicious compressed data.
  • Risk area: the changed error paths, and the way non-standard block types are now treated, warrant testing of edge cases (see the test sketch after this list).
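
A sketch of the kind of edge-case tests this suggests. The header layout follows RFC 8878, but everything else is an assumption: the checker's exact input format (it presumably expects a full zstd frame, not bare blocks) and its error behavior are not shown in this PR, so the assertions are left as comments.

```go
package encoding

import "testing"

// blockHeader builds a 3-byte little-endian zstd block header (RFC 8878):
// bit 0 = Last_Block, bits 1-2 = Block_Type, bits 3-23 = Block_Size.
func blockHeader(last bool, blkType, blkSize uint32) []byte {
	hdr := blkType<<1 | blkSize<<3
	if last {
		hdr |= 1
	}
	return []byte{byte(hdr), byte(hdr >> 8), byte(hdr >> 16)}
}

func TestBlockTraversalEdgeCases(t *testing.T) {
	cases := []struct {
		name string
		data []byte
	}{
		// Header claims 8 payload bytes but only 4 remain: the length check must reject this.
		{"truncated block", append(blockHeader(true, 0, 8), make([]byte, 4)...)},
		// RLE last block: the header's size field (100) is the regenerated size while the
		// stored payload is a single byte, the case the risk note above is concerned with.
		{"rle block", append(blockHeader(true, 1, 100), 0xAA)},
		// Reserved block type, which is no longer rejected explicitly after the revert.
		{"reserved block", append(blockHeader(true, 3, 1), 0x00)},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			// Feed tc.data (wrapped in a full zstd frame, as the checker requires)
			// to checkCompressedDataCompatibilityV7 and assert accept/reject here.
			_ = tc.data
		})
	}
}
```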

Possibly related PRs

  • scroll-tech/da-codec#64: modifies the same checkCompressedDataCompatibilityV7 function and directly addresses block-type handling, taking the opposite approach (it adds the explicit RLE/reserved handling that this PR removes).
  • scroll-tech/da-codec#52: simplifies block handling in checkCompressedDataCompatibilityV7 by dropping blkType validation and relying on blkSize/length checks.

Suggested reviewers

  • noel2004
  • Thegaram

Poem

🐰 Blocks once branched in tangled ways,
Now linear through simpler maze,
Three bytes direct, no types to check,
One path forward—code less wrecked! ✨

Pre-merge checks

❌ Failed checks (1 warning)

Description check: ⚠️ Warning
Explanation: The description is minimal and only states "Reverts #64", without addressing required template sections such as purpose, design rationale, or conventional-commit type confirmation.
Resolution: Expand the description to include the rationale for the revert, confirm the PR type (it appears to be "revert"), and address the breaking-change section as specified in the template.

✅ Passed checks (2 passed)

Title check: ✅ Passed
Explanation: The title clearly describes the main change: a revert of a previous fix related to RLE blocks in the zstd compatibility check, matching the raw summary's removal of the RLE block-handling logic.

Docstring coverage: ✅ Passed
Explanation: No functions were found in the changed files, so the docstring coverage check was skipped.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ef47e96 and 6fb059b.

📒 Files selected for processing (1)
  • encoding/da.go (1 hunks)
🧰 Additional context used
🧠 Learnings
📓 Common learnings
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv2.go:38-47
Timestamp: 2024-10-16T18:43:44.520Z
Learning: In the `DecodeTxsFromBlob` method in `encoding/codecv2.go`, the `compressedBytes` variable will not be empty, so additional error handling for empty `compressedBytes` is unnecessary.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv3.go:90-107
Timestamp: 2024-10-17T04:15:29.946Z
Learning: In the function `NewDABatchFromBytes` in `encoding/codecv3.go`, the code already checks that `len(data) == 193` before slicing, so additional length checks are unnecessary.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv1_types.go:105-116
Timestamp: 2024-10-18T03:40:09.800Z
Learning: The code in `encoding/codecv1_types.go`, specifically the `Encode` method in `daBatchV1`, has been updated. Previous comments regarding hardcoded byte offsets may be outdated.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv2.go:0-0
Timestamp: 2024-10-17T07:33:28.436Z
Learning: In `encoding/codecv2.go`, the `constructBlobPayload` function should remain as is; prefer to keep it without refactoring or additional optimizations.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv3.go:33-35
Timestamp: 2024-10-17T04:16:36.614Z
Learning: In the `NewDABatch` function in `encoding/codecv3.go`, it's acceptable for the last block in the last chunk to have no transactions.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv3_types.go:160-189
Timestamp: 2024-10-16T18:11:21.429Z
Learning: In the Go package `encoding`, within the `daBatchV3` struct in `codecv3_types.go`, the `MarshalJSON` method is only used for providing witness to the prover, and including `skippedL1MessageBitmap` in JSON serialization is not necessary.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv0.go:387-401
Timestamp: 2024-10-17T05:40:03.610Z
Learning: In `DACodecV0`, methods like `EstimateChunkL1CommitBatchSizeAndBlobSize`, `EstimateBatchL1CommitBatchSizeAndBlobSize`, and `JSONFromBytes` are intentionally left as no-ops (returning zero or nil) to maintain a consistent interface across codecs and prevent the caller from needing conditional logic.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv1.go:152-239
Timestamp: 2024-10-17T05:41:29.398Z
Learning: The `constructBlobPayload` method in `encoding/codecv1.go` should remain as is; preferences are to keep it without refactoring or additional optimizations.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/codecv2.go:222-223
Timestamp: 2024-10-17T08:49:05.064Z
Learning: In the function `NewDABatchFromBytes` in `encoding/codecv2.go`, the assignments of `parentBatchHash` and `blobVersionedHash` are correct as implemented.
Learnt from: colinlyguo
Repo: scroll-tech/da-codec PR: 25
File: encoding/interfaces.go:95-108
Timestamp: 2024-10-17T04:13:14.579Z
Learning: In the `CodecFromConfig` function in the Go `encoding/interfaces.go` file, if none of the chain configuration conditions match, it's acceptable to default to returning `&DACodecV0{}` because, in the current logic, we can only deduce the codec version as the function implements, and the logic is complete.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: tests


@Thegaram Thegaram merged commit b167b3f into main Nov 13, 2025
4 checks passed