feat: Add batch extender + persistent fixes #45
Conversation
Walkthrough
The updates refactor transaction management to emphasize batch processing.
Actionable comments posted: 1
🧹 Nitpick comments (8)
sequencing/transaction_queue.go (5)
13-16: Clarify interface method intent.
The Head(max uint64) and Tail(max uint64) methods are somewhat ambiguous in purpose. Adding short doc comments clarifying what each method is expected to return (e.g., a transaction prefix or suffix within a size limit) would make this interface easier to use and maintain.
74-77: Avoid potentially emptying the queue incorrectly.
In edge cases where batch is extremely large, repeatedly slicing until it fits might empty the entire queue. Ensure that's the intended fallback.
95-115: Consider atomic batch removal vs. partial skipping.
If a single transaction retrieval fails (except for the head/tail), the entire batch is discarded. Instead, you might want a more granular approach (e.g., skipping only missing transactions) or a rollback to maintain atomic batch integrity.
117-144: Handle partial loading failures gracefully.
In db.View, failing on one stored transaction halts the entire loading. Consider logging or skipping only the corrupted entries instead of failing the entire load.
146-165: Optimize repeated DB writes.
Re-inserting every transaction in batch.Transactions might become expensive. If the queue is large, consider batch writes or a more efficient design.
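If that becomes a real cost, Badger's WriteBatch can group the re-inserts into far fewer commits. The sketch below only illustrates that API under assumed names; the "tx" key prefix and the sha256 stand-in for GetTransactionHash are not the repository's actual code.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

// reinsertTransactions writes a slice of transactions back to Badger using a
// single WriteBatch instead of one db.Update call per transaction.
func reinsertTransactions(db *badger.DB, txs [][]byte) error {
	wb := db.NewWriteBatch()
	defer wb.Cancel() // no-op if Flush already committed the writes

	for _, tx := range txs {
		h := sha256.Sum256(tx) // stand-in for GetTransactionHash
		key := append([]byte("tx"), h[:]...)
		if err := wb.Set(key, tx); err != nil {
			return err
		}
	}
	// Flush commits all pending writes in as few Badger transactions as possible.
	return wb.Flush()
}

func main() {
	db, err := badger.Open(badger.DefaultOptions("").WithInMemory(true))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := reinsertTransactions(db, [][]byte{[]byte("tx1"), []byte("tx2")}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("re-inserted transactions in one write batch")
}
```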
sequencing/sequencer.go (1)
160-160: Add doc for new extender parameter.
The NewSequencer function now accepts extender BatchExtender. Consider documenting the expected usage or assumptions about this parameter, especially around concurrency or error handling.
sequencing/sequencer_test.go (2)
55-55: Test with a real or mock BatchExtender.
Currently, these tests pass nil for the extender. Consider a separate test using a simple mock BatchExtender to confirm the behavior of the sequencer when head/tail transactions are used (see the test sketch after these items).
72-72: Maintain consistency with the first test.
Here too, NewSequencer uses nil for extender. Expand coverage by verifying if BatchExtender logic influences transaction submission.
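A minimal in-package test along these lines could cover that. The mockExtender type is hypothetical, and the NewTransactionQueue/GetNextBatch signatures are taken from the code quoted later in this review, so treat it as a sketch rather than a drop-in test.

```go
package sequencing

import (
	"bytes"
	"testing"

	badger "github.com/dgraph-io/badger/v4"
)

// mockExtender is a hypothetical BatchExtender returning fixed head/tail bytes.
type mockExtender struct{}

func (m *mockExtender) Head(max uint64) ([]byte, error) { return []byte("head"), nil }
func (m *mockExtender) Tail(max uint64) ([]byte, error) { return []byte("tail"), nil }

func TestGetNextBatchWithExtender(t *testing.T) {
	db, err := badger.Open(badger.DefaultOptions("").WithInMemory(true))
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	tq := NewTransactionQueue(&mockExtender{})
	if err := tq.AddTransaction([]byte("tx1"), db); err != nil {
		t.Fatal(err)
	}

	batch := tq.GetNextBatch(1024, db)
	if got := len(batch.Transactions); got != 3 {
		t.Fatalf("expected head + tx1 + tail, got %d transactions", got)
	}
	if !bytes.Equal(batch.Transactions[0], []byte("head")) {
		t.Errorf("expected head first, got %q", batch.Transactions[0])
	}
	if !bytes.Equal(batch.Transactions[2], []byte("tail")) {
		t.Errorf("expected tail last, got %q", batch.Transactions[2])
	}
}
```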
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- sequencing/sequencer.go (3 hunks)
- sequencing/sequencer_test.go (2 hunks)
- sequencing/transaction_queue.go (1 hunks)
🔇 Additional comments (5)
sequencing/transaction_queue.go (3)
18-23: Confirm slice usage under concurrency.
queue []sequencing.Tx is effectively used as a queue but is appended/trimmed while guarded by a mutex. This is generally safe. Just confirm there's no scenario in which partial writes or re-slicing occur without the lock that might cause concurrency issues.
52-77: Verify partial batch truncation logic.
While removing transactions until the new batch fits within max, the logic discards transactions from the end of batch. Confirm that partial batch truncation is acceptable and won't break any higher-level guarantee. In certain use cases, it might be preferable to reject the entire batch if it exceeds the limit.
167-173: Method name matches usage.
totalBytes is straightforward and helps keep the code clear. No issues found.
sequencing/sequencer.go (2)
190-190: Initialize TransactionQueue with extender.
The addition of NewTransactionQueue(extender) seamlessly injects the batch extender logic. Verify that all code paths rely on this queue for transaction handling.
339-350: Evaluate re-adding failed batch.
When submitBatchToDA fails, the code re-adds the batch to the queue. Ensure repeated failures of the same batch do not cause an infinite loop.
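One simple guard, sketched below with made-up names (batchAttempt, maxBatchRetries), is to carry a retry counter with each re-queued batch and drop it after a few failures. This is an illustration of the idea, not the PR's code.

```go
package main

import (
	"errors"
	"fmt"
)

const maxBatchRetries = 3 // hypothetical limit

// batchAttempt pairs a batch payload with the number of failed submissions.
type batchAttempt struct {
	txs      [][]byte
	attempts int
}

// resubmitOrDrop re-queues a failed batch until the retry budget is exhausted.
func resubmitOrDrop(queue []batchAttempt, failed batchAttempt) ([]batchAttempt, error) {
	failed.attempts++
	if failed.attempts >= maxBatchRetries {
		// Dropping (or parking) the batch prevents an infinite resubmit loop.
		return queue, errors.New("batch dropped after repeated DA submission failures")
	}
	// Put it back at the front so ordering is preserved, mirroring AddBatchToTheTop.
	return append([]batchAttempt{failed}, queue...), nil
}

func main() {
	queue := []batchAttempt{}
	failed := batchAttempt{txs: [][]byte{[]byte("tx1")}}

	var err error
	for i := 0; i < 4; i++ {
		queue, err = resubmitOrDrop(queue, failed)
		if err != nil {
			fmt.Println(err)
			break
		}
		failed = queue[0]
		queue = queue[1:]
	}
}
```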
// AddTransaction adds a new transaction to the queue
func (tq *TransactionQueue) AddTransaction(tx sequencing.Tx, db *badger.DB) error {
	tq.mu.Lock()
	tq.queue = append(tq.queue, tx)
	tq.mu.Unlock()

	// Store transaction in BadgerDB
	err := db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte(GetTransactionHash(tx)), tx)
	})
	return err
}
🛠️ Refactor suggestion
Handle DB failure to keep in-memory and DB states synced.
If db.Update fails, the transaction remains in tq.queue but not in BadgerDB, leading to inconsistent states. Consider rolling back the in-memory append or adding a retry mechanism.
 func (tq *TransactionQueue) AddTransaction(tx sequencing.Tx, db *badger.DB) error {
 	tq.mu.Lock()
 	tq.queue = append(tq.queue, tx)
 	tq.mu.Unlock()
 	err := db.Update(func(txn *badger.Txn) error {
 		return txn.Set([]byte(GetTransactionHash(tx)), tx)
 	})
+	if err != nil {
+		// rollback from memory queue on error
+		tq.mu.Lock()
+		tq.queue = tq.queue[:len(tq.queue)-1]
+		tq.mu.Unlock()
+	}
 	return err
 }
Welcome to Codecov 🎉 Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. Thanks for integrating Codecov - We've got you covered ☂️
Actionable comments posted: 1
🧹 Nitpick comments (4)
sequencing/transaction_queue.go (3)
15-19: Add documentation for interface methods.
The interface methods Head and Tail need documentation to explain their purpose, parameters, and return values. Add documentation like this:
 // BatchExtender is an interface for extending a batch of transactions
 type BatchExtender interface {
+	// Head returns the transaction to be added at the beginning of the batch.
+	// max is the maximum size in bytes that the transaction can be.
+	// Returns the transaction bytes and any error encountered.
 	Head(max uint64) ([]byte, error)
+	// Tail returns the transaction to be added at the end of the batch.
+	// max is the maximum size in bytes that the transaction can be.
+	// Returns the transaction bytes and any error encountered.
 	Tail(max uint64) ([]byte, error)
 }
21-26: Add documentation for structure fields.
The TransactionQueue structure fields need documentation to explain their purpose. Add documentation like this:
 // TransactionQueue is a queue of transactions
 type TransactionQueue struct {
+	// queue holds the list of transactions in FIFO order
 	queue    []sequencing.Tx
+	// mu protects concurrent access to the queue
 	mu       sync.Mutex
+	// extender provides functionality to extend batches with head and tail transactions
 	extender BatchExtender
 }
170-176: Add documentation for totalBytes function.
The totalBytes function needs documentation to explain its purpose and parameters. Add documentation like this:
+// totalBytes calculates the total number of bytes in a slice of byte slices.
+// It returns the sum of the lengths of all byte slices in the input.
 func totalBytes(data [][]byte) int {
sequencing/sequencer.go (1)
159-160: Update NewSequencer documentation.
The function documentation needs to be updated to explain the new extender parameter. Add documentation like this:
-// NewSequencer ...
+// NewSequencer creates a new Sequencer instance.
+// Parameters:
+//   - daAddress: Address of the DA layer
+//   - daAuthToken: Authentication token for the DA layer
+//   - daNamespace: Namespace for the DA layer
+//   - rollupId: ID of the rollup
+//   - batchTime: Time between batch submissions
+//   - metrics: Metrics collector
+//   - dbPath: Path to the database
+//   - extender: BatchExtender for adding transactions at the beginning or end of batches
+// Returns a new Sequencer instance and any error encountered.
 func NewSequencer(daAddress, daAuthToken string, daNamespace []byte, rollupId []byte, batchTime time.Duration, metrics *Metrics, dbPath string, extender BatchExtender) (*Sequencer, error) {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (4)
- go.mod (1 hunks)
- sequencing/sequencer.go (4 hunks)
- sequencing/sequencer_test.go (3 hunks)
- sequencing/transaction_queue.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- sequencing/sequencer_test.go
🧰 Additional context used
🪛 GitHub Check: codecov/patch
sequencing/transaction_queue.go
[warning] 63-71: sequencing/transaction_queue.go#L63-L71
Added lines #L63 - L71 were not covered by tests
[warning] 78-79: sequencing/transaction_queue.go#L78-L79
Added lines #L78 - L79 were not covered by tests
[warning] 86-87: sequencing/transaction_queue.go#L86-L87
Added lines #L86 - L87 were not covered by tests
[warning] 90-91: sequencing/transaction_queue.go#L90-L91
Added lines #L90 - L91 were not covered by tests
[warning] 104-108: sequencing/transaction_queue.go#L104-L108
Added lines #L104 - L108 were not covered by tests
[warning] 113-114: sequencing/transaction_queue.go#L113-L114
Added lines #L113 - L114 were not covered by tests
[warning] 133-141: sequencing/transaction_queue.go#L133-L141
Added lines #L133 - L141 were not covered by tests
[warning] 150-164: sequencing/transaction_queue.go#L150-L164
Added lines #L150 - L164 were not covered by tests
[warning] 167-167: sequencing/transaction_queue.go#L167
Added line #L167 was not covered by tests
🔇 Additional comments (5)
sequencing/transaction_queue.go (3)
42-53: Handle DB failure to keep in-memory and DB states synced.
If db.Update fails, the transaction remains in tq.queue but not in BadgerDB, leading to inconsistent states.
149-168: Add test coverage for AddBatchBackToQueue.
The entire AddBatchBackToQueue method (lines 150-164) is not covered by tests. Run the following script to verify test coverage:
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 150-164: sequencing/transaction_queue.go#L150-L164
Added lines #L150 - L164 were not covered by tests
[warning] 167-167: sequencing/transaction_queue.go#L167
Added line #L167 was not covered by tests
120-147: Add test coverage for LoadFromDB error paths.
The error handling paths in LoadFromDB are not covered by tests (lines 133-141). Run the following script to verify test coverage:
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 133-141: sequencing/transaction_queue.go#L133-L141
Added lines #L133 - L141 were not covered by tests
sequencing/sequencer.go (1)
337-356: LGTM!
The function correctly handles batch submission and error cases, including proper reversion of failed batches.
go.mod (1)
8-8: Verify Badger v4 upgrade compatibility.
The upgrade from Badger v3 to v4 is a major version change that might include breaking changes.
✅ Verification successful
🌐 Web query:
What are the breaking changes in Badger v4 compared to v3?
Length of output: 5008
Action: Verify Badger v4 Upgrade Compatibility
Based on the web query results, the upgrade to Badger v4 introduces several breaking changes compared to v3 that require careful handling. In particular, note that:
- Incompatible Data Format: Badger v4 uses a new external magic number for tracking dependencies, meaning that data written with v3 may need migration.
- Increased Minimum Go Version: The upgrade requires Go 1.19 or later.
- API Adjustments: Certain configuration options (for example, WithKeepL0InMemory, WithLoadBloomsOnOpen, and the like) are removed or altered, and transaction handling now enforces stricter conflict detection.
- Metrics System Update: Metric names have been revised (e.g., renaming of disk-read metrics), which may impact monitoring setups.
- Dependency Changes: Upgraded dependencies (like the switch to Google’s protobuf) could also affect integration.
These changes imply that simply updating the version in your go.mod file is not sufficient: you will need to review and adjust your code (and possibly perform data migrations) to accommodate these breaking changes.
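As a quick sanity check of the dependency bump itself, note that Badger's module path is versioned, so every import changes from .../badger/v3 to .../badger/v4 alongside go.mod. A minimal program like the one below only confirms that the v4 module resolves and opens a database; it says nothing about whether this repository's usage needs further changes.

```go
package main

import (
	"log"

	// The v3 import was "github.com/dgraph-io/badger/v3"; v4 lives at a new path.
	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Open an in-memory instance just to confirm the v4 module resolves and runs.
	db, err := badger.Open(badger.DefaultOptions("").WithInMemory(true))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	log.Println("badger v4 opened successfully")
}
```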
// GetNextBatch extracts a batch of transactions from the queue
func (tq *TransactionQueue) GetNextBatch(max uint64, db *badger.DB) sequencing.Batch {
	tq.mu.Lock()
	defer tq.mu.Unlock()

	// Add head of batch if extender is provided, also ask for the tail of the batch
	var head, tail []byte
	if tq.extender != nil {
		var err error
		head, err = tq.extender.Head(max)
		if err != nil {
			return sequencing.Batch{Transactions: nil}
		}
		tail, err = tq.extender.Tail(max)
		if err != nil {
			return sequencing.Batch{Transactions: nil}
		}
	}

	batch := tq.queue
	headTailSize := len(head) + len(tail)

	for uint64(totalBytes(batch)+headTailSize) > max {
		batch = batch[:len(batch)-1]
	}

	// batchLen before adding head and tail, to remove the correct number of transactions from the queue
	batchLen := len(batch)

	// Add head and tail of the batch
	if head != nil {
		batch = append([][]byte{head}, batch...)
	}

	if tail != nil {
		batch = append(batch, tail)
	}

	if len(batch) == 0 {
		return sequencing.Batch{Transactions: nil}
	}

	// Retrieve transactions from BadgerDB and remove processed ones
	for i, tx := range batch {
		txHash := GetTransactionHash(tx)
		err := db.Update(func(txn *badger.Txn) error {
			// Get and then delete the transaction from BadgerDB
			_, err := txn.Get([]byte(txHash))
			if err != nil {
				// If the transaction not found is the head or tail, skip it as they are not in the queue
				if errors.Is(err, badger.ErrKeyNotFound) && (i == 0 || i == len(batch)-1) {
					return nil
				}
				return err
			}
			return txn.Delete([]byte(txHash)) // Remove processed transaction
		})
		if err != nil {
			return sequencing.Batch{Transactions: nil} // Return empty batch if any transaction retrieval fails
		}
	}
	tq.queue = tq.queue[batchLen:]
	return sequencing.Batch{Transactions: batch}
}
💡 Verification agent
🧩 Analysis chain
Add test coverage for critical code paths.
Several critical code paths in GetNextBatch are not covered by tests:
- Error handling for Head and Tail methods (lines 63-71)
- Batch size management (lines 78-79)
- Head and tail transaction handling (lines 86-87, 90-91)
- Error handling for transaction retrieval (lines 104-108, 113-114)
Run the following script to verify test coverage:
🏁 Script executed:
#!/bin/bash
# Description: Check test coverage for GetNextBatch method.
# Search for test files that test GetNextBatch
rg -A 5 'func.*Test.*GetNextBatch' sequencing/
Length of output: 1534
Add test cases to cover all critical branches in TransactionQueue.GetNextBatch.
The current tests in sequencing/sequencer_test.go focus on the Sequencer-level behavior (e.g., last batch mismatches and a successful batch) but do not explicitly exercise several critical paths in TransactionQueue.GetNextBatch. In particular, you should add tests that directly verify:
- Error handling when extender.Head and extender.Tail return an error.
- The batch trimming loop when the summed size of transactions (plus head and tail) exceeds the maximum allowed.
- Correct behavior when the head or tail transactions are not found in the BadgerDB (i.e., the logic that skips deletion for these cases).
- Handling of errors returned by db.Update during transaction retrieval and deletion.
Addressing these gaps will ensure that all critical branches in the method are properly verified.
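For the first gap, the extender error path, a sketch might look like this: an in-package test with a hypothetical failingExtender type, where the empty-batch expectation follows the GetNextBatch code quoted above.

```go
package sequencing

import (
	"errors"
	"testing"

	badger "github.com/dgraph-io/badger/v4"
)

// failingExtender is a hypothetical BatchExtender whose Head always errors.
type failingExtender struct{}

func (f *failingExtender) Head(max uint64) ([]byte, error) {
	return nil, errors.New("head unavailable")
}
func (f *failingExtender) Tail(max uint64) ([]byte, error) { return []byte("tail"), nil }

func TestGetNextBatchExtenderError(t *testing.T) {
	db, err := badger.Open(badger.DefaultOptions("").WithInMemory(true))
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	tq := NewTransactionQueue(&failingExtender{})
	if err := tq.AddTransaction([]byte("tx1"), db); err != nil {
		t.Fatal(err)
	}

	// Per the quoted implementation, an extender error should yield an empty batch.
	batch := tq.GetNextBatch(1024, db)
	if batch.Transactions != nil {
		t.Fatalf("expected nil transactions on extender error, got %v", batch.Transactions)
	}
}
```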
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 63-71: sequencing/transaction_queue.go#L63-L71
Added lines #L63 - L71 were not covered by tests
[warning] 78-79: sequencing/transaction_queue.go#L78-L79
Added lines #L78 - L79 were not covered by tests
[warning] 86-87: sequencing/transaction_queue.go#L86-L87
Added lines #L86 - L87 were not covered by tests
[warning] 90-91: sequencing/transaction_queue.go#L90-L91
Added lines #L90 - L91 were not covered by tests
[warning] 104-108: sequencing/transaction_queue.go#L104-L108
Added lines #L104 - L108 were not covered by tests
[warning] 113-114: sequencing/transaction_queue.go#L113-L114
Added lines #L113 - L114 were not covered by tests
Head branch was pushed to by a user without write access
Actionable comments posted: 2
🧹 Nitpick comments (4)
sequencing/batch_queue.go (4)
10-11: Improve struct documentation.
The comment "BatchQueue ..." is incomplete. Consider adding a detailed description of the struct's purpose and thread-safety guarantees.
-// BatchQueue ...
+// BatchQueue provides a thread-safe queue for managing sequencing.Batch instances
+// with persistent storage capabilities using BadgerDB.
23-24: Fix incorrect method documentation.
The comment incorrectly refers to "transaction" instead of "batch".
-// AddBatch adds a new transaction to the queue
+// AddBatch adds a new batch to the queue and persists it to BadgerDB
49-50: Fix method documentation.
The comment is incorrect and doesn't follow the exported method documentation format.
-// AddBatch adds a new transaction to the queue
+// AddBatchToTheTop prepends a new batch to the front of the queue and persists it to BadgerDB
🧰 Tools
🪛 golangci-lint (1.62.2)
[warning] 49-49: exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..."
(revive)
🪛 GitHub Check: lint / golangci-lint
[failure] 49-49:
exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..." (revive)
🪛 GitHub Actions: CI and Release
[error] 49-49: exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..." (revive)
49-73: Consider extracting common batch persistence logic.
The batch persistence logic is duplicated between AddBatch and AddBatchToTheTop. Consider extracting it into a private helper method:
+func (bq *BatchQueue) persistBatch(batch sequencing.Batch, db *badger.DB) error {
+	// Get the hash and bytes of the batch
+	h, err := batch.Hash()
+	if err != nil {
+		return err
+	}
+
+	// Marshal the batch
+	batchBytes, err := batch.Marshal()
+	if err != nil {
+		return err
+	}
+
+	// Store the batch in BadgerDB
+	err = db.Update(func(txn *badger.Txn) error {
+		key := append(keyPrefixBatch, h...)
+		return txn.Set(key, batchBytes)
+	})
+	return err
+}
🧰 Tools
🪛 golangci-lint (1.62.2)
[warning] 49-49: exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..."
(revive)
🪛 GitHub Check: lint / golangci-lint
[failure] 49-49:
exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..." (revive)
🪛 GitHub Actions: CI and Release
[error] 49-49: exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..." (revive)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- sequencing/batch_queue.go (1 hunks)
- sequencing/sequencer.go (4 hunks)
- sequencing/transaction_queue.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- sequencing/transaction_queue.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
sequencing/batch_queue.go
6-6: File is not goimports-ed with -local github.com/rollkit
(goimports)
[warning] 49-49: exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..."
(revive)
sequencing/sequencer.go
36-36: var keyLastBatchHash is unused
(unused)
37-37: var keyPrefixSeenBatch is unused
(unused)
🪛 GitHub Check: lint / golangci-lint
sequencing/batch_queue.go
[failure] 49-49:
exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..." (revive)
[failure] 6-6:
File is not properly formatted (goimports)
sequencing/sequencer.go
[failure] 36-36:
var keyLastBatchHash is unused (unused)
[failure] 37-37:
var keyPrefixSeenBatch is unused (unused)
🪛 GitHub Actions: CI and Release
sequencing/batch_queue.go
[error] 49-49: exported: comment on exported method BatchQueue.AddBatchToTheTop should be of the form "AddBatchToTheTop ..." (revive)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test / Run Unit Tests
🔇 Additional comments (3)
sequencing/batch_queue.go (2)
16-21: LGTM!
The constructor properly initializes the queue with thread-safety in mind.
108-140: LGTM!
The implementation correctly handles batch reloading with proper error handling and thread safety.
sequencing/sequencer.go (1)
sequencing/sequencer.go (1)
65-127: LGTM!
The changes correctly integrate the new BatchExtender parameter and initialize both queues with proper error handling.
err = db.Update(func(txn *badger.Txn) error {
	// Get the batch to ensure it exists in the DB before deleting
	key := append(keyPrefixBatch, h...)
	_, err := txn.Get(key)
	if err != nil {
		return err
	}
	// Delete the batch from BadgerDB
	return txn.Delete(h)
})
Fix incorrect key in batch deletion.
The method uses h instead of key when deleting the batch from BadgerDB, which will cause the deletion to fail.
err = db.Update(func(txn *badger.Txn) error {
// Get the batch to ensure it exists in the DB before deleting
key := append(keyPrefixBatch, h...)
_, err := txn.Get(key)
if err != nil {
return err
}
// Delete the batch from BadgerDB
- return txn.Delete(h)
+ return txn.Delete(key)
 })
// Key prefixes for BadgerDB
var (
	keyLastBatchHash   = []byte("lastBatchHash")
	keyPrefixSeenBatch = []byte("seenBatch")
	keyPrefixBatch     = []byte("batch")
	keyPrefixTx        = []byte("tx")
)
🛠️ Refactor suggestion
Fix inconsistent key usage and remove unused constant.
Issues found:
- keyLastBatchHash is defined but not used (string literal "lastBatchHash" is used instead)
- keyPrefixSeenBatch is unused and can be removed
// Key prefixes for BadgerDB
var (
- keyLastBatchHash = []byte("lastBatchHash")
- keyPrefixSeenBatch = []byte("seenBatch")
keyPrefixBatch = []byte("batch")
keyPrefixTx = []byte("tx")
)
Also update the LoadLastBatchHashFromDB method to use the constant:
- item, err := txn.Get([]byte("lastBatchHash"))
+ item, err := txn.Get(keyLastBatchHash)
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 golangci-lint (1.62.2)
36-36: var keyLastBatchHash is unused
(unused)
37-37: var keyPrefixSeenBatch is unused
(unused)
🪛 GitHub Check: lint / golangci-lint
[failure] 36-36:
var keyLastBatchHash is unused (unused)
[failure] 37-37:
var keyPrefixSeenBatch is unused (unused)
Actionable comments posted: 0
🧹 Nitpick comments (4)
sequencing/batch_queue.go (1)
49-73: Reduce code duplication with AddBatch method.
The method duplicates the batch persistence logic from AddBatch. Consider extracting the common persistence logic into a helper method:
+func (bq *BatchQueue) persistBatch(batch sequencing.Batch, db *badger.DB) error {
+	// Get the hash and bytes of the batch
+	h, err := batch.Hash()
+	if err != nil {
+		return err
+	}
+
+	// Marshal the batch
+	batchBytes, err := batch.Marshal()
+	if err != nil {
+		return err
+	}
+
+	// Store the batch in BadgerDB
+	err = db.Update(func(txn *badger.Txn) error {
+		key := append(keyPrefixBatch, h...)
+		return txn.Set(key, batchBytes)
+	})
+	return err
+}
 func (bq *BatchQueue) AddBatch(batch sequencing.Batch, db *badger.DB) error {
 	bq.mu.Lock()
 	bq.queue = append(bq.queue, batch)
 	bq.mu.Unlock()
-	// Get the hash and bytes of the batch
-	h, err := batch.Hash()
-	if err != nil {
-		return err
-	}
-	// Marshal the batch
-	batchBytes, err := batch.Marshal()
-	if err != nil {
-		return err
-	}
-	// Store the batch in BadgerDB
-	err = db.Update(func(txn *badger.Txn) error {
-		key := append(keyPrefixBatch, h...)
-		return txn.Set(key, batchBytes)
-	})
-	return err
+	return bq.persistBatch(batch, db)
 }
 func (bq *BatchQueue) AddBatchToTheTop(batch sequencing.Batch, db *badger.DB) error {
 	bq.mu.Lock()
 	bq.queue = append([]sequencing.Batch{batch}, bq.queue...)
 	bq.mu.Unlock()
-	// Get the hash and bytes of the batch
-	h, err := batch.Hash()
-	if err != nil {
-		return err
-	}
-	// Marshal the batch
-	batchBytes, err := batch.Marshal()
-	if err != nil {
-		return err
-	}
-	// Store the batch in BadgerDB
-	err = db.Update(func(txn *badger.Txn) error {
-		key := append(keyPrefixBatch, h...)
-		return txn.Set(key, batchBytes)
-	})
-	return err
+	return bq.persistBatch(batch, db)
 }
sequencing/sequencer.go (3)
65-66: Add documentation for the BatchExtender parameter.
The function signature has been updated to include a new extender BatchExtender parameter, but its purpose and requirements are not documented.
-// NewSequencer ...
+// NewSequencer creates a new Sequencer instance.
+// Parameters:
+//   - daAddress: The address of the DA layer
+//   - daAuthToken: Authentication token for the DA layer
+//   - daNamespace: Namespace for the DA layer
+//   - rollupId: Unique identifier for the rollup
+//   - batchTime: Duration between batch submissions
+//   - metrics: Metrics collector for monitoring
+//   - dbPath: Path to the BadgerDB database (empty for in-memory)
+//   - extender: BatchExtender implementation for customizing batch behavior
221-221: Remove unnecessary type conversion.
The code unnecessarily converts keyPrefixSeenBatch to string and back to []byte.
- key := append([]byte(keyPrefixSeenBatch), hash...)
+ key := append(keyPrefixSeenBatch, hash...)
🧰 Tools
🪛 golangci-lint (1.62.2)
221-221: unnecessary conversion
(unconvert)
🪛 GitHub Check: lint / golangci-lint
[failure] 221-221:
unnecessary conversion (unconvert)
448-448: Address the TODO comment for DA verification.
The TODO comment indicates that DA verification functionality needs to be implemented.
Would you like me to help implement the DA verification or create an issue to track this task?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- sequencing/batch_queue.go (1 hunks)
- sequencing/sequencer.go (7 hunks)
🧰 Additional context used
🪛 golangci-lint (1.62.2)
sequencing/sequencer.go
36-36: var keyLastBatchHash is unused
(unused)
221-221: unnecessary conversion
(unconvert)
sequencing/batch_queue.go
6-6: File is not goimports-ed with -local github.com/rollkit
(goimports)
🪛 GitHub Check: lint / golangci-lint
sequencing/sequencer.go
[failure] 36-36:
var keyLastBatchHash is unused (unused)
[failure] 221-221:
unnecessary conversion (unconvert)
sequencing/batch_queue.go
[failure] 6-6:
File is not properly formatted (goimports)
🔇 Additional comments (5)
sequencing/batch_queue.go (4)
10-14: LGTM! Well-structured BatchQueue implementation.
The struct is properly designed with mutex for thread-safe operations.
23-47: LGTM! Thread-safe batch addition with proper persistence.
The implementation correctly handles concurrency and persistence with appropriate error handling.
91-100: Fix incorrect key in batch deletion.
The method uses h instead of key when deleting the batch from BadgerDB, which will cause the deletion to fail.
 err = db.Update(func(txn *badger.Txn) error {
 	// Get the batch to ensure it exists in the DB before deleting
 	key := append(keyPrefixBatch, h...)
 	_, err := txn.Get(key)
 	if err != nil {
 		return err
 	}
 	// Delete the batch from BadgerDB
-	return txn.Delete(h)
+	return txn.Delete(key)
 })
108-140: LGTM! Robust database recovery implementation.
The method correctly handles batch recovery with proper locking, iterator cleanup, and error handling.
sequencing/sequencer.go (1)
sequencing/sequencer.go (1)
34-40: Fix inconsistent key usage and remove unused constant.
Issues found:
- keyLastBatchHash is defined but not used (string literal "lastBatchHash" is used instead)
- keyPrefixSeenBatch is unused and can be removed
 // Key prefixes for BadgerDB
 var (
-	keyLastBatchHash   = []byte("lastBatchHash")
-	keyPrefixSeenBatch = []byte("seenBatch")
 	keyPrefixBatch     = []byte("batch")
 	keyPrefixTx        = []byte("tx")
 )
Also update the LoadLastBatchHashFromDB method to use the constant:
- item, err := txn.Get([]byte("lastBatchHash"))
+ item, err := txn.Get(keyLastBatchHash)
🧰 Tools
🪛 golangci-lint (1.62.2)
36-36: var keyLastBatchHash is unused
(unused)
🪛 GitHub Check: lint / golangci-lint
[failure] 36-36:
var keyLastBatchHash is unused (unused)
func (c *Sequencer) recover(batch sequencing.Batch, err error) (*sequencing.GetNextBatchResponse, error) {
	// Revert the batch if Hash() errors out by adding it back to the BatchQueue
-	revertErr := c.bq.AddBatch(batch, c.db)
+	revertErr := c.bq.AddBatchToTheTop(batch, c.db)
This is so we keep the ordering of the batches, although I'm not very sure that we need it at all. Why would Hash() fail? It's currently just marshalling and hashing:
// Hash returns the hash of a batch.
func (batch *Batch) Hash() ([]byte, error) {
batchBytes, err := batch.Marshal()
if err != nil {
return nil, err
}
hash := sha256.Sum256(batchBytes)
return hash[:], nil
bq.mu.Lock()
bq.queue = append(bq.queue, batch)
bq.mu.Unlock()
Wouldn't it make sense to perform the batch validation steps before adding the batch to the queue? If we can't Hash or Marshal the batch, I would think that we should reject it from the queue.
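One possible shape for that reordering is sketched below. It reuses the names visible in this PR (BatchQueue, keyPrefixBatch, sequencing.Batch) and assumes batch_queue.go's existing imports and types; it is an illustration of validating and persisting before touching the in-memory queue, not the actual change.

```go
// AddBatch validates and persists the batch before touching the in-memory
// queue, so malformed batches are rejected up front and a failed DB write
// leaves the queue unchanged.
func (bq *BatchQueue) AddBatch(batch sequencing.Batch, db *badger.DB) error {
	// Reject batches that cannot be hashed/marshalled before queueing them.
	h, err := batch.Hash()
	if err != nil {
		return err
	}
	batchBytes, err := batch.Marshal()
	if err != nil {
		return err
	}

	// Persist first; on failure the in-memory queue is untouched.
	if err := db.Update(func(txn *badger.Txn) error {
		key := append(keyPrefixBatch, h...)
		return txn.Set(key, batchBytes)
	}); err != nil {
		return err
	}

	// Only now expose the batch to consumers of the in-memory queue.
	bq.mu.Lock()
	bq.queue = append(bq.queue, batch)
	bq.mu.Unlock()
	return nil
}
```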
bq.mu.Lock()
bq.queue = append([]sequencing.Batch{batch}, bq.queue...)
bq.mu.Unlock()
same comment about order of the function
h, err := batch.Hash()
if err != nil {
	return &sequencing.Batch{Transactions: nil}, err
}
similar comments here. if we are validating the batch before it enters the queue, then we should not be hitting an error here unless there was some sort of data corruption event. So having the checks here is still valid.
However, before returning on error, we should also be removing the batch from the database, otherwise the database would get bloated with corrupt batches that aren't in the queue.
if err != nil {
	return &sequencing.Batch{Transactions: nil}, err
}
I don't think we should nil out the returned batch on error deleting it from the database.
}

// Remove the batch from BadgerDB after processing
err = db.Update(func(txn *badger.Txn) error {
To address the previous comment, this could be called in a defer function after pulling the batch out of the queue. The error can just be logged for debug purposes.
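Roughly like this; the method name Next, the return shape, and the use of the standard library log package are assumptions based on the hunks quoted above, since the full method isn't shown here.

```go
// Next pops the next batch from the queue; the DB delete runs in a defer and
// is only logged, so a failed cleanup never blocks handing out the batch.
// (Method name and logging approach are assumptions.)
func (bq *BatchQueue) Next(db *badger.DB) (*sequencing.Batch, error) {
	bq.mu.Lock()
	if len(bq.queue) == 0 {
		bq.mu.Unlock()
		return &sequencing.Batch{Transactions: nil}, nil
	}
	batch := bq.queue[0]
	bq.queue = bq.queue[1:]
	bq.mu.Unlock()

	h, err := batch.Hash()
	if err != nil {
		// With validation at enqueue time this should only happen on corruption.
		return &batch, err
	}

	// Best-effort cleanup once the batch has already left the in-memory queue.
	defer func() {
		if delErr := db.Update(func(txn *badger.Txn) error {
			return txn.Delete(append(keyPrefixBatch, h...))
		}); delErr != nil {
			log.Printf("failed to delete processed batch from DB: %v", delErr)
		}
	}()

	return &batch, nil
}
```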
}

// LoadFromDB reloads all batches from BadgerDB into the in-memory queue after a crash or restart.
func (bq *BatchQueue) LoadFromDB(db *badger.DB) error {
This function is a reason to clean up the handling of bad batches, otherwise we keep loading the bad batches back into memory because they aren't being deleted from the database
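One way to stop corrupt entries from being reloaded forever is to skip and delete them during the load. In this sketch the Unmarshal method on sequencing.Batch is an assumption (the counterpart of the Marshal call shown above); the rest reuses names from this PR and assumes the file's existing imports.

```go
// LoadFromDB reloads batches, skipping entries that fail to load and deleting
// them so they are not reloaded again on the next restart.
func (bq *BatchQueue) LoadFromDB(db *badger.DB) error {
	var loaded []sequencing.Batch
	var corrupt [][]byte

	err := db.View(func(txn *badger.Txn) error {
		it := txn.NewIterator(badger.DefaultIteratorOptions)
		defer it.Close()

		for it.Seek(keyPrefixBatch); it.ValidForPrefix(keyPrefixBatch); it.Next() {
			item := it.Item()
			val, err := item.ValueCopy(nil)
			if err != nil {
				corrupt = append(corrupt, item.KeyCopy(nil))
				continue
			}
			var batch sequencing.Batch
			// Unmarshal is assumed to exist as the counterpart of Marshal.
			if err := batch.Unmarshal(val); err != nil {
				corrupt = append(corrupt, item.KeyCopy(nil))
				continue
			}
			loaded = append(loaded, batch)
		}
		return nil
	})
	if err != nil {
		return err
	}

	bq.mu.Lock()
	bq.queue = append(bq.queue, loaded...)
	bq.mu.Unlock()

	// Delete corrupt entries so they neither bloat the DB nor get reloaded next time.
	return db.Update(func(txn *badger.Txn) error {
		for _, key := range corrupt {
			if err := txn.Delete(key); err != nil {
				return err
			}
		}
		return nil
	})
}
```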
this file should have testing to cover all functions
}

// AddTransaction adds a new transaction to the queue
func (tq *TransactionQueue) AddTransaction(tx sequencing.Tx, db *badger.DB) error {
Similar comments as before: I think the database operation should occur first, before updating in memory.
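For AddTransaction that reordering is small; again, a sketch using the names shown in this diff (and assuming the file's existing imports), not the actual change.

```go
// AddTransaction persists the transaction first and only then exposes it in
// the in-memory queue, so a failed DB write leaves both sides consistent.
func (tq *TransactionQueue) AddTransaction(tx sequencing.Tx, db *badger.DB) error {
	if err := db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte(GetTransactionHash(tx)), tx)
	}); err != nil {
		return err
	}

	tq.mu.Lock()
	tq.queue = append(tq.queue, tx)
	tq.mu.Unlock()
	return nil
}
```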
file requires testing
for uint64(totalBytes(batch)+headTailSize) > max {
	batch = batch[:len(batch)-1]
}
This is going to arbitrarily cut bytes off of txns in the queue which will corrupt the txns.
Instead of truncation, the for loop should be removing entire txns from the queue until the batch is under the max.
A future optimization of the batch would be to make sure the txns in the queue are ordered by a priority. The priority could be customized to be size, gas, timestamp, etc.
For this implementation though I wouldn't worry about the queue ordering as how to prioritize txns should be a separate discussion.
batchLen := len(batch)

// Add head and tail of the batch
if head != nil {
You don't need any of these nil checks before append statements. A nil slice is just considered an empty slice.
Overview
BatchExtender allows adding txs at the beginning or end of a batch, without going over the max bytes.
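For context, wiring in a custom extender might look roughly like the following. The ForcedInclusionExtender type is invented for illustration, the import path is a guess at this repository's module, and the NewSequencer arguments simply mirror the signature quoted in the review above, so treat all of it as a sketch.

```go
package main

import (
	"log"
	"time"

	// Import path assumed for illustration; use this repository's actual module path.
	"github.com/rollkit/centralized-sequencer/sequencing"
)

// ForcedInclusionExtender is a hypothetical BatchExtender that reserves the
// first and last slots of every batch, as long as each fits under max bytes.
type ForcedInclusionExtender struct {
	HeadTx []byte
	TailTx []byte
}

func (e *ForcedInclusionExtender) Head(max uint64) ([]byte, error) {
	if uint64(len(e.HeadTx)) > max {
		return nil, nil // too large; skip rather than exceed the limit
	}
	return e.HeadTx, nil
}

func (e *ForcedInclusionExtender) Tail(max uint64) ([]byte, error) {
	if uint64(len(e.TailTx)) > max {
		return nil, nil
	}
	return e.TailTx, nil
}

func main() {
	extender := &ForcedInclusionExtender{
		HeadTx: []byte("begin-block"),
		TailTx: []byte("end-block"),
	}

	// Arguments mirror the NewSequencer signature quoted in the review;
	// the concrete values here are placeholders, not working endpoints.
	seq, err := sequencing.NewSequencer(
		"http://localhost:26658", "auth-token", []byte("namespace"),
		[]byte("rollup-1"), 2*time.Second, nil, "", extender,
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = seq
}
```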
Summary by CodeRabbit
New Features
Refactor
Chores
Bug Fixes