refactor: system program relax output tree ordering #1984
base: main
Conversation
Walkthrough
Adds a new no_std ArrayMap crate and a tinyvec dependency, integrates ArrayMap for per-tree leaf-index tracking in create_outputs_cpi_data, introduces TreeLeafTracker extensions, two new SystemProgramError variants, a small Pubkey API, and updates tests and workspace manifests. No breaking API changes reported. A minimal sketch of the tracking pattern follows the sequence diagram below.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Client
participant SysProc as SystemProcessor
participant Tracker as TreeLeafTracker (ArrayMap)
participant Queue as OutputQueue
participant Tree as StateTree
participant SeqNum as MerkleTreeSeq
Client->>SysProc: create_outputs_cpi_data(request)
SysProc->>Queue: load output queue account
SysProc->>Tree: load tree account (V1/V2)
alt tree already tracked
SysProc->>Tracker: get_or_insert_tuple(tree_pubkey, initial)
Tracker-->>SysProc: (prev_tuple, is_new=false)
SysProc->>Tracker: increment_current_tuple()
else new tree
SysProc->>Tracker: get_or_insert_tuple(tree_pubkey, initial)
Tracker-->>SysProc: (initial_tuple, is_new=true)
SysProc->>SeqNum: initialize/update sequence entry
end
SysProc->>SysProc: derive output_leaf_indices using tracked leaf_index
SysProc-->>Client: return CPI data or error
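The tracking pattern above can be illustrated with a minimal, self-contained sketch. The get-or-insert/increment behavior mirrors the diagram, but the types, method signatures, and storage here are simplified assumptions rather than the actual ArrayMap/TreeLeafTracker API.

```rust
// Toy stand-in for the per-tree leaf-index tracker described in the walkthrough;
// the real ArrayMap-backed TreeLeafTracker differs in capacity and signatures.
struct ToyTreeLeafTracker {
    entries: Vec<([u8; 32], u64)>, // (tree pubkey, next leaf index)
}

impl ToyTreeLeafTracker {
    fn new() -> Self {
        Self { entries: Vec::new() }
    }

    /// Returns the leaf index to use for this output and whether the tree is new.
    fn get_or_insert(&mut self, tree: [u8; 32], initial: u64) -> (u64, bool) {
        if let Some((_, next)) = self.entries.iter_mut().find(|(k, _)| *k == tree) {
            let current = *next;
            *next += 1; // another output in an already-tracked tree: advance the leaf index
            (current, false)
        } else {
            self.entries.push((tree, initial + 1));
            (initial, true) // first time this tree is seen: caller initializes sequence metadata
        }
    }
}

fn main() {
    let mut tracker = ToyTreeLeafTracker::new();
    let tree_a = [1u8; 32];
    assert_eq!(tracker.get_or_insert(tree_a, 10), (10, true));
    assert_eq!(tracker.get_or_insert(tree_a, 10), (11, false));
}
```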
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Suggested reviewers
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
programs/system/src/processor/create_outputs_cpi_data.rs (1)
51-52: Remove unnecessary attribute.
The #[allow(unused_assignments)] attribute on current_index appears unnecessary since the variable is actively used throughout the loop starting at line 63. Apply this diff to remove the unnecessary attribute:
-    #[allow(unused_assignments)]
     let mut current_index: i16 = -1;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
programs/system/Cargo.toml (1 hunks)
programs/system/src/errors.rs (2 hunks)
programs/system/src/processor/create_outputs_cpi_data.rs (3 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-11T21:59:25.201Z
Learnt from: CR
PR: Lightprotocol/light-protocol#0
File: program-libs/account-checks/CLAUDE.md:0-0
Timestamp: 2025-10-11T21:59:25.201Z
Learning: Applies to program-libs/account-checks/src/error.rs : Maintain stable mapping of AccountError to ProgramError, including Pinocchio code mapping (1–11), in error.rs
Applied to files:
programs/system/src/errors.rs
🧬 Code graph analysis (1)
programs/system/src/processor/create_outputs_cpi_data.rs (3)
program-libs/batched-merkle-tree/src/queue.rs (1): pubkey (433-435)
program-libs/batched-merkle-tree/src/merkle_tree.rs (1): pubkey (920-922)
programs/system/src/context.rs (1): new (178-200)
🔇 Additional comments (7)
programs/system/Cargo.toml (1)
59-59: LGTM! ArrayVec dependency addition is appropriate.
The arrayvec dependency is correctly added with workspace-level version management. It's consumed by the new tree leaf index tracking logic in create_outputs_cpi_data.rs.
Based on learnings: arrayvec 0.7.6 is the latest stable release with no-std support and a capacity-checking API suitable for fixed-size collections.
programs/system/src/errors.rs (2)
143-146: LGTM! New error variants align with capacity limits.
The two new error variants (TooManyOutputV2Queues and TooManyOutputV1Trees) correctly reflect the 30-element capacity limit of the ArrayVec tracking structure in create_outputs_cpi_data.rs.
230-231: Error code mapping is stable and sequential.
The new error codes (6066, 6067) maintain the stable sequential mapping pattern established in the codebase, following the previous highest code 6065.
Based on learnings: Error code mappings should remain stable for backward compatibility.
programs/system/src/processor/create_outputs_cpi_data.rs (4)
1-1: LGTM! Import is necessary for the new tracking structure.
The arrayvec::ArrayVec import supports the per-tree leaf index tracking introduced in this refactor.
47-49: ArrayVec capacity of 30 aligns with error messages.
The ArrayVec with capacity 30 provides stack-based storage for tracking tree leaf indices. The capacity limit is enforced by the error variants TooManyOutputV2Queues and TooManyOutputV1Trees at lines 106 and 147.
Based on learnings: ArrayVec::try_push returns CapacityError when full, enabling fallible insertion without panics.
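To make the fallible-insert behavior concrete, here is a minimal sketch using the arrayvec API; the tuple layout and the error enum are illustrative assumptions, not the program's actual types.

```rust
use arrayvec::ArrayVec;

// Illustrative error; the program maps the capacity failure to its own
// TooManyOutputV2Queues / TooManyOutputV1Trees variants instead.
#[derive(Debug)]
enum DemoError {
    TooManyOutputTrees,
}

fn track_tree(
    trees: &mut ArrayVec<([u8; 32], u64), 30>,
    pubkey: [u8; 32],
    leaf_index: u64,
) -> Result<(), DemoError> {
    // try_push returns Err(CapacityError) instead of panicking when all 30 slots
    // are used, so the caller can surface a program error.
    trees
        .try_push((pubkey, leaf_index))
        .map_err(|_| DemoError::TooManyOutputTrees)
}

fn main() {
    let mut trees: ArrayVec<([u8; 32], u64), 30> = ArrayVec::new();
    assert!(track_tree(&mut trees, [0u8; 32], 0).is_ok());
}
```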
66-179: Tree reuse logic correctly tracks per-tree leaf indices.
The refactored logic properly:
- Checks if a tree pubkey exists in tree_leaf_indices (lines 74, 116)
- Reuses and increments the existing leaf index for seen trees (lines 76-78, 118-120)
- Initializes metadata and adds new trees to tracking (lines 80-110, 122-151)
- Handles error cases with appropriate discriminator checks (lines 153-178)
The approach enables tree reuse across output accounts while maintaining correct leaf index progression.
217-219: Leaf index derivation is correct.
The output leaf index is correctly derived from the tracked leaf_index with appropriate bounds checking via try_from, converting from u64 to u32 and catching overflow with PackedAccountIndexOutOfBounds.
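For illustration, the checked conversion reads roughly like the standalone sketch below; the string error is a placeholder for the program's PackedAccountIndexOutOfBounds variant.

```rust
// Placeholder error stands in for SystemProgramError::PackedAccountIndexOutOfBounds.
fn to_output_leaf_index(leaf_index: u64) -> Result<u32, &'static str> {
    u32::try_from(leaf_index).map_err(|_| "PackedAccountIndexOutOfBounds")
}

fn main() {
    assert_eq!(to_output_leaf_index(42), Ok(42));
    assert!(to_output_leaf_index(u64::from(u32::MAX) + 1).is_err());
}
```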
7b3dbd8 to e338d27 (Compare)
Actionable comments posted: 2
🧹 Nitpick comments (12)
program-libs/compressed-account/src/pubkey.rs (1)
78-80: Non-copy accessor is good; add #[inline] and consider AsRef<[u8; 32]>.
array_ref avoids copies. Add #[inline] and optionally implement AsRef<[u8; 32]> for ergonomic interop.
Apply inline:
-    pub fn array_ref(&self) -> &[u8; 32] {
+    #[inline]
+    pub fn array_ref(&self) -> &[u8; 32] {
         &self.0
     }
Additionally, consider adding:
impl AsRef<[u8; 32]> for Pubkey {
    fn as_ref(&self) -> &[u8; 32] {
        &self.0
    }
}
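As a usage note, the AsRef impl lets callers stay generic over any 32-byte key type; the toy below (not the real Pubkey) sketches the interop benefit.

```rust
// Toy Pubkey mirroring the suggested AsRef<[u8; 32]> impl.
struct Pubkey([u8; 32]);

impl AsRef<[u8; 32]> for Pubkey {
    fn as_ref(&self) -> &[u8; 32] {
        &self.0
    }
}

// Callers can accept any key that exposes its 32 bytes without copying.
fn first_byte<K: AsRef<[u8; 32]>>(key: &K) -> u8 {
    key.as_ref()[0]
}

fn main() {
    let pk = Pubkey([7u8; 32]);
    assert_eq!(first_byte(&pk), 7);
}
```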
Cargo.toml (1)
4-4: Workspace wiring for array-map and tinyvec looks correct.
- Member program-libs/array-map and workspace dep light-array-map are consistent.
- tinyvec 1.10.0 is fine; default feature set is no_std-friendly.
To future-proof no_std expectations, explicitly disable default features at the workspace level:
-tinyvec = "1.10.0"
+tinyvec = { version = "1.10.0", default-features = false }
This avoids surprises if tinyvec ever changes its defaults. Based on learnings.
Also applies to: 204-205, 210-210
program-libs/array-map/Cargo.toml (1)
1-14: Manifest looks good; alloc feature gating is clear.
alloc toggles tinyvec/alloc correctly; workspace tinyvec link is fine.
Consider setting tinyvec default-features = false at the workspace (root Cargo.toml) to lock no_std behavior across the workspace (see prior comment). Based on learnings.
program-tests/system-test/tests/test.rs (1)
525-545: Commented-out failure test: update or remove to reflect relaxed uniqueness.
The “OutputMerkleTreeNotUnique” failure case is now commented out. This can drift into dead code. Either:
- Remove it, or
- Replace it with a positive test asserting that duplicate output trees are accepted under the new relaxed rules.
Minimal cleanup (remove the block):
- // // output Merkle tree is not unique (we need at least 2 outputs for this test)
- // if num_outputs > 1 {
- //     let mut inputs_struct = inputs_struct.clone();
- //     let mut remaining_accounts = remaining_accounts.clone();
- //     let remaining_mt_acc = remaining_accounts
- //         [inputs_struct.output_compressed_accounts[1].merkle_tree_index as usize]
- //         .clone();
- //     remaining_accounts.push(remaining_mt_acc);
- //     inputs_struct.output_compressed_accounts[1].merkle_tree_index =
- //         (remaining_accounts.len() - 1) as u8;
- //     create_instruction_and_failing_transaction(
- //         rpc,
- //         payer,
- //         inputs_struct,
- //         remaining_accounts.clone(),
- //         SystemProgramError::OutputMerkleTreeNotUnique.into(),
- //     )
- //     .await
- //     .unwrap();
- // }
Optionally, add a separate success-path test (outside failing_transaction_inputs_inner) to validate duplicates are accepted end-to-end.
Please confirm whether duplicates should always be allowed for both V1 and V2 outputs so we can add a targeted success test.
programs/system/src/errors.rs (2)
144-147: New error variants: OK, but avoid hardcoding “max 30”.
Variants TooManyOutputV2Queues and TooManyOutputV1Trees look fine.
If the limit is a const, consider deriving the message from it to avoid divergence when changing capacity later.
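One way to derive the message from a shared const, assuming the enum uses thiserror; the const name and messages below are illustrative.

```rust
use thiserror::Error;

// Illustrative shared capacity const; the real limit would live next to the tracker.
pub const MAX_OUTPUT_TREES: usize = 30;

#[derive(Debug, Error)]
pub enum DemoSystemProgramError {
    #[error("too many output V2 queues (max {max})", max = MAX_OUTPUT_TREES)]
    TooManyOutputV2Queues,
    #[error("too many output V1 trees (max {max})", max = MAX_OUTPUT_TREES)]
    TooManyOutputV1Trees,
}

fn main() {
    let e = DemoSystemProgramError::TooManyOutputV1Trees;
    assert!(e.to_string().contains("30"));
}
```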
249-256: Blanket From loses V1/V2 context.
Mapping CapacityExceeded → TooManyOutputV2Queues unconditionally may misreport errors in V1 paths (where TooManyOutputV1Trees is intended).
Prefer context-aware mapping at call sites, or provide helper constructors instead of a blanket From:
-impl From<ArrayMapError> for SystemProgramError {
-    fn from(e: ArrayMapError) -> Self {
-        match e {
-            ArrayMapError::CapacityExceeded => SystemProgramError::TooManyOutputV2Queues,
-            ArrayMapError::IndexOutOfBounds => SystemProgramError::OutputMerkleTreeIndexOutOfBounds,
-        }
-    }
-}
For example, add:
impl SystemProgramError {
    pub fn from_array_map_v2(e: ArrayMapError) -> Self {
        match e {
            ArrayMapError::CapacityExceeded => SystemProgramError::TooManyOutputV2Queues,
            ArrayMapError::IndexOutOfBounds => SystemProgramError::OutputMerkleTreeIndexOutOfBounds,
        }
    }
    pub fn from_array_map_v1(e: ArrayMapError) -> Self {
        match e {
            ArrayMapError::CapacityExceeded => SystemProgramError::TooManyOutputV1Trees,
            ArrayMapError::IndexOutOfBounds => SystemProgramError::OutputMerkleTreeIndexOutOfBounds,
        }
    }
}
Then map explicitly where the error originates (V1 vs V2). This keeps error semantics precise.
Please confirm whether any V1 code paths currently rely on the blanket conversion via “?”; if so, we should switch them to the context-aware helpers.
program-libs/array-map/tests/array_map_tests.rs (2)
43-45: Avoid temporary String allocations in assertions.
Compare as &str to elide temps: map(...).map(String::as_str) == Some("one"/"two").
Apply:
- assert_eq!(map.get_by_key(&1), Some(&"one".to_string()));
- assert_eq!(map.get_by_key(&2), Some(&"two".to_string()));
+ assert_eq!(map.get_by_key(&1).map(String::as_str), Some("one"));
+ assert_eq!(map.get_by_key(&2).map(String::as_str), Some("two"));
141-186: Optional: add tests for [u8;32] pubkey fast-paths.
Exercise get_by_pubkey/find_by_pubkey to prevent regressions and validate the [u8;32] specialization.
program-libs/array-map/src/lib.rs (2)
173-184: Drop unsafe/const; use safe equality for 32-byte keys.
pubkey_eq uses read_unaligned in a const fn. This is MSRV-sensitive and unnecessary. Prefer safe equality; the compiler emits efficient memcmp/vectorized code.
Apply:
-#[inline(always)]
-pub const fn pubkey_eq(p1: &[u8; 32], p2: &[u8; 32]) -> bool {
-    let p1_ptr = p1.as_ptr() as *const u64;
-    let p2_ptr = p2.as_ptr() as *const u64;
-    unsafe {
-        read_unaligned(p1_ptr) == read_unaligned(p2_ptr)
-            && read_unaligned(p1_ptr.add(1)) == read_unaligned(p2_ptr.add(1))
-            && read_unaligned(p1_ptr.add(2)) == read_unaligned(p2_ptr.add(2))
-            && read_unaligned(p1_ptr.add(3)) == read_unaligned(p2_ptr.add(3))
-    }
-}
+#[inline(always)]
+pub fn pubkey_eq(p1: &[u8; 32], p2: &[u8; 32]) -> bool {
+    p1 == p2
+}
If you must retain a const fn, please confirm your MSRV supports const core::ptr::read_unaligned.
120-156: Consider removing [u8;32] specialization if not needed.
The generic get_by_key/find already leverage optimized equality; the specialized pubkey methods may be redundant. If kept for clarity, ensure they stay thin wrappers to avoid drift.
programs/system/src/processor/create_outputs_cpi_data.rs (1)
49-51: Optional: avoid sentinel i16 for “no index”.
Use Option for current_index to make states explicit and avoid casts.
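A minimal sketch of the Option-based alternative; the loop body and names are illustrative, not the processor's actual code.

```rust
// An Option makes "no tree processed yet" explicit and removes the -1 sentinel
// plus the i16 -> u8 casts.
fn process(merkle_tree_indices: &[u8]) {
    let mut current_index: Option<u8> = None;
    for &idx in merkle_tree_indices {
        match current_index {
            Some(prev) if prev == idx => {
                // same tree as the previous output: reuse path, no cast needed
            }
            _ => {
                // first tree, or a different tree: (re)initialize per-tree metadata
                current_index = Some(idx);
            }
        }
    }
}

fn main() {
    process(&[0, 0, 1]);
}
```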
programs/system/src/processor/tree_leaf_tracker_ext.rs (1)
39-75: Broaden key bound and handle theoretical u64 overflow.
- Consider K: Clone (clone() on insert) instead of Copy to support more key types like [u8;32] without Copy if needed in future.
- For robustness, use checked_add(1).ok_or(...) to guard u64 overflow (unlikely but safer); see the sketch below.
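A hypothetical increment helper showing the checked_add guard; the error type and function name are placeholders rather than the tracker's real API.

```rust
#[derive(Debug, PartialEq)]
enum TrackerError {
    LeafIndexOverflow,
}

fn next_leaf_index(current: u64) -> Result<u64, TrackerError> {
    // checked_add returns None on u64 overflow instead of silently wrapping.
    current.checked_add(1).ok_or(TrackerError::LeafIndexOverflow)
}

fn main() {
    assert_eq!(next_leaf_index(5), Ok(6));
    assert_eq!(next_leaf_index(u64::MAX), Err(TrackerError::LeafIndexOverflow));
}
```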
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
Cargo.toml (2 hunks)
program-libs/array-map/Cargo.toml (1 hunks)
program-libs/array-map/src/lib.rs (1 hunks)
program-libs/array-map/tests/array_map_tests.rs (1 hunks)
program-libs/compressed-account/src/pubkey.rs (1 hunks)
program-tests/system-test/tests/test.rs (1 hunks)
programs/system/Cargo.toml (1 hunks)
programs/system/src/errors.rs (4 hunks)
programs/system/src/processor/create_outputs_cpi_data.rs (7 hunks)
programs/system/src/processor/mod.rs (1 hunks)
programs/system/src/processor/tree_leaf_tracker_ext.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- programs/system/Cargo.toml
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-10-11T21:59:25.201Z
Learnt from: CR
PR: Lightprotocol/light-protocol#0
File: program-libs/account-checks/CLAUDE.md:0-0
Timestamp: 2025-10-11T21:59:25.201Z
Learning: Applies to program-libs/account-checks/src/error.rs : Maintain stable mapping of AccountError to ProgramError, including Pinocchio code mapping (1–11), in error.rs
Applied to files:
programs/system/src/processor/create_outputs_cpi_data.rs
programs/system/src/errors.rs
📚 Learning: 2025-10-11T21:59:25.201Z
Learnt from: CR
PR: Lightprotocol/light-protocol#0
File: program-libs/account-checks/CLAUDE.md:0-0
Timestamp: 2025-10-11T21:59:25.201Z
Learning: Applies to program-libs/account-checks/src/**/*.rs : Return AccountError variants (codes 12006–12021) and rely on automatic ProgramError conversions; avoid returning raw ProgramError directly
Applied to files:
programs/system/src/errors.rs
📚 Learning: 2025-10-11T21:59:25.201Z
Learnt from: CR
PR: Lightprotocol/light-protocol#0
File: program-libs/account-checks/CLAUDE.md:0-0
Timestamp: 2025-10-11T21:59:25.201Z
Learning: Applies to program-libs/account-checks/src/account_iterator.rs : Use AccountIterator for sequential account retrieval to get precise file:line:column error locations; avoid manual index handling
Applied to files:
programs/system/src/errors.rs
🧬 Code graph analysis (2)
program-libs/array-map/tests/array_map_tests.rs (1)
program-libs/array-map/src/lib.rs (9)
new (26-31), len (33-35), is_empty (37-39), last_accessed_index (41-43), get (45-47), get_by_key (61-63), find_index (83-85), get_u8 (53-55), get_mut_u8 (57-59)
programs/system/src/processor/create_outputs_cpi_data.rs (1)
programs/system/src/context.rs (1)
new (178-200)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
- GitHub Check: stateless-js-v1
- GitHub Check: system-programs (sdk-test-program, ["cargo-test-sbf -p sdk-native-test", "cargo-test-sbf -p sdk-v...
- GitHub Check: system-programs (sdk-libs, light-sdk-macros light-sdk light-program-test light-client, cargo test...
- GitHub Check: system-programs (sdk-anchor-test-program, ["cargo-test-sbf -p sdk-anchor-test", "cargo-test-sbf -...
- GitHub Check: cli-v2
- GitHub Check: Test program-libs-slow
- GitHub Check: Test batched-merkle-tree-simulate
- GitHub Check: Test program-libs-fast
- GitHub Check: cli-v1
- GitHub Check: lint
- GitHub Check: system-programs (compressed-token-batched-tree, ["cargo-test-sbf -p compressed-token-test -- test...
- GitHub Check: system-programs (compressed-token-and-e2e, ["cargo-test-sbf -p compressed-token-test -- --skip te...
- GitHub Check: system-programs (account-compression-and-registry, ["cargo-test-sbf -p account-compression-test",...
- GitHub Check: system-programs (system-cpi-test-v2-functional-account-infos, ["cargo-test-sbf -p system-cpi-v2-t...
- GitHub Check: system-programs (light-system-program-address, ["cargo-test-sbf -p system-test -- test_with_addre...
- GitHub Check: system-programs (system-cpi-test, ["cargo-test-sbf -p system-cpi-test", "cargo test -p light-syst...
- GitHub Check: system-programs (light-system-program-compression, ["cargo-test-sbf -p system-test -- test_with_c...
- GitHub Check: Forester e2e test
- GitHub Check: system-programs (system-cpi-test-v2-functional-read-only, ["cargo-test-sbf -p system-cpi-v2-test ...
- GitHub Check: stateless-js-v2
🔇 Additional comments (3)
programs/system/src/processor/mod.rs (1)
10-10: LGTM: export added.
Publicly exporting tree_leaf_tracker_ext looks good and aligns with the new tracking code.
programs/system/src/errors.rs (1)
231-233: u32 mappings added: OK.
6066/6067 assignments are consistent with the sequence.
programs/system/src/processor/create_outputs_cpi_data.rs (1)
54-56: Verify tracker capacity (N = 30).
Ensure 30 ≥ worst-case unique trees per instruction (e.g., bound by MAX_OUTPUT_ACCOUNTS) to avoid unexpected TooManyOutput* errors.
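One way to make this check mechanical is a compile-time assertion tying the tracker capacity to the worst case; both constant names below are hypothetical stand-ins for the program's actual limits.

```rust
// Hypothetical names; wire these to the real constants if they exist.
const MAX_OUTPUT_ACCOUNTS: usize = 30;
const TREE_TRACKER_CAPACITY: usize = 30;

// Fails the build if the tracker can hold fewer trees than the worst case.
const _: () = assert!(TREE_TRACKER_CAPACITY >= MAX_OUTPUT_ACCOUNTS);

fn main() {}
```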
// Only set up metadata if this is a new tree (first time seeing this pubkey)
if is_new {
    // TODO: depulicate logic
    context.set_network_fee(
        output_queue.metadata.rollover_metadata.network_fee,
        current_index as u8,
    );
    hashed_merkle_tree = output_queue.hashed_merkle_tree_pubkey;
    rollover_fee = output_queue.metadata.rollover_metadata.rollover_fee;
    is_batched = true;

    cpi_ix_data.output_sequence_numbers[index_merkle_tree_account as usize] =
        MerkleTreeSequenceNumber {
            tree_pubkey: output_queue.metadata.associated_merkle_tree,
            queue_pubkey: *output_queue.pubkey(),
            tree_type: (TreeType::StateV2 as u64).into(),
            seq: initial_leaf_index.into(),
        };

    context.get_index_or_insert(
        account.merkle_tree_index(),
        remaining_accounts,
        "Output queue for V2 state trees (Merkle tree for V1 state trees)",
    )?;

    index_merkle_tree_account += 1;
    next_account_index += 1;
}
Bug: stale hashed_merkle_tree/is_batched/rollover_fee when revisiting V2 tree.
If a previously seen tree reappears later, is_new is false and these vars aren’t updated, producing incorrect leaf hashes/flags.
Move assignments outside the is_new block:
- // Only set up metadata if this is a new tree (first time seeing this pubkey)
- if is_new {
- // TODO: depulicate logic
- context.set_network_fee(
- output_queue.metadata.rollover_metadata.network_fee,
- current_index as u8,
- );
- hashed_merkle_tree = output_queue.hashed_merkle_tree_pubkey;
- rollover_fee = output_queue.metadata.rollover_metadata.rollover_fee;
- is_batched = true;
+ // Always set per-tree vars for the current branch
+ hashed_merkle_tree = output_queue.hashed_merkle_tree_pubkey;
+ rollover_fee = output_queue.metadata.rollover_metadata.rollover_fee;
+ is_batched = true;
+
+ // Only set up one-time metadata when this tree is first seen
+ if is_new {
+ context.set_network_fee(
+ output_queue.metadata.rollover_metadata.network_fee,
+ current_index as u8,
+ );
cpi_ix_data.output_sequence_numbers[index_merkle_tree_account as usize] =
MerkleTreeSequenceNumber {
tree_pubkey: output_queue.metadata.associated_merkle_tree,
queue_pubkey: *output_queue.pubkey(),
tree_type: (TreeType::StateV2 as u64).into(),
seq: initial_leaf_index.into(),
};
context.get_index_or_insert(
account.merkle_tree_index(),
remaining_accounts,
"Output queue for V2 state trees (Merkle tree for V1 state trees)",
)?;
index_merkle_tree_account += 1;
next_account_index += 1;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Always set per-tree vars for the current branch
hashed_merkle_tree = output_queue.hashed_merkle_tree_pubkey;
rollover_fee = output_queue.metadata.rollover_metadata.rollover_fee;
is_batched = true;

// Only set up one-time metadata when this tree is first seen
if is_new {
    context.set_network_fee(
        output_queue.metadata.rollover_metadata.network_fee,
        current_index as u8,
    );
    cpi_ix_data.output_sequence_numbers[index_merkle_tree_account as usize] =
        MerkleTreeSequenceNumber {
            tree_pubkey: output_queue.metadata.associated_merkle_tree,
            queue_pubkey: *output_queue.pubkey(),
            tree_type: (TreeType::StateV2 as u64).into(),
            seq: initial_leaf_index.into(),
        };
    context.get_index_or_insert(
        account.merkle_tree_index(),
        remaining_accounts,
        "Output queue for V2 state trees (Merkle tree for V1 state trees)",
    )?;
    index_merkle_tree_account += 1;
    next_account_index += 1;
}
// Only set up metadata if this is a new tree (first time seeing this pubkey)
if is_new {
    cpi_ix_data.output_sequence_numbers[index_merkle_tree_account as usize] =
        MerkleTreeSequenceNumber {
            tree_pubkey: *pubkey,
            queue_pubkey: *pubkey,
            tree_type: (TreeType::StateV1 as u64).into(),
            seq: (tree.sequence_number() as u64 + 1).into(),
        };

    let merkle_context = context
        .get_legacy_merkle_context(current_index as u8)
        .unwrap();
    hashed_merkle_tree = merkle_context.hashed_pubkey;
    rollover_fee = merkle_context.rollover_fee;
    is_batched = false;

    context.get_index_or_insert(
        account.merkle_tree_index(),
        remaining_accounts,
        "Output queue for V2 state trees (Merkle tree for V1 state trees)",
    )?;

    index_merkle_tree_account += 1;
    next_account_index += 1;
}
Bug: stale hashed_merkle_tree/is_batched/rollover_fee when revisiting V1 tree.
Same issue for StateTree; set these unconditionally for the current branch.
Apply:
- // Only set up metadata if this is a new tree (first time seeing this pubkey)
- if is_new {
- cpi_ix_data.output_sequence_numbers[index_merkle_tree_account as usize] =
+ // Always set per-tree vars for the current branch
+ let merkle_context = context
+ .get_legacy_merkle_context(current_index as u8)
+ .unwrap();
+ hashed_merkle_tree = merkle_context.hashed_pubkey;
+ rollover_fee = merkle_context.rollover_fee;
+ is_batched = false;
+
+ // Only set up one-time metadata when this tree is first seen
+ if is_new {
+ cpi_ix_data.output_sequence_numbers[index_merkle_tree_account as usize] =
MerkleTreeSequenceNumber {
tree_pubkey: *pubkey,
queue_pubkey: *pubkey,
tree_type: (TreeType::StateV1 as u64).into(),
seq: (tree.sequence_number() as u64 + 1).into(),
};
-
- let merkle_context = context
- .get_legacy_merkle_context(current_index as u8)
- .unwrap();
- hashed_merkle_tree = merkle_context.hashed_pubkey;
- rollover_fee = merkle_context.rollover_fee;
- is_batched = false;
context.get_index_or_insert(
account.merkle_tree_index(),
remaining_accounts,
"Output queue for V2 state trees (Merkle tree for V1 state trees)",
)?;
index_merkle_tree_account += 1;
next_account_index += 1;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Always set per-tree vars for the current branch
let merkle_context = context
    .get_legacy_merkle_context(current_index as u8)
    .unwrap();
hashed_merkle_tree = merkle_context.hashed_pubkey;
rollover_fee = merkle_context.rollover_fee;
is_batched = false;

// Only set up one-time metadata when this tree is first seen
if is_new {
    cpi_ix_data.output_sequence_numbers[index_merkle_tree_account as usize] =
        MerkleTreeSequenceNumber {
            tree_pubkey: *pubkey,
            queue_pubkey: *pubkey,
            tree_type: (TreeType::StateV1 as u64).into(),
            seq: (tree.sequence_number() as u64 + 1).into(),
        };
    context.get_index_or_insert(
        account.merkle_tree_index(),
        remaining_accounts,
        "Output queue for V2 state trees (Merkle tree for V1 state trees)",
    )?;
    index_merkle_tree_account += 1;
    next_account_index += 1;
}
🤖 Prompt for AI Agents
In programs/system/src/processor/create_outputs_cpi_data.rs around lines 125 to
150, the code only assigns hashed_merkle_tree, rollover_fee and is_batched
inside the if is_new block which leaves these variables stale when revisiting an
existing V1 tree; call context.get_legacy_merkle_context(current_index as u8)
unconditionally when taking the V1 tree branch and assign hashed_merkle_tree,
rollover_fee and is_batched from that result before using them, while keeping
the index/account insertion and index_merkle_tree_account/next_account_index
increment only inside the is_new block.
Summary by CodeRabbit
New Features
Refactor
Other