
Commit 21ef3d0

forward fix uninitialized param
Differential Revision: D75536694
Pull Request resolved: #11218
1 parent 2e9c71c commit 21ef3d0

File tree

1 file changed: +4 −2 lines

  • examples/qualcomm/oss_scripts/llama/runner

examples/qualcomm/oss_scripts/llama/runner/runner.cpp

Lines changed: 4 additions & 2 deletions
@@ -152,8 +152,10 @@ Error Runner::load() {

   // Use attention mask length to retrieve AR length and context length
   // Cache len equals to context_len - ar_len
-  int32_t prompt_processor_ar_len, token_generator_ar_len, max_cache_len,
-      max_ar_len;
+  int32_t prompt_processor_ar_len = 0;
+  int32_t token_generator_ar_len = 0;
+  int32_t max_cache_len = 0;
+  int32_t max_ar_len = 0;
   // atten mask: [1, AR-N, CL]
   auto atten_mask_meta_token = method_meta->input_tensor_meta(1);
   token_generator_ar_len = atten_mask_meta_token->sizes()[1];
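
The fix replaces one uninitialized multi-variable declaration with four zero-initialized declarations, so the AR/cache lengths hold a defined value even if some later assignment path never writes one of them. A minimal standalone sketch of the pattern is below; the helper function, branch conditions, and the value 128 are hypothetical stand-ins for the runner's real method_meta tensor-shape lookups, not the actual runner logic.

#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for reading an AR length out of an attention-mask
// tensor shape (the real runner uses method_meta->input_tensor_meta(...)).
int32_t query_ar_len_from_mask() {
  return 128;  // assumed shape value for illustration only
}

int main() {
  // Before the fix: `int32_t prompt_processor_ar_len, token_generator_ar_len;`
  // left the values indeterminate, so reading one on a path that skipped the
  // assignment below was undefined behavior.
  int32_t prompt_processor_ar_len = 0;  // after the fix: defined default
  int32_t token_generator_ar_len = 0;

  bool has_token_mask = true;
  bool has_prompt_mask = false;  // e.g. a configuration with no prompt processor

  if (has_token_mask) {
    token_generator_ar_len = query_ar_len_from_mask();
  }
  if (has_prompt_mask) {
    prompt_processor_ar_len = query_ar_len_from_mask();
  }

  // Safe to read both: an unassigned length falls back to 0 instead of garbage.
  std::printf("prompt AR len: %d, token AR len: %d\n",
              static_cast<int>(prompt_processor_ar_len),
              static_cast<int>(token_generator_ar_len));
  return 0;
}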
