
Commit 815276d

Author: Alexei Starovoitov
Merge branch 'bpf-replace-path-sensitive-with-path-insensitive-live-stack-analysis'
Eduard Zingerman says:

====================
bpf: replace path-sensitive with path-insensitive live stack analysis

Consider the following program, assuming checkpoint is created for a
state at instruction (3):

    1: call bpf_get_prandom_u32()
    2: *(u64 *)(r10 - 8) = 42
       --- checkpoint #1 ---
    3: if r0 != 0 goto +1
    4: exit
    5: r0 = *(u64 *)(r10 - 8)
    6: exit

The verifier processes this program by exploring two paths:
- 1 -> 2 -> 3 -> 4
- 1 -> 2 -> 3 -> 5 -> 6

When instruction (5) is processed, the current liveness tracking
mechanism moves up the register parent links and records a "read" mark
for stack slot -8 at checkpoint #1, stopping because of the "write"
mark recorded at instruction (2).

This patch set replaces the existing liveness tracking mechanism with
a path-insensitive data flow analysis. The program above is processed
as follows:
- a data structure representing live stack slots for instructions 1-6
  in frame #0 is allocated;
- when instruction (2) is processed, record that slot -8 is written at
  instruction (2) in frame #0;
- when instruction (5) is processed, record that slot -8 is read at
  instruction (5) in frame #0;
- when instruction (6) is processed, propagate the read mark for slot
  -8 up the control flow graph to instructions 3 and 2.

The key difference is that the new mechanism operates on a control flow
graph and associates read and write marks with pairs of
(call chain, instruction index). In contrast, the old mechanism
operates on verifier states and register parent links, associating
read and write marks with verifier states.

Motivation
==========

As it stands, this patch set makes liveness tracking slightly less
precise, as it no longer distinguishes individual program paths taken
by the verifier during symbolic execution. See the "Impact on
verification performance" section for details.
However, this change is intended as a stepping stone toward the
following goals:
- Short term, integrate precision tracking into liveness analysis and
  remove the following code:
  - verifier backedge states accumulation in is_state_visited();
  - most of the logic for precision tracking;
  - jump history tracking.
- Long term, help with more efficient loop verification handling.

Why integrate precision tracking?
---------------------------------

In a sense, precision tracking is very similar to liveness tracking.
The data flow equations for liveness tracking look as follows:

    live_after = U [state[s].live_before for s in insn_successors(i)]

    state[i].live_before =
        (live_after / state[i].must_write) U state[i].may_read

While the data flow equations for precision tracking look as follows:

    precise_after =
        U [state[s].precise_before for s in insn_successors(i)]

    // if some of the instruction outputs are precise,
    // assume its inputs to be precise
                      ⎧ state[i].may_read  if (state[i].may_write ∩ precise_after) ≠ ∅
    induced_precise = ⎨
                      ⎩ ∅                  otherwise

    state[i].precise_before =
        (precise_after / state[i].must_write) U induced_precise

Where:
- the `may_read` set represents a union of all possibly read slots
  (any slot in the `may_read` set might be read by the instruction);
- the `must_write` set represents an intersection of all possibly
  written slots (any slot in the `must_write` set is guaranteed to be
  written by the instruction);
- the `may_write` set represents a union of all possibly written slots
  (any slot in the `may_write` set might be written by the
  instruction).

This means that precision tracking can be implemented as a logical
extension of liveness tracking:
- track registers as well as stack slots;
- add bit masks to represent `precise_before` and `may_write`;
- add the above equations for `precise_before` computation;
- (linked registers require some additional consideration).
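The liveness equations above can be iterated to a fixed point over the control flow graph. Below is a minimal illustrative sketch (not the kernel implementation): the instruction numbering and the read/write marks come from the 6-instruction example at the top of the cover letter, everything else is simplified. The precision equations iterate in exactly the same worklist shape.

```python
# Fixed-point solver for the liveness data flow equations, applied to
# the cover-letter example: slot -8 written at (2), branch at (3),
# slot -8 read at (5). Slots are identified by their fp offsets.

succs = {1: [2], 2: [3], 3: [4, 5], 4: [], 5: [6], 6: []}
may_read = {5: {-8}}     # slot -8 is read at instruction (5)
must_write = {2: {-8}}   # slot -8 is definitely written at instruction (2)

live_before = {i: set() for i in succs}
changed = True
while changed:
    changed = False
    for i in succs:
        # live_after = union of live_before over all successors
        live_after = set().union(*[live_before[s] for s in succs[i]])
        new = (live_after - must_write.get(i, set())) | may_read.get(i, set())
        if new != live_before[i]:
            live_before[i] = new
            changed = True

# The read mark propagates up to (3); the write at (2) screens it off.
print(sorted(live_before[3]), sorted(live_before[2]))  # → [-8] []
```

Note how the result matches the description above: the read mark for slot -8 reaches instruction (3) but does not survive past the `must_write` at instruction (2).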
Such an extension would allow removal of:
- precision propagation logic in verifier.c:
  - backtrack_insn()
  - mark_chain_precision()
  - propagate_{precision,backedges}()
- push_jmp_history() and related data structures, which are only used
  by precision tracking;
- add_scc_backedge() and related backedge state accumulation in
  is_state_visited(), superseded by per-callchain function state
  accumulated by liveness analysis.

The hope here is that unifying liveness and precision tracking will
reduce the overall amount of code and make it easier to reason about.

How does this help with loops?
------------------------------

As it stands, this patch set shares the same deficiency as the current
liveness tracking mechanism. Liveness marks on stack slots cannot be
used to prune states when processing iterator-based loops:
- such states still have branches to be explored;
- meaning that not all stack slot reads have been discovered.

For example:

    1: while(iter_next()) {
    2:   if (...)
    3:     r0 = *(u64 *)(r10 - 8)
    4:   if (...)
    5:     r0 = *(u64 *)(r10 - 16)
    6:   ...
    7: }

For any checkpoint state created at instruction (1), it is only
possible to rely on read marks for slots fp[-8] and fp[-16] once all
child states of (1) have been explored. Thus, when the verifier
transitions from (7) to (1), it cannot rely on read marks.

However, sacrificing path-sensitivity makes it possible to run the
analysis defined in this patch set before the main verification pass,
if estimates for value ranges are available. E.g. for the following
program:

    1: while(iter_next()) {
    2:   r0 = r10
    3:   r0 += r2
    4:   r0 = *(u64 *)(r0 + 0)
    5:   ...
    6: }

If an estimate for the `r2` range is available before the main
verification pass, it can be used to populate read marks at
instruction (4) and run the liveness analysis, thus making
conservative liveness information available during loop verification.
Such estimates can be provided by some form of value range analysis.
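For the second loop example, pre-populating read marks from a range estimate might look like the following sketch. Everything here is hypothetical: the helper name, the example range `r2 ∈ [-16, -8]`, and the assumption that the load at (4) dereferences `r0 = r10 + r2` are illustrative choices, not code from this patch set.

```python
# Hypothetical sketch: convert an estimated offset range for r2 into
# conservative read marks for the stack slots possibly touched by the
# 8-byte load at instruction (4), i.e. *(u64 *)(r10 + r2).

def slots_for_range(lo, hi, access_size=8, slot_size=8):
    """Stack slots (as fp offsets, slot start addresses) possibly
    touched by an access at fp + off for any off in [lo, hi]."""
    first = (lo // slot_size) * slot_size                    # lowest slot start
    last = ((hi + access_size - 1) // slot_size) * slot_size # highest slot start
    return {off for off in range(first, last + 1, slot_size) if off < 0}

# r2 in [-16, -8]: the load at (4) may read fp[-16] or fp[-8], so both
# slots receive may_read marks before the liveness analysis runs.
may_read_at_4 = slots_for_range(-16, -8)
print(sorted(may_read_at_4))  # → [-16, -8]
```

The point of the sketch is only that a sound over-approximation of the accessed slots is enough: marking extra slots as read keeps the analysis conservative, while missing a slot would not be safe.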
Value range analysis is also necessary to address loop verification
from another angle: computing boundaries for loop induction variables
and iteration counts. The hope here is that the new liveness tracking
mechanism will support the broader goal of making loop verification
more efficient.

Validation
==========

The change was tested on three program sets:
- bpf selftests
- sched_ext
- Meta's internal set of programs

Commit [#8] enables a special mode where both the current and new
liveness analyses are enabled simultaneously. This mode signals an
error if the new algorithm considers a stack slot dead while the
current algorithm assumes it is alive. This mode was very useful for
debugging. At the time of posting, no such errors have been reported
for the above program sets.

[#8] "bpf: signal error if old liveness is more conservative than new"

Impact on memory consumption
============================

Debug patch [1] extends the kernel and veristat to count the amount of
memory allocated for storing analysis data. This patch is not included
in the submission.

The maximal observed impact for the above program sets is 2.6Mb.
Data below is shown in bytes.

For bpf selftests, the top 5 consumers look as follows:

    File                     Program           liveness mem
    -----------------------  ----------------  ------------
    pyperf180.bpf.o          on_event               2629740
    pyperf600.bpf.o          on_event               2287662
    pyperf100.bpf.o          on_event               1427022
    test_verif_scale3.bpf.o  balancer_ingress       1121283
    pyperf_subprogs.bpf.o    on_event                756900

For sched_ext, the top 5 consumers look as follows:

    File       Program           liveness mem
    ---------  ----------------  ------------
    bpf.bpf.o  lavd_enqueue            164686
    bpf.bpf.o  lavd_select_cpu         157393
    bpf.bpf.o  layered_enqueue         154817
    bpf.bpf.o  lavd_init               127865
    bpf.bpf.o  layered_dispatch        110129

For Meta's internal set of programs the top consumer is 1Mb.
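The dual-analysis consistency mode described in the Validation section reduces to a per-slot set comparison. The sketch below is illustrative only, not the kernel code from commit [#8]: the new analysis may keep extra slots alive (be more conservative), but must never declare dead a slot the current analysis keeps alive.

```python
# Illustrative sketch of the dual-analysis consistency check.
# old_alive / new_alive: sets of stack slots (fp offsets) that each
# liveness analysis considers alive at some program point.

def liveness_mismatch(old_alive, new_alive):
    """Slots the old analysis keeps alive but the new analysis
    considers dead; a non-empty result signals an error."""
    return old_alive - new_alive

# OK: new analysis is more conservative (superset of alive slots)
assert liveness_mismatch({-8}, {-8, -16}) == set()
# Error: new analysis considers fp[-8] dead while old keeps it alive
assert liveness_mismatch({-8, -16}, {-16}) == {-8}
```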
[1] kernel-patches/bpf@085588e

Impact on verification performance
==================================

Veristat results below are reported using the
`-f insns_pct>1 -f !insns<500` filter and the -t option
(BPF_F_TEST_STATE_FREQ flag).

master vs patch-set, selftests (out of ~4K programs)
----------------------------------------------------

    File                              Program                                 Insns (A)  Insns (B)  Insns (DIFF)
    --------------------------------  --------------------------------------  ---------  ---------  ---------------
    cpumask_success.bpf.o             test_global_mask_nested_deep_array_rcu       1622       1655  +33 (+2.03%)
    strobemeta_bpf_loop.bpf.o         on_event                                     2163       2684  +521 (+24.09%)
    test_cls_redirect.bpf.o           cls_redirect                                36001      42515  +6514 (+18.09%)
    test_cls_redirect_dynptr.bpf.o    cls_redirect                                 2299       2339  +40 (+1.74%)
    test_cls_redirect_subprogs.bpf.o  cls_redirect                                69545      78497  +8952 (+12.87%)
    test_l4lb_noinline.bpf.o          balancer_ingress                             2993       3084  +91 (+3.04%)
    test_xdp_noinline.bpf.o           balancer_ingress_v4                          3539       3616  +77 (+2.18%)
    test_xdp_noinline.bpf.o           balancer_ingress_v6                          3608       3685  +77 (+2.13%)

master vs patch-set, sched_ext (out of 148 programs)
----------------------------------------------------

    File       Program           Insns (A)  Insns (B)  Insns (DIFF)
    ---------  ----------------  ---------  ---------  ---------------
    bpf.bpf.o  chaos_dispatch         2257       2287  +30 (+1.33%)
    bpf.bpf.o  lavd_enqueue          20735      22101  +1366 (+6.59%)
    bpf.bpf.o  lavd_select_cpu       22100      24409  +2309 (+10.45%)
    bpf.bpf.o  layered_dispatch      25051      25606  +555 (+2.22%)
    bpf.bpf.o  p2dq_dispatch           961        990  +29 (+3.02%)
    bpf.bpf.o  rusty_quiescent         526        534  +8 (+1.52%)
    bpf.bpf.o  rusty_runnable          541        547  +6 (+1.11%)

Perf report
===========

In relative terms, the analysis does not consume much CPU time. For
example, here is a perf report collected for the pyperf180 selftest:

    # Children  Self      Command   Shared Object      Symbol
    # ........  ........  ........  .................  ........................................
    ...
         1.22%     1.22%  veristat  [kernel.kallsyms]  [k] bpf_update_live_stack
    ...
Changelog
=========

v1: https://lore.kernel.org/bpf/[email protected]/T/

v1 -> v2:
- compute_postorder() fixed to handle jumps with offset -1 (syzbot).
- is_state_visited() in patch #9 fixed access to uninitialized `err`
  (kernel test robot, Dan Carpenter).
- Selftests added.
- Fixed bug with write marks propagation from callee to caller, see
  verifier_live_stack.c:caller_stack_write() test case.
- Added a patch for __not_msg() annotation for test_loader based tests.

v2: https://lore.kernel.org/bpf/20250918-callchain-sensitive-liveness-v2-0-214ed2653eee@gmail.com/

v2 -> v3:
- Added __diag_ignore_all("-Woverride-init", ...) in liveness.c for
  bpf_insn_successors() (suggested by Alexei).

Signed-off-by: Eduard Zingerman <[email protected]>
====================

Link: https://patch.msgid.link/20250918-callchain-sensitive-liveness-v3-0-c3cd27bacc60@gmail.com
Signed-off-by: Alexei Starovoitov <[email protected]>
2 parents 8cd189e + fdcecdf; commit 815276d
27 files changed: +1718 -979 lines

Documentation/bpf/verifier.rst

Lines changed: 0 additions & 264 deletions
@@ -347,270 +347,6 @@ However, only the value of register ``r1`` is important to successfully finish
 verification. The goal of the liveness tracking algorithm is to spot this fact
 and figure out that both states are actually equivalent.
 
-Data structures
-~~~~~~~~~~~~~~~
-
-Liveness is tracked using the following data structures::
-
-  enum bpf_reg_liveness {
-	REG_LIVE_NONE = 0,
-	REG_LIVE_READ32 = 0x1,
-	REG_LIVE_READ64 = 0x2,
-	REG_LIVE_READ = REG_LIVE_READ32 | REG_LIVE_READ64,
-	REG_LIVE_WRITTEN = 0x4,
-	REG_LIVE_DONE = 0x8,
-  };
-
-  struct bpf_reg_state {
-	...
-	struct bpf_reg_state *parent;
-	...
-	enum bpf_reg_liveness live;
-	...
-  };
-
-  struct bpf_stack_state {
-	struct bpf_reg_state spilled_ptr;
-	...
-  };
-
-  struct bpf_func_state {
-	struct bpf_reg_state regs[MAX_BPF_REG];
-	...
-	struct bpf_stack_state *stack;
-  }
-
-  struct bpf_verifier_state {
-	struct bpf_func_state *frame[MAX_CALL_FRAMES];
-	struct bpf_verifier_state *parent;
-	...
-  }
-
-* ``REG_LIVE_NONE`` is an initial value assigned to ``->live`` fields upon new
-  verifier state creation;
-
-* ``REG_LIVE_WRITTEN`` means that the value of the register (or stack slot) is
-  defined by some instruction verified between this verifier state's parent and
-  verifier state itself;
-
-* ``REG_LIVE_READ{32,64}`` means that the value of the register (or stack slot)
-  is read by a some child state of this verifier state;
-
-* ``REG_LIVE_DONE`` is a marker used by ``clean_verifier_state()`` to avoid
-  processing same verifier state multiple times and for some sanity checks;
-
-* ``->live`` field values are formed by combining ``enum bpf_reg_liveness``
-  values using bitwise or.
-
-Register parentage chains
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-In order to propagate information between parent and child states, a *register
-parentage chain* is established. Each register or stack slot is linked to a
-corresponding register or stack slot in its parent state via a ``->parent``
-pointer. This link is established upon state creation in ``is_state_visited()``
-and might be modified by ``set_callee_state()`` called from
-``__check_func_call()``.
-
-The rules for correspondence between registers / stack slots are as follows:
-
-* For the current stack frame, registers and stack slots of the new state are
-  linked to the registers and stack slots of the parent state with the same
-  indices.
-
-* For the outer stack frames, only callee saved registers (r6-r9) and stack
-  slots are linked to the registers and stack slots of the parent state with the
-  same indices.
-
-* When function call is processed a new ``struct bpf_func_state`` instance is
-  allocated, it encapsulates a new set of registers and stack slots. For this
-  new frame, parent links for r6-r9 and stack slots are set to nil, parent links
-  for r1-r5 are set to match caller r1-r5 parent links.
-
-This could be illustrated by the following diagram (arrows stand for
-``->parent`` pointers)::
-
-      ...                    ; Frame #0, some instructions
-  --- checkpoint #0 ---
-  1 : r6 = 42                ; Frame #0
-  --- checkpoint #1 ---
-  2 : call foo()             ; Frame #0
-      ...                    ; Frame #1, instructions from foo()
-  --- checkpoint #2 ---
-      ...                    ; Frame #1, instructions from foo()
-  --- checkpoint #3 ---
-  exit                       ; Frame #1, return from foo()
-  3 : r1 = r6                ; Frame #0 <- current state
-
-             +-------------------------------+-------------------------------+
-             |           Frame #0            |           Frame #1            |
-  Checkpoint +-------------------------------+-------------------------------+
-  #0         | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+
-                ^     ^       ^       ^
-                |     |       |       |
-  Checkpoint +-------------------------------+
-  #1         | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+
-                      ^       ^       ^
-               _______|_______|_______|
-              |       |       |
-    nil nil   |       |       |                 nil nil
-     |   |    |       |       |                  |   |
-  Checkpoint +-------------------------------+-------------------------------+
-  #2         | r0 | r1-r5 | r6-r9 | fp-8 ... | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+-------------------------------+
-                      ^       ^       ^               ^       ^
-    nil nil           |       |       |               |       |
-     |   |            |       |       |               |       |
-  Checkpoint +-------------------------------+-------------------------------+
-  #3         | r0 | r1-r5 | r6-r9 | fp-8 ... | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+-------------------------------+
-                              ^       ^
-    nil nil                   |       |
-     |   |                    |       |
-  Current    +-------------------------------+
-  state      | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+
-                               \
-                                 r6 read mark is propagated via these links
-                                 all the way up to checkpoint #1.
-                                 The checkpoint #1 contains a write mark for r6
-                                 because of instruction (1), thus read propagation
-                                 does not reach checkpoint #0 (see section below).
-
-Liveness marks tracking
-~~~~~~~~~~~~~~~~~~~~~~~
-
-For each processed instruction, the verifier tracks read and written registers
-and stack slots. The main idea of the algorithm is that read marks propagate
-back along the state parentage chain until they hit a write mark, which 'screens
-off' earlier states from the read. The information about reads is propagated by
-function ``mark_reg_read()`` which could be summarized as follows::
-
-  mark_reg_read(struct bpf_reg_state *state, ...):
-      parent = state->parent
-      while parent:
-          if state->live & REG_LIVE_WRITTEN:
-              break
-          if parent->live & REG_LIVE_READ64:
-              break
-          parent->live |= REG_LIVE_READ64
-          state = parent
-          parent = state->parent
-
-Notes:
-
-* The read marks are applied to the **parent** state while write marks are
-  applied to the **current** state. The write mark on a register or stack slot
-  means that it is updated by some instruction in the straight-line code leading
-  from the parent state to the current state.
-
-* Details about REG_LIVE_READ32 are omitted.
-
-* Function ``propagate_liveness()`` (see section :ref:`read_marks_for_cache_hits`)
-  might override the first parent link. Please refer to the comments in the
-  ``propagate_liveness()`` and ``mark_reg_read()`` source code for further
-  details.
-
-Because stack writes could have different sizes ``REG_LIVE_WRITTEN`` marks are
-applied conservatively: stack slots are marked as written only if write size
-corresponds to the size of the register, e.g. see function ``save_register_state()``.
-
-Consider the following example::
-
-  0: (*u64)(r10 - 8) = 0   ; define 8 bytes of fp-8
-  --- checkpoint #0 ---
-  1: (*u32)(r10 - 8) = 1   ; redefine lower 4 bytes
-  2: r1 = (*u32)(r10 - 8)  ; read lower 4 bytes defined at (1)
-  3: r2 = (*u32)(r10 - 4)  ; read upper 4 bytes defined at (0)
-
-As stated above, the write at (1) does not count as ``REG_LIVE_WRITTEN``. Should
-it be otherwise, the algorithm above wouldn't be able to propagate the read mark
-from (3) to checkpoint #0.
-
-Once the ``BPF_EXIT`` instruction is reached ``update_branch_counts()`` is
-called to update the ``->branches`` counter for each verifier state in a chain
-of parent verifier states. When the ``->branches`` counter reaches zero the
-verifier state becomes a valid entry in a set of cached verifier states.
-
-Each entry of the verifier states cache is post-processed by a function
-``clean_live_states()``. This function marks all registers and stack slots
-without ``REG_LIVE_READ{32,64}`` marks as ``NOT_INIT`` or ``STACK_INVALID``.
-Registers/stack slots marked in this way are ignored in function ``stacksafe()``
-called from ``states_equal()`` when a state cache entry is considered for
-equivalence with a current state.
-
-Now it is possible to explain how the example from the beginning of the section
-works::
-
-  0: call bpf_get_prandom_u32()
-  1: r1 = 0
-  2: if r0 == 0 goto +1
-  3: r0 = 1
-  --- checkpoint[0] ---
-  4: r0 = r1
-  5: exit
-
-* At instruction #2 branching point is reached and state ``{ r0 == 0, r1 == 0, pc == 4 }``
-  is pushed to states processing queue (pc stands for program counter).
-
-* At instruction #4:
-
-  * ``checkpoint[0]`` states cache entry is created: ``{ r0 == 1, r1 == 0, pc == 4 }``;
-  * ``checkpoint[0].r0`` is marked as written;
-  * ``checkpoint[0].r1`` is marked as read;
-
-* At instruction #5 exit is reached and ``checkpoint[0]`` can now be processed
-  by ``clean_live_states()``. After this processing ``checkpoint[0].r1`` has a
-  read mark and all other registers and stack slots are marked as ``NOT_INIT``
-  or ``STACK_INVALID``
-
-* The state ``{ r0 == 0, r1 == 0, pc == 4 }`` is popped from the states queue
-  and is compared against a cached state ``{ r1 == 0, pc == 4 }``, the states
-  are considered equivalent.
-
-.. _read_marks_for_cache_hits:
-
-Read marks propagation for cache hits
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Another point is the handling of read marks when a previously verified state is
-found in the states cache. Upon cache hit verifier must behave in the same way
-as if the current state was verified to the program exit. This means that all
-read marks, present on registers and stack slots of the cached state, must be
-propagated over the parentage chain of the current state. Example below shows
-why this is important. Function ``propagate_liveness()`` handles this case.
-
-Consider the following state parentage chain (S is a starting state, A-E are
-derived states, -> arrows show which state is derived from which)::
-
-                  r1 read
-      <-------------                A[r1] == 0
-                                    C[r1] == 0
-  S ---> A ---> B ---> exit         E[r1] == 1
-         |
-         ` ---> C ---> D
-         |
-         ` ---> E      ^
-                       |___ suppose all these
-                ^           states are at insn #Y
-                |
-                suppose all these
-                states are at insn #X
-
-* Chain of states ``S -> A -> B -> exit`` is verified first.
-
-* While ``B -> exit`` is verified, register ``r1`` is read and this read mark is
-  propagated up to state ``A``.
-
-* When chain of states ``C -> D`` is verified the state ``D`` turns out to be
-  equivalent to state ``B``.
-
-* The read mark for ``r1`` has to be propagated to state ``C``, otherwise state
-  ``C`` might get mistakenly marked as equivalent to state ``E`` even though
-  values for register ``r1`` differ between ``C`` and ``E``.
-
 Understanding eBPF verifier messages
 ====================================

include/linux/bpf_verifier.h

Lines changed: 26 additions & 26 deletions
@@ -26,28 +26,6 @@
 /* Patch buffer size */
 #define INSN_BUF_SIZE 32
 
-/* Liveness marks, used for registers and spilled-regs (in stack slots).
- * Read marks propagate upwards until they find a write mark; they record that
- * "one of this state's descendants read this reg" (and therefore the reg is
- * relevant for states_equal() checks).
- * Write marks collect downwards and do not propagate; they record that "the
- * straight-line code that reached this state (from its parent) wrote this reg"
- * (and therefore that reads propagated from this state or its descendants
- * should not propagate to its parent).
- * A state with a write mark can receive read marks; it just won't propagate
- * them to its parent, since the write mark is a property, not of the state,
- * but of the link between it and its parent. See mark_reg_read() and
- * mark_stack_slot_read() in kernel/bpf/verifier.c.
- */
-enum bpf_reg_liveness {
-	REG_LIVE_NONE = 0, /* reg hasn't been read or written this branch */
-	REG_LIVE_READ32 = 0x1, /* reg was read, so we're sensitive to initial value */
-	REG_LIVE_READ64 = 0x2, /* likewise, but full 64-bit content matters */
-	REG_LIVE_READ = REG_LIVE_READ32 | REG_LIVE_READ64,
-	REG_LIVE_WRITTEN = 0x4, /* reg was written first, screening off later reads */
-	REG_LIVE_DONE = 0x8, /* liveness won't be updating this register anymore */
-};
-
 #define ITER_PREFIX "bpf_iter_"
 
 enum bpf_iter_state {
@@ -212,8 +190,6 @@ struct bpf_reg_state {
 	 * allowed and has the same effect as bpf_sk_release(sk).
 	 */
 	u32 ref_obj_id;
-	/* parentage chain for liveness checking */
-	struct bpf_reg_state *parent;
 	/* Inside the callee two registers can be both PTR_TO_STACK like
 	 * R1=fp-8 and R2=fp-8, but one of them points to this function stack
 	 * while another to the caller's stack. To differentiate them 'frameno'
@@ -226,7 +202,6 @@ struct bpf_reg_state {
 	 * patching which only happens after main verification finished.
 	 */
 	s32 subreg_def;
-	enum bpf_reg_liveness live;
 	/* if (!precise && SCALAR_VALUE) min/max/tnum don't affect safety */
 	bool precise;
 };
@@ -445,6 +420,7 @@ struct bpf_verifier_state {
 
 	bool speculative;
 	bool in_sleepable;
+	bool cleaned;
 
 	/* first and last insn idx of this verifier state */
 	u32 first_insn_idx;
@@ -665,6 +641,7 @@ struct bpf_subprog_info {
 	/* 'start' has to be the first field otherwise find_subprog() won't work */
 	u32 start; /* insn idx of function entry point */
 	u32 linfo_idx; /* The idx to the main_prog->aux->linfo */
+	u32 postorder_start; /* The idx to the env->cfg.insn_postorder */
 	u16 stack_depth; /* max. stack depth used by this function */
 	u16 stack_extra;
 	/* offsets in range [stack_depth .. fastcall_stack_off)
@@ -744,6 +721,8 @@ struct bpf_scc_info {
 	struct bpf_scc_visit visits[];
 };
 
+struct bpf_liveness;
+
 /* single container for all structs
  * one verifier_env per bpf_check() call
  */
@@ -794,7 +773,10 @@ struct bpf_verifier_env {
 	struct {
 		int *insn_state;
 		int *insn_stack;
-		/* vector of instruction indexes sorted in post-order */
+		/*
+		 * vector of instruction indexes sorted in post-order, grouped by subprogram,
+		 * see bpf_subprog_info->postorder_start.
+		 */
 		int *insn_postorder;
 		int cur_stack;
 		/* current position in the insn_postorder vector */
@@ -842,6 +824,7 @@ struct bpf_verifier_env {
 	struct bpf_insn insn_buf[INSN_BUF_SIZE];
 	struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
 	struct bpf_scc_callchain callchain_buf;
+	struct bpf_liveness *liveness;
 	/* array of pointers to bpf_scc_info indexed by SCC id */
 	struct bpf_scc_info **scc_info;
 	u32 scc_cnt;
@@ -1065,4 +1048,21 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_verifie
 void print_insn_state(struct bpf_verifier_env *env, const struct bpf_verifier_state *vstate,
 		      u32 frameno);
 
+struct bpf_subprog_info *bpf_find_containing_subprog(struct bpf_verifier_env *env, int off);
+int bpf_jmp_offset(struct bpf_insn *insn);
+int bpf_insn_successors(struct bpf_prog *prog, u32 idx, u32 succ[2]);
+void bpf_fmt_stack_mask(char *buf, ssize_t buf_sz, u64 stack_mask);
+bool bpf_calls_callback(struct bpf_verifier_env *env, int insn_idx);
+
+int bpf_stack_liveness_init(struct bpf_verifier_env *env);
+void bpf_stack_liveness_free(struct bpf_verifier_env *env);
+int bpf_update_live_stack(struct bpf_verifier_env *env);
+int bpf_mark_stack_read(struct bpf_verifier_env *env, u32 frameno, u32 insn_idx, u64 mask);
+void bpf_mark_stack_write(struct bpf_verifier_env *env, u32 frameno, u64 mask);
+int bpf_reset_stack_write_marks(struct bpf_verifier_env *env, u32 insn_idx);
+int bpf_commit_stack_write_marks(struct bpf_verifier_env *env);
+int bpf_live_stack_query_init(struct bpf_verifier_env *env, struct bpf_verifier_state *st);
+bool bpf_stack_slot_alive(struct bpf_verifier_env *env, u32 frameno, u32 spi);
+void bpf_reset_live_stack_callchain(struct bpf_verifier_env *env);
+
 #endif /* _LINUX_BPF_VERIFIER_H */

kernel/bpf/Makefile

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ cflags-nogcse-$(CONFIG_X86)$(CONFIG_CC_IS_GCC) := -fno-gcse
 endif
 CFLAGS_core.o += -Wno-override-init $(cflags-nogcse-yy)
 
-obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o token.o
+obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o token.o liveness.o
 obj-$(CONFIG_BPF_SYSCALL) += bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o bloom_filter.o
 obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
