
Commit e2ca070

jahurley authored and davem330 committed
net: sched: protect against stack overflow in TC act_mirred
TC hooks allow the application of filters and actions to packets at both ingress and egress of the network stack. It is possible, with poor configuration, that this can produce loops whereby an ingress hook calls a mirred egress action that has an egress hook that redirects back to the first ingress, etc.

The TC core classifier protects against loops when doing reclassifies, but there is no protection against a packet looping between multiple hooks and recursively calling act_mirred. This can lead to stack overflow panics.

Add a per-CPU counter to act_mirred that is incremented for each recursive call of the action function when processing a packet. If the limit is exceeded, the packet is dropped and the per-CPU counter is reset.

Note that this patch does not protect against loops in TC datapaths. Its aim is to prevent stack overflow kernel panics that can be a consequence of such loops.

Signed-off-by: John Hurley <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
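The guard in this patch follows a common pattern: bump a counter on entry, check it against a limit, and decrement it on every exit path, so a runaway redirect loop is cut short instead of exhausting the stack. Below is a minimal userspace sketch of that pattern, not the kernel code itself; the names RECURSION_LIMIT, act_mirred_like, and the plain static counter are illustrative stand-ins for the per-CPU mirred_rec_level used in the patch.

/* Userspace analogue of the act_mirred recursion guard (illustrative only). */
#include <stdio.h>

#define RECURSION_LIMIT 4

static unsigned int rec_level;  /* stands in for the kernel's per-CPU counter */

enum act_result { ACT_OK, ACT_SHOT };

/* Each "redirect" re-enters the action, mimicking a mirred loop. */
static enum act_result act_mirred_like(int redirects_left)
{
        enum act_result ret;

        if (++rec_level > RECURSION_LIMIT) {
                fprintf(stderr, "recursion limit hit, dropping packet\n");
                rec_level--;
                return ACT_SHOT;
        }

        ret = redirects_left ? act_mirred_like(redirects_left - 1) : ACT_OK;

        rec_level--;            /* balance the increment on every exit path */
        return ret;
}

int main(void)
{
        /* A loop deeper than the limit is dropped instead of overflowing the stack. */
        printf("result: %s\n",
               act_mirred_like(10) == ACT_SHOT ? "dropped" : "forwarded");
        return 0;
}

With a limit of 4, the ten-deep "loop" above is dropped at depth five, which mirrors how the patch converts a potential stack overflow into a rate-limited warning and a TC_ACT_SHOT verdict.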
1 parent 720f22f commit e2ca070

File tree

1 file changed: +14 -0 lines changed


net/sched/act_mirred.c

Lines changed: 14 additions & 0 deletions
@@ -27,6 +27,9 @@
 static LIST_HEAD(mirred_list);
 static DEFINE_SPINLOCK(mirred_list_lock);
 
+#define MIRRED_RECURSION_LIMIT 4
+static DEFINE_PER_CPU(unsigned int, mirred_rec_level);
+
 static bool tcf_mirred_is_act_redirect(int action)
 {
         return action == TCA_EGRESS_REDIR || action == TCA_INGRESS_REDIR;
@@ -210,13 +213,22 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
         struct sk_buff *skb2 = skb;
         bool m_mac_header_xmit;
         struct net_device *dev;
+        unsigned int rec_level;
         int retval, err = 0;
         bool use_reinsert;
         bool want_ingress;
         bool is_redirect;
         int m_eaction;
         int mac_len;
 
+        rec_level = __this_cpu_inc_return(mirred_rec_level);
+        if (unlikely(rec_level > MIRRED_RECURSION_LIMIT)) {
+                net_warn_ratelimited("Packet exceeded mirred recursion limit on dev %s\n",
+                                     netdev_name(skb->dev));
+                __this_cpu_dec(mirred_rec_level);
+                return TC_ACT_SHOT;
+        }
+
         tcf_lastuse_update(&m->tcf_tm);
         bstats_cpu_update(this_cpu_ptr(m->common.cpu_bstats), skb);
 
@@ -278,6 +290,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
                         res->ingress = want_ingress;
                         res->qstats = this_cpu_ptr(m->common.cpu_qstats);
                         skb_tc_reinsert(skb, res);
+                        __this_cpu_dec(mirred_rec_level);
                         return TC_ACT_CONSUMED;
                 }
         }
@@ -293,6 +306,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
                 if (tcf_mirred_is_act_redirect(m_eaction))
                         retval = TC_ACT_SHOT;
         }
+        __this_cpu_dec(mirred_rec_level);
 
         return retval;
 }
