Add nf_conntrack usage analysis #8

@BSWANG

Description

In container platforms, iptables is generally used to NAT traffic between services and containers, so container hosts frequently suffer from nf_conntrack table-full issues, e.g.:

[Mon Apr 23 10:58:07 2018] net_ratelimit: 5988 callbacks suppressed
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:07 2018] nf_conntrack: table full, dropping packet
[Mon Apr 23 10:58:12 2018] net_ratelimit: 6464 callbacks suppressed

The container IPs or connection states that fill up the conntrack table can be found in /proc/net/nf_conntrack, or conntrack-tools can be used for some basic analysis, as sketched below.
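As a rough illustration of the kind of analysis this issue asks for, here is a minimal sketch (hypothetical, not tied to any existing code in this repo) that counts conntrack entries per source IP by parsing /proc/net/nf_conntrack. It assumes the nf_conntrack module is loaded and the process can read that file (usually requires root), and that each entry line contains a `src=` token for the original direction:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	f, err := os.Open("/proc/net/nf_conntrack")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Count entries per source IP of the original direction.
	counts := map[string]int{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each line is one conntrack entry; field positions vary by
		// protocol, so scan for the first "src=" token instead of
		// relying on a fixed column.
		for _, field := range strings.Fields(scanner.Text()) {
			if strings.HasPrefix(field, "src=") {
				counts[strings.TrimPrefix(field, "src=")]++
				break
			}
		}
	}

	// Sort source IPs by entry count, descending.
	type pair struct {
		ip string
		n  int
	}
	top := make([]pair, 0, len(counts))
	for ip, n := range counts {
		top = append(top, pair{ip, n})
	}
	sort.Slice(top, func(i, j int) bool { return top[i].n > top[j].n })

	// Print the top 10 consumers of the conntrack table.
	for i, p := range top {
		if i >= 10 {
			break
		}
		fmt.Printf("%-20s %d\n", p.ip, p.n)
	}
}
```

For ad-hoc inspection, `conntrack -L` from conntrack-tools dumps the same table and can be piped through similar per-IP aggregation.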
