From: Patrick McHardy <kaber@trash.net>
To: "Timo Teräs" <timo.teras@iki.fi>
Cc: netfilter-devel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: bad nat connection tracking performance with ip_gre
Date: Tue, 18 Aug 2009 16:58:40 +0200
Message-ID: <4A8AC1A0.6000602@trash.net>
In-Reply-To: <4A8AB25A.4000105@iki.fi>
Timo Teräs wrote:
> Patrick McHardy wrote:
>> Timo Teräs wrote:
>>> LOCALLY GENERATED PACKET, hogs CPU
>>> ----------------------------------
>>>
>>> IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42 LEN=1344
>>> TOS=0x00 PREC=0x00 TTL=8 ID=41664 DF PROTO=UDP SPT=47920
>>> DPT=1234 LEN=1324 UID=1007 GID=1007
>>> 1. raw:OUTPUT
>>> 2. mangle:OUTPUT
>>> 3. filter:OUTPUT
>>> 4. mangle:POSTROUTING
>>>
>>
>> Please include the complete output, I need to see the devices logged
>> at each hook.
>
> The devices are identical for each hook grouped under the same line.
>
> Here are the interesting lines from one packet:
>
> Generation:
>
> raw:OUTPUT:policy:2 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42
>   LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977
>   DPT=1234 LEN=1324 UID=1007 GID=1007
> mangle:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42
>   LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977
>   DPT=1234 LEN=1324 UID=1007 GID=1007
>
> (the nat hook is called for the initial packet only):
> nat:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42
>   LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36593 DF PROTO=UDP SPT=33977
>   DPT=1234 LEN=1324 UID=1007 GID=1007
>
> filter:OUTPUT:policy:1 IN= OUT=eth1 SRC=10.252.5.1 DST=239.255.12.42
>   LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF PROTO=UDP SPT=33977
>   DPT=1234 LEN=1324 UID=1007 GID=1007
> mangle:POSTROUTING:policy:1 IN= OUT=eth1 SRC=10.252.5.1
>   DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF
>   PROTO=UDP SPT=33977 DPT=1234 LEN=1324
> mangle:POSTROUTING:policy:1 IN= OUT=eth1 SRC=10.252.5.1
>   DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF
>   PROTO=UDP SPT=33977 DPT=1234 LEN=1324 UID=1007 GID=1007
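Right, that's expected: the nat table is only traversed for the packet
that creates a new conntrack entry; every following packet is translated
from the binding cached in the conntrack. Roughly the logic in
nf_nat_fn() (a simplified sketch from memory, not verbatim kernel code):

	ct = nf_ct_get(skb, &ctinfo);
	if (!ct)
		return NF_ACCEPT;

	switch (ctinfo) {
	case IP_CT_NEW:
		/* first packet of a connection: traverse the nat table
		 * once and store the chosen binding in the conntrack */
		if (!nf_nat_initialized(ct, maniptype))
			ret = nf_nat_rule_find(skb, hooknum, in, out, ct);
		break;
	default:
		/* later packets: no rule traversal at all */
		break;
	}
	/* apply the cached binding */
	return nf_nat_packet(ct, ctinfo, hooknum, skb);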
> Looped back by multicast routing:
>
> raw:PREROUTING:policy:1 IN=eth1 OUT= MAC= SRC=10.252.5.1
>   DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF
>   PROTO=UDP SPT=33977 DPT=1234 LEN=1324
> mangle:PREROUTING:policy:1 IN=eth1 OUT= MAC= SRC=10.252.5.1
>   DST=239.255.12.42 LEN=1344 TOS=0x00 PREC=0x00 TTL=8 ID=36594 DF
>   PROTO=UDP SPT=33977 DPT=1234 LEN=1324
> The CPU hogging happens somewhere below this, since the more
> multicast destinations I have, the more CPU it takes.
So you're sending to multiple destinations? That obviously increases
the time spent in netfilter and the rest of the networking stack.
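Each additional destination means another clone of the skb going through
the FORWARD hook and the whole output path. Roughly what the multicast
forwarding loop in ip_mr_forward() does (a simplified sketch from memory
of net/ipv4/ipmr.c, not verbatim):

	/* one transmit per virtual interface in the mfc cache entry */
	for (ct = cache->mfc_un.res.maxvif - 1;
	     ct >= cache->mfc_un.res.minvif; ct--) {
		if (ip_hdr(skb)->ttl > cache->mfc_un.res.ttls[ct]) {
			struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);

			if (skb2)
				/* each clone separately traverses
				 * NF_INET_FORWARD and the output path */
				ipmr_queue_xmit(skb2, cache, ct);
		}
	}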
> Multicast forwarded (I hacked this logging into the code, but a
> similar dump happens on local sendto()):
>
> Actually, now that I think about it, we should have the inner IP
> contents here, not the incomplete outer header yet. So apparently
> ipgre_header() messes up the network_header position.
It shouldn't even have been called at this point. Please retry this
without your changes.
> mangle:FORWARD:policy:1 IN=eth1 OUT=gre1 SRC=0.0.0.0 DST=re.mo.te.ip
>   LEN=0 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
> filter:FORWARD:rule:2 IN=eth1 OUT=gre1 SRC=0.0.0.0 DST=re.mo.te.ip
>   LEN=0 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
This looks really broken. Why is the protocol already 47 before it even
reaches the gre tunnel?
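The values in that dump (SRC=0.0.0.0, LEN=0, ID=0, PROTO=47) look
consistent with the tunnel's template header, i.e. the network header
already points at a half-built outer header when LOG runs. For
reference, ipgre_header() does roughly this (simplified sketch from
memory, not verbatim):

	struct ip_tunnel *t = netdev_priv(dev);
	/* prepend the tunnel's template IP header + GRE words */
	struct iphdr *iph = (struct iphdr *)skb_push(skb, t->hlen);
	__be16 *p = (__be16 *)(iph + 1);

	memcpy(iph, &t->parms.iph, sizeof(struct iphdr));
	p[0] = t->parms.o_flags;
	p[1] = htons(type);

	/* tot_len/id are left as 0 here; the real values are only
	 * filled in by the xmit path, which is why LOG would show
	 * LEN=0 and ID=0 if it reads this header */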
> ip_gre xmit sends out:
There should be a POSTROUTING hook here.
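For locally generated packets POSTROUTING is invoked from ip_output()
after the routing decision, roughly (simplified from memory):

	int ip_output(struct sk_buff *skb)
	{
		struct net_device *dev = skb->dst->dev;

		skb->dev = dev;
		skb->protocol = htons(ETH_P_IP);

		/* NF_INET_POST_ROUTING runs here for every outgoing
		 * packet, unless NAT rerouted it */
		return NF_HOOK_COND(PF_INET, NF_INET_POST_ROUTING, skb,
				    NULL, dev, ip_finish_output,
				    !(IPCB(skb)->flags & IPSKB_REROUTED));
	}

so a mangle:POSTROUTING entry should show up after filter:OUTPUT in the
dump below.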
> raw:OUTPUT:rule:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip
>   LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
> raw:OUTPUT:policy:2 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip
>   LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
> mangle:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip
>   LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
>
> (nat hook for initial packets):
> nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip
>   LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
>
> filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=lo.ca.l.ip DST=re.mo.te.ip
>   LEN=1372 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=47
> - Timo
>