From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>,
"David S. Miller" <davem@davemloft.net>,
netdev@vger.kernel.org, Florian Westphal <fw@strlen.de>,
Daniel Borkmann <dborkman@redhat.com>
Subject: Re: [net-next PATCH 3/3] net: frag queue per hash bucket locking
Date: Thu, 04 Apr 2013 00:11:00 +0200 [thread overview]
Message-ID: <1365027060.12728.30.camel@localhost> (raw)
In-Reply-To: <1364583693.3232.257.camel@localhost>
On Fri, 2013-03-29 at 20:01 +0100, Jesper Dangaard Brouer wrote:
> On Fri, 2013-03-29 at 01:33 +0100, Hannes Frederic Sowa wrote:
> > On Thu, Mar 28, 2013 at 04:39:42PM -0700, Eric Dumazet wrote:
> > > On Fri, 2013-03-29 at 00:30 +0100, Hannes Frederic Sowa wrote:
> > > > On Thu, Mar 28, 2013 at 01:22:44PM -0700, Eric Dumazet wrote:
> > > > > On Thu, 2013-03-28 at 19:57 +0100, Hannes Frederic Sowa wrote:
> > > > >
> > > > > > I assume that it has to do with the usage of this code in
> > > > > > ipv6/netfilter/nf_conntrack_reasm.c, which could be invoked from process
> > > > > > context, if I read it correctly.
> > > > >
> > > > > Then there would be a possible deadlock in current code.
> > > >
> > > > Netfilter currently does a local_bh_disable() before entering inet_fragment
> > > > (and later enables it, again).
> > > >
> > >
> > > Good, so no need for the _bh() as I suspected.
> >
> > Ack.
> >
> > I replaced the _bh spin_locks with plain spinlocks and tested the code
> > by sending and receiving fragments (netfilter and reassembly
> > logic) with lockdep enabled, and didn't get any splats. Looks good so far.
>
> Well, it's great to see that you are working on my patch
> proposal while I'm on Easter vacation ;-) Much appreciated.
> I'm officially back from vacation Tuesday, and I'll repost then (after
> testing it on my 10G testlab).
When I rebased patch-03 (on top of net-next commit a210576c) and
removed the _bh spinlocks, I saw a performance regression. BUT this
was caused by some unrelated change in between. See the tests below.
Test (A) is what I reported before for patch-02, accepted in commit 1b5ab0de.
Test (B) is a verification retest of commit 1b5ab0de, corresponding to patch-02.
Test (C) is what I reported before for patch-03.
Test (D) is net-next master HEAD (commit a210576c), which reveals some
(unknown) performance regression (compared against test (B)) and
functions as the new baseline.
(#) Test-type: 20G64K 20G3F 20G64K+DoS 20G3F+DoS 20G64K+MQ 20G3F+MQ
---------- ------- ------- ---------- --------- -------- -------
(A) Patch-02 : 18848.7 13230.1 4103.04 5310.36 130.0 440.2
(B) 1b5ab0de : 18841.5 13156.8 4101.08 5314.57 129.0 424.2
(C) Patch-03v1: 18838.0 13490.5 4405.11 6814.72 196.6 461.6
(D) a210576c : 18321.5 11250.4 3635.34 5160.13 119.1 405.2
(E) with _bh : 17247.3 11492.6 3994.74 6405.29 166.7 413.6
(F) without bh: 17471.3 11298.7 3818.05 6102.11 165.7 406.3
Tests (E) and (F) are patch-03, with and without the _bh spinlocks.
I cannot explain the slowdown for 20G64K (but it's an artificial
lab test, so I'm not worried). The other results do show
improvements, and the "with _bh" version in test (E) is slightly better.
p.s. Damn, it took a bit longer than expected to test this fairly small
correction to the patch...
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer
Thread overview: 29+ messages
2013-03-27 15:54 [net-next PATCH 0/3] net: frag performance followup Jesper Dangaard Brouer
2013-03-27 15:55 ` [net-next PATCH 1/3] net: frag, avoid several CPUs grabbing same frag queue during LRU evictor loop Jesper Dangaard Brouer
2013-03-27 16:14 ` Eric Dumazet
2013-03-27 17:10 ` David Miller
2013-03-27 15:55 ` [net-next PATCH 2/3] net: use the frag lru_lock to protect netns_frags.nqueues update Jesper Dangaard Brouer
2013-03-27 16:21 ` Eric Dumazet
2013-03-27 17:10 ` David Miller
2013-03-27 15:56 ` [net-next PATCH 3/3] net: frag queue per hash bucket locking Jesper Dangaard Brouer
2013-03-27 17:25 ` Eric Dumazet
2013-03-28 18:57 ` Hannes Frederic Sowa
2013-03-28 19:03 ` David Miller
2013-03-28 19:10 ` Hannes Frederic Sowa
2013-03-28 19:19 ` David Miller
2013-03-28 20:22 ` Eric Dumazet
2013-03-28 23:30 ` Hannes Frederic Sowa
2013-03-28 23:39 ` Eric Dumazet
2013-03-29 0:33 ` Hannes Frederic Sowa
2013-03-29 19:01 ` Jesper Dangaard Brouer
2013-03-29 19:05 ` Eric Dumazet
2013-03-29 19:22 ` David Miller
2013-04-02 15:23 ` Jesper Dangaard Brouer
2013-04-03 22:11 ` Jesper Dangaard Brouer [this message]
2013-04-04 7:52 ` [net-next PATCH V2] " Jesper Dangaard Brouer
2013-04-04 9:03 ` Hannes Frederic Sowa
2013-04-04 9:27 ` Jesper Dangaard Brouer
2013-04-04 9:38 ` [net-next PATCH V3] " Jesper Dangaard Brouer
2013-04-04 9:58 ` Hannes Frederic Sowa
2013-04-04 16:24 ` Eric Dumazet
2013-04-04 21:38 ` David Miller