From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>,
Florian Westphal <fw@strlen.de>,
netdev@vger.kernel.org, Thomas Graf <tgraf@suug.ch>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
Cong Wang <amwang@redhat.com>,
Herbert Xu <herbert@gondor.hengli.com.au>
Subject: Re: [net-next PATCH V3-evictor] net: frag evictor, avoid killing warm frag queues
Date: Thu, 06 Dec 2012 13:26:00 +0100 [thread overview]
Message-ID: <1354796760.20888.217.camel@localhost> (raw)
In-Reply-To: <1354699462.20888.207.camel@localhost>
On Wed, 2012-12-05 at 10:24 +0100, Jesper Dangaard Brouer wrote:
>
> The previous evictor patch of letting new fragments enter worked
> amazingly well. But I suspect this might also be related to a
> bug/problem in the evictor loop (which was being hidden by that
> patch).
The evictor loop does not contain a bug, just an SMP scalability issue
(which is fixed by later patches). The first evictor patch, which
does not let new fragments enter, only worked amazingly well because
it hides this (and other) scalability issues, and implicitly allows
frags already "in" to exceed the mem limit for 1 jiffy. This
invalidates the patch, as the improvement was only a side effect.
> My new *theory* is that the evictor loop will be looping too much if
> it finds a fragment which is INET_FRAG_COMPLETE ... in that case, we
> don't advance the LRU list, and thus will pick up the exact same
> inet_frag_queue again in the loop... to get out of the loop we need
> another CPU or packet to change the LRU list for us... I'll test that
> theory... (it could also be CPUs fighting over the same LRU head
> element that causes this) ... more to come...
The above theory does happen, but it does not cause excessive looping.
The CPUs are just fighting over who gets to free the inet_frag_queue
and who gets to unlink it from its data structures (resulting, I
guess, in cache line bouncing between CPUs).
CPUs are fighting for the same LRU head (inet_frag_queue) element,
which is bad for scalability. We could fix this by unlinking the
element once a CPU grabs it, but it would require us to change a
read_lock to a write_lock, so we might not gain much performance.
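For illustration, here is a minimal user-space sketch of that
"unlink once grabbed" idea. The names (frag_q, FQ_COMPLETE) and the
accounting are my own simplification, not the kernel's actual
inet_frag API, and locking/freeing are omitted:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical user-space model, not the kernel's inet_frag code. */
enum { FQ_COMPLETE = 1 };

struct frag_q {
	int flags;
	int mem;
	struct frag_q *next;	/* LRU order: head is the oldest entry */
};

/*
 * Evict from the LRU head until enough memory has been released.
 * The head is unlinked unconditionally before inspection, so the
 * loop always advances -- even when the entry is already COMPLETE --
 * and no two CPUs can grab the same element twice.
 */
static int evict(struct frag_q **lru_head, int mem_to_free)
{
	int freed = 0;

	while (*lru_head && freed < mem_to_free) {
		struct frag_q *q = *lru_head;

		*lru_head = q->next;		/* unlink: always advance */
		if (!(q->flags & FQ_COMPLETE))
			freed += q->mem;	/* COMPLETE entries already uncharged */
	}
	return freed;
}
```

In the real code the unlink could not happen under the existing
read_lock, which is exactly why this fix would need the write_lock
mentioned above.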
I already (implicitly) fix this in a later patch, where I'm moving the
LRU lists to be per CPU. So I don't know if it's worth fixing here.
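A rough sketch of what per-CPU LRU lists buy us (struct and field
names here are guesses for illustration, not the actual layout from
the later patch):

```c
#include <stddef.h>

#define NR_CPUS 4	/* illustrative constant */

struct frag_q;		/* opaque here; queues hang off the per-CPU list */

/*
 * One LRU list and one memory counter per CPU: a CPU only charges
 * and evicts on its own list, so the shared-LRU-head contention
 * (and the read_lock -> write_lock upgrade it would force)
 * disappears by construction.
 */
struct frag_cpu_limit {
	int mem;			/* fragment memory charged on this CPU */
	struct frag_q *lru_head;	/* this CPU's private LRU list */
};

static struct frag_cpu_limit percpu[NR_CPUS];

static void frag_charge(int cpu, int bytes)
{
	percpu[cpu].mem += bytes;	/* no counter shared across CPUs */
}
```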
(And yes, I'm using thresh 4MB/3MB as my default setting now, but I'm
also experimenting with other thresh sizes.)
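For readers following along, the 4MB/3MB pair is the usual high/low
hysteresis; roughly (constant names made up for this sketch):

```c
#include <assert.h>

#define FRAG_HIGH_THRESH (4 * 1024 * 1024)	/* start evicting above this */
#define FRAG_LOW_THRESH  (3 * 1024 * 1024)	/* stop evicting below this */

static int frag_mem;	/* current fragment memory usage in bytes */

/* Eviction triggers only when usage crosses the high mark ... */
static int evict_needed(void) { return frag_mem > FRAG_HIGH_THRESH; }

/* ... but once started it runs usage down to the low mark, so the
 * evictor does not fire on every packet while hovering at the limit. */
static int evict_done(void) { return frag_mem < FRAG_LOW_THRESH; }
```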
P.s. Thank you, Eric, for being so persistent; it made me realize this
patch was not good. We can hopefully now move on to the other patches,
which fix the real scalability issues.
--Jesper
Thread overview: 57+ messages
2012-11-29 16:10 [net-next PATCH V2 0/9] net: fragmentation performance scalability on NUMA/SMP systems Jesper Dangaard Brouer
2012-11-29 16:11 ` [net-next PATCH V2 1/9] net: frag evictor, avoid killing warm frag queues Jesper Dangaard Brouer
2012-11-29 17:44 ` David Miller
2012-11-29 22:17 ` Jesper Dangaard Brouer
2012-11-29 23:01 ` Eric Dumazet
2012-11-30 10:04 ` Jesper Dangaard Brouer
2012-11-30 14:52 ` Eric Dumazet
2012-11-30 15:45 ` Jesper Dangaard Brouer
2012-11-30 16:37 ` Eric Dumazet
2012-11-30 21:37 ` Jesper Dangaard Brouer
2012-11-30 22:25 ` Eric Dumazet
2012-11-30 23:23 ` Jesper Dangaard Brouer
2012-11-30 23:47 ` Stephen Hemminger
2012-12-01 0:03 ` Eric Dumazet
2012-12-01 0:13 ` Stephen Hemminger
2012-11-30 23:58 ` Eric Dumazet
2012-12-04 13:30 ` [net-next PATCH V3-evictor] " Jesper Dangaard Brouer
2012-12-04 14:32 ` [net-next PATCH V3-evictor] net: frag evictor, avoid " David Laight
2012-12-04 14:47 ` [net-next PATCH V3-evictor] net: frag evictor, avoid " Eric Dumazet
2012-12-04 17:51 ` Jesper Dangaard Brouer
2012-12-05 9:24 ` Jesper Dangaard Brouer
2012-12-06 12:26 ` Jesper Dangaard Brouer [this message]
2012-12-06 12:32 ` Florian Westphal
2012-12-06 13:29 ` David Laight
2012-12-06 21:38 ` David Miller
2012-12-06 13:55 ` Jesper Dangaard Brouer
2012-12-06 14:47 ` Eric Dumazet
2012-12-06 15:23 ` Jesper Dangaard Brouer
2012-11-29 23:32 ` [net-next PATCH V2 1/9] " Eric Dumazet
2012-11-30 12:01 ` Jesper Dangaard Brouer
2012-11-30 14:57 ` Eric Dumazet
2012-11-29 16:11 ` [net-next PATCH V2 2/9] net: frag cache line adjust inet_frag_queue.net Jesper Dangaard Brouer
2012-11-29 16:12 ` [net-next PATCH V2 3/9] net: frag, move LRU list maintenance outside of rwlock Jesper Dangaard Brouer
2012-11-29 17:43 ` Eric Dumazet
2012-11-29 17:48 ` David Miller
2012-11-29 17:54 ` Eric Dumazet
2012-11-29 18:05 ` David Miller
2012-11-29 18:24 ` Eric Dumazet
2012-11-29 18:31 ` David Miller
2012-11-29 18:33 ` Eric Dumazet
2012-11-29 18:36 ` David Miller
2012-11-29 22:33 ` Jesper Dangaard Brouer
2012-11-29 16:12 ` [net-next PATCH V2 4/9] net: frag helper functions for mem limit tracking Jesper Dangaard Brouer
2012-11-29 16:13 ` [net-next PATCH V2 5/9] net: frag, per CPU resource, mem limit and LRU list accounting Jesper Dangaard Brouer
2012-11-29 17:06 ` Eric Dumazet
2012-11-29 17:31 ` David Miller
2012-12-03 14:02 ` Jesper Dangaard Brouer
2012-12-03 17:25 ` David Miller
2012-11-29 16:14 ` [net-next PATCH V2 6/9] net: frag, implement dynamic percpu alloc of frag_cpu_limit Jesper Dangaard Brouer
2012-11-29 16:15 ` [net-next PATCH V2 7/9] net: frag, move nqueues counter under LRU lock protection Jesper Dangaard Brouer
2012-11-29 16:15 ` [net-next PATCH V2 8/9] net: frag queue locking per hash bucket Jesper Dangaard Brouer
2012-11-29 17:08 ` Eric Dumazet
2012-11-30 12:55 ` Jesper Dangaard Brouer
2012-11-29 16:16 ` [net-next PATCH V2 9/9] net: increase frag queue hash size and cache-line Jesper Dangaard Brouer
2012-11-29 16:39 ` [net-next PATCH V2 9/9] net: increase frag queue hash size and cache-line David Laight
2012-11-29 16:55 ` [net-next PATCH V2 9/9] net: increase frag queue hash size and cache-line Eric Dumazet
2012-11-29 20:53 ` Jesper Dangaard Brouer