From: Steffen Klassert <steffen.klassert@secunet.com>
To: Dan Streetman <dan.streetman@canonical.com>
Cc: Dan Streetman <ddstreet@ieee.org>,
Jay Vosburgh <jay.vosburgh@canonical.com>,
<netdev@vger.kernel.org>
Subject: Re: xfrm4_garbage_collect reaching limit
Date: Wed, 30 Sep 2015 11:54:19 +0200
Message-ID: <20150930095419.GH7701@secunet.com>
In-Reply-To: <CAOZ2QJMFNRza8cLnB4Wu0TZCi4pQE0UL7Ch=+D7igMvjNjpONQ@mail.gmail.com>
On Mon, Sep 21, 2015 at 10:51:11AM -0400, Dan Streetman wrote:
> On Fri, Sep 18, 2015 at 1:00 AM, Dan Streetman <ddstreet@ieee.org> wrote:
> > On Wed, Sep 16, 2015 at 4:45 AM, Steffen Klassert
> > <steffen.klassert@secunet.com> wrote:
> >>
> >> What about the patch below? With this we are independent of the number
> >> of cpus. It should cover most, if not all, use cases.
> >
> > yep that works, thanks! I'll give it a test also, but I don't see how
> > it would fail.
>
> Yep, on a test setup that previously failed within several hours, it
> ran over the weekend successfully. Thanks!
>
> Tested-by: Dan Streetman <dan.streetman@canonical.com>
>
> >
> >>
> >> While we are at it, we could think about increasing the flowcache
> >> percpu limit. This value was chosen back in 2003, so maybe we could
> >> have more than 4k cache entries per cpu these days.
> >>
> >>
> >> Subject: [PATCH RFC] xfrm: Let the flowcache handle its size by default.
> >>
> >> The xfrm flowcache size is limited by the flowcache limit
> >> (4096 * number of online cpus) and the xfrm garbage collector
> >> threshold (2 * 32768), whichever is reached first. This means
> >> that we can hit the garbage collector limit only on systems
> >> with more than 16 cpus. On such systems we simply refuse
> >> new allocations if we reach the limit, so new flows are dropped.
> >> On systems with 16 or fewer cpus, we hit the flowcache limit.
> >> In this case, we shrink the flow cache instead of refusing new
> >> flows.
> >>
> >> We increase the xfrm garbage collector threshold to INT_MAX
> >> to get the same behaviour, independent of the number of cpus.
> >>
> >> The xfrm garbage collector threshold can still be set below
> >> the flowcache limit to reduce the memory usage of the flowcache.
> >>
> >> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
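As a minimal illustration of the arithmetic in the commit message above
(the constants come from that message; the program is only a sketch and
not part of the patch):

#include <stdio.h>

/*
 * Sketch only: shows which of the two limits from the commit message
 * is reached first for a given number of online cpus.
 */
#define FLOWCACHE_ENTRIES_PER_CPU	4096	/* flowcache limit per cpu */
#define XFRM_GC_THRESH			32768	/* old default; gc limit is 2 * this */

int main(void)
{
	int cpus;

	for (cpus = 1; cpus <= 32; cpus *= 2) {
		long flowcache_limit = (long)FLOWCACHE_ENTRIES_PER_CPU * cpus;
		long gc_limit = 2L * XFRM_GC_THRESH;

		printf("%2d cpus: %s hit first (flowcache %ld, gc %ld)\n",
		       cpus,
		       flowcache_limit <= gc_limit ?
				"flowcache limit (cache shrinks)" :
				"gc threshold (new flows dropped)",
		       flowcache_limit, gc_limit);
	}
	return 0;
}

With 16 cpus both limits coincide at 65536 entries; raising the garbage
collector threshold to INT_MAX leaves the flowcache limit as the only
effective bound, independent of the number of cpus.
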
I've applied this to ipsec-next now. It could be considered a fix too,
but we can still tweak the value via the sysctl in the meantime, so it
is better to test it a bit longer before it hits mainline.
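If the flowcache memory usage needs to be bounded in the meantime, the
threshold can be lowered again at runtime. A hypothetical example; the
sysctl path below is an assumption about the 2015-era interface
(net.ipv4.xfrm4_gc_thresh) and is not taken from this thread:

/*
 * Hypothetical helper: lower the IPv4 xfrm gc threshold at runtime by
 * writing to the assumed procfs path for net.ipv4.xfrm4_gc_thresh.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	const char *path = "/proc/sys/net/ipv4/xfrm4_gc_thresh";
	const char *value = argc > 1 ? argv[1] : "32768";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	fprintf(f, "%s\n", value);
	fclose(f);
	printf("wrote %s to %s\n", value, path);
	return EXIT_SUCCESS;
}
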
Thanks a lot for your work, Dan!
Thread overview: 10+ messages
2015-09-10 21:01 xfrm4_garbage_collect reaching limit Dan Streetman
2015-09-11 9:48 ` Steffen Klassert
2015-09-15 3:14 ` Dan Streetman
2015-09-16 8:45 ` Steffen Klassert
2015-09-18 4:23 ` David Miller
2015-09-18 4:49 ` Steffen Klassert
2015-09-18 5:00 ` Dan Streetman
2015-09-21 14:51 ` Dan Streetman
2015-09-30 9:54 ` Steffen Klassert [this message]
2015-09-21 14:52 ` Dan Streetman