netdev.vger.kernel.org archive mirror
From: David Miller <davem@davemloft.net>
To: fw@strlen.de
Cc: netdev@vger.kernel.org, tgraf@suug.ch
Subject: Re: [PATCH net] rhashtable: avoid large lock-array allocations
Date: Sun, 14 Aug 2016 21:13:18 -0700 (PDT)
Message-ID: <20160814.211318.898694274813674125.davem@davemloft.net>
In-Reply-To: <1470968023-14338-1-git-send-email-fw@strlen.de>

From: Florian Westphal <fw@strlen.de>
Date: Fri, 12 Aug 2016 04:13:43 +0200

> Sander reports the following splat after the netfilter nat bysrc table
> got converted to rhashtable:
> 
> swapper/0: page allocation failure: order:3, mode:0x2084020(GFP_ATOMIC|__GFP_COMP)
>  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.8.0-rc1 [..]
>  [<ffffffff811633ed>] warn_alloc_failed+0xdd/0x140
>  [<ffffffff811638b1>] __alloc_pages_nodemask+0x3e1/0xcf0
>  [<ffffffff811a72ed>] alloc_pages_current+0x8d/0x110
>  [<ffffffff8117cb7f>] kmalloc_order+0x1f/0x70
>  [<ffffffff811aec19>] __kmalloc+0x129/0x140
>  [<ffffffff8146d561>] bucket_table_alloc+0xc1/0x1d0
>  [<ffffffff8146da1d>] rhashtable_insert_rehash+0x5d/0xe0
>  [<ffffffff819fcfff>] nf_nat_setup_info+0x2ef/0x400
> 
> The failure happens when allocating the spinlock array.
> Even with GFP_KERNEL it's unlikely for such a large allocation
> to succeed.
> 
> Thomas Graf pointed me at inet_ehash_locks_alloc(), so in addition
> to adding NOWARN for atomic allocations, this also makes the
> bucket-lock-array sizing more conservative.
> 
> In commit 095dc8e0c3686 ("tcp: fix/cleanup inet_ehash_locks_alloc()"),
> Eric Dumazet says: "Budget 2 cache lines per cpu worth of 'spinlocks'".
> IOW, consider the size needed by a single spinlock when determining the
> number of locks per cpu.
> 
> Currently, rhashtable just allocates 128 locks per cpu, which is a
> factor of 4 more than what the inet hashtable uses with the same
> number of cpus.
> 
> For LOCKDEP, we now allocate far fewer locks than before (1 per cpu
> on my test box), so we no longer need to pretend we only have two cpus.
> 
> Some sizes (64-byte L1 cache line, 4 bytes per spinlock, numbers in bytes):
> 
> cpus:    1   2   4    8   16    32   64
> old:    1k  1k  4k   8k  16k   16k  16k
> new:   128 256 512   1k   2k    4k   8k
> 
> With 72-byte spinlocks (LOCKDEP):
> cpus:    1   2   4    8   16    32   64
> old:    9k 18k 18k  18k  18k   18k  18k
> new:    72 144 288  576  ~1k ~2.3k  ~4k
> 
> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Suggested-by: Thomas Graf <tgraf@suug.ch>
> Signed-off-by: Florian Westphal <fw@strlen.de>

Applied, thanks Florian.
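
For reference, a rough userspace sketch of the sizing arithmetic described
in the quoted patch: budget roughly two cache lines' worth of spinlocks per
cpu instead of a fixed 128 locks per cpu, and round the total up to a power
of two.  Everything below (constants, helper names, the main() driver) is an
assumption made up for illustration; it is not the rhashtable code itself,
which likely applies further caps (e.g. relative to the number of buckets)
that are omitted here.

#include <stdio.h>

#define L1_CACHE_BYTES	64	/* assumed cache-line size */

static unsigned int roundup_pow_of_two(unsigned int n)
{
	unsigned int r = 1;

	while (r < n)
		r <<= 1;
	return r;
}

/* ~2 cache lines of spinlocks per cpu, at least one lock per cpu */
static unsigned int nr_bucket_locks(unsigned int ncpus,
				    unsigned int spinlock_size)
{
	unsigned int per_cpu = (2 * L1_CACHE_BYTES) / spinlock_size;

	if (per_cpu == 0)
		per_cpu = 1;
	return roundup_pow_of_two(ncpus * per_cpu);
}

int main(void)
{
	unsigned int cpus;

	/* reproduce the "new" rows of the tables quoted above */
	for (cpus = 1; cpus <= 64; cpus <<= 1)
		printf("cpus=%2u  4-byte locks: %5u bytes  72-byte (LOCKDEP) locks: %5u bytes\n",
		       cpus,
		       nr_bucket_locks(cpus, 4) * 4,
		       nr_bucket_locks(cpus, 72) * 72);
	return 0;
}

With a 4-byte spinlock this budget works out to 32 locks per cpu (128, 256,
... 8k bytes for 1..64 cpus); with a 72-byte LOCKDEP spinlock it degrades to
a single lock per cpu, matching the second table above.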

Thread overview: 4+ messages
2016-08-12  2:13 [PATCH net] rhashtable: avoid large lock-array allocations Florian Westphal
2016-08-12  2:27 ` kbuild test robot
2016-08-12  2:31   ` Florian Westphal
2016-08-15  4:13 ` David Miller [this message]
