linux-mm.kvack.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: Christoph Lameter <cl@linux.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>,
	linux-mm@kvack.org, cgroups@vger.kernel.org
Subject: Re: cgroups: warning for metadata allocation with GFP_NOFAIL (was Re: folio_alloc_buffers() doing allocations > order 1 with GFP_NOFAIL)
Date: Fri, 10 Nov 2023 13:38:26 +0000	[thread overview]
Message-ID: <ZU4yUoiiJYzml0rS@casper.infradead.org> (raw)
In-Reply-To: <ZUqO2O9BXMo2/fA5@casper.infradead.org>

On Tue, Nov 07, 2023 at 07:24:08PM +0000, Matthew Wilcox wrote:
> On Mon, Nov 06, 2023 at 06:57:05PM -0800, Christoph Lameter wrote:
> > Right. Well, let's add the cgroup folks to this.
> > 
> > The code that simply uses the GFP_NOFAIL to allocate cgroup metadata using
> > an order > 1:
> > 
> > int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> > 				 gfp_t gfp, bool new_slab)
> > {
> > 	unsigned int objects = objs_per_slab(s, slab);
> > 	unsigned long memcg_data;
> > 	void *vec;
> > 
> > 	gfp &= ~OBJCGS_CLEAR_MASK;
> > 	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
> > 			   slab_nid(slab));
> 
> But, but but, why does this incur an allocation larger than PAGE_SIZE?
> 
> sizeof(void *) is 8.  We have N objects allocated from the slab.  I
> happen to know this is used for buffer_head, so:
> 
> buffer_head         1369   1560    104   39    1 : tunables    0    0    0 : slabdata     40     40      0
> 
> we get 39 objects per slab, and we're only allocating one page per slab.
> 39 * 8 is only 312 bytes.
> 
> Maybe Christoph is playing with min_slab_order or something, so we're
> getting 8 pages per slab.  That's still only 2496 bytes.  Why are we
> calling into the large kmalloc path?  What's really going on here?

Christoph?



Thread overview: 15+ messages
2023-11-01  0:13 folio_alloc_buffers() doing allocations > order 1 with GFP_NOFAIL Christoph Lameter (Ampere)
2023-11-01  8:08 ` Matthew Wilcox
2023-11-07  2:57   ` cgroups: warning for metadata allocation with GFP_NOFAIL (was Re: folio_alloc_buffers() doing allocations > order 1 with GFP_NOFAIL) Christoph Lameter
2023-11-07 18:05     ` Roman Gushchin
2023-11-07 18:18       ` Shakeel Butt
2023-11-08 10:33       ` Michal Hocko
2023-11-09  6:37         ` Shakeel Butt
2023-11-09 17:36           ` Roman Gushchin
2023-11-07 19:24     ` Matthew Wilcox
2023-11-07 21:33       ` Roman Gushchin
2023-11-07 21:37         ` Matthew Wilcox
2023-11-10 13:38       ` Matthew Wilcox [this message]
2023-11-13 19:48         ` Christoph Lameter
2023-11-13 22:48           ` Matthew Wilcox
2023-11-14 17:29             ` Roman Gushchin
