patches.lists.linux.dev archive mirror
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Matthew Wilcox <willy@infradead.org>,
	 Suren Baghdasaryan <surenb@google.com>,
	Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>,
	 Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	 Roman Gushchin <roman.gushchin@linux.dev>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 patches@lists.linux.dev
Subject: Re: [RFC v2 2/7] mm, slub: add opt-in slub_percpu_array
Date: Wed, 29 Nov 2023 09:46:38 +0900	[thread overview]
Message-ID: <CAB=+i9SQBhYBjOSHDeqgJJ5YARqZCS3oUUutzr4m+2V+ZvySpg@mail.gmail.com> (raw)
In-Reply-To: <077e8e97-e88f-0b8e-2788-4031458be090@suse.cz>

On Wed, Nov 29, 2023 at 2:37 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 8/21/23 16:57, Hyeonggon Yoo wrote:
> > Hi,
> >
> > On Fri, Aug 11, 2023 at 1:36 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> Oops, looks like I forgot reply, sorry (preparing v3 now).

It's fine, you were busy removing SLAB :)
Thanks for replying.

> >
> >>  /*
> >>   * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
> >>   * have the fastpath folded into their functions. So no function call
> >> @@ -3465,7 +3564,11 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
> >>         if (unlikely(object))
> >>                 goto out;
> >>
> >> -       object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
> >> +       if (s->cpu_array)
> >> +               object = alloc_from_pca(s);
> >> +
> >> +       if (!object)
> >> +               object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
> >>
> >>         maybe_wipe_obj_freeptr(s, object);
> >>         init = slab_want_init_on_alloc(gfpflags, s);
> >> @@ -3715,6 +3818,34 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
> >>         discard_slab(s, slab);
> >>  }
> >
> >>  #ifndef CONFIG_SLUB_TINY
> >>  /*
> >>   * Fastpath with forced inlining to produce a kfree and kmem_cache_free that
> >> @@ -3740,6 +3871,11 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
> >>         unsigned long tid;
> >>         void **freelist;
> >>
> >> +       if (s->cpu_array && cnt == 1) {
> >> +               if (free_to_pca(s, head))
> >> +                       return;
> >> +       }
> >> +
> >>  redo:
> >>         /*
> >>          * Determine the currently cpus per cpu slab.
> >> @@ -3793,6 +3929,11 @@ static void do_slab_free(struct kmem_cache *s,
> >>  {
> >>         void *tail_obj = tail ? : head;
> >>
> >> +       if (s->cpu_array && cnt == 1) {
> >> +               if (free_to_pca(s, head))
> >> +                       return;
> >> +       }
> >> +
> >>         __slab_free(s, slab, head, tail_obj, cnt, addr);
> >>  }
> >>  #endif /* CONFIG_SLUB_TINY */
> >
> > Is this functionality needed for SLUB_TINY?
>
> Due to the prefill semantics, I think it has to be, even in TINY, or we
> risk running out of memory reserves. Also later I want to investigate
> extending this approach for supporting allocations in very constrained
> contexts (NMI) so e.g. bpf doesn't have to reimplement the slab allocator,
> and that would also not be good to limit to !SLUB_TINY.

I've got the point, thanks for the explanation!

> >> @@ -4060,6 +4201,45 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> >>  }
> >>  EXPORT_SYMBOL(kmem_cache_alloc_bulk);
> >>
> >> +int kmem_cache_prefill_percpu_array(struct kmem_cache *s, unsigned int count,
> >> +               gfp_t gfp)
> >> +{
> >> +       struct slub_percpu_array *pca;
> >> +       void *objects[32];
> >> +       unsigned int used;
> >> +       unsigned int allocated;
> >> +
> >> +       if (!s->cpu_array)
> >> +               return -EINVAL;
> >> +
> >> +       /* racy but we don't care */
> >> +       pca = raw_cpu_ptr(s->cpu_array);
> >> +
> >> +       used = READ_ONCE(pca->used);
> >
> > Hmm for the prefill to be meaningful,
> > remote allocation should be possible, right?
>
> Remote in what sense?

TL;DR) What I wanted to ask was: how does pre-filling a number of
objects work when the pre-filled objects are not shared between CPUs?

IIUC the prefill opportunistically fills the per-CPU array, so that
(hopefully) some objects are already sitting in it when the caller
later allocates.

Let's say CPU X calls kmem_cache_prefill_percpu_array(32) and all 32
objects end up in CPU X's array. If CPU Y then cannot allocate from
CPU X's array (which is what I referred to as "remote allocation"),
don't the semantics differ from the maple tree's perspective? Before,
the preallocated objects could be used from any CPU, but now they
cannot.
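
To make the scenario concrete, here is a minimal hypothetical caller
sketch (not from this series; prefill_then_alloc and my_cache are
made-up names, and I'm assuming a negative return from the prefill
call means failure):

#include <linux/slab.h>

/* Hypothetical caller, only to illustrate the scenario above. */
static void *prefill_then_alloc(struct kmem_cache *my_cache)
{
	void *obj;

	/* Runs on CPU X: fills CPU X's percpu array with up to 32 objects. */
	if (kmem_cache_prefill_percpu_array(my_cache, 32, GFP_KERNEL) < 0)
		return NULL;

	/*
	 * If the task is migrated to CPU Y at this point, the prefilled
	 * objects stay behind in CPU X's array.
	 */

	/*
	 * So this allocation consults CPU Y's (possibly empty) array and
	 * may fall back to the regular slow path, whereas nodes
	 * preallocated into the ma_state could be consumed from any CPU.
	 */
	obj = kmem_cache_alloc(my_cache, GFP_ATOMIC);

	return obj;
}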

Thanks!

--
Hyeonggon

Thread overview: 13+ messages
2023-08-10 16:36 [RFC v2 0/7] SLUB percpu array caches and maple tree nodes Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 1/7] mm, slub: fix bulk alloc and free stats Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 2/7] mm, slub: add opt-in slub_percpu_array Vlastimil Babka
2023-08-21 14:57   ` Hyeonggon Yoo
2023-11-28 17:37     ` Vlastimil Babka
2023-11-29  0:46       ` Hyeonggon Yoo [this message]
2023-11-29 13:25         ` Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 3/7] maple_tree: use slub percpu array Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 4/7] maple_tree: avoid bulk alloc/free to use percpu array more Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 5/7] maple_tree: Remove MA_STATE_PREALLOC Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 6/7] maple_tree: replace preallocation with slub percpu array prefill Vlastimil Babka
2023-08-10 16:36 ` [RFC v2 7/7] tools: Add SLUB percpu array functions for testing Vlastimil Babka
2023-08-18 16:44 ` [RFC v2 0/7] SLUB percpu array caches and maple tree nodes Suren Baghdasaryan
