From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	Christoph Lameter <cl@gentwo.org>,
	David Rientjes <rientjes@google.com>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Harry Yoo <harry.yoo@oracle.com>,
	Uladzislau Rezki <urezki@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	rcu@vger.kernel.org, maple-tree@lists.infradead.org
Subject: Re: [PATCH v5 13/14] maple_tree: Add single node allocation support to maple state
Date: Tue, 26 Aug 2025 11:10:49 -0400
Message-ID: <6rvzsp6i6p6kc63acbg7hmqlsfx5htvyg5rax3llrauwwyzg4e@f436k2inorfe>
In-Reply-To: <CAJuCfpEjaw+4Ay-Yx=unHev+M4M9FmNmz_PSYmtsFn3EToLBxg@mail.gmail.com>

* Suren Baghdasaryan <surenb@google.com> [250822 16:25]:
> On Wed, Jul 23, 2025 at 6:35 AM Vlastimil Babka <vbabka@suse.cz> wrote:
> >
> > From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
> >
> > The fast path through a write will require replacing a single node in
> > the tree.  Using a sheaf (32 nodes) is too heavy for the fast path, so
> > special case the node store operation by just allocating one node in the
> > maple state.
> >
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
> > Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> > ---
> >  include/linux/maple_tree.h |  4 +++-
> >  lib/maple_tree.c           | 47 ++++++++++++++++++++++++++++++++++++++++------
> >  2 files changed, 44 insertions(+), 7 deletions(-)
> >
> > diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
> > index 3cf1ae9dde7ce43fa20ae400c01fefad048c302e..61eb5e7d09ad0133978e3ac4b2af66710421e769 100644
> > --- a/include/linux/maple_tree.h
> > +++ b/include/linux/maple_tree.h
> > @@ -443,6 +443,7 @@ struct ma_state {
> >         unsigned long min;              /* The minimum index of this node - implied pivot min */
> >         unsigned long max;              /* The maximum index of this node - implied pivot max */
> >         struct slab_sheaf *sheaf;       /* Allocated nodes for this operation */
> > +       struct maple_node *alloc;       /* allocated nodes */
> >         unsigned long node_request;
> >         enum maple_status status;       /* The status of the state (active, start, none, etc) */
> >         unsigned char depth;            /* depth of tree descent during write */
> > @@ -491,8 +492,9 @@ struct ma_wr_state {
> >                 .status = ma_start,                                     \
> >                 .min = 0,                                               \
> >                 .max = ULONG_MAX,                                       \
> > -               .node_request= 0,                                       \
> >                 .sheaf = NULL,                                          \
> > +               .alloc = NULL,                                          \
> > +               .node_request= 0,                                       \
> >                 .mas_flags = 0,                                         \
> >                 .store_type = wr_invalid,                               \
> >         }
> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > index 3c3c14a76d98ded3b619c178d64099b464a2ca23..9aa782b1497f224e7366ebbd65f997523ee0c8ab 100644
> > --- a/lib/maple_tree.c
> > +++ b/lib/maple_tree.c
> > @@ -1101,16 +1101,23 @@ static int mas_ascend(struct ma_state *mas)
> >   *
> >   * Return: A pointer to a maple node.
> >   */
> > -static inline struct maple_node *mas_pop_node(struct ma_state *mas)
> > +static __always_inline struct maple_node *mas_pop_node(struct ma_state *mas)
> >  {
> >         struct maple_node *ret;
> >
> > +       if (mas->alloc) {
> > +               ret = mas->alloc;
> > +               mas->alloc = NULL;
> > +               goto out;
> > +       }
> > +
> >         if (WARN_ON_ONCE(!mas->sheaf))
> >                 return NULL;
> >
> >         ret = kmem_cache_alloc_from_sheaf(maple_node_cache, GFP_NOWAIT, mas->sheaf);
> > -       memset(ret, 0, sizeof(*ret));
> >
> > +out:
> > +       memset(ret, 0, sizeof(*ret));
> >         return ret;
> >  }
> >
> > @@ -1121,9 +1128,34 @@ static inline struct maple_node *mas_pop_node(struct ma_state *mas)
> >   */
> >  static inline void mas_alloc_nodes(struct ma_state *mas, gfp_t gfp)
> >  {
> > -       if (unlikely(mas->sheaf)) {
> > -               unsigned long refill = mas->node_request;
> > +       if (!mas->node_request)
> > +               return;
> > +
> > +       if (mas->node_request == 1) {
> > +               if (mas->sheaf)
> > +                       goto use_sheaf;
> > +
> > +               if (mas->alloc)
> > +                       return;
> >
> > +               mas->alloc = mt_alloc_one(gfp);
> > +               if (!mas->alloc)
> > +                       goto error;
> > +
> > +               mas->node_request = 0;
> > +               return;
> > +       }
> > +
> > +use_sheaf:
> > +       if (unlikely(mas->alloc)) {
> 
> When would this condition happen?


This would be the case if we have one node already allocated and then
request more than one node.  That is, a chained request for nodes that
ends up with mas->alloc set while also requesting a sheaf.

> Do we really need to free mas->alloc
> here or it can be reused for the next 1-node allocation?

Most calls end in mas_destroy() so that won't happen today.

We could reduce the number of nodes requested from the sheaf and let
mas_pop_node() find mas->alloc first and use that.
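
An untested sketch of what that alternative could look like in
mas_alloc_nodes() (not what this patch does):

use_sheaf:
	if (mas->alloc) {
		/*
		 * Keep the single node; mas_pop_node() consumes
		 * mas->alloc before touching the sheaf, so take one
		 * fewer node from the sheaf instead of freeing it.
		 */
		mas->node_request--;
		if (!mas->node_request)
			return;
	}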

But remember, we only get into this situation when a caller did a
mas_preallocate(), then figured out it needed to do something else
(error recovery, or the vma flags changed and now it can merge..), and
will now need additional nodes.  Since this is a rare case, I figured
just freeing the node was the safest thing.
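
For reference, the kind of caller pattern that gets here (hypothetical
example; the ranges and the vma entry are made up):

	MA_STATE(mas, &mm->mm_mt, addr, addr + len - 1);

	if (mas_preallocate(&mas, vma, GFP_KERNEL))	/* may set mas.alloc */
		return -ENOMEM;

	/*
	 * Plans change: the range can now merge, so widen it and
	 * preallocate again.  The larger request goes to a sheaf and
	 * the single preallocated node is freed.
	 */
	mas_set_range(&mas, merged_start, merged_end);
	if (mas_preallocate(&mas, vma, GFP_KERNEL))
		return -ENOMEM;

	mas_store_prealloc(&mas, vma);	/* ends in mas_destroy() */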


> > +               mt_free_one(mas->alloc);
> > +               mas->alloc = NULL;
> > +       }
> > +
> > +       if (mas->sheaf) {
> > +               unsigned long refill;
> > +
> > +               refill = mas->node_request;
> >                 if(kmem_cache_sheaf_size(mas->sheaf) >= refill) {
> >                         mas->node_request = 0;
> >                         return;
> > @@ -5386,8 +5418,11 @@ void mas_destroy(struct ma_state *mas)
> >         mas->node_request = 0;
> >         if (mas->sheaf)
> >                 mt_return_sheaf(mas->sheaf);
> > -
> >         mas->sheaf = NULL;
> > +
> > +       if (mas->alloc)
> > +               mt_free_one(mas->alloc);
> > +       mas->alloc = NULL;
> >  }
> >  EXPORT_SYMBOL_GPL(mas_destroy);
> >
> > @@ -6074,7 +6109,7 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp)
> >                 mas_alloc_nodes(mas, gfp);
> >         }
> >
> > -       if (!mas->sheaf)
> > +       if (!mas->sheaf && !mas->alloc)
> >                 return false;
> >
> >         mas->status = ma_start;
> >
> > --
> > 2.50.1
> >

