From: Wenchao Hao <haowenchao22@gmail.com>
To: Albert Ou <aou@eecs.berkeley.edu>,
Alexandre Ghiti <alex@ghiti.fr>,
Andrew Morton <akpm@linux-foundation.org>,
Barry Song <21cnbao@gmail.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-riscv@lists.infradead.org, Minchan Kim <minchan@kernel.org>,
Palmer Dabbelt <palmer@dabbelt.com>,
Paul Walmsley <pjw@kernel.org>,
Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Wenchao Hao <haowenchao22@gmail.com>,
Xueyuan Chen <xueyuan.chen21@gmail.com>,
Wenchao Hao <haowenchao@xiaomi.com>
Subject: [RFC PATCH 3/3] mm/zsmalloc: drop class lock before freeing zspage
Date: Fri, 8 May 2026 14:19:10 +0800 [thread overview]
Message-ID: <20260508061910.3882831-4-haowenchao@xiaomi.com> (raw)
In-Reply-To: <20260508061910.3882831-1-haowenchao@xiaomi.com>
From: Xueyuan Chen <xueyuan.chen21@gmail.com>
Currently in zs_free(), the class->lock is held until the zspage is
completely freed and the counters are updated. However, freeing pages back
to the buddy allocator requires acquiring the zone lock.
Under heavy memory pressure, zone lock contention can be severe. When this
happens, the CPU holding the class->lock will stall waiting for the zone
lock, thereby blocking all other CPUs attempting to acquire the same
class->lock.
This patch shrinks the class->lock critical section to reduce lock
contention. By moving the actual page freeing outside the class->lock,
it improves the scalability of concurrent zs_free() calls.
Testing on the RADXA O6 platform shows that with 12 CPUs concurrently
performing zs_free() operations, the execution time is reduced by 20%.
Signed-off-by: Xueyuan Chen <xueyuan.chen21@gmail.com>
Signed-off-by: Wenchao Hao <haowenchao@xiaomi.com>
---
mm/zsmalloc.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 47ec0414ce9e..4b01fb215b19 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -880,13 +880,10 @@ static int trylock_zspage(struct zspage *zspage)
return 0;
}
-static void __free_zspage(struct zs_pool *pool, struct size_class *class,
- struct zspage *zspage)
+static inline void __free_zspage_lockless(struct zs_pool *pool, struct zspage *zspage)
{
struct zpdesc *zpdesc, *next;
- assert_spin_locked(&class->lock);
-
VM_BUG_ON(get_zspage_inuse(zspage));
VM_BUG_ON(zspage->fullness != ZS_INUSE_RATIO_0);
@@ -902,7 +899,13 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
} while (zpdesc != NULL);
cache_free_zspage(zspage);
+}
+static void __free_zspage(struct zs_pool *pool, struct size_class *class,
+ struct zspage *zspage)
+{
+ assert_spin_locked(&class->lock);
+ __free_zspage_lockless(pool, zspage);
class_stat_sub(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage);
atomic_long_sub(class->pages_per_zspage, &pool->pages_allocated);
}
@@ -1467,6 +1470,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
unsigned long obj;
struct size_class *class;
int fullness;
+ struct zspage *zspage_to_free = NULL;
if (IS_ERR_OR_NULL((void *)handle))
return;
@@ -1502,10 +1506,22 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
obj_free(class->size, obj);
fullness = fix_fullness_group(class, zspage);
- if (fullness == ZS_INUSE_RATIO_0)
- free_zspage(pool, class, zspage);
+ if (fullness == ZS_INUSE_RATIO_0) {
+ if (trylock_zspage(zspage)) {
+ remove_zspage(class, zspage);
+ class_stat_sub(class, ZS_OBJS_ALLOCATED,
+ class->objs_per_zspage);
+ zspage_to_free = zspage;
+ } else
+ kick_deferred_free(pool);
+ }
spin_unlock(&class->lock);
+
+ if (likely(zspage_to_free)) {
+ __free_zspage_lockless(pool, zspage_to_free);
+ atomic_long_sub(class->pages_per_zspage, &pool->pages_allocated);
+ }
cache_free_handle(handle);
}
EXPORT_SYMBOL_GPL(zs_free);
--
2.34.1
Thread overview: 4+ messages
2026-05-08 6:19 [RFC PATCH 0/3] mm/zsmalloc: reduce lock contention in zs_free() Wenchao Hao
2026-05-08 6:19 ` [RFC PATCH 1/3] mm/zsmalloc: encode class index in obj value for lockless class lookup Wenchao Hao
2026-05-08 6:19 ` [RFC PATCH 2/3] mm/zsmalloc: remove pool->lock from zs_free on 64-bit systems Wenchao Hao
2026-05-08 6:19 ` Wenchao Hao [this message]