public inbox for mm-commits@vger.kernel.org
* [merged mm-nonmm-stable] lib-list_sort-remove-dummy-cmp-calls-to-speed-up-merge_final.patch removed from -mm tree
@ 2026-04-03  6:42 Andrew Morton
From: Andrew Morton @ 2026-04-03  6:42 UTC
  To: mm-commits, richard, marscheng, jserv, hch, eleanor15x,
	chengzhihao1, visitorckw, akpm


The quilt patch titled
     Subject: lib/list_sort: remove dummy cmp() calls to speed up merge_final()
has been removed from the -mm tree.  Its filename was
     lib-list_sort-remove-dummy-cmp-calls-to-speed-up-merge_final.patch

This patch was dropped because it was merged into the mm-nonmm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Kuan-Wei Chiu <visitorckw@gmail.com>
Subject: lib/list_sort: remove dummy cmp() calls to speed up merge_final()
Date: Fri, 20 Mar 2026 18:09:38 +0000

Historically, list_sort() implemented a hack in merge_final():
    if (unlikely(!++count))
        cmp(priv, b, b);

This was introduced 16 years ago in commit 835cc0c8477f ("lib: more
scalable list_sort()") so that callers could periodically invoke
cond_resched() within their comparison functions when merging highly
unbalanced lists.

An audit of the kernel tree reveals that fs/ubifs/ was the sole user of
this mechanism.  Recent discussion and inspection by Richard Weinberger
confirm that UBIFS lists are strictly bounded in size (a few thousand
elements at most), so UBIFS does not rely on these dummy callbacks to
prevent soft lockups.

For the vast majority of list_sort() users (such as block layer I/O
schedulers and file systems), this hack results in completely wasted
function calls.  In the worst case (merging an already sorted list, where
'a' is exhausted quickly and the remainder loop walks all of 'b'), the u8
counter wraps every 256 iterations, resulting in approximately (N/2)/256
unnecessary cmp() invocations.

Remove the dummy cmp(priv, b, b) fallback from merge_final().  This saves
unnecessary function calls, avoids branching overhead in the tight loop,
and slightly speeds up the final merge step for all generic list_sort()
users.

[akpm@linux-foundation.org: remove now-unused local]
Link: https://lkml.kernel.org/r/20260320180938.1827148-3-visitorckw@gmail.com
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Ching-Chun (Jim) Huang <jserv@ccns.ncku.edu.tw>
Cc: Mars Cheng <marscheng@google.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Yu-Chun Lin <eleanor15x@gmail.com>
Cc: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 lib/list_sort.c |   10 ----------
 1 file changed, 10 deletions(-)

--- a/lib/list_sort.c~lib-list_sort-remove-dummy-cmp-calls-to-speed-up-merge_final
+++ a/lib/list_sort.c
@@ -50,7 +50,6 @@ static void merge_final(void *priv, list
 			struct list_head *a, struct list_head *b)
 {
 	struct list_head *tail = head;
-	u8 count = 0;
 
 	for (;;) {
 		/* if equal, take 'a' -- important for sort stability */
@@ -76,15 +75,6 @@ static void merge_final(void *priv, list
 	/* Finish linking remainder of list b on to tail */
 	tail->next = b;
 	do {
-		/*
-		 * If the merge is highly unbalanced (e.g. the input is
-		 * already sorted), this loop may run many iterations.
-		 * Continue callbacks to the client even though no
-		 * element comparison is needed, so the client's cmp()
-		 * routine can invoke cond_resched() periodically.
-		 */
-		if (unlikely(!++count))
-			cmp(priv, b, b);
 		b->prev = tail;
 		tail = b;
 		b = b->next;
_

Patches currently in -mm which might be from visitorckw@gmail.com are


