public inbox for linux-kernel@vger.kernel.org
* [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
@ 2026-03-15 19:39 Kuan-Wei Chiu
  2026-03-16  7:25 ` Richard Weinberger
  2026-03-17  4:05 ` Zhihao Cheng
  0 siblings, 2 replies; 12+ messages in thread
From: Kuan-Wei Chiu @ 2026-03-15 19:39 UTC (permalink / raw)
  To: richard, akpm
  Cc: chengzhihao1, jserv, eleanor15x, marscheng, linux-mtd,
	linux-kernel, Kuan-Wei Chiu

Historically, list_sort() implemented a hack in merge_final():
    if (unlikely(!++count))
        cmp(priv, b, b);

This was designed specifically so that callers could periodically
invoke cond_resched() within their comparison functions when merging
highly unbalanced lists.

However, an audit of the kernel tree reveals that only fs/ubifs/ relies
on this mechanism. For the vast majority of list_sort() users (such as
block layer IO schedulers and file systems), this results in completely
wasted function calls. In the worst-case scenario (merging an already
sorted list where 'a' is exhausted quickly), this results in
approximately (N/2)/256 unnecessary cmp() calls.

To clean up this API while ensuring behavior compatibility:
1. Introduce list_sort_nonatomic(), which explicitly calls
   cond_resched() internally when count overflows.
2. Remove the dummy cmp(priv, b, b) fallback for standard list_sort(),
   saving unnecessary function calls and improving determinism.
3. Convert the sole user (fs/ubifs/) to the new API.

Note that ubifs still maintains cond_resched() inside its own
comparison functions. This patch does not alter the frequency or timing
of those scheduling points, guaranteeing no regressions for ubifs,
while benefiting all other kernel users.

Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
---
 fs/ubifs/gc.c             |   4 +-
 fs/ubifs/replay.c         |   2 +-
 include/linux/list_sort.h |   3 +
 lib/list_sort.c           | 166 +++++++++++++++++++++-----------------
 4 files changed, 100 insertions(+), 75 deletions(-)

diff --git a/fs/ubifs/gc.c b/fs/ubifs/gc.c
index 0bf08b7755b8..dcc5742210b0 100644
--- a/fs/ubifs/gc.c
+++ b/fs/ubifs/gc.c
@@ -277,8 +277,8 @@ static int sort_nodes(struct ubifs_info *c, struct ubifs_scan_leb *sleb,
 	}
 
 	/* Sort data and non-data nodes */
-	list_sort(c, &sleb->nodes, &data_nodes_cmp);
-	list_sort(c, nondata, &nondata_nodes_cmp);
+	list_sort_nonatomic(c, &sleb->nodes, &data_nodes_cmp);
+	list_sort_nonatomic(c, nondata, &nondata_nodes_cmp);
 
 	err = dbg_check_data_nodes_order(c, &sleb->nodes);
 	if (err)
diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c
index a9a568f4a868..ad03dd106a54 100644
--- a/fs/ubifs/replay.c
+++ b/fs/ubifs/replay.c
@@ -329,7 +329,7 @@ static int apply_replay_list(struct ubifs_info *c)
 	struct replay_entry *r;
 	int err;
 
-	list_sort(c, &c->replay_list, &replay_entries_cmp);
+	list_sort_nonatomic(c, &c->replay_list, &replay_entries_cmp);
 
 	list_for_each_entry(r, &c->replay_list, list) {
 		cond_resched();
diff --git a/include/linux/list_sort.h b/include/linux/list_sort.h
index 453105f74e05..f7af29073d48 100644
--- a/include/linux/list_sort.h
+++ b/include/linux/list_sort.h
@@ -11,4 +11,7 @@ typedef int __attribute__((nonnull(2,3))) (*list_cmp_func_t)(void *,
 
 __attribute__((nonnull(2,3)))
 void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp);
+
+__attribute__((nonnull(2, 3)))
+void list_sort_nonatomic(void *priv, struct list_head *head, list_cmp_func_t cmp);
 #endif
diff --git a/lib/list_sort.c b/lib/list_sort.c
index a310ecb7ccc0..788bfc26cf7b 100644
--- a/lib/list_sort.c
+++ b/lib/list_sort.c
@@ -3,6 +3,7 @@
 #include <linux/export.h>
 #include <linux/list_sort.h>
 #include <linux/list.h>
+#include <linux/sched.h>
 
 /*
  * Returns a list organized in an intermediate format suited
@@ -47,7 +48,7 @@ static struct list_head *merge(void *priv, list_cmp_func_t cmp,
  */
 __attribute__((nonnull(2,3,4,5)))
 static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
-			struct list_head *a, struct list_head *b)
+			struct list_head *a, struct list_head *b, bool may_schedule)
 {
 	struct list_head *tail = head;
 	u8 count = 0;
@@ -79,12 +80,11 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
 		/*
 		 * If the merge is highly unbalanced (e.g. the input is
 		 * already sorted), this loop may run many iterations.
-		 * Continue callbacks to the client even though no
-		 * element comparison is needed, so the client's cmp()
-		 * routine can invoke cond_resched() periodically.
+		 * If may_schedule is true, periodically invoke cond_resched()
+		 * to avoid soft lockups.
 		 */
-		if (unlikely(!++count))
-			cmp(priv, b, b);
+		if (may_schedule && unlikely(!++count))
+			cond_resched();
 		b->prev = tail;
 		tail = b;
 		b = b->next;
@@ -95,6 +95,75 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
 	head->prev = tail;
 }
 
+static void __list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp, bool may_schedule)
+{
+	struct list_head *list = head->next, *pending = NULL;
+	size_t count = 0;	/* Count of pending */
+
+	if (list == head->prev)	/* Zero or one elements */
+		return;
+
+	/* Convert to a null-terminated singly-linked list. */
+	head->prev->next = NULL;
+
+	/*
+	 * Data structure invariants:
+	 * - All lists are singly linked and null-terminated; prev
+	 *   pointers are not maintained.
+	 * - pending is a prev-linked "list of lists" of sorted
+	 *   sublists awaiting further merging.
+	 * - Each of the sorted sublists is power-of-two in size.
+	 * - Sublists are sorted by size and age, smallest & newest at front.
+	 * - There are zero to two sublists of each size.
+	 * - A pair of pending sublists are merged as soon as the number
+	 *   of following pending elements equals their size (i.e.
+	 *   each time count reaches an odd multiple of that size).
+	 *   That ensures each later final merge will be at worst 2:1.
+	 * - Each round consists of:
+	 *   - Merging the two sublists selected by the highest bit
+	 *     which flips when count is incremented, and
+	 *   - Adding an element from the input as a size-1 sublist.
+	 */
+	do {
+		size_t bits;
+		struct list_head **tail = &pending;
+
+		/* Find the least-significant clear bit in count */
+		for (bits = count; bits & 1; bits >>= 1)
+			tail = &(*tail)->prev;
+		/* Do the indicated merge */
+		if (likely(bits)) {
+			struct list_head *a = *tail, *b = a->prev;
+
+			a = merge(priv, cmp, b, a);
+			/* Install the merged result in place of the inputs */
+			a->prev = b->prev;
+			*tail = a;
+		}
+
+		/* Move one element from input list to pending */
+		list->prev = pending;
+		pending = list;
+		list = list->next;
+		pending->next = NULL;
+		count++;
+	} while (list);
+
+	/* End of input; merge together all the pending lists. */
+	list = pending;
+	pending = pending->prev;
+	for (;;) {
+		struct list_head *next = pending->prev;
+
+		if (!next)
+			break;
+		list = merge(priv, cmp, pending, list);
+		pending = next;
+	}
+	/* The final merge, rebuilding prev links */
+	merge_final(priv, cmp, head, pending, list, may_schedule);
+}
+
 /**
  * list_sort - sort a list
  * @priv: private data, opaque to list_sort(), passed to @cmp
@@ -185,73 +254,26 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
  * of size 2^k varies from 2^(k-1) (cases 3 and 5 when x == 0) to
  * 2^(k+1) - 1 (second merge of case 5 when x == 2^(k-1) - 1).
  */
-__attribute__((nonnull(2,3)))
+__attribute__((nonnull(2, 3)))
 void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp)
 {
-	struct list_head *list = head->next, *pending = NULL;
-	size_t count = 0;	/* Count of pending */
-
-	if (list == head->prev)	/* Zero or one elements */
-		return;
-
-	/* Convert to a null-terminated singly-linked list. */
-	head->prev->next = NULL;
-
-	/*
-	 * Data structure invariants:
-	 * - All lists are singly linked and null-terminated; prev
-	 *   pointers are not maintained.
-	 * - pending is a prev-linked "list of lists" of sorted
-	 *   sublists awaiting further merging.
-	 * - Each of the sorted sublists is power-of-two in size.
-	 * - Sublists are sorted by size and age, smallest & newest at front.
-	 * - There are zero to two sublists of each size.
-	 * - A pair of pending sublists are merged as soon as the number
-	 *   of following pending elements equals their size (i.e.
-	 *   each time count reaches an odd multiple of that size).
-	 *   That ensures each later final merge will be at worst 2:1.
-	 * - Each round consists of:
-	 *   - Merging the two sublists selected by the highest bit
-	 *     which flips when count is incremented, and
-	 *   - Adding an element from the input as a size-1 sublist.
-	 */
-	do {
-		size_t bits;
-		struct list_head **tail = &pending;
-
-		/* Find the least-significant clear bit in count */
-		for (bits = count; bits & 1; bits >>= 1)
-			tail = &(*tail)->prev;
-		/* Do the indicated merge */
-		if (likely(bits)) {
-			struct list_head *a = *tail, *b = a->prev;
-
-			a = merge(priv, cmp, b, a);
-			/* Install the merged result in place of the inputs */
-			a->prev = b->prev;
-			*tail = a;
-		}
-
-		/* Move one element from input list to pending */
-		list->prev = pending;
-		pending = list;
-		list = list->next;
-		pending->next = NULL;
-		count++;
-	} while (list);
-
-	/* End of input; merge together all the pending lists. */
-	list = pending;
-	pending = pending->prev;
-	for (;;) {
-		struct list_head *next = pending->prev;
-
-		if (!next)
-			break;
-		list = merge(priv, cmp, pending, list);
-		pending = next;
-	}
-	/* The final merge, rebuilding prev links */
-	merge_final(priv, cmp, head, pending, list);
+	__list_sort(priv, head, cmp, false);
 }
 EXPORT_SYMBOL(list_sort);
+
+/**
+ * list_sort_nonatomic - sort a list with conditional rescheduling
+ * @priv: private data, opaque to list_sort(), passed to @cmp
+ * @head: the list to sort
+ * @cmp: the elements comparison function
+ *
+ * This variant of list_sort() periodically invokes cond_resched()
+ * during the sorting process. It should be used for sorting very
+ * long lists where the operation might otherwise cause soft lockups.
+ */
+__attribute__((nonnull(2, 3)))
+void list_sort_nonatomic(void *priv, struct list_head *head, list_cmp_func_t cmp)
+{
+	__list_sort(priv, head, cmp, true);
+}
+EXPORT_SYMBOL(list_sort_nonatomic);
-- 
2.53.0.851.ga537e3e6e9-goog


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-15 19:39 [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls Kuan-Wei Chiu
@ 2026-03-16  7:25 ` Richard Weinberger
  2026-03-16 18:04   ` Kuan-Wei Chiu
  2026-03-17  4:05 ` Zhihao Cheng
  1 sibling, 1 reply; 12+ messages in thread
From: Richard Weinberger @ 2026-03-16  7:25 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: Andrew Morton, chengzhihao1, jserv, eleanor15x, marscheng,
	linux-mtd, linux-kernel

----- Original Message -----
> From: "Kuan-Wei Chiu" <visitorckw@gmail.com>
> Historically, list_sort() implemented a hack in merge_final():
>    if (unlikely(!++count))
>        cmp(priv, b, b);
> 
> This was designed specifically so that callers could periodically
> invoke cond_resched() within their comparison functions when merging
> highly unbalanced lists.
> 
> However, an audit of the kernel tree reveals that only fs/ubifs/ relies
> on this mechanism. For the vast majority of list_sort() users (such as
> block layer IO schedulers and file systems), this results in completely
> wasted function calls. In the worst-case scenario (merging an already
> sorted list where 'a' is exhausted quickly), this results in
> approximately (N/2)/256 unnecessary cmp() calls.

Why isn't this a problem for other users of list_sort()?
Are the lists they sort guaranteed to be short?

Or did nobody test hard enough on slow machines without preempt? ;-)

Thanks,
//richard


* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-16  7:25 ` Richard Weinberger
@ 2026-03-16 18:04   ` Kuan-Wei Chiu
  2026-03-16 21:49     ` Richard Weinberger
  2026-03-17 14:22     ` Christoph Hellwig
  0 siblings, 2 replies; 12+ messages in thread
From: Kuan-Wei Chiu @ 2026-03-16 18:04 UTC (permalink / raw)
  To: Richard Weinberger
  Cc: Andrew Morton, chengzhihao1, jserv, eleanor15x, marscheng,
	linux-mtd, linux-kernel

Hi Richard,

On Mon, Mar 16, 2026 at 08:25:57AM +0100, Richard Weinberger wrote:
> ----- Original Message -----
> > From: "Kuan-Wei Chiu" <visitorckw@gmail.com>
> > Historically, list_sort() implemented a hack in merge_final():
> >    if (unlikely(!++count))
> >        cmp(priv, b, b);
> > 
> > This was designed specifically so that callers could periodically
> > invoke cond_resched() within their comparison functions when merging
> > highly unbalanced lists.
> > 
> > However, an audit of the kernel tree reveals that only fs/ubifs/ relies
> > on this mechanism. For the vast majority of list_sort() users (such as
> > block layer IO schedulers and file systems), this results in completely
> > wasted function calls. In the worst-case scenario (merging an already
> > sorted list where 'a' is exhausted quickly), this results in
> > approximately (N/2)/256 unnecessary cmp() calls.
> 
> Why isn't this a problem for other users of list_sort()?
> Are the lists they sort guaranteed to be short?
> 
> Or did nobody test hard enough on slow machines without preempt? ;-)

TBH, I don't really have a clear answer to that.

I tried to dig into the history. It turns out this mechanism was
introduced 16 years ago in commit 835cc0c8477f ("lib: more scalable
list_sort()"). The commit message explicitly mentioned both XFS and
UBIFS as the intended users for this long-list workaround. However,
looking at the tree back then, XFS never actually put a cond_resched()
in their cmp() function. It seems UBIFS has been the sole user of this
trick ever since. Given that it has been this way for 16 years, it
seems other subsystems haven't really encountered any practical issues
with it.

For UBIFS, this patch doesn't alter the frequency, timing, or behavior
of the cond_resched() calls at all, so I am confident that this won't
introduce any regressions.

Regards,
Kuan-Wei


* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-16 18:04   ` Kuan-Wei Chiu
@ 2026-03-16 21:49     ` Richard Weinberger
  2026-03-17 14:22     ` Christoph Hellwig
  1 sibling, 0 replies; 12+ messages in thread
From: Richard Weinberger @ 2026-03-16 21:49 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: Andrew Morton, chengzhihao1, jserv, eleanor15x, marscheng,
	linux-mtd, linux-kernel

----- Original Message -----
> From: "Kuan-Wei Chiu" <visitorckw@gmail.com>
>> > However, an audit of the kernel tree reveals that only fs/ubifs/ relies
>> > on this mechanism. For the vast majority of list_sort() users (such as
>> > block layer IO schedulers and file systems), this results in completely
>> > wasted function calls. In the worst-case scenario (merging an already
>> > sorted list where 'a' is exhausted quickly), this results in
>> > approximately (N/2)/256 unnecessary cmp() calls.
>> 
>> Why isn't this a problem for other users of list_sort()?
>> Are the lists they sort guaranteed to be short?
>> 
>> Or did nobody test hard enough on slow machines without preempt? ;-)
> 
> TBH, I don't really have a clear answer to that.
> 
> I tried to dig into the history. It turns out this mechanism was
> introduced 16 years ago in commit 835cc0c8477f ("lib: more scalable
> list_sort()"). The commit message explicitly mentioned both XFS and
> UBIFS as the intended users for this long-list workaround. However,
> looking at the tree back then, XFS never actually put a cond_resched()
> in their cmp() function. It seems UBIFS has been the sole user of this
> trick ever since. Given that it has been this way for 16 years, it
> seems other subsystems haven't really encountered any practical issues
> with it.

Traditionally both UBI and UBIFS use cond_resched() heavily; my best
guess is that's because they're mostly used on tiny embedded systems
where soft lockups are more likely.

> For UBIFS, this patch doesn't alter the frequency, timing, or behavior
> of the cond_resched() calls at all, so I am confident that this won't
> introduce any regressions.

Sure. I just want to make sure I understand why UBIFS needs special
treatment.

Thanks,
//richard


* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-15 19:39 [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls Kuan-Wei Chiu
  2026-03-16  7:25 ` Richard Weinberger
@ 2026-03-17  4:05 ` Zhihao Cheng
  2026-03-17 12:32   ` Kuan-Wei Chiu
  1 sibling, 1 reply; 12+ messages in thread
From: Zhihao Cheng @ 2026-03-17  4:05 UTC (permalink / raw)
  To: Kuan-Wei Chiu, richard, akpm
  Cc: jserv, eleanor15x, marscheng, linux-mtd, linux-kernel

On 2026/3/16 3:39, Kuan-Wei Chiu wrote:
> Historically, list_sort() implemented a hack in merge_final():
>      if (unlikely(!++count))
>          cmp(priv, b, b);
> 
> This was designed specifically so that callers could periodically
> invoke cond_resched() within their comparison functions when merging
> highly unbalanced lists.
> 
> However, an audit of the kernel tree reveals that only fs/ubifs/ relies
> on this mechanism. For the vast majority of list_sort() users (such as
> block layer IO schedulers and file systems), this results in completely
> wasted function calls. In the worst-case scenario (merging an already
> sorted list where 'a' is exhausted quickly), this results in
> approximately (N/2)/256 unnecessary cmp() calls.
> 
> To clean up this API while ensuring behavior compatibility:
> 1. Introduce list_sort_nonatomic(), which explicitly calls
>     cond_resched() internally when count overflows.
> 2. Remove the dummy cmp(priv, b, b) fallback for standard list_sort(),
>     saving unnecessary function calls and improving determinism.
> 3. Convert the sole user (fs/ubifs/) to the new API.
> 
> Note that ubifs still maintains cond_resched() inside its own
> comparison functions. This patch does not alter the frequency or timing
> of those scheduling points, guaranteeing no regressions for ubifs,
> while benefiting all other kernel users.
> 
> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> ---
>   fs/ubifs/gc.c             |   4 +-
>   fs/ubifs/replay.c         |   2 +-
>   include/linux/list_sort.h |   3 +
>   lib/list_sort.c           | 166 +++++++++++++++++++++-----------------
>   4 files changed, 100 insertions(+), 75 deletions(-)

lgtm for UBIFS.

Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>

one small nit below.

> 
[...]
> diff --git a/include/linux/list_sort.h b/include/linux/list_sort.h
> index 453105f74e05..f7af29073d48 100644
> --- a/include/linux/list_sort.h
> +++ b/include/linux/list_sort.h
> @@ -11,4 +11,7 @@ typedef int __attribute__((nonnull(2,3))) (*list_cmp_func_t)(void *,
>   
>   __attribute__((nonnull(2,3)))
>   void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp);
> +
> +__attribute__((nonnull(2, 3)))
> +void list_sort_nonatomic(void *priv, struct list_head *head, list_cmp_func_t cmp);
>   #endif
> diff --git a/lib/list_sort.c b/lib/list_sort.c
> index a310ecb7ccc0..788bfc26cf7b 100644
> --- a/lib/list_sort.c
> +++ b/lib/list_sort.c
> @@ -3,6 +3,7 @@
>   #include <linux/export.h>
>   #include <linux/list_sort.h>
>   #include <linux/list.h>
> +#include <linux/sched.h>
>   
>   /*
>    * Returns a list organized in an intermediate format suited
> @@ -47,7 +48,7 @@ static struct list_head *merge(void *priv, list_cmp_func_t cmp,
>    */
>   __attribute__((nonnull(2,3,4,5)))
>   static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
> -			struct list_head *a, struct list_head *b)
> +			struct list_head *a, struct list_head *b, bool may_schedule)
>   {
>   	struct list_head *tail = head;
>   	u8 count = 0;
> @@ -79,12 +80,11 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
>   		/*
>   		 * If the merge is highly unbalanced (e.g. the input is
>   		 * already sorted), this loop may run many iterations.
> -		 * Continue callbacks to the client even though no
> -		 * element comparison is needed, so the client's cmp()
> -		 * routine can invoke cond_resched() periodically.
> +		 * If may_schedule is true, periodically invoke cond_resched()
> +		 * to avoid soft lockups.
>   		 */
> -		if (unlikely(!++count))
> -			cmp(priv, b, b);
> +		if (may_schedule && unlikely(!++count))
> +			cond_resched();
cond_resched() already checks internally whether it should schedule 
out, so could the 'count' be removed?
>   		b->prev = tail;
>   		tail = b;
>   		b = b->next;




* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-17  4:05 ` Zhihao Cheng
@ 2026-03-17 12:32   ` Kuan-Wei Chiu
  2026-03-17 13:22     ` Zhihao Cheng
  0 siblings, 1 reply; 12+ messages in thread
From: Kuan-Wei Chiu @ 2026-03-17 12:32 UTC (permalink / raw)
  To: Zhihao Cheng
  Cc: richard, akpm, jserv, eleanor15x, marscheng, linux-mtd,
	linux-kernel

Hi Zhihao,

On Tue, Mar 17, 2026 at 12:05:54PM +0800, Zhihao Cheng wrote:
> On 2026/3/16 3:39, Kuan-Wei Chiu wrote:
> > Historically, list_sort() implemented a hack in merge_final():
> >      if (unlikely(!++count))
> >          cmp(priv, b, b);
> > 
> > This was designed specifically so that callers could periodically
> > invoke cond_resched() within their comparison functions when merging
> > highly unbalanced lists.
> > 
> > However, an audit of the kernel tree reveals that only fs/ubifs/ relies
> > on this mechanism. For the vast majority of list_sort() users (such as
> > block layer IO schedulers and file systems), this results in completely
> > wasted function calls. In the worst-case scenario (merging an already
> > sorted list where 'a' is exhausted quickly), this results in
> > approximately (N/2)/256 unnecessary cmp() calls.
> > 
> > To clean up this API while ensuring behavior compatibility:
> > 1. Introduce list_sort_nonatomic(), which explicitly calls
> >     cond_resched() internally when count overflows.
> > 2. Remove the dummy cmp(priv, b, b) fallback for standard list_sort(),
> >     saving unnecessary function calls and improving determinism.
> > 3. Convert the sole user (fs/ubifs/) to the new API.
> > 
> > Note that ubifs still maintains cond_resched() inside its own
> > comparison functions. This patch does not alter the frequency or timing
> > of those scheduling points, guaranteeing no regressions for ubifs,
> > while benefiting all other kernel users.
> > 
> > Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> > ---
> >   fs/ubifs/gc.c             |   4 +-
> >   fs/ubifs/replay.c         |   2 +-
> >   include/linux/list_sort.h |   3 +
> >   lib/list_sort.c           | 166 +++++++++++++++++++++-----------------
> >   4 files changed, 100 insertions(+), 75 deletions(-)
> 
> lgtm for UBIFS.
> 
> Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>

Thanks for your review!

> 
> one small nit below.
> 
> > 
> [...]
> > diff --git a/include/linux/list_sort.h b/include/linux/list_sort.h
> > index 453105f74e05..f7af29073d48 100644
> > --- a/include/linux/list_sort.h
> > +++ b/include/linux/list_sort.h
> > @@ -11,4 +11,7 @@ typedef int __attribute__((nonnull(2,3))) (*list_cmp_func_t)(void *,
> >   __attribute__((nonnull(2,3)))
> >   void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp);
> > +
> > +__attribute__((nonnull(2, 3)))
> > +void list_sort_nonatomic(void *priv, struct list_head *head, list_cmp_func_t cmp);
> >   #endif
> > diff --git a/lib/list_sort.c b/lib/list_sort.c
> > index a310ecb7ccc0..788bfc26cf7b 100644
> > --- a/lib/list_sort.c
> > +++ b/lib/list_sort.c
> > @@ -3,6 +3,7 @@
> >   #include <linux/export.h>
> >   #include <linux/list_sort.h>
> >   #include <linux/list.h>
> > +#include <linux/sched.h>
> >   /*
> >    * Returns a list organized in an intermediate format suited
> > @@ -47,7 +48,7 @@ static struct list_head *merge(void *priv, list_cmp_func_t cmp,
> >    */
> >   __attribute__((nonnull(2,3,4,5)))
> >   static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
> > -			struct list_head *a, struct list_head *b)
> > +			struct list_head *a, struct list_head *b, bool may_schedule)
> >   {
> >   	struct list_head *tail = head;
> >   	u8 count = 0;
> > @@ -79,12 +80,11 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
> >   		/*
> >   		 * If the merge is highly unbalanced (e.g. the input is
> >   		 * already sorted), this loop may run many iterations.
> > -		 * Continue callbacks to the client even though no
> > -		 * element comparison is needed, so the client's cmp()
> > -		 * routine can invoke cond_resched() periodically.
> > +		 * If may_schedule is true, periodically invoke cond_resched()
> > +		 * to avoid soft lockups.
> >   		 */
> > -		if (unlikely(!++count))
> > -			cmp(priv, b, b);
> > +		if (may_schedule && unlikely(!++count))
> > +			cond_resched();
> cond_resched() already checks internally whether it should schedule
> out, so could the 'count' be removed?

However, I think keeping the u8 count rate-limiter makes more sense
here due to the overhead difference.

Evaluating unlikely(!++count) is essentially a single ALU instruction
(register increment) and a zero-flag check, which has virtually zero
cost. On the other hand, cond_resched() is a macro that does much more
than a simple flag check. Depending on the kernel config, it often
invokes __might_resched() (which reads current to check task_struct
states, irq flags, etc.) and makes a call to __cond_resched().
Evaluating all of this heavy machinery on every single iteration of
such a tight loop would probably introduce noticeable overhead.

Actually, your comment brings up another thought I wanted to discuss.

Since we are introducing list_sort_nonatomic(), I wonder if we should
eventually move the cond_resched() out of UBIFS's cmp() functions
entirely and handle it inside list_sort_nonatomic().

Right now, because the cmp() callback is inherently invoked at every
step of the merge process, UBIFS ends up evaluating the cond_resched()
macro every 3 or 4 pointer assignments during the main sort. While
UBIFS needs to prevent soft lockups on huge lists, checking for resched
at such a micro-granularity still feels excessive and likely leaves
performance on the table.

I didn't make this change in the current patch because I don't have the
proper UBIFS hardware/setup to benchmark the performance difference,
and I wanted to keep the current scheduling frequency exactly the same
to guarantee no regressions. But I'd love to hear your thoughts on
whether reducing the frequency and moving it out of UBIFS's cmp() is
something worth doing in the future.

Regards,
Kuan-Wei


* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-17 12:32   ` Kuan-Wei Chiu
@ 2026-03-17 13:22     ` Zhihao Cheng
  2026-03-17 14:15       ` Kuan-Wei Chiu
  0 siblings, 1 reply; 12+ messages in thread
From: Zhihao Cheng @ 2026-03-17 13:22 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: richard, akpm, jserv, eleanor15x, marscheng, linux-mtd,
	linux-kernel

On 2026/3/17 20:32, Kuan-Wei Chiu wrote:
> Hi Zhihao,
> 
> On Tue, Mar 17, 2026 at 12:05:54PM +0800, Zhihao Cheng wrote:
>> On 2026/3/16 3:39, Kuan-Wei Chiu wrote:
>>> Historically, list_sort() implemented a hack in merge_final():
>>>       if (unlikely(!++count))
>>>           cmp(priv, b, b);
>>>
>>> This was designed specifically so that callers could periodically
>>> invoke cond_resched() within their comparison functions when merging
>>> highly unbalanced lists.
>>>
>>> However, an audit of the kernel tree reveals that only fs/ubifs/ relies
>>> on this mechanism. For the vast majority of list_sort() users (such as
>>> block layer IO schedulers and file systems), this results in completely
>>> wasted function calls. In the worst-case scenario (merging an already
>>> sorted list where 'a' is exhausted quickly), this results in
>>> approximately (N/2)/256 unnecessary cmp() calls.
>>>
>>> To clean up this API while ensuring behavior compatibility:
>>> 1. Introduce list_sort_nonatomic(), which explicitly calls
>>>      cond_resched() internally when count overflows.
>>> 2. Remove the dummy cmp(priv, b, b) fallback for standard list_sort(),
>>>      saving unnecessary function calls and improving determinism.
>>> 3. Convert the sole user (fs/ubifs/) to the new API.
>>>
>>> Note that ubifs still maintains cond_resched() inside its own
>>> comparison functions. This patch does not alter the frequency or timing
>>> of those scheduling points, guaranteeing no regressions for ubifs,
>>> while benefiting all other kernel users.
>>>
>>> Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
>>> ---
>>>    fs/ubifs/gc.c             |   4 +-
>>>    fs/ubifs/replay.c         |   2 +-
>>>    include/linux/list_sort.h |   3 +
>>>    lib/list_sort.c           | 166 +++++++++++++++++++++-----------------
>>>    4 files changed, 100 insertions(+), 75 deletions(-)
>>
>> lgtm for UBIFS.
>>
>> Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
> 
> Thanks for your review!
> 
>>
>> one small nit below.
>>
>>>
>> [...]
>>> diff --git a/include/linux/list_sort.h b/include/linux/list_sort.h
>>> index 453105f74e05..f7af29073d48 100644
>>> --- a/include/linux/list_sort.h
>>> +++ b/include/linux/list_sort.h
>>> @@ -11,4 +11,7 @@ typedef int __attribute__((nonnull(2,3))) (*list_cmp_func_t)(void *,
>>>    __attribute__((nonnull(2,3)))
>>>    void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp);
>>> +
>>> +__attribute__((nonnull(2, 3)))
>>> +void list_sort_nonatomic(void *priv, struct list_head *head, list_cmp_func_t cmp);
>>>    #endif
>>> diff --git a/lib/list_sort.c b/lib/list_sort.c
>>> index a310ecb7ccc0..788bfc26cf7b 100644
>>> --- a/lib/list_sort.c
>>> +++ b/lib/list_sort.c
>>> @@ -3,6 +3,7 @@
>>>    #include <linux/export.h>
>>>    #include <linux/list_sort.h>
>>>    #include <linux/list.h>
>>> +#include <linux/sched.h>
>>>    /*
>>>     * Returns a list organized in an intermediate format suited
>>> @@ -47,7 +48,7 @@ static struct list_head *merge(void *priv, list_cmp_func_t cmp,
>>>     */
>>>    __attribute__((nonnull(2,3,4,5)))
>>>    static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
>>> -			struct list_head *a, struct list_head *b)
>>> +			struct list_head *a, struct list_head *b, bool may_schedule)
>>>    {
>>>    	struct list_head *tail = head;
>>>    	u8 count = 0;
>>> @@ -79,12 +80,11 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
>>>    		/*
>>>    		 * If the merge is highly unbalanced (e.g. the input is
>>>    		 * already sorted), this loop may run many iterations.
>>> -		 * Continue callbacks to the client even though no
>>> -		 * element comparison is needed, so the client's cmp()
>>> -		 * routine can invoke cond_resched() periodically.
>>> +		 * If may_schedule is true, periodically invoke cond_resched()
>>> +		 * to avoid soft lockups.
>>>    		 */
>>> -		if (unlikely(!++count))
>>> -			cmp(priv, b, b);
>>> +		if (may_schedule && unlikely(!++count))
>>> +			cond_resched();
>> cond_resched() already checks internally whether it should schedule
>> out, so could the 'count' be removed?
> 
> However, I think keeping the u8 count rate-limiter makes more sense
> here due to the overhead difference.
> 
> Evaluating unlikely(!++count) is essentially a single ALU instruction
> (register increment) and a zero-flag check, which has virtually zero
> cost. On the other hand, cond_resched() is a macro that does much more
> than a simple flag check. Depending on the kernel config, it often
> invokes __might_resched() (which reads current to check task_struct
> states, irq flags, etc.) and makes a call to __cond_resched().
> Evaluating all of this heavy machinery on every single iteration of
> such a tight loop would probably introduce noticeable overhead.
> 
> Actually, your comment brings up another thought I wanted to discuss.
> 
> Since we are introducing list_sort_nonatomic(), I wonder if we should
> eventually move the cond_resched() out of UBIFS's cmp() functions
> entirely and handle it inside list_sort_nonatomic().
> 
> Right now, because the cmp() callback is inherently invoked at every
> step of the merge process, UBIFS ends up evaluating the cond_resched()
> macro every 3 or 4 pointer assignments during the main sort. While
> UBIFS needs to prevent soft lockups on huge lists, checking for resched
> at such a micro-granularity still feels excessive and likely leaves
> performance on the table.

In my humble opinion, I don't think frequent cond_resched() calls will 
bring an observable performance impact, and there are many examples in 
kernel hotspot paths (e.g. 
blk_mq_prealloc_tag_set_tags/blk_rq_poll_completion/__blk_mq_alloc_rq_maps 
...). For list_sort(), I'd prefer that the aim of the cleanup be making 
the code more readable. I am neutral on cleaning up the current 
implementation of list_sort.
> 
> I didn't make this change in the current patch because I don't have the
> proper UBIFS hardware/setup to benchmark the performance difference,
> and I wanted to keep the current scheduling frequency exactly the same
> to guarantee no regressions. But I'd love to hear your thoughts on
> whether reducing the frequency and moving it out of UBIFS's cmp() is
> something worth doing in the future.
> 
> Regards,
> Kuan-Wei
> .
> 
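[Editor's note] The u8 rate-limiter being defended above can be sketched in userspace as follows. This is a hypothetical illustration, not kernel code: expensive_check() is a counting stand-in for cond_resched(), and uint8_t models the kernel's u8. It shows why the check costs only an increment and a zero test per iteration while the expensive call runs once per 256 iterations.

```c
#include <stdint.h>

static unsigned long expensive_calls;

/* Stand-in for cond_resched(): here we only count how often it runs. */
static void expensive_check(void)
{
	expensive_calls++;
}

/*
 * Run 'iterations' steps of a tight loop, invoking the expensive
 * check only when the 8-bit counter wraps to zero -- i.e. once
 * every 256 iterations.  The per-iteration cost is one increment
 * and one zero test.
 */
unsigned long rate_limited_loop(unsigned long iterations)
{
	uint8_t count = 0;
	unsigned long i;

	expensive_calls = 0;
	for (i = 0; i < iterations; i++) {
		if (!++count)
			expensive_check();
	}
	return expensive_calls;
}
```

With 1024 iterations the expensive path runs only 4 times, which is the trade-off the `unlikely(!++count)` idiom buys in merge_final().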


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-17 13:22     ` Zhihao Cheng
@ 2026-03-17 14:15       ` Kuan-Wei Chiu
  0 siblings, 0 replies; 12+ messages in thread
From: Kuan-Wei Chiu @ 2026-03-17 14:15 UTC (permalink / raw)
  To: Zhihao Cheng
  Cc: richard, akpm, jserv, eleanor15x, marscheng, linux-mtd,
	linux-kernel

Hi Zhihao,

On Tue, Mar 17, 2026 at 09:22:26PM +0800, Zhihao Cheng wrote:
> On 2026/3/17 20:32, Kuan-Wei Chiu wrote:
> > Hi Zhihao,
> > 
> > On Tue, Mar 17, 2026 at 12:05:54PM +0800, Zhihao Cheng wrote:
> > > On 2026/3/16 3:39, Kuan-Wei Chiu wrote:
> > > > Historically, list_sort() implemented a hack in merge_final():
> > > >       if (unlikely(!++count))
> > > >           cmp(priv, b, b);
> > > > 
> > > > This was designed specifically so that callers could periodically
> > > > invoke cond_resched() within their comparison functions when merging
> > > > highly unbalanced lists.
> > > > 
> > > > However, an audit of the kernel tree reveals that only fs/ubifs/ relies
> > > > on this mechanism. For the vast majority of list_sort() users (such as
> > > > block layer IO schedulers and file systems), this results in completely
> > > > wasted function calls. In the worst-case scenario (merging an already
> > > > sorted list where 'a' is exhausted quickly), this results in
> > > > approximately (N/2)/256 unnecessary cmp() calls.
> > > > 
> > > > To clean up this API while ensuring behavior compatibility:
> > > > 1. Introduce list_sort_nonatomic(), which explicitly calls
> > > >      cond_resched() internally when count overflows.
> > > > 2. Remove the dummy cmp(priv, b, b) fallback for standard list_sort(),
> > > >      saving unnecessary function calls and improving determinism.
> > > > 3. Convert the sole user (fs/ubifs/) to the new API.
> > > > 
> > > > Note that ubifs still maintains cond_resched() inside its own
> > > > comparison functions. This patch does not alter the frequency or timing
> > > > of those scheduling points, guaranteeing no regressions for ubifs,
> > > > while benefiting all other kernel users.
> > > > 
> > > > Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
> > > > ---
> > > >    fs/ubifs/gc.c             |   4 +-
> > > >    fs/ubifs/replay.c         |   2 +-
> > > >    include/linux/list_sort.h |   3 +
> > > >    lib/list_sort.c           | 166 +++++++++++++++++++++-----------------
> > > >    4 files changed, 100 insertions(+), 75 deletions(-)
> > > 
> > > lgtm for UBIFS.
> > > 
> > > Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
> > 
> > Thanks for your review!
> > 
> > > 
> > > one small nit below.
> > > 
> > > > 
> > > [...]
> > > > diff --git a/include/linux/list_sort.h b/include/linux/list_sort.h
> > > > index 453105f74e05..f7af29073d48 100644
> > > > --- a/include/linux/list_sort.h
> > > > +++ b/include/linux/list_sort.h
> > > > @@ -11,4 +11,7 @@ typedef int __attribute__((nonnull(2,3))) (*list_cmp_func_t)(void *,
> > > >    __attribute__((nonnull(2,3)))
> > > >    void list_sort(void *priv, struct list_head *head, list_cmp_func_t cmp);
> > > > +
> > > > +__attribute__((nonnull(2, 3)))
> > > > +void list_sort_nonatomic(void *priv, struct list_head *head, list_cmp_func_t cmp);
> > > >    #endif
> > > > diff --git a/lib/list_sort.c b/lib/list_sort.c
> > > > index a310ecb7ccc0..788bfc26cf7b 100644
> > > > --- a/lib/list_sort.c
> > > > +++ b/lib/list_sort.c
> > > > @@ -3,6 +3,7 @@
> > > >    #include <linux/export.h>
> > > >    #include <linux/list_sort.h>
> > > >    #include <linux/list.h>
> > > > +#include <linux/sched.h>
> > > >    /*
> > > >     * Returns a list organized in an intermediate format suited
> > > > @@ -47,7 +48,7 @@ static struct list_head *merge(void *priv, list_cmp_func_t cmp,
> > > >     */
> > > >    __attribute__((nonnull(2,3,4,5)))
> > > >    static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
> > > > -			struct list_head *a, struct list_head *b)
> > > > +			struct list_head *a, struct list_head *b, bool may_schedule)
> > > >    {
> > > >    	struct list_head *tail = head;
> > > >    	u8 count = 0;
> > > > @@ -79,12 +80,11 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
> > > >    		/*
> > > >    		 * If the merge is highly unbalanced (e.g. the input is
> > > >    		 * already sorted), this loop may run many iterations.
> > > > -		 * Continue callbacks to the client even though no
> > > > -		 * element comparison is needed, so the client's cmp()
> > > > -		 * routine can invoke cond_resched() periodically.
> > > > +		 * If may_schedule is true, periodically invoke cond_resched()
> > > > +		 * to avoid soft lockups.
> > > >    		 */
> > > > -		if (unlikely(!++count))
> > > > -			cmp(priv, b, b);
> > > > +		if (may_schedule && unlikely(!++count))
> > > > +			cond_resched();
> > > cond_resched() already checks internally whether it needs to schedule
> > > out, so the 'count' could be removed?
> > 
> > However, I think keeping the u8 count rate-limiter makes more sense
> > here due to the overhead difference.
> > 
> > Evaluating unlikely(!++count) is essentially a single ALU instruction
> > (register increment) and a zero-flag check, which has virtually zero
> > cost. On the other hand, cond_resched() is a macro that does much more
> > than a simple flag check. Depending on the kernel config, it often
> > invokes __might_resched() (which reads current to check task_struct
> > states, irq flags, etc.) and makes a call to __cond_resched().
> > Evaluating all of this heavy machinery on every single iteration of
> > such a tight loop would probably introduce noticeable overhead.
> > 
> > Actually, your comment brings up another thought I wanted to discuss.
> > 
> > Since we are introducing list_sort_nonatomic(), I wonder if we should
> > eventually move the cond_resched() out of UBIFS's cmp() functions
> > entirely and handle it inside list_sort_nonatomic().
> > 
> > Right now, because the cmp() callback is inherently invoked at every
> > step of the merge process, UBIFS ends up evaluating the cond_resched()
> > macro every 3 or 4 pointer assignments during the main sort. While
> > UBIFS needs to prevent soft lockups on huge lists, checking for resched
> > at such a micro-granularity still feels excessive and likely leaves
> > performance on the table.
> 
> In my humble opinion, I don't think frequent cond_resched() calls will
> bring an observable performance impact, and there are many examples in kernel
> hotspot paths (e.g.
> blk_mq_prealloc_tag_set_tags/blk_rq_poll_completion/__blk_mq_alloc_rq_maps
> ...). For list_sort(), I'd prefer that the aim of the cleanup be making the
> code more readable. I am neutral on cleaning up the current implementation
> of list_sort.

OK. Since this only affects UBIFS, if performance isn't a concern for
UBIFS, I'll drop the !++count check in v2.

Additionally, I plan to remove cond_resched() from UBIFS's cmp()
functions in v2 and move it directly into merge() and merge_final()
inside list_sort. I think this will make the code much more readable
compared to hiding the scheduling points inside the comparison
callbacks.

Regards,
Kuan-Wei
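[Editor's note] The v2 shape described above — scheduling points inside the merge itself rather than hidden in the comparison callbacks — might look roughly like this userspace sketch. Everything here is hypothetical: fake_cond_resched() is a counting no-op standing in for the real macro, and a simplified singly linked list is used instead of struct list_head; per the v2 plan, the u8 rate-limiter is dropped and cond_resched() itself decides whether to yield.

```c
#include <stddef.h>

struct node {
	int val;
	struct node *next;
};

/* Counting no-op standing in for cond_resched(). */
static unsigned long resched_points;

static void fake_cond_resched(void)
{
	resched_points++;
}

/*
 * Merge two sorted singly linked lists.  When may_schedule is true,
 * offer a scheduling point on every iteration, mirroring the proposal
 * to move cond_resched() out of the cmp() callbacks and into the
 * merge loop itself.
 */
struct node *merge_sorted(struct node *a, struct node *b, int may_schedule)
{
	struct node head = { 0, NULL };
	struct node *tail = &head;

	while (a && b) {
		if (a->val <= b->val) {
			tail->next = a;
			a = a->next;
		} else {
			tail->next = b;
			b = b->next;
		}
		tail = tail->next;
		if (may_schedule)
			fake_cond_resched();
	}
	tail->next = a ? a : b;
	return head.next;
}
```

A caller that must not sleep would pass may_schedule == 0, which is the distinction list_sort() vs. list_sort_nonatomic() draws at the API level.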


* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-16 18:04   ` Kuan-Wei Chiu
  2026-03-16 21:49     ` Richard Weinberger
@ 2026-03-17 14:22     ` Christoph Hellwig
  2026-03-17 14:38       ` Richard Weinberger
  1 sibling, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2026-03-17 14:22 UTC (permalink / raw)
  To: Kuan-Wei Chiu
  Cc: Richard Weinberger, Andrew Morton, chengzhihao1, jserv,
	eleanor15x, marscheng, linux-mtd, linux-kernel

On Tue, Mar 17, 2026 at 02:04:07AM +0800, Kuan-Wei Chiu wrote:
> I tried to dig into the history. It turns out this mechanism was
> introduced 16 years ago in commit 835cc0c8477f ("lib: more scalable
> list_sort()"). The commit message explicitly mentioned both XFS and
> UBIFS as the intended users for this long-list workaround. However,
> looking at the tree back then, XFS never actually put a cond_resched()
> in their cmp() function. It seems UBIFS has been the sole user of this
> trick ever since. Given that it has been this way for 16 years, it
> seems other subsystems haven't really encountered any practical issues
> with it.

.. or it wasn't even needed in the first place.

> For UBIFS, this patch doesn't alter the frequency, timing, or behavior
> of the cond_resched() calls at all, so I am confident that this won't
> introduce any regressions.

I'd be tempted to drop the workaround and remove the cond_resched
from ubifs given that entirely non-preemptible scheduling models are
on their way out.



* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-17 14:22     ` Christoph Hellwig
@ 2026-03-17 14:38       ` Richard Weinberger
  2026-03-17 14:40         ` Christoph Hellwig
  0 siblings, 1 reply; 12+ messages in thread
From: Richard Weinberger @ 2026-03-17 14:38 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Kuan-Wei Chiu, Andrew Morton, chengzhihao1, jserv, eleanor15x,
	marscheng, linux-mtd, linux-kernel

----- Original Mail -----
>> For UBIFS, this patch doesn't alter the frequency, timing, or behavior
>> of the cond_resched() calls at all, so I am confident that this won't
>> introduce any regressions.
> 
> I'd be tempted to drop the workaround and remove the cond_resched
> from ubifs given that entirely non-preemptible scheduling models are
> on their way out.

arm32 still has no preempt-lazy.
This is one of the biggest platforms where UBIFS is still used.

Thanks,
//richard


* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-17 14:38       ` Richard Weinberger
@ 2026-03-17 14:40         ` Christoph Hellwig
  2026-03-17 16:08           ` Kuan-Wei Chiu
  0 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2026-03-17 14:40 UTC (permalink / raw)
  To: Richard Weinberger
  Cc: Christoph Hellwig, Kuan-Wei Chiu, Andrew Morton, chengzhihao1,
	jserv, eleanor15x, marscheng, linux-mtd, linux-kernel

On Tue, Mar 17, 2026 at 03:38:41PM +0100, Richard Weinberger wrote:
> ----- Original Mail -----
> >> For UBIFS, this patch doesn't alter the frequency, timing, or behavior
> >> of the cond_resched() calls at all, so I am confident that this won't
> >> introduce any regressions.
> > 
> > I'd be tempted to drop the workaround and remove the cond_resched
> > from ubifs given that entirely non-preemptible scheduling models are
> > on their way out.
> 
> arm32 still has no preempt-lazy.
> This is one of the biggest platforms where UBIFS is still used.

Time to fix that if it wants to stay alive.



* Re: [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls
  2026-03-17 14:40         ` Christoph Hellwig
@ 2026-03-17 16:08           ` Kuan-Wei Chiu
  0 siblings, 0 replies; 12+ messages in thread
From: Kuan-Wei Chiu @ 2026-03-17 16:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Richard Weinberger, Andrew Morton, chengzhihao1, jserv,
	eleanor15x, marscheng, linux-mtd, linux-kernel

On Tue, Mar 17, 2026 at 07:40:18AM -0700, Christoph Hellwig wrote:
> On Tue, Mar 17, 2026 at 03:38:41PM +0100, Richard Weinberger wrote:
> > ----- Original Mail -----
> > >> For UBIFS, this patch doesn't alter the frequency, timing, or behavior
> > >> of the cond_resched() calls at all, so I am confident that this won't
> > >> introduce any regressions.
> > > 
> > > I'd be tempted to drop the workaround and remove the cond_resched
> > > from ubifs given that entirely non-preemptible scheduling models are
> > > on their way out.
> > 
> > arm32 still has no preempt-lazy.
> > This is one of the biggest platforms where UBIFS is still used.
> 
> Time to fix that if it wants to stay alive.
> 
It seems that dropping cond_resched() right now would cause issues for
UBIFS.

Given that, let's stick with the current plan for now. If we reach a point
in the future where it's safe to drop these scheduling points, we can
always revisit this and remove them.

Regards,
Kuan-Wei


end of thread, other threads:[~2026-03-17 16:08 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-15 19:39 [PATCH] lib/list_sort: introduce list_sort_nonatomic() and remove dummy cmp() calls Kuan-Wei Chiu
2026-03-16  7:25 ` Richard Weinberger
2026-03-16 18:04   ` Kuan-Wei Chiu
2026-03-16 21:49     ` Richard Weinberger
2026-03-17 14:22     ` Christoph Hellwig
2026-03-17 14:38       ` Richard Weinberger
2026-03-17 14:40         ` Christoph Hellwig
2026-03-17 16:08           ` Kuan-Wei Chiu
2026-03-17  4:05 ` Zhihao Cheng
2026-03-17 12:32   ` Kuan-Wei Chiu
2026-03-17 13:22     ` Zhihao Cheng
2026-03-17 14:15       ` Kuan-Wei Chiu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox