public inbox for stable@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT
@ 2026-01-18 16:15 wen.yang
  2026-01-18 16:15 ` [PATCH 6.6 1/3] net: napi_schedule_rps() cleanup wen.yang
                   ` (5 more replies)
  0 siblings, 6 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:15 UTC (permalink / raw)
  To: Greg Kroah-Hartman; +Cc: stable, linux-kernel, Wen Yang

From: Wen Yang <wen.yang@linux.dev>

Backport three upstream commits to fix a warning on PREEMPT_RT kernels
where raising SOFTIRQ from smp_call_function triggers WARN_ON_ONCE()
in do_softirq_post_smp_call_flush().

The issue occurs when RPS sends IPIs for backlog NAPI, causing softirqs
from irq context on PREEMPT_RT. The solution implements backlog
NAPI threads to avoid IPI-triggered softirqs, which is required for
PREEMPT_RT kernels.
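
The failure mode described above can be sketched as a toy userspace model (not kernel code; all names and the simplified logic are illustrative assumptions): raising the softirq from the RPS IPI handler means hardirq context, which is exactly what the PREEMPT_RT warning catches, while a backlog thread wakeup keeps the raise in task context.

```c
#include <assert.h>

/* Conceptual userspace model, NOT kernel code. It only shows why
 * raising NET_RX_SOFTIRQ from hardirq context (the RPS IPI handler)
 * trips the PREEMPT_RT warning, while handing the work to a per-CPU
 * backlog thread keeps the raise in task context. */
enum ctx { CTX_TASK, CTX_HARDIRQ };

/* Returns 1 if this raise would fire the (modelled) WARN_ON_ONCE()
 * in do_softirq_post_smp_call_flush() on a PREEMPT_RT kernel. */
static int raise_would_warn(enum ctx c, int preempt_rt)
{
	return preempt_rt && c == CTX_HARDIRQ;
}

/* Without backlog threads the remote CPU is reached via an IPI, so
 * the softirq is raised from hardirq context; with backlog threads
 * the per-CPU kthread is woken instead and the raise happens from
 * task context. */
static int schedule_remote_backlog(int use_threads, int preempt_rt)
{
	enum ctx c = use_threads ? CTX_TASK : CTX_HARDIRQ;

	return raise_would_warn(c, preempt_rt);
}
```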

commit 8fcb76b934da ("net: napi_schedule_rps() cleanup") and 
commit 56364c910691 ("net: Remove conditional threaded-NAPI wakeup based on task state.")
are prerequisites.

The remaining dependencies have not been backported: they modify
structure definitions in header files and are optimizations rather
than bug fixes. These include:
c59647c0dc67 net: add softnet_data.in_net_rx_action
a1aaee7f8f79 net: make napi_threaded_poll() aware of sd->defer_list
87eff2ec57b6 net: optimize napi_threaded_poll() vs RPS/RFS
2b0cfa6e4956 net: add generic percpu page_pool allocator
...

Eric Dumazet (1):
  net: napi_schedule_rps() cleanup

Sebastian Andrzej Siewior (2):
  net: Remove conditional threaded-NAPI wakeup based on task state.
  net: Allow to use SMP threads for backlog NAPI.

 net/core/dev.c | 162 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 118 insertions(+), 44 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 6.6 1/3] net: napi_schedule_rps() cleanup
  2026-01-18 16:15 [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
@ 2026-01-18 16:15 ` wen.yang
  2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
  2026-01-18 16:28   ` wen.yang
  2026-01-18 16:15 ` [PATCH 6.6 2/3] net: Remove conditional threaded-NAPI wakeup based on task state wen.yang
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:15 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Eric Dumazet, Jason Xing, Paolo Abeni,
	Wen Yang

From: Eric Dumazet <edumazet@google.com>

commit 8fcb76b934daff12cde76adeab3d502eeb0734b1 upstream.

napi_schedule_rps() return value is ignored, remove it.

Change the comment to clarify the intent.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 114fc8bc37f8..e35f41e75bdd 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4684,11 +4684,18 @@ static void trigger_rx_softirq(void *data)
 }
 
 /*
- * Check if this softnet_data structure is another cpu one
- * If yes, queue it to our IPI list and return 1
- * If no, return 0
+ * After we queued a packet into sd->input_pkt_queue,
+ * we need to make sure this queue is serviced soon.
+ *
+ * - If this is another cpu queue, link it to our rps_ipi_list,
+ *   and make sure we will process rps_ipi_list from net_rx_action().
+ *   As we do not know yet if we are called from net_rx_action(),
+ *   we have to raise NET_RX_SOFTIRQ. This might change in the future.
+ *
+ * - If this is our own queue, NAPI schedule our backlog.
+ *   Note that this also raises NET_RX_SOFTIRQ.
  */
-static int napi_schedule_rps(struct softnet_data *sd)
+static void napi_schedule_rps(struct softnet_data *sd)
 {
 	struct softnet_data *mysd = this_cpu_ptr(&softnet_data);
 
@@ -4698,11 +4705,10 @@ static int napi_schedule_rps(struct softnet_data *sd)
 		mysd->rps_ipi_list = sd;
 
 		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
-		return 1;
+		return;
 	}
 #endif /* CONFIG_RPS */
 	__napi_schedule_irqoff(&mysd->backlog);
-	return 0;
 }
 
 #ifdef CONFIG_NET_FLOW_LIMIT
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 6.6 2/3] net: Remove conditional threaded-NAPI wakeup based on task state.
  2026-01-18 16:15 [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
  2026-01-18 16:15 ` [PATCH 6.6 1/3] net: napi_schedule_rps() cleanup wen.yang
@ 2026-01-18 16:15 ` wen.yang
  2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
  2026-01-18 16:28   ` wen.yang
  2026-01-18 16:15 ` [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI wen.yang
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:15 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Sebastian Andrzej Siewior, Jakub Kicinski,
	Paolo Abeni, Wen Yang

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

commit 56364c910691f6d10ba88c964c9041b9ab777bd6 upstream.

A NAPI thread is scheduled by first setting the NAPI_STATE_SCHED bit. If
successful (the bit was not yet set), NAPI_STATE_SCHED_THREADED is also
set, but only if the thread's state is not TASK_INTERRUPTIBLE (i.e. it is
TASK_RUNNING), followed by a task wakeup.

If the task is idle (TASK_INTERRUPTIBLE) then the
NAPI_STATE_SCHED_THREADED bit is not set. The thread is not relying on
the bit but always leaves the wait-loop after returning from schedule()
because there must have been a wakeup.

The smpboot-threads implementation for per-CPU threads requires an
explicit condition and does not support "if we get out of schedule()
then there must be something to do".

Removing this optimisation simplifies the following integration.

Set NAPI_STATE_SCHED_THREADED unconditionally on wakeup and rely on it
in the wait path by removing the `woken' condition.
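
The change in the wait-loop predicate can be sketched as a userspace toy model (not kernel code; function names are hypothetical): the old loop could also exit on a bare wakeup, the new loop relies solely on the state bit, which is the explicit condition a smpboot thread_should_run() callback later needs.

```c
#include <assert.h>

/* Userspace sketch, NOT kernel code: models the wait-loop exit
 * predicate before and after this patch. */
#define STATE_SCHED_THREADED	(1UL << 0)

/* Old predicate: bit set, OR we already returned from schedule()
 * once (`woken`), because the waker skipped set_bit() for idle
 * threads. */
static int old_should_poll(unsigned long state, int woken)
{
	return (state & STATE_SCHED_THREADED) || woken;
}

/* New predicate: the bit is the single source of truth; the waker
 * in ____napi_schedule() now sets it unconditionally before
 * wake_up_process(). */
static int new_should_poll(unsigned long state)
{
	return (state & STATE_SCHED_THREADED) != 0;
}
```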

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index e35f41e75bdd..83475b8b3e9d 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4447,13 +4447,7 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 		 */
 		thread = READ_ONCE(napi->thread);
 		if (thread) {
-			/* Avoid doing set_bit() if the thread is in
-			 * INTERRUPTIBLE state, cause napi_thread_wait()
-			 * makes sure to proceed with napi polling
-			 * if the thread is explicitly woken from here.
-			 */
-			if (READ_ONCE(thread->__state) != TASK_INTERRUPTIBLE)
-				set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+			set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 			wake_up_process(thread);
 			return;
 		}
@@ -6672,8 +6666,6 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 
 static int napi_thread_wait(struct napi_struct *napi)
 {
-	bool woken = false;
-
 	set_current_state(TASK_INTERRUPTIBLE);
 
 	while (!kthread_should_stop()) {
@@ -6682,15 +6674,13 @@ static int napi_thread_wait(struct napi_struct *napi)
 		 * Testing SCHED bit is not enough because SCHED bit might be
 		 * set by some other busy poll thread or by napi_disable().
 		 */
-		if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
+		if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state)) {
 			WARN_ON(!list_empty(&napi->poll_list));
 			__set_current_state(TASK_RUNNING);
 			return 0;
 		}
 
 		schedule();
-		/* woken being true indicates this thread owns this napi. */
-		woken = true;
 		set_current_state(TASK_INTERRUPTIBLE);
 	}
 	__set_current_state(TASK_RUNNING);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-18 16:15 [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
  2026-01-18 16:15 ` [PATCH 6.6 1/3] net: napi_schedule_rps() cleanup wen.yang
  2026-01-18 16:15 ` [PATCH 6.6 2/3] net: Remove conditional threaded-NAPI wakeup based on task state wen.yang
@ 2026-01-18 16:15 ` wen.yang
  2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
                     ` (3 more replies)
  2026-01-18 16:17 ` [PATCH 6.1 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
                   ` (2 subsequent siblings)
  5 siblings, 4 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:15 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Sebastian Andrzej Siewior, Jakub Kicinski,
	Paolo Abeni, Wen Yang

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.

Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
used by drivers which don't do NAPI themselves, by RPS, and by parts of
the stack which need to avoid recursive deadlocks while processing a packet.

Non-NAPI drivers use the CPU-local backlog NAPI. If RPS is enabled
then a flow for the skb is computed and based on the flow the skb can be
enqueued on a remote CPU. Scheduling/ raising the softirq (for backlog's
NAPI) on the remote CPU isn't trivial because the softirq is only
scheduled on the local CPU and performed after the hardirq is done.
In order to schedule a softirq on the remote CPU, an IPI is sent to the
remote CPU which schedules the backlog-NAPI on the then local CPU.

On PREEMPT_RT interrupts are force-threaded. The soft interrupts are
raised within the interrupt thread and processed after the interrupt
handler has completed, still within the context of the interrupt thread.
The softirq is handled in the context where it originated.

With force-threaded interrupts enabled, ksoftirqd is woken up if a
softirq is raised from hardirq context. This is the case if it is raised
from an IPI. Additionally there is a warning on PREEMPT_RT if the
softirq is raised from the idle thread.
This was done for two reasons:
- With threaded interrupts the processing should happen in thread
  context (where it originated) and ksoftirqd is the only thread for
  this context if raised from hardirq. Using the currently running task
  instead would "punish" a random task.
- Once ksoftirqd is active it consumes all further softirqs until it
  stops running. This changed recently and is no longer the case.

Instead of keeping the backlog NAPI in ksoftirqd (in force-threaded/
PREEMPT_RT setups) I am proposing NAPI-threads for backlog.
The "proper" setup with threaded-NAPI is not doable because the threads
are not pinned to an individual CPU and can be modified by the user.
Additionally a dummy network device would have to be assigned. Also
CPU-hotplug has to be considered if additional CPUs show up.
All this can be probably done/ solved but the smpboot-threads already
provide this infrastructure.

Sending UDP packets over loopback expects that the packet is processed
within the call. Delaying it by handing it over to the thread hurts
performance. It is not beneficial to the outcome if the context switch
happens immediately after enqueue or after a while to process a few
packets in a batch.
There is no need to always use the thread if the backlog NAPI is
requested on the local CPU. This restores the loopback throughput. The
performance drops mostly to the same value after enabling RPS on the
loopback, comparing the IPI and the thread results.

Create NAPI-threads for backlog if requested during boot. The thread runs
the inner loop from napi_threaded_poll(), the wait part is different. It
checks for NAPI_STATE_SCHED (the backlog NAPI can not be disabled).

The NAPI threads for backlog are optional; they have to be enabled via the
boot argument "thread_backlog_napi". It is mandatory for PREEMPT_RT to avoid the
wakeup of ksoftirqd from the IPI.
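
The scheduling decision this patch introduces can be reduced to a pure function in a userspace sketch (not kernel code; enum and function names are hypothetical): the local queue is still serviced via the softirq, and only a remote queue with threads enabled takes the thread-wakeup path instead of the IPI.

```c
#include <assert.h>

/* Userspace sketch, NOT kernel code: the dispatch decision made by
 * ____napi_schedule()/napi_schedule_rps() after this patch, as a
 * pure function. */
enum dispatch {
	VIA_LOCAL_SOFTIRQ,	/* raise NET_RX_SOFTIRQ on this CPU */
	VIA_IPI,		/* legacy: queue on rps_ipi_list, send IPI */
	VIA_BACKLOG_THREAD,	/* wake the remote CPU's backlog thread */
};

static enum dispatch backlog_dispatch(int use_threads, int remote_cpu)
{
	/* Local queue: never hand off to the thread, which is what
	 * preserves loopback throughput. */
	if (!remote_cpu)
		return VIA_LOCAL_SOFTIRQ;

	/* Remote queue: with threads enabled the wakeup replaces the
	 * IPI, so no softirq is raised from hardirq context. */
	return use_threads ? VIA_BACKLOG_THREAD : VIA_IPI;
}
```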

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 130 +++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 104 insertions(+), 26 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 83475b8b3e9d..678848e116d2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -78,6 +78,7 @@
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
+#include <linux/smpboot.h>
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
 #include <linux/string.h>
@@ -217,6 +218,31 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)
 	return &net->dev_index_head[ifindex & (NETDEV_HASHENTRIES - 1)];
 }
 
+#ifndef CONFIG_PREEMPT_RT
+
+static DEFINE_STATIC_KEY_FALSE(use_backlog_threads_key);
+
+static int __init setup_backlog_napi_threads(char *arg)
+{
+	static_branch_enable(&use_backlog_threads_key);
+	return 0;
+}
+early_param("thread_backlog_napi", setup_backlog_napi_threads);
+
+static bool use_backlog_threads(void)
+{
+	return static_branch_unlikely(&use_backlog_threads_key);
+}
+
+#else
+
+static bool use_backlog_threads(void)
+{
+	return true;
+}
+
+#endif
+
 static inline void rps_lock_irqsave(struct softnet_data *sd,
 				    unsigned long *flags)
 {
@@ -4415,6 +4441,7 @@ EXPORT_SYMBOL(__dev_direct_xmit);
 /*************************************************************************
  *			Receiver routines
  *************************************************************************/
+static DEFINE_PER_CPU(struct task_struct *, backlog_napi);
 
 int netdev_max_backlog __read_mostly = 1000;
 EXPORT_SYMBOL(netdev_max_backlog);
@@ -4447,12 +4474,16 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 		 */
 		thread = READ_ONCE(napi->thread);
 		if (thread) {
+			if (use_backlog_threads() && thread == raw_cpu_read(backlog_napi))
+				goto use_local_napi;
+
 			set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 			wake_up_process(thread);
 			return;
 		}
 	}
 
+use_local_napi:
 	list_add_tail(&napi->poll_list, &sd->poll_list);
 	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 }
@@ -4695,6 +4726,11 @@ static void napi_schedule_rps(struct softnet_data *sd)
 
 #ifdef CONFIG_RPS
 	if (sd != mysd) {
+		if (use_backlog_threads()) {
+			__napi_schedule_irqoff(&sd->backlog);
+			return;
+		}
+
 		sd->rps_ipi_next = mysd->rps_ipi_list;
 		mysd->rps_ipi_list = sd;
 
@@ -5979,7 +6015,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 #ifdef CONFIG_RPS
 	struct softnet_data *remsd = sd->rps_ipi_list;
 
-	if (remsd) {
+	if (!use_backlog_threads() && remsd) {
 		sd->rps_ipi_list = NULL;
 
 		local_irq_enable();
@@ -5994,7 +6030,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
 {
 #ifdef CONFIG_RPS
-	return sd->rps_ipi_list != NULL;
+	return !use_backlog_threads() && sd->rps_ipi_list;
 #else
 	return false;
 #endif
@@ -6038,7 +6074,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
 			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
-			napi->state = 0;
+			napi->state &= NAPIF_STATE_THREADED;
 			again = false;
 		} else {
 			skb_queue_splice_tail_init(&sd->input_pkt_queue,
@@ -6688,32 +6724,37 @@ static int napi_thread_wait(struct napi_struct *napi)
 	return -1;
 }
 
-static int napi_threaded_poll(void *data)
+static void napi_threaded_poll_loop(struct napi_struct *napi)
 {
-	struct napi_struct *napi = data;
-	void *have;
-
-	while (!napi_thread_wait(napi)) {
-		unsigned long last_qs = jiffies;
+	unsigned long last_qs = jiffies;
 
-		for (;;) {
-			bool repoll = false;
+	for (;;) {
+		bool repoll = false;
+		void *have;
 
-			local_bh_disable();
+		local_bh_disable();
 
-			have = netpoll_poll_lock(napi);
-			__napi_poll(napi, &repoll);
-			netpoll_poll_unlock(have);
+		have = netpoll_poll_lock(napi);
+		__napi_poll(napi, &repoll);
+		netpoll_poll_unlock(have);
 
-			local_bh_enable();
+		local_bh_enable();
 
-			if (!repoll)
-				break;
+		if (!repoll)
+			break;
 
-			rcu_softirq_qs_periodic(last_qs);
-			cond_resched();
-		}
+		rcu_softirq_qs_periodic(last_qs);
+		cond_resched();
 	}
+}
+
+static int napi_threaded_poll(void *data)
+{
+	struct napi_struct *napi = data;
+
+	while (!napi_thread_wait(napi))
+		napi_threaded_poll_loop(napi);
+
 	return 0;
 }
 
@@ -11238,7 +11279,7 @@ static int dev_cpu_dead(unsigned int oldcpu)
 
 		list_del_init(&napi->poll_list);
 		if (napi->poll == process_backlog)
-			napi->state = 0;
+			napi->state &= NAPIF_STATE_THREADED;
 		else
 			____napi_schedule(sd, napi);
 	}
@@ -11246,12 +11287,14 @@ static int dev_cpu_dead(unsigned int oldcpu)
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_enable();
 
+	if (!use_backlog_threads()) {
 #ifdef CONFIG_RPS
-	remsd = oldsd->rps_ipi_list;
-	oldsd->rps_ipi_list = NULL;
+		remsd = oldsd->rps_ipi_list;
+		oldsd->rps_ipi_list = NULL;
 #endif
-	/* send out pending IPI's on offline CPU */
-	net_rps_send_ipi(remsd);
+		/* send out pending IPI's on offline CPU */
+		net_rps_send_ipi(remsd);
+	}
 
 	/* Process offline CPU's input_pkt_queue */
 	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
@@ -11511,6 +11554,38 @@ static struct pernet_operations __net_initdata default_device_ops = {
  *
  */
 
+static int backlog_napi_should_run(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	return test_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+}
+
+static void run_backlog_napi(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+
+	napi_threaded_poll_loop(&sd->backlog);
+}
+
+static void backlog_napi_setup(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	napi->thread = this_cpu_read(backlog_napi);
+	set_bit(NAPI_STATE_THREADED, &napi->state);
+}
+
+static struct smp_hotplug_thread backlog_threads = {
+	.store			= &backlog_napi,
+	.thread_should_run	= backlog_napi_should_run,
+	.thread_fn		= run_backlog_napi,
+	.thread_comm		= "backlog_napi/%u",
+	.setup			= backlog_napi_setup,
+};
+
 /*
  *       This is called single threaded during boot, so no need
  *       to take the rtnl semaphore.
@@ -11561,7 +11636,10 @@ static int __init net_dev_init(void)
 		init_gro_hash(&sd->backlog);
 		sd->backlog.poll = process_backlog;
 		sd->backlog.weight = weight_p;
+		INIT_LIST_HEAD(&sd->backlog.poll_list);
 	}
+	if (use_backlog_threads())
+		smpboot_register_percpu_thread(&backlog_threads);
 
 	dev_boot_phase = 0;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 6.1 0/3] net: Backlog NAPI threading for PREEMPT_RT
  2026-01-18 16:15 [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
                   ` (2 preceding siblings ...)
  2026-01-18 16:15 ` [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI wen.yang
@ 2026-01-18 16:17 ` wen.yang
  2026-01-18 16:28 ` wen.yang
  2026-01-18 17:02 ` [PATCH 6.6 " Wen Yang
  5 siblings, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:17 UTC (permalink / raw)
  To: Greg Kroah-Hartman; +Cc: stable, linux-kernel, Wen Yang

From: Wen Yang <wen.yang@linux.dev>

Backport three upstream commits to fix a warning on PREEMPT_RT kernels
where raising SOFTIRQ from smp_call_function triggers WARN_ON_ONCE()
in do_softirq_post_smp_call_flush().

The issue occurs when RPS sends IPIs for backlog NAPI, causing softirqs
from irq context on PREEMPT_RT. The solution implements backlog
NAPI threads to avoid IPI-triggered softirqs, which is required for
PREEMPT_RT kernels.

commit 8fcb76b934da ("net: napi_schedule_rps() cleanup") and 
commit 56364c910691 ("net: Remove conditional threaded-NAPI wakeup based on task state.")
are prerequisites.

The remaining dependencies have not been backported: they modify
structure definitions in header files and are optimizations rather
than bug fixes. These include:
c59647c0dc67 net: add softnet_data.in_net_rx_action
a1aaee7f8f79 net: make napi_threaded_poll() aware of sd->defer_list
87eff2ec57b6 net: optimize napi_threaded_poll() vs RPS/RFS
2b0cfa6e4956 net: add generic percpu page_pool allocator
...

Eric Dumazet (1):
  net: napi_schedule_rps() cleanup

Sebastian Andrzej Siewior (2):
  net: Remove conditional threaded-NAPI wakeup based on task state.
  net: Allow to use SMP threads for backlog NAPI.

 net/core/dev.c | 162 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 118 insertions(+), 44 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 6.1 1/3] net: napi_schedule_rps() cleanup
  2026-01-18 16:15 ` [PATCH 6.6 1/3] net: napi_schedule_rps() cleanup wen.yang
@ 2026-01-18 16:17   ` wen.yang
  2026-01-18 16:28   ` wen.yang
  1 sibling, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:17 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Eric Dumazet, Jason Xing, Paolo Abeni,
	Wen Yang

From: Eric Dumazet <edumazet@google.com>

commit 8fcb76b934daff12cde76adeab3d502eeb0734b1 upstream.

napi_schedule_rps() return value is ignored, remove it.

Change the comment to clarify the intent.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 114fc8bc37f8..e35f41e75bdd 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4684,11 +4684,18 @@ static void trigger_rx_softirq(void *data)
 }
 
 /*
- * Check if this softnet_data structure is another cpu one
- * If yes, queue it to our IPI list and return 1
- * If no, return 0
+ * After we queued a packet into sd->input_pkt_queue,
+ * we need to make sure this queue is serviced soon.
+ *
+ * - If this is another cpu queue, link it to our rps_ipi_list,
+ *   and make sure we will process rps_ipi_list from net_rx_action().
+ *   As we do not know yet if we are called from net_rx_action(),
+ *   we have to raise NET_RX_SOFTIRQ. This might change in the future.
+ *
+ * - If this is our own queue, NAPI schedule our backlog.
+ *   Note that this also raises NET_RX_SOFTIRQ.
  */
-static int napi_schedule_rps(struct softnet_data *sd)
+static void napi_schedule_rps(struct softnet_data *sd)
 {
 	struct softnet_data *mysd = this_cpu_ptr(&softnet_data);
 
@@ -4698,11 +4705,10 @@ static int napi_schedule_rps(struct softnet_data *sd)
 		mysd->rps_ipi_list = sd;
 
 		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
-		return 1;
+		return;
 	}
 #endif /* CONFIG_RPS */
 	__napi_schedule_irqoff(&mysd->backlog);
-	return 0;
 }
 
 #ifdef CONFIG_NET_FLOW_LIMIT
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 6.1 2/3] net: Remove conditional threaded-NAPI wakeup based on task state.
  2026-01-18 16:15 ` [PATCH 6.6 2/3] net: Remove conditional threaded-NAPI wakeup based on task state wen.yang
@ 2026-01-18 16:17   ` wen.yang
  2026-01-18 16:28   ` wen.yang
  1 sibling, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:17 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Sebastian Andrzej Siewior, Jakub Kicinski,
	Paolo Abeni, Wen Yang

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

commit 56364c910691f6d10ba88c964c9041b9ab777bd6 upstream.

A NAPI thread is scheduled by first setting the NAPI_STATE_SCHED bit. If
successful (the bit was not yet set), NAPI_STATE_SCHED_THREADED is also
set, but only if the thread's state is not TASK_INTERRUPTIBLE (i.e. it is
TASK_RUNNING), followed by a task wakeup.

If the task is idle (TASK_INTERRUPTIBLE) then the
NAPI_STATE_SCHED_THREADED bit is not set. The thread is not relying on
the bit but always leaves the wait-loop after returning from schedule()
because there must have been a wakeup.

The smpboot-threads implementation for per-CPU threads requires an
explicit condition and does not support "if we get out of schedule()
then there must be something to do".

Removing this optimisation simplifies the following integration.

Set NAPI_STATE_SCHED_THREADED unconditionally on wakeup and rely on it
in the wait path by removing the `woken' condition.

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index e35f41e75bdd..83475b8b3e9d 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4447,13 +4447,7 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 		 */
 		thread = READ_ONCE(napi->thread);
 		if (thread) {
-			/* Avoid doing set_bit() if the thread is in
-			 * INTERRUPTIBLE state, cause napi_thread_wait()
-			 * makes sure to proceed with napi polling
-			 * if the thread is explicitly woken from here.
-			 */
-			if (READ_ONCE(thread->__state) != TASK_INTERRUPTIBLE)
-				set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+			set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 			wake_up_process(thread);
 			return;
 		}
@@ -6672,8 +6666,6 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 
 static int napi_thread_wait(struct napi_struct *napi)
 {
-	bool woken = false;
-
 	set_current_state(TASK_INTERRUPTIBLE);
 
 	while (!kthread_should_stop()) {
@@ -6682,15 +6674,13 @@ static int napi_thread_wait(struct napi_struct *napi)
 		 * Testing SCHED bit is not enough because SCHED bit might be
 		 * set by some other busy poll thread or by napi_disable().
 		 */
-		if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
+		if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state)) {
 			WARN_ON(!list_empty(&napi->poll_list));
 			__set_current_state(TASK_RUNNING);
 			return 0;
 		}
 
 		schedule();
-		/* woken being true indicates this thread owns this napi. */
-		woken = true;
 		set_current_state(TASK_INTERRUPTIBLE);
 	}
 	__set_current_state(TASK_RUNNING);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 6.1 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-18 16:15 ` [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI wen.yang
@ 2026-01-18 16:17   ` wen.yang
  2026-01-18 16:28   ` wen.yang
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:17 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Sebastian Andrzej Siewior, Jakub Kicinski,
	Paolo Abeni, Wen Yang

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.

Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
used by drivers which don't do NAPI themselves, by RPS, and by parts of
the stack which need to avoid recursive deadlocks while processing a packet.

Non-NAPI drivers use the CPU-local backlog NAPI. If RPS is enabled
then a flow for the skb is computed and based on the flow the skb can be
enqueued on a remote CPU. Scheduling/ raising the softirq (for backlog's
NAPI) on the remote CPU isn't trivial because the softirq is only
scheduled on the local CPU and performed after the hardirq is done.
In order to schedule a softirq on the remote CPU, an IPI is sent to the
remote CPU which schedules the backlog-NAPI on the then local CPU.

On PREEMPT_RT interrupts are force-threaded. The soft interrupts are
raised within the interrupt thread and processed after the interrupt
handler has completed, still within the context of the interrupt thread.
The softirq is handled in the context where it originated.

With force-threaded interrupts enabled, ksoftirqd is woken up if a
softirq is raised from hardirq context. This is the case if it is raised
from an IPI. Additionally there is a warning on PREEMPT_RT if the
softirq is raised from the idle thread.
This was done for two reasons:
- With threaded interrupts the processing should happen in thread
  context (where it originated) and ksoftirqd is the only thread for
  this context if raised from hardirq. Using the currently running task
  instead would "punish" a random task.
- Once ksoftirqd is active it consumes all further softirqs until it
  stops running. This changed recently and is no longer the case.

Instead of keeping the backlog NAPI in ksoftirqd (in force-threaded/
PREEMPT_RT setups) I am proposing NAPI-threads for backlog.
The "proper" setup with threaded-NAPI is not doable because the threads
are not pinned to an individual CPU and can be modified by the user.
Additionally a dummy network device would have to be assigned. Also
CPU-hotplug has to be considered if additional CPUs show up.
All this can be probably done/ solved but the smpboot-threads already
provide this infrastructure.

Sending UDP packets over loopback expects that the packet is processed
within the call. Delaying it by handing it over to the thread hurts
performance. It is not beneficial to the outcome if the context switch
happens immediately after enqueue or after a while to process a few
packets in a batch.
There is no need to always use the thread if the backlog NAPI is
requested on the local CPU. This restores the loopback throughput. The
performance drops mostly to the same value after enabling RPS on the
loopback, comparing the IPI and the thread results.

Create NAPI-threads for backlog if requested during boot. The thread runs
the inner loop from napi_threaded_poll(), the wait part is different. It
checks for NAPI_STATE_SCHED (the backlog NAPI can not be disabled).

The NAPI threads for backlog are optional; they have to be enabled via the
boot argument "thread_backlog_napi". It is mandatory for PREEMPT_RT to avoid the
wakeup of ksoftirqd from the IPI.

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 130 +++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 104 insertions(+), 26 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 83475b8b3e9d..678848e116d2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -78,6 +78,7 @@
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
+#include <linux/smpboot.h>
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
 #include <linux/string.h>
@@ -217,6 +218,31 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)
 	return &net->dev_index_head[ifindex & (NETDEV_HASHENTRIES - 1)];
 }
 
+#ifndef CONFIG_PREEMPT_RT
+
+static DEFINE_STATIC_KEY_FALSE(use_backlog_threads_key);
+
+static int __init setup_backlog_napi_threads(char *arg)
+{
+	static_branch_enable(&use_backlog_threads_key);
+	return 0;
+}
+early_param("thread_backlog_napi", setup_backlog_napi_threads);
+
+static bool use_backlog_threads(void)
+{
+	return static_branch_unlikely(&use_backlog_threads_key);
+}
+
+#else
+
+static bool use_backlog_threads(void)
+{
+	return true;
+}
+
+#endif
+
 static inline void rps_lock_irqsave(struct softnet_data *sd,
 				    unsigned long *flags)
 {
@@ -4415,6 +4441,7 @@ EXPORT_SYMBOL(__dev_direct_xmit);
 /*************************************************************************
  *			Receiver routines
  *************************************************************************/
+static DEFINE_PER_CPU(struct task_struct *, backlog_napi);
 
 int netdev_max_backlog __read_mostly = 1000;
 EXPORT_SYMBOL(netdev_max_backlog);
@@ -4447,12 +4474,16 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 		 */
 		thread = READ_ONCE(napi->thread);
 		if (thread) {
+			if (use_backlog_threads() && thread == raw_cpu_read(backlog_napi))
+				goto use_local_napi;
+
 			set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 			wake_up_process(thread);
 			return;
 		}
 	}
 
+use_local_napi:
 	list_add_tail(&napi->poll_list, &sd->poll_list);
 	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 }
@@ -4695,6 +4726,11 @@ static void napi_schedule_rps(struct softnet_data *sd)
 
 #ifdef CONFIG_RPS
 	if (sd != mysd) {
+		if (use_backlog_threads()) {
+			__napi_schedule_irqoff(&sd->backlog);
+			return;
+		}
+
 		sd->rps_ipi_next = mysd->rps_ipi_list;
 		mysd->rps_ipi_list = sd;
 
@@ -5979,7 +6015,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 #ifdef CONFIG_RPS
 	struct softnet_data *remsd = sd->rps_ipi_list;
 
-	if (remsd) {
+	if (!use_backlog_threads() && remsd) {
 		sd->rps_ipi_list = NULL;
 
 		local_irq_enable();
@@ -5994,7 +6030,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
 {
 #ifdef CONFIG_RPS
-	return sd->rps_ipi_list != NULL;
+	return !use_backlog_threads() && sd->rps_ipi_list;
 #else
 	return false;
 #endif
@@ -6038,7 +6074,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
 			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
-			napi->state = 0;
+			napi->state &= NAPIF_STATE_THREADED;
 			again = false;
 		} else {
 			skb_queue_splice_tail_init(&sd->input_pkt_queue,
@@ -6688,32 +6724,37 @@ static int napi_thread_wait(struct napi_struct *napi)
 	return -1;
 }
 
-static int napi_threaded_poll(void *data)
+static void napi_threaded_poll_loop(struct napi_struct *napi)
 {
-	struct napi_struct *napi = data;
-	void *have;
-
-	while (!napi_thread_wait(napi)) {
-		unsigned long last_qs = jiffies;
+	unsigned long last_qs = jiffies;
 
-		for (;;) {
-			bool repoll = false;
+	for (;;) {
+		bool repoll = false;
+		void *have;
 
-			local_bh_disable();
+		local_bh_disable();
 
-			have = netpoll_poll_lock(napi);
-			__napi_poll(napi, &repoll);
-			netpoll_poll_unlock(have);
+		have = netpoll_poll_lock(napi);
+		__napi_poll(napi, &repoll);
+		netpoll_poll_unlock(have);
 
-			local_bh_enable();
+		local_bh_enable();
 
-			if (!repoll)
-				break;
+		if (!repoll)
+			break;
 
-			rcu_softirq_qs_periodic(last_qs);
-			cond_resched();
-		}
+		rcu_softirq_qs_periodic(last_qs);
+		cond_resched();
 	}
+}
+
+static int napi_threaded_poll(void *data)
+{
+	struct napi_struct *napi = data;
+
+	while (!napi_thread_wait(napi))
+		napi_threaded_poll_loop(napi);
+
 	return 0;
 }
 
@@ -11238,7 +11279,7 @@ static int dev_cpu_dead(unsigned int oldcpu)
 
 		list_del_init(&napi->poll_list);
 		if (napi->poll == process_backlog)
-			napi->state = 0;
+			napi->state &= NAPIF_STATE_THREADED;
 		else
 			____napi_schedule(sd, napi);
 	}
@@ -11246,12 +11287,14 @@ static int dev_cpu_dead(unsigned int oldcpu)
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_enable();
 
+	if (!use_backlog_threads()) {
 #ifdef CONFIG_RPS
-	remsd = oldsd->rps_ipi_list;
-	oldsd->rps_ipi_list = NULL;
+		remsd = oldsd->rps_ipi_list;
+		oldsd->rps_ipi_list = NULL;
 #endif
-	/* send out pending IPI's on offline CPU */
-	net_rps_send_ipi(remsd);
+		/* send out pending IPI's on offline CPU */
+		net_rps_send_ipi(remsd);
+	}
 
 	/* Process offline CPU's input_pkt_queue */
 	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
@@ -11511,6 +11554,38 @@ static struct pernet_operations __net_initdata default_device_ops = {
  *
  */
 
+static int backlog_napi_should_run(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	return test_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+}
+
+static void run_backlog_napi(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+
+	napi_threaded_poll_loop(&sd->backlog);
+}
+
+static void backlog_napi_setup(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	napi->thread = this_cpu_read(backlog_napi);
+	set_bit(NAPI_STATE_THREADED, &napi->state);
+}
+
+static struct smp_hotplug_thread backlog_threads = {
+	.store			= &backlog_napi,
+	.thread_should_run	= backlog_napi_should_run,
+	.thread_fn		= run_backlog_napi,
+	.thread_comm		= "backlog_napi/%u",
+	.setup			= backlog_napi_setup,
+};
+
 /*
  *       This is called single threaded during boot, so no need
  *       to take the rtnl semaphore.
@@ -11561,7 +11636,10 @@ static int __init net_dev_init(void)
 		init_gro_hash(&sd->backlog);
 		sd->backlog.poll = process_backlog;
 		sd->backlog.weight = weight_p;
+		INIT_LIST_HEAD(&sd->backlog.poll_list);
 	}
+	if (use_backlog_threads())
+		smpboot_register_percpu_thread(&backlog_threads);
 
 	dev_boot_phase = 0;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread
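[Editorial illustration, not part of the patch email: the scheduling decision the 3/3 patch adds to ____napi_schedule() can be modelled in a few lines. This is a toy Python model with illustrative names only (schedule_napi and its parameters are not kernel APIs): a threaded NAPI is normally handed to its kthread via a wakeup, but if the thread is this CPU's own backlog_napi thread, the NAPI is queued locally and NET_RX_SOFTIRQ is raised instead.]

```python
# Toy model (NOT kernel code) of the decision added to ____napi_schedule().
# All names are illustrative assumptions, not kernel APIs.

def schedule_napi(napi_thread, local_backlog_thread, use_backlog_threads):
    """Return which path a NAPI schedule request takes."""
    if napi_thread is not None:
        if use_backlog_threads and napi_thread is local_backlog_thread:
            # corresponds to the `goto use_local_napi` in the patch
            return "local_softirq"
        # set NAPI_STATE_SCHED_THREADED and wake_up_process() in the patch
        return "wake_thread"
    # classic non-threaded path: poll_list + NET_RX_SOFTIRQ
    return "local_softirq"

# The CPU-local backlog thread is polled in place, not woken like a
# regular threaded NAPI; only a foreign thread gets a wakeup.
local = object()
remote = object()
assert schedule_napi(local, local, True) == "local_softirq"
assert schedule_napi(remote, local, True) == "wake_thread"
assert schedule_napi(None, local, True) == "local_softirq"
```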

* [PATCH 6.1 0/3] net: Backlog NAPI threading for PREEMPT_RT
  2026-01-18 16:15 [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
                   ` (3 preceding siblings ...)
  2026-01-18 16:17 ` [PATCH 6.1 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
@ 2026-01-18 16:28 ` wen.yang
  2026-01-18 17:02 ` [PATCH 6.6 " Wen Yang
  5 siblings, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:28 UTC (permalink / raw)
  To: Greg Kroah-Hartman; +Cc: stable, linux-kernel, Wen Yang

From: Wen Yang <wen.yang@linux.dev>

Backport three upstream commits to fix a warning on PREEMPT_RT kernels
where raising SOFTIRQ from smp_call_function() triggers WARN_ON_ONCE()
in do_softirq_post_smp_call_flush().

The issue occurs when RPS sends IPIs for backlog NAPI, causing softirqs
from irq context on PREEMPT_RT. The solution implements backlog
NAPI threads to avoid IPI-triggered softirqs, which is required for
PREEMPT_RT kernels.

commit 8fcb76b934da ("net: napi_schedule_rps() cleanup") and 
commit 56364c910691 ("net: Remove conditional threaded-NAPI wakeup based on task state.")
are prerequisites.

The remaining dependencies have not been backported, as they modify
structure definitions in header files and represent optimizations
rather than bug fixes, including:
c59647c0dc67 net: add softnet_data.in_net_rx_action
a1aaee7f8f79 net: make napi_threaded_poll() aware of sd->defer_list
87eff2ec57b6 net: optimize napi_threaded_poll() vs RPS/RFS
2b0cfa6e4956 net: add generic percpu page_pool allocator
...

Eric Dumazet (1):
  net: napi_schedule_rps() cleanup

Sebastian Andrzej Siewior (2):
  net: Remove conditional threaded-NAPI wakeup based on task state.
  net: Allow to use SMP threads for backlog NAPI.

 net/core/dev.c | 162 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 118 insertions(+), 44 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 6.1 1/3] net: napi_schedule_rps() cleanup
  2026-01-18 16:15 ` [PATCH 6.6 1/3] net: napi_schedule_rps() cleanup wen.yang
  2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
@ 2026-01-18 16:28   ` wen.yang
  1 sibling, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:28 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Eric Dumazet, Jason Xing, Paolo Abeni,
	Wen Yang

From: Eric Dumazet <edumazet@google.com>

commit 8fcb76b934daff12cde76adeab3d502eeb0734b1 upstream.

napi_schedule_rps() return value is ignored, remove it.

Change the comment to clarify the intent.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 114fc8bc37f8..e35f41e75bdd 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4684,11 +4684,18 @@ static void trigger_rx_softirq(void *data)
 }
 
 /*
- * Check if this softnet_data structure is another cpu one
- * If yes, queue it to our IPI list and return 1
- * If no, return 0
+ * After we queued a packet into sd->input_pkt_queue,
+ * we need to make sure this queue is serviced soon.
+ *
+ * - If this is another cpu queue, link it to our rps_ipi_list,
+ *   and make sure we will process rps_ipi_list from net_rx_action().
+ *   As we do not know yet if we are called from net_rx_action(),
+ *   we have to raise NET_RX_SOFTIRQ. This might change in the future.
+ *
+ * - If this is our own queue, NAPI schedule our backlog.
+ *   Note that this also raises NET_RX_SOFTIRQ.
  */
-static int napi_schedule_rps(struct softnet_data *sd)
+static void napi_schedule_rps(struct softnet_data *sd)
 {
 	struct softnet_data *mysd = this_cpu_ptr(&softnet_data);
 
@@ -4698,11 +4705,10 @@ static int napi_schedule_rps(struct softnet_data *sd)
 		mysd->rps_ipi_list = sd;
 
 		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
-		return 1;
+		return;
 	}
 #endif /* CONFIG_RPS */
 	__napi_schedule_irqoff(&mysd->backlog);
-	return 0;
 }
 
 #ifdef CONFIG_NET_FLOW_LIMIT
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread
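[Editorial illustration, not part of the patch email: the behaviour of napi_schedule_rps() after this cleanup can be sketched as a toy Python model. Names and the dict-based softnet_data stand-in are illustrative assumptions, not kernel APIs: a remote CPU's queue is linked into the local rps_ipi_list and NET_RX_SOFTIRQ is raised; the local queue schedules the backlog NAPI, which also raises NET_RX_SOFTIRQ. The function no longer returns a value.]

```python
# Toy model (NOT kernel code) of napi_schedule_rps() after the cleanup.
# softnet_data is modelled as a dict; all names are illustrative.

def napi_schedule_rps(sd, mysd):
    actions = []
    if sd is not mysd:
        # another CPU's queue: defer the IPI via rps_ipi_list and make
        # sure net_rx_action() will run to send it
        mysd["rps_ipi_list"].append(sd)
        actions.append("raise NET_RX_SOFTIRQ")
        return actions
    # our own queue: NAPI-schedule the local backlog (also raises softirq)
    actions.append("schedule local backlog NAPI")
    return actions

mysd = {"rps_ipi_list": []}
remote_sd = {"rps_ipi_list": []}
assert napi_schedule_rps(remote_sd, mysd) == ["raise NET_RX_SOFTIRQ"]
assert mysd["rps_ipi_list"] == [remote_sd]
assert napi_schedule_rps(mysd, mysd) == ["schedule local backlog NAPI"]
```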

* [PATCH 6.1 2/3] net: Remove conditional threaded-NAPI wakeup based on task state.
  2026-01-18 16:15 ` [PATCH 6.6 2/3] net: Remove conditional threaded-NAPI wakeup based on task state wen.yang
  2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
@ 2026-01-18 16:28   ` wen.yang
  1 sibling, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:28 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Sebastian Andrzej Siewior, Jakub Kicinski,
	Paolo Abeni, Wen Yang

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

commit 56364c910691f6d10ba88c964c9041b9ab777bd6 upstream.

A NAPI thread is scheduled by first setting the NAPI_STATE_SCHED bit. If
successful (the bit was not yet set), the NAPI_STATE_SCHED_THREADED bit
is set as well, but only if the thread's state is not TASK_INTERRUPTIBLE
(i.e. it is TASK_RUNNING), followed by a task wakeup.

If the task is idle (TASK_INTERRUPTIBLE) then the
NAPI_STATE_SCHED_THREADED bit is not set. The thread does not rely on
the bit but always leaves the wait-loop after returning from schedule(),
because there must have been a wakeup.

The smpboot-threads implementation for per-CPU threads requires an
explicit condition and does not support "if we get out of schedule()
then there must be something to do".

Removing this optimisation simplifies the following integration.

Set NAPI_STATE_SCHED_THREADED unconditionally on wakeup and rely on it
in the wait path by removing the `woken' condition.

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index e35f41e75bdd..83475b8b3e9d 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4447,13 +4447,7 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 		 */
 		thread = READ_ONCE(napi->thread);
 		if (thread) {
-			/* Avoid doing set_bit() if the thread is in
-			 * INTERRUPTIBLE state, cause napi_thread_wait()
-			 * makes sure to proceed with napi polling
-			 * if the thread is explicitly woken from here.
-			 */
-			if (READ_ONCE(thread->__state) != TASK_INTERRUPTIBLE)
-				set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+			set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 			wake_up_process(thread);
 			return;
 		}
@@ -6672,8 +6666,6 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
 
 static int napi_thread_wait(struct napi_struct *napi)
 {
-	bool woken = false;
-
 	set_current_state(TASK_INTERRUPTIBLE);
 
 	while (!kthread_should_stop()) {
@@ -6682,15 +6674,13 @@ static int napi_thread_wait(struct napi_struct *napi)
 		 * Testing SCHED bit is not enough because SCHED bit might be
 		 * set by some other busy poll thread or by napi_disable().
 		 */
-		if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
+		if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state)) {
 			WARN_ON(!list_empty(&napi->poll_list));
 			__set_current_state(TASK_RUNNING);
 			return 0;
 		}
 
 		schedule();
-		/* woken being true indicates this thread owns this napi. */
-		woken = true;
 		set_current_state(TASK_INTERRUPTIBLE);
 	}
 	__set_current_state(TASK_RUNNING);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread
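[Editorial illustration, not part of the patch email: the point of this change is that the smpboot per-CPU thread API needs an explicit should-run condition, so the wait loop must rely on the NAPI_STATE_SCHED_THREADED bit alone. A minimal toy model, with illustrative names only (not kernel APIs):]

```python
# Toy model (NOT kernel code) of the simplified napi_thread_wait()
# condition: after this patch the thread leaves the wait loop only when
# the NAPI_STATE_SCHED_THREADED bit is observed; the `woken` shortcut
# that let a bare wakeup terminate the loop is gone.

NAPI_STATE_SCHED_THREADED = 1 << 0  # illustrative bit position

def thread_should_poll(napi_state):
    """Explicit condition, as required by the smpboot-threads API."""
    return bool(napi_state & NAPI_STATE_SCHED_THREADED)

assert thread_should_poll(NAPI_STATE_SCHED_THREADED) is True
assert thread_should_poll(0) is False  # a wakeup alone no longer suffices
```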

* [PATCH 6.1 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-18 16:15 ` [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI wen.yang
  2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
@ 2026-01-18 16:28   ` wen.yang
  2026-01-18 16:31   ` wen.yang
  2026-01-19 16:25   ` [PATCH 6.6 " Jakub Kicinski
  3 siblings, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:28 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Sebastian Andrzej Siewior, Jakub Kicinski,
	Paolo Abeni, Wen Yang

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.

Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
used by drivers which don't do NAPI themselves, by RPS, and by parts of
the stack which need to avoid recursive deadlocks while processing a
packet.

Non-NAPI drivers use the CPU-local backlog NAPI. If RPS is enabled
then a flow for the skb is computed and based on the flow the skb can be
enqueued on a remote CPU. Scheduling/ raising the softirq (for backlog's
NAPI) on the remote CPU isn't trivial because the softirq is only
scheduled on the local CPU and performed after the hardirq is done.
In order to schedule a softirq on the remote CPU, an IPI is sent to the
remote CPU which schedules the backlog-NAPI on the then local CPU.

On PREEMPT_RT interrupts are force-threaded. The soft interrupts are
raised within the interrupt thread and processed after the interrupt
handler completed still within the context of the interrupt thread. The
softirq is handled in the context where it originated.

With force-threaded interrupts enabled, ksoftirqd is woken up if a
softirq is raised from hardirq context. This is the case if it is raised
from an IPI. Additionally there is a warning on PREEMPT_RT if the
softirq is raised from the idle thread.
This was done for two reasons:
- With threaded interrupts the processing should happen in thread
  context (where it originated) and ksoftirqd is the only thread for
  this context if raised from hardirq. Using the currently running task
  instead would "punish" a random task.
- Once ksoftirqd is active it consumes all further softirqs until it
  stops running. This changed recently and is no longer the case.

Instead of keeping the backlog NAPI in ksoftirqd (in force-threaded/
PREEMPT_RT setups) I am proposing NAPI-threads for backlog.
The "proper" setup with threaded-NAPI is not doable because the threads
are not pinned to an individual CPU and can be modified by the user.
Additionally a dummy network device would have to be assigned. Also
CPU-hotplug has to be considered if additional CPUs show up.
All this can probably be done/solved, but the smpboot-threads already
provide this infrastructure.

Sending UDP packets over loopback expects that the packet is processed
within the call. Delaying it by handing it over to the thread hurts
performance. It is not beneficial to the outcome if the context switch
happens immediately after enqueue or after a while to process a few
packets in a batch.
There is no need to always use the thread if the backlog NAPI is
requested on the local CPU. This restores the loopback throughput. The
performance drops mostly to the same value after enabling RPS on the
loopback, comparing the IPI and the thread results.

Create NAPI-threads for backlog if requested during boot. The thread runs
the inner loop from napi_threaded_poll(), the wait part is different. It
checks for NAPI_STATE_SCHED (the backlog NAPI can not be disabled).

The NAPI threads for backlog are optional; they have to be enabled via the
boot argument "thread_backlog_napi". They are mandatory on PREEMPT_RT to
avoid the wakeup of ksoftirqd from the IPI.

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 130 +++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 104 insertions(+), 26 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 83475b8b3e9d..678848e116d2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -78,6 +78,7 @@
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
+#include <linux/smpboot.h>
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
 #include <linux/string.h>
@@ -217,6 +218,31 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)
 	return &net->dev_index_head[ifindex & (NETDEV_HASHENTRIES - 1)];
 }
 
+#ifndef CONFIG_PREEMPT_RT
+
+static DEFINE_STATIC_KEY_FALSE(use_backlog_threads_key);
+
+static int __init setup_backlog_napi_threads(char *arg)
+{
+	static_branch_enable(&use_backlog_threads_key);
+	return 0;
+}
+early_param("thread_backlog_napi", setup_backlog_napi_threads);
+
+static bool use_backlog_threads(void)
+{
+	return static_branch_unlikely(&use_backlog_threads_key);
+}
+
+#else
+
+static bool use_backlog_threads(void)
+{
+	return true;
+}
+
+#endif
+
 static inline void rps_lock_irqsave(struct softnet_data *sd,
 				    unsigned long *flags)
 {
@@ -4415,6 +4441,7 @@ EXPORT_SYMBOL(__dev_direct_xmit);
 /*************************************************************************
  *			Receiver routines
  *************************************************************************/
+static DEFINE_PER_CPU(struct task_struct *, backlog_napi);
 
 int netdev_max_backlog __read_mostly = 1000;
 EXPORT_SYMBOL(netdev_max_backlog);
@@ -4447,12 +4474,16 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 		 */
 		thread = READ_ONCE(napi->thread);
 		if (thread) {
+			if (use_backlog_threads() && thread == raw_cpu_read(backlog_napi))
+				goto use_local_napi;
+
 			set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 			wake_up_process(thread);
 			return;
 		}
 	}
 
+use_local_napi:
 	list_add_tail(&napi->poll_list, &sd->poll_list);
 	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 }
@@ -4695,6 +4726,11 @@ static void napi_schedule_rps(struct softnet_data *sd)
 
 #ifdef CONFIG_RPS
 	if (sd != mysd) {
+		if (use_backlog_threads()) {
+			__napi_schedule_irqoff(&sd->backlog);
+			return;
+		}
+
 		sd->rps_ipi_next = mysd->rps_ipi_list;
 		mysd->rps_ipi_list = sd;
 
@@ -5979,7 +6015,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 #ifdef CONFIG_RPS
 	struct softnet_data *remsd = sd->rps_ipi_list;
 
-	if (remsd) {
+	if (!use_backlog_threads() && remsd) {
 		sd->rps_ipi_list = NULL;
 
 		local_irq_enable();
@@ -5994,7 +6030,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
 {
 #ifdef CONFIG_RPS
-	return sd->rps_ipi_list != NULL;
+	return !use_backlog_threads() && sd->rps_ipi_list;
 #else
 	return false;
 #endif
@@ -6038,7 +6074,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
 			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
-			napi->state = 0;
+			napi->state &= NAPIF_STATE_THREADED;
 			again = false;
 		} else {
 			skb_queue_splice_tail_init(&sd->input_pkt_queue,
@@ -6688,32 +6724,37 @@ static int napi_thread_wait(struct napi_struct *napi)
 	return -1;
 }
 
-static int napi_threaded_poll(void *data)
+static void napi_threaded_poll_loop(struct napi_struct *napi)
 {
-	struct napi_struct *napi = data;
-	void *have;
-
-	while (!napi_thread_wait(napi)) {
-		unsigned long last_qs = jiffies;
+	unsigned long last_qs = jiffies;
 
-		for (;;) {
-			bool repoll = false;
+	for (;;) {
+		bool repoll = false;
+		void *have;
 
-			local_bh_disable();
+		local_bh_disable();
 
-			have = netpoll_poll_lock(napi);
-			__napi_poll(napi, &repoll);
-			netpoll_poll_unlock(have);
+		have = netpoll_poll_lock(napi);
+		__napi_poll(napi, &repoll);
+		netpoll_poll_unlock(have);
 
-			local_bh_enable();
+		local_bh_enable();
 
-			if (!repoll)
-				break;
+		if (!repoll)
+			break;
 
-			rcu_softirq_qs_periodic(last_qs);
-			cond_resched();
-		}
+		rcu_softirq_qs_periodic(last_qs);
+		cond_resched();
 	}
+}
+
+static int napi_threaded_poll(void *data)
+{
+	struct napi_struct *napi = data;
+
+	while (!napi_thread_wait(napi))
+		napi_threaded_poll_loop(napi);
+
 	return 0;
 }
 
@@ -11238,7 +11279,7 @@ static int dev_cpu_dead(unsigned int oldcpu)
 
 		list_del_init(&napi->poll_list);
 		if (napi->poll == process_backlog)
-			napi->state = 0;
+			napi->state &= NAPIF_STATE_THREADED;
 		else
 			____napi_schedule(sd, napi);
 	}
@@ -11246,12 +11287,14 @@ static int dev_cpu_dead(unsigned int oldcpu)
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_enable();
 
+	if (!use_backlog_threads()) {
 #ifdef CONFIG_RPS
-	remsd = oldsd->rps_ipi_list;
-	oldsd->rps_ipi_list = NULL;
+		remsd = oldsd->rps_ipi_list;
+		oldsd->rps_ipi_list = NULL;
 #endif
-	/* send out pending IPI's on offline CPU */
-	net_rps_send_ipi(remsd);
+		/* send out pending IPI's on offline CPU */
+		net_rps_send_ipi(remsd);
+	}
 
 	/* Process offline CPU's input_pkt_queue */
 	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
@@ -11511,6 +11554,38 @@ static struct pernet_operations __net_initdata default_device_ops = {
  *
  */
 
+static int backlog_napi_should_run(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	return test_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+}
+
+static void run_backlog_napi(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+
+	napi_threaded_poll_loop(&sd->backlog);
+}
+
+static void backlog_napi_setup(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	napi->thread = this_cpu_read(backlog_napi);
+	set_bit(NAPI_STATE_THREADED, &napi->state);
+}
+
+static struct smp_hotplug_thread backlog_threads = {
+	.store			= &backlog_napi,
+	.thread_should_run	= backlog_napi_should_run,
+	.thread_fn		= run_backlog_napi,
+	.thread_comm		= "backlog_napi/%u",
+	.setup			= backlog_napi_setup,
+};
+
 /*
  *       This is called single threaded during boot, so no need
  *       to take the rtnl semaphore.
@@ -11561,7 +11636,10 @@ static int __init net_dev_init(void)
 		init_gro_hash(&sd->backlog);
 		sd->backlog.poll = process_backlog;
 		sd->backlog.weight = weight_p;
+		INIT_LIST_HEAD(&sd->backlog.poll_list);
 	}
+	if (use_backlog_threads())
+		smpboot_register_percpu_thread(&backlog_threads);
 
 	dev_boot_phase = 0;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 6.1 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-18 16:15 ` [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI wen.yang
  2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
  2026-01-18 16:28   ` wen.yang
@ 2026-01-18 16:31   ` wen.yang
  2026-01-19 16:25   ` [PATCH 6.6 " Jakub Kicinski
  3 siblings, 0 replies; 22+ messages in thread
From: wen.yang @ 2026-01-18 16:31 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: stable, linux-kernel, Sebastian Andrzej Siewior, Jakub Kicinski,
	Paolo Abeni, Wen Yang

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.

Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
used by drivers which don't do NAPI themselves, by RPS, and by parts of
the stack which need to avoid recursive deadlocks while processing a
packet.

Non-NAPI drivers use the CPU-local backlog NAPI. If RPS is enabled
then a flow for the skb is computed and based on the flow the skb can be
enqueued on a remote CPU. Scheduling/ raising the softirq (for backlog's
NAPI) on the remote CPU isn't trivial because the softirq is only
scheduled on the local CPU and performed after the hardirq is done.
In order to schedule a softirq on the remote CPU, an IPI is sent to the
remote CPU which schedules the backlog-NAPI on the then local CPU.

On PREEMPT_RT interrupts are force-threaded. The soft interrupts are
raised within the interrupt thread and processed after the interrupt
handler completed still within the context of the interrupt thread. The
softirq is handled in the context where it originated.

With force-threaded interrupts enabled, ksoftirqd is woken up if a
softirq is raised from hardirq context. This is the case if it is raised
from an IPI. Additionally there is a warning on PREEMPT_RT if the
softirq is raised from the idle thread.
This was done for two reasons:
- With threaded interrupts the processing should happen in thread
  context (where it originated) and ksoftirqd is the only thread for
  this context if raised from hardirq. Using the currently running task
  instead would "punish" a random task.
- Once ksoftirqd is active it consumes all further softirqs until it
  stops running. This changed recently and is no longer the case.

Instead of keeping the backlog NAPI in ksoftirqd (in force-threaded/
PREEMPT_RT setups) I am proposing NAPI-threads for backlog.
The "proper" setup with threaded-NAPI is not doable because the threads
are not pinned to an individual CPU and can be modified by the user.
Additionally a dummy network device would have to be assigned. Also
CPU-hotplug has to be considered if additional CPUs show up.
All this can probably be done/solved, but the smpboot-threads already
provide this infrastructure.

Sending UDP packets over loopback expects that the packet is processed
within the call. Delaying it by handing it over to the thread hurts
performance. It is not beneficial to the outcome if the context switch
happens immediately after enqueue or after a while to process a few
packets in a batch.
There is no need to always use the thread if the backlog NAPI is
requested on the local CPU. This restores the loopback throughput. The
performance drops mostly to the same value after enabling RPS on the
loopback, comparing the IPI and the thread results.

Create NAPI-threads for backlog if requested during boot. The thread runs
the inner loop from napi_threaded_poll(), the wait part is different. It
checks for NAPI_STATE_SCHED (the backlog NAPI can not be disabled).

The NAPI threads for backlog are optional; they have to be enabled via the
boot argument "thread_backlog_napi". They are mandatory on PREEMPT_RT to
avoid the wakeup of ksoftirqd from the IPI.

Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Wen Yang <wen.yang@linux.dev>
---
 net/core/dev.c | 130 +++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 104 insertions(+), 26 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 83475b8b3e9d..678848e116d2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -78,6 +78,7 @@
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
+#include <linux/smpboot.h>
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
 #include <linux/string.h>
@@ -217,6 +218,31 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)
 	return &net->dev_index_head[ifindex & (NETDEV_HASHENTRIES - 1)];
 }
 
+#ifndef CONFIG_PREEMPT_RT
+
+static DEFINE_STATIC_KEY_FALSE(use_backlog_threads_key);
+
+static int __init setup_backlog_napi_threads(char *arg)
+{
+	static_branch_enable(&use_backlog_threads_key);
+	return 0;
+}
+early_param("thread_backlog_napi", setup_backlog_napi_threads);
+
+static bool use_backlog_threads(void)
+{
+	return static_branch_unlikely(&use_backlog_threads_key);
+}
+
+#else
+
+static bool use_backlog_threads(void)
+{
+	return true;
+}
+
+#endif
+
 static inline void rps_lock_irqsave(struct softnet_data *sd,
 				    unsigned long *flags)
 {
@@ -4415,6 +4441,7 @@ EXPORT_SYMBOL(__dev_direct_xmit);
 /*************************************************************************
  *			Receiver routines
  *************************************************************************/
+static DEFINE_PER_CPU(struct task_struct *, backlog_napi);
 
 int netdev_max_backlog __read_mostly = 1000;
 EXPORT_SYMBOL(netdev_max_backlog);
@@ -4447,12 +4474,16 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 		 */
 		thread = READ_ONCE(napi->thread);
 		if (thread) {
+			if (use_backlog_threads() && thread == raw_cpu_read(backlog_napi))
+				goto use_local_napi;
+
 			set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 			wake_up_process(thread);
 			return;
 		}
 	}
 
+use_local_napi:
 	list_add_tail(&napi->poll_list, &sd->poll_list);
 	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 }
@@ -4695,6 +4726,11 @@ static void napi_schedule_rps(struct softnet_data *sd)
 
 #ifdef CONFIG_RPS
 	if (sd != mysd) {
+		if (use_backlog_threads()) {
+			__napi_schedule_irqoff(&sd->backlog);
+			return;
+		}
+
 		sd->rps_ipi_next = mysd->rps_ipi_list;
 		mysd->rps_ipi_list = sd;
 
@@ -5979,7 +6015,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 #ifdef CONFIG_RPS
 	struct softnet_data *remsd = sd->rps_ipi_list;
 
-	if (remsd) {
+	if (!use_backlog_threads() && remsd) {
 		sd->rps_ipi_list = NULL;
 
 		local_irq_enable();
@@ -5994,7 +6030,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
 {
 #ifdef CONFIG_RPS
-	return sd->rps_ipi_list != NULL;
+	return !use_backlog_threads() && sd->rps_ipi_list;
 #else
 	return false;
 #endif
@@ -6038,7 +6074,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
 			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
-			napi->state = 0;
+			napi->state &= NAPIF_STATE_THREADED;
 			again = false;
 		} else {
 			skb_queue_splice_tail_init(&sd->input_pkt_queue,
@@ -6688,32 +6724,37 @@ static int napi_thread_wait(struct napi_struct *napi)
 	return -1;
 }
 
-static int napi_threaded_poll(void *data)
+static void napi_threaded_poll_loop(struct napi_struct *napi)
 {
-	struct napi_struct *napi = data;
-	void *have;
-
-	while (!napi_thread_wait(napi)) {
-		unsigned long last_qs = jiffies;
+	unsigned long last_qs = jiffies;
 
-		for (;;) {
-			bool repoll = false;
+	for (;;) {
+		bool repoll = false;
+		void *have;
 
-			local_bh_disable();
+		local_bh_disable();
 
-			have = netpoll_poll_lock(napi);
-			__napi_poll(napi, &repoll);
-			netpoll_poll_unlock(have);
+		have = netpoll_poll_lock(napi);
+		__napi_poll(napi, &repoll);
+		netpoll_poll_unlock(have);
 
-			local_bh_enable();
+		local_bh_enable();
 
-			if (!repoll)
-				break;
+		if (!repoll)
+			break;
 
-			rcu_softirq_qs_periodic(last_qs);
-			cond_resched();
-		}
+		rcu_softirq_qs_periodic(last_qs);
+		cond_resched();
 	}
+}
+
+static int napi_threaded_poll(void *data)
+{
+	struct napi_struct *napi = data;
+
+	while (!napi_thread_wait(napi))
+		napi_threaded_poll_loop(napi);
+
 	return 0;
 }
 
@@ -11238,7 +11279,7 @@ static int dev_cpu_dead(unsigned int oldcpu)
 
 		list_del_init(&napi->poll_list);
 		if (napi->poll == process_backlog)
-			napi->state = 0;
+			napi->state &= NAPIF_STATE_THREADED;
 		else
 			____napi_schedule(sd, napi);
 	}
@@ -11246,12 +11287,14 @@ static int dev_cpu_dead(unsigned int oldcpu)
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_enable();
 
+	if (!use_backlog_threads()) {
 #ifdef CONFIG_RPS
-	remsd = oldsd->rps_ipi_list;
-	oldsd->rps_ipi_list = NULL;
+		remsd = oldsd->rps_ipi_list;
+		oldsd->rps_ipi_list = NULL;
 #endif
-	/* send out pending IPI's on offline CPU */
-	net_rps_send_ipi(remsd);
+		/* send out pending IPI's on offline CPU */
+		net_rps_send_ipi(remsd);
+	}
 
 	/* Process offline CPU's input_pkt_queue */
 	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
@@ -11511,6 +11554,38 @@ static struct pernet_operations __net_initdata default_device_ops = {
  *
  */
 
+static int backlog_napi_should_run(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	return test_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+}
+
+static void run_backlog_napi(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+
+	napi_threaded_poll_loop(&sd->backlog);
+}
+
+static void backlog_napi_setup(unsigned int cpu)
+{
+	struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+	struct napi_struct *napi = &sd->backlog;
+
+	napi->thread = this_cpu_read(backlog_napi);
+	set_bit(NAPI_STATE_THREADED, &napi->state);
+}
+
+static struct smp_hotplug_thread backlog_threads = {
+	.store			= &backlog_napi,
+	.thread_should_run	= backlog_napi_should_run,
+	.thread_fn		= run_backlog_napi,
+	.thread_comm		= "backlog_napi/%u",
+	.setup			= backlog_napi_setup,
+};
+
 /*
  *       This is called single threaded during boot, so no need
  *       to take the rtnl semaphore.
@@ -11561,7 +11636,10 @@ static int __init net_dev_init(void)
 		init_gro_hash(&sd->backlog);
 		sd->backlog.poll = process_backlog;
 		sd->backlog.weight = weight_p;
+		INIT_LIST_HEAD(&sd->backlog.poll_list);
 	}
+	if (use_backlog_threads())
+		smpboot_register_percpu_thread(&backlog_threads);
 
 	dev_boot_phase = 0;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT
  2026-01-18 16:15 [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
                   ` (4 preceding siblings ...)
  2026-01-18 16:28 ` wen.yang
@ 2026-01-18 17:02 ` Wen Yang
  5 siblings, 0 replies; 22+ messages in thread
From: Wen Yang @ 2026-01-18 17:02 UTC (permalink / raw)
  To: Greg Kroah-Hartman; +Cc: stable, linux-kernel



Sorry, we accidentally wrote 'PATCH 6.1' as 'PATCH 6.6'; please ignore
this series. We will resend the correct one, thanks.

--
Best wishes,
Wen


On 1/19/26 00:15, wen.yang@linux.dev wrote:
> From: Wen Yang <wen.yang@linux.dev>
> 
> Backport three upstream commits to fix a warning on PREEMPT_RT kernels
> where raising SOFTIRQ from smp_call_function triggers WARN_ON_ONCE()
> in do_softirq_post_smp_call_flush().
> 
> The issue occurs when RPS sends IPIs for backlog NAPI, causing softirqs
> from irq context on PREEMPT_RT. The solution implements backlog
> NAPI threads to avoid IPI-triggered softirqs, which is required for
> PREEMPT_RT kernels.
> 
> commit 8fcb76b934da ("net: napi_schedule_rps() cleanup") and
> commit 56364c910691 ("net: Remove conditional threaded-NAPI wakeup based on task state.")
> are prerequisites.
> 
> The remaining dependencies have not been backported, as they modify
> structure definitions in header files and represent optimizations
> rather than bug fixes, including:
> c59647c0dc67 net: add softnet_data.in_net_rx_action
> a1aaee7f8f79 net: make napi_threaded_poll() aware of sd->defer_list
> 87eff2ec57b6 net: optimize napi_threaded_poll() vs RPS/RFS
> 2b0cfa6e4956 net: add generic percpu page_pool allocator
> ...
> 
> Eric Dumazet (1):
>    net: napi_schedule_rps() cleanup
> 
> Sebastian Andrzej Siewior (2):
>    net: Remove conditional threaded-NAPI wakeup based on task state.
>    net: Allow to use SMP threads for backlog NAPI.
> 
>   net/core/dev.c | 162 +++++++++++++++++++++++++++++++++++--------------
>   1 file changed, 118 insertions(+), 44 deletions(-)
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-18 16:15 ` [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI wen.yang
                     ` (2 preceding siblings ...)
  2026-01-18 16:31   ` wen.yang
@ 2026-01-19 16:25   ` Jakub Kicinski
  2026-01-19 16:30     ` Sebastian Andrzej Siewior
  3 siblings, 1 reply; 22+ messages in thread
From: Jakub Kicinski @ 2026-01-19 16:25 UTC (permalink / raw)
  To: wen.yang
  Cc: Greg Kroah-Hartman, stable, linux-kernel,
	Sebastian Andrzej Siewior, Paolo Abeni

On Mon, 19 Jan 2026 00:15:46 +0800 wen.yang@linux.dev wrote:
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.
> 
> Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
> used by drivers which don't do NAPI them self, RPS and parts of the
> stack which need to avoid recursive deadlocks while processing a packet.

This is a rather large change to backport into LTS.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-19 16:25   ` [PATCH 6.6 " Jakub Kicinski
@ 2026-01-19 16:30     ` Sebastian Andrzej Siewior
  2026-01-20  6:03       ` Greg Kroah-Hartman
  0 siblings, 1 reply; 22+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-01-19 16:30 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: wen.yang, Greg Kroah-Hartman, stable, linux-kernel, Paolo Abeni

On 2026-01-19 08:25:34 [-0800], Jakub Kicinski wrote:
> On Mon, 19 Jan 2026 00:15:46 +0800 wen.yang@linux.dev wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > 
> > commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.
> > 
> > Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
> > used by drivers which don't do NAPI them self, RPS and parts of the
> > stack which need to avoid recursive deadlocks while processing a packet.
> 
> This is a rather large change to backport into LTS.

I agree. While I saw these patches flying by, I don't remember a mail
where it was justified why it was needed. Did I miss it?

Sebastian

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-19 16:30     ` Sebastian Andrzej Siewior
@ 2026-01-20  6:03       ` Greg Kroah-Hartman
  2026-01-20  8:01         ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 22+ messages in thread
From: Greg Kroah-Hartman @ 2026-01-20  6:03 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Jakub Kicinski, wen.yang, stable, linux-kernel, Paolo Abeni

On Mon, Jan 19, 2026 at 05:30:26PM +0100, Sebastian Andrzej Siewior wrote:
> On 2026-01-19 08:25:34 [-0800], Jakub Kicinski wrote:
> > On Mon, 19 Jan 2026 00:15:46 +0800 wen.yang@linux.dev wrote:
> > > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > > 
> > > commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.
> > > 
> > > Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
> > > used by drivers which don't do NAPI them self, RPS and parts of the
> > > stack which need to avoid recursive deadlocks while processing a packet.
> > 
> > This is a rather large change to backport into LTS.
> 
> I agree. While I saw these patches flying by, I don't remember a mail
> where it was justified why it was needed. Did I miss it?

Please see patch 0/3 in this series:
	https://lore.kernel.org/all/cover.1768751557.git.wen.yang@linux.dev/

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-20  6:03       ` Greg Kroah-Hartman
@ 2026-01-20  8:01         ` Sebastian Andrzej Siewior
  2026-01-20  9:21           ` Greg Kroah-Hartman
  0 siblings, 1 reply; 22+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-01-20  8:01 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Jakub Kicinski, wen.yang, stable, linux-kernel, Paolo Abeni

On 2026-01-20 07:03:58 [+0100], Greg Kroah-Hartman wrote:
> On Mon, Jan 19, 2026 at 05:30:26PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2026-01-19 08:25:34 [-0800], Jakub Kicinski wrote:
> > > On Mon, 19 Jan 2026 00:15:46 +0800 wen.yang@linux.dev wrote:
> > > > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > > > 
> > > > commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.
> > > > 
> > > > Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
> > > > used by drivers which don't do NAPI them self, RPS and parts of the
> > > > stack which need to avoid recursive deadlocks while processing a packet.
> > > 
> > > This is a rather large change to backport into LTS.
> > 
> > I agree. While I saw these patches flying by, I don't remember a mail
> > where it was justified why it was needed. Did I miss it?
> 
> Please see patch 0/3 in this series:
> 	https://lore.kernel.org/all/cover.1768751557.git.wen.yang@linux.dev/

The reasoning why this is needed is due to PREEMPT_RT. This targets v6.6
and PREEMPT_RT is officially supported upstream since v6.12. For v6.6
you still need the out-of-tree patch queue. That means not only selecting
the Kconfig symbol but also carrying bits for futex, ptrace or printk.
This queue does not include the three patches here but has another
workaround with more or less the same effect.

If this is needed only for PREEMPT_RT's sake I would suggest to route it
via the stable-rt instead and replace what is currently there.

Sebastian

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-20  8:01         ` Sebastian Andrzej Siewior
@ 2026-01-20  9:21           ` Greg Kroah-Hartman
  2026-01-20 10:38             ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 22+ messages in thread
From: Greg Kroah-Hartman @ 2026-01-20  9:21 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Jakub Kicinski, wen.yang, stable, linux-kernel, Paolo Abeni

On Tue, Jan 20, 2026 at 09:01:04AM +0100, Sebastian Andrzej Siewior wrote:
> On 2026-01-20 07:03:58 [+0100], Greg Kroah-Hartman wrote:
> > On Mon, Jan 19, 2026 at 05:30:26PM +0100, Sebastian Andrzej Siewior wrote:
> > > On 2026-01-19 08:25:34 [-0800], Jakub Kicinski wrote:
> > > > On Mon, 19 Jan 2026 00:15:46 +0800 wen.yang@linux.dev wrote:
> > > > > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > > > > 
> > > > > commit dad6b97702639fba27a2bd3e986982ad6f0db3a7 upstream.
> > > > > 
> > > > > Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
> > > > > used by drivers which don't do NAPI them self, RPS and parts of the
> > > > > stack which need to avoid recursive deadlocks while processing a packet.
> > > > 
> > > > This is a rather large change to backport into LTS.
> > > 
> > > I agree. While I saw these patches flying by, I don't remember a mail
> > > where it was justified why it was needed. Did I miss it?
> > 
> > Please see patch 0/3 in this series:
> > 	https://lore.kernel.org/all/cover.1768751557.git.wen.yang@linux.dev/
> 
> The reasoning why this is needed is due to PREEMPT_RT. This targets v6.6
> and PREEMPT_RT is officially supported upstream since v6.12. For v6.6
> you still need the out-of-tree patch. This means not only select the
> Kconfig symbol but also a bit futex, ptrace or printk. This queue does
> not include the three patches here but has another workaround having
> more or less the same effect.
> 
> If this is needed only for PREEMPT_RT's sake I would suggest to route it
> via the stable-rt instead and replace what is currently there.

It's already merged, should this be reverted?  I forgot RT was only for
6.12 and newer, sorry.

greg k-h

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-20  9:21           ` Greg Kroah-Hartman
@ 2026-01-20 10:38             ` Sebastian Andrzej Siewior
  2026-02-04 13:54               ` Greg Kroah-Hartman
  0 siblings, 1 reply; 22+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-01-20 10:38 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Jakub Kicinski, wen.yang, stable, linux-kernel, Paolo Abeni

On 2026-01-20 10:21:58 [+0100], Greg Kroah-Hartman wrote:
> > > Please see patch 0/3 in this series:
> > > 	https://lore.kernel.org/all/cover.1768751557.git.wen.yang@linux.dev/
> > 
> > The reasoning why this is needed is due to PREEMPT_RT. This targets v6.6
> > and PREEMPT_RT is officially supported upstream since v6.12. For v6.6
> > you still need the out-of-tree patch. This means not only select the
> > Kconfig symbol but also a bit futex, ptrace or printk. This queue does
> > not include the three patches here but has another workaround having
> > more or less the same effect.
> > 
> > If this is needed only for PREEMPT_RT's sake I would suggest to route it
> > via the stable-rt instead and replace what is currently there.
> 
> It's already merged, should this be reverted?  I forgot RT was only for
> 6.12 and newer, sorry.

Jakub doesn't seem to be thrilled about this backport and I don't see a
requirement for it. Based on this yes, please revert it.

If Wen wants this still to happen he should either provide better
reasoning why this is needed based on the latest stable v6.6 as-is or
ask stable-rt team to take this instead the current workaround.

> greg k-h

Sebastian

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-01-20 10:38             ` Sebastian Andrzej Siewior
@ 2026-02-04 13:54               ` Greg Kroah-Hartman
  2026-02-07 20:26                 ` Wen Yang
  0 siblings, 1 reply; 22+ messages in thread
From: Greg Kroah-Hartman @ 2026-02-04 13:54 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Jakub Kicinski, wen.yang, stable, linux-kernel, Paolo Abeni

On Tue, Jan 20, 2026 at 11:38:33AM +0100, Sebastian Andrzej Siewior wrote:
> On 2026-01-20 10:21:58 [+0100], Greg Kroah-Hartman wrote:
> > > > Please see patch 0/3 in this series:
> > > > 	https://lore.kernel.org/all/cover.1768751557.git.wen.yang@linux.dev/
> > > 
> > > The reasoning why this is needed is due to PREEMPT_RT. This targets v6.6
> > > and PREEMPT_RT is officially supported upstream since v6.12. For v6.6
> > > you still need the out-of-tree patch. This means not only select the
> > > Kconfig symbol but also a bit futex, ptrace or printk. This queue does
> > > not include the three patches here but has another workaround having
> > > more or less the same effect.
> > > 
> > > If this is needed only for PREEMPT_RT's sake I would suggest to route it
> > > via the stable-rt instead and replace what is currently there.
> > 
> > It's already merged, should this be reverted?  I forgot RT was only for
> > 6.12 and newer, sorry.
> 
> Jakub doesn't seem to be thrilled about this backport and I don't see a
> requirement for it. Based on this yes, please revert it.
> 
> If Wen wants this still to happen he should either provide better
> reasoning why this is needed based on the latest stable v6.6 as-is or
> ask stable-rt team to take this instead the current workaround.

Ok, both now reverted, thanks for the review!

greg k-h

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI.
  2026-02-04 13:54               ` Greg Kroah-Hartman
@ 2026-02-07 20:26                 ` Wen Yang
  0 siblings, 0 replies; 22+ messages in thread
From: Wen Yang @ 2026-02-07 20:26 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Sebastian Andrzej Siewior
  Cc: Jakub Kicinski, stable, linux-kernel, Paolo Abeni



On 2/4/26 21:54, Greg Kroah-Hartman wrote:
> On Tue, Jan 20, 2026 at 11:38:33AM +0100, Sebastian Andrzej Siewior wrote:
>> On 2026-01-20 10:21:58 [+0100], Greg Kroah-Hartman wrote:
>>>>> Please see patch 0/3 in this series:
>>>>> 	https://lore.kernel.org/all/cover.1768751557.git.wen.yang@linux.dev/
>>>>
>>>> The reasoning why this is needed is due to PREEMPT_RT. This targets v6.6
>>>> and PREEMPT_RT is officially supported upstream since v6.12. For v6.6
>>>> you still need the out-of-tree patch. This means not only select the
>>>> Kconfig symbol but also a bit futex, ptrace or printk. This queue does
>>>> not include the three patches here but has another workaround having
>>>> more or less the same effect.
>>>>
>>>> If this is needed only for PREEMPT_RT's sake I would suggest to route it
>>>> via the stable-rt instead and replace what is currently there.
>>>
>>> It's already merged, should this be reverted?  I forgot RT was only for
>>> 6.12 and newer, sorry.
>>
>> Jakub doesn't seem to be thrilled about this backport and I don't see a
>> requirement for it. Based on this yes, please revert it.
>>
>> If Wen wants this still to happen he should either provide better
>> reasoning why this is needed based on the latest stable v6.6 as-is or
>> ask stable-rt team to take this instead the current workaround.
> 
> Ok, both now reverted, thanks for the review!
> 

Thank you, we are using the 6.6/6.1 LTS + RT patch, and this issue
occasionally occurs in production environments.

Based on the above comments, we are also attempting further backporting
and testing, which involves many changes and may take some time.

After it has been fully tested, we will send it out for review again.

--
Best wishes,
Wen

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2026-02-07 20:27 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-18 16:15 [PATCH 6.6 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
2026-01-18 16:15 ` [PATCH 6.6 1/3] net: napi_schedule_rps() cleanup wen.yang
2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
2026-01-18 16:28   ` wen.yang
2026-01-18 16:15 ` [PATCH 6.6 2/3] net: Remove conditional threaded-NAPI wakeup based on task state wen.yang
2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
2026-01-18 16:28   ` wen.yang
2026-01-18 16:15 ` [PATCH 6.6 3/3] net: Allow to use SMP threads for backlog NAPI wen.yang
2026-01-18 16:17   ` [PATCH 6.1 " wen.yang
2026-01-18 16:28   ` wen.yang
2026-01-18 16:31   ` wen.yang
2026-01-19 16:25   ` [PATCH 6.6 " Jakub Kicinski
2026-01-19 16:30     ` Sebastian Andrzej Siewior
2026-01-20  6:03       ` Greg Kroah-Hartman
2026-01-20  8:01         ` Sebastian Andrzej Siewior
2026-01-20  9:21           ` Greg Kroah-Hartman
2026-01-20 10:38             ` Sebastian Andrzej Siewior
2026-02-04 13:54               ` Greg Kroah-Hartman
2026-02-07 20:26                 ` Wen Yang
2026-01-18 16:17 ` [PATCH 6.1 0/3] net: Backlog NAPI threading for PREEMPT_RT wen.yang
2026-01-18 16:28 ` wen.yang
2026-01-18 17:02 ` [PATCH 6.6 " Wen Yang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox