* [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI
@ 2024-03-05 11:53 Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 1/4] net: Remove conditional threaded-NAPI wakeup based on task state Sebastian Andrzej Siewior
` (5 more replies)
0 siblings, 6 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-03-05 11:53 UTC (permalink / raw)
To: netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski,
Jesper Dangaard Brouer, Paolo Abeni, Thomas Gleixner,
Wander Lairson Costa, Yan Zhai
The RPS code and "deferred skb free" both send an IPI/function call
to a remote CPU on which a softirq is raised. This leads to a warning on
PREEMPT_RT because raising softirqs from a function call led to undesired
behaviour in the past. I had duct tape in RT for the "deferred skb free"
case and Wander Lairson Costa reported the RPS case.
This series only provides support for SMP threads for backlog NAPI. I
did not attach a patch to make it the default and remove the IPI-related
code, to avoid confusion. I can post it for reference if asked.
The Red Hat performance team was kind enough to provide some testing here.
The series (with the IPI code removed) has been tested and no regression
versus without the series has been found. For testing, iperf3 was used on
a 25G interface, provided by the mlx5, i40e or ice driver, and RPS was
enabled. I can provide the individual test results if needed.
Changes:
- v3…v4 https://lore.kernel.org/all/20240228121000.526645-1-bigeasy@linutronix.de/
- Rebase on top of current net-next, collect Acks.
- Add struct softnet_data as an argument to kick_defer_list_purge().
- Add sd_has_rps_ipi_waiting() check to napi_threaded_poll_loop() which was
accidentally removed.
- v2…v3 https://lore.kernel.org/all/20240221172032.78737-1-bigeasy@linutronix.de/
- Move the "if use_backlog_threads()" case into the CONFIG_RPS block
within napi_schedule_rps().
- Use __napi_schedule_irqoff() instead of napi_schedule_rps() in
kick_defer_list_purge().
- v1…v2 https://lore.kernel.org/all/20230929162121.1822900-1-bigeasy@linutronix.de/
- Patch #1 is new. It ensures that NAPI_STATE_SCHED_THREADED is always
  set (instead of conditionally, based on task state) and the smpboot
  thread logic relies on this bit now. In v1 NAPI_STATE_SCHED was used
  but that is racy.
- The defer list clean up is split out and also relies on
NAPI_STATE_SCHED_THREADED. This fixes a different race.
- RFC…v1 https://lore.kernel.org/all/20230814093528.117342-1-bigeasy@linutronix.de/
- Patch #2 has been removed. Removing the warning is still an option.
- There are two patches in the series:
  - Patch #1 always creates backlog threads
  - Patch #2 creates the backlog threads only if requested at boot time,
    mandatory on PREEMPT_RT.
  So it is either/or and I wanted to show what both look like.
- The kernel test robot reported a performance regression with
loopback (stress-ng --udp X --udp-ops Y) against the RFC version.
The regression is now avoided by using local-NAPI if backlog
processing is requested on the local CPU.
Sebastian
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v4 net-next 1/4] net: Remove conditional threaded-NAPI wakeup based on task state.
2024-03-05 11:53 [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Sebastian Andrzej Siewior
@ 2024-03-05 11:53 ` Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 2/4] net: Allow to use SMP threads for backlog NAPI Sebastian Andrzej Siewior
` (4 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-03-05 11:53 UTC (permalink / raw)
To: netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski,
Jesper Dangaard Brouer, Paolo Abeni, Thomas Gleixner,
Wander Lairson Costa, Yan Zhai, Sebastian Andrzej Siewior
A NAPI thread is scheduled by first setting the NAPI_STATE_SCHED bit. If
successful (the bit was not yet set) then the NAPI_STATE_SCHED_THREADED
bit is set, but only if the thread's state is not TASK_INTERRUPTIBLE
(i.e. is TASK_RUNNING), followed by a task wakeup.
If the task is idle (TASK_INTERRUPTIBLE) then the
NAPI_STATE_SCHED_THREADED bit is not set. The thread does not rely on
the bit but always leaves the wait-loop after returning from schedule()
because there must have been a wakeup.
The smpboot-threads implementation for per-CPU threads requires an
explicit condition and does not support "if we get out of schedule()
then there must be something to do".
Removing this optimisation simplifies the following integration.
Set NAPI_STATE_SCHED_THREADED unconditionally on wakeup and rely on it
in the wait path by removing the `woken' condition.
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
net/core/dev.c | 14 ++------------
1 file changed, 2 insertions(+), 12 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index fe054cbd41e92..fe1055fe0b55c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4436,13 +4436,7 @@ static inline void ____napi_schedule(struct softnet_data *sd,
*/
thread = READ_ONCE(napi->thread);
if (thread) {
- /* Avoid doing set_bit() if the thread is in
- * INTERRUPTIBLE state, cause napi_thread_wait()
- * makes sure to proceed with napi polling
- * if the thread is explicitly woken from here.
- */
- if (READ_ONCE(thread->__state) != TASK_INTERRUPTIBLE)
- set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+ set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
wake_up_process(thread);
return;
}
@@ -6723,8 +6717,6 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
static int napi_thread_wait(struct napi_struct *napi)
{
- bool woken = false;
-
set_current_state(TASK_INTERRUPTIBLE);
while (!kthread_should_stop()) {
@@ -6733,15 +6725,13 @@ static int napi_thread_wait(struct napi_struct *napi)
* Testing SCHED bit is not enough because SCHED bit might be
* set by some other busy poll thread or by napi_disable().
*/
- if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
+ if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state)) {
WARN_ON(!list_empty(&napi->poll_list));
__set_current_state(TASK_RUNNING);
return 0;
}
schedule();
- /* woken being true indicates this thread owns this napi. */
- woken = true;
set_current_state(TASK_INTERRUPTIBLE);
}
__set_current_state(TASK_RUNNING);
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v4 net-next 2/4] net: Allow to use SMP threads for backlog NAPI.
2024-03-05 11:53 [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 1/4] net: Remove conditional threaded-NAPI wakeup based on task state Sebastian Andrzej Siewior
@ 2024-03-05 11:53 ` Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 3/4] net: Use backlog-NAPI to clean up the defer_list Sebastian Andrzej Siewior
` (3 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-03-05 11:53 UTC (permalink / raw)
To: netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski,
Jesper Dangaard Brouer, Paolo Abeni, Thomas Gleixner,
Wander Lairson Costa, Yan Zhai, Sebastian Andrzej Siewior
Backlog NAPI is a per-CPU NAPI struct only (with no device behind it)
used by drivers which don't do NAPI themselves, by RPS and by parts of
the stack which need to avoid recursive deadlocks while processing a
packet.
Non-NAPI drivers use the CPU-local backlog NAPI. If RPS is enabled
then a flow for the skb is computed and, based on the flow, the skb can
be enqueued on a remote CPU. Scheduling/raising the softirq (for
backlog's NAPI) on the remote CPU isn't trivial because the softirq is
only scheduled on the local CPU and performed after the hardirq is done.
In order to schedule a softirq on a remote CPU, an IPI is sent to the
remote CPU which schedules the backlog-NAPI on the then-local CPU.
On PREEMPT_RT interrupts are force-threaded. The soft interrupts are
raised within the interrupt thread and processed after the interrupt
handler has completed, still within the context of the interrupt thread.
The softirq is thus handled in the context where it originated.
With force-threaded interrupts enabled, ksoftirqd is woken up if a
softirq is raised from hardirq context. This is the case if it is raised
from an IPI. Additionally there is a warning on PREEMPT_RT if the
softirq is raised from the idle thread.
This was done for two reasons:
- With threaded interrupts the processing should happen in thread
context (where it originated) and ksoftirqd is the only thread for
this context if raised from hardirq. Using the currently running task
instead would "punish" a random task.
- Once ksoftirqd is active it consumes all further softirqs until it
stops running. This changed recently and is no longer the case.
Instead of keeping the backlog NAPI in ksoftirqd (in force-threaded/
PREEMPT_RT setups) I am proposing NAPI threads for backlog.
The "proper" setup with threaded NAPI is not doable because the threads
are not pinned to an individual CPU and their affinity can be modified
by the user. Additionally a dummy network device would have to be
assigned. Also, CPU hotplug has to be considered if additional CPUs show
up. All this can probably be done/solved, but the smpboot threads
already provide this infrastructure.
Sending UDP packets over loopback expects that the packet is processed
within the call. Delaying it by handing it over to the thread hurts
performance. It makes no difference to the outcome whether the context
switch happens immediately after enqueue or a while later in order to
process a few packets in a batch.
There is no need to always use the thread if the backlog NAPI is
requested on the local CPU. This restores the loopback throughput. After
enabling RPS on loopback, performance drops to mostly the same value for
both the IPI and the thread variant.
Create NAPI threads for backlog if requested during boot. The thread
runs the inner loop from napi_threaded_poll(); the wait part is
different. It checks for NAPI_STATE_SCHED (the backlog NAPI can not be
disabled).
The NAPI threads for backlog are optional; they have to be enabled via
the boot argument "thread_backlog_napi". This is mandatory on PREEMPT_RT
to avoid the wakeup of ksoftirqd from the IPI.
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
net/core/dev.c | 153 +++++++++++++++++++++++++++++++++++++------------
1 file changed, 116 insertions(+), 37 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index fe1055fe0b55c..24601c8db2d70 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -78,6 +78,7 @@
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/sched/mm.h>
+#include <linux/smpboot.h>
#include <linux/mutex.h>
#include <linux/rwsem.h>
#include <linux/string.h>
@@ -197,6 +198,31 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)
return &net->dev_index_head[ifindex & (NETDEV_HASHENTRIES - 1)];
}
+#ifndef CONFIG_PREEMPT_RT
+
+static DEFINE_STATIC_KEY_FALSE(use_backlog_threads_key);
+
+static int __init setup_backlog_napi_threads(char *arg)
+{
+ static_branch_enable(&use_backlog_threads_key);
+ return 0;
+}
+early_param("thread_backlog_napi", setup_backlog_napi_threads);
+
+static bool use_backlog_threads(void)
+{
+ return static_branch_unlikely(&use_backlog_threads_key);
+}
+
+#else
+
+static bool use_backlog_threads(void)
+{
+ return true;
+}
+
+#endif
+
static inline void rps_lock_irqsave(struct softnet_data *sd,
unsigned long *flags)
{
@@ -4404,6 +4430,7 @@ EXPORT_SYMBOL(__dev_direct_xmit);
/*************************************************************************
* Receiver routines
*************************************************************************/
+static DEFINE_PER_CPU(struct task_struct *, backlog_napi);
int netdev_max_backlog __read_mostly = 1000;
EXPORT_SYMBOL(netdev_max_backlog);
@@ -4436,12 +4463,16 @@ static inline void ____napi_schedule(struct softnet_data *sd,
*/
thread = READ_ONCE(napi->thread);
if (thread) {
+ if (use_backlog_threads() && thread == raw_cpu_read(backlog_napi))
+ goto use_local_napi;
+
set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
wake_up_process(thread);
return;
}
}
+use_local_napi:
list_add_tail(&napi->poll_list, &sd->poll_list);
WRITE_ONCE(napi->list_owner, smp_processor_id());
/* If not called from net_rx_action()
@@ -4687,6 +4718,11 @@ static void napi_schedule_rps(struct softnet_data *sd)
#ifdef CONFIG_RPS
if (sd != mysd) {
+ if (use_backlog_threads()) {
+ __napi_schedule_irqoff(&sd->backlog);
+ return;
+ }
+
sd->rps_ipi_next = mysd->rps_ipi_list;
mysd->rps_ipi_list = sd;
@@ -5944,7 +5980,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
#ifdef CONFIG_RPS
struct softnet_data *remsd = sd->rps_ipi_list;
- if (remsd) {
+ if (!use_backlog_threads() && remsd) {
sd->rps_ipi_list = NULL;
local_irq_enable();
@@ -5959,7 +5995,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
{
#ifdef CONFIG_RPS
- return sd->rps_ipi_list != NULL;
+ return !use_backlog_threads() && sd->rps_ipi_list;
#else
return false;
#endif
@@ -6003,7 +6039,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
* We can use a plain write instead of clear_bit(),
* and we dont need an smp_mb() memory barrier.
*/
- napi->state = 0;
+ napi->state &= NAPIF_STATE_THREADED;
again = false;
} else {
skb_queue_splice_tail_init(&sd->input_pkt_queue,
@@ -6739,40 +6775,46 @@ static int napi_thread_wait(struct napi_struct *napi)
return -1;
}
+static void napi_threaded_poll_loop(struct napi_struct *napi)
+{
+ struct softnet_data *sd;
+
+ for (;;) {
+ bool repoll = false;
+ void *have;
+
+ local_bh_disable();
+ sd = this_cpu_ptr(&softnet_data);
+ sd->in_napi_threaded_poll = true;
+
+ have = netpoll_poll_lock(napi);
+ __napi_poll(napi, &repoll);
+ netpoll_poll_unlock(have);
+
+ sd->in_napi_threaded_poll = false;
+ barrier();
+
+ if (sd_has_rps_ipi_waiting(sd)) {
+ local_irq_disable();
+ net_rps_action_and_irq_enable(sd);
+ }
+ skb_defer_free_flush(sd);
+ local_bh_enable();
+
+ if (!repoll)
+ break;
+
+ cond_resched();
+ }
+}
+
static int napi_threaded_poll(void *data)
{
struct napi_struct *napi = data;
- struct softnet_data *sd;
- void *have;
- while (!napi_thread_wait(napi)) {
- for (;;) {
- bool repoll = false;
+ while (!napi_thread_wait(napi))
+ napi_threaded_poll_loop(napi);
- local_bh_disable();
- sd = this_cpu_ptr(&softnet_data);
- sd->in_napi_threaded_poll = true;
-
- have = netpoll_poll_lock(napi);
- __napi_poll(napi, &repoll);
- netpoll_poll_unlock(have);
-
- sd->in_napi_threaded_poll = false;
- barrier();
-
- if (sd_has_rps_ipi_waiting(sd)) {
- local_irq_disable();
- net_rps_action_and_irq_enable(sd);
- }
- skb_defer_free_flush(sd);
- local_bh_enable();
-
- if (!repoll)
- break;
-
- cond_resched();
- }
- }
return 0;
}
@@ -11395,7 +11437,7 @@ static int dev_cpu_dead(unsigned int oldcpu)
list_del_init(&napi->poll_list);
if (napi->poll == process_backlog)
- napi->state = 0;
+ napi->state &= NAPIF_STATE_THREADED;
else
____napi_schedule(sd, napi);
}
@@ -11403,12 +11445,14 @@ static int dev_cpu_dead(unsigned int oldcpu)
raise_softirq_irqoff(NET_TX_SOFTIRQ);
local_irq_enable();
+ if (!use_backlog_threads()) {
#ifdef CONFIG_RPS
- remsd = oldsd->rps_ipi_list;
- oldsd->rps_ipi_list = NULL;
+ remsd = oldsd->rps_ipi_list;
+ oldsd->rps_ipi_list = NULL;
#endif
- /* send out pending IPI's on offline CPU */
- net_rps_send_ipi(remsd);
+ /* send out pending IPI's on offline CPU */
+ net_rps_send_ipi(remsd);
+ }
/* Process offline CPU's input_pkt_queue */
while ((skb = __skb_dequeue(&oldsd->process_queue))) {
@@ -11746,6 +11790,38 @@ static int net_page_pool_create(int cpuid)
return 0;
}
+static int backlog_napi_should_run(unsigned int cpu)
+{
+ struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+ struct napi_struct *napi = &sd->backlog;
+
+ return test_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
+}
+
+static void run_backlog_napi(unsigned int cpu)
+{
+ struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+
+ napi_threaded_poll_loop(&sd->backlog);
+}
+
+static void backlog_napi_setup(unsigned int cpu)
+{
+ struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
+ struct napi_struct *napi = &sd->backlog;
+
+ napi->thread = this_cpu_read(backlog_napi);
+ set_bit(NAPI_STATE_THREADED, &napi->state);
+}
+
+static struct smp_hotplug_thread backlog_threads = {
+ .store = &backlog_napi,
+ .thread_should_run = backlog_napi_should_run,
+ .thread_fn = run_backlog_napi,
+ .thread_comm = "backlog_napi/%u",
+ .setup = backlog_napi_setup,
+};
+
/*
* This is called single threaded during boot, so no need
* to take the rtnl semaphore.
@@ -11798,10 +11874,13 @@ static int __init net_dev_init(void)
init_gro_hash(&sd->backlog);
sd->backlog.poll = process_backlog;
sd->backlog.weight = weight_p;
+ INIT_LIST_HEAD(&sd->backlog.poll_list);
if (net_page_pool_create(i))
goto out;
}
+ if (use_backlog_threads())
+ smpboot_register_percpu_thread(&backlog_threads);
dev_boot_phase = 0;
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v4 net-next 3/4] net: Use backlog-NAPI to clean up the defer_list.
2024-03-05 11:53 [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 1/4] net: Remove conditional threaded-NAPI wakeup based on task state Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 2/4] net: Allow to use SMP threads for backlog NAPI Sebastian Andrzej Siewior
@ 2024-03-05 11:53 ` Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 4/4] net: Rename rps_lock to backlog_lock Sebastian Andrzej Siewior
` (2 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-03-05 11:53 UTC (permalink / raw)
To: netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski,
Jesper Dangaard Brouer, Paolo Abeni, Thomas Gleixner,
Wander Lairson Costa, Yan Zhai, Sebastian Andrzej Siewior
The defer_list is a per-CPU list which is used to free skbs outside of
the socket lock and on the CPU on which they have been allocated.
The list is processed during NAPI callbacks, so ideally the list is
cleaned up then.
Should the number of skbs on the list exceed a certain watermark, then
the softirq is triggered remotely on the target CPU by invoking a remote
function call. Raising the softirq via a remote function call leads to
waking ksoftirqd on PREEMPT_RT, which is undesired.
The backlog-NAPI threads already provide the infrastructure which can be
utilized to perform the cleanup of the defer_list.
The NAPI state is updated with the input_pkt_queue.lock acquired. In
order not to break the state, the backlog-NAPI thread also needs to be
woken with the lock held. This requires acquiring the lock in
rps_lock_irq*() if the backlog-NAPI threads are used, even with RPS
disabled.
Move the logic of remotely starting softirqs to clean up the defer_list
into kick_defer_list_purge(). Make sure a lock is held in
rps_lock_irq*() if backlog-NAPI threads are used. Schedule backlog-NAPI
for defer_list cleanup if backlog-NAPI is available.
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/netdevice.h | 1 +
net/core/dev.c | 25 +++++++++++++++++++++----
net/core/skbuff.c | 4 ++--
3 files changed, 24 insertions(+), 6 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index c41019f341794..38ceb64e522f7 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3368,6 +3368,7 @@ static inline void dev_xmit_recursion_dec(void)
__this_cpu_dec(softnet_data.xmit.recursion);
}
+void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu);
void __netif_schedule(struct Qdisc *q);
void netif_schedule_queue(struct netdev_queue *txq);
diff --git a/net/core/dev.c b/net/core/dev.c
index 24601c8db2d70..5ce16b62e1982 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -226,7 +226,7 @@ static bool use_backlog_threads(void)
static inline void rps_lock_irqsave(struct softnet_data *sd,
unsigned long *flags)
{
- if (IS_ENABLED(CONFIG_RPS))
+ if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
spin_lock_irqsave(&sd->input_pkt_queue.lock, *flags);
else if (!IS_ENABLED(CONFIG_PREEMPT_RT))
local_irq_save(*flags);
@@ -234,7 +234,7 @@ static inline void rps_lock_irqsave(struct softnet_data *sd,
static inline void rps_lock_irq_disable(struct softnet_data *sd)
{
- if (IS_ENABLED(CONFIG_RPS))
+ if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
spin_lock_irq(&sd->input_pkt_queue.lock);
else if (!IS_ENABLED(CONFIG_PREEMPT_RT))
local_irq_disable();
@@ -243,7 +243,7 @@ static inline void rps_lock_irq_disable(struct softnet_data *sd)
static inline void rps_unlock_irq_restore(struct softnet_data *sd,
unsigned long *flags)
{
- if (IS_ENABLED(CONFIG_RPS))
+ if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
spin_unlock_irqrestore(&sd->input_pkt_queue.lock, *flags);
else if (!IS_ENABLED(CONFIG_PREEMPT_RT))
local_irq_restore(*flags);
@@ -251,7 +251,7 @@ static inline void rps_unlock_irq_restore(struct softnet_data *sd,
static inline void rps_unlock_irq_enable(struct softnet_data *sd)
{
- if (IS_ENABLED(CONFIG_RPS))
+ if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
spin_unlock_irq(&sd->input_pkt_queue.lock);
else if (!IS_ENABLED(CONFIG_PREEMPT_RT))
local_irq_enable();
@@ -4737,6 +4737,23 @@ static void napi_schedule_rps(struct softnet_data *sd)
__napi_schedule_irqoff(&mysd->backlog);
}
+void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu)
+{
+ unsigned long flags;
+
+ if (use_backlog_threads()) {
+ rps_lock_irqsave(sd, &flags);
+
+ if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
+ __napi_schedule_irqoff(&sd->backlog);
+
+ rps_unlock_irq_restore(sd, &flags);
+
+ } else if (!cmpxchg(&sd->defer_ipi_scheduled, 0, 1)) {
+ smp_call_function_single_async(cpu, &sd->defer_csd);
+ }
+}
+
#ifdef CONFIG_NET_FLOW_LIMIT
int netdev_flow_limit_table_len __read_mostly = (1 << 12);
#endif
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 1f918e602bc4f..3b2d74ca8517a 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -7042,8 +7042,8 @@ nodefer: __kfree_skb(skb);
/* Make sure to trigger NET_RX_SOFTIRQ on the remote CPU
* if we are unlucky enough (this seems very unlikely).
*/
- if (unlikely(kick) && !cmpxchg(&sd->defer_ipi_scheduled, 0, 1))
- smp_call_function_single_async(cpu, &sd->defer_csd);
+ if (unlikely(kick))
+ kick_defer_list_purge(sd, cpu);
}
static void skb_splice_csum_page(struct sk_buff *skb, struct page *page,
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v4 net-next 4/4] net: Rename rps_lock to backlog_lock.
2024-03-05 11:53 [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Sebastian Andrzej Siewior
` (2 preceding siblings ...)
2024-03-05 11:53 ` [PATCH v4 net-next 3/4] net: Use backlog-NAPI to clean up the defer_list Sebastian Andrzej Siewior
@ 2024-03-05 11:53 ` Sebastian Andrzej Siewior
2024-03-05 12:07 ` [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Wander Lairson Costa
2024-03-08 15:33 ` Sebastian Andrzej Siewior
5 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-03-05 11:53 UTC (permalink / raw)
To: netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski,
Jesper Dangaard Brouer, Paolo Abeni, Thomas Gleixner,
Wander Lairson Costa, Yan Zhai, Sebastian Andrzej Siewior
The rps_lock*() functions use the inner lock of a sk_buff_head for
locking. This lock is used if RPS is enabled; otherwise the list is
accessed locklessly and disabling interrupts is enough for
synchronisation because it is only accessed CPU-locally. Not only is the
list protected, but also the NAPI state.
With the addition of backlog threads, the lock is also needed because of
the cross-CPU access even without RPS. The cleanup of the defer_list
is also done via backlog threads (if enabled).
It has been suggested to rename the locking function since it is no
longer just RPS.
Rename the rps_lock*() functions to backlog_lock*().
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
net/core/dev.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index 5ce16b62e1982..024d55e7af7d5 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -223,8 +223,8 @@ static bool use_backlog_threads(void)
#endif
-static inline void rps_lock_irqsave(struct softnet_data *sd,
- unsigned long *flags)
+static inline void backlog_lock_irq_save(struct softnet_data *sd,
+ unsigned long *flags)
{
if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
spin_lock_irqsave(&sd->input_pkt_queue.lock, *flags);
@@ -232,7 +232,7 @@ static inline void rps_lock_irqsave(struct softnet_data *sd,
local_irq_save(*flags);
}
-static inline void rps_lock_irq_disable(struct softnet_data *sd)
+static inline void backlog_lock_irq_disable(struct softnet_data *sd)
{
if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
spin_lock_irq(&sd->input_pkt_queue.lock);
@@ -240,8 +240,8 @@ static inline void rps_lock_irq_disable(struct softnet_data *sd)
local_irq_disable();
}
-static inline void rps_unlock_irq_restore(struct softnet_data *sd,
- unsigned long *flags)
+static inline void backlog_unlock_irq_restore(struct softnet_data *sd,
+ unsigned long *flags)
{
if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
spin_unlock_irqrestore(&sd->input_pkt_queue.lock, *flags);
@@ -249,7 +249,7 @@ static inline void rps_unlock_irq_restore(struct softnet_data *sd,
local_irq_restore(*flags);
}
-static inline void rps_unlock_irq_enable(struct softnet_data *sd)
+static inline void backlog_unlock_irq_enable(struct softnet_data *sd)
{
if (IS_ENABLED(CONFIG_RPS) || use_backlog_threads())
spin_unlock_irq(&sd->input_pkt_queue.lock);
@@ -4742,12 +4742,12 @@ void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu)
unsigned long flags;
if (use_backlog_threads()) {
- rps_lock_irqsave(sd, &flags);
+ backlog_lock_irq_save(sd, &flags);
if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
__napi_schedule_irqoff(&sd->backlog);
- rps_unlock_irq_restore(sd, &flags);
+ backlog_unlock_irq_restore(sd, &flags);
} else if (!cmpxchg(&sd->defer_ipi_scheduled, 0, 1)) {
smp_call_function_single_async(cpu, &sd->defer_csd);
@@ -4809,7 +4809,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
reason = SKB_DROP_REASON_NOT_SPECIFIED;
sd = &per_cpu(softnet_data, cpu);
- rps_lock_irqsave(sd, &flags);
+ backlog_lock_irq_save(sd, &flags);
if (!netif_running(skb->dev))
goto drop;
qlen = skb_queue_len(&sd->input_pkt_queue);
@@ -4818,7 +4818,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
enqueue:
__skb_queue_tail(&sd->input_pkt_queue, skb);
input_queue_tail_incr_save(sd, qtail);
- rps_unlock_irq_restore(sd, &flags);
+ backlog_unlock_irq_restore(sd, &flags);
return NET_RX_SUCCESS;
}
@@ -4833,7 +4833,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
drop:
sd->dropped++;
- rps_unlock_irq_restore(sd, &flags);
+ backlog_unlock_irq_restore(sd, &flags);
dev_core_stats_rx_dropped_inc(skb->dev);
kfree_skb_reason(skb, reason);
@@ -5898,7 +5898,7 @@ static void flush_backlog(struct work_struct *work)
local_bh_disable();
sd = this_cpu_ptr(&softnet_data);
- rps_lock_irq_disable(sd);
+ backlog_lock_irq_disable(sd);
skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
if (skb->dev->reg_state == NETREG_UNREGISTERING) {
__skb_unlink(skb, &sd->input_pkt_queue);
@@ -5906,7 +5906,7 @@ static void flush_backlog(struct work_struct *work)
input_queue_head_incr(sd);
}
}
- rps_unlock_irq_enable(sd);
+ backlog_unlock_irq_enable(sd);
skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
if (skb->dev->reg_state == NETREG_UNREGISTERING) {
@@ -5924,14 +5924,14 @@ static bool flush_required(int cpu)
struct softnet_data *sd = &per_cpu(softnet_data, cpu);
bool do_flush;
- rps_lock_irq_disable(sd);
+ backlog_lock_irq_disable(sd);
/* as insertion into process_queue happens with the rps lock held,
* process_queue access may race only with dequeue
*/
do_flush = !skb_queue_empty(&sd->input_pkt_queue) ||
!skb_queue_empty_lockless(&sd->process_queue);
- rps_unlock_irq_enable(sd);
+ backlog_unlock_irq_enable(sd);
return do_flush;
#endif
@@ -6046,7 +6046,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
}
- rps_lock_irq_disable(sd);
+ backlog_lock_irq_disable(sd);
if (skb_queue_empty(&sd->input_pkt_queue)) {
/*
* Inline a custom version of __napi_complete().
@@ -6062,7 +6062,7 @@ static int process_backlog(struct napi_struct *napi, int quota)
skb_queue_splice_tail_init(&sd->input_pkt_queue,
&sd->process_queue);
}
- rps_unlock_irq_enable(sd);
+ backlog_unlock_irq_enable(sd);
}
return work;
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI
2024-03-05 11:53 [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Sebastian Andrzej Siewior
` (3 preceding siblings ...)
2024-03-05 11:53 ` [PATCH v4 net-next 4/4] net: Rename rps_lock to backlog_lock Sebastian Andrzej Siewior
@ 2024-03-05 12:07 ` Wander Lairson Costa
2024-03-05 12:17 ` Denis Kirjanov
2024-03-08 15:33 ` Sebastian Andrzej Siewior
5 siblings, 1 reply; 10+ messages in thread
From: Wander Lairson Costa @ 2024-03-05 12:07 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
Jesper Dangaard Brouer, Paolo Abeni, Thomas Gleixner, Yan Zhai
On Tue, Mar 05, 2024 at 12:53:18PM +0100, Sebastian Andrzej Siewior wrote:
> The RPS code and "deferred skb free" both send IPI/ function call
> to a remote CPU in which a softirq is raised. This leads to a warning on
> PREEMPT_RT because raising softiqrs from function call led to undesired
> behaviour in the past. I had duct tape in RT for the "deferred skb free"
> and Wander Lairson Costa reported the RPS case.
>
> This series only provides support for SMP threads for backlog NAPI, I
> did not attach a patch to make it default and remove the IPI related
> code to avoid confusion. I can post it for reference it asked.
>
> The RedHat performance team was so kind to provide some testing here.
> The series (with the IPI code removed) has been tested and no regression
> vs without the series has been found. For testing iperf3 was used on 25G
> interface, provided by mlx5, ix40e or ice driver and RPS was enabled. I
> can provide the individual test results if needed.
>
> Changes:
> - v3…v4 https://lore.kernel.org/all/20240228121000.526645-1-bigeasy@linutronix.de/
>
> - Rebase on top of current net-next, collect Acks.
>
> - Add struct softnet_data as an argument to kick_defer_list_purge().
>
> - Add sd_has_rps_ipi_waiting() check to napi_threaded_poll_loop() which was
> accidentally removed.
>
> - v2…v3 https://lore.kernel.org/all/20240221172032.78737-1-bigeasy@linutronix.de/
>
> - Move the "if use_backlog_threads()" case into the CONFIG_RPS block
> within napi_schedule_rps().
>
> - Use __napi_schedule_irqoff() instead of napi_schedule_rps() in
> kick_defer_list_purge().
>
> - v1…v2 https://lore.kernel.org/all/20230929162121.1822900-1-bigeasy@linutronix.de/
>
> - Patch #1 is new. It ensures that NAPI_STATE_SCHED_THREADED is always
> set (instead conditional based on task state) and the smboot thread
> logic relies on this bit now. In v1 NAPI_STATE_SCHED was used but is
> racy.
>
> - The defer list clean up is split out and also relies on
> NAPI_STATE_SCHED_THREADED. This fixes a different race.
>
> - RFC…v1 https://lore.kernel.org/all/20230814093528.117342-1-bigeasy@linutronix.de/
>
> - Patch #2 has been removed. Removing the warning is still an option.
>
> - There are two patches in the series:
> - Patch #1 always creates backlog threads
> - Patch #2 creates the backlog threads if requested at boot time,
> mandatory on PREEMPT_RT.
> So it is either/or, and I wanted to show what both look like.
>
> - The kernel test robot reported a performance regression with
> loopback (stress-ng --udp X --udp-ops Y) against the RFC version.
> The regression is now avoided by using local-NAPI if backlog
> processing is requested on the local CPU.
>
> Sebastian
>
Patch 0002 does not apply for me. I tried torvalds/master and
linux-rt-devel/linux-6.8.y-rt. Which tree should I use?
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI
2024-03-05 12:07 ` [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Wander Lairson Costa
@ 2024-03-05 12:17 ` Denis Kirjanov
0 siblings, 0 replies; 10+ messages in thread
From: Denis Kirjanov @ 2024-03-05 12:17 UTC (permalink / raw)
To: Wander Lairson Costa, Sebastian Andrzej Siewior
Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
Jesper Dangaard Brouer, Paolo Abeni, Thomas Gleixner, Yan Zhai
>>
>
> Patch 0002 does not apply for me. I tried torvalds/master and
> linux-rt-devel/linux-6.8.y-rt. Which tree should I use?
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
>
>
* Re: [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI
2024-03-05 11:53 [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Sebastian Andrzej Siewior
` (4 preceding siblings ...)
2024-03-05 12:07 ` [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Wander Lairson Costa
@ 2024-03-08 15:33 ` Sebastian Andrzej Siewior
2024-03-09 4:29 ` Jakub Kicinski
5 siblings, 1 reply; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-03-08 15:33 UTC (permalink / raw)
To: netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski,
Jesper Dangaard Brouer, Paolo Abeni, Thomas Gleixner,
Wander Lairson Costa, Yan Zhai
On 2024-03-05 12:53:18 [+0100], To netdev@vger.kernel.org wrote:
> The RPS code and "deferred skb free" both send IPI/ function call
> to a remote CPU in which a softirq is raised. This leads to a warning on
> PREEMPT_RT because raising softirqs from a function call led to undesired
> behaviour in the past. I had duct tape in RT for the "deferred skb free"
> and Wander Lairson Costa reported the RPS case.
>
> This series only provides support for SMP threads for backlog NAPI; I
> did not attach a patch to make it the default and remove the IPI-related
> code, to avoid confusion. I can post it for reference if asked.
>
> The Red Hat performance team was kind enough to provide some testing here.
> The series (with the IPI code removed) has been tested and no regression
> versus the baseline without the series was found. For testing, iperf3 was
> used on a 25G interface driven by the mlx5, i40e or ice driver, with RPS
> enabled. I can provide the individual test results if needed.
>
> Changes:
> - v3…v4 https://lore.kernel.org/all/20240228121000.526645-1-bigeasy@linutronix.de/
The v4 is marked as "Changes Requested". Is there anything for me to do?
I've been asked to rebase v3 on top of net-next which I did with v4. It
still applies onto net-next as of today.
Sebastian
* Re: [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI
2024-03-08 15:33 ` Sebastian Andrzej Siewior
@ 2024-03-09 4:29 ` Jakub Kicinski
2024-03-09 9:09 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 10+ messages in thread
From: Jakub Kicinski @ 2024-03-09 4:29 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: netdev, David S. Miller, Eric Dumazet, Jesper Dangaard Brouer,
Paolo Abeni, Thomas Gleixner, Wander Lairson Costa, Yan Zhai
On Fri, 8 Mar 2024 16:33:02 +0100 Sebastian Andrzej Siewior wrote:
> The v4 is marked as "Changes Requested". Is there anything for me to do?
> I've been asked to rebase v3 on top of net-next which I did with v4. It
> still applies onto net-next as of today.
Hm, I tried to apply it and it doesn't apply; are you sure you fetched?
Big set of changes from Eric got applied last night.
* Re: [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI
2024-03-09 4:29 ` Jakub Kicinski
@ 2024-03-09 9:09 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-03-09 9:09 UTC (permalink / raw)
To: Jakub Kicinski
Cc: netdev, David S. Miller, Eric Dumazet, Jesper Dangaard Brouer,
Paolo Abeni, Thomas Gleixner, Wander Lairson Costa, Yan Zhai
On 2024-03-08 20:29:54 [-0800], Jakub Kicinski wrote:
> On Fri, 8 Mar 2024 16:33:02 +0100 Sebastian Andrzej Siewior wrote:
> > The v4 is marked as "Changes Requested". Is there anything for me to do?
> > I've been asked to rebase v3 on top of net-next which I did with v4. It
> > still applies onto net-next as of today.
>
> Hm, I tried to apply and it doesn't, sure you fetched?
> Big set of changes from Eric got applied last night.
So git merge did fine, but the individual import failed due to the recent
changes. I have now rebased it on top of
d7e14e5344933 ("Merge tag 'mlx5-socket-direct-v3' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux")
and reposted as of 20240309090824.2956805-1-bigeasy@linutronix.de.
Sebastian
end of thread, other threads:[~2024-03-09 9:09 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
2024-03-05 11:53 [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 1/4] net: Remove conditional threaded-NAPI wakeup based on task state Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 2/4] net: Allow to use SMP threads for backlog NAPI Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 3/4] net: Use backlog-NAPI to clean up the defer_list Sebastian Andrzej Siewior
2024-03-05 11:53 ` [PATCH v4 net-next 4/4] net: Rename rps_lock to backlog_lock Sebastian Andrzej Siewior
2024-03-05 12:07 ` [PATCH v4 net-next 0/4] net: Provide SMP threads for backlog NAPI Wander Lairson Costa
2024-03-05 12:17 ` Denis Kirjanov
2024-03-08 15:33 ` Sebastian Andrzej Siewior
2024-03-09 4:29 ` Jakub Kicinski
2024-03-09 9:09 ` Sebastian Andrzej Siewior