netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/6] Suspend IRQs during application busy periods
@ 2024-08-23 17:30 Joe Damato
  2024-08-23 17:30 ` [PATCH net-next 1/6] net: Add sysfs parameter irq_suspend_timeout Joe Damato
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Joe Damato @ 2024-08-23 17:30 UTC (permalink / raw)
  To: netdev
  Cc: amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Joe Damato,
	Alexander Lobakin, Alexander Viro,
	open list:BPF [MISC]:Keyword:(?:b|_)bpf(?:b|_), Breno Leitao,
	Christian Brauner, Daniel Borkmann, David S. Miller, Eric Dumazet,
	Jan Kara, Jiri Pirko, Johannes Berg, Jonathan Corbet,
	open list:DOCUMENTATION,
	open list:FILESYSTEMS (VFS and infrastructure), open list,
	Lorenzo Bianconi, Martin Karsten, Paolo Abeni,
	Sebastian Andrzej Siewior

Greetings:

This series introduces a new mechanism, IRQ suspension, which allows
network applications using epoll to mask IRQs during periods of high
traffic while also reducing tail latency (compared to existing
mechanisms, see below) during periods of low traffic. In doing so, this
balances CPU consumption with network processing efficiency.

Martin Karsten (CC'd) and I have been collaborating on this series for
several months and have appreciated the feedback from the community on
our RFC [1]. We've updated the cover letter and kernel documentation in
an attempt to more clearly explain how this mechanism works, how
applications can use it, and how it compares to existing mechanisms in
the kernel. We've added an additional test case, 'fullbusy', achieved by
modifying libevent for comparison. See below for a detailed description,
link to the patch, and test results.

I briefly mentioned this idea at netdev conf 2024 (for those who were
there) and Martin described this idea in an earlier paper presented at
Sigmetrics 2024 [2].

~ The short explanation (TL;DR)

We propose adding a new sysfs parameter, irq_suspend_timeout, to help
balance CPU usage and network processing efficiency when using IRQ
deferral and napi busy poll.

If this parameter is set to a non-zero value *and* a user application
has enabled preferred busy poll on a busy poll context (via the
EPIOCSPARAMS ioctl introduced in commit 18e2bf0edf4d ("eventpoll: Add
epoll ioctl for epoll_params")), then application calls to epoll_wait
for that context will cause device IRQs and softirq processing to be
suspended as long as epoll_wait successfully retrieves data from the
NAPI. Each time data is retrieved, the irq_suspend_timeout is deferred.

If/when network traffic subsides and epoll_wait returns no data, IRQ
suspension immediately reverts to the existing napi_defer_hard_irqs and
gro_flush_timeout mechanism, which was introduced in commit 6f8b12d661d0
("net: napi: add hard irqs deferral feature").

The irq_suspend_timeout serves as a safety mechanism. If userland takes
a long time processing data, irq_suspend_timeout will fire and restart
normal NAPI processing.

For a more in-depth explanation, please continue reading.

~ Comparison with existing mechanisms

Interrupt mitigation can be accomplished in napi software, by setting
napi_defer_hard_irqs and gro_flush_timeout, or via interrupt coalescing
in the NIC. This can be quite efficient, but in both cases, a fixed
timeout (or packet count) needs to be configured. However, a fixed
timeout cannot effectively support both low- and high-load situations:

At low load, an application typically processes a few requests and then
waits to receive more input data. In this scenario, a large timeout will
cause unnecessary latency.

At high load, an application typically processes many requests before
being ready to receive more input data. In this case, a small timeout
will likely fire prematurely and trigger irq/softirq processing, which
interferes with the application's execution. This causes overhead, most
likely due to cache contention.

While NICs attempt to provide adaptive interrupt coalescing schemes,
these cannot properly take into account application-level processing.

An alternative packet delivery mechanism is busy-polling, which results
in perfect alignment of application processing and network polling. It
delivers optimal performance (throughput and latency), but results in
100% cpu utilization and is thus inefficient for below-capacity
workloads.

We propose to add a new packet delivery mode that properly alternates
between busy polling and interrupt-based delivery depending on busy and
idle periods of the application. During a busy period, the system
operates in busy-polling mode, which avoids interference. During an idle
period, the system falls back to interrupt deferral, but with a small
timeout to avoid excessive latencies. This delivery mode can also be
viewed as an extension of basic interrupt deferral, but alternating
between a small and a very large timeout.

This delivery mode is efficient, because it avoids softirq execution
interfering with application processing during busy periods. It can be
used with blocking epoll_wait to conserve cpu cycles during idle
periods. The effect of alternating between busy and idle periods is that
performance (throughput and latency) is very close to full busy polling,
while cpu utilization is lower and very close to interrupt mitigation.

~ Usage details

IRQ suspension is introduced via a sysfs parameter that controls the
maximum time that IRQs can be suspended.

Here's how it is intended to work:
  - An administrator sets the existing sysfs parameters for
    napi_defer_hard_irqs and gro_flush_timeout to enable IRQ deferral.

  - An administrator sets the new sysfs parameter irq_suspend_timeout
    to a larger value than gro_flush_timeout to enable IRQ suspension.

  - The user application issues the existing epoll ioctl to set the
    prefer_busy_poll flag on the epoll context.

  - The user application then calls epoll_wait to busy poll for network
    events, as it normally would.

  - If epoll_wait returns events to userland, IRQs are suspended for the
    duration of irq_suspend_timeout.

  - If epoll_wait finds no events and the thread is about to go to
    sleep, IRQ handling using napi_defer_hard_irqs and gro_flush_timeout
    is resumed.
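The application-side steps above can be sketched as follows. This is an
illustrative sketch, not code from this series: struct epoll_params and
EPIOCSPARAMS come from the existing epoll ioctl uapi (commit
18e2bf0edf4d), and the fallback definitions are only there so the sketch
builds against older headers.

```c
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Recent kernel headers provide these via <linux/eventpoll.h>; define a
 * fallback so this sketch builds against older headers. */
#ifndef EPIOCSPARAMS
struct epoll_params {
	uint32_t busy_poll_usecs;
	uint16_t busy_poll_budget;
	uint8_t prefer_busy_poll;
	uint8_t __pad; /* must be zero */
};
#define EPOLL_IOC_TYPE 0x8A
#define EPIOCSPARAMS _IOW(EPOLL_IOC_TYPE, 0x01, struct epoll_params)
#endif

/* Set prefer_busy_poll on an epoll context; busy_poll_usecs = 0 still
 * yields one napi busy-poll iteration per epoll_wait (see patch 4).
 * Returns 0 on success, -1 with errno set (e.g. ENOTTY on kernels
 * without the epoll ioctl). */
int enable_prefer_busy_poll(int epfd)
{
	struct epoll_params p;

	memset(&p, 0, sizeof(p));
	p.busy_poll_usecs = 0;
	p.busy_poll_budget = 64;
	p.prefer_busy_poll = 1;
	return ioctl(epfd, EPIOCSPARAMS, &p);
}
```

After this, the application calls epoll_wait as it normally would; the
sysfs knobs (napi_defer_hard_irqs, gro_flush_timeout,
irq_suspend_timeout) are set by the administrator beforehand under
/sys/class/net/<iface>/.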

As long as epoll_wait is retrieving events, IRQs (and softirq
processing) for the NAPI being polled remain disabled. When network
traffic reduces, eventually a busy poll loop in the kernel will retrieve
no data. When this occurs, regular IRQ deferral using gro_flush_timeout
for the polled NAPI is re-enabled.

Unless IRQ suspension is continued by subsequent calls to epoll_wait, it
automatically times out after the irq_suspend_timeout timer expires.
Regular deferral is also immediately re-enabled when the epoll context
is destroyed.

~ Usage scenario

The target scenario for IRQ suspension as packet delivery mode is a
system that runs a dominant application with substantial network I/O.
The target application can be configured to receive input data up to a
certain batch size (via epoll_wait maxevents parameter) and this batch
size determines the worst-case latency that application requests might
experience. Because packet delivery is suspended during the target
application's processing, the batch size also determines the worst-case
latency of concurrent applications.

gro_flush_timeout should be set as small as possible, but large enough
that a single request is unlikely to be interfered with.

irq_suspend_timeout is largely a safety mechanism against misbehaving
applications. It should be set large enough to cover the processing of an
entire application batch, i.e., the factor between gro_flush_timeout and
irq_suspend_timeout should roughly correspond to the maximum batch size
that the target application would process in one go.
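As a back-of-the-envelope illustration of that factor (the helper name
and function are ours, not part of this series):

```c
/* irq_suspend_timeout should roughly cover one full application batch:
 * max_batch (the epoll_wait maxevents value) times gro_flush_timeout,
 * both expressed in nanoseconds. Illustrative only. */
unsigned long irq_suspend_timeout_ns(unsigned long gro_flush_ns,
				     unsigned long max_batch)
{
	return gro_flush_ns * max_batch;
}
```

For example, the suspend20 benchmark configuration below pairs
gro_flush_timeout = 20,000ns with irq_suspend_timeout = 20,000,000ns,
corresponding to a maximum batch of roughly 1000 events.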

~ Design rationale

The implementation of the IRQ suspension mechanism very nicely dovetails
with the existing mechanism for IRQ deferral when preferred busy poll is
enabled (introduced in commit 7fd3253a7de6 ("net: Introduce preferred
busy-polling"), see that commit message for more details).
  
The existing IRQ deferral mechanism works together with our proposal,
and as such, it seems natural to put irq_suspend_timeout at the same
level as napi_defer_hard_irqs and gro_flush_timeout, as a per-device
sysfs parameter, with the hope that these parameters will eventually be
migrated to per-napi settings.

While it would be possible to inject the suspend timeout via
the existing epoll ioctl, it is more natural to avoid this path for two
reasons:
  
1. Using a sysfs parameter ensures admin oversight for using the
   mechanism and its configuration based on overall system objectives
  
2. An epoll context is linked to NAPI IDs as file descriptors are added;
   this means any epoll context might suddenly be associated with a
   different net_device if the application were to replace all existing
   fds with fds from a different device. In this case, the scope of the
   suspend timeout becomes unclear and many edge cases for both the user
   application and the kernel are introduced

Only a single iteration through napi busy polling is needed for this
mechanism to work effectively. Since an important objective for this
mechanism is preserving cpu cycles, exactly one iteration of the napi
busy loop is invoked when busy_poll_usecs is set to 0.

~ Important call outs in the implementation

  - Enabling per epoll-context preferred busy poll will now effectively
    lead to a nonblocking iteration through napi_busy_loop, even when
    busy_poll_usecs is 0. See patch 4.

  - Patches apply cleanly on commit d785ed945de6 ("net: wwan: t7xx: PCIe
    reset rescan"). When commit b4988e3bd1f0 ("eventpoll: Annotate
    data-race of busy_poll_usecs") is picked up from the vfs folks, there
    will be a very minor merge conflict with patch 4. Patch 4 already
    includes the fix picked up by vfs, but modifies the same line by
    adding an extra conditional.

  - In the future, time permitting, I hope to enable support for
    napi_defer_hard_irqs, gro_flush_timeout (introduced in commit
    6f8b12d661d0 ("net: napi: add hard irqs deferral feature")), and
    irq_suspend_timeout (introduced in this series) on a per-NAPI basis
    (presumably via netdev-genl).

~ Benchmark configs & descriptions

The changes were benchmarked with memcached [3] using the benchmarking
tool mutilate [4].

To facilitate benchmarking, a small patch [5] was applied to memcached
1.6.29 (the latest memcached release as of this submission) to allow
setting per-epoll context preferred busy poll and other settings via
environment variables. Another small patch [6] was applied to libevent
to enable full busy-polling.

Multiple scenarios were benchmarked as described below and the scripts
used for producing these results can be found on github [7] (note: all
scenarios use NAPI-based traffic splitting via SO_INCOMING_NAPI_ID by
passing -N to memcached):

  - base:
    - no other options enabled
  - deferX:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to X,000
  - napibusy:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to 200,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 64,
      busy_poll_budget = 64, prefer_busy_poll = true)
  - fullbusy:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to 5,000,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 1000,
      busy_poll_budget = 64, prefer_busy_poll = true)
    - change memcached's nonblocking epoll_wait invocation (via
      libevent) to using a 1 ms timeout
  - suspendX:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to X,000
    - set irq_suspend_timeout to 20,000,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 0,
      busy_poll_budget = 64, prefer_busy_poll = true)
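In terms of the struct passed to the existing EPIOCSPARAMS ioctl, the
busy-poll settings of the three ioctl-based scenarios above look roughly
like this (the struct is redeclared here purely for illustration; it
mirrors the uapi struct epoll_params layout):

```c
#include <stdint.h>

/* Mirrors the uapi struct epoll_params (linux/eventpoll.h); redeclared
 * here for illustration. */
struct epoll_params_sketch {
	uint32_t busy_poll_usecs;
	uint16_t busy_poll_budget;
	uint8_t prefer_busy_poll;
	uint8_t __pad;
};

/* napibusy: a short 64us busy-poll window */
const struct epoll_params_sketch napibusy_params = { 64, 64, 1, 0 };

/* fullbusy: a 1000us window, combined with a 1 ms epoll_wait timeout */
const struct epoll_params_sketch fullbusy_params = { 1000, 64, 1, 0 };

/* suspendX: busy_poll_usecs = 0 gives exactly one napi poll iteration
 * per epoll_wait, which is all irq suspension needs */
const struct epoll_params_sketch suspend_params = { 0, 64, 1, 0 };
```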

~ Benchmark results

Tested on:

Single socket AMD EPYC 7662 64-Core Processor
Hyperthreading disabled
4 NUMA Zones (NPS=4)
16 CPUs per NUMA zone (64 cores total)
2 x Dual port 100gbps Mellanox Technologies ConnectX-5 Ex EN NIC

The test machine is configured such that a single interface has 8 RX
queues. The queues' IRQs and memcached are pinned to CPUs that are
NUMA-local to the interface which is under test. The NIC's interrupt
coalescing configuration is left at boot-time defaults.

Results:

Results are shown below. The mechanism added by this series is
represented by the 'suspend' cases. Data presented shows a summary over
10 runs of each test case [8] using the scripts on github [7]. For
latency, the median of the 10 runs is shown. For throughput and CPU
utilization, the average is shown.

These results were captured using the scripts on github [7] to
illustrate how this approach compares with other pre-existing
mechanisms. This data is not to be interpreted as scientific data
captured in a fully isolated lab setting, but instead as best effort,
illustrative information comparing and contrasting tradeoffs.

Compare:
- Throughput (MAX) and latencies of base vs suspend.
- CPU usage of napibusy and fullbusy during lower load (200K, 400K for
  example) vs suspend.
- Latency of the defer variants vs suspend as timeout and load
  increases.

The overall takeaway is that the suspend variants provide a superior
combination of high throughput, low latency, and low cpu utilization
compared to all other variants. Each of the suspend variants works very
well, but some fine-tuning between latency and cpu utilization is still
possible by tuning the small timeout (gro_flush_timeout).

base
  load     qps  avglat  95%lat  99%lat     cpu
  200K  200042     113     234     416      28
  400K  399983     141     271     730      42
  600K  599951     162     386     712      64
  800K  800057     350    1151    2278      84
 1000K  962651    4168    5980    7611      99
   MAX  982241    4426    5809    7317      99

defer10
  load     qps  avglat  95%lat  99%lat     cpu
  200K  200025      53     119     140      31
  400K  400064      59     131     156      53
  600K  599930      72     146     204      76
  800K  800002     656    3236    5455      97
 1000K  887890    4806    6288    8568      99
   MAX  914970    4863    6008    6361     100

defer20
  load     qps  avglat  95%lat  99%lat     cpu
  200K  199971      59     122     149      26
  400K  400029      67     139     167      46
  600K  599985      78     152     195      68
  800K  799931     250    1009    2481      89
 1000K  896278    4662    5876    6422      99
   MAX  966120    4549    5652    6076     100

defer50
  load     qps  avglat  95%lat  99%lat     cpu
  200K  200005      78     133     175      22
  400K  400000      88     159     192      39
  600K  600067      97     173     214      58
  800K  799933     177     519    1302      81
 1000K  948784    4378    6078    8941      98
   MAX 1015637    4358    5354    6365     100

defer200
  load     qps  avglat  95%lat  99%lat     cpu
  200K  200025     164     255     304      18
  400K  399964     182     276     331      34
  600K  599924     204     303     361      48
  800K  800013     253     401     727      73
 1000K  979727    3380    5690    7648      98
   MAX 1022960    4322    5598    6652     100

fullbusy
  load     qps  avglat  95%lat  99%lat     cpu
  200K  200000      47     113     129     100
  400K  400071      51     123     148     100
  600K  599942      58     128     176     100
  800K  799966      65     138     191     100
 1000K 1000099      83     167     233     100
   MAX 1159578    3797    4281    4371     100

napibusy
  load     qps  avglat  95%lat  99%lat     cpu
  200K  200022     100     238     272      56
  400K  399897      77     223     273      83
  600K  600077      65     158     255      96
  800K  800007      74     147     237      99
 1000K  999992      88     173     242     100
   MAX 1067116    4060    5903   10231      99

suspend10
  load     qps  avglat  95%lat  99%lat     cpu
  200K  199984      51     119     139      32
  400K  400077      56     127     153      51
  600K  599944      64     137     190      69
  800K  800012      71     145     200      84
 1000K 1000031      95     183     300      94
   MAX 1146106    3852    4313    4462     100

suspend20
  load     qps  avglat  95%lat  99%lat     cpu
  200K  199966      57     120     146      29
  400K  400016      61     133     158      47
  600K  599997      68     141     187      65
  800K  800057      77     153     210      81
 1000K 1000059     107     191     397      92
   MAX 1150835    3813    4284    4373     100

suspend50
  load     qps  avglat  95%lat  99%lat     cpu
  200K  199940      72     125     170      25
  400K  399981      76     145     184      42
  600K  600054      82     157     201      58
  800K  799940      93     175     352      75
 1000K 1000000     108     200     269      89
   MAX 1139291    3834    4330    4383     100

suspend200
  load     qps  avglat  95%lat  99%lat     cpu
  200K  199983     149     250     298      19
  400K  399968     155     270     326      35
  600K  599982     159     285     348      51
  800K  800079     162     297     361      67
 1000K  999944     175     311     392      84
   MAX 1128863    3891    4334    4383     100

~ FAQ

  - Can the new timeout value be threaded through the new epoll ioctl ?

    Only with difficulty. The epoll ioctl sets options on an epoll
    context and the NAPI ID associated with an epoll context can change
    based on what file descriptors a user app adds to the epoll context.
    This would introduce complexity in the API from the user perspective
    and also complexity in the kernel.

    This new sysfs parameter, which is similar to and used in
    combination with napi_defer_hard_irqs and gro_flush_timeout, should
    be exposed to users in the same way: via sysfs.

    At some point in the future, per-NAPI support for
    napi_defer_hard_irqs, gro_flush_timeout, and irq_suspend_timeout
    (presumably via netdev-genl) can be added.

  - Can irq suspend be built by combining NIC coalescing and
    gro_flush_timeout ?

    No. The problem is that the long timeout must engage if and only if
    prefer-busy is active.

    When using NIC coalescing for the short timeout (without
    napi_defer_hard_irqs/gro_flush_timeout), an interrupt after an idle
    period will trigger softirq, which will run napi polling. At this
    point, prefer-busy is not active, so NIC interrupts would be
    re-enabled. Then it is not possible for the longer timeout to
    interject to switch control back to polling. In other words, only by
    using the software timer for the short timeout is it possible to
    extend the timeout without having to reprogram the NIC timer or
    reach down and directly disable interrupts.

    Using gro_flush_timeout for the long timeout also has problems, for
    the same underlying reason. In the current napi implementation,
    gro_flush_timeout is not tied to prefer-busy. We'd either have to
    change that and in the process modify the existing deferral
    mechanism, or introduce a state variable to determine whether
    gro_flush_timeout is used as long timeout for irq suspend or whether
    it is used for its default purpose. In an earlier version, we did
    try something similar to the latter and made it work, but it ends up
    being a lot more convoluted than our current proposal.

  - Isn't it already possible to combine busy looping with irq deferral?

    Yes, in fact enabling irq deferral via napi_defer_hard_irqs and
    gro_flush_timeout is a precondition for prefer_busy_poll to have an
    effect. If the application also uses a tight busy loop with
    essentially nonblocking epoll_wait (accomplished with a very short
    timeout parameter), this is the fullbusy case shown in the results.
    An application using blocking epoll_wait is shown as the napibusy
    case in the results. It is a hybrid approach that provides limited
    latency benefits compared to the base case and plain irq deferral,
    but not as good as fullbusy or suspend.

~ Special thanks

Several people were involved in earlier stages of the development of this
mechanism whom we'd like to thank:

  - Peter Cai (CC'd), for the initial kernel patch and his contributions
    to the paper.
    
  - Mohammadamin Shafie (CC'd), for testing various versions of the kernel
    patch and providing helpful feedback.

Thanks,
Martin and Joe

[1]: https://lore.kernel.org/netdev/20240812125717.413108-1-jdamato@fastly.com/
[2]: https://doi.org/10.1145/3626780
[3]: https://github.com/memcached/memcached/blob/master/doc/napi_ids.txt
[4]: https://github.com/leverich/mutilate
[5]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/memcached.patch
[6]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/libevent.patch
[7]: https://github.com/martinkarsten/irqsuspend
[8]: https://github.com/martinkarsten/irqsuspend/tree/main/results

rfc -> v1:
  - Cover letter updated to include more details.
  - Patch 1 updated to remove the documentation added. This was moved to
    patch 6 with the rest of the docs (see below).
  - Patch 5 updated to fix an error uncovered by the kernel build robot.
    See patch 5's changelog for more details.
  - Patch 6 added which updates kernel documentation.

Joe Damato (1):
  docs: networking: Describe irq suspension

Martin Karsten (5):
  net: Add sysfs parameter irq_suspend_timeout
  net: Suspend softirq when prefer_busy_poll is set
  net: Add control functions for irq suspension
  eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set
  eventpoll: Control irq suspension for prefer_busy_poll

 Documentation/networking/napi.rst | 112 +++++++++++++++++++++++++++++-
 fs/eventpoll.c                    |  34 ++++++++-
 include/linux/netdevice.h         |   2 +
 include/net/busy_poll.h           |   3 +
 net/core/dev.c                    |  53 ++++++++++++--
 net/core/net-sysfs.c              |  18 +++++
 6 files changed, 213 insertions(+), 9 deletions(-)

-- 
2.25.1



* [PATCH net-next 1/6] net: Add sysfs parameter irq_suspend_timeout
  2024-08-23 17:30 [PATCH net-next 0/6] Suspend IRQs during application busy periods Joe Damato
@ 2024-08-23 17:30 ` Joe Damato
  2024-08-23 17:39   ` Eric Dumazet
  2024-08-23 17:30 ` [PATCH net-next 2/6] net: Suspend softirq when prefer_busy_poll is set Joe Damato
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Joe Damato @ 2024-08-23 17:30 UTC (permalink / raw)
  To: netdev
  Cc: amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, David S. Miller, Eric Dumazet, Paolo Abeni,
	Jiri Pirko, Sebastian Andrzej Siewior, Lorenzo Bianconi,
	Daniel Borkmann, Breno Leitao, Johannes Berg, Heiner Kallweit,
	open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

This patch doesn't change any behavior but prepares the code for other
changes in the following commits which use irq_suspend_timeout as a
timeout for IRQ suspension.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 rfc -> v1:
   - Removed napi.rst documentation from this patch; added to patch 6.

 include/linux/netdevice.h |  2 ++
 net/core/dev.c            |  3 ++-
 net/core/net-sysfs.c      | 18 ++++++++++++++++++
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0ef3eaa23f4b..31867bb2ff65 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1857,6 +1857,7 @@ enum netdev_reg_state {
  *	@gro_flush_timeout:	timeout for GRO layer in NAPI
  *	@napi_defer_hard_irqs:	If not zero, provides a counter that would
  *				allow to avoid NIC hard IRQ, on busy queues.
+ *	@irq_suspend_timeout:	IRQ suspension timeout
  *
  *	@rx_handler:		handler for received packets
  *	@rx_handler_data: 	XXX: need comments on this one
@@ -2060,6 +2061,7 @@ struct net_device {
 	struct netdev_rx_queue	*_rx;
 	unsigned long		gro_flush_timeout;
 	int			napi_defer_hard_irqs;
+	unsigned long		irq_suspend_timeout;
 	unsigned int		gro_max_size;
 	unsigned int		gro_ipv4_max_size;
 	rx_handler_func_t __rcu	*rx_handler;
diff --git a/net/core/dev.c b/net/core/dev.c
index e7260889d4cb..3bf325ec25a3 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -11945,6 +11945,7 @@ static void __init net_dev_struct_check(void)
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, _rx);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_flush_timeout);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, napi_defer_hard_irqs);
+	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, irq_suspend_timeout);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_max_size);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_ipv4_max_size);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, rx_handler);
@@ -11956,7 +11957,7 @@ static void __init net_dev_struct_check(void)
 #ifdef CONFIG_NET_XGRESS
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, tcx_ingress);
 #endif
-	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_rx, 104);
+	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_rx, 112);
 }
 
 /*
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 0e2084ce7b75..fb6f3327310f 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -440,6 +440,23 @@ static ssize_t napi_defer_hard_irqs_store(struct device *dev,
 }
 NETDEVICE_SHOW_RW(napi_defer_hard_irqs, fmt_dec);
 
+static int change_irq_suspend_timeout(struct net_device *dev, unsigned long val)
+{
+	WRITE_ONCE(dev->irq_suspend_timeout, val);
+	return 0;
+}
+
+static ssize_t irq_suspend_timeout_store(struct device *dev,
+					 struct device_attribute *attr,
+					 const char *buf, size_t len)
+{
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
+
+	return netdev_store(dev, attr, buf, len, change_irq_suspend_timeout);
+}
+NETDEVICE_SHOW_RW(irq_suspend_timeout, fmt_ulong);
+
 static ssize_t ifalias_store(struct device *dev, struct device_attribute *attr,
 			     const char *buf, size_t len)
 {
@@ -664,6 +681,7 @@ static struct attribute *net_class_attrs[] __ro_after_init = {
 	&dev_attr_tx_queue_len.attr,
 	&dev_attr_gro_flush_timeout.attr,
 	&dev_attr_napi_defer_hard_irqs.attr,
+	&dev_attr_irq_suspend_timeout.attr,
 	&dev_attr_phys_port_id.attr,
 	&dev_attr_phys_port_name.attr,
 	&dev_attr_phys_switch_id.attr,
-- 
2.25.1



* [PATCH net-next 2/6] net: Suspend softirq when prefer_busy_poll is set
  2024-08-23 17:30 [PATCH net-next 0/6] Suspend IRQs during application busy periods Joe Damato
  2024-08-23 17:30 ` [PATCH net-next 1/6] net: Add sysfs parameter irq_suspend_timeout Joe Damato
@ 2024-08-23 17:30 ` Joe Damato
  2024-08-23 17:30 ` [PATCH net-next 3/6] net: Add control functions for irq suspension Joe Damato
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Joe Damato @ 2024-08-23 17:30 UTC (permalink / raw)
  To: netdev
  Cc: amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, David S. Miller, Eric Dumazet, Paolo Abeni,
	Jiri Pirko, Sebastian Andrzej Siewior, Lorenzo Bianconi,
	Daniel Borkmann, open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

When NAPI_F_PREFER_BUSY_POLL is set during busy_poll_stop and the
irq_suspend_timeout sysfs is nonzero, this timeout is used to defer
softirq scheduling, potentially longer than gro_flush_timeout. This can
be used to effectively suspend softirq processing during the time it
takes for an application to process data and return to the next
busy-poll.

The call to napi->poll in busy_poll_stop might lead to an invocation of
napi_complete_done, but the prefer-busy flag is still set at that time,
so the same logic is used to defer softirq scheduling for
irq_suspend_timeout.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 net/core/dev.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 3bf325ec25a3..74060ba866d4 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6230,7 +6230,12 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
 			timeout = READ_ONCE(n->dev->gro_flush_timeout);
 		n->defer_hard_irqs_count = READ_ONCE(n->dev->napi_defer_hard_irqs);
 	}
-	if (n->defer_hard_irqs_count > 0) {
+	if (napi_prefer_busy_poll(n)) {
+		timeout = READ_ONCE(n->dev->irq_suspend_timeout);
+		if (timeout)
+			ret = false;
+	}
+	if (ret && n->defer_hard_irqs_count > 0) {
 		n->defer_hard_irqs_count--;
 		timeout = READ_ONCE(n->dev->gro_flush_timeout);
 		if (timeout)
@@ -6366,9 +6371,13 @@ static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock,
 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 
 	if (flags & NAPI_F_PREFER_BUSY_POLL) {
-		napi->defer_hard_irqs_count = READ_ONCE(napi->dev->napi_defer_hard_irqs);
-		timeout = READ_ONCE(napi->dev->gro_flush_timeout);
-		if (napi->defer_hard_irqs_count && timeout) {
+		timeout = READ_ONCE(napi->dev->irq_suspend_timeout);
+		if (!timeout) {
+			napi->defer_hard_irqs_count = READ_ONCE(napi->dev->napi_defer_hard_irqs);
+			if (napi->defer_hard_irqs_count)
+				timeout = READ_ONCE(napi->dev->gro_flush_timeout);
+		}
+		if (timeout) {
 			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
 			skip_schedule = true;
 		}
-- 
2.25.1



* [PATCH net-next 3/6] net: Add control functions for irq suspension
  2024-08-23 17:30 [PATCH net-next 0/6] Suspend IRQs during application busy periods Joe Damato
  2024-08-23 17:30 ` [PATCH net-next 1/6] net: Add sysfs parameter irq_suspend_timeout Joe Damato
  2024-08-23 17:30 ` [PATCH net-next 2/6] net: Suspend softirq when prefer_busy_poll is set Joe Damato
@ 2024-08-23 17:30 ` Joe Damato
  2024-08-23 17:56   ` Eric Dumazet
  2024-08-23 18:14   ` Eric Dumazet
  2024-08-23 17:30 ` [PATCH net-next 4/6] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 13+ messages in thread
From: Joe Damato @ 2024-08-23 17:30 UTC (permalink / raw)
  To: netdev
  Cc: amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, David S. Miller, Eric Dumazet, Paolo Abeni,
	Jiri Pirko, Sebastian Andrzej Siewior, Lorenzo Bianconi,
	Daniel Borkmann, open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

The napi_suspend_irqs routine bootstraps irq suspension by elongating
the defer timeout to irq_suspend_timeout.

The napi_resume_irqs routine effectively cancels irq suspension by
forcing the napi to be scheduled immediately.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 include/net/busy_poll.h |  3 +++
 net/core/dev.c          | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
index 9b09acac538e..f095b2bdeee1 100644
--- a/include/net/busy_poll.h
+++ b/include/net/busy_poll.h
@@ -52,6 +52,9 @@ void napi_busy_loop_rcu(unsigned int napi_id,
 			bool (*loop_end)(void *, unsigned long),
 			void *loop_end_arg, bool prefer_busy_poll, u16 budget);
 
+void napi_suspend_irqs(unsigned int napi_id);
+void napi_resume_irqs(unsigned int napi_id);
+
 #else /* CONFIG_NET_RX_BUSY_POLL */
 static inline unsigned long net_busy_loop_on(void)
 {
diff --git a/net/core/dev.c b/net/core/dev.c
index 74060ba866d4..4de0dfc86e21 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6507,6 +6507,39 @@ void napi_busy_loop(unsigned int napi_id,
 }
 EXPORT_SYMBOL(napi_busy_loop);
 
+void napi_suspend_irqs(unsigned int napi_id)
+{
+	struct napi_struct *napi;
+
+	rcu_read_lock();
+	napi = napi_by_id(napi_id);
+	if (napi) {
+		unsigned long timeout = READ_ONCE(napi->dev->irq_suspend_timeout);
+
+		if (timeout)
+			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(napi_suspend_irqs);
+
+void napi_resume_irqs(unsigned int napi_id)
+{
+	struct napi_struct *napi;
+
+	rcu_read_lock();
+	napi = napi_by_id(napi_id);
+	if (napi) {
+		if (READ_ONCE(napi->dev->irq_suspend_timeout)) {
+			local_bh_disable();
+			napi_schedule(napi);
+			local_bh_enable();
+		}
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(napi_resume_irqs);
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 static void napi_hash_add(struct napi_struct *napi)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 4/6] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set
  2024-08-23 17:30 [PATCH net-next 0/6] Suspend IRQs during application busy periods Joe Damato
                   ` (2 preceding siblings ...)
  2024-08-23 17:30 ` [PATCH net-next 3/6] net: Add control functions for irq suspension Joe Damato
@ 2024-08-23 17:30 ` Joe Damato
  2024-08-23 17:30 ` [PATCH net-next 5/6] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
  2024-08-23 17:30 ` [PATCH net-next 6/6] docs: networking: Describe irq suspension Joe Damato
  5 siblings, 0 replies; 13+ messages in thread
From: Joe Damato @ 2024-08-23 17:30 UTC (permalink / raw)
  To: netdev
  Cc: amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, Alexander Viro, Christian Brauner, Jan Kara,
	open list:FILESYSTEMS (VFS and infrastructure), open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

Setting prefer_busy_poll now leads to an effectively nonblocking
iteration through napi_busy_loop, even when busy_poll_usecs is 0.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 fs/eventpoll.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index f53ca4f7fced..cc47f72005ed 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -420,7 +420,9 @@ static bool busy_loop_ep_timeout(unsigned long start_time,
 
 static bool ep_busy_loop_on(struct eventpoll *ep)
 {
-	return !!ep->busy_poll_usecs || net_busy_loop_on();
+	return !!READ_ONCE(ep->busy_poll_usecs) ||
+	       READ_ONCE(ep->prefer_busy_poll) ||
+	       net_busy_loop_on();
 }
 
 static bool ep_busy_loop_end(void *p, unsigned long start_time)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 5/6] eventpoll: Control irq suspension for prefer_busy_poll
  2024-08-23 17:30 [PATCH net-next 0/6] Suspend IRQs during application busy periods Joe Damato
                   ` (3 preceding siblings ...)
  2024-08-23 17:30 ` [PATCH net-next 4/6] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
@ 2024-08-23 17:30 ` Joe Damato
  2024-08-23 17:30 ` [PATCH net-next 6/6] docs: networking: Describe irq suspension Joe Damato
  5 siblings, 0 replies; 13+ messages in thread
From: Joe Damato @ 2024-08-23 17:30 UTC (permalink / raw)
  To: netdev
  Cc: amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, Alexander Viro, Christian Brauner, Jan Kara,
	open list:FILESYSTEMS (VFS and infrastructure), open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

When events are reported to userland and prefer_busy_poll is set, irqs are
temporarily suspended using napi_suspend_irqs.

If no events are found and ep_poll would go to sleep, irq suspension is
cancelled using napi_resume_irqs.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 rfc -> v1:
   - move irq resume code from ep_free to a helper which either resumes
     IRQs or does nothing if !defined(CONFIG_NET_RX_BUSY_POLL).

 fs/eventpoll.c | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index cc47f72005ed..5dbe717c06b4 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -457,6 +457,8 @@ static bool ep_busy_loop(struct eventpoll *ep, int nonblock)
 		 * it back in when we have moved a socket with a valid NAPI
 		 * ID onto the ready list.
 		 */
+		if (prefer_busy_poll)
+			napi_resume_irqs(napi_id);
 		ep->napi_id = 0;
 		return false;
 	}
@@ -540,6 +542,22 @@ static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
 	}
 }
 
+static void ep_suspend_napi_irqs(struct eventpoll *ep)
+{
+	unsigned int napi_id = READ_ONCE(ep->napi_id);
+
+	if (napi_id >= MIN_NAPI_ID && READ_ONCE(ep->prefer_busy_poll))
+		napi_suspend_irqs(napi_id);
+}
+
+static void ep_resume_napi_irqs(struct eventpoll *ep)
+{
+	unsigned int napi_id = READ_ONCE(ep->napi_id);
+
+	if (napi_id >= MIN_NAPI_ID && READ_ONCE(ep->prefer_busy_poll))
+		napi_resume_irqs(napi_id);
+}
+
 #else
 
 static inline bool ep_busy_loop(struct eventpoll *ep, int nonblock)
@@ -557,6 +575,13 @@ static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
 	return -EOPNOTSUPP;
 }
 
+static void ep_suspend_napi_irqs(struct eventpoll *ep)
+{
+}
+
+static void ep_resume_napi_irqs(struct eventpoll *ep)
+{
+}
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 /*
@@ -788,6 +813,7 @@ static bool ep_refcount_dec_and_test(struct eventpoll *ep)
 
 static void ep_free(struct eventpoll *ep)
 {
+	ep_resume_napi_irqs(ep);
 	mutex_destroy(&ep->mtx);
 	free_uid(ep->user);
 	wakeup_source_unregister(ep->ws);
@@ -2005,8 +2031,10 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 			 * trying again in search of more luck.
 			 */
 			res = ep_send_events(ep, events, maxevents);
-			if (res)
+			if (res) {
+				ep_suspend_napi_irqs(ep);
 				return res;
+			}
 		}
 
 		if (timed_out)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 6/6] docs: networking: Describe irq suspension
  2024-08-23 17:30 [PATCH net-next 0/6] Suspend IRQs during application busy periods Joe Damato
                   ` (4 preceding siblings ...)
  2024-08-23 17:30 ` [PATCH net-next 5/6] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
@ 2024-08-23 17:30 ` Joe Damato
  5 siblings, 0 replies; 13+ messages in thread
From: Joe Damato @ 2024-08-23 17:30 UTC (permalink / raw)
  To: netdev
  Cc: amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Joe Damato,
	Martin Karsten, David S. Miller, Eric Dumazet, Paolo Abeni,
	Jonathan Corbet, open list:DOCUMENTATION, open list,
	open list:BPF [MISC]:Keyword:(?:b|_)bpf(?:b|_)

Describe irq suspension, the epoll ioctls, and the tradeoffs of using
different gro_flush_timeout values.

Signed-off-by: Joe Damato <jdamato@fastly.com>
Co-developed-by: Martin Karsten <mkarsten@uwaterloo.ca>
Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 Documentation/networking/napi.rst | 112 +++++++++++++++++++++++++++++-
 1 file changed, 110 insertions(+), 2 deletions(-)

diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
index 7bf7b95c4f7a..04e838835b50 100644
--- a/Documentation/networking/napi.rst
+++ b/Documentation/networking/napi.rst
@@ -192,6 +192,9 @@ The ``gro_flush_timeout`` sysfs configuration of the netdevice
 is reused to control the delay of the timer, while
 ``napi_defer_hard_irqs`` controls the number of consecutive empty polls
 before NAPI gives up and goes back to using hardware IRQs.
+``irq_suspend_timeout`` is used to determine how long an application can
+completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
+which can be set on a per-epoll context basis with the ``EPIOCSPARAMS`` ioctl.
 
 .. _poll:
 
@@ -208,6 +211,46 @@ selected sockets or using the global ``net.core.busy_poll`` and
 ``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
 also exists.
 
+epoll-based busy polling
+------------------------
+
+It is possible to trigger packet processing directly from calls to
+``epoll_wait``. In order to use this feature, a user application must ensure
+all file descriptors which are added to an epoll context have the same NAPI ID.
+
+If the application uses a dedicated acceptor thread, the application can obtain
+the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
+distribute that file descriptor to a worker thread. The worker thread would add
+the file descriptor to its epoll context. This would ensure each worker thread
+has an epoll context with FDs that have the same NAPI ID.
+
+Alternatively, if the application uses SO_REUSEPORT, a BPF or eBPF program can
+be inserted to distribute incoming connections to threads such that each
+thread only receives incoming connections with the same NAPI ID. Care must be
+taken to handle cases where a system has multiple NICs.
+
+In order to enable busy polling, there are two choices:
+
+1. ``/proc/sys/net/core/busy_poll`` can be set with a time in microseconds to
+   busy loop waiting for events. This is a system-wide setting and will cause
+   all epoll-based applications to busy poll when they call epoll_wait. This
+   may not be desirable as many applications may not need to busy poll.
+
+2. Applications using recent kernels can issue an ioctl on the epoll context
+   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``) ``struct
+   epoll_params``, which user programs can define as follows:
+
+.. code-block:: c
+
+  struct epoll_params {
+      uint32_t busy_poll_usecs;
+      uint16_t busy_poll_budget;
+      uint8_t prefer_busy_poll;
+
+      /* pad the struct to a multiple of 64bits */
+      uint8_t __pad;
+  };
+
 IRQ mitigation
 ---------------
 
@@ -223,12 +266,77 @@ Such applications can pledge to the kernel that they will perform a busy
 polling operation periodically, and the driver should keep the device IRQs
 permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
 socket option. To avoid system misbehavior the pledge is revoked
-if ``gro_flush_timeout`` passes without any busy poll call.
+if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
+busy polling applications, the ``prefer_busy_poll`` field of ``struct
+epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
+enable this mode. See the above section for more details.
 
 The NAPI budget for busy polling is lower than the default (which makes
 sense given the low latency intention of normal busy polling). This is
 not the case with IRQ mitigation, however, so the budget can be adjusted
-with the ``SO_BUSY_POLL_BUDGET`` socket option.
+with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
+applications, the ``busy_poll_budget`` field can be adjusted to the desired value
+in ``struct epoll_params`` and set on a specific epoll context using the ``EPIOCSPARAMS``
+ioctl. See the above section for more details.
+
+It is important to note that choosing a large value for ``gro_flush_timeout``
+will defer IRQs to allow for better batch processing, but will induce latency
+when the system is not fully loaded. Choosing a small value for
+``gro_flush_timeout`` can cause device IRQs and softirq processing to
+interfere with the user application that is attempting to busy poll. This
+value should be chosen carefully with these tradeoffs in mind. epoll-based
+busy polling applications may be able to limit the amount of userland
+processing per cycle by choosing an appropriate value for ``maxevents``.
+
+Users may want to consider an alternate approach, IRQ suspension, to help deal
+with these tradeoffs.
+
+IRQ suspension
+--------------
+
+IRQ suspension is a mechanism wherein device IRQs are masked while epoll
+triggers NAPI packet processing.
+
+As long as application calls to epoll_wait successfully retrieve events, the
+kernel defers the IRQ suspension timer. If the kernel does not retrieve any
+events while busy polling (for example, because network traffic levels
+subsided), IRQ suspension is disabled and the IRQ mitigation strategies
+described above are engaged.
+
+This allows users to balance CPU consumption with network processing
+efficiency.
+
+To use this mechanism:
+
+  1. The sysfs parameter ``irq_suspend_timeout`` should be set to the maximum
+     time (in nanoseconds) the application can have its IRQs suspended. This
+     timeout serves as a safety mechanism to restart driver interrupt
+     processing if the application has stalled. This value should be chosen so
+     that it covers the amount of time the user application needs to process
+     data from its call to epoll_wait, noting that applications can control how
+     much data they retrieve by setting ``maxevents`` when calling epoll_wait.
+
+  2. The sysfs parameters ``gro_flush_timeout`` and ``napi_defer_hard_irqs``
+     can be set to low values. They will be used to defer IRQs after busy
+     polling has found no data.
+
+  3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
+     the ``EPIOCSPARAMS`` ioctl as described above.
+
+  4. The application uses epoll as described above to trigger NAPI packet
+     processing.
+
+As mentioned above, as long as subsequent calls to epoll_wait return events to
+userland, the ``irq_suspend_timeout`` timer is deferred and IRQs remain
+masked. This allows the application to process data without interference.
+
+Once a call to epoll_wait results in no events being found, IRQ suspension is
+automatically disabled and the ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` mitigation mechanisms take over.
+
+It is expected that ``irq_suspend_timeout`` will be set to a value much larger
+than ``gro_flush_timeout``, as ``irq_suspend_timeout`` should suspend IRQs for
+the duration of one full userland processing cycle.
 
 .. _threaded:
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH net-next 1/6] net: Add sysfs parameter irq_suspend_timeout
  2024-08-23 17:30 ` [PATCH net-next 1/6] net: Add sysfs parameter irq_suspend_timeout Joe Damato
@ 2024-08-23 17:39   ` Eric Dumazet
  2024-08-23 20:15     ` Joe Damato
  0 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2024-08-23 17:39 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei,
	bjorn, hch, willy, willemdebruijn.kernel, skhawaja, kuba,
	Martin Karsten, David S. Miller, Paolo Abeni, Jiri Pirko,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Daniel Borkmann,
	Breno Leitao, Johannes Berg, Heiner Kallweit, open list

On Fri, Aug 23, 2024 at 7:31 PM Joe Damato <jdamato@fastly.com> wrote:
>
> From: Martin Karsten <mkarsten@uwaterloo.ca>
>
> This patch doesn't change any behavior but prepares the code for other
> changes in the following commits which use irq_suspend_timeout as a
> timeout for IRQ suspension.
>
> Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
> Co-developed-by: Joe Damato <jdamato@fastly.com>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> Tested-by: Joe Damato <jdamato@fastly.com>
> Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
> ---
>  rfc -> v1:
>    - Removed napi.rst documentation from this patch; added to patch 6.
>
>  include/linux/netdevice.h |  2 ++
>  net/core/dev.c            |  3 ++-
>  net/core/net-sysfs.c      | 18 ++++++++++++++++++
>  3 files changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index 0ef3eaa23f4b..31867bb2ff65 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -1857,6 +1857,7 @@ enum netdev_reg_state {
>   *     @gro_flush_timeout:     timeout for GRO layer in NAPI
>   *     @napi_defer_hard_irqs:  If not zero, provides a counter that would
>   *                             allow to avoid NIC hard IRQ, on busy queues.
> + *     @irq_suspend_timeout:   IRQ suspension timeout
>   *
>   *     @rx_handler:            handler for received packets
>   *     @rx_handler_data:       XXX: need comments on this one
> @@ -2060,6 +2061,7 @@ struct net_device {
>         struct netdev_rx_queue  *_rx;
>         unsigned long           gro_flush_timeout;
>         int                     napi_defer_hard_irqs;
> +       unsigned long           irq_suspend_timeout;
>         unsigned int            gro_max_size;
>         unsigned int            gro_ipv4_max_size;
>         rx_handler_func_t __rcu *rx_handler;
> diff --git a/net/core/dev.c b/net/core/dev.c
> index e7260889d4cb..3bf325ec25a3 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -11945,6 +11945,7 @@ static void __init net_dev_struct_check(void)
>         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, _rx);
>         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_flush_timeout);
>         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, napi_defer_hard_irqs);
> +       CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, irq_suspend_timeout);
>         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_max_size);
>         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_ipv4_max_size);
>         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, rx_handler);
> @@ -11956,7 +11957,7 @@ static void __init net_dev_struct_check(void)
>  #ifdef CONFIG_NET_XGRESS
>         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, tcx_ingress);
>  #endif
> -       CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_rx, 104);
> +       CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_rx, 112);
>  }
>
>  /*
> diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
> index 0e2084ce7b75..fb6f3327310f 100644
> --- a/net/core/net-sysfs.c
> +++ b/net/core/net-sysfs.c
> @@ -440,6 +440,23 @@ static ssize_t napi_defer_hard_irqs_store(struct device *dev,
>  }
>  NETDEVICE_SHOW_RW(napi_defer_hard_irqs, fmt_dec);
>
> +static int change_irq_suspend_timeout(struct net_device *dev, unsigned long val)
> +{
> +       WRITE_ONCE(dev->irq_suspend_timeout, val);
> +       return 0;
> +}
> +
> +static ssize_t irq_suspend_timeout_store(struct device *dev,
> +                                        struct device_attribute *attr,
> +                                        const char *buf, size_t len)
> +{
> +       if (!capable(CAP_NET_ADMIN))
> +               return -EPERM;
> +
> +       return netdev_store(dev, attr, buf, len, change_irq_suspend_timeout);
> +}
> +NETDEVICE_SHOW_RW(irq_suspend_timeout, fmt_ulong);
> +
>  static ssize_t ifalias_store(struct device *dev, struct device_attribute *attr,
>                              const char *buf, size_t len)
>  {
> @@ -664,6 +681,7 @@ static struct attribute *net_class_attrs[] __ro_after_init = {
>         &dev_attr_tx_queue_len.attr,
>         &dev_attr_gro_flush_timeout.attr,
>         &dev_attr_napi_defer_hard_irqs.attr,
> +       &dev_attr_irq_suspend_timeout.attr,
>         &dev_attr_phys_port_id.attr,
>         &dev_attr_phys_port_name.attr,
>         &dev_attr_phys_switch_id.attr,
> --
> 2.25.1


Please no more per-device sysfs entry, shared by all the users of the device.

Let's not repeat past mistakes.

Nowadays, we need/want per receive-queue tuning, preferably set with netlink.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH net-next 3/6] net: Add control functions for irq suspension
  2024-08-23 17:30 ` [PATCH net-next 3/6] net: Add control functions for irq suspension Joe Damato
@ 2024-08-23 17:56   ` Eric Dumazet
  2024-08-23 18:02     ` Joe Damato
  2024-08-23 18:14   ` Eric Dumazet
  1 sibling, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2024-08-23 17:56 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei,
	bjorn, hch, willy, willemdebruijn.kernel, skhawaja, kuba,
	Martin Karsten, David S. Miller, Paolo Abeni, Jiri Pirko,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Daniel Borkmann,
	open list

On Fri, Aug 23, 2024 at 7:31 PM Joe Damato <jdamato@fastly.com> wrote:
>
> From: Martin Karsten <mkarsten@uwaterloo.ca>
>
> The napi_suspend_irqs routine bootstraps irq suspension by elongating
> the defer timeout to irq_suspend_timeout.
>
> The napi_resume_irqs routine effectively cancels irq suspension by forcing
> the napi to be scheduled immediately.
>
> Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
> Co-developed-by: Joe Damato <jdamato@fastly.com>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> Tested-by: Joe Damato <jdamato@fastly.com>
> Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
> ---

You have not CC'd me on all the patches in the series, making the review
harder than necessary.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH net-next 3/6] net: Add control functions for irq suspension
  2024-08-23 17:56   ` Eric Dumazet
@ 2024-08-23 18:02     ` Joe Damato
  0 siblings, 0 replies; 13+ messages in thread
From: Joe Damato @ 2024-08-23 18:02 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: netdev, amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei,
	bjorn, hch, willy, willemdebruijn.kernel, skhawaja, kuba,
	Martin Karsten, David S. Miller, Paolo Abeni, Jiri Pirko,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Daniel Borkmann,
	open list

On Fri, Aug 23, 2024 at 07:56:32PM +0200, Eric Dumazet wrote:
> On Fri, Aug 23, 2024 at 7:31 PM Joe Damato <jdamato@fastly.com> wrote:
> >
> > From: Martin Karsten <mkarsten@uwaterloo.ca>
> >
> > The napi_suspend_irqs routine bootstraps irq suspension by elongating
> > the defer timeout to irq_suspend_timeout.
> >
> > The napi_resume_irqs routine effectively cancels irq suspension by forcing
> > the napi to be scheduled immediately.
> >
> > Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
> > Co-developed-by: Joe Damato <jdamato@fastly.com>
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > Tested-by: Joe Damato <jdamato@fastly.com>
> > Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
> > ---
> 
> You have not CC me on all the patches in the series, making the review
> harder then necessary.

My sincere apologies, Eric, and thank you for your time reviewing
this.

I used a script I'd been using for a while to generate the CC list,
but it clearly has a bug.

For any future revisions I will be sure to explicitly include you.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH net-next 3/6] net: Add control functions for irq suspension
  2024-08-23 17:30 ` [PATCH net-next 3/6] net: Add control functions for irq suspension Joe Damato
  2024-08-23 17:56   ` Eric Dumazet
@ 2024-08-23 18:14   ` Eric Dumazet
  2024-08-23 20:21     ` Martin Karsten
  1 sibling, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2024-08-23 18:14 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei,
	bjorn, hch, willy, willemdebruijn.kernel, skhawaja, kuba,
	Martin Karsten, David S. Miller, Paolo Abeni, Jiri Pirko,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Daniel Borkmann,
	open list

On Fri, Aug 23, 2024 at 7:31 PM Joe Damato <jdamato@fastly.com> wrote:
>
> From: Martin Karsten <mkarsten@uwaterloo.ca>
>
> The napi_suspend_irqs routine bootstraps irq suspension by elongating
> the defer timeout to irq_suspend_timeout.
>
> The napi_resume_irqs routine effectively cancels irq suspension by forcing
> the napi to be scheduled immediately.
>
> Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
> Co-developed-by: Joe Damato <jdamato@fastly.com>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> Tested-by: Joe Damato <jdamato@fastly.com>
> Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
> ---
>  include/net/busy_poll.h |  3 +++
>  net/core/dev.c          | 33 +++++++++++++++++++++++++++++++++
>  2 files changed, 36 insertions(+)
>
> diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
> index 9b09acac538e..f095b2bdeee1 100644
> --- a/include/net/busy_poll.h
> +++ b/include/net/busy_poll.h
> @@ -52,6 +52,9 @@ void napi_busy_loop_rcu(unsigned int napi_id,
>                         bool (*loop_end)(void *, unsigned long),
>                         void *loop_end_arg, bool prefer_busy_poll, u16 budget);
>
> +void napi_suspend_irqs(unsigned int napi_id);
> +void napi_resume_irqs(unsigned int napi_id);
> +
>  #else /* CONFIG_NET_RX_BUSY_POLL */
>  static inline unsigned long net_busy_loop_on(void)
>  {
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 74060ba866d4..4de0dfc86e21 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -6507,6 +6507,39 @@ void napi_busy_loop(unsigned int napi_id,
>  }
>  EXPORT_SYMBOL(napi_busy_loop);
>
> +void napi_suspend_irqs(unsigned int napi_id)
> +{
> +       struct napi_struct *napi;
> +
> +       rcu_read_lock();
> +       napi = napi_by_id(napi_id);
> +       if (napi) {
> +               unsigned long timeout = READ_ONCE(napi->dev->irq_suspend_timeout);
> +
> +               if (timeout)
> +                       hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
> +       }
> +       rcu_read_unlock();
> +}
> +EXPORT_SYMBOL(napi_suspend_irqs);
> +
> +void napi_resume_irqs(unsigned int napi_id)
> +{
> +       struct napi_struct *napi;
> +
> +       rcu_read_lock();
> +       napi = napi_by_id(napi_id);
> +       if (napi) {
> +               if (READ_ONCE(napi->dev->irq_suspend_timeout)) {


Since we'll read irq_suspend_timeout twice, we could have a situation
where the napi_schedule() will not be done
if another thread changes irq_suspend_timeout ?

If this is fine, a comment would be nice :)

The thing is that the kernel can not trust the user (think of syzbot)

> +                       local_bh_disable();
> +                       napi_schedule(napi);
> +                       local_bh_enable();
> +               }
> +       }
> +       rcu_read_unlock();
> +}
> +EXPORT_SYMBOL(napi_resume_irqs);
> +
>  #endif /* CONFIG_NET_RX_BUSY_POLL */
>
>  static void napi_hash_add(struct napi_struct *napi)
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH net-next 1/6] net: Add sysfs parameter irq_suspend_timeout
  2024-08-23 17:39   ` Eric Dumazet
@ 2024-08-23 20:15     ` Joe Damato
  0 siblings, 0 replies; 13+ messages in thread
From: Joe Damato @ 2024-08-23 20:15 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: netdev, amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei,
	bjorn, hch, willy, willemdebruijn.kernel, skhawaja, kuba,
	Martin Karsten, David S. Miller, Paolo Abeni, Jiri Pirko,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Daniel Borkmann,
	Breno Leitao, Johannes Berg, Heiner Kallweit, open list

On Fri, Aug 23, 2024 at 07:39:56PM +0200, Eric Dumazet wrote:
> On Fri, Aug 23, 2024 at 7:31 PM Joe Damato <jdamato@fastly.com> wrote:
> >
> > From: Martin Karsten <mkarsten@uwaterloo.ca>
> >
> > This patch doesn't change any behavior but prepares the code for other
> > changes in the following commits which use irq_suspend_timeout as a
> > timeout for IRQ suspension.
> >
> > Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
> > Co-developed-by: Joe Damato <jdamato@fastly.com>
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > Tested-by: Joe Damato <jdamato@fastly.com>
> > Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
> > ---
> >  rfc -> v1:
> >    - Removed napi.rst documentation from this patch; added to patch 6.
> >
> >  include/linux/netdevice.h |  2 ++
> >  net/core/dev.c            |  3 ++-
> >  net/core/net-sysfs.c      | 18 ++++++++++++++++++
> >  3 files changed, 22 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> > index 0ef3eaa23f4b..31867bb2ff65 100644
> > --- a/include/linux/netdevice.h
> > +++ b/include/linux/netdevice.h
> > @@ -1857,6 +1857,7 @@ enum netdev_reg_state {
> >   *     @gro_flush_timeout:     timeout for GRO layer in NAPI
> >   *     @napi_defer_hard_irqs:  If not zero, provides a counter that would
> >   *                             allow to avoid NIC hard IRQ, on busy queues.
> > + *     @irq_suspend_timeout:   IRQ suspension timeout
> >   *
> >   *     @rx_handler:            handler for received packets
> >   *     @rx_handler_data:       XXX: need comments on this one
> > @@ -2060,6 +2061,7 @@ struct net_device {
> >         struct netdev_rx_queue  *_rx;
> >         unsigned long           gro_flush_timeout;
> >         int                     napi_defer_hard_irqs;
> > +       unsigned long           irq_suspend_timeout;
> >         unsigned int            gro_max_size;
> >         unsigned int            gro_ipv4_max_size;
> >         rx_handler_func_t __rcu *rx_handler;
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index e7260889d4cb..3bf325ec25a3 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -11945,6 +11945,7 @@ static void __init net_dev_struct_check(void)
> >         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, _rx);
> >         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_flush_timeout);
> >         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, napi_defer_hard_irqs);
> > +       CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, irq_suspend_timeout);
> >         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_max_size);
> >         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_ipv4_max_size);
> >         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, rx_handler);
> > @@ -11956,7 +11957,7 @@ static void __init net_dev_struct_check(void)
> >  #ifdef CONFIG_NET_XGRESS
> >         CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, tcx_ingress);
> >  #endif
> > -       CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_rx, 104);
> > +       CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_rx, 112);
> >  }
> >
> >  /*
> > diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
> > index 0e2084ce7b75..fb6f3327310f 100644
> > --- a/net/core/net-sysfs.c
> > +++ b/net/core/net-sysfs.c
> > @@ -440,6 +440,23 @@ static ssize_t napi_defer_hard_irqs_store(struct device *dev,
> >  }
> >  NETDEVICE_SHOW_RW(napi_defer_hard_irqs, fmt_dec);
> >
> > +static int change_irq_suspend_timeout(struct net_device *dev, unsigned long val)
> > +{
> > +       WRITE_ONCE(dev->irq_suspend_timeout, val);
> > +       return 0;
> > +}
> > +
> > +static ssize_t irq_suspend_timeout_store(struct device *dev,
> > +                                        struct device_attribute *attr,
> > +                                        const char *buf, size_t len)
> > +{
> > +       if (!capable(CAP_NET_ADMIN))
> > +               return -EPERM;
> > +
> > +       return netdev_store(dev, attr, buf, len, change_irq_suspend_timeout);
> > +}
> > +NETDEVICE_SHOW_RW(irq_suspend_timeout, fmt_ulong);
> > +
> >  static ssize_t ifalias_store(struct device *dev, struct device_attribute *attr,
> >                              const char *buf, size_t len)
> >  {
> > @@ -664,6 +681,7 @@ static struct attribute *net_class_attrs[] __ro_after_init = {
> >         &dev_attr_tx_queue_len.attr,
> >         &dev_attr_gro_flush_timeout.attr,
> >         &dev_attr_napi_defer_hard_irqs.attr,
> > +       &dev_attr_irq_suspend_timeout.attr,
> >         &dev_attr_phys_port_id.attr,
> >         &dev_attr_phys_port_name.attr,
> >         &dev_attr_phys_switch_id.attr,
> > --
> > 2.25.1
> 
> 
> Please no more per-device sysfs entry, shared by all the users of the device.
> 
> Let's not repeat past mistakes.
> 
> Nowadays, we need/want per receive-queue tuning, preferably set with netlink.

Thanks for the feedback, Eric. We appreciate your consideration
and guidance.

May we ask what your thoughts are, overall, about getting a
mechanism like this accepted?

We want to make sure that this, in principle, is acceptable before
iterating further and going down the path of netlink, if required.

On the specific netlink bit in your comment, we agree in principle,
however:

  1. Our code integrates directly with existing sysfs parameters.
     If we make our parameter settable via netlink, but the others
     remain sysfs parameters, then the interface for users becomes
     a bit cumbersome.

     And so the urge will be to move all parameters to netlink
     for ease of use.

     As we mentioned in our cover letter: we agree that doing so is
     a good idea, but we hope to convince you (and the other
     maintainers) that the netlink work can come later as a separate
     change which affects the existing parameters we integrate with
     as well as the parameter we are introducing, at the same time.

  2. The proposed mechanism yields a substantial performance and
     efficiency improvement which would be valuable to users. It would
     be unfortunate to block until all of this could be moved to
     netlink, first.

  3. While adding a new sysfs parameter does affect the ABI
     permanently, it doesn't prevent us from making these parameters
     per-NAPI in the future. This series adds strength to the argument
     that these parameters should be per-NAPI because our results show
     the impact these parameters can have on network processing very
     clearly in a range of scenarios.

We appreciate your thoughts on the above as we want to ensure we
are moving in the right direction.
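
For concreteness, the knob as posted in this series would be used
roughly as follows (a sketch only: assumes a hypothetical device
named eth0, and that writes require CAP_NET_ADMIN, per the patch):

```shell
# Sketch: value is in nanoseconds, passed to ns_to_ktime() when the
# suspend timer is armed in napi_suspend_irqs().
echo 20000000 > /sys/class/net/eth0/irq_suspend_timeout  # ~20 ms safety timeout
cat /sys/class/net/eth0/irq_suspend_timeout
echo 0 > /sys/class/net/eth0/irq_suspend_timeout         # 0 disables suspension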


* Re: [PATCH net-next 3/6] net: Add control functions for irq suspension
  2024-08-23 18:14   ` Eric Dumazet
@ 2024-08-23 20:21     ` Martin Karsten
  0 siblings, 0 replies; 13+ messages in thread
From: Martin Karsten @ 2024-08-23 20:21 UTC (permalink / raw)
  To: Eric Dumazet, Joe Damato
  Cc: netdev, amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei,
	bjorn, hch, willy, willemdebruijn.kernel, skhawaja, kuba,
	David S. Miller, Paolo Abeni, Jiri Pirko,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Daniel Borkmann,
	open list

On 2024-08-23 14:14, Eric Dumazet wrote:
> On Fri, Aug 23, 2024 at 7:31 PM Joe Damato <jdamato@fastly.com> wrote:
>>
>> From: Martin Karsten <mkarsten@uwaterloo.ca>
>>
>> The napi_suspend_irqs routine bootstraps irq suspension by elongating
>> the defer timeout to irq_suspend_timeout.
>>
>> The napi_resume_irqs routine effectively cancels irq suspension by forcing
>> the napi to be scheduled immediately.
>>
>> Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
>> Co-developed-by: Joe Damato <jdamato@fastly.com>
>> Signed-off-by: Joe Damato <jdamato@fastly.com>
>> Tested-by: Joe Damato <jdamato@fastly.com>
>> Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
>> ---
>>   include/net/busy_poll.h |  3 +++
>>   net/core/dev.c          | 33 +++++++++++++++++++++++++++++++++
>>   2 files changed, 36 insertions(+)
>>
>> diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
>> index 9b09acac538e..f095b2bdeee1 100644
>> --- a/include/net/busy_poll.h
>> +++ b/include/net/busy_poll.h
>> @@ -52,6 +52,9 @@ void napi_busy_loop_rcu(unsigned int napi_id,
>>                          bool (*loop_end)(void *, unsigned long),
>>                          void *loop_end_arg, bool prefer_busy_poll, u16 budget);
>>
>> +void napi_suspend_irqs(unsigned int napi_id);
>> +void napi_resume_irqs(unsigned int napi_id);
>> +
>>   #else /* CONFIG_NET_RX_BUSY_POLL */
>>   static inline unsigned long net_busy_loop_on(void)
>>   {
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index 74060ba866d4..4de0dfc86e21 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>> @@ -6507,6 +6507,39 @@ void napi_busy_loop(unsigned int napi_id,
>>   }
>>   EXPORT_SYMBOL(napi_busy_loop);
>>
>> +void napi_suspend_irqs(unsigned int napi_id)
>> +{
>> +       struct napi_struct *napi;
>> +
>> +       rcu_read_lock();
>> +       napi = napi_by_id(napi_id);
>> +       if (napi) {
>> +               unsigned long timeout = READ_ONCE(napi->dev->irq_suspend_timeout);
>> +
>> +               if (timeout)
>> +                       hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
>> +       }
>> +       rcu_read_unlock();
>> +}
>> +EXPORT_SYMBOL(napi_suspend_irqs);
>> +
>> +void napi_resume_irqs(unsigned int napi_id)
>> +{
>> +       struct napi_struct *napi;
>> +
>> +       rcu_read_lock();
>> +       napi = napi_by_id(napi_id);
>> +       if (napi) {
>> +               if (READ_ONCE(napi->dev->irq_suspend_timeout)) {
> 
> 
> Since we'll read irq_suspend_timeout twice, we could have a situation
> where the napi_schedule() will not be done
> if another thread changes irq_suspend_timeout ?
> 
> If this is fine, a comment would be nice :)
> 
> The thing is that the kernel can not trust the user (think of syzbot)

Yes, this should be fine. napi_resume_irqs is called to restart irq 
processing right away. If irq_suspend_timeout is set to 0 between 
suspend and resume, the original value of irq_suspend_timeout (read 
when napi_suspend_irqs was called) still determines the safety timeout 
as intended and the watchdog will restart irq processing. We will add a 
comment to make this clear.
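
For reference, the promised comment could look roughly like the
following kernel-context sketch against the quoted hunk (the elided
body of the if-branch is left as-is; this is not the final wording):

```c
void napi_resume_irqs(unsigned int napi_id)
{
	struct napi_struct *napi;

	rcu_read_lock();
	napi = napi_by_id(napi_id);
	if (napi) {
		/* irq_suspend_timeout is re-read here after being read in
		 * napi_suspend_irqs(). If userspace cleared it to 0 in
		 * between, skipping the reschedule below is safe: the
		 * hrtimer armed at suspend time (with the old value) still
		 * fires and restarts irq processing via the watchdog.
		 */
		if (READ_ONCE(napi->dev->irq_suspend_timeout)) {
			/* ... */
		}
	}
	rcu_read_unlock();
}
```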

Thanks,
Martin



end of thread, other threads:[~2024-08-23 20:21 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-23 17:30 [PATCH net-next 0/6] Suspend IRQs during application busy periods Joe Damato
2024-08-23 17:30 ` [PATCH net-next 1/6] net: Add sysfs parameter irq_suspend_timeout Joe Damato
2024-08-23 17:39   ` Eric Dumazet
2024-08-23 20:15     ` Joe Damato
2024-08-23 17:30 ` [PATCH net-next 2/6] net: Suspend softirq when prefer_busy_poll is set Joe Damato
2024-08-23 17:30 ` [PATCH net-next 3/6] net: Add control functions for irq suspension Joe Damato
2024-08-23 17:56   ` Eric Dumazet
2024-08-23 18:02     ` Joe Damato
2024-08-23 18:14   ` Eric Dumazet
2024-08-23 20:21     ` Martin Karsten
2024-08-23 17:30 ` [PATCH net-next 4/6] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
2024-08-23 17:30 ` [PATCH net-next 5/6] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
2024-08-23 17:30 ` [PATCH net-next 6/6] docs: networking: Describe irq suspension Joe Damato

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).