netdev.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH net-next v2 0/6] Suspend IRQs during application busy periods
@ 2024-10-21  1:52 Joe Damato
  2024-10-21  1:52 ` [PATCH net-next v2 1/6] net: Add napi_struct parameter irq_suspend_timeout Joe Damato
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Joe Damato @ 2024-10-21  1:52 UTC (permalink / raw)
  To: netdev
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Joe Damato, Alexander Lobakin, Alexander Viro,
	open list:BPF [MISC]:Keyword:(?:b|_)bpf(?:b|_), Christian Brauner,
	David S. Miller, Donald Hunter, Jan Kara, Jesper Dangaard Brouer,
	Jiri Pirko, Johannes Berg, Jonathan Corbet,
	open list:DOCUMENTATION,
	open list:FILESYSTEMS (VFS and infrastructure), open list,
	Lorenzo Bianconi, Martin Karsten, Mina Almasry, Paolo Abeni,
	Sebastian Andrzej Siewior, Xuan Zhuo

Greetings:

Welcome to v2, see changelog below.

This series introduces a new mechanism, IRQ suspension, which allows
network applications using epoll to mask IRQs during periods of high
traffic while also reducing tail latency (compared to existing
mechanisms, see below) during periods of low traffic. In doing so, this
balances CPU consumption with network processing efficiency.

Martin Karsten (CC'd) and I have been collaborating on this series for
several months and have appreciated the feedback from the community on
our RFC [1]. We've updated the cover letter and kernel documentation in
an attempt to more clearly explain how this mechanism works, how
applications can use it, and how it compares to existing mechanisms in
the kernel. We've added an additional test case, 'fullbusy', achieved by
modifying libevent for comparison. See below for a detailed description,
link to the patch, and test results.

I briefly mentioned this idea at netdev conf 2024 (for those who were
there) and Martin described this idea in an earlier paper presented at
Sigmetrics 2024 [2].

~ The short explanation (TL;DR)

We propose adding a new napi config parameter: irq_suspend_timeout to
help balance CPU usage and network processing efficiency when using IRQ
deferral and napi busy poll.

If this parameter is set to a non-zero value *and* a user application
has enabled preferred busy poll on a busy poll context (via the
EPIOCSPARAMS ioctl introduced in commit 18e2bf0edf4d ("eventpoll: Add
epoll ioctl for epoll_params")), then application calls to epoll_wait
for that context will cause device IRQs and softirq processing to be
suspended as long as epoll_wait successfully retrieves data from the
NAPI. Each time data is retrieved, the irq_suspend_timeout is deferred.

If/when network traffic subsides and epoll_wait returns no data, IRQ
suspension is immediately reverted back to the existing
napi_defer_hard_irqs and gro_flush_timeout mechanism which was
introduced in commit 6f8b12d661d0 ("net: napi: add hard irqs deferral
feature")).

The irq_suspend_timeout serves as a safety mechanism. If userland takes
a long time processing data, irq_suspend_timeout will fire and restart
normal NAPI processing.

For a more in depth explanation, please continue reading.

~ Comparison with existing mechanisms

Interrupt mitigation can be accomplished in napi software, by setting
napi_defer_hard_irqs and gro_flush_timeout, or via interrupt coalescing
in the NIC. This can be quite efficient, but in both cases, a fixed
timeout (or packet count) needs to be configured. However, a fixed
timeout cannot effectively support both low- and high-load situations:

At low load, an application typically processes a few requests and then
waits to receive more input data. In this scenario, a large timeout will
cause unnecessary latency.

At high load, an application typically processes many requests before
being ready to receive more input data. In this case, a small timeout
will likely fire prematurely and trigger irq/softirq processing, which
interferes with the application's execution. This causes overhead, most
likely due to cache contention.

While NICs attempt to provide adaptive interrupt coalescing schemes,
these cannot properly take into account application-level processing.

An alternative packet delivery mechanism is busy-polling, which results
in perfect alignment of application processing and network polling. It
delivers optimal performance (throughput and latency), but results in
100% cpu utilization and is thus inefficient for below-capacity
workloads.

We propose to add a new packet delivery mode that properly alternates
between busy polling and interrupt-based delivery depending on busy and
idle periods of the application. During a busy period, the system
operates in busy-polling mode, which avoids interference. During an idle
period, the system falls back to interrupt deferral, but with a small
timeout to avoid excessive latencies. This delivery mode can also be
viewed as an extension of basic interrupt deferral, but alternating
between a small and a very large timeout.

This delivery mode is efficient, because it avoids softirq execution
interfering with application processing during busy periods. It can be
used with blocking epoll_wait to conserve cpu cycles during idle
periods. The effect of alternating between busy and idle periods is that
performance (throughput and latency) is very close to full busy polling,
while cpu utilization is lower and very close to interrupt mitigation.

~ Usage details

IRQ suspension is introduced via a per-NAPI configuration parameter that
controls the maximum time that IRQs can be suspended.

Here's how it is intended to work:
  - The user application (or system administrator) uses the netdev-genl
    netlink interface to set the pre-existing napi_defer_hard_irqs and
    gro_flush_timeout NAPI config parameters to enable IRQ deferral.

  - The user application (or system administrator) sets the proposed
    irq_suspend_timeout parameter via the netdev-genl netlink interface
    to a larger value than gro_flush_timeout to enable IRQ suspension.

  - The user application issues the existing epoll ioctl to set the
    prefer_busy_poll flag on the epoll context.

  - The user application then calls epoll_wait to busy poll for network
    events, as it normally would.

  - If epoll_wait returns events to userland, IRQs are suspended for the
    duration of irq_suspend_timeout.

  - If epoll_wait finds no events and the thread is about to go to
    sleep, IRQ handling using napi_defer_hard_irqs and gro_flush_timeout
    is resumed.

As long as epoll_wait is retrieving events, IRQs (and softirq
processing) for the NAPI being polled remain disabled. When network
traffic reduces, eventually a busy poll loop in the kernel will retrieve
no data. When this occurs, regular IRQ deferral using gro_flush_timeout
for the polled NAPI is re-enabled.

Unless IRQ suspension is continued by subsequent calls to epoll_wait, it
automatically times out after the irq_suspend_timeout timer expires.
Regular deferral is also immediately re-enabled when the epoll context
is destroyed.

~ Usage scenario

The target scenario for IRQ suspension as packet delivery mode is a
system that runs a dominant application with substantial network I/O.
The target application can be configured to receive input data up to a
certain batch size (via epoll_wait maxevents parameter) and this batch
size determines the worst-case latency that application requests might
experience. Because packet delivery is suspended during the target
application's processing, the batch size also determines the worst-case
latency of concurrent applications using the same RX queue(s).

gro_flush_timeout should be set as small as possible, but large enough to
make sure that a single request is likely not being interfered with.

irq_suspend_timeout is largely a safety mechanism against misbehaving
applications. It should be set large enough to cover the processing of an
entire application batch, i.e., the factor between gro_flush_timeout and
irq_suspend_timeout should roughly correspond to the maximum batch size
that the target application would process in one go.

~ Design rationale

The implementation of the IRQ suspension mechanism very nicely dovetails
with the existing mechanism for IRQ deferral when preferred busy poll is
enabled (introduced in commit 7fd3253a7de6 ("net: Introduce preferred
busy-polling"), see that commit message for more details).

While it would be possible to inject the suspend timeout via
the existing epoll ioctl, it is more natural to avoid this path for one
main reason:

  An epoll context is linked to NAPI IDs as file descriptors are added;
  this means any epoll context might suddenly be associated with a
  different net_device if the application were to replace all existing
  fds with fds from a different device. In this case, the scope of the
  suspend timeout becomes unclear and many edge cases for both the user
  application and the kernel are introduced

Only a single iteration through napi busy polling is needed for this
mechanism to work effectively. Since an important objective for this
mechanism is preserving cpu cycles, exactly one iteration of the napi
busy loop is invoked when busy_poll_usecs is set to 0.

~ Important call outs in the implementation

  - Enabling per epoll-context preferred busy poll will now effectively
    lead to a nonblocking iteration through napi_busy_loop, even when
    busy_poll_usecs is 0. See patch 4.

  - Patches apply cleanly on commit 160a810b2a85 ("net: vxlan: update
    the document for vxlan_snoop()").

~ Benchmark configs & descriptions

The changes were benchmarked with memcached [3] using the benchmarking
tool mutilate [4].

To facilitate benchmarking, a small patch [5] was applied to memcached
1.6.29 to allow setting per-epoll context preferred busy poll and other
settings via environment variables. Another small patch [6] was applied
to libevent to enable full busy-polling.

Multiple scenarios were benchmarked as described below and the scripts
used for producing these results can be found on github [7] (note: all
scenarios use NAPI-based traffic splitting via SO_INCOMING_ID by passing
-N to memcached):

  - base:
    - no other options enabled
  - deferX:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to X,000
  - napibusy:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to 200,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 64,
      busy_poll_budget = 64, prefer_busy_poll = true)
  - fullbusy:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to 5,000,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 1000,
      busy_poll_budget = 64, prefer_busy_poll = true)
    - change memcached's nonblocking epoll_wait invocation (via
      libevent) to using a 1 ms timeout
  - suspendX:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to X,000
    - set irq_suspend_timeout to 20,000,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 0,
      busy_poll_budget = 64, prefer_busy_poll = true)

~ Benchmark results

Tested on:

Single socket AMD EPYC 7662 64-Core Processor
Hyperthreading disabled
4 NUMA Zones (NPS=4)
16 CPUs per NUMA zone (64 cores total)
2 x Dual port 100gbps Mellanox Technologies ConnectX-5 Ex EN NIC

The test machine is configured such that a single interface has 8 RX
queues. The queues' IRQs and memcached are pinned to CPUs that are
NUMA-local to the interface which is under test. The NIC's interrupt
coalescing configuration is left at boot-time defaults.

Results:

Results are shown below. The mechanism added by this series is
represented by the 'suspend' cases. Data presented shows a summary over
at least 10 runs of each test case [8] using the scripts on github [7].
For latency, the median is shown. For throughput and CPU utilization,
the average is shown.

The results also include cycles-per-query (cpq) and
instruction-per-query (ipq) metrics, following the methodology proposed
in [2], to augment the CPU utilization numbers, which could be skewed
due to frequency scaling. We find that this does not appear to be the
case as CPU utilization and low-level metrics show similar trends.

These results were captured using the scripts on github [7] to
illustrate how this approach compares with other pre-existing
mechanisms. This data is not to be interpreted as scientific data
captured in a fully isolated lab setting, but instead as best effort,
illustrative information comparing and contrasting tradeoffs.

The absolute QPS results are higher than our previous submission, but
the relative differences between variants are equivalent. Because the
patches have been rebased on 6.12, several factors have likely
influenced the overall performance. Most importantly, we had to switch
to a new set of basic kernel options, which has likely altered the
baseline performance. Because the overall comparison of variants still
holds, we have not attempted to recreate the exact set of kernel options
from the previous submission.

Compare:
- Throughput (MAX) and latencies of base vs suspend.
- CPU usage of napibusy and fullbusy during lower load (200K, 400K for
  example) vs suspend.
- Latency of the defer variants vs suspend as timeout and load
  increases.

The overall takeaway is that the suspend variants provide a superior
combination of high throughput, low latency, and low cpu utilization
compared to all other variants. Each of the suspend variants works very
well, but some fine-tuning between latency and cpu utilization is still
possible by tuning the small timeout (gro_flush_timeout).

Note: we've reorganized the results to make comparison among testcases
with the same load easier.

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base  200K  200024     127     254     458      25   12748   11289
   defer10  200K  199991      64     128     166      27   18763   16574
   defer20  200K  199986      72     135     178      25   15405   14173
   defer50  200K  200025      91     149     198      23   12275   12203
  defer200  200K  199996     182     266     326      18    8595    9183
  fullbusy  200K  200040      58     123     167     100   43641   23145
  napibusy  200K  200009     115     244     299      56   24797   24693
 suspend10  200K  200005      63     128     167      32   19559   17240
 suspend20  200K  199952      69     132     170      29   16324   14838
 suspend50  200K  200019      84     144     189      26   13106   12516
suspend200  200K  199978     168     264     326      20    9331    9643

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base  400K  400017     157     292     762      39    9287    9325
   defer10  400K  400033      71     141     204      53   13950   12943
   defer20  400K  399935      79     150     212      47   12027   11673
   defer50  400K  399888     101     171     231      39    9556    9921
  defer200  400K  399993     200     287     358      32    7428    8576
  fullbusy  400K  400018      63     132     203     100   21827   16062
  napibusy  400K  399970      89     230     292      83   18156   16508
 suspend10  400K  400061      69     139     202      54   13576   13057
 suspend20  400K  399988      73     144     206      49   11930   11773
 suspend50  400K  399975      88     161     218      42    9996   10270
suspend200  400K  399954     172     276     353      34    7847    8713

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base  600K  600031     166     289     631      61    9188    8787
   defer10  600K  599967      85     167     262      75   11833   10947
   defer20  600K  599888      89     165     243      66   10513   10362
   defer50  600K  600072     109     185     253      55    8664    9190
  defer200  600K  599951     222     315     393      45    6892    8213
  fullbusy  600K  600041      69     145     227     100   14549   13936
  napibusy  600K  599980      79     188     280      96   13927   14155
 suspend10  600K  600028      78     159     267      69   10877   11032
 suspend20  600K  600026      81     159     254      64    9922   10320
 suspend50  600K  600007      96     178     258      57    8681    9331
suspend200  600K  599964     177     295     369      47    7115    8366

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base  800K  800034     198     329     698      84    9366    8338
   defer10  800K  799718     243     642    1457      95   10532    9007
   defer20  800K  800009     132     245     399      89    9956    8979
   defer50  800K  800024     136     228     378      80    9002    8598
  defer200  800K  799965     255     362     473      66    7481    8147
  fullbusy  800K  799927      78     157     253     100   10915   12533
  napibusy  800K  799870      81     173     273      99   10826   12532
 suspend10  800K  799991      84     167     269      83    9380    9802
 suspend20  800K  799979      90     172     290      78    8765    9404
 suspend50  800K  800031     106     191     307      71    7945    8805
suspend200  800K  799905     182     307     411      62    6985    8242

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base 1000K  919543    3805    6390   14229      98    9324    7978
   defer10 1000K  850751    4574    7382   15370      99   10218    8470
   defer20 1000K  890296    4736    6862   14858      99    9708    8277
   defer50 1000K  932694    3463    6180   13251      97    9148    8053
  defer200 1000K  951311    3524    6052   13599      96    8875    7845
  fullbusy 1000K 1000011      90     181     278     100    8731   10686
  napibusy 1000K 1000050      93     184     280     100    8721   10547
 suspend10 1000K  999962     101     193     306      92    8138    8980
 suspend20 1000K 1000030     103     191     324      88    7844    8763
 suspend50 1000K 1000001     114     202     320      83    7396    8431
suspend200 1000K  999965     185     314     428      76    6733    8072

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base   MAX 1005592    4651    6594   14979     100    8679    7918
   defer10   MAX  928204    5106    7286   15199     100    9398    8380
   defer20   MAX  984663    4774    6518   14920     100    8861    8063
   defer50   MAX 1044099    4431    6368   14652     100    8350    7948
  defer200   MAX 1040451    4423    6610   14674     100    8380    7931
  fullbusy   MAX 1236608    3715    3987   12805     100    7051    7936
  napibusy   MAX 1077516    4345   10155   15957     100    8080    7842
 suspend10   MAX 1218344    3760    3990   12585     100    7150    7935
 suspend20   MAX 1220056    3752    4053   12602     100    7150    7961
 suspend50   MAX 1213666    3791    4103   12919     100    7183    7959
suspend200   MAX 1217411    3768    3988   12863     100    7161    7954

~ FAQ

  - Can the new timeout value be threaded through the new epoll ioctl ?

    Only with difficulty. The epoll ioctl sets options on an epoll
    context and the NAPI ID associated with an epoll context can change
    based on what file descriptors a user app adds to the epoll context.
    This would introduce complexity in the API from the user perspective
    and also complexity in the kernel.

  - Can irq suspend be built by combining NIC coalescing and
    gro_flush_timeout ?

    No. The problem is that the long timeout must engage if and only if
    prefer-busy is active.

    When using NIC coalescing for the short timeout (without
    napi_defer_hard_irqs/gro_flush_timeout), an interrupt after an idle
    period will trigger softirq, which will run napi polling. At this
    point, prefer-busy is not active, so NIC interrupts would be
    re-enabled. Then it is not possible for the longer timeout to
    interject to switch control back to polling. In other words, only by
    using the software timer for the short timeout, it is possible to
    extend the timeout without having to reprogram the NIC timer or
    reach down directly and disable interrupts.

    Using gro_flush_timeout for the long timeout also has problems, for
    the same underlying reason. In the current napi implementation,
    gro_flush_timeout is not tied to prefer-busy. We'd either have to
    change that and in the process modify the existing deferral
    mechanism, or introduce a state variable to determine whether
    gro_flush_timeout is used as long timeout for irq suspend or whether
    it is used for its default purpose. In an earlier version, we did
    try something similar to the latter and made it work, but it ends up
    being a lot more convoluted than our current proposal.

  - Isn't it already possible to combine busy looping with irq deferral?

    Yes, in fact enabling irq deferral via napi_defer_hard_irqs and
    gro_flush_timeout is a precondition for prefer_busy_poll to have an
    effect. If the application also uses a tight busy loop with
    essentially nonblocking epoll_wait (accomplished with a very short
    timeout parameter), this is the fullbusy case shown in the results.
    An application using blocking epoll_wait is shown as the napibusy
    case in the result. It's a hybrid approach that provides limited
    latency benefits compared to the base case and plain irq deferral,
    but not as good as fullbusy or suspend.

~ Special thanks

Several people were involved in earlier stages of the development of this
mechanism whom we'd like to thank:

  - Peter Cai (CC'd), for the initial kernel patch and his contributions
    to the paper.
    
  - Mohammadamin Shafie (CC'd), for testing various versions of the kernel
    patch and providing helpful feedback.

Thanks,
Martin and Joe

[1]: https://lore.kernel.org/netdev/20240812125717.413108-1-jdamato@fastly.com/
[2]: https://doi.org/10.1145/3626780
[3]: https://github.com/memcached/memcached/blob/master/doc/napi_ids.txt
[4]: https://github.com/leverich/mutilate
[5]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/memcached.patch
[6]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/libevent.patch
[7]: https://github.com/martinkarsten/irqsuspend
[8]: https://github.com/martinkarsten/irqsuspend/tree/main/results

v2:
  - Cover letter updated, including a re-run of test data.
  - Patch 1 rewritten to use netdev-genl instead of sysfs.
  - Patch 3 updated with a comment added to napi_resume_irqs.
  - Patch 4 rebased to apply now that commit b9ca079dd6b0 ("eventpoll:
    Annotate data-race of busy_poll_usecs") has been picked up from VFS.
  - Patch 6 updated the kernel documentation.

rfc -> v1: https://lore.kernel.org/netdev/20240823173103.94978-1-jdamato@fastly.com/
  - Cover letter updated to include more details.
  - Patch 1 updated to remove the documentation added. This was moved to
    patch 6 with the rest of the docs (see below).
  - Patch 5 updated to fix an error uncovered by the kernel build robot.
    See patch 5's changelog for more details.
  - Patch 6 added which updates kernel documentation.

Joe Damato (1):
  docs: networking: Describe irq suspension

Martin Karsten (5):
  net: Add napi_struct parameter irq_suspend_timeout
  net: Suspend softirq when prefer_busy_poll is set
  net: Add control functions for irq suspension
  eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set
  eventpoll: Control irq suspension for prefer_busy_poll

 Documentation/netlink/specs/netdev.yaml |   7 ++
 Documentation/networking/napi.rst       | 132 +++++++++++++++++++++++-
 fs/eventpoll.c                          |  35 ++++++-
 include/linux/netdevice.h               |   2 +
 include/net/busy_poll.h                 |   3 +
 include/uapi/linux/netdev.h             |   1 +
 net/core/dev.c                          |  58 ++++++++++-
 net/core/dev.h                          |  25 +++++
 net/core/netdev-genl-gen.c              |   5 +-
 net/core/netdev-genl.c                  |  12 +++
 tools/include/uapi/linux/netdev.h       |   1 +
 11 files changed, 271 insertions(+), 10 deletions(-)


base-commit: 160a810b2a8588187ec2b1536d0355c0aab8981c
-- 
2.25.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH net-next v2 1/6] net: Add napi_struct parameter irq_suspend_timeout
  2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
@ 2024-10-21  1:52 ` Joe Damato
  2024-10-21  1:52 ` [PATCH net-next v2 2/6] net: Suspend softirq when prefer_busy_poll is set Joe Damato
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Joe Damato @ 2024-10-21  1:52 UTC (permalink / raw)
  To: netdev
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Martin Karsten, Joe Damato, David S. Miller,
	Paolo Abeni, Donald Hunter, Jesper Dangaard Brouer, Mina Almasry,
	Xuan Zhuo, Jiri Pirko, Sebastian Andrzej Siewior,
	Lorenzo Bianconi, Alexander Lobakin, Johannes Berg,
	Daniel Jurgens, open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

Add a per-NAPI IRQ suspension parameter, which can be get/set with
netdev-genl.

This patch doesn't change any behavior but prepares the code for other
changes in the following commits which use irq_suspend_timeout as a
timeout for IRQ suspension.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 v1 -> v2:
   - Rewrote this patch to make irq_suspend_timeout per-NAPI via
     netdev-genl.

 rfc -> v1:
   - Removed napi.rst documentation from this patch; added to patch 6.

 Documentation/netlink/specs/netdev.yaml |  7 +++++++
 include/linux/netdevice.h               |  2 ++
 include/uapi/linux/netdev.h             |  1 +
 net/core/dev.c                          |  2 ++
 net/core/dev.h                          | 25 +++++++++++++++++++++++++
 net/core/netdev-genl-gen.c              |  5 +++--
 net/core/netdev-genl.c                  | 12 ++++++++++++
 tools/include/uapi/linux/netdev.h       |  1 +
 8 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index f9cb97d6106c..cbb544bd6c84 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -263,6 +263,11 @@ attribute-sets:
              the end of a NAPI cycle. This may add receive latency in exchange
              for reducing the number of frames processed by the network stack.
         type: uint
+      -
+        name: irq-suspend-timeout
+        doc: The timeout, in nanoseconds, of how long to suspend irq
+             processing, if event polling finds events
+        type: uint
   -
     name: queue
     attributes:
@@ -653,6 +658,7 @@ operations:
             - pid
             - defer-hard-irqs
             - gro-flush-timeout
+            - irq-suspend-timeout
       dump:
         request:
           attributes:
@@ -704,6 +710,7 @@ operations:
             - id
             - defer-hard-irqs
             - gro-flush-timeout
+            - irq-suspend-timeout
 
 kernel-family:
   headers: [ "linux/list.h"]
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 8feaca12655e..9c3be8455707 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -347,6 +347,7 @@ struct gro_list {
  */
 struct napi_config {
 	u64 gro_flush_timeout;
+	u64 irq_suspend_timeout;
 	u32 defer_hard_irqs;
 	unsigned int napi_id;
 };
@@ -383,6 +384,7 @@ struct napi_struct {
 	struct hrtimer		timer;
 	struct task_struct	*thread;
 	unsigned long		gro_flush_timeout;
+	unsigned long		irq_suspend_timeout;
 	u32			defer_hard_irqs;
 	/* control-path-only fields follow */
 	struct list_head	dev_list;
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index e3ebb49f60d2..e4be227d3ad6 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -124,6 +124,7 @@ enum {
 	NETDEV_A_NAPI_PID,
 	NETDEV_A_NAPI_DEFER_HARD_IRQS,
 	NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+	NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 
 	__NETDEV_A_NAPI_MAX,
 	NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
diff --git a/net/core/dev.c b/net/core/dev.c
index c682173a7642..68d0b16c2f2c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6662,6 +6662,7 @@ static void napi_restore_config(struct napi_struct *n)
 {
 	n->defer_hard_irqs = n->config->defer_hard_irqs;
 	n->gro_flush_timeout = n->config->gro_flush_timeout;
+	n->irq_suspend_timeout = n->config->irq_suspend_timeout;
 	/* a NAPI ID might be stored in the config, if so use it. if not, use
 	 * napi_hash_add to generate one for us. It will be saved to the config
 	 * in napi_disable.
@@ -6676,6 +6677,7 @@ static void napi_save_config(struct napi_struct *n)
 {
 	n->config->defer_hard_irqs = n->defer_hard_irqs;
 	n->config->gro_flush_timeout = n->gro_flush_timeout;
+	n->config->irq_suspend_timeout = n->irq_suspend_timeout;
 	n->config->napi_id = n->napi_id;
 	napi_hash_del(n);
 }
diff --git a/net/core/dev.h b/net/core/dev.h
index 7881bced70a9..d043dee25a68 100644
--- a/net/core/dev.h
+++ b/net/core/dev.h
@@ -236,6 +236,31 @@ static inline void netdev_set_gro_flush_timeout(struct net_device *netdev,
 		netdev->napi_config[i].gro_flush_timeout = timeout;
 }
 
+/**
+ * napi_get_irq_suspend_timeout - get the irq_suspend_timeout
+ * @n: napi struct to get the irq_suspend_timeout from
+ *
+ * Return: the per-NAPI value of the irq_suspend_timeout field.
+ */
+static inline unsigned long
+napi_get_irq_suspend_timeout(const struct napi_struct *n)
+{
+	return READ_ONCE(n->irq_suspend_timeout);
+}
+
+/**
+ * napi_set_irq_suspend_timeout - set the irq_suspend_timeout for a napi
+ * @n: napi struct to set the irq_suspend_timeout
+ * @timeout: timeout value to set
+ *
+ * napi_set_irq_suspend_timeout sets the per-NAPI irq_suspend_timeout
+ */
+static inline void napi_set_irq_suspend_timeout(struct napi_struct *n,
+						unsigned long timeout)
+{
+	WRITE_ONCE(n->irq_suspend_timeout, timeout);
+}
+
 int rps_cpumask_housekeeping(struct cpumask *mask);
 
 #if defined(CONFIG_DEBUG_NET) && defined(CONFIG_BPF_SYSCALL)
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 21de7e10be16..a89cbd8d87c3 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -92,10 +92,11 @@ static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_DMABUF_FD + 1]
 };
 
 /* NETDEV_CMD_NAPI_SET - do */
-static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT + 1] = {
+static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT + 1] = {
 	[NETDEV_A_NAPI_ID] = { .type = NLA_U32, },
 	[NETDEV_A_NAPI_DEFER_HARD_IRQS] = NLA_POLICY_FULL_RANGE(NLA_U32, &netdev_a_napi_defer_hard_irqs_range),
 	[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT] = { .type = NLA_UINT, },
+	[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT] = { .type = NLA_UINT, },
 };
 
 /* Ops table for netdev */
@@ -186,7 +187,7 @@ static const struct genl_split_ops netdev_nl_ops[] = {
 		.cmd		= NETDEV_CMD_NAPI_SET,
 		.doit		= netdev_nl_napi_set_doit,
 		.policy		= netdev_napi_set_nl_policy,
-		.maxattr	= NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+		.maxattr	= NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 		.flags		= GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
 	},
 };
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index b49c3b4e5fbe..765ce7c9d73b 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -161,6 +161,7 @@ static int
 netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
 			const struct genl_info *info)
 {
+	unsigned long irq_suspend_timeout;
 	unsigned long gro_flush_timeout;
 	u32 napi_defer_hard_irqs;
 	void *hdr;
@@ -196,6 +197,11 @@ netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
 			napi_defer_hard_irqs))
 		goto nla_put_failure;
 
+	irq_suspend_timeout = napi_get_irq_suspend_timeout(napi);
+	if (nla_put_uint(rsp, NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
+			 irq_suspend_timeout))
+		goto nla_put_failure;
+
 	gro_flush_timeout = napi_get_gro_flush_timeout(napi);
 	if (nla_put_uint(rsp, NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
 			 gro_flush_timeout))
@@ -306,6 +312,7 @@ int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 static int
 netdev_nl_napi_set_config(struct napi_struct *napi, struct genl_info *info)
 {
+	u64 irq_suspend_timeout = 0;
 	u64 gro_flush_timeout = 0;
 	u32 defer = 0;
 
@@ -314,6 +321,11 @@ netdev_nl_napi_set_config(struct napi_struct *napi, struct genl_info *info)
 		napi_set_defer_hard_irqs(napi, defer);
 	}
 
+	if (info->attrs[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT]) {
+		irq_suspend_timeout = nla_get_uint(info->attrs[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT]);
+		napi_set_irq_suspend_timeout(napi, irq_suspend_timeout);
+	}
+
 	if (info->attrs[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT]) {
 		gro_flush_timeout = nla_get_uint(info->attrs[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT]);
 		napi_set_gro_flush_timeout(napi, gro_flush_timeout);
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index e3ebb49f60d2..e4be227d3ad6 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -124,6 +124,7 @@ enum {
 	NETDEV_A_NAPI_PID,
 	NETDEV_A_NAPI_DEFER_HARD_IRQS,
 	NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+	NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 
 	__NETDEV_A_NAPI_MAX,
 	NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v2 2/6] net: Suspend softirq when prefer_busy_poll is set
  2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
  2024-10-21  1:52 ` [PATCH net-next v2 1/6] net: Add napi_struct parameter irq_suspend_timeout Joe Damato
@ 2024-10-21  1:52 ` Joe Damato
  2024-10-21  1:52 ` [PATCH net-next v2 3/6] net: Add control functions for irq suspension Joe Damato
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Joe Damato @ 2024-10-21  1:52 UTC (permalink / raw)
  To: netdev
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Martin Karsten, Joe Damato, David S. Miller,
	Paolo Abeni, Jiri Pirko, Sebastian Andrzej Siewior,
	Lorenzo Bianconi, Alexander Lobakin, open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

When NAPI_F_PREFER_BUSY_POLL is set during busy_poll_stop and the
irq_suspend_timeout sysfs is nonzero, this timeout is used to defer
softirq scheduling, potentially longer than gro_flush_timeout. This can
be used to effectively suspend softirq processing during the time it
takes for an application to process data and return to the next
busy-poll.

The call to napi->poll in busy_poll_stop might lead to an invocation of
napi_complete_done, but the prefer-busy flag is still set at that time,
so the same logic is used to defer softirq scheduling for
irq_suspend_timeout.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 net/core/dev.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 68d0b16c2f2c..a388303b26b9 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6235,7 +6235,12 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
 			timeout = napi_get_gro_flush_timeout(n);
 		n->defer_hard_irqs_count = napi_get_defer_hard_irqs(n);
 	}
-	if (n->defer_hard_irqs_count > 0) {
+	if (napi_prefer_busy_poll(n)) {
+		timeout = napi_get_irq_suspend_timeout(n);
+		if (timeout)
+			ret = false;
+	}
+	if (ret && n->defer_hard_irqs_count > 0) {
 		n->defer_hard_irqs_count--;
 		timeout = napi_get_gro_flush_timeout(n);
 		if (timeout)
@@ -6371,9 +6376,13 @@ static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock,
 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 
 	if (flags & NAPI_F_PREFER_BUSY_POLL) {
-		napi->defer_hard_irqs_count = napi_get_defer_hard_irqs(napi);
-		timeout = napi_get_gro_flush_timeout(napi);
-		if (napi->defer_hard_irqs_count && timeout) {
+		timeout = napi_get_irq_suspend_timeout(napi);
+		if (!timeout) {
+			napi->defer_hard_irqs_count = napi_get_defer_hard_irqs(napi);
+			if (napi->defer_hard_irqs_count)
+				timeout = napi_get_gro_flush_timeout(napi);
+		}
+		if (timeout) {
 			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
 			skip_schedule = true;
 		}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v2 3/6] net: Add control functions for irq suspension
  2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
  2024-10-21  1:52 ` [PATCH net-next v2 1/6] net: Add napi_struct parameter irq_suspend_timeout Joe Damato
  2024-10-21  1:52 ` [PATCH net-next v2 2/6] net: Suspend softirq when prefer_busy_poll is set Joe Damato
@ 2024-10-21  1:52 ` Joe Damato
  2024-10-21  1:52 ` [PATCH net-next v2 4/6] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Joe Damato @ 2024-10-21  1:52 UTC (permalink / raw)
  To: netdev
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Martin Karsten, Joe Damato, David S. Miller,
	Paolo Abeni, Jiri Pirko, Sebastian Andrzej Siewior,
	Lorenzo Bianconi, Alexander Lobakin, open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

The napi_suspend_irqs routine bootstraps irq suspension by elongating
the defer timeout to irq_suspend_timeout.

The napi_resume_irqs routine effectively cancels irq suspension by
forcing the napi to be scheduled immediately.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 v1 -> v2:
   - Added a comment to napi_resume_irqs.
  
 include/net/busy_poll.h |  3 +++
 net/core/dev.c          | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
index f03040baaefd..c858270141bc 100644
--- a/include/net/busy_poll.h
+++ b/include/net/busy_poll.h
@@ -52,6 +52,9 @@ void napi_busy_loop_rcu(unsigned int napi_id,
 			bool (*loop_end)(void *, unsigned long),
 			void *loop_end_arg, bool prefer_busy_poll, u16 budget);
 
+void napi_suspend_irqs(unsigned int napi_id);
+void napi_resume_irqs(unsigned int napi_id);
+
 #else /* CONFIG_NET_RX_BUSY_POLL */
 static inline unsigned long net_busy_loop_on(void)
 {
diff --git a/net/core/dev.c b/net/core/dev.c
index a388303b26b9..143b3e690169 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6512,6 +6512,45 @@ void napi_busy_loop(unsigned int napi_id,
 }
 EXPORT_SYMBOL(napi_busy_loop);
 
+void napi_suspend_irqs(unsigned int napi_id)
+{
+	struct napi_struct *napi;
+
+	rcu_read_lock();
+	napi = napi_by_id(napi_id);
+	if (napi) {
+		unsigned long timeout = napi_get_irq_suspend_timeout(napi);
+
+		if (timeout)
+			hrtimer_start(&napi->timer, ns_to_ktime(timeout),
+				      HRTIMER_MODE_REL_PINNED);
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(napi_suspend_irqs);
+
+void napi_resume_irqs(unsigned int napi_id)
+{
+	struct napi_struct *napi;
+
+	rcu_read_lock();
+	napi = napi_by_id(napi_id);
+	if (napi) {
+		/* If irq_suspend_timeout is set to 0 between the call to
+		 * napi_suspend_irqs and now, the original value still
+		 * determines the safety timeout as intended and napi_watchdog
+		 * will resume irq processing.
+		 */
+		if (napi_get_irq_suspend_timeout(napi)) {
+			local_bh_disable();
+			napi_schedule(napi);
+			local_bh_enable();
+		}
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(napi_resume_irqs);
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 static void __napi_hash_add_with_id(struct napi_struct *napi,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v2 4/6] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set
  2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
                   ` (2 preceding siblings ...)
  2024-10-21  1:52 ` [PATCH net-next v2 3/6] net: Add control functions for irq suspension Joe Damato
@ 2024-10-21  1:52 ` Joe Damato
  2024-10-21  1:53 ` [PATCH net-next v2 5/6] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Joe Damato @ 2024-10-21  1:52 UTC (permalink / raw)
  To: netdev
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Martin Karsten, Joe Damato, Alexander Viro,
	Christian Brauner, Jan Kara,
	open list:FILESYSTEMS (VFS and infrastructure), open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

Setting prefer_busy_poll now leads to an effectively nonblocking
iteration though napi_busy_loop, even when busy_poll_usecs is 0.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 v1 -> v2:
   - Rebased to apply now that commit b9ca079dd6b0 ("eventpoll: Annotate
     data-race of busy_poll_usecs") has been picked up from VFS.

 fs/eventpoll.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 1ae4542f0bd8..f9e0d9307dad 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -420,7 +420,9 @@ static bool busy_loop_ep_timeout(unsigned long start_time,
 
 static bool ep_busy_loop_on(struct eventpoll *ep)
 {
-	return !!READ_ONCE(ep->busy_poll_usecs) || net_busy_loop_on();
+	return !!READ_ONCE(ep->busy_poll_usecs) ||
+	       READ_ONCE(ep->prefer_busy_poll) ||
+	       net_busy_loop_on();
 }
 
 static bool ep_busy_loop_end(void *p, unsigned long start_time)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v2 5/6] eventpoll: Control irq suspension for prefer_busy_poll
  2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
                   ` (3 preceding siblings ...)
  2024-10-21  1:52 ` [PATCH net-next v2 4/6] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
@ 2024-10-21  1:53 ` Joe Damato
  2024-10-21  1:53 ` [PATCH net-next v2 6/6] docs: networking: Describe irq suspension Joe Damato
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Joe Damato @ 2024-10-21  1:53 UTC (permalink / raw)
  To: netdev
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Martin Karsten, Joe Damato, Alexander Viro,
	Christian Brauner, Jan Kara,
	open list:FILESYSTEMS (VFS and infrastructure), open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

When events are reported to userland and prefer_busy_poll is set, irqs
are temporarily suspended using napi_suspend_irqs.

If no events are found and ep_poll would go to sleep, irq suspension is
cancelled using napi_resume_irqs.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 rfc -> v1:
   - move irq resume code from ep_free to a helper which either resumes
     IRQs or does nothing if !defined(CONFIG_NET_RX_BUSY_POLL).

 fs/eventpoll.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index f9e0d9307dad..36a657594352 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -457,6 +457,8 @@ static bool ep_busy_loop(struct eventpoll *ep, int nonblock)
 		 * it back in when we have moved a socket with a valid NAPI
 		 * ID onto the ready list.
 		 */
+		if (prefer_busy_poll)
+			napi_resume_irqs(napi_id);
 		ep->napi_id = 0;
 		return false;
 	}
@@ -540,6 +542,22 @@ static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
 	}
 }
 
+static void ep_suspend_napi_irqs(struct eventpoll *ep)
+{
+	unsigned int napi_id = READ_ONCE(ep->napi_id);
+
+	if (napi_id >= MIN_NAPI_ID && READ_ONCE(ep->prefer_busy_poll))
+		napi_suspend_irqs(napi_id);
+}
+
+static void ep_resume_napi_irqs(struct eventpoll *ep)
+{
+	unsigned int napi_id = READ_ONCE(ep->napi_id);
+
+	if (napi_id >= MIN_NAPI_ID && READ_ONCE(ep->prefer_busy_poll))
+		napi_resume_irqs(napi_id);
+}
+
 #else
 
 static inline bool ep_busy_loop(struct eventpoll *ep, int nonblock)
@@ -557,6 +575,14 @@ static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
 	return -EOPNOTSUPP;
 }
 
+static void ep_suspend_napi_irqs(struct eventpoll *ep)
+{
+}
+
+static void ep_resume_napi_irqs(struct eventpoll *ep)
+{
+}
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 /*
@@ -788,6 +814,7 @@ static bool ep_refcount_dec_and_test(struct eventpoll *ep)
 
 static void ep_free(struct eventpoll *ep)
 {
+	ep_resume_napi_irqs(ep);
 	mutex_destroy(&ep->mtx);
 	free_uid(ep->user);
 	wakeup_source_unregister(ep->ws);
@@ -2005,8 +2032,10 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 			 * trying again in search of more luck.
 			 */
 			res = ep_send_events(ep, events, maxevents);
-			if (res)
+			if (res) {
+				ep_suspend_napi_irqs(ep);
 				return res;
+			}
 		}
 
 		if (timed_out)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH net-next v2 6/6] docs: networking: Describe irq suspension
  2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
                   ` (4 preceding siblings ...)
  2024-10-21  1:53 ` [PATCH net-next v2 5/6] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
@ 2024-10-21  1:53 ` Joe Damato
  2024-10-21 10:49   ` Bagas Sanjaya
  2024-10-24 23:05 ` [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Stanislav Fomichev
  2024-10-29 10:25 ` Paolo Abeni
  7 siblings, 1 reply; 14+ messages in thread
From: Joe Damato @ 2024-10-21  1:53 UTC (permalink / raw)
  To: netdev
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Joe Damato, Martin Karsten, David S. Miller,
	Paolo Abeni, Jonathan Corbet, open list:DOCUMENTATION, open list,
	open list:BPF [MISC]:Keyword:(?:b|_)bpf(?:b|_)

Describe irq suspension, the epoll ioctls, and the tradeoffs of using
different gro_flush_timeout values.

Signed-off-by: Joe Damato <jdamato@fastly.com>
Co-developed-by: Martin Karsten <mkarsten@uwaterloo.ca>
Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 v1 -> v2:
   - Updated documentation to describe the per-NAPI configuration
     parameters.

 Documentation/networking/napi.rst | 132 +++++++++++++++++++++++++++++-
 1 file changed, 130 insertions(+), 2 deletions(-)

diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
index dfa5d549be9c..3b43477a52ce 100644
--- a/Documentation/networking/napi.rst
+++ b/Documentation/networking/napi.rst
@@ -192,6 +192,28 @@ is reused to control the delay of the timer, while
 ``napi_defer_hard_irqs`` controls the number of consecutive empty polls
 before NAPI gives up and goes back to using hardware IRQs.
 
+The above parameters can also be set on a per-NAPI basis using netlink via
+netdev-genl. This can be done programmatically in a user application or by
+using a script included in the kernel source tree: ``tools/net/ynl/cli.py``.
+
+For example, using the script:
+
+.. code-block:: bash
+
+  $ kernel-source/tools/net/ynl/cli.py \
+            --spec Documentation/netlink/specs/netdev.yaml \
+            --do napi-set \
+            --json='{"id": 345,
+                     "defer-hard-irqs": 111,
+                     "gro-flush-timeout": 11111}'
+
+Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
+via netdev-genl. There is no global sysfs parameter for this value.
+
+``irq_suspend_timeout`` is used to determine how long an application can
+completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
+which can be set on a per-epoll context basis with ``EPIOCSPARAMS`` ioctl.
+
 .. _poll:
 
 Busy polling
@@ -207,6 +229,46 @@ selected sockets or using the global ``net.core.busy_poll`` and
 ``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
 also exists.
 
+epoll-based busy polling
+------------------------
+
+It is possible to trigger packet processing directly from calls to
+``epoll_wait``. In order to use this feature, a user application must ensure
+all file descriptors which are added to an epoll context have the same NAPI ID.
+
+If the application uses a dedicated acceptor thread, the application can obtain
+the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
+distribute that file descriptor to a worker thread. The worker thread would add
+the file descriptor to its epoll context. This would ensure each worker thread
+has an epoll context with FDs that have the same NAPI ID.
+
+Alternatively, if the application uses SO_REUSEPORT, a bpf or ebpf program be
+inserted to distribute incoming connections to threads such that each thread is
+only given incoming connections with the same NAPI ID. Care must be taken to
+carefully handle cases where a system may have multiple NICs.
+
+In order to enable busy polling, there are two choices:
+
+1. ``/proc/sys/net/core/busy_poll`` can be set with a time in useconds to busy
+   loop waiting for events. This is a system-wide setting and will cause all
+   epoll-based applications to busy poll when they call epoll_wait. This may
+   not be desirable as many applications may not have the need to busy poll.
+
+2. Applications using recent kernels can issue an ioctl on the epoll context
+   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``) ``struct
+   epoll_params``:, which user programs can define as follows:
+
+.. code-block:: c
+
+  struct epoll_params {
+      uint32_t busy_poll_usecs;
+      uint16_t busy_poll_budget;
+      uint8_t prefer_busy_poll;
+
+      /* pad the struct to a multiple of 64bits */
+      uint8_t __pad;
+  };
+
 IRQ mitigation
 ---------------
 
@@ -222,12 +284,78 @@ Such applications can pledge to the kernel that they will perform a busy
 polling operation periodically, and the driver should keep the device IRQs
 permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
 socket option. To avoid system misbehavior the pledge is revoked
-if ``gro_flush_timeout`` passes without any busy poll call.
+if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
+busy polling applications, the ``prefer_busy_poll`` field of ``struct
+epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
+enable this mode. See the above section for more details.
 
 The NAPI budget for busy polling is lower than the default (which makes
 sense given the low latency intention of normal busy polling). This is
 not the case with IRQ mitigation, however, so the budget can be adjusted
-with the ``SO_BUSY_POLL_BUDGET`` socket option.
+with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
+applications, the ``busy_poll_budget`` field can be adjusted to the desired value
+in ``struct epoll_params`` and set on a specific epoll context using the ``EPIOCSPARAMS``
+ioctl. See the above section for more details.
+
+It is important to note that choosing a large value for ``gro_flush_timeout``
+will defer IRQs to allow for better batch processing, but will induce latency
+when the system is not fully loaded. Choosing a small value for
+``gro_flush_timeout`` can cause interference of the user application which is
+attempting to busy poll by device IRQs and softirq processing. This value
+should be chosen carefully with these tradeoffs in mind. epoll-based busy
+polling applications may be able to mitigate how much user processing happens
+by choosing an appropriate value for ``maxevents``.
+
+Users may want to consider an alternate approach, IRQ suspension, to help deal
+with these tradeoffs.
+
+IRQ suspension
+--------------
+
+IRQ suspension is a mechanism wherein device IRQs are masked while epoll
+triggers NAPI packet processing.
+
+While application calls to epoll_wait successfully retrieve events, the kernel will
+defer the IRQ suspension timer. If the kernel does not retrieve any events
+while busy polling (for example, because network traffic levels subsided), IRQ
+suspension is disabled and the IRQ mitigation strategies described above are
+engaged.
+
+This allows users to balance CPU consumption with network processing
+efficiency.
+
+To use this mechanism:
+
+  1. The per-NAPI config parameter ``irq_suspend_timeout`` should be set to the
+     maximum time (in nanoseconds) the application can have its IRQs
+     suspended. This is done using netlink, as described above. This timeout
+     serves as a safety mechanism to restart IRQ driver interrupt processing if
+     the application has stalled. This value should be chosen so that it covers
+     the amount of time the user application needs to process data from its
+     call to epoll_wait, noting that applications can control how much data
+     they retrieve by setting ``max_events`` when calling epoll_wait.
+
+  2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
+     and ``napi_defer_hard_irqs`` can be set to low values. They will be used
+     to defer IRQs after busy poll has found no data.
+
+  3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
+     the ``EPIOCSPARAMS`` ioctl as described above.
+
+  4. The application uses epoll as described above to trigger NAPI packet
+     processing.
+
+As mentioned above, as long as subsequent calls to epoll_wait return events to
+userland, the ``irq_suspend_timeout`` is deferred and IRQs are disabled. This
+allows the application to process data without interference.
+
+Once a call to epoll_wait results in no events being found, IRQ suspension is
+automatically disabled and the ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` mitigation mechanisms take over.
+
+It is expected that ``irq_suspend_timeout`` will be set to a value much larger
+than ``gro_flush_timeout`` as ``irq_suspend_timeout`` should suspend IRQs for
+the duration of one userland processing cycle.
 
 .. _threaded:
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v2 6/6] docs: networking: Describe irq suspension
  2024-10-21  1:53 ` [PATCH net-next v2 6/6] docs: networking: Describe irq suspension Joe Damato
@ 2024-10-21 10:49   ` Bagas Sanjaya
  2024-10-21 16:33     ` Joe Damato
  0 siblings, 1 reply; 14+ messages in thread
From: Bagas Sanjaya @ 2024-10-21 10:49 UTC (permalink / raw)
  To: Joe Damato, Linux Networking
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Martin Karsten, David S. Miller, Paolo Abeni,
	Jonathan Corbet, Linux Documentation, Linux Kernel Mailing List,
	Linux BPF

[-- Attachment #1: Type: text/plain, Size: 8482 bytes --]

On Mon, Oct 21, 2024 at 01:53:01AM +0000, Joe Damato wrote:
> diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
> index dfa5d549be9c..3b43477a52ce 100644
> --- a/Documentation/networking/napi.rst
> +++ b/Documentation/networking/napi.rst
> @@ -192,6 +192,28 @@ is reused to control the delay of the timer, while
>  ``napi_defer_hard_irqs`` controls the number of consecutive empty polls
>  before NAPI gives up and goes back to using hardware IRQs.
>  
> +The above parameters can also be set on a per-NAPI basis using netlink via
> +netdev-genl. This can be done programmatically in a user application or by
> +using a script included in the kernel source tree: ``tools/net/ynl/cli.py``.
> +
> +For example, using the script:
> +
> +.. code-block:: bash
> +
> +  $ kernel-source/tools/net/ynl/cli.py \
> +            --spec Documentation/netlink/specs/netdev.yaml \
> +            --do napi-set \
> +            --json='{"id": 345,
> +                     "defer-hard-irqs": 111,
> +                     "gro-flush-timeout": 11111}'
> +
> +Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
> +via netdev-genl. There is no global sysfs parameter for this value.

In JSON, both gro-flush-timeout and irq-suspend-timeout parameter
names are written in hyphens; but the rest of the docs uses underscores
(that is, gro_flush_timeout and irq_suspend_timeout), right?

> +
> +``irq_suspend_timeout`` is used to determine how long an application can
> +completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
> +which can be set on a per-epoll context basis with ``EPIOCSPARAMS`` ioctl.
> +
>  .. _poll:
>  
>  Busy polling
> @@ -207,6 +229,46 @@ selected sockets or using the global ``net.core.busy_poll`` and
>  ``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
>  also exists.
>  
> +epoll-based busy polling
> +------------------------
> +
> +It is possible to trigger packet processing directly from calls to
> +``epoll_wait``. In order to use this feature, a user application must ensure
> +all file descriptors which are added to an epoll context have the same NAPI ID.
> +
> +If the application uses a dedicated acceptor thread, the application can obtain
> +the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
> +distribute that file descriptor to a worker thread. The worker thread would add
> +the file descriptor to its epoll context. This would ensure each worker thread
> +has an epoll context with FDs that have the same NAPI ID.
> +
> +Alternatively, if the application uses SO_REUSEPORT, a bpf or ebpf program be
> +inserted to distribute incoming connections to threads such that each thread is
> +only given incoming connections with the same NAPI ID. Care must be taken to
> +carefully handle cases where a system may have multiple NICs.
> +
> +In order to enable busy polling, there are two choices:
> +
> +1. ``/proc/sys/net/core/busy_poll`` can be set with a time in useconds to busy
> +   loop waiting for events. This is a system-wide setting and will cause all
> +   epoll-based applications to busy poll when they call epoll_wait. This may
> +   not be desirable as many applications may not have the need to busy poll.
> +
> +2. Applications using recent kernels can issue an ioctl on the epoll context
> +   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``) ``struct
> +   epoll_params``:, which user programs can define as follows:
> +
> +.. code-block:: c
> +
> +  struct epoll_params {
> +      uint32_t busy_poll_usecs;
> +      uint16_t busy_poll_budget;
> +      uint8_t prefer_busy_poll;
> +
> +      /* pad the struct to a multiple of 64bits */
> +      uint8_t __pad;
> +  };
> +
>  IRQ mitigation
>  ---------------
>  
> @@ -222,12 +284,78 @@ Such applications can pledge to the kernel that they will perform a busy
>  polling operation periodically, and the driver should keep the device IRQs
>  permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
>  socket option. To avoid system misbehavior the pledge is revoked
> -if ``gro_flush_timeout`` passes without any busy poll call.
> +if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
> +busy polling applications, the ``prefer_busy_poll`` field of ``struct
> +epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
> +enable this mode. See the above section for more details.
>  
>  The NAPI budget for busy polling is lower than the default (which makes
>  sense given the low latency intention of normal busy polling). This is
>  not the case with IRQ mitigation, however, so the budget can be adjusted
> -with the ``SO_BUSY_POLL_BUDGET`` socket option.
> +with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
> +applications, the ``busy_poll_budget`` field can be adjusted to the desired value
> +in ``struct epoll_params`` and set on a specific epoll context using the ``EPIOCSPARAMS``
> +ioctl. See the above section for more details.
> +
> +It is important to note that choosing a large value for ``gro_flush_timeout``
> +will defer IRQs to allow for better batch processing, but will induce latency
> +when the system is not fully loaded. Choosing a small value for
> +``gro_flush_timeout`` can cause interference of the user application which is
> +attempting to busy poll by device IRQs and softirq processing. This value
> +should be chosen carefully with these tradeoffs in mind. epoll-based busy
> +polling applications may be able to mitigate how much user processing happens
> +by choosing an appropriate value for ``maxevents``.
> +
> +Users may want to consider an alternate approach, IRQ suspension, to help deal
> +with these tradeoffs.
> +
> +IRQ suspension
> +--------------
> +
> +IRQ suspension is a mechanism wherein device IRQs are masked while epoll
> +triggers NAPI packet processing.
> +
> +While application calls to epoll_wait successfully retrieve events, the kernel will
> +defer the IRQ suspension timer. If the kernel does not retrieve any events
> +while busy polling (for example, because network traffic levels subsided), IRQ
> +suspension is disabled and the IRQ mitigation strategies described above are
> +engaged.
> +
> +This allows users to balance CPU consumption with network processing
> +efficiency.
> +
> +To use this mechanism:
> +
> +  1. The per-NAPI config parameter ``irq_suspend_timeout`` should be set to the
> +     maximum time (in nanoseconds) the application can have its IRQs
> +     suspended. This is done using netlink, as described above. This timeout
> +     serves as a safety mechanism to restart IRQ driver interrupt processing if
> +     the application has stalled. This value should be chosen so that it covers
> +     the amount of time the user application needs to process data from its
> +     call to epoll_wait, noting that applications can control how much data
> +     they retrieve by setting ``max_events`` when calling epoll_wait.
> +
> +  2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
> +     and ``napi_defer_hard_irqs`` can be set to low values. They will be used
> +     to defer IRQs after busy poll has found no data.
> +
> +  3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
> +     the ``EPIOCSPARAMS`` ioctl as described above.
> +
> +  4. The application uses epoll as described above to trigger NAPI packet
> +     processing.
> +
> +As mentioned above, as long as subsequent calls to epoll_wait return events to
> +userland, the ``irq_suspend_timeout`` is deferred and IRQs are disabled. This
> +allows the application to process data without interference.
> +
> +Once a call to epoll_wait results in no events being found, IRQ suspension is
> +automatically disabled and the ``gro_flush_timeout`` and
> +``napi_defer_hard_irqs`` mitigation mechanisms take over.
> +
> +It is expected that ``irq_suspend_timeout`` will be set to a value much larger
> +than ``gro_flush_timeout`` as ``irq_suspend_timeout`` should suspend IRQs for
> +the duration of one userland processing cycle.
>  
>  .. _threaded:
>  

The rest LGTM, thanks!

Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>

-- 
An old man doll... just what I always wanted! - Clara

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v2 6/6] docs: networking: Describe irq suspension
  2024-10-21 10:49   ` Bagas Sanjaya
@ 2024-10-21 16:33     ` Joe Damato
  2024-10-25  1:03       ` Bagas Sanjaya
  0 siblings, 1 reply; 14+ messages in thread
From: Joe Damato @ 2024-10-21 16:33 UTC (permalink / raw)
  To: Bagas Sanjaya
  Cc: Linux Networking, namangulati, edumazet, amritha.nambiar,
	sridhar.samudrala, sdf, peter, m2shafiei, bjorn, hch, willy,
	willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Paolo Abeni, Jonathan Corbet,
	Linux Documentation, Linux Kernel Mailing List, Linux BPF

On Mon, Oct 21, 2024 at 05:49:14PM +0700, Bagas Sanjaya wrote:
> On Mon, Oct 21, 2024 at 01:53:01AM +0000, Joe Damato wrote:
> > diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
> > index dfa5d549be9c..3b43477a52ce 100644
> > --- a/Documentation/networking/napi.rst
> > +++ b/Documentation/networking/napi.rst
> > @@ -192,6 +192,28 @@ is reused to control the delay of the timer, while
> >  ``napi_defer_hard_irqs`` controls the number of consecutive empty polls
> >  before NAPI gives up and goes back to using hardware IRQs.
> >  
> > +The above parameters can also be set on a per-NAPI basis using netlink via
> > +netdev-genl. This can be done programmatically in a user application or by
> > +using a script included in the kernel source tree: ``tools/net/ynl/cli.py``.
> > +
> > +For example, using the script:
> > +
> > +.. code-block:: bash
> > +
> > +  $ kernel-source/tools/net/ynl/cli.py \
> > +            --spec Documentation/netlink/specs/netdev.yaml \
> > +            --do napi-set \
> > +            --json='{"id": 345,
> > +                     "defer-hard-irqs": 111,
> > +                     "gro-flush-timeout": 11111}'
> > +
> > +Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
> > +via netdev-genl. There is no global sysfs parameter for this value.
> 
> In JSON, both gro-flush-timeout and irq-suspend-timeout parameter
> names are written in hyphens; but the rest of the docs uses underscores
> (that is, gro_flush_timeout and irq_suspend_timeout), right?

That's right. The YAML specification uses hyphens throughout, so we
follow that convention there.

In the rest of the docs we use the name of the field which appears
in the code itself, which uses underscores.

> > +
> > +``irq_suspend_timeout`` is used to determine how long an application can
> > +completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
> > +which can be set on a per-epoll context basis with ``EPIOCSPARAMS`` ioctl.
> > +
> >  .. _poll:
> >  
> >  Busy polling
> > @@ -207,6 +229,46 @@ selected sockets or using the global ``net.core.busy_poll`` and
> >  ``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
> >  also exists.
> >  
> > +epoll-based busy polling
> > +------------------------
> > +
> > +It is possible to trigger packet processing directly from calls to
> > +``epoll_wait``. In order to use this feature, a user application must ensure
> > +all file descriptors which are added to an epoll context have the same NAPI ID.
> > +
> > +If the application uses a dedicated acceptor thread, the application can obtain
> > +the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
> > +distribute that file descriptor to a worker thread. The worker thread would add
> > +the file descriptor to its epoll context. This would ensure each worker thread
> > +has an epoll context with FDs that have the same NAPI ID.
> > +
> > +Alternatively, if the application uses SO_REUSEPORT, a bpf or ebpf program be
> > +inserted to distribute incoming connections to threads such that each thread is
> > +only given incoming connections with the same NAPI ID. Care must be taken to
> > +carefully handle cases where a system may have multiple NICs.
> > +
> > +In order to enable busy polling, there are two choices:
> > +
> > +1. ``/proc/sys/net/core/busy_poll`` can be set with a time in useconds to busy
> > +   loop waiting for events. This is a system-wide setting and will cause all
> > +   epoll-based applications to busy poll when they call epoll_wait. This may
> > +   not be desirable as many applications may not have the need to busy poll.
> > +
> > +2. Applications using recent kernels can issue an ioctl on the epoll context
> > +   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``) ``struct
> > +   epoll_params``:, which user programs can define as follows:
> > +
> > +.. code-block:: c
> > +
> > +  struct epoll_params {
> > +      uint32_t busy_poll_usecs;
> > +      uint16_t busy_poll_budget;
> > +      uint8_t prefer_busy_poll;
> > +
> > +      /* pad the struct to a multiple of 64bits */
> > +      uint8_t __pad;
> > +  };
> > +
> >  IRQ mitigation
> >  ---------------
> >  
> > @@ -222,12 +284,78 @@ Such applications can pledge to the kernel that they will perform a busy
> >  polling operation periodically, and the driver should keep the device IRQs
> >  permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
> >  socket option. To avoid system misbehavior the pledge is revoked
> > -if ``gro_flush_timeout`` passes without any busy poll call.
> > +if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
> > +busy polling applications, the ``prefer_busy_poll`` field of ``struct
> > +epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
> > +enable this mode. See the above section for more details.
> >  
> >  The NAPI budget for busy polling is lower than the default (which makes
> >  sense given the low latency intention of normal busy polling). This is
> >  not the case with IRQ mitigation, however, so the budget can be adjusted
> > -with the ``SO_BUSY_POLL_BUDGET`` socket option.
> > +with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
> > +applications, the ``busy_poll_budget`` field can be adjusted to the desired value
> > +in ``struct epoll_params`` and set on a specific epoll context using the ``EPIOCSPARAMS``
> > +ioctl. See the above section for more details.
> > +
> > +It is important to note that choosing a large value for ``gro_flush_timeout``
> > +will defer IRQs to allow for better batch processing, but will induce latency
> > +when the system is not fully loaded. Choosing a small value for
> > +``gro_flush_timeout`` can cause interference of the user application which is
> > +attempting to busy poll by device IRQs and softirq processing. This value
> > +should be chosen carefully with these tradeoffs in mind. epoll-based busy
> > +polling applications may be able to mitigate how much user processing happens
> > +by choosing an appropriate value for ``maxevents``.
> > +
> > +Users may want to consider an alternate approach, IRQ suspension, to help deal
> > +with these tradeoffs.
> > +
> > +IRQ suspension
> > +--------------
> > +
> > +IRQ suspension is a mechanism wherein device IRQs are masked while epoll
> > +triggers NAPI packet processing.
> > +
> > +While application calls to epoll_wait successfully retrieve events, the kernel will
> > +defer the IRQ suspension timer. If the kernel does not retrieve any events
> > +while busy polling (for example, because network traffic levels subsided), IRQ
> > +suspension is disabled and the IRQ mitigation strategies described above are
> > +engaged.
> > +
> > +This allows users to balance CPU consumption with network processing
> > +efficiency.
> > +
> > +To use this mechanism:
> > +
> > +  1. The per-NAPI config parameter ``irq_suspend_timeout`` should be set to the
> > +     maximum time (in nanoseconds) the application can have its IRQs
> > +     suspended. This is done using netlink, as described above. This timeout
> > +     serves as a safety mechanism to restart IRQ driver interrupt processing if
> > +     the application has stalled. This value should be chosen so that it covers
> > +     the amount of time the user application needs to process data from its
> > +     call to epoll_wait, noting that applications can control how much data
> > +     they retrieve by setting ``max_events`` when calling epoll_wait.
> > +
> > +  2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
> > +     and ``napi_defer_hard_irqs`` can be set to low values. They will be used
> > +     to defer IRQs after busy poll has found no data.
> > +
> > +  3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
> > +     the ``EPIOCSPARAMS`` ioctl as described above.
> > +
> > +  4. The application uses epoll as described above to trigger NAPI packet
> > +     processing.
> > +
> > +As mentioned above, as long as subsequent calls to epoll_wait return events to
> > +userland, the ``irq_suspend_timeout`` is deferred and IRQs are disabled. This
> > +allows the application to process data without interference.
> > +
> > +Once a call to epoll_wait results in no events being found, IRQ suspension is
> > +automatically disabled and the ``gro_flush_timeout`` and
> > +``napi_defer_hard_irqs`` mitigation mechanisms take over.
> > +
> > +It is expected that ``irq_suspend_timeout`` will be set to a value much larger
> > +than ``gro_flush_timeout`` as ``irq_suspend_timeout`` should suspend IRQs for
> > +the duration of one userland processing cycle.
> >  
> >  .. _threaded:
> >  
> 
> The rest LGTM, thanks!

Thanks for the review.

> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
> 
> -- 
> An old man doll... just what I always wanted! - Clara



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v2 0/6] Suspend IRQs during application busy periods
  2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
                   ` (5 preceding siblings ...)
  2024-10-21  1:53 ` [PATCH net-next v2 6/6] docs: networking: Describe irq suspension Joe Damato
@ 2024-10-24 23:05 ` Stanislav Fomichev
  2024-10-29 10:25 ` Paolo Abeni
  7 siblings, 0 replies; 14+ messages in thread
From: Stanislav Fomichev @ 2024-10-24 23:05 UTC (permalink / raw)
  To: Joe Damato
  Cc: netdev, namangulati, edumazet, amritha.nambiar, sridhar.samudrala,
	sdf, peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Alexander Lobakin, Alexander Viro,
	open list:BPF [MISC]:Keyword:(?:b|_)bpf(?:b|_), Christian Brauner,
	David S. Miller, Donald Hunter, Jan Kara, Jesper Dangaard Brouer,
	Jiri Pirko, Johannes Berg, Jonathan Corbet,
	open list:DOCUMENTATION,
	open list:FILESYSTEMS (VFS and infrastructure), open list,
	Lorenzo Bianconi, Martin Karsten, Mina Almasry, Paolo Abeni,
	Sebastian Andrzej Siewior, Xuan Zhuo

On 10/21, Joe Damato wrote:
> Greetings:
> 
> Welcome to v2, see changelog below.
> 
> This series introduces a new mechanism, IRQ suspension, which allows
> network applications using epoll to mask IRQs during periods of high
> traffic while also reducing tail latency (compared to existing
> mechanisms, see below) during periods of low traffic. In doing so, this
> balances CPU consumption with network processing efficiency.
> 
> Martin Karsten (CC'd) and I have been collaborating on this series for
> several months and have appreciated the feedback from the community on
> our RFC [1]. We've updated the cover letter and kernel documentation in
> an attempt to more clearly explain how this mechanism works, how
> applications can use it, and how it compares to existing mechanisms in
> the kernel. We've added an additional test case, 'fullbusy', achieved by
> modifying libevent for comparison. See below for a detailed description,
> link to the patch, and test results.
> 
> I briefly mentioned this idea at netdev conf 2024 (for those who were
> there) and Martin described this idea in an earlier paper presented at
> Sigmetrics 2024 [2].
> 
> ~ The short explanation (TL;DR)
> 
> We propose adding a new napi config parameter: irq_suspend_timeout to
> help balance CPU usage and network processing efficiency when using IRQ
> deferral and napi busy poll.
> 
> If this parameter is set to a non-zero value *and* a user application
> has enabled preferred busy poll on a busy poll context (via the
> EPIOCSPARAMS ioctl introduced in commit 18e2bf0edf4d ("eventpoll: Add
> epoll ioctl for epoll_params")), then application calls to epoll_wait
> for that context will cause device IRQs and softirq processing to be
> suspended as long as epoll_wait successfully retrieves data from the
> NAPI. Each time data is retrieved, the irq_suspend_timeout is deferred.
> 
> If/when network traffic subsides and epoll_wait returns no data, IRQ
> suspension is immediately reverted back to the existing
> napi_defer_hard_irqs and gro_flush_timeout mechanism which was
> introduced in commit 6f8b12d661d0 ("net: napi: add hard irqs deferral
> feature")).
> 
> The irq_suspend_timeout serves as a safety mechanism. If userland takes
> a long time processing data, irq_suspend_timeout will fire and restart
> normal NAPI processing.
> 
> For a more in depth explanation, please continue reading.
> 
> ~ Comparison with existing mechanisms
> 
> Interrupt mitigation can be accomplished in napi software, by setting
> napi_defer_hard_irqs and gro_flush_timeout, or via interrupt coalescing
> in the NIC. This can be quite efficient, but in both cases, a fixed
> timeout (or packet count) needs to be configured. However, a fixed
> timeout cannot effectively support both low- and high-load situations:
> 
> At low load, an application typically processes a few requests and then
> waits to receive more input data. In this scenario, a large timeout will
> cause unnecessary latency.
> 
> At high load, an application typically processes many requests before
> being ready to receive more input data. In this case, a small timeout
> will likely fire prematurely and trigger irq/softirq processing, which
> interferes with the application's execution. This causes overhead, most
> likely due to cache contention.
> 
> While NICs attempt to provide adaptive interrupt coalescing schemes,
> these cannot properly take into account application-level processing.
> 
> An alternative packet delivery mechanism is busy-polling, which results
> in perfect alignment of application processing and network polling. It
> delivers optimal performance (throughput and latency), but results in
> 100% cpu utilization and is thus inefficient for below-capacity
> workloads.
> 
> We propose to add a new packet delivery mode that properly alternates
> between busy polling and interrupt-based delivery depending on busy and
> idle periods of the application. During a busy period, the system
> operates in busy-polling mode, which avoids interference. During an idle
> period, the system falls back to interrupt deferral, but with a small
> timeout to avoid excessive latencies. This delivery mode can also be
> viewed as an extension of basic interrupt deferral, but alternating
> between a small and a very large timeout.
> 
> This delivery mode is efficient, because it avoids softirq execution
> interfering with application processing during busy periods. It can be
> used with blocking epoll_wait to conserve cpu cycles during idle
> periods. The effect of alternating between busy and idle periods is that
> performance (throughput and latency) is very close to full busy polling,
> while cpu utilization is lower and very close to interrupt mitigation.
> 
> ~ Usage details
> 
> IRQ suspension is introduced via a per-NAPI configuration parameter that
> controls the maximum time that IRQs can be suspended.
> 
> Here's how it is intended to work:
>   - The user application (or system administrator) uses the netdev-genl
>     netlink interface to set the pre-existing napi_defer_hard_irqs and
>     gro_flush_timeout NAPI config parameters to enable IRQ deferral.
> 
>   - The user application (or system administrator) sets the proposed
>     irq_suspend_timeout parameter via the netdev-genl netlink interface
>     to a larger value than gro_flush_timeout to enable IRQ suspension.
> 
>   - The user application issues the existing epoll ioctl to set the
>     prefer_busy_poll flag on the epoll context.
> 
>   - The user application then calls epoll_wait to busy poll for network
>     events, as it normally would.
> 
>   - If epoll_wait returns events to userland, IRQs are suspended for the
>     duration of irq_suspend_timeout.
> 
>   - If epoll_wait finds no events and the thread is about to go to
>     sleep, IRQ handling using napi_defer_hard_irqs and gro_flush_timeout
>     is resumed.
> 
> As long as epoll_wait is retrieving events, IRQs (and softirq
> processing) for the NAPI being polled remain disabled. When network
> traffic reduces, eventually a busy poll loop in the kernel will retrieve
> no data. When this occurs, regular IRQ deferral using gro_flush_timeout
> for the polled NAPI is re-enabled.
> 
> Unless IRQ suspension is continued by subsequent calls to epoll_wait, it
> automatically times out after the irq_suspend_timeout timer expires.
> Regular deferral is also immediately re-enabled when the epoll context
> is destroyed.
> 
> ~ Usage scenario
> 
> The target scenario for IRQ suspension as packet delivery mode is a
> system that runs a dominant application with substantial network I/O.
> The target application can be configured to receive input data up to a
> certain batch size (via epoll_wait maxevents parameter) and this batch
> size determines the worst-case latency that application requests might
> experience. Because packet delivery is suspended during the target
> application's processing, the batch size also determines the worst-case
> latency of concurrent applications using the same RX queue(s).
> 
> gro_flush_timeout should be set as small as possible, but large enough to
> make sure that a single request is likely not being interfered with.
> 
> irq_suspend_timeout is largely a safety mechanism against misbehaving
> applications. It should be set large enough to cover the processing of an
> entire application batch, i.e., the factor between gro_flush_timeout and
> irq_suspend_timeout should roughly correspond to the maximum batch size
> that the target application would process in one go.
> 
> ~ Design rationale
> 
> The implementation of the IRQ suspension mechanism very nicely dovetails
> with the existing mechanism for IRQ deferral when preferred busy poll is
> enabled (introduced in commit 7fd3253a7de6 ("net: Introduce preferred
> busy-polling"), see that commit message for more details).
> 
> While it would be possible to inject the suspend timeout via
> the existing epoll ioctl, it is more natural to avoid this path for one
> main reason:
> 
>   An epoll context is linked to NAPI IDs as file descriptors are added;
>   this means any epoll context might suddenly be associated with a
>   different net_device if the application were to replace all existing
>   fds with fds from a different device. In this case, the scope of the
>   suspend timeout becomes unclear and many edge cases for both the user
>   application and the kernel are introduced
> 
> Only a single iteration through napi busy polling is needed for this
> mechanism to work effectively. Since an important objective for this
> mechanism is preserving cpu cycles, exactly one iteration of the napi
> busy loop is invoked when busy_poll_usecs is set to 0.
> 
> ~ Important call outs in the implementation
> 
>   - Enabling per epoll-context preferred busy poll will now effectively
>     lead to a nonblocking iteration through napi_busy_loop, even when
>     busy_poll_usecs is 0. See patch 4.
> 
>   - Patches apply cleanly on commit 160a810b2a85 ("net: vxlan: update
>     the document for vxlan_snoop()").
> 
> ~ Benchmark configs & descriptions
> 
> The changes were benchmarked with memcached [3] using the benchmarking
> tool mutilate [4].
> 
> To facilitate benchmarking, a small patch [5] was applied to memcached
> 1.6.29 to allow setting per-epoll context preferred busy poll and other
> settings via environment variables. Another small patch [6] was applied
> to libevent to enable full busy-polling.
> 
> Multiple scenarios were benchmarked as described below and the scripts
> used for producing these results can be found on github [7] (note: all
> scenarios use NAPI-based traffic splitting via SO_INCOMING_ID by passing
> -N to memcached):
> 
>   - base:
>     - no other options enabled
>   - deferX:
>     - set defer_hard_irqs to 100
>     - set gro_flush_timeout to X,000
>   - napibusy:
>     - set defer_hard_irqs to 100
>     - set gro_flush_timeout to 200,000
>     - enable busy poll via the existing ioctl (busy_poll_usecs = 64,
>       busy_poll_budget = 64, prefer_busy_poll = true)
>   - fullbusy:
>     - set defer_hard_irqs to 100
>     - set gro_flush_timeout to 5,000,000
>     - enable busy poll via the existing ioctl (busy_poll_usecs = 1000,
>       busy_poll_budget = 64, prefer_busy_poll = true)
>     - change memcached's nonblocking epoll_wait invocation (via
>       libevent) to using a 1 ms timeout
>   - suspendX:
>     - set defer_hard_irqs to 100
>     - set gro_flush_timeout to X,000
>     - set irq_suspend_timeout to 20,000,000
>     - enable busy poll via the existing ioctl (busy_poll_usecs = 0,
>       busy_poll_budget = 64, prefer_busy_poll = true)
> 
> ~ Benchmark results
> 
> Tested on:
> 
> Single socket AMD EPYC 7662 64-Core Processor
> Hyperthreading disabled
> 4 NUMA Zones (NPS=4)
> 16 CPUs per NUMA zone (64 cores total)
> 2 x Dual port 100gbps Mellanox Technologies ConnectX-5 Ex EN NIC
> 
> The test machine is configured such that a single interface has 8 RX
> queues. The queues' IRQs and memcached are pinned to CPUs that are
> NUMA-local to the interface which is under test. The NIC's interrupt
> coalescing configuration is left at boot-time defaults.
> 
> Results:
> 
> Results are shown below. The mechanism added by this series is
> represented by the 'suspend' cases. Data presented shows a summary over
> at least 10 runs of each test case [8] using the scripts on github [7].
> For latency, the median is shown. For throughput and CPU utilization,
> the average is shown.
> 
> The results also include cycles-per-query (cpq) and
> instruction-per-query (ipq) metrics, following the methodology proposed
> in [2], to augment the CPU utilization numbers, which could be skewed
> due to frequency scaling. We find that this does not appear to be the
> case as CPU utilization and low-level metrics show similar trends.
> 
> These results were captured using the scripts on github [7] to
> illustrate how this approach compares with other pre-existing
> mechanisms. This data is not to be interpreted as scientific data
> captured in a fully isolated lab setting, but instead as best effort,
> illustrative information comparing and contrasting tradeoffs.
> 
> The absolute QPS results are higher than our previous submission, but
> the relative differences between variants are equivalent. Because the
> patches have been rebased on 6.12, several factors have likely
> influenced the overall performance. Most importantly, we had to switch
> to a new set of basic kernel options, which has likely altered the
> baseline performance. Because the overall comparison of variants still
> holds, we have not attempted to recreate the exact set of kernel options
> from the previous submission.
> 
> Compare:
> - Throughput (MAX) and latencies of base vs suspend.
> - CPU usage of napibusy and fullbusy during lower load (200K, 400K for
>   example) vs suspend.
> - Latency of the defer variants vs suspend as timeout and load
>   increases.
> 
> The overall takeaway is that the suspend variants provide a superior
> combination of high throughput, low latency, and low cpu utilization
> compared to all other variants. Each of the suspend variants works very
> well, but some fine-tuning between latency and cpu utilization is still
> possible by tuning the small timeout (gro_flush_timeout).
> 
> Note: we've reorganized the results to make comparison among testcases
> with the same load easier.
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base  200K  200024     127     254     458      25   12748   11289
>    defer10  200K  199991      64     128     166      27   18763   16574
>    defer20  200K  199986      72     135     178      25   15405   14173
>    defer50  200K  200025      91     149     198      23   12275   12203
>   defer200  200K  199996     182     266     326      18    8595    9183
>   fullbusy  200K  200040      58     123     167     100   43641   23145
>   napibusy  200K  200009     115     244     299      56   24797   24693
>  suspend10  200K  200005      63     128     167      32   19559   17240
>  suspend20  200K  199952      69     132     170      29   16324   14838
>  suspend50  200K  200019      84     144     189      26   13106   12516
> suspend200  200K  199978     168     264     326      20    9331    9643
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base  400K  400017     157     292     762      39    9287    9325
>    defer10  400K  400033      71     141     204      53   13950   12943
>    defer20  400K  399935      79     150     212      47   12027   11673
>    defer50  400K  399888     101     171     231      39    9556    9921
>   defer200  400K  399993     200     287     358      32    7428    8576
>   fullbusy  400K  400018      63     132     203     100   21827   16062
>   napibusy  400K  399970      89     230     292      83   18156   16508
>  suspend10  400K  400061      69     139     202      54   13576   13057
>  suspend20  400K  399988      73     144     206      49   11930   11773
>  suspend50  400K  399975      88     161     218      42    9996   10270
> suspend200  400K  399954     172     276     353      34    7847    8713
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base  600K  600031     166     289     631      61    9188    8787
>    defer10  600K  599967      85     167     262      75   11833   10947
>    defer20  600K  599888      89     165     243      66   10513   10362
>    defer50  600K  600072     109     185     253      55    8664    9190
>   defer200  600K  599951     222     315     393      45    6892    8213
>   fullbusy  600K  600041      69     145     227     100   14549   13936
>   napibusy  600K  599980      79     188     280      96   13927   14155
>  suspend10  600K  600028      78     159     267      69   10877   11032
>  suspend20  600K  600026      81     159     254      64    9922   10320
>  suspend50  600K  600007      96     178     258      57    8681    9331
> suspend200  600K  599964     177     295     369      47    7115    8366
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base  800K  800034     198     329     698      84    9366    8338
>    defer10  800K  799718     243     642    1457      95   10532    9007
>    defer20  800K  800009     132     245     399      89    9956    8979
>    defer50  800K  800024     136     228     378      80    9002    8598
>   defer200  800K  799965     255     362     473      66    7481    8147
>   fullbusy  800K  799927      78     157     253     100   10915   12533
>   napibusy  800K  799870      81     173     273      99   10826   12532
>  suspend10  800K  799991      84     167     269      83    9380    9802
>  suspend20  800K  799979      90     172     290      78    8765    9404
>  suspend50  800K  800031     106     191     307      71    7945    8805
> suspend200  800K  799905     182     307     411      62    6985    8242
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base 1000K  919543    3805    6390   14229      98    9324    7978
>    defer10 1000K  850751    4574    7382   15370      99   10218    8470
>    defer20 1000K  890296    4736    6862   14858      99    9708    8277
>    defer50 1000K  932694    3463    6180   13251      97    9148    8053
>   defer200 1000K  951311    3524    6052   13599      96    8875    7845
>   fullbusy 1000K 1000011      90     181     278     100    8731   10686
>   napibusy 1000K 1000050      93     184     280     100    8721   10547
>  suspend10 1000K  999962     101     193     306      92    8138    8980
>  suspend20 1000K 1000030     103     191     324      88    7844    8763
>  suspend50 1000K 1000001     114     202     320      83    7396    8431
> suspend200 1000K  999965     185     314     428      76    6733    8072
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base   MAX 1005592    4651    6594   14979     100    8679    7918
>    defer10   MAX  928204    5106    7286   15199     100    9398    8380
>    defer20   MAX  984663    4774    6518   14920     100    8861    8063
>    defer50   MAX 1044099    4431    6368   14652     100    8350    7948
>   defer200   MAX 1040451    4423    6610   14674     100    8380    7931
>   fullbusy   MAX 1236608    3715    3987   12805     100    7051    7936
>   napibusy   MAX 1077516    4345   10155   15957     100    8080    7842
>  suspend10   MAX 1218344    3760    3990   12585     100    7150    7935
>  suspend20   MAX 1220056    3752    4053   12602     100    7150    7961
>  suspend50   MAX 1213666    3791    4103   12919     100    7183    7959
> suspend200   MAX 1217411    3768    3988   12863     100    7161    7954
> 
> ~ FAQ
> 
>   - Can the new timeout value be threaded through the new epoll ioctl ?
> 
>     Only with difficulty. The epoll ioctl sets options on an epoll
>     context and the NAPI ID associated with an epoll context can change
>     based on what file descriptors a user app adds to the epoll context.
>     This would introduce complexity in the API from the user perspective
>     and also complexity in the kernel.
> 
>   - Can irq suspend be built by combining NIC coalescing and
>     gro_flush_timeout ?
> 
>     No. The problem is that the long timeout must engage if and only if
>     prefer-busy is active.
> 
>     When using NIC coalescing for the short timeout (without
>     napi_defer_hard_irqs/gro_flush_timeout), an interrupt after an idle
>     period will trigger softirq, which will run napi polling. At this
>     point, prefer-busy is not active, so NIC interrupts would be
>     re-enabled. Then it is not possible for the longer timeout to
>     interject to switch control back to polling. In other words, only by
>     using the software timer for the short timeout, it is possible to
>     extend the timeout without having to reprogram the NIC timer or
>     reach down directly and disable interrupts.
> 
>     Using gro_flush_timeout for the long timeout also has problems, for
>     the same underlying reason. In the current napi implementation,
>     gro_flush_timeout is not tied to prefer-busy. We'd either have to
>     change that and in the process modify the existing deferral
>     mechanism, or introduce a state variable to determine whether
>     gro_flush_timeout is used as long timeout for irq suspend or whether
>     it is used for its default purpose. In an earlier version, we did
>     try something similar to the latter and made it work, but it ends up
>     being a lot more convoluted than our current proposal.
> 
>   - Isn't it already possible to combine busy looping with irq deferral?
> 
>     Yes, in fact enabling irq deferral via napi_defer_hard_irqs and
>     gro_flush_timeout is a precondition for prefer_busy_poll to have an
>     effect. If the application also uses a tight busy loop with
>     essentially nonblocking epoll_wait (accomplished with a very short
>     timeout parameter), this is the fullbusy case shown in the results.
>     An application using blocking epoll_wait is shown as the napibusy
>     case in the result. It's a hybrid approach that provides limited
>     latency benefits compared to the base case and plain irq deferral,
>     but not as good as fullbusy or suspend.
> 
> ~ Special thanks
> 
> Several people were involved in earlier stages of the development of this
> mechanism whom we'd like to thank:
> 
>   - Peter Cai (CC'd), for the initial kernel patch and his contributions
>     to the paper.
>     
>   - Mohammadamin Shafie (CC'd), for testing various versions of the kernel
>     patch and providing helpful feedback.
> 
> Thanks,
> Martin and Joe
> 
> [1]: https://lore.kernel.org/netdev/20240812125717.413108-1-jdamato@fastly.com/
> [2]: https://doi.org/10.1145/3626780
> [3]: https://github.com/memcached/memcached/blob/master/doc/napi_ids.txt
> [4]: https://github.com/leverich/mutilate
> [5]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/memcached.patch
> [6]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/libevent.patch
> [7]: https://github.com/martinkarsten/irqsuspend
> [8]: https://github.com/martinkarsten/irqsuspend/tree/main/results
> 
> v2:
>   - Cover letter updated, including a re-run of test data.
>   - Patch 1 rewritten to use netdev-genl instead of sysfs.
>   - Patch 3 updated with a comment added to napi_resume_irqs.
>   - Patch 4 rebased to apply now that commit b9ca079dd6b0 ("eventpoll:
>     Annotate data-race of busy_poll_usecs") has been picked up from VFS.
>   - Patch 6 updated the kernel documentation.

The series looks good, thank you both for pushing this!

Acked-by: Stanislav Fomichev <sdf@fomichev.me>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v2 6/6] docs: networking: Describe irq suspension
  2024-10-21 16:33     ` Joe Damato
@ 2024-10-25  1:03       ` Bagas Sanjaya
  0 siblings, 0 replies; 14+ messages in thread
From: Bagas Sanjaya @ 2024-10-25  1:03 UTC (permalink / raw)
  To: Joe Damato, Linux Networking, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Paolo Abeni, Jonathan Corbet,
	Linux Documentation, Linux Kernel Mailing List, Linux BPF

[-- Attachment #1: Type: text/plain, Size: 2002 bytes --]

On Mon, Oct 21, 2024 at 09:33:21AM -0700, Joe Damato wrote:
> On Mon, Oct 21, 2024 at 05:49:14PM +0700, Bagas Sanjaya wrote:
> > On Mon, Oct 21, 2024 at 01:53:01AM +0000, Joe Damato wrote:
> > > diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
> > > index dfa5d549be9c..3b43477a52ce 100644
> > > --- a/Documentation/networking/napi.rst
> > > +++ b/Documentation/networking/napi.rst
> > > @@ -192,6 +192,28 @@ is reused to control the delay of the timer, while
> > >  ``napi_defer_hard_irqs`` controls the number of consecutive empty polls
> > >  before NAPI gives up and goes back to using hardware IRQs.
> > >  
> > > +The above parameters can also be set on a per-NAPI basis using netlink via
> > > +netdev-genl. This can be done programmatically in a user application or by
> > > +using a script included in the kernel source tree: ``tools/net/ynl/cli.py``.
> > > +
> > > +For example, using the script:
> > > +
> > > +.. code-block:: bash
> > > +
> > > +  $ kernel-source/tools/net/ynl/cli.py \
> > > +            --spec Documentation/netlink/specs/netdev.yaml \
> > > +            --do napi-set \
> > > +            --json='{"id": 345,
> > > +                     "defer-hard-irqs": 111,
> > > +                     "gro-flush-timeout": 11111}'
> > > +
> > > +Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
> > > +via netdev-genl. There is no global sysfs parameter for this value.
> > 
> > In JSON, both gro-flush-timeout and irq-suspend-timeout parameter
> > names are written in hyphens; but the rest of the docs uses underscores
> > (that is, gro_flush_timeout and irq_suspend_timeout), right?
> 
> That's right. The YAML specification uses hyphens throughout, so we
> follow that convention there.
> 
> In the rest of the docs we use the name of the field which appears
> in the code itself, which uses underscores.

OK, thanks!

-- 
An old man doll... just what I always wanted! - Clara

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v2 0/6] Suspend IRQs during application busy periods
  2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
                   ` (6 preceding siblings ...)
  2024-10-24 23:05 ` [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Stanislav Fomichev
@ 2024-10-29 10:25 ` Paolo Abeni
  2024-10-29 15:03   ` Joe Damato
  2024-10-29 17:53   ` Joe Damato
  7 siblings, 2 replies; 14+ messages in thread
From: Paolo Abeni @ 2024-10-29 10:25 UTC (permalink / raw)
  To: Joe Damato, netdev
  Cc: namangulati, edumazet, amritha.nambiar, sridhar.samudrala, sdf,
	peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Alexander Lobakin, Alexander Viro,
	open list:BPF [MISC] :Keyword:(?:b|_)bpf(?:b|_),
	Christian Brauner, David S. Miller, Donald Hunter, Jan Kara,
	Jesper Dangaard Brouer, Jiri Pirko, Johannes Berg,
	Jonathan Corbet, open list:DOCUMENTATION,
	open list:FILESYSTEMS (VFS and infrastructure), open list,
	Lorenzo Bianconi, Martin Karsten, Mina Almasry,
	Sebastian Andrzej Siewior, Xuan Zhuo

On 10/21/24 03:52, Joe Damato wrote:
> Greetings:
> 
> Welcome to v2, see changelog below.
> 
> This series introduces a new mechanism, IRQ suspension, which allows
> network applications using epoll to mask IRQs during periods of high
> traffic while also reducing tail latency (compared to existing
> mechanisms, see below) during periods of low traffic. In doing so, this
> balances CPU consumption with network processing efficiency.
> 
> Martin Karsten (CC'd) and I have been collaborating on this series for
> several months and have appreciated the feedback from the community on
> our RFC [1]. We've updated the cover letter and kernel documentation in
> an attempt to more clearly explain how this mechanism works, how
> applications can use it, and how it compares to existing mechanisms in
> the kernel. We've added an additional test case, 'fullbusy', achieved by
> modifying libevent for comparison. See below for a detailed description,
> link to the patch, and test results.
> 
> I briefly mentioned this idea at netdev conf 2024 (for those who were
> there) and Martin described this idea in an earlier paper presented at
> Sigmetrics 2024 [2].
> 
> ~ The short explanation (TL;DR)
> 
> We propose adding a new napi config parameter: irq_suspend_timeout to
> help balance CPU usage and network processing efficiency when using IRQ
> deferral and napi busy poll.
> 
> If this parameter is set to a non-zero value *and* a user application
> has enabled preferred busy poll on a busy poll context (via the
> EPIOCSPARAMS ioctl introduced in commit 18e2bf0edf4d ("eventpoll: Add
> epoll ioctl for epoll_params")), then application calls to epoll_wait
> for that context will cause device IRQs and softirq processing to be
> suspended as long as epoll_wait successfully retrieves data from the
> NAPI. Each time data is retrieved, the irq_suspend_timeout is deferred.
> 
> If/when network traffic subsides and epoll_wait returns no data, IRQ
> suspension is immediately reverted back to the existing
> napi_defer_hard_irqs and gro_flush_timeout mechanism which was
> introduced in commit 6f8b12d661d0 ("net: napi: add hard irqs deferral
> feature")).
> 
> The irq_suspend_timeout serves as a safety mechanism. If userland takes
> a long time processing data, irq_suspend_timeout will fire and restart
> normal NAPI processing.
> 
> For a more in depth explanation, please continue reading.
> 
> ~ Comparison with existing mechanisms
> 
> Interrupt mitigation can be accomplished in napi software, by setting
> napi_defer_hard_irqs and gro_flush_timeout, or via interrupt coalescing
> in the NIC. This can be quite efficient, but in both cases, a fixed
> timeout (or packet count) needs to be configured. However, a fixed
> timeout cannot effectively support both low- and high-load situations:
> 
> At low load, an application typically processes a few requests and then
> waits to receive more input data. In this scenario, a large timeout will
> cause unnecessary latency.
> 
> At high load, an application typically processes many requests before
> being ready to receive more input data. In this case, a small timeout
> will likely fire prematurely and trigger irq/softirq processing, which
> interferes with the application's execution. This causes overhead, most
> likely due to cache contention.
> 
> While NICs attempt to provide adaptive interrupt coalescing schemes,
> these cannot properly take into account application-level processing.
> 
> An alternative packet delivery mechanism is busy-polling, which results
> in perfect alignment of application processing and network polling. It
> delivers optimal performance (throughput and latency), but results in
> 100% cpu utilization and is thus inefficient for below-capacity
> workloads.
> 
> We propose to add a new packet delivery mode that properly alternates
> between busy polling and interrupt-based delivery depending on busy and
> idle periods of the application. During a busy period, the system
> operates in busy-polling mode, which avoids interference. During an idle
> period, the system falls back to interrupt deferral, but with a small
> timeout to avoid excessive latencies. This delivery mode can also be
> viewed as an extension of basic interrupt deferral, but alternating
> between a small and a very large timeout.
> 
> This delivery mode is efficient, because it avoids softirq execution
> interfering with application processing during busy periods. It can be
> used with blocking epoll_wait to conserve cpu cycles during idle
> periods. The effect of alternating between busy and idle periods is that
> performance (throughput and latency) is very close to full busy polling,
> while cpu utilization is lower and very close to interrupt mitigation.
> 
> ~ Usage details
> 
> IRQ suspension is introduced via a per-NAPI configuration parameter that
> controls the maximum time that IRQs can be suspended.
> 
> Here's how it is intended to work:
>   - The user application (or system administrator) uses the netdev-genl
>     netlink interface to set the pre-existing napi_defer_hard_irqs and
>     gro_flush_timeout NAPI config parameters to enable IRQ deferral.
> 
>   - The user application (or system administrator) sets the proposed
>     irq_suspend_timeout parameter via the netdev-genl netlink interface
>     to a larger value than gro_flush_timeout to enable IRQ suspension.
> 
>   - The user application issues the existing epoll ioctl to set the
>     prefer_busy_poll flag on the epoll context.
> 
>   - The user application then calls epoll_wait to busy poll for network
>     events, as it normally would.
> 
>   - If epoll_wait returns events to userland, IRQs are suspended for the
>     duration of irq_suspend_timeout.
> 
>   - If epoll_wait finds no events and the thread is about to go to
>     sleep, IRQ handling using napi_defer_hard_irqs and gro_flush_timeout
>     is resumed.
> 
> As long as epoll_wait is retrieving events, IRQs (and softirq
> processing) for the NAPI being polled remain disabled. When network
> traffic reduces, eventually a busy poll loop in the kernel will retrieve
> no data. When this occurs, regular IRQ deferral using gro_flush_timeout
> for the polled NAPI is re-enabled.
> 
> Unless IRQ suspension is continued by subsequent calls to epoll_wait, it
> automatically times out after the irq_suspend_timeout timer expires.
> Regular deferral is also immediately re-enabled when the epoll context
> is destroyed.
> 
> ~ Usage scenario
> 
> The target scenario for IRQ suspension as packet delivery mode is a
> system that runs a dominant application with substantial network I/O.
> The target application can be configured to receive input data up to a
> certain batch size (via epoll_wait maxevents parameter) and this batch
> size determines the worst-case latency that application requests might
> experience. Because packet delivery is suspended during the target
> application's processing, the batch size also determines the worst-case
> latency of concurrent applications using the same RX queue(s).
> 
> gro_flush_timeout should be set as small as possible, but large enough to
> make sure that a single request is likely not being interfered with.
> 
> irq_suspend_timeout is largely a safety mechanism against misbehaving
> applications. It should be set large enough to cover the processing of an
> entire application batch, i.e., the factor between gro_flush_timeout and
> irq_suspend_timeout should roughly correspond to the maximum batch size
> that the target application would process in one go.
> 
> ~ Design rationale
> 
> The implementation of the IRQ suspension mechanism very nicely dovetails
> with the existing mechanism for IRQ deferral when preferred busy poll is
> enabled (introduced in commit 7fd3253a7de6 ("net: Introduce preferred
> busy-polling"), see that commit message for more details).
> 
> While it would be possible to inject the suspend timeout via
> the existing epoll ioctl, it is more natural to avoid this path for one
> main reason:
> 
>   An epoll context is linked to NAPI IDs as file descriptors are added;
>   this means any epoll context might suddenly be associated with a
>   different net_device if the application were to replace all existing
>   fds with fds from a different device. In this case, the scope of the
>   suspend timeout becomes unclear and many edge cases for both the user
>   application and the kernel are introduced
> 
> Only a single iteration through napi busy polling is needed for this
> mechanism to work effectively. Since an important objective for this
> mechanism is preserving cpu cycles, exactly one iteration of the napi
> busy loop is invoked when busy_poll_usecs is set to 0.
> 
> ~ Important call outs in the implementation
> 
>   - Enabling per epoll-context preferred busy poll will now effectively
>     lead to a nonblocking iteration through napi_busy_loop, even when
>     busy_poll_usecs is 0. See patch 4.
> 
>   - Patches apply cleanly on commit 160a810b2a85 ("net: vxlan: update
>     the document for vxlan_snoop()").
> 
> ~ Benchmark configs & descriptions
> 
> The changes were benchmarked with memcached [3] using the benchmarking
> tool mutilate [4].
> 
> To facilitate benchmarking, a small patch [5] was applied to memcached
> 1.6.29 to allow setting per-epoll context preferred busy poll and other
> settings via environment variables. Another small patch [6] was applied
> to libevent to enable full busy-polling.
> 
> Multiple scenarios were benchmarked as described below and the scripts
> used for producing these results can be found on github [7] (note: all
> scenarios use NAPI-based traffic splitting via SO_INCOMING_ID by passing
> -N to memcached):
> 
>   - base:
>     - no other options enabled
>   - deferX:
>     - set defer_hard_irqs to 100
>     - set gro_flush_timeout to X,000
>   - napibusy:
>     - set defer_hard_irqs to 100
>     - set gro_flush_timeout to 200,000
>     - enable busy poll via the existing ioctl (busy_poll_usecs = 64,
>       busy_poll_budget = 64, prefer_busy_poll = true)
>   - fullbusy:
>     - set defer_hard_irqs to 100
>     - set gro_flush_timeout to 5,000,000
>     - enable busy poll via the existing ioctl (busy_poll_usecs = 1000,
>       busy_poll_budget = 64, prefer_busy_poll = true)
>     - change memcached's nonblocking epoll_wait invocation (via
>       libevent) to using a 1 ms timeout
>   - suspendX:
>     - set defer_hard_irqs to 100
>     - set gro_flush_timeout to X,000
>     - set irq_suspend_timeout to 20,000,000
>     - enable busy poll via the existing ioctl (busy_poll_usecs = 0,
>       busy_poll_budget = 64, prefer_busy_poll = true)
> 
> ~ Benchmark results
> 
> Tested on:
> 
> Single socket AMD EPYC 7662 64-Core Processor
> Hyperthreading disabled
> 4 NUMA Zones (NPS=4)
> 16 CPUs per NUMA zone (64 cores total)
> 2 x Dual port 100gbps Mellanox Technologies ConnectX-5 Ex EN NIC
> 
> The test machine is configured such that a single interface has 8 RX
> queues. The queues' IRQs and memcached are pinned to CPUs that are
> NUMA-local to the interface which is under test. The NIC's interrupt
> coalescing configuration is left at boot-time defaults.
> 
> Results:
> 
> Results are shown below. The mechanism added by this series is
> represented by the 'suspend' cases. Data presented shows a summary over
> at least 10 runs of each test case [8] using the scripts on github [7].
> For latency, the median is shown. For throughput and CPU utilization,
> the average is shown.
> 
> The results also include cycles-per-query (cpq) and
> instruction-per-query (ipq) metrics, following the methodology proposed
> in [2], to augment the CPU utilization numbers, which could be skewed
> due to frequency scaling. We find that this does not appear to be the
> case as CPU utilization and low-level metrics show similar trends.
> 
> These results were captured using the scripts on github [7] to
> illustrate how this approach compares with other pre-existing
> mechanisms. This data is not to be interpreted as scientific data
> captured in a fully isolated lab setting, but instead as best effort,
> illustrative information comparing and contrasting tradeoffs.
> 
> The absolute QPS results are higher than our previous submission, but
> the relative differences between variants are equivalent. Because the
> patches have been rebased on 6.12, several factors have likely
> influenced the overall performance. Most importantly, we had to switch
> to a new set of basic kernel options, which has likely altered the
> baseline performance. Because the overall comparison of variants still
> holds, we have not attempted to recreate the exact set of kernel options
> from the previous submission.
> 
> Compare:
> - Throughput (MAX) and latencies of base vs suspend.
> - CPU usage of napibusy and fullbusy during lower load (200K, 400K for
>   example) vs suspend.
> - Latency of the defer variants vs suspend as timeout and load
>   increases.
> 
> The overall takeaway is that the suspend variants provide a superior
> combination of high throughput, low latency, and low cpu utilization
> compared to all other variants. Each of the suspend variants works very
> well, but some fine-tuning between latency and cpu utilization is still
> possible by tuning the small timeout (gro_flush_timeout).
> 
> Note: we've reorganized the results to make comparison among testcases
> with the same load easier.
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base  200K  200024     127     254     458      25   12748   11289
>    defer10  200K  199991      64     128     166      27   18763   16574
>    defer20  200K  199986      72     135     178      25   15405   14173
>    defer50  200K  200025      91     149     198      23   12275   12203
>   defer200  200K  199996     182     266     326      18    8595    9183
>   fullbusy  200K  200040      58     123     167     100   43641   23145
>   napibusy  200K  200009     115     244     299      56   24797   24693
>  suspend10  200K  200005      63     128     167      32   19559   17240
>  suspend20  200K  199952      69     132     170      29   16324   14838
>  suspend50  200K  200019      84     144     189      26   13106   12516
> suspend200  200K  199978     168     264     326      20    9331    9643
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base  400K  400017     157     292     762      39    9287    9325
>    defer10  400K  400033      71     141     204      53   13950   12943
>    defer20  400K  399935      79     150     212      47   12027   11673
>    defer50  400K  399888     101     171     231      39    9556    9921
>   defer200  400K  399993     200     287     358      32    7428    8576
>   fullbusy  400K  400018      63     132     203     100   21827   16062
>   napibusy  400K  399970      89     230     292      83   18156   16508
>  suspend10  400K  400061      69     139     202      54   13576   13057
>  suspend20  400K  399988      73     144     206      49   11930   11773
>  suspend50  400K  399975      88     161     218      42    9996   10270
> suspend200  400K  399954     172     276     353      34    7847    8713
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base  600K  600031     166     289     631      61    9188    8787
>    defer10  600K  599967      85     167     262      75   11833   10947
>    defer20  600K  599888      89     165     243      66   10513   10362
>    defer50  600K  600072     109     185     253      55    8664    9190
>   defer200  600K  599951     222     315     393      45    6892    8213
>   fullbusy  600K  600041      69     145     227     100   14549   13936
>   napibusy  600K  599980      79     188     280      96   13927   14155
>  suspend10  600K  600028      78     159     267      69   10877   11032
>  suspend20  600K  600026      81     159     254      64    9922   10320
>  suspend50  600K  600007      96     178     258      57    8681    9331
> suspend200  600K  599964     177     295     369      47    7115    8366
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base  800K  800034     198     329     698      84    9366    8338
>    defer10  800K  799718     243     642    1457      95   10532    9007
>    defer20  800K  800009     132     245     399      89    9956    8979
>    defer50  800K  800024     136     228     378      80    9002    8598
>   defer200  800K  799965     255     362     473      66    7481    8147
>   fullbusy  800K  799927      78     157     253     100   10915   12533
>   napibusy  800K  799870      81     173     273      99   10826   12532
>  suspend10  800K  799991      84     167     269      83    9380    9802
>  suspend20  800K  799979      90     172     290      78    8765    9404
>  suspend50  800K  800031     106     191     307      71    7945    8805
> suspend200  800K  799905     182     307     411      62    6985    8242
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base 1000K  919543    3805    6390   14229      98    9324    7978
>    defer10 1000K  850751    4574    7382   15370      99   10218    8470
>    defer20 1000K  890296    4736    6862   14858      99    9708    8277
>    defer50 1000K  932694    3463    6180   13251      97    9148    8053
>   defer200 1000K  951311    3524    6052   13599      96    8875    7845
>   fullbusy 1000K 1000011      90     181     278     100    8731   10686
>   napibusy 1000K 1000050      93     184     280     100    8721   10547
>  suspend10 1000K  999962     101     193     306      92    8138    8980
>  suspend20 1000K 1000030     103     191     324      88    7844    8763
>  suspend50 1000K 1000001     114     202     320      83    7396    8431
> suspend200 1000K  999965     185     314     428      76    6733    8072
> 
>   testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
>       base   MAX 1005592    4651    6594   14979     100    8679    7918
>    defer10   MAX  928204    5106    7286   15199     100    9398    8380
>    defer20   MAX  984663    4774    6518   14920     100    8861    8063
>    defer50   MAX 1044099    4431    6368   14652     100    8350    7948
>   defer200   MAX 1040451    4423    6610   14674     100    8380    7931
>   fullbusy   MAX 1236608    3715    3987   12805     100    7051    7936
>   napibusy   MAX 1077516    4345   10155   15957     100    8080    7842
>  suspend10   MAX 1218344    3760    3990   12585     100    7150    7935
>  suspend20   MAX 1220056    3752    4053   12602     100    7150    7961
>  suspend50   MAX 1213666    3791    4103   12919     100    7183    7959
> suspend200   MAX 1217411    3768    3988   12863     100    7161    7954
> 
> ~ FAQ
> 
>   - Can the new timeout value be threaded through the new epoll ioctl ?
> 
>     Only with difficulty. The epoll ioctl sets options on an epoll
>     context and the NAPI ID associated with an epoll context can change
>     based on what file descriptors a user app adds to the epoll context.
>     This would introduce complexity in the API from the user perspective
>     and also complexity in the kernel.
> 
>   - Can irq suspend be built by combining NIC coalescing and
>     gro_flush_timeout ?
> 
>     No. The problem is that the long timeout must engage if and only if
>     prefer-busy is active.
> 
>     When using NIC coalescing for the short timeout (without
>     napi_defer_hard_irqs/gro_flush_timeout), an interrupt after an idle
>     period will trigger softirq, which will run napi polling. At this
>     point, prefer-busy is not active, so NIC interrupts would be
>     re-enabled. Then it is not possible for the longer timeout to
>     interject to switch control back to polling. In other words, only by
>     using the software timer for the short timeout, it is possible to
>     extend the timeout without having to reprogram the NIC timer or
>     reach down directly and disable interrupts.
> 
>     Using gro_flush_timeout for the long timeout also has problems, for
>     the same underlying reason. In the current napi implementation,
>     gro_flush_timeout is not tied to prefer-busy. We'd either have to
>     change that and in the process modify the existing deferral
>     mechanism, or introduce a state variable to determine whether
>     gro_flush_timeout is used as long timeout for irq suspend or whether
>     it is used for its default purpose. In an earlier version, we did
>     try something similar to the latter and made it work, but it ends up
>     being a lot more convoluted than our current proposal.
> 
>   - Isn't it already possible to combine busy looping with irq deferral?
> 
>     Yes, in fact enabling irq deferral via napi_defer_hard_irqs and
>     gro_flush_timeout is a precondition for prefer_busy_poll to have an
>     effect. If the application also uses a tight busy loop with
>     essentially nonblocking epoll_wait (accomplished with a very short
>     timeout parameter), this is the fullbusy case shown in the results.
>     An application using blocking epoll_wait is shown as the napibusy
>     case in the result. It's a hybrid approach that provides limited
>     latency benefits compared to the base case and plain irq deferral,
>     but not as good as fullbusy or suspend.
> 
> ~ Special thanks
> 
> Several people were involved in earlier stages of the development of this
> mechanism whom we'd like to thank:
> 
>   - Peter Cai (CC'd), for the initial kernel patch and his contributions
>     to the paper.
>     
>   - Mohammadamin Shafie (CC'd), for testing various versions of the kernel
>     patch and providing helpful feedback.
> 
> Thanks,
> Martin and Joe
> 
> [1]: https://lore.kernel.org/netdev/20240812125717.413108-1-jdamato@fastly.com/
> [2]: https://doi.org/10.1145/3626780
> [3]: https://github.com/memcached/memcached/blob/master/doc/napi_ids.txt
> [4]: https://github.com/leverich/mutilate
> [5]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/memcached.patch
> [6]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/libevent.patch
> [7]: https://github.com/martinkarsten/irqsuspend
> [8]: https://github.com/martinkarsten/irqsuspend/tree/main/results
> 
> v2:
>   - Cover letter updated, including a re-run of test data.
>   - Patch 1 rewritten to use netdev-genl instead of sysfs.
>   - Patch 3 updated with a comment added to napi_resume_irqs.
>   - Patch 4 rebased to apply now that commit b9ca079dd6b0 ("eventpoll:
>     Annotate data-race of busy_poll_usecs") has been picked up from VFS.
>   - Patch 6 updated the kernel documentation.
> 
> rfc -> v1: https://lore.kernel.org/netdev/20240823173103.94978-1-jdamato@fastly.com/
>   - Cover letter updated to include more details.
>   - Patch 1 updated to remove the documentation added. This was moved to
>     patch 6 with the rest of the docs (see below).
>   - Patch 5 updated to fix an error uncovered by the kernel build robot.
>     See patch 5's changelog for more details.
>   - Patch 6 added which updates kernel documentation.

The changes makes sense to me, and I could not find any obvious issue in
the patches.

I think this deserve some - even basic - self-tests coverage. Note that
you can enable GRO on veth devices to make NAPI instances avail there.

Possibly you could opt for a drivers/net defaulting to veth usage and
allowing the user to select real H/W via env variables.

Thanks,

Paolo


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v2 0/6] Suspend IRQs during application busy periods
  2024-10-29 10:25 ` Paolo Abeni
@ 2024-10-29 15:03   ` Joe Damato
  2024-10-29 17:53   ` Joe Damato
  1 sibling, 0 replies; 14+ messages in thread
From: Joe Damato @ 2024-10-29 15:03 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: netdev, namangulati, edumazet, amritha.nambiar, sridhar.samudrala,
	sdf, peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Alexander Lobakin, Alexander Viro,
	open list:BPF [MISC] :Keyword:(?:b|_)bpf(?:b|_),
	Christian Brauner, David S. Miller, Donald Hunter, Jan Kara,
	Jesper Dangaard Brouer, Jiri Pirko, Johannes Berg,
	Jonathan Corbet, open list:DOCUMENTATION,
	open list:FILESYSTEMS (VFS and infrastructure), open list,
	Lorenzo Bianconi, Martin Karsten, Mina Almasry,
	Sebastian Andrzej Siewior, Xuan Zhuo

On Tue, Oct 29, 2024 at 11:25:18AM +0100, Paolo Abeni wrote:
> On 10/21/24 03:52, Joe Damato wrote:
> > Greetings:
> > 
> > Welcome to v2, see changelog below.

[...]

> 
> The changes makes sense to me, and I could not find any obvious issue in
> the patches.

Thanks for taking a look.
 
> I think this deserve some - even basic - self-tests coverage. Note that
> you can enable GRO on veth devices to make NAPI instances avail there.
>
> Possibly you could opt for a drivers/net defaulting to veth usage and
> allowing the user to select real H/W via env variables.

Could we send a selftest in a follow up?

I am asking because we've jumped through a number of hoops to get to
this point:
  1. Added support for per-NAPI config settings [1] to address Eric's
     concern in the v1, which took several revisions and was stalled
     due to a merge window.
  2. Added support for netdev-genl to numerous drivers so that this
     would be usable in many different environments (gve, ena, tg3,
     e1000, e1000e, igc...) [2] [3] [4] [5] [6]
  3. We didn't get any feedback about adding a selftest in the RFC
     or v1 [7].

I think all busy poll methods currently in the kernel need selftests
(including the existing busy poll method) and its not clear to me
currently how much work it'll be to get veth or netdevsim working to
a point where we could re-submit this with a selftest that would be
considered reasonable enough.

FWIW, neither my previous series on the epoll ioctl nor the per NAPI
settings were rejected due to lack of selftests, but I did add a
simple selftest for the epoll ioctl later [8].

What do you think?

Could we get this merged without having to get selftests working and
come back with a selftest as separate change (like I did for the
epoll ioctl) ?

[1]: https://lore.kernel.org/lkml/20241011184527.16393-1-jdamato@fastly.com/
[2]: https://lore.kernel.org/lkml/20240930210731.1629-1-jdamato@fastly.com/
[3]: https://lore.kernel.org/lkml/20241002001331.65444-1-jdamato@fastly.com/
[4]: https://lore.kernel.org/netdev/20241009175509.31753-1-jdamato@fastly.com/
[5]: https://lore.kernel.org/netdev/20240930171232.1668-1-jdamato@fastly.com/
[6]: https://lore.kernel.org/lkml/20241028195243.52488-1-jdamato@fastly.com/
[7]: https://lore.kernel.org/all/20240823173103.94978-1-jdamato@fastly.com/
[8]: https://lore.kernel.org/netdev/171528362770.20134.14528995105510778643.git-patchwork-notify@kernel.org/T/

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next v2 0/6] Suspend IRQs during application busy periods
  2024-10-29 10:25 ` Paolo Abeni
  2024-10-29 15:03   ` Joe Damato
@ 2024-10-29 17:53   ` Joe Damato
  1 sibling, 0 replies; 14+ messages in thread
From: Joe Damato @ 2024-10-29 17:53 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: netdev, namangulati, edumazet, amritha.nambiar, sridhar.samudrala,
	sdf, peter, m2shafiei, bjorn, hch, willy, willemdebruijn.kernel,
	skhawaja, kuba, Alexander Lobakin, Alexander Viro,
	open list:BPF [MISC] :Keyword:(?:b|_)bpf(?:b|_),
	Christian Brauner, David S. Miller, Donald Hunter, Jan Kara,
	Jesper Dangaard Brouer, Jiri Pirko, Johannes Berg,
	Jonathan Corbet, open list:DOCUMENTATION,
	open list:FILESYSTEMS (VFS and infrastructure), open list,
	Lorenzo Bianconi, Martin Karsten, Mina Almasry,
	Sebastian Andrzej Siewior, Xuan Zhuo

On Tue, Oct 29, 2024 at 11:25:18AM +0100, Paolo Abeni wrote:
> On 10/21/24 03:52, Joe Damato wrote:

[...]

> 
> The changes makes sense to me, and I could not find any obvious issue in
> the patches.
> 
> I think this deserve some - even basic - self-tests coverage. Note that
> you can enable GRO on veth devices to make NAPI instances avail there.
> 
> Possibly you could opt for a drivers/net defaulting to veth usage and
> allowing the user to select real H/W via env variables.

Sorry, Paolo, but I took a cursory look at veth and I need to object
more strongly to your feedback here.

My understanding (which could be incorrect) is that neither veth nor
netdevsim use IRQs.

The whole purpose of this series is to block IRQs while data is
being busy polled to increase efficiency. That's in the cover
letter.

If neither of the drivers we'd use to simulate this in selftest use
IRQs, how could we build a selftest which ensures IRQs are correctly
suspended during busy periods without first making considerable
changes to either (or both?) drivers?

Respectfully: I don't think it's appropriate to block this series on
that much additional work.

Please reconsider and let me know how to proceed.

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2024-10-29 17:53 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-10-21  1:52 [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Joe Damato
2024-10-21  1:52 ` [PATCH net-next v2 1/6] net: Add napi_struct parameter irq_suspend_timeout Joe Damato
2024-10-21  1:52 ` [PATCH net-next v2 2/6] net: Suspend softirq when prefer_busy_poll is set Joe Damato
2024-10-21  1:52 ` [PATCH net-next v2 3/6] net: Add control functions for irq suspension Joe Damato
2024-10-21  1:52 ` [PATCH net-next v2 4/6] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
2024-10-21  1:53 ` [PATCH net-next v2 5/6] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
2024-10-21  1:53 ` [PATCH net-next v2 6/6] docs: networking: Describe irq suspension Joe Damato
2024-10-21 10:49   ` Bagas Sanjaya
2024-10-21 16:33     ` Joe Damato
2024-10-25  1:03       ` Bagas Sanjaya
2024-10-24 23:05 ` [PATCH net-next v2 0/6] Suspend IRQs during application busy periods Stanislav Fomichev
2024-10-29 10:25 ` Paolo Abeni
2024-10-29 15:03   ` Joe Damato
2024-10-29 17:53   ` Joe Damato

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).