* [PATCH net-next v5 0/7] Suspend IRQs during application busy periods
@ 2024-11-03  5:24 Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 1/7] net: Add napi_struct parameter irq_suspend_timeout Joe Damato
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Joe Damato @ 2024-11-03  5:24 UTC (permalink / raw)
  To: netdev
  Cc: hdanton, bagasdotme, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Joe Damato,
	Alexander Lobakin, Alexander Viro, Andrew Lunn,
	open list:BPF [MISC]:Keyword:(?:b|_)bpf(?:b|_), Christian Brauner,
	David Ahern, David S. Miller, Donald Hunter, Jan Kara,
	Jesper Dangaard Brouer, Jiri Pirko, Johannes Berg,
	Jonathan Corbet, Kory Maincent, Larysa Zaremba,
	open list:DOCUMENTATION,
	open list:FILESYSTEMS (VFS and infrastructure), open list,
	open list:KERNEL SELFTEST FRAMEWORK, Lorenzo Bianconi,
	Martin Karsten, Mina Almasry, Sebastian Andrzej Siewior,
	Shuah Khan, Simon Horman, Xuan Zhuo

Greetings:

Welcome to v5; see the changelog below. Note that our performance tests
were not re-run for this revision, since it only updates a commit
message, fixes typos, removes a short unnecessary paragraph from the
documentation, and makes a very minor functional change so that IRQs are
not suspended in certain error cases.

This series introduces a new mechanism, IRQ suspension, which allows
network applications using epoll to mask IRQs during periods of high
traffic while also reducing tail latency (compared to existing
mechanisms, see below) during periods of low traffic. In doing so, this
balances CPU consumption with network processing efficiency.

Martin Karsten (CC'd) and I have been collaborating on this series for
several months and have appreciated the feedback from the community on
our RFC [1]. We've updated the cover letter and kernel documentation in
an attempt to more clearly explain how this mechanism works, how
applications can use it, and how it compares to existing mechanisms in
the kernel. We've also added an additional test case, 'fullbusy',
implemented by modifying libevent, for comparison. See below for a
detailed description, a link to the patch, and test results.

I briefly mentioned this idea at netdev conf 2024 (for those who were
there) and Martin described this idea in an earlier paper presented at
Sigmetrics 2024 [2].

~ The short explanation (TL;DR)

We propose adding a new napi config parameter: irq_suspend_timeout to
help balance CPU usage and network processing efficiency when using IRQ
deferral and napi busy poll.

If this parameter is set to a non-zero value *and* a user application
has enabled preferred busy poll on an epoll context (via the
EPIOCSPARAMS ioctl introduced in commit 18e2bf0edf4d ("eventpoll: Add
epoll ioctl for epoll_params")), then application calls to epoll_wait
for that context will cause device IRQs and softirq processing to be
suspended as long as epoll_wait successfully retrieves data from the
NAPI. Each time data is retrieved, the irq_suspend_timeout is deferred.
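
For illustration, below is a minimal userspace sketch of the epoll side
(error handling kept minimal). It assumes headers that provide struct
epoll_params and EPIOCSPARAMS; the busy_poller selftest added in patch 6
defines them locally when the system headers are too old:

  #include <err.h>
  #include <sys/epoll.h>
  #include <sys/ioctl.h>

  int main(void)
  {
      /* These values mirror the 'suspend' benchmark configs below. */
      struct epoll_params params = {
          .busy_poll_usecs  = 0,  /* exactly one napi poll per wakeup */
          .busy_poll_budget = 64,
          .prefer_busy_poll = 1,
      };
      int epfd = epoll_create1(0);

      if (epfd == -1 || ioctl(epfd, EPIOCSPARAMS, &params) == -1)
          err(1, "enabling prefer_busy_poll");

      /* ... register sockets with epoll_ctl() and call epoll_wait() ... */
      return 0;
  }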

If/when network traffic subsides and epoll_wait returns no data, IRQ
suspension is immediately reverted to the existing
napi_defer_hard_irqs and gro_flush_timeout mechanism, which was
introduced in commit 6f8b12d661d0 ("net: napi: add hard irqs deferral
feature").

The irq_suspend_timeout serves as a safety mechanism. If userland takes
a long time processing data, irq_suspend_timeout will fire and restart
normal NAPI processing.

For a more in depth explanation, please continue reading.

~ Comparison with existing mechanisms

Interrupt mitigation can be accomplished in napi software, by setting
napi_defer_hard_irqs and gro_flush_timeout, or via interrupt coalescing
in the NIC. This can be quite efficient, but in both cases, a fixed
timeout (or packet count) needs to be configured. However, a fixed
timeout cannot effectively support both low- and high-load situations:

At low load, an application typically processes a few requests and then
waits to receive more input data. In this scenario, a large timeout will
cause unnecessary latency.

At high load, an application typically processes many requests before
being ready to receive more input data. In this case, a small timeout
will likely fire prematurely and trigger irq/softirq processing, which
interferes with the application's execution. This causes overhead, most
likely due to cache contention.

While NICs attempt to provide adaptive interrupt coalescing schemes,
these cannot properly take into account application-level processing.

An alternative packet delivery mechanism is busy-polling, which results
in perfect alignment of application processing and network polling. It
delivers optimal performance (throughput and latency), but results in
100% cpu utilization and is thus inefficient for below-capacity
workloads.

We propose to add a new packet delivery mode that properly alternates
between busy polling and interrupt-based delivery depending on busy and
idle periods of the application. During a busy period, the system
operates in busy-polling mode, which avoids interference. During an idle
period, the system falls back to interrupt deferral, but with a small
timeout to avoid excessive latencies. This delivery mode can also be
viewed as an extension of basic interrupt deferral, but alternating
between a small and a very large timeout.

This delivery mode is efficient, because it avoids softirq execution
interfering with application processing during busy periods. It can be
used with blocking epoll_wait to conserve cpu cycles during idle
periods. The effect of alternating between busy and idle periods is that
performance (throughput and latency) is very close to full busy polling,
while cpu utilization is lower and very close to interrupt mitigation.

~ Usage details

IRQ suspension is introduced via a per-NAPI configuration parameter that
controls the maximum time that IRQs can be suspended.

Here's how it is intended to work:
  - The user application (or system administrator) uses the netdev-genl
    netlink interface to set the pre-existing napi_defer_hard_irqs and
    gro_flush_timeout NAPI config parameters to enable IRQ deferral.

  - The user application (or system administrator) sets the proposed
    irq_suspend_timeout parameter via the netdev-genl netlink interface
    to a larger value than gro_flush_timeout to enable IRQ suspension.

  - The user application issues the existing epoll ioctl to set the
    prefer_busy_poll flag on the epoll context.

  - The user application then calls epoll_wait to busy poll for network
    events, as it normally would.

  - If epoll_wait returns events to userland, IRQs are suspended for the
    duration of irq_suspend_timeout.

  - If epoll_wait finds no events and the thread is about to go to
    sleep, IRQ handling using napi_defer_hard_irqs and gro_flush_timeout
    is resumed.

As long as epoll_wait is retrieving events, IRQs (and softirq
processing) for the NAPI being polled remain disabled. When network
traffic reduces, eventually a busy poll loop in the kernel will retrieve
no data. When this occurs, regular IRQ deferral using gro_flush_timeout
for the polled NAPI is re-enabled.

Unless IRQ suspension is continued by subsequent calls to epoll_wait, it
automatically times out after the irq_suspend_timeout timer expires.
Regular deferral is also immediately re-enabled when the epoll context
is destroyed.
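
To make the first two steps concrete, here is a rough sketch of setting
the per-NAPI parameters over netdev-genl using the YNL C helpers, the
same interface the busy_poller selftest in patch 6 uses. The helper
names follow the YNL-generated netdev-user.h and are meant as an
illustration, not an authoritative reference; the ynl cli can be used
instead:

  #include <ynl.h>
  #include "netdev-user.h"

  /* napi_id would typically come from SO_INCOMING_NAPI_ID or a
   * netdev-genl napi-get dump; it is passed in here for brevity.
   */
  static int configure_napi(unsigned int napi_id)
  {
      struct ynl_error yerr;
      struct ynl_sock *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
      struct netdev_napi_set_req *req;
      int ret;

      if (!ys)
          return -1;

      req = netdev_napi_set_req_alloc();
      netdev_napi_set_req_set_id(req, napi_id);
      netdev_napi_set_req_set_defer_hard_irqs(req, 100);
      netdev_napi_set_req_set_gro_flush_timeout(req, 50000);      /* 50us */
      netdev_napi_set_req_set_irq_suspend_timeout(req, 20000000); /* 20ms */

      ret = netdev_napi_set(ys, req);

      netdev_napi_set_req_free(req);
      ynl_sock_destroy(ys);
      return ret;
  }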

~ Usage scenario

The target scenario for IRQ suspension as packet delivery mode is a
system that runs a dominant application with substantial network I/O.
The target application can be configured to receive input data up to a
certain batch size (via epoll_wait maxevents parameter) and this batch
size determines the worst-case latency that application requests might
experience. Because packet delivery is suspended during the target
application's processing, the batch size also determines the worst-case
latency of concurrent applications using the same RX queue(s).

gro_flush_timeout should be set as small as possible, but large enough to
make sure that a single request is unlikely to be interfered with.

irq_suspend_timeout is largely a safety mechanism against misbehaving
applications. It should be set large enough to cover the processing of an
entire application batch, i.e., the factor between gro_flush_timeout and
irq_suspend_timeout should roughly correspond to the maximum batch size
that the target application would process in one go.
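
As a rough, purely illustrative calculation (the batch size of 64 is an
assumption for this example, not taken from the benchmark setup):

  gro_flush_timeout    =  50,000 ns   (per-request deferral)
  batch size           =  64          (epoll_wait maxevents)
  irq_suspend_timeout >=  64 * 50,000 ns = 3,200,000 ns

The benchmarks below simply use a generous 20,000,000 ns (20 ms) for all
suspend variants, which comfortably covers such a batch.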

~ Design rationale

The implementation of the IRQ suspension mechanism very nicely dovetails
with the existing mechanism for IRQ deferral when preferred busy poll is
enabled (introduced in commit 7fd3253a7de6 ("net: Introduce preferred
busy-polling"), see that commit message for more details).

While it would be possible to inject the suspend timeout via
the existing epoll ioctl, it is more natural to avoid this path for one
main reason:

  An epoll context is linked to NAPI IDs as file descriptors are added;
  this means any epoll context might suddenly be associated with a
  different net_device if the application were to replace all existing
  fds with fds from a different device. In this case, the scope of the
  suspend timeout becomes unclear and many edge cases for both the user
  application and the kernel are introduced.

Only a single iteration through napi busy polling is needed for this
mechanism to work effectively. Since an important objective for this
mechanism is preserving cpu cycles, exactly one iteration of the napi
busy loop is invoked when busy_poll_usecs is set to 0.

~ Important call out in the implementation

  - Enabling per epoll-context preferred busy poll will now effectively
    lead to a nonblocking iteration through napi_busy_loop, even when
    busy_poll_usecs is 0. See patch 4.

~ Benchmark configs & descriptions

The changes were benchmarked with memcached [3] using the benchmarking
tool mutilate [4].

To facilitate benchmarking, a small patch [5] was applied to memcached
1.6.29 to allow setting per-epoll context preferred busy poll and other
settings via environment variables. Another small patch [6] was applied
to libevent to enable full busy-polling.

Multiple scenarios were benchmarked as described below and the scripts
used for producing these results can be found on github [7] (note: all
scenarios use NAPI-based traffic splitting via SO_INCOMING_NAPI_ID by passing
-N to memcached):

  - base:
    - no other options enabled
  - deferX:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to X,000
  - napibusy:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to 200,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 64,
      busy_poll_budget = 64, prefer_busy_poll = true)
  - fullbusy:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to 5,000,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 1000,
      busy_poll_budget = 64, prefer_busy_poll = true)
    - change memcached's nonblocking epoll_wait invocation (via
      libevent) to use a 1 ms timeout
  - suspendX:
    - set defer_hard_irqs to 100
    - set gro_flush_timeout to X,000
    - set irq_suspend_timeout to 20,000,000
    - enable busy poll via the existing ioctl (busy_poll_usecs = 0,
      busy_poll_budget = 64, prefer_busy_poll = true)

~ Benchmark results

Tested on:

Single socket AMD EPYC 7662 64-Core Processor
Hyperthreading disabled
4 NUMA Zones (NPS=4)
16 CPUs per NUMA zone (64 cores total)
2 x Dual port 100gbps Mellanox Technologies ConnectX-5 Ex EN NIC

The test machine is configured such that a single interface has 8 RX
queues. The queues' IRQs and memcached are pinned to CPUs that are
NUMA-local to the interface which is under test. The NIC's interrupt
coalescing configuration is left at boot-time defaults.

Results:

Results are shown below. The mechanism added by this series is
represented by the 'suspend' cases. Data presented shows a summary over
at least 10 runs of each test case [8] using the scripts on github [7].
For latency, the median is shown. For throughput and CPU utilization,
the average is shown.

The results also include cycles-per-query (cpq) and
instructions-per-query (ipq) metrics, following the methodology proposed
in [2], to augment the CPU utilization numbers, which could be skewed
due to frequency scaling. We find that this does not appear to be the
case as CPU utilization and low-level metrics show similar trends.

These results were captured using the scripts on github [7] to
illustrate how this approach compares with other pre-existing
mechanisms. This data is not to be interpreted as scientific data
captured in a fully isolated lab setting, but instead as best effort,
illustrative information comparing and contrasting tradeoffs.

The absolute QPS results are higher than our previous submission, but
the relative differences between variants are equivalent. Because the
patches have been rebased on 6.12, several factors have likely
influenced the overall performance. Most importantly, we had to switch
to a new set of basic kernel options, which has likely altered the
baseline performance. Because the overall comparison of variants still
holds, we have not attempted to recreate the exact set of kernel options
from the previous submission.

Compare:
- Throughput (MAX) and latencies of base vs suspend.
- CPU usage of napibusy and fullbusy during lower load (200K, 400K for
  example) vs suspend.
- Latency of the defer variants vs suspend as timeout and load
  increases.

The overall takeaway is that the suspend variants provide a superior
combination of high throughput, low latency, and low cpu utilization
compared to all other variants. Each of the suspend variants works very
well, but some fine-tuning between latency and cpu utilization is still
possible by tuning the small timeout (gro_flush_timeout).

Note: we've reorganized the results to make comparison among testcases
with the same load easier.

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base  200K  200024     127     254     458      25   12748   11289
   defer10  200K  199991      64     128     166      27   18763   16574
   defer20  200K  199986      72     135     178      25   15405   14173
   defer50  200K  200025      91     149     198      23   12275   12203
  defer200  200K  199996     182     266     326      18    8595    9183
  fullbusy  200K  200040      58     123     167     100   43641   23145
  napibusy  200K  200009     115     244     299      56   24797   24693
 suspend10  200K  200005      63     128     167      32   19559   17240
 suspend20  200K  199952      69     132     170      29   16324   14838
 suspend50  200K  200019      84     144     189      26   13106   12516
suspend200  200K  199978     168     264     326      20    9331    9643

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base  400K  400017     157     292     762      39    9287    9325
   defer10  400K  400033      71     141     204      53   13950   12943
   defer20  400K  399935      79     150     212      47   12027   11673
   defer50  400K  399888     101     171     231      39    9556    9921
  defer200  400K  399993     200     287     358      32    7428    8576
  fullbusy  400K  400018      63     132     203     100   21827   16062
  napibusy  400K  399970      89     230     292      83   18156   16508
 suspend10  400K  400061      69     139     202      54   13576   13057
 suspend20  400K  399988      73     144     206      49   11930   11773
 suspend50  400K  399975      88     161     218      42    9996   10270
suspend200  400K  399954     172     276     353      34    7847    8713

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base  600K  600031     166     289     631      61    9188    8787
   defer10  600K  599967      85     167     262      75   11833   10947
   defer20  600K  599888      89     165     243      66   10513   10362
   defer50  600K  600072     109     185     253      55    8664    9190
  defer200  600K  599951     222     315     393      45    6892    8213
  fullbusy  600K  600041      69     145     227     100   14549   13936
  napibusy  600K  599980      79     188     280      96   13927   14155
 suspend10  600K  600028      78     159     267      69   10877   11032
 suspend20  600K  600026      81     159     254      64    9922   10320
 suspend50  600K  600007      96     178     258      57    8681    9331
suspend200  600K  599964     177     295     369      47    7115    8366

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base  800K  800034     198     329     698      84    9366    8338
   defer10  800K  799718     243     642    1457      95   10532    9007
   defer20  800K  800009     132     245     399      89    9956    8979
   defer50  800K  800024     136     228     378      80    9002    8598
  defer200  800K  799965     255     362     473      66    7481    8147
  fullbusy  800K  799927      78     157     253     100   10915   12533
  napibusy  800K  799870      81     173     273      99   10826   12532
 suspend10  800K  799991      84     167     269      83    9380    9802
 suspend20  800K  799979      90     172     290      78    8765    9404
 suspend50  800K  800031     106     191     307      71    7945    8805
suspend200  800K  799905     182     307     411      62    6985    8242

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base 1000K  919543    3805    6390   14229      98    9324    7978
   defer10 1000K  850751    4574    7382   15370      99   10218    8470
   defer20 1000K  890296    4736    6862   14858      99    9708    8277
   defer50 1000K  932694    3463    6180   13251      97    9148    8053
  defer200 1000K  951311    3524    6052   13599      96    8875    7845
  fullbusy 1000K 1000011      90     181     278     100    8731   10686
  napibusy 1000K 1000050      93     184     280     100    8721   10547
 suspend10 1000K  999962     101     193     306      92    8138    8980
 suspend20 1000K 1000030     103     191     324      88    7844    8763
 suspend50 1000K 1000001     114     202     320      83    7396    8431
suspend200 1000K  999965     185     314     428      76    6733    8072

  testcase  load     qps  avglat  95%lat  99%lat     cpu     cpq     ipq
      base   MAX 1005592    4651    6594   14979     100    8679    7918
   defer10   MAX  928204    5106    7286   15199     100    9398    8380
   defer20   MAX  984663    4774    6518   14920     100    8861    8063
   defer50   MAX 1044099    4431    6368   14652     100    8350    7948
  defer200   MAX 1040451    4423    6610   14674     100    8380    7931
  fullbusy   MAX 1236608    3715    3987   12805     100    7051    7936
  napibusy   MAX 1077516    4345   10155   15957     100    8080    7842
 suspend10   MAX 1218344    3760    3990   12585     100    7150    7935
 suspend20   MAX 1220056    3752    4053   12602     100    7150    7961
 suspend50   MAX 1213666    3791    4103   12919     100    7183    7959
suspend200   MAX 1217411    3768    3988   12863     100    7161    7954

~ FAQ

  - Why is a new parameter needed? Does irq_suspend_timeout override
    gro_flush_timeout?

    Using the suspend mechanism causes the system to alternate between
    polling mode and irq-driven packet delivery. During busy periods,
    irq_suspend_timeout overrides gro_flush_timeout and keeps the system
    busy polling, but when epoll finds no events, the settings of
    gro_flush_timeout and napi_defer_hard_irqs determine the next step.

    There are essentially three possible loops for network processing and
    packet delivery:
    
    1) hardirq -> softirq   -> napi poll; basic interrupt delivery
    
    2)   timer -> softirq   -> napi poll; deferred irq processing
    
    3)   epoll -> busy-poll -> napi poll; busy looping
    
    Loop 2) can take control from Loop 1), if gro_flush_timeout and
    napi_defer_hard_irqs are set.
    
    If gro_flush_timeout and napi_defer_hard_irqs are set, Loops 2) and
    3) "wrestle" with each other for control. During busy periods,
    irq_suspend_timeout is used as the timer in Loop 2), which essentially
    tilts this in favour of Loop 3).
    
    If gro_flush_timeout and napi_defer_hard_irqs are not set, Loop 3)
    cannot take control from Loop 1).
    
    Therefore, setting gro_flush_timeout and napi_defer_hard_irqs is the
    recommended usage, because otherwise setting irq_suspend_timeout
    might not have any discernible effect.

    We ran experiments with these parameters set to zero and the results
    are as expected and essentially the same as the base case.

  - Can the new timeout value be threaded through the new epoll ioctl?

    Only with difficulty. The epoll ioctl sets options on an epoll
    context and the NAPI ID associated with an epoll context can change
    based on what file descriptors a user app adds to the epoll context.
    This would introduce complexity in the API from the user perspective
    and also complexity in the kernel.

  - Can irq suspend be built by combining NIC coalescing and
    gro_flush_timeout?

    No. The problem is that the long timeout must engage if and only if
    prefer-busy is active.

    When using NIC coalescing for the short timeout (without
    napi_defer_hard_irqs/gro_flush_timeout), an interrupt after an idle
    period will trigger softirq, which will run napi polling. At this
    point, prefer-busy is not active, so NIC interrupts would be
    re-enabled. Then it is not possible for the longer timeout to
    interject to switch control back to polling. In other words, only by
    using the software timer for the short timeout is it possible to
    extend the timeout without having to reprogram the NIC timer or
    reach down directly and disable interrupts.

    Using gro_flush_timeout for the long timeout also has problems, for
    the same underlying reason. In the current napi implementation,
    gro_flush_timeout is not tied to prefer-busy. We'd either have to
    change that and in the process modify the existing deferral
    mechanism, or introduce a state variable to determine whether
    gro_flush_timeout is used as the long timeout for irq suspend or
    whether it is used for its default purpose. In an earlier version, we
    did try something similar to the latter and made it work, but it
    ended up being a lot more convoluted than our current proposal.

  - Isn't it already possible to combine busy looping with irq deferral?

    Yes, in fact enabling irq deferral via napi_defer_hard_irqs and
    gro_flush_timeout is a precondition for prefer_busy_poll to have an
    effect. If the application also uses a tight busy loop with
    essentially nonblocking epoll_wait (accomplished with a very short
    timeout parameter), this is the fullbusy case shown in the results.
    An application using blocking epoll_wait is shown as the napibusy
    case in the results. It's a hybrid approach that provides limited
    latency benefits compared to the base case and plain irq deferral,
    but not as good as fullbusy or suspend.

~ Special thanks

Several people were involved in earlier stages of the development of this
mechanism whom we'd like to thank:

  - Peter Cai (CC'd), for the initial kernel patch and his contributions
    to the paper.
    
  - Mohammadamin Shafie (CC'd), for testing various versions of the kernel
    patch and providing helpful feedback.

Thanks,
Martin and Joe

[1]: https://lore.kernel.org/netdev/20240812125717.413108-1-jdamato@fastly.com/
[2]: https://doi.org/10.1145/3626780
[3]: https://github.com/memcached/memcached/blob/master/doc/napi_ids.txt
[4]: https://github.com/leverich/mutilate
[5]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/memcached.patch
[6]: https://raw.githubusercontent.com/martinkarsten/irqsuspend/main/patches/libevent.patch
[7]: https://github.com/martinkarsten/irqsuspend
[8]: https://github.com/martinkarsten/irqsuspend/tree/main/results

v5:
  - Adjusted patch 5 to only suspend IRQs when ep_send_events returns a
    positive return value. This issue was pointed out by Hillf Danton.
  - Updated the commit message of patch 6, which still mentioned netcat
    despite the code having been updated in v4 to use socat, and fixed a
    misspelling of netdevsim.
  - Fixed a minor typo in patch 7 and removed an unnecessary paragraph.
  - Added Sridhar Samudrala's Reviewed-by to patches 1-5 and 7.

v4: https://lore.kernel.org/netdev/20241102005214.32443-1-jdamato@fastly.com/
  - Added a new FAQ item to cover letter.
  - Updated patch 6 to use socat instead of nc in busy_poll_test.sh and
    updated busy_poller.c to use netlink directly to configure napi
    params.
  - Updated the kernel documentation in patch 7 to include more details.
  - Dropped Stanislav's Acked-by and Bagas' Reviewed-by from patch 7
    since the documentation was updated.

v3: https://lore.kernel.org/netdev/20241101004846.32532-1-jdamato@fastly.com/
  - Added Stanislav Fomichev's Acked-by to every patch except the newly
    added selftest.
  - Added Bagas Sanjaya's Reviewed-by to the documentation patch.
  - Fixed the commit message of patch 2 to remove a reference to the now
    non-existent sysfs setting.
  - Added a self test which tests both "regular" busy poll and busy poll
    with suspend enabled. This was added as patch 6 as requested by
    Paolo. netdevsim was chosen instead of veth due to netdevsim's
    pre-existing support for netdev-genl. See the commit message of
    patch 6 for more details.

v2: https://lore.kernel.org/bpf/20241021015311.95468-1-jdamato@fastly.com/
  - Cover letter updated, including a re-run of test data.
  - Patch 1 rewritten to use netdev-genl instead of sysfs.
  - Patch 3 updated with a comment added to napi_resume_irqs.
  - Patch 4 rebased to apply now that commit b9ca079dd6b0 ("eventpoll:
    Annotate data-race of busy_poll_usecs") has been picked up from VFS.
  - Patch 6 updated the kernel documentation.

rfc -> v1:
  - Cover letter updated to include more details.
  - Patch 1 updated to remove the documentation added. This was moved to
    patch 6 with the rest of the docs (see below).
  - Patch 5 updated to fix an error uncovered by the kernel build robot.
    See patch 5's changelog for more details.
  - Patch 6 added which updates kernel documentation.

Joe Damato (2):
  selftests: net: Add busy_poll_test
  docs: networking: Describe irq suspension

Martin Karsten (5):
  net: Add napi_struct parameter irq_suspend_timeout
  net: Suspend softirq when prefer_busy_poll is set
  net: Add control functions for irq suspension
  eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set
  eventpoll: Control irq suspension for prefer_busy_poll

 Documentation/netlink/specs/netdev.yaml       |   7 +
 Documentation/networking/napi.rst             | 172 ++++++++-
 fs/eventpoll.c                                |  36 +-
 include/linux/netdevice.h                     |   2 +
 include/net/busy_poll.h                       |   3 +
 include/uapi/linux/netdev.h                   |   1 +
 net/core/dev.c                                |  58 +++-
 net/core/dev.h                                |  25 ++
 net/core/netdev-genl-gen.c                    |   5 +-
 net/core/netdev-genl.c                        |  12 +
 tools/include/uapi/linux/netdev.h             |   1 +
 tools/testing/selftests/net/.gitignore        |   1 +
 tools/testing/selftests/net/Makefile          |   3 +-
 tools/testing/selftests/net/busy_poll_test.sh | 164 +++++++++
 tools/testing/selftests/net/busy_poller.c     | 328 ++++++++++++++++++
 15 files changed, 807 insertions(+), 11 deletions(-)
 create mode 100755 tools/testing/selftests/net/busy_poll_test.sh
 create mode 100644 tools/testing/selftests/net/busy_poller.c


base-commit: dbb9a7ef347828870df3e5e6ddf19469a3277fc9
-- 
2.25.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH net-next v5 1/7] net: Add napi_struct parameter irq_suspend_timeout
  2024-11-03  5:24 [PATCH net-next v5 0/7] Suspend IRQs during application busy periods Joe Damato
@ 2024-11-03  5:24 ` Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 2/7] net: Suspend softirq when prefer_busy_poll is set Joe Damato
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Joe Damato @ 2024-11-03  5:24 UTC (permalink / raw)
  To: netdev
  Cc: hdanton, bagasdotme, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, Donald Hunter, David S. Miller, Simon Horman,
	Andrew Lunn, Jesper Dangaard Brouer, Mina Almasry, Xuan Zhuo,
	David Ahern, Sebastian Andrzej Siewior, Lorenzo Bianconi,
	Alexander Lobakin, Jiri Pirko, Johannes Berg, open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

Add a per-NAPI IRQ suspension parameter, which can be get/set with
netdev-genl.

This patch doesn't change any behavior but prepares the code for other
changes in the following commits which use irq_suspend_timeout as a
timeout for IRQ suspension.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
---
 v1 -> v2:
   - rewrote this patch to make irq_suspend_timeout per-napi via
     netdev-genl.

 rfc -> v1:
   - removed napi.rst documentation from this patch; added to patch 6.

 Documentation/netlink/specs/netdev.yaml |  7 +++++++
 include/linux/netdevice.h               |  2 ++
 include/uapi/linux/netdev.h             |  1 +
 net/core/dev.c                          |  2 ++
 net/core/dev.h                          | 25 +++++++++++++++++++++++++
 net/core/netdev-genl-gen.c              |  5 +++--
 net/core/netdev-genl.c                  | 12 ++++++++++++
 tools/include/uapi/linux/netdev.h       |  1 +
 8 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index f9cb97d6106c..cbb544bd6c84 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -263,6 +263,11 @@ attribute-sets:
              the end of a NAPI cycle. This may add receive latency in exchange
              for reducing the number of frames processed by the network stack.
         type: uint
+      -
+        name: irq-suspend-timeout
+        doc: The timeout, in nanoseconds, of how long to suspend irq
+             processing, if event polling finds events
+        type: uint
   -
     name: queue
     attributes:
@@ -653,6 +658,7 @@ operations:
             - pid
             - defer-hard-irqs
             - gro-flush-timeout
+            - irq-suspend-timeout
       dump:
         request:
           attributes:
@@ -704,6 +710,7 @@ operations:
             - id
             - defer-hard-irqs
             - gro-flush-timeout
+            - irq-suspend-timeout
 
 kernel-family:
   headers: [ "linux/list.h"]
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 3c552b648b27..c8ab5f08092b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -347,6 +347,7 @@ struct gro_list {
  */
 struct napi_config {
 	u64 gro_flush_timeout;
+	u64 irq_suspend_timeout;
 	u32 defer_hard_irqs;
 	unsigned int napi_id;
 };
@@ -383,6 +384,7 @@ struct napi_struct {
 	struct hrtimer		timer;
 	struct task_struct	*thread;
 	unsigned long		gro_flush_timeout;
+	unsigned long		irq_suspend_timeout;
 	u32			defer_hard_irqs;
 	/* control-path-only fields follow */
 	struct list_head	dev_list;
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index e3ebb49f60d2..e4be227d3ad6 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -124,6 +124,7 @@ enum {
 	NETDEV_A_NAPI_PID,
 	NETDEV_A_NAPI_DEFER_HARD_IRQS,
 	NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+	NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 
 	__NETDEV_A_NAPI_MAX,
 	NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
diff --git a/net/core/dev.c b/net/core/dev.c
index 6a31152e4606..4d910872963f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6666,6 +6666,7 @@ static void napi_restore_config(struct napi_struct *n)
 {
 	n->defer_hard_irqs = n->config->defer_hard_irqs;
 	n->gro_flush_timeout = n->config->gro_flush_timeout;
+	n->irq_suspend_timeout = n->config->irq_suspend_timeout;
 	/* a NAPI ID might be stored in the config, if so use it. if not, use
 	 * napi_hash_add to generate one for us. It will be saved to the config
 	 * in napi_disable.
@@ -6680,6 +6681,7 @@ static void napi_save_config(struct napi_struct *n)
 {
 	n->config->defer_hard_irqs = n->defer_hard_irqs;
 	n->config->gro_flush_timeout = n->gro_flush_timeout;
+	n->config->irq_suspend_timeout = n->irq_suspend_timeout;
 	n->config->napi_id = n->napi_id;
 	napi_hash_del(n);
 }
diff --git a/net/core/dev.h b/net/core/dev.h
index 7881bced70a9..d043dee25a68 100644
--- a/net/core/dev.h
+++ b/net/core/dev.h
@@ -236,6 +236,31 @@ static inline void netdev_set_gro_flush_timeout(struct net_device *netdev,
 		netdev->napi_config[i].gro_flush_timeout = timeout;
 }
 
+/**
+ * napi_get_irq_suspend_timeout - get the irq_suspend_timeout
+ * @n: napi struct to get the irq_suspend_timeout from
+ *
+ * Return: the per-NAPI value of the irq_suspend_timeout field.
+ */
+static inline unsigned long
+napi_get_irq_suspend_timeout(const struct napi_struct *n)
+{
+	return READ_ONCE(n->irq_suspend_timeout);
+}
+
+/**
+ * napi_set_irq_suspend_timeout - set the irq_suspend_timeout for a napi
+ * @n: napi struct to set the irq_suspend_timeout
+ * @timeout: timeout value to set
+ *
+ * napi_set_irq_suspend_timeout sets the per-NAPI irq_suspend_timeout
+ */
+static inline void napi_set_irq_suspend_timeout(struct napi_struct *n,
+						unsigned long timeout)
+{
+	WRITE_ONCE(n->irq_suspend_timeout, timeout);
+}
+
 int rps_cpumask_housekeeping(struct cpumask *mask);
 
 #if defined(CONFIG_DEBUG_NET) && defined(CONFIG_BPF_SYSCALL)
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 21de7e10be16..a89cbd8d87c3 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -92,10 +92,11 @@ static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_DMABUF_FD + 1]
 };
 
 /* NETDEV_CMD_NAPI_SET - do */
-static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT + 1] = {
+static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT + 1] = {
 	[NETDEV_A_NAPI_ID] = { .type = NLA_U32, },
 	[NETDEV_A_NAPI_DEFER_HARD_IRQS] = NLA_POLICY_FULL_RANGE(NLA_U32, &netdev_a_napi_defer_hard_irqs_range),
 	[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT] = { .type = NLA_UINT, },
+	[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT] = { .type = NLA_UINT, },
 };
 
 /* Ops table for netdev */
@@ -186,7 +187,7 @@ static const struct genl_split_ops netdev_nl_ops[] = {
 		.cmd		= NETDEV_CMD_NAPI_SET,
 		.doit		= netdev_nl_napi_set_doit,
 		.policy		= netdev_napi_set_nl_policy,
-		.maxattr	= NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+		.maxattr	= NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 		.flags		= GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
 	},
 };
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index b49c3b4e5fbe..765ce7c9d73b 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -161,6 +161,7 @@ static int
 netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
 			const struct genl_info *info)
 {
+	unsigned long irq_suspend_timeout;
 	unsigned long gro_flush_timeout;
 	u32 napi_defer_hard_irqs;
 	void *hdr;
@@ -196,6 +197,11 @@ netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
 			napi_defer_hard_irqs))
 		goto nla_put_failure;
 
+	irq_suspend_timeout = napi_get_irq_suspend_timeout(napi);
+	if (nla_put_uint(rsp, NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
+			 irq_suspend_timeout))
+		goto nla_put_failure;
+
 	gro_flush_timeout = napi_get_gro_flush_timeout(napi);
 	if (nla_put_uint(rsp, NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
 			 gro_flush_timeout))
@@ -306,6 +312,7 @@ int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 static int
 netdev_nl_napi_set_config(struct napi_struct *napi, struct genl_info *info)
 {
+	u64 irq_suspend_timeout = 0;
 	u64 gro_flush_timeout = 0;
 	u32 defer = 0;
 
@@ -314,6 +321,11 @@ netdev_nl_napi_set_config(struct napi_struct *napi, struct genl_info *info)
 		napi_set_defer_hard_irqs(napi, defer);
 	}
 
+	if (info->attrs[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT]) {
+		irq_suspend_timeout = nla_get_uint(info->attrs[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT]);
+		napi_set_irq_suspend_timeout(napi, irq_suspend_timeout);
+	}
+
 	if (info->attrs[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT]) {
 		gro_flush_timeout = nla_get_uint(info->attrs[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT]);
 		napi_set_gro_flush_timeout(napi, gro_flush_timeout);
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index e3ebb49f60d2..e4be227d3ad6 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -124,6 +124,7 @@ enum {
 	NETDEV_A_NAPI_PID,
 	NETDEV_A_NAPI_DEFER_HARD_IRQS,
 	NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+	NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
 
 	__NETDEV_A_NAPI_MAX,
 	NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next v5 2/7] net: Suspend softirq when prefer_busy_poll is set
  2024-11-03  5:24 [PATCH net-next v5 0/7] Suspend IRQs during application busy periods Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 1/7] net: Add napi_struct parameter irq_suspend_timeout Joe Damato
@ 2024-11-03  5:24 ` Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 3/7] net: Add control functions for irq suspension Joe Damato
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Joe Damato @ 2024-11-03  5:24 UTC (permalink / raw)
  To: netdev
  Cc: hdanton, bagasdotme, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, David S. Miller, Simon Horman, David Ahern,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Alexander Lobakin,
	open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

When NAPI_F_PREFER_BUSY_POLL is set during busy_poll_stop and the
irq_suspend_timeout is nonzero, this timeout is used to defer softirq
scheduling, potentially longer than gro_flush_timeout. This can be used
to effectively suspend softirq processing during the time it takes for
an application to process data and return to the next busy-poll.

The call to napi->poll in busy_poll_stop might lead to an invocation of
napi_complete_done, but the prefer-busy flag is still set at that time,
so the same logic is used to defer softirq scheduling for
irq_suspend_timeout.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
---
 v3:
   - Removed reference to non-existent sysfs parameter from commit
     message. No functional/code changes.

 net/core/dev.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 4d910872963f..51d88f758e2e 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6239,7 +6239,12 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
 			timeout = napi_get_gro_flush_timeout(n);
 		n->defer_hard_irqs_count = napi_get_defer_hard_irqs(n);
 	}
-	if (n->defer_hard_irqs_count > 0) {
+	if (napi_prefer_busy_poll(n)) {
+		timeout = napi_get_irq_suspend_timeout(n);
+		if (timeout)
+			ret = false;
+	}
+	if (ret && n->defer_hard_irqs_count > 0) {
 		n->defer_hard_irqs_count--;
 		timeout = napi_get_gro_flush_timeout(n);
 		if (timeout)
@@ -6375,9 +6380,13 @@ static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock,
 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 
 	if (flags & NAPI_F_PREFER_BUSY_POLL) {
-		napi->defer_hard_irqs_count = napi_get_defer_hard_irqs(napi);
-		timeout = napi_get_gro_flush_timeout(napi);
-		if (napi->defer_hard_irqs_count && timeout) {
+		timeout = napi_get_irq_suspend_timeout(napi);
+		if (!timeout) {
+			napi->defer_hard_irqs_count = napi_get_defer_hard_irqs(napi);
+			if (napi->defer_hard_irqs_count)
+				timeout = napi_get_gro_flush_timeout(napi);
+		}
+		if (timeout) {
 			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
 			skip_schedule = true;
 		}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next v5 3/7] net: Add control functions for irq suspension
  2024-11-03  5:24 [PATCH net-next v5 0/7] Suspend IRQs during application busy periods Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 1/7] net: Add napi_struct parameter irq_suspend_timeout Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 2/7] net: Suspend softirq when prefer_busy_poll is set Joe Damato
@ 2024-11-03  5:24 ` Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 4/7] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Joe Damato @ 2024-11-03  5:24 UTC (permalink / raw)
  To: netdev
  Cc: hdanton, bagasdotme, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, David S. Miller, Simon Horman, David Ahern,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Alexander Lobakin,
	open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

The napi_suspend_irqs routine bootstraps irq suspension by elongating
the defer timeout to irq_suspend_timeout.

The napi_resume_irqs routine effectively cancels irq suspension by
forcing the napi to be scheduled immediately.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
---
 v1 -> v2:
   - Added a comment to napi_resume_irqs.

 include/net/busy_poll.h |  3 +++
 net/core/dev.c          | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
index f03040baaefd..c858270141bc 100644
--- a/include/net/busy_poll.h
+++ b/include/net/busy_poll.h
@@ -52,6 +52,9 @@ void napi_busy_loop_rcu(unsigned int napi_id,
 			bool (*loop_end)(void *, unsigned long),
 			void *loop_end_arg, bool prefer_busy_poll, u16 budget);
 
+void napi_suspend_irqs(unsigned int napi_id);
+void napi_resume_irqs(unsigned int napi_id);
+
 #else /* CONFIG_NET_RX_BUSY_POLL */
 static inline unsigned long net_busy_loop_on(void)
 {
diff --git a/net/core/dev.c b/net/core/dev.c
index 51d88f758e2e..9d903ce0c2b0 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6516,6 +6516,45 @@ void napi_busy_loop(unsigned int napi_id,
 }
 EXPORT_SYMBOL(napi_busy_loop);
 
+void napi_suspend_irqs(unsigned int napi_id)
+{
+	struct napi_struct *napi;
+
+	rcu_read_lock();
+	napi = napi_by_id(napi_id);
+	if (napi) {
+		unsigned long timeout = napi_get_irq_suspend_timeout(napi);
+
+		if (timeout)
+			hrtimer_start(&napi->timer, ns_to_ktime(timeout),
+				      HRTIMER_MODE_REL_PINNED);
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(napi_suspend_irqs);
+
+void napi_resume_irqs(unsigned int napi_id)
+{
+	struct napi_struct *napi;
+
+	rcu_read_lock();
+	napi = napi_by_id(napi_id);
+	if (napi) {
+		/* If irq_suspend_timeout is set to 0 between the call to
+		 * napi_suspend_irqs and now, the original value still
+		 * determines the safety timeout as intended and napi_watchdog
+		 * will resume irq processing.
+		 */
+		if (napi_get_irq_suspend_timeout(napi)) {
+			local_bh_disable();
+			napi_schedule(napi);
+			local_bh_enable();
+		}
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(napi_resume_irqs);
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 static void __napi_hash_add_with_id(struct napi_struct *napi,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next v5 4/7] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set
  2024-11-03  5:24 [PATCH net-next v5 0/7] Suspend IRQs during application busy periods Joe Damato
                   ` (2 preceding siblings ...)
  2024-11-03  5:24 ` [PATCH net-next v5 3/7] net: Add control functions for irq suspension Joe Damato
@ 2024-11-03  5:24 ` Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 5/7] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Joe Damato @ 2024-11-03  5:24 UTC (permalink / raw)
  To: netdev
  Cc: hdanton, bagasdotme, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, Alexander Viro, Christian Brauner, Jan Kara,
	open list:FILESYSTEMS (VFS and infrastructure), open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

Setting prefer_busy_poll now leads to an effectively nonblocking
iteration through napi_busy_loop, even when busy_poll_usecs is 0.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
---
 v1 -> v2:
   - Rebased to apply now that commit b9ca079dd6b0 ("eventpoll: Annotate
     data-race of busy_poll_usecs") has been picked up from VFS.

 fs/eventpoll.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 1ae4542f0bd8..f9e0d9307dad 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -420,7 +420,9 @@ static bool busy_loop_ep_timeout(unsigned long start_time,
 
 static bool ep_busy_loop_on(struct eventpoll *ep)
 {
-	return !!READ_ONCE(ep->busy_poll_usecs) || net_busy_loop_on();
+	return !!READ_ONCE(ep->busy_poll_usecs) ||
+	       READ_ONCE(ep->prefer_busy_poll) ||
+	       net_busy_loop_on();
 }
 
 static bool ep_busy_loop_end(void *p, unsigned long start_time)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next v5 5/7] eventpoll: Control irq suspension for prefer_busy_poll
  2024-11-03  5:24 [PATCH net-next v5 0/7] Suspend IRQs during application busy periods Joe Damato
                   ` (3 preceding siblings ...)
  2024-11-03  5:24 ` [PATCH net-next v5 4/7] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
@ 2024-11-03  5:24 ` Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 6/7] selftests: net: Add busy_poll_test Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 7/7] docs: networking: Describe irq suspension Joe Damato
  6 siblings, 0 replies; 15+ messages in thread
From: Joe Damato @ 2024-11-03  5:24 UTC (permalink / raw)
  To: netdev
  Cc: hdanton, bagasdotme, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	Joe Damato, Alexander Viro, Christian Brauner, Jan Kara,
	open list:FILESYSTEMS (VFS and infrastructure), open list

From: Martin Karsten <mkarsten@uwaterloo.ca>

When events are reported to userland and prefer_busy_poll is set, irqs
are temporarily suspended using napi_suspend_irqs.

If no events are found and ep_poll would go to sleep, irq suspension is
cancelled using napi_resume_irqs.

Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
---
 v5:
   - Only call ep_suspend_napi_irqs when ep_send_events returns a
     positive value. IRQs are not suspended in error (e.g. EINTR)
     cases. This issue was pointed out by Hillf Danton.

 rfc -> v1:
   - move irq resume code from ep_free to a helper which either resumes
     IRQs or does nothing if !defined(CONFIG_NET_RX_BUSY_POLL).

 fs/eventpoll.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index f9e0d9307dad..83bcb559b89f 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -457,6 +457,8 @@ static bool ep_busy_loop(struct eventpoll *ep, int nonblock)
 		 * it back in when we have moved a socket with a valid NAPI
 		 * ID onto the ready list.
 		 */
+		if (prefer_busy_poll)
+			napi_resume_irqs(napi_id);
 		ep->napi_id = 0;
 		return false;
 	}
@@ -540,6 +542,22 @@ static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
 	}
 }
 
+static void ep_suspend_napi_irqs(struct eventpoll *ep)
+{
+	unsigned int napi_id = READ_ONCE(ep->napi_id);
+
+	if (napi_id >= MIN_NAPI_ID && READ_ONCE(ep->prefer_busy_poll))
+		napi_suspend_irqs(napi_id);
+}
+
+static void ep_resume_napi_irqs(struct eventpoll *ep)
+{
+	unsigned int napi_id = READ_ONCE(ep->napi_id);
+
+	if (napi_id >= MIN_NAPI_ID && READ_ONCE(ep->prefer_busy_poll))
+		napi_resume_irqs(napi_id);
+}
+
 #else
 
 static inline bool ep_busy_loop(struct eventpoll *ep, int nonblock)
@@ -557,6 +575,14 @@ static long ep_eventpoll_bp_ioctl(struct file *file, unsigned int cmd,
 	return -EOPNOTSUPP;
 }
 
+static void ep_suspend_napi_irqs(struct eventpoll *ep)
+{
+}
+
+static void ep_resume_napi_irqs(struct eventpoll *ep)
+{
+}
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 /*
@@ -788,6 +814,7 @@ static bool ep_refcount_dec_and_test(struct eventpoll *ep)
 
 static void ep_free(struct eventpoll *ep)
 {
+	ep_resume_napi_irqs(ep);
 	mutex_destroy(&ep->mtx);
 	free_uid(ep->user);
 	wakeup_source_unregister(ep->ws);
@@ -2005,8 +2032,11 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 			 * trying again in search of more luck.
 			 */
 			res = ep_send_events(ep, events, maxevents);
-			if (res)
+			if (res) {
+				if (res > 0)
+					ep_suspend_napi_irqs(ep);
 				return res;
+			}
 		}
 
 		if (timed_out)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH net-next v5 6/7] selftests: net: Add busy_poll_test
  2024-11-03  5:24 [PATCH net-next v5 0/7] Suspend IRQs during application busy periods Joe Damato
                   ` (4 preceding siblings ...)
  2024-11-03  5:24 ` [PATCH net-next v5 5/7] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
@ 2024-11-03  5:24 ` Joe Damato
  2024-11-03  5:24 ` [PATCH net-next v5 7/7] docs: networking: Describe irq suspension Joe Damato
  6 siblings, 0 replies; 15+ messages in thread
From: Joe Damato @ 2024-11-03  5:24 UTC (permalink / raw)
  To: netdev
  Cc: hdanton, bagasdotme, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Joe Damato,
	Martin Karsten, David S. Miller, Simon Horman, Shuah Khan,
	open list, open list:KERNEL SELFTEST FRAMEWORK

Add an epoll busy poll test using netdevsim.

This test consists of:
  - busy_poller (via busy_poller.c)
  - busy_poll_test.sh which loads netdevsim, sets up network namespaces,
    and runs busy_poller to receive data and socat to send data.

The selftest tests two different scenarios:
  - busy poll (the pre-existing version in the kernel)
  - busy poll with suspend enabled (what this series adds)

The transmitted data is a 1MiB temporary file generated from /dev/urandom,
and the test is considered passing if the md5sum of the input file to
socat matches the md5sum of the output file from busy_poller.

netdevsim was chosen instead of veth due to netdevsim's support for
netdev-genl.

For now, this test uses the functionality that netdevsim provides. In the
future, perhaps netdevsim can be extended to emulate device IRQs to more
thoroughly test all pre-existing kernel options (like defer_hard_irqs)
and suspend.

Signed-off-by: Joe Damato <jdamato@fastly.com>
Co-developed-by: Martin Karsten <mkarsten@uwaterloo.ca>
Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
 v5:
   - Updated commit message to replace netcat with socat and fixed
     misspelling of netdevsim. No functional/code changes.

 v4:
   - Updated busy_poll_test.sh:
     - use socat instead of nc
     - drop cli.py usage from the script
     - removed check_ynl
   - Updated busy_poller.c:
     - use netlink to configure napi parameters

 v3:
   - New in v3

 tools/testing/selftests/net/.gitignore        |   1 +
 tools/testing/selftests/net/Makefile          |   3 +-
 tools/testing/selftests/net/busy_poll_test.sh | 164 +++++++++
 tools/testing/selftests/net/busy_poller.c     | 328 ++++++++++++++++++
 4 files changed, 495 insertions(+), 1 deletion(-)
 create mode 100755 tools/testing/selftests/net/busy_poll_test.sh
 create mode 100644 tools/testing/selftests/net/busy_poller.c

diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore
index 217d8b7a7365..85b0c4a2179f 100644
--- a/tools/testing/selftests/net/.gitignore
+++ b/tools/testing/selftests/net/.gitignore
@@ -2,6 +2,7 @@
 bind_bhash
 bind_timewait
 bind_wildcard
+busy_poller
 cmsg_sender
 diag_uid
 epoll_busy_poll
diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
index 26a4883a65c9..3ccfe454db1a 100644
--- a/tools/testing/selftests/net/Makefile
+++ b/tools/testing/selftests/net/Makefile
@@ -96,9 +96,10 @@ TEST_PROGS += fdb_flush.sh
 TEST_PROGS += fq_band_pktlimit.sh
 TEST_PROGS += vlan_hw_filter.sh
 TEST_PROGS += bpf_offload.py
+TEST_PROGS += busy_poll_test.sh
 
 # YNL files, must be before "include ..lib.mk"
-YNL_GEN_FILES := ncdevmem
+YNL_GEN_FILES := ncdevmem busy_poller
 TEST_GEN_FILES += $(YNL_GEN_FILES)
 
 TEST_FILES := settings
diff --git a/tools/testing/selftests/net/busy_poll_test.sh b/tools/testing/selftests/net/busy_poll_test.sh
new file mode 100755
index 000000000000..ffc74bc62e5a
--- /dev/null
+++ b/tools/testing/selftests/net/busy_poll_test.sh
@@ -0,0 +1,164 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0-only
+source net_helper.sh
+
+NSIM_DEV_1_ID=$((256 + RANDOM % 256))
+NSIM_DEV_1_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_DEV_1_ID
+NSIM_DEV_2_ID=$((512 + RANDOM % 256))
+NSIM_DEV_2_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_DEV_2_ID
+
+NSIM_DEV_SYS_NEW=/sys/bus/netdevsim/new_device
+NSIM_DEV_SYS_DEL=/sys/bus/netdevsim/del_device
+NSIM_DEV_SYS_LINK=/sys/bus/netdevsim/link_device
+NSIM_DEV_SYS_UNLINK=/sys/bus/netdevsim/unlink_device
+
+setup_ns()
+{
+	set -e
+	ip netns add nssv
+	ip netns add nscl
+
+	NSIM_DEV_1_NAME=$(find $NSIM_DEV_1_SYS/net -maxdepth 1 -type d ! \
+		-path $NSIM_DEV_1_SYS/net -exec basename {} \;)
+	NSIM_DEV_2_NAME=$(find $NSIM_DEV_2_SYS/net -maxdepth 1 -type d ! \
+		-path $NSIM_DEV_2_SYS/net -exec basename {} \;)
+
+	# ensure the server has 1 queue
+	ethtool -L $NSIM_DEV_1_NAME combined 1 2>/dev/null
+
+	ip link set $NSIM_DEV_1_NAME netns nssv
+	ip link set $NSIM_DEV_2_NAME netns nscl
+
+	ip netns exec nssv ip addr add '192.168.1.1/24' dev $NSIM_DEV_1_NAME
+	ip netns exec nscl ip addr add '192.168.1.2/24' dev $NSIM_DEV_2_NAME
+
+	ip netns exec nssv ip link set dev $NSIM_DEV_1_NAME up
+	ip netns exec nscl ip link set dev $NSIM_DEV_2_NAME up
+
+	set +e
+}
+
+cleanup_ns()
+{
+	ip netns del nscl
+	ip netns del nssv
+}
+
+test_busypoll()
+{
+	tmp_file=$(mktemp)
+	out_file=$(mktemp)
+
+	# fill a test file with random data
+	dd if=/dev/urandom of=${tmp_file} bs=1M count=1 2> /dev/null
+
+	timeout -k 1s 30s ip netns exec nssv ./busy_poller -p48675 -b192.168.1.1 -m8 -u0 -P1 -g16 -i${NSIM_DEV_1_IFIDX} -o${out_file}&
+
+	wait_local_port_listen nssv 48675 tcp
+
+	ip netns exec nscl socat -u $tmp_file TCP:192.168.1.1:48675
+
+	wait
+
+	tmp_file_md5sum=$(md5sum $tmp_file | cut -f1 -d' ')
+	out_file_md5sum=$(md5sum $out_file | cut -f1 -d' ')
+
+	if [ "$tmp_file_md5sum" = "$out_file_md5sum" ]; then
+		res=0
+	else
+		echo "md5sum mismatch"
+		echo "input file md5sum: ${tmp_file_md5sum}";
+		echo "output file md5sum: ${out_file_md5sum}";
+		res=1
+	fi
+
+	rm $out_file $tmp_file
+
+	return $res
+}
+
+test_busypoll_with_suspend()
+{
+	tmp_file=$(mktemp)
+	out_file=$(mktemp)
+
+	# fill a test file with random data
+	dd if=/dev/urandom of=${tmp_file} bs=1M count=1 2> /dev/null
+
+	timeout -k 1s 30s ip netns exec nssv ./busy_poller -p48675 -b192.168.1.1 -m8 -u0 -P1 -g16 -d100 -r50000 -s20000000 -i${NSIM_DEV_1_IFIDX} -o${out_file}&
+
+	wait_local_port_listen nssv 48675 tcp
+
+	ip netns exec nscl socat -u $tmp_file TCP:192.168.1.1:48675
+
+	wait
+
+	tmp_file_md5sum=$(md5sum $tmp_file | cut -f1 -d' ')
+	out_file_md5sum=$(md5sum $out_file | cut -f1 -d' ')
+
+	if [ "$tmp_file_md5sum" = "$out_file_md5sum" ]; then
+		res=0
+	else
+		echo "md5sum mismatch"
+		echo "input file md5sum: ${tmp_file_md5sum}";
+		echo "output file md5sum: ${out_file_md5sum}";
+		res=1
+	fi
+
+	rm $out_file $tmp_file
+
+	return $res
+}
+
+###
+### Code start
+###
+
+modprobe netdevsim
+
+# linking
+
+echo $NSIM_DEV_1_ID > $NSIM_DEV_SYS_NEW
+echo $NSIM_DEV_2_ID > $NSIM_DEV_SYS_NEW
+udevadm settle
+
+setup_ns
+
+NSIM_DEV_1_FD=$((256 + RANDOM % 256))
+exec {NSIM_DEV_1_FD}</var/run/netns/nssv
+NSIM_DEV_1_IFIDX=$(ip netns exec nssv cat /sys/class/net/$NSIM_DEV_1_NAME/ifindex)
+
+NSIM_DEV_2_FD=$((256 + RANDOM % 256))
+exec {NSIM_DEV_2_FD}</var/run/netns/nscl
+NSIM_DEV_2_IFIDX=$(ip netns exec nscl cat /sys/class/net/$NSIM_DEV_2_NAME/ifindex)
+
+echo "$NSIM_DEV_1_FD:$NSIM_DEV_1_IFIDX $NSIM_DEV_2_FD:$NSIM_DEV_2_IFIDX" > $NSIM_DEV_SYS_LINK
+if [ $? -ne 0 ]; then
+	echo "linking netdevsim1 with netdevsim2 should succeed"
+	cleanup_ns
+	exit 1
+fi
+
+test_busypoll
+if [ $? -ne 0 ]; then
+	echo "test_busypoll failed"
+	cleanup_ns
+	exit 1
+fi
+
+test_busypoll_with_suspend
+if [ $? -ne 0 ]; then
+	echo "test_busypoll_with_suspend failed"
+	cleanup_ns
+	exit 1
+fi
+
+echo "$NSIM_DEV_1_FD:$NSIM_DEV_1_IFIDX" > $NSIM_DEV_SYS_UNLINK
+
+echo $NSIM_DEV_2_ID > $NSIM_DEV_SYS_DEL
+
+cleanup_ns
+
+modprobe -r netdevsim
+
+exit 0
diff --git a/tools/testing/selftests/net/busy_poller.c b/tools/testing/selftests/net/busy_poller.c
new file mode 100644
index 000000000000..8d8aa9e5939a
--- /dev/null
+++ b/tools/testing/selftests/net/busy_poller.c
@@ -0,0 +1,328 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <assert.h>
+#include <errno.h>
+#include <error.h>
+#include <fcntl.h>
+#include <inttypes.h>
+#include <limits.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <arpa/inet.h>
+#include <netinet/in.h>
+
+#include <sys/ioctl.h>
+#include <sys/epoll.h>
+#include <sys/socket.h>
+#include <sys/types.h>
+
+#include <linux/netlink.h>
+#include <linux/genetlink.h>
+#include "netdev-user.h"
+#include <ynl.h>
+
+/* if the headers haven't been updated, we need to define some things */
+#if !defined(EPOLL_IOC_TYPE)
+struct epoll_params {
+	uint32_t busy_poll_usecs;
+	uint16_t busy_poll_budget;
+	uint8_t prefer_busy_poll;
+
+	/* pad the struct to a multiple of 64bits */
+	uint8_t __pad;
+};
+
+#define EPOLL_IOC_TYPE 0x8A
+#define EPIOCSPARAMS _IOW(EPOLL_IOC_TYPE, 0x01, struct epoll_params)
+#define EPIOCGPARAMS _IOR(EPOLL_IOC_TYPE, 0x02, struct epoll_params)
+#endif
+
+static uint32_t cfg_port = 8000;
+static struct in_addr cfg_bind_addr = { .s_addr = INADDR_ANY };
+static char *cfg_outfile;
+static int cfg_max_events = 8;
+static int cfg_ifindex;
+
+/* busy poll params */
+static uint32_t cfg_busy_poll_usecs;
+static uint16_t cfg_busy_poll_budget;
+static uint8_t cfg_prefer_busy_poll;
+
+/* IRQ params */
+static uint32_t cfg_defer_hard_irqs;
+static uint64_t cfg_gro_flush_timeout;
+static uint64_t cfg_irq_suspend_timeout;
+
+static void usage(const char *filepath)
+{
+	error(1, 0,
+	      "Usage: %s -p<port> -b<addr> -m<max_events> -u<busy_poll_usecs> -P<prefer_busy_poll> -g<busy_poll_budget> -o<outfile> -d<defer_hard_irqs> -r<gro_flush_timeout> -s<irq_suspend_timeout> -i<ifindex>",
+	      filepath);
+}
+
+static void parse_opts(int argc, char **argv)
+{
+	int ret;
+	int c;
+
+	if (argc <= 1)
+		usage(argv[0]);
+
+	while ((c = getopt(argc, argv, "p:m:b:u:P:g:o:d:r:s:i:")) != -1) {
+		switch (c) {
+		case 'u':
+			cfg_busy_poll_usecs = strtoul(optarg, NULL, 0);
+			if (cfg_busy_poll_usecs == ULONG_MAX ||
+			    cfg_busy_poll_usecs > UINT32_MAX)
+				error(1, ERANGE, "busy_poll_usecs too large");
+			break;
+		case 'P':
+			cfg_prefer_busy_poll = strtoul(optarg, NULL, 0);
+			if (cfg_prefer_busy_poll == ULONG_MAX ||
+			    cfg_prefer_busy_poll > 1)
+				error(1, ERANGE,
+				      "prefer busy poll should be 0 or 1");
+			break;
+		case 'g':
+			cfg_busy_poll_budget = strtoul(optarg, NULL, 0);
+			if (cfg_busy_poll_budget == ULONG_MAX ||
+			    cfg_busy_poll_budget > UINT16_MAX)
+				error(1, ERANGE,
+				      "busy poll budget must be [0, UINT16_MAX]");
+			break;
+		case 'p':
+			cfg_port = strtoul(optarg, NULL, 0);
+			if (cfg_port > UINT16_MAX)
+				error(1, ERANGE, "port must be <= 65535");
+			break;
+		case 'b':
+			ret = inet_aton(optarg, &cfg_bind_addr);
+			if (ret == 0)
+				error(1, errno,
+				      "bind address %s invalid", optarg);
+			break;
+		case 'o':
+			cfg_outfile = strdup(optarg);
+			if (!cfg_outfile)
+				error(1, 0, "outfile invalid");
+			break;
+		case 'm':
+			cfg_max_events = strtol(optarg, NULL, 0);
+
+			if (cfg_max_events == LONG_MIN ||
+			    cfg_max_events == LONG_MAX ||
+			    cfg_max_events <= 0)
+				error(1, ERANGE,
+				      "max events must be > 0 and < LONG_MAX");
+			break;
+		case 'd':
+			cfg_defer_hard_irqs = strtoul(optarg, NULL, 0);
+
+			if (cfg_defer_hard_irqs == ULONG_MAX ||
+			    cfg_defer_hard_irqs > INT32_MAX)
+				error(1, ERANGE,
+				      "defer_hard_irqs must be <= INT32_MAX");
+			break;
+		case 'r':
+			cfg_gro_flush_timeout = strtoull(optarg, NULL, 0);
+
+			if (cfg_gro_flush_timeout == ULLONG_MAX)
+				error(1, ERANGE,
+				      "gro_flush_timeout must be < ULLONG_MAX");
+			break;
+		case 's':
+			cfg_irq_suspend_timeout = strtoull(optarg, NULL, 0);
+
+			if (cfg_irq_suspend_timeout == ULLONG_MAX)
+				error(1, ERANGE,
+				      "irq_suspend_timeout must be < ULLONG_MAX");
+			break;
+		case 'i':
+			cfg_ifindex = strtoul(optarg, NULL, 0);
+			if (cfg_ifindex == ULONG_MAX)
+				error(1, ERANGE,
+				      "ifindex must be < ULONG_MAX");
+			break;
+		}
+	}
+
+	if (!cfg_ifindex)
+		usage(argv[0]);
+
+	if (optind != argc)
+		usage(argv[0]);
+}
+
+static void epoll_ctl_add(int epfd, int fd, uint32_t events)
+{
+	struct epoll_event ev;
+
+	ev.events = events;
+	ev.data.fd = fd;
+	if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) == -1)
+		error(1, errno, "epoll_ctl add fd: %d", fd);
+}
+
+static void setnonblock(int sockfd)
+{
+	int flags;
+
+	flags = fcntl(sockfd, F_GETFL, 0);
+
+	if (fcntl(sockfd, F_SETFL, flags | O_NONBLOCK) == -1)
+		error(1, errno, "unable to set socket to nonblocking mode");
+}
+
+static void write_chunk(int fd, char *buf, ssize_t buflen)
+{
+	ssize_t remaining = buflen;
+	char *buf_offset = buf;
+	ssize_t writelen = 0;
+	ssize_t write_result;
+
+	while (writelen < buflen) {
+		write_result = write(fd, buf_offset, remaining);
+		if (write_result == -1)
+			error(1, errno, "unable to write data to outfile");
+
+		writelen += write_result;
+		remaining -= write_result;
+		buf_offset += write_result;
+	}
+}
+
+static void setup_queue(void)
+{
+	struct netdev_napi_get_list *napi_list = NULL;
+	struct netdev_napi_get_req_dump *req = NULL;
+	struct netdev_napi_set_req *set_req = NULL;
+	struct ynl_sock *ys;
+	struct ynl_error yerr;
+	uint32_t napi_id;
+
+	ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+	if (!ys)
+		error(1, 0, "YNL: %s", yerr.msg);
+
+	req = netdev_napi_get_req_dump_alloc();
+	netdev_napi_get_req_dump_set_ifindex(req, cfg_ifindex);
+	napi_list = netdev_napi_get_dump(ys, req);
+
+	/* assume there is 1 NAPI configured and take the first */
+	if (napi_list->obj._present.id)
+		napi_id = napi_list->obj.id;
+	else
+		error(1, 0, "napi ID not present?");
+
+	set_req = netdev_napi_set_req_alloc();
+	netdev_napi_set_req_set_id(set_req, napi_id);
+	netdev_napi_set_req_set_defer_hard_irqs(set_req, cfg_defer_hard_irqs);
+	netdev_napi_set_req_set_gro_flush_timeout(set_req,
+						  cfg_gro_flush_timeout);
+	netdev_napi_set_req_set_irq_suspend_timeout(set_req,
+						    cfg_irq_suspend_timeout);
+
+	if (netdev_napi_set(ys, set_req))
+		error(1, 0, "can't set NAPI params: %s\n", yerr.msg);
+
+	netdev_napi_get_list_free(napi_list);
+	netdev_napi_get_req_dump_free(req);
+	netdev_napi_set_req_free(set_req);
+	ynl_sock_destroy(ys);
+}
+
+static void run_poller(void)
+{
+	struct epoll_event events[cfg_max_events];
+	struct epoll_params epoll_params = {0};
+	struct sockaddr_in server_addr;
+	int i, epfd, nfds;
+	ssize_t readlen;
+	int outfile_fd;
+	char buf[1024];
+	int sockfd;
+	int conn;
+	int val;
+
+	outfile_fd = open(cfg_outfile, O_WRONLY | O_CREAT, 0644);
+	if (outfile_fd == -1)
+		error(1, errno, "unable to open outfile: %s", cfg_outfile);
+
+	sockfd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+	if (sockfd == -1)
+		error(1, errno, "unable to create listen socket");
+
+	server_addr.sin_family = AF_INET;
+	server_addr.sin_port = htons(cfg_port);
+	server_addr.sin_addr = cfg_bind_addr;
+
+	epoll_params.busy_poll_usecs = cfg_busy_poll_usecs;
+	epoll_params.busy_poll_budget = cfg_busy_poll_budget;
+	epoll_params.prefer_busy_poll = cfg_prefer_busy_poll;
+	epoll_params.__pad = 0;
+
+	val = 1;
+	if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &val, sizeof(val)))
+		error(1, errno, "poller setsockopt reuseaddr");
+
+	setnonblock(sockfd);
+
+	if (bind(sockfd, (struct sockaddr *)&server_addr,
+		 sizeof(struct sockaddr_in)))
+		error(1, errno, "poller bind to port: %d", cfg_port);
+
+	if (listen(sockfd, 1))
+		error(1, errno, "poller listen");
+
+	epfd = epoll_create1(0);
+	if (ioctl(epfd, EPIOCSPARAMS, &epoll_params) == -1)
+		error(1, errno, "unable to set busy poll params");
+
+	epoll_ctl_add(epfd, sockfd, EPOLLIN | EPOLLOUT | EPOLLET);
+
+	for (;;) {
+		nfds = epoll_wait(epfd, events, cfg_max_events, -1);
+		for (i = 0; i < nfds; i++) {
+			if (events[i].data.fd == sockfd) {
+				conn = accept(sockfd, NULL, NULL);
+				if (conn == -1)
+					error(1, errno,
+					      "accepting incoming connection failed");
+
+				setnonblock(conn);
+				epoll_ctl_add(epfd, conn,
+					      EPOLLIN | EPOLLET | EPOLLRDHUP |
+					      EPOLLHUP);
+			} else if (events[i].events & EPOLLIN) {
+				for (;;) {
+					readlen = read(events[i].data.fd, buf,
+						       sizeof(buf));
+					if (readlen > 0)
+						write_chunk(outfile_fd, buf,
+							    readlen);
+					else
+						break;
+				}
+			} else {
+				/* spurious event ? */
+			}
+			if (events[i].events & (EPOLLRDHUP | EPOLLHUP)) {
+				epoll_ctl(epfd, EPOLL_CTL_DEL,
+					  events[i].data.fd, NULL);
+				close(events[i].data.fd);
+				close(outfile_fd);
+				return;
+			}
+		}
+	}
+}
+
+int main(int argc, char *argv[])
+{
+	parse_opts(argc, argv);
+	setup_queue();
+	run_poller();
+	return 0;
+}
-- 
2.25.1
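
For readers trying the selftest by hand: in busy_poller.c above, the flags
map to -u busy_poll_usecs, -P prefer_busy_poll, -g busy_poll_budget,
-d defer_hard_irqs, -r gro_flush_timeout and -s irq_suspend_timeout, so the
suspend case runs with busy_poll_usecs=0, prefer_busy_poll=1, a budget of 16,
defer_hard_irqs=100, a 50 usec gro_flush_timeout and a 20 msec
irq_suspend_timeout. The -d/-r/-s values are applied per-NAPI via netlink
(setup_queue()), while -u/-P/-g are applied to the epoll context via
EPIOCSPARAMS. The standalone sketch below is not part of the patch; it only
condenses that ioctl setup, reuses the same illustrative values, and copies
the fallback definitions from busy_poller.c for systems whose headers do not
yet provide struct epoll_params:

/* Minimal sketch (not the selftest itself): set epoll busy poll
 * parameters with EPIOCSPARAMS and read them back with EPIOCGPARAMS.
 * Values mirror the suspend test above and are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>
#include <err.h>

/* fallback copied from busy_poller.c for older userspace headers */
#ifndef EPOLL_IOC_TYPE
struct epoll_params {
	uint32_t busy_poll_usecs;
	uint16_t busy_poll_budget;
	uint8_t prefer_busy_poll;

	/* pad the struct to a multiple of 64bits */
	uint8_t __pad;
};

#define EPOLL_IOC_TYPE 0x8A
#define EPIOCSPARAMS _IOW(EPOLL_IOC_TYPE, 0x01, struct epoll_params)
#define EPIOCGPARAMS _IOR(EPOLL_IOC_TYPE, 0x02, struct epoll_params)
#endif

int main(void)
{
	struct epoll_params params, check;
	int epfd;

	epfd = epoll_create1(0);
	if (epfd == -1)
		err(1, "epoll_create1");

	memset(&params, 0, sizeof(params));
	params.busy_poll_usecs = 0;	/* as in the selftest (-u0) */
	params.busy_poll_budget = 16;	/* as in the selftest (-g16) */
	params.prefer_busy_poll = 1;	/* required for IRQ deferral/suspension (-P1) */

	if (ioctl(epfd, EPIOCSPARAMS, &params) == -1)
		err(1, "EPIOCSPARAMS");

	/* read the parameters back to confirm what the kernel applied */
	if (ioctl(epfd, EPIOCGPARAMS, &check) == -1)
		err(1, "EPIOCGPARAMS");

	printf("busy_poll_usecs=%u budget=%u prefer_busy_poll=%u\n",
	       check.busy_poll_usecs, check.busy_poll_budget,
	       check.prefer_busy_poll);

	return 0;
}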



* [PATCH net-next v5 7/7] docs: networking: Describe irq suspension
  2024-11-03  5:24 [PATCH net-next v5 0/7] Suspend IRQs during application busy periods Joe Damato
                   ` (5 preceding siblings ...)
  2024-11-03  5:24 ` [PATCH net-next v5 6/7] selftests: net: Add busy_poll_test Joe Damato
@ 2024-11-03  5:24 ` Joe Damato
  2024-11-04 10:52   ` Bagas Sanjaya
  6 siblings, 1 reply; 15+ messages in thread
From: Joe Damato @ 2024-11-03  5:24 UTC (permalink / raw)
  To: netdev
  Cc: hdanton, bagasdotme, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Joe Damato,
	Martin Karsten, David S. Miller, Simon Horman, Jonathan Corbet,
	open list:DOCUMENTATION, open list,
	open list:BPF [MISC]:Keyword:(?:b|_)bpf(?:b|_)

Describe irq suspension, the epoll ioctls, and the tradeoffs of using
different gro_flush_timeout values.

Signed-off-by: Joe Damato <jdamato@fastly.com>
Co-developed-by: Martin Karsten <mkarsten@uwaterloo.ca>
Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
---
 v5:
   - Fixed a minor typo in the epoll-based busy polling section
   - Removed short paragraph referring to experimental data as that data
     is not included in the documentation

 v4:
   - Updated documentation to further explain irq suspension
   - Dropped Stanislav's Acked-by tag because of the doc changes
   - Dropped Bagas' Reviewed-by tag because of the doc changes

 v1 -> v2:
   - Updated documentation to describe the per-NAPI configuration
     parameters.

 Documentation/networking/napi.rst | 172 +++++++++++++++++++++++++++++-
 1 file changed, 170 insertions(+), 2 deletions(-)

diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
index dfa5d549be9c..bbd58bcc430f 100644
--- a/Documentation/networking/napi.rst
+++ b/Documentation/networking/napi.rst
@@ -192,6 +192,33 @@ is reused to control the delay of the timer, while
 ``napi_defer_hard_irqs`` controls the number of consecutive empty polls
 before NAPI gives up and goes back to using hardware IRQs.
 
+The above parameters can also be set on a per-NAPI basis using netlink via
+netdev-genl. When used with netlink and configured on a per-NAPI basis, the
+parameters mentioned above use hyphens instead of underscores:
+``gro-flush-timeout`` and ``napi-defer-hard-irqs``.
+
+Per-NAPI configuration can be done programmatically in a user application
+or by using a script included in the kernel source tree:
+``tools/net/ynl/cli.py``.
+
+For example, using the script:
+
+.. code-block:: bash
+
+  $ kernel-source/tools/net/ynl/cli.py \
+            --spec Documentation/netlink/specs/netdev.yaml \
+            --do napi-set \
+            --json='{"id": 345,
+                     "defer-hard-irqs": 111,
+                     "gro-flush-timeout": 11111}'
+
+Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
+via netdev-genl. There is no global sysfs parameter for this value.
+
+``irq-suspend-timeout`` is used to determine how long an application can
+completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
+which can be set on a per-epoll context basis with the ``EPIOCSPARAMS`` ioctl.
+
 .. _poll:
 
 Busy polling
@@ -207,6 +234,46 @@ selected sockets or using the global ``net.core.busy_poll`` and
 ``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
 also exists.
 
+epoll-based busy polling
+------------------------
+
+It is possible to trigger packet processing directly from calls to
+``epoll_wait``. In order to use this feature, a user application must ensure
+all file descriptors which are added to an epoll context have the same NAPI ID.
+
+If the application uses a dedicated acceptor thread, the application can obtain
+the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
+distribute that file descriptor to a worker thread. The worker thread would add
+the file descriptor to its epoll context. This would ensure each worker thread
+has an epoll context with FDs that have the same NAPI ID.
+
+Alternatively, if the application uses SO_REUSEPORT, a bpf or ebpf program can
+be inserted to distribute incoming connections to threads such that each thread
+is only given incoming connections with the same NAPI ID. Care must be taken to
+carefully handle cases where a system may have multiple NICs.
+
+In order to enable busy polling, there are two choices:
+
+1. ``/proc/sys/net/core/busy_poll`` can be set with a time in useconds to busy
+   loop waiting for events. This is a system-wide setting and will cause all
+   epoll-based applications to busy poll when they call epoll_wait. This may
+   not be desirable as many applications may not have the need to busy poll.
+
+2. Applications using recent kernels can issue an ioctl on the epoll context
+   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``) ``struct
+   epoll_params``, which user programs can define as follows:
+
+.. code-block:: c
+
+  struct epoll_params {
+      uint32_t busy_poll_usecs;
+      uint16_t busy_poll_budget;
+      uint8_t prefer_busy_poll;
+
+      /* pad the struct to a multiple of 64bits */
+      uint8_t __pad;
+  };
+
 IRQ mitigation
 ---------------
 
@@ -222,12 +289,113 @@ Such applications can pledge to the kernel that they will perform a busy
 polling operation periodically, and the driver should keep the device IRQs
 permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
 socket option. To avoid system misbehavior the pledge is revoked
-if ``gro_flush_timeout`` passes without any busy poll call.
+if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
+busy polling applications, the ``prefer_busy_poll`` field of ``struct
+epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
+enable this mode. See the above section for more details.
 
 The NAPI budget for busy polling is lower than the default (which makes
 sense given the low latency intention of normal busy polling). This is
 not the case with IRQ mitigation, however, so the budget can be adjusted
-with the ``SO_BUSY_POLL_BUDGET`` socket option.
+with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
+applications, the ``busy_poll_budget`` field can be adjusted to the desired value
+in ``struct epoll_params`` and set on a specific epoll context using the ``EPIOCSPARAMS``
+ioctl. See the above section for more details.
+
+It is important to note that choosing a large value for ``gro_flush_timeout``
+will defer IRQs to allow for better batch processing, but will induce latency
+when the system is not fully loaded. Choosing a small value for
+``gro_flush_timeout`` can cause interference of the user application which is
+attempting to busy poll by device IRQs and softirq processing. This value
+should be chosen carefully with these tradeoffs in mind. epoll-based busy
+polling applications may be able to mitigate how much user processing happens
+by choosing an appropriate value for ``maxevents``.
+
+Users may want to consider an alternate approach, IRQ suspension, to help deal
+with these tradeoffs.
+
+IRQ suspension
+--------------
+
+IRQ suspension is a mechanism wherein device IRQs are masked while epoll
+triggers NAPI packet processing.
+
+While application calls to epoll_wait successfully retrieve events, the kernel will
+defer the IRQ suspension timer. If the kernel does not retrieve any events
+while busy polling (for example, because network traffic levels subsided), IRQ
+suspension is disabled and the IRQ mitigation strategies described above are
+engaged.
+
+This allows users to balance CPU consumption with network processing
+efficiency.
+
+To use this mechanism:
+
+  1. The per-NAPI config parameter ``irq-suspend-timeout`` should be set to the
+     maximum time (in nanoseconds) the application can have its IRQs
+     suspended. This is done using netlink, as described above. This timeout
+     serves as a safety mechanism to restart IRQ driver interrupt processing if
+     the application has stalled. This value should be chosen so that it covers
+     the amount of time the user application needs to process data from its
+     call to epoll_wait, noting that applications can control how much data
+     they retrieve by setting ``maxevents`` when calling epoll_wait.
+
+  2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
+     and ``napi_defer_hard_irqs`` can be set to low values. They will be used
+     to defer IRQs after busy poll has found no data.
+
+  3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
+     the ``EPIOCSPARAMS`` ioctl as described above.
+
+  4. The application uses epoll as described above to trigger NAPI packet
+     processing.
+
+As mentioned above, as long as subsequent calls to epoll_wait return events to
+userland, the ``irq-suspend-timeout`` is deferred and IRQs are disabled. This
+allows the application to process data without interference.
+
+Once a call to epoll_wait results in no events being found, IRQ suspension is
+automatically disabled and the ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` mitigation mechanisms take over.
+
+It is expected that ``irq-suspend-timeout`` will be set to a value much larger
+than ``gro_flush_timeout`` as ``irq-suspend-timeout`` should suspend IRQs for
+the duration of one userland processing cycle.
+
+While it is not strictly necessary to use ``napi_defer_hard_irqs`` and
+``gro_flush_timeout`` to use IRQ suspension, their use is strongly
+recommended.
+
+IRQ suspension causes the system to alternate between polling mode and
+irq-driven packet delivery. During busy periods, ``irq-suspend-timeout``
+overrides ``gro_flush_timeout`` and keeps the system busy polling, but when
+epoll finds no events, the setting of ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` determine the next step.
+
+There are essentially three possible loops for network processing and
+packet delivery:
+
+1) hardirq -> softirq   -> napi poll; basic interrupt delivery
+
+2)   timer -> softirq   -> napi poll; deferred irq processing
+
+3)   epoll -> busy-poll -> napi poll; busy looping
+
+Loop 2) can take control from Loop 1), if ``gro_flush_timeout`` and
+``napi_defer_hard_irqs`` are set.
+
+If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are set, Loops 2)
+and 3) "wrestle" with each other for control.
+
+During busy periods, ``irq-suspend-timeout`` is used as timer in Loop 2),
+which essentially tilts network processing in favour of Loop 3).
+
+If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are not set, Loop 3)
+cannot take control from Loop 1).
+
+Therefore, setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is
+the recommended usage, because otherwise setting ``irq-suspend-timeout``
+might not have any discernible effect.
 
 .. _threaded:
 
-- 
2.25.1
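
The epoll-based busy polling section above relies on each worker's epoll
context holding only file descriptors with the same NAPI ID, and describes a
dedicated acceptor thread that reads SO_INCOMING_NAPI_ID on each new
connection before handing it to a worker. A minimal sketch of that pattern
follows; it is not part of this series, the listen port is illustrative, and
pick_worker() is a stand-in for whatever routing the application uses (here
it only prints the mapping):

/* Sketch of the acceptor pattern described above: read SO_INCOMING_NAPI_ID
 * on each accepted connection and hand the fd to the worker that owns that
 * NAPI ID, so each worker's epoll set stays on a single NAPI instance.
 */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <err.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56	/* value from asm-generic/socket.h */
#endif

/* Placeholder: a real application would queue the fd to the worker thread
 * that owns this NAPI ID; here the mapping is only printed.
 */
static void pick_worker(unsigned int napi_id, int fd)
{
	printf("fd %d -> NAPI ID %u\n", fd, napi_id);
}

int main(void)
{
	struct sockaddr_in addr;
	int one = 1;
	int lfd;

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	if (lfd == -1)
		err(1, "socket");

	setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(8000);		/* illustrative port */
	addr.sin_addr.s_addr = htonl(INADDR_ANY);

	if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)))
		err(1, "bind");
	if (listen(lfd, 16))
		err(1, "listen");

	for (;;) {
		unsigned int napi_id = 0;
		socklen_t len = sizeof(napi_id);
		int cfd = accept(lfd, NULL, NULL);

		if (cfd == -1)
			err(1, "accept");

		/* NAPI ID stays 0 until the socket has received packets on a
		 * NAPI-enabled device (e.g. it remains 0 over loopback).
		 */
		if (getsockopt(cfd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
			       &napi_id, &len))
			perror("SO_INCOMING_NAPI_ID");

		pick_worker(napi_id, cfd);	/* worker adds cfd to its epoll set */
	}
}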



* Re: [PATCH net-next v5 7/7] docs: networking: Describe irq suspension
  2024-11-03  5:24 ` [PATCH net-next v5 7/7] docs: networking: Describe irq suspension Joe Damato
@ 2024-11-04 10:52   ` Bagas Sanjaya
  2024-11-04 18:24     ` Joe Damato
  0 siblings, 1 reply; 15+ messages in thread
From: Bagas Sanjaya @ 2024-11-04 10:52 UTC (permalink / raw)
  To: Joe Damato, netdev
  Cc: hdanton, pabeni, namangulati, edumazet, amritha.nambiar,
	sridhar.samudrala, sdf, peter, m2shafiei, bjorn, hch, willy,
	willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Simon Horman, Jonathan Corbet,
	Linux Documentation, Linux Kernel Mailing List, Linux BPF


On Sun, Nov 03, 2024 at 05:24:09AM +0000, Joe Damato wrote:
> +It is important to note that choosing a large value for ``gro_flush_timeout``
> +will defer IRQs to allow for better batch processing, but will induce latency
> +when the system is not fully loaded. Choosing a small value for
> +``gro_flush_timeout`` can cause interference of the user application which is
> +attempting to busy poll by device IRQs and softirq processing. This value
> +should be chosen carefully with these tradeoffs in mind. epoll-based busy
> +polling applications may be able to mitigate how much user processing happens
> +by choosing an appropriate value for ``maxevents``.
> +
> +Users may want to consider an alternate approach, IRQ suspension, to help deal
                                                                     to help dealing
> +with these tradeoffs.
> +
> <snipped>...
> +There are essentially three possible loops for network processing and
> +packet delivery:
> +
> +1) hardirq -> softirq   -> napi poll; basic interrupt delivery
> +
> +2)   timer -> softirq   -> napi poll; deferred irq processing
> +
> +3)   epoll -> busy-poll -> napi poll; busy looping

The list of loops is parsed inconsistently due to the tabs between the
enumerators and the list items. I had to reduce them to a single space
(along with fixing the number references to match the output):

---- >8 ----
diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
index bbd58bcc430fab..848cb19f0becc1 100644
--- a/Documentation/networking/napi.rst
+++ b/Documentation/networking/napi.rst
@@ -375,23 +375,21 @@ epoll finds no events, the setting of ``gro_flush_timeout`` and
 There are essentially three possible loops for network processing and
 packet delivery:
 
-1) hardirq -> softirq   -> napi poll; basic interrupt delivery
+1) hardirq -> softirq -> napi poll; basic interrupt delivery
+2) timer -> softirq -> napi poll; deferred irq processing
+3) epoll -> busy-poll -> napi poll; busy looping
 
-2)   timer -> softirq   -> napi poll; deferred irq processing
-
-3)   epoll -> busy-poll -> napi poll; busy looping
-
-Loop 2) can take control from Loop 1), if ``gro_flush_timeout`` and
+Loop 2 can take control from Loop 1, if ``gro_flush_timeout`` and
 ``napi_defer_hard_irqs`` are set.
 
-If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are set, Loops 2)
-and 3) "wrestle" with each other for control.
+If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are set, Loops 2
+and 3 "wrestle" with each other for control.
 
-During busy periods, ``irq-suspend-timeout`` is used as timer in Loop 2),
-which essentially tilts network processing in favour of Loop 3).
+During busy periods, ``irq-suspend-timeout`` is used as timer in Loop 2,
+which essentially tilts network processing in favour of Loop 3.
 
-If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are not set, Loop 3)
-cannot take control from Loop 1).
+If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are not set, Loop 3
+cannot take control from Loop 1.
 
 Therefore, setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is
 the recommended usage, because otherwise setting ``irq-suspend-timeout``

Thanks.

-- 
An old man doll... just what I always wanted! - Clara



* Re: [PATCH net-next v5 7/7] docs: networking: Describe irq suspension
  2024-11-04 10:52   ` Bagas Sanjaya
@ 2024-11-04 18:24     ` Joe Damato
  2024-11-04 18:43       ` Jonathan Corbet
  0 siblings, 1 reply; 15+ messages in thread
From: Joe Damato @ 2024-11-04 18:24 UTC (permalink / raw)
  To: Bagas Sanjaya
  Cc: netdev, hdanton, pabeni, namangulati, edumazet, amritha.nambiar,
	sridhar.samudrala, sdf, peter, m2shafiei, bjorn, hch, willy,
	willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Simon Horman, Jonathan Corbet,
	Linux Documentation, Linux Kernel Mailing List, Linux BPF

On Mon, Nov 04, 2024 at 05:52:52PM +0700, Bagas Sanjaya wrote:
> On Sun, Nov 03, 2024 at 05:24:09AM +0000, Joe Damato wrote:
> > +It is important to note that choosing a large value for ``gro_flush_timeout``
> > +will defer IRQs to allow for better batch processing, but will induce latency
> > +when the system is not fully loaded. Choosing a small value for
> > +``gro_flush_timeout`` can cause interference of the user application which is
> > +attempting to busy poll by device IRQs and softirq processing. This value
> > +should be chosen carefully with these tradeoffs in mind. epoll-based busy
> > +polling applications may be able to mitigate how much user processing happens
> > +by choosing an appropriate value for ``maxevents``.
> > +
> > +Users may want to consider an alternate approach, IRQ suspension, to help deal
>                                                                      to help dealing
> > +with these tradeoffs.
> > +

Thanks for the careful review. I read this sentence a few times and
perhaps my English grammar isn't great, but I think it should be
one of:

Users may want to consider an alternate approach, IRQ suspension, to
help deal with these tradeoffs.  (the original)

or

Users may want to consider an alternate approach, IRQ suspension,
which can help to deal with these tradeoffs.

or

Users may want to consider an alternate approach, IRQ suspension,
which can help when dealing with these tradeoffs.

I am thinking of leaving the original unless you have a strong
preference? My apologies if I've gotten the grammar wrong here :)

Please let me know.

> > <snipped>...
> > +There are essentially three possible loops for network processing and
> > +packet delivery:
> > +
> > +1) hardirq -> softirq   -> napi poll; basic interrupt delivery
> > +
> > +2)   timer -> softirq   -> napi poll; deferred irq processing
> > +
> > +3)   epoll -> busy-poll -> napi poll; busy looping
> 
> The list of loops is parsed inconsistently due to the tabs between the
> enumerators and the list items. I had to reduce them to a single space
> (along with fixing the number references to match the output):

Thank you for doing that. I'll take the suggested patch below and
apply it for our v6.

> ---- >8 ----
> diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
> index bbd58bcc430fab..848cb19f0becc1 100644
> --- a/Documentation/networking/napi.rst
> +++ b/Documentation/networking/napi.rst
> @@ -375,23 +375,21 @@ epoll finds no events, the setting of ``gro_flush_timeout`` and
>  There are essentially three possible loops for network processing and
>  packet delivery:
>  
> -1) hardirq -> softirq   -> napi poll; basic interrupt delivery
> +1) hardirq -> softirq -> napi poll; basic interrupt delivery
> +2) timer -> softirq -> napi poll; deferred irq processing
> +3) epoll -> busy-poll -> napi poll; busy looping
>  
> -2)   timer -> softirq   -> napi poll; deferred irq processing
> -
> -3)   epoll -> busy-poll -> napi poll; busy looping
> -
> -Loop 2) can take control from Loop 1), if ``gro_flush_timeout`` and
> +Loop 2 can take control from Loop 1, if ``gro_flush_timeout`` and
>  ``napi_defer_hard_irqs`` are set.
>  
> -If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are set, Loops 2)
> -and 3) "wrestle" with each other for control.
> +If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are set, Loops 2
> +and 3 "wrestle" with each other for control.
>  
> -During busy periods, ``irq-suspend-timeout`` is used as timer in Loop 2),
> -which essentially tilts network processing in favour of Loop 3).
> +During busy periods, ``irq-suspend-timeout`` is used as timer in Loop 2,
> +which essentially tilts network processing in favour of Loop 3.
>  
> -If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are not set, Loop 3)
> -cannot take control from Loop 1).
> +If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are not set, Loop 3
> +cannot take control from Loop 1.
>  
>  Therefore, setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is
>  the recommended usage, because otherwise setting ``irq-suspend-timeout``
> 
> Thanks.
> 
> -- 
> An old man doll... just what I always wanted! - Clara




* Re: [PATCH net-next v5 7/7] docs: networking: Describe irq suspension
  2024-11-04 18:24     ` Joe Damato
@ 2024-11-04 18:43       ` Jonathan Corbet
  2024-11-04 18:51         ` Joe Damato
  2024-11-05  1:21         ` Bagas Sanjaya
  0 siblings, 2 replies; 15+ messages in thread
From: Jonathan Corbet @ 2024-11-04 18:43 UTC (permalink / raw)
  To: Joe Damato, Bagas Sanjaya
  Cc: netdev, hdanton, pabeni, namangulati, edumazet, amritha.nambiar,
	sridhar.samudrala, sdf, peter, m2shafiei, bjorn, hch, willy,
	willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Simon Horman, Linux Documentation,
	Linux Kernel Mailing List, Linux BPF

Joe Damato <jdamato@fastly.com> writes:

> On Mon, Nov 04, 2024 at 05:52:52PM +0700, Bagas Sanjaya wrote:
>> On Sun, Nov 03, 2024 at 05:24:09AM +0000, Joe Damato wrote:
>> > +It is important to note that choosing a large value for ``gro_flush_timeout``
>> > +will defer IRQs to allow for better batch processing, but will induce latency
>> > +when the system is not fully loaded. Choosing a small value for
>> > +``gro_flush_timeout`` can cause interference of the user application which is
>> > +attempting to busy poll by device IRQs and softirq processing. This value
>> > +should be chosen carefully with these tradeoffs in mind. epoll-based busy
>> > +polling applications may be able to mitigate how much user processing happens
>> > +by choosing an appropriate value for ``maxevents``.
>> > +
>> > +Users may want to consider an alternate approach, IRQ suspension, to help deal
>>                                                                      to help dealing
>> > +with these tradeoffs.
>> > +
>
> Thanks for the careful review. I read this sentence a few times and
> perhaps my English grammar isn't great, but I think it should be
> one of:
>
> Users may want to consider an alternate approach, IRQ suspension, to
> help deal with these tradeoffs.  (the original)

The original is just fine here.  Bagas, *please* do not bother our
contributors with this kind of stuff, it does not help.

jon


* Re: [PATCH net-next v5 7/7] docs: networking: Describe irq suspension
  2024-11-04 18:43       ` Jonathan Corbet
@ 2024-11-04 18:51         ` Joe Damato
  2024-11-04 19:21           ` Jonathan Corbet
  2024-11-05  1:21         ` Bagas Sanjaya
  1 sibling, 1 reply; 15+ messages in thread
From: Joe Damato @ 2024-11-04 18:51 UTC (permalink / raw)
  To: Jonathan Corbet
  Cc: Bagas Sanjaya, netdev, hdanton, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Simon Horman, Linux Documentation,
	Linux Kernel Mailing List, Linux BPF

On Mon, Nov 04, 2024 at 11:43:17AM -0700, Jonathan Corbet wrote:
> Joe Damato <jdamato@fastly.com> writes:
> 
> > On Mon, Nov 04, 2024 at 05:52:52PM +0700, Bagas Sanjaya wrote:
> >> On Sun, Nov 03, 2024 at 05:24:09AM +0000, Joe Damato wrote:
> >> > +It is important to note that choosing a large value for ``gro_flush_timeout``
> >> > +will defer IRQs to allow for better batch processing, but will induce latency
> >> > +when the system is not fully loaded. Choosing a small value for
> >> > +``gro_flush_timeout`` can cause interference of the user application which is
> >> > +attempting to busy poll by device IRQs and softirq processing. This value
> >> > +should be chosen carefully with these tradeoffs in mind. epoll-based busy
> >> > +polling applications may be able to mitigate how much user processing happens
> >> > +by choosing an appropriate value for ``maxevents``.
> >> > +
> >> > +Users may want to consider an alternate approach, IRQ suspension, to help deal
> >>                                                                      to help dealing
> >> > +with these tradeoffs.
> >> > +
> >
> > Thanks for the careful review. I read this sentence a few times and
> > perhaps my English grammar isn't great, but I think it should be
> > one of:
> >
> > Users may want to consider an alternate approach, IRQ suspension, to
> > help deal with these tradeoffs.  (the original)
> 
> The original is just fine here.  Bagas, *please* do not bother our
> contributors with this kind of stuff, it does not help.

Thanks for the feedback. I had been preparing a v6 based on Bagas'
comments below (which you snipped) about the documentation, etc.

Should I continue to prepare a v6? It would only contain
documentation changes in this patch; I can't really tell if a v6 is
necessary or not.


* Re: [PATCH net-next v5 7/7] docs: networking: Describe irq suspension
  2024-11-04 18:51         ` Joe Damato
@ 2024-11-04 19:21           ` Jonathan Corbet
  2024-11-04 20:00             ` Joe Damato
  0 siblings, 1 reply; 15+ messages in thread
From: Jonathan Corbet @ 2024-11-04 19:21 UTC (permalink / raw)
  To: Joe Damato
  Cc: Bagas Sanjaya, netdev, hdanton, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Simon Horman, Linux Documentation,
	Linux Kernel Mailing List, Linux BPF

Joe Damato <jdamato@fastly.com> writes:

> Thanks for the feedback. I had been preparing a v6 based on Bagas'
> comments below (which you snipped) about the documentation, etc.
>
> Should I continue to prepare a v6? It would only contain
> documentation changes in this patch; I can't really tell if a v6 is
> necessary or not.

Look at the generated docs and be sure that results are what you expect;
the enumerated-list change may be necessary.

Thanks,

jon


* Re: [PATCH net-next v5 7/7] docs: networking: Describe irq suspension
  2024-11-04 19:21           ` Jonathan Corbet
@ 2024-11-04 20:00             ` Joe Damato
  0 siblings, 0 replies; 15+ messages in thread
From: Joe Damato @ 2024-11-04 20:00 UTC (permalink / raw)
  To: Jonathan Corbet
  Cc: Bagas Sanjaya, netdev, hdanton, pabeni, namangulati, edumazet,
	amritha.nambiar, sridhar.samudrala, sdf, peter, m2shafiei, bjorn,
	hch, willy, willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Simon Horman, Linux Documentation,
	Linux Kernel Mailing List, Linux BPF

On Mon, Nov 04, 2024 at 12:21:02PM -0700, Jonathan Corbet wrote:
> Joe Damato <jdamato@fastly.com> writes:
> 
> > Thanks for the feedback. I had been preparing a v6 based on Bagas'
> > comments below where you snipped about in the documentation, etc.
> >
> > Should I continue to prepare a v6? It would only contain
> > documentation changes in this patch; I can't really tell if a v6 is
> > necessary or not.
> 
> Look at the generated docs and be sure that results are what you expect;
> the enumerated-list change may be necessary.

Right, of course. I just did that and taking Bagas' suggestion does
make the enumerated list look better, so I'll send a v6 with that
change shortly.

Thanks for the guidance.


* Re: [PATCH net-next v5 7/7] docs: networking: Describe irq suspension
  2024-11-04 18:43       ` Jonathan Corbet
  2024-11-04 18:51         ` Joe Damato
@ 2024-11-05  1:21         ` Bagas Sanjaya
  1 sibling, 0 replies; 15+ messages in thread
From: Bagas Sanjaya @ 2024-11-05  1:21 UTC (permalink / raw)
  To: Jonathan Corbet, Joe Damato
  Cc: netdev, hdanton, pabeni, namangulati, edumazet, amritha.nambiar,
	sridhar.samudrala, sdf, peter, m2shafiei, bjorn, hch, willy,
	willemdebruijn.kernel, skhawaja, kuba, Martin Karsten,
	David S. Miller, Simon Horman, Linux Documentation,
	Linux Kernel Mailing List, Linux BPF


On Mon, Nov 04, 2024 at 11:43:17AM -0700, Jonathan Corbet wrote:
> Joe Damato <jdamato@fastly.com> writes:
> 
> > On Mon, Nov 04, 2024 at 05:52:52PM +0700, Bagas Sanjaya wrote:
> >> On Sun, Nov 03, 2024 at 05:24:09AM +0000, Joe Damato wrote:
> >> > +It is important to note that choosing a large value for ``gro_flush_timeout``
> >> > +will defer IRQs to allow for better batch processing, but will induce latency
> >> > +when the system is not fully loaded. Choosing a small value for
> >> > +``gro_flush_timeout`` can cause interference of the user application which is
> >> > +attempting to busy poll by device IRQs and softirq processing. This value
> >> > +should be chosen carefully with these tradeoffs in mind. epoll-based busy
> >> > +polling applications may be able to mitigate how much user processing happens
> >> > +by choosing an appropriate value for ``maxevents``.
> >> > +
> >> > +Users may want to consider an alternate approach, IRQ suspension, to help deal
> >>                                                                      to help dealing
> >> > +with these tradeoffs.
> >> > +
> >
> > Thanks for the careful review. I read this sentence a few times and
> > perhaps my English grammar isn't great, but I think it should be
> > one of:
> >
> > Users may want to consider an alternate approach, IRQ suspension, to
> > help deal with these tradeoffs.  (the original)
> 
> The original is just fine here.  Bagas, *please* do not bother our
> contributors with this kind of stuff, it does not help.

I should have hinted at the fixes instead of pasting them...

Thanks.

-- 
An old man doll... just what I always wanted! - Clara



Thread overview: 15+ messages
2024-11-03  5:24 [PATCH net-next v5 0/7] Suspend IRQs during application busy periods Joe Damato
2024-11-03  5:24 ` [PATCH net-next v5 1/7] net: Add napi_struct parameter irq_suspend_timeout Joe Damato
2024-11-03  5:24 ` [PATCH net-next v5 2/7] net: Suspend softirq when prefer_busy_poll is set Joe Damato
2024-11-03  5:24 ` [PATCH net-next v5 3/7] net: Add control functions for irq suspension Joe Damato
2024-11-03  5:24 ` [PATCH net-next v5 4/7] eventpoll: Trigger napi_busy_loop, if prefer_busy_poll is set Joe Damato
2024-11-03  5:24 ` [PATCH net-next v5 5/7] eventpoll: Control irq suspension for prefer_busy_poll Joe Damato
2024-11-03  5:24 ` [PATCH net-next v5 6/7] selftests: net: Add busy_poll_test Joe Damato
2024-11-03  5:24 ` [PATCH net-next v5 7/7] docs: networking: Describe irq suspension Joe Damato
2024-11-04 10:52   ` Bagas Sanjaya
2024-11-04 18:24     ` Joe Damato
2024-11-04 18:43       ` Jonathan Corbet
2024-11-04 18:51         ` Joe Damato
2024-11-04 19:21           ` Jonathan Corbet
2024-11-04 20:00             ` Joe Damato
2024-11-05  1:21         ` Bagas Sanjaya
