public inbox for netdev@vger.kernel.org
* [PATCH stable v5.15,v6.1] Revert "wireguard: device: enable threaded NAPI"
@ 2026-02-16 21:31 Daniel Borkmann
  2026-02-16 21:33 ` Jason A. Donenfeld
  0 siblings, 1 reply; 5+ messages in thread
From: Daniel Borkmann @ 2026-02-16 21:31 UTC (permalink / raw)
  To: gregkh; +Cc: stable, netdev, Jason, kuba

This reverts the backport of upstream commit db9ae3b6b43c ("wireguard:
device: enable threaded NAPI").

We have had three independent production reports from users running
Cilium with WireGuard as the underlying encryption layer, where k8s Pod
E/W traffic to certain peer nodes fully stalled. The situation is as
follows:

  - Occurs very rarely but at random times under heavy networking load.
  - Once the issue triggers, the decryption side stops working
    completely for that WireGuard peer; other peers keep working fine.
    The stall also happens for newly initiated connections towards that
    particular WireGuard peer.
  - Only the decryption side is affected, never the encryption side.
  - Once it triggers, it never recovers and remains in this state;
    CPU/memory on that node look normal, with no leak, busy loop or
    crash.
  - bpftrace on the affected system shows that wg_prev_queue_enqueue
    fails, meaning MAX_QUEUED_PACKETS (1024 skbs!) for the peer's
    rx_queue has been reached (see the enqueue sketch after this list).
  - Also, bpftrace shows that wg_packet_rx_poll for that peer is never
    called again after reaching this state, while for other peers
    wg_packet_rx_poll does get called normally.
  - Commit db9ae3b ("wireguard: device: enable threaded NAPI")
    switched WireGuard to threaded NAPI by default. On the affected
    systems this default was left unchanged, and no CPU hotplugging
    occurred either (i.e. 5bd8de2 ("wireguard: queueing: always
    return valid online CPU in wg_cpumask_choose_online()") does
    not apply).
  - The issue has been observed with stable kernels of v5.15 as well
    as v6.1. v5.10 stable was reported to work fine, and there is no
    report for v6.6 stable either (see the somewhat related discussion
    in [0], though).
  - In the WireGuard driver the only material difference between v5.10
    stable and v5.15 stable is the switch to threaded NAPI by default.

    [0] https://lore.kernel.org/netdev/CA+wXwBTT74RErDGAnj98PqS=wvdh8eM1pi4q6tTdExtjnokKqA@mail.gmail.com/
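
For reference, the bound that the bpftrace probes show being hit is
enforced at enqueue time. Below is a condensed sketch of the relevant
logic from drivers/net/wireguard/queueing.{h,c} (simplified for
illustration, not a verbatim copy of the stable trees):

  #define MAX_QUEUED_PACKETS 1024

  bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
  {
          /* Refuse the skb once the peer's rx_queue already holds
           * MAX_QUEUED_PACKETS entries; the caller then drops it.
           * This returning false is what bpftrace observes on the
           * affected machines.
           */
          if (!atomic_add_unless(&queue->count, 1, MAX_QUEUED_PACKETS))
                  return false;
          __wg_prev_queue_enqueue(queue, skb);
          return true;
  }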

Breakdown of the problem:

  1) skbs arriving for decryption are enqueued to the peer->rx_queue in
     wg_packet_consume_data via wg_queue_enqueue_per_device_and_peer.
  2) The latter only moves the skb into the MPSC peer queue if the
     queue does not surpass MAX_QUEUED_PACKETS (1024), which is tracked
     via an atomic counter in wg_prev_queue_enqueue.
  3) If enqueueing was successful, the skb is also queued up in the
     device queue, the next online CPU is picked round-robin, and the
     decryption worker is scheduled.
  4) The wg_packet_decrypt_worker, once scheduled, picks these up
     from the queue, decrypts the packets, and once done calls into
     wg_queue_enqueue_per_peer_rx.
  5) The latter updates the state to PACKET_STATE_CRYPTED on success
     and calls napi_schedule on the peer->napi instance (see the
     completion sketch after this list).
  6) NAPI then polls via wg_packet_rx_poll. wg_prev_queue_peek looks at
     the peer->rx_queue: it dequeues via wg_prev_queue_dequeue if no
     queue->peeked skb was cached yet, or just returns the latter
     otherwise. (wg_prev_queue_drop_peeked later clears the cache.)
  7) From an ordering perspective, the peer->rx_queue holds skbs in
     arrival order, while the per-CPU worker threads draining the
     device queue can, from a global ordering PoV, finish decryption
     and signal PACKET_STATE_CRYPTED out of order.
  8) A situation can be observed where the first packet that came in
     is stuck waiting for its decryption worker to be scheduled for a
     longer time while the system is under pressure.
  9) While this is the case, the other CPUs in the meantime finish
     decryption and call into napi_schedule.
 10) Now wg_packet_rx_poll picks up the first in-order skb from the
     peer->rx_queue and sees that its state is still
     PACKET_STATE_UNCRYPTED. The NAPI poll routine then exits early
     with work_done = 0 and calls napi_complete_done, signalling that
     it "finished" processing (see the poll sketch after this list).
 11) The assumption in wg_packet_decrypt_worker is that when decryption
     has finished, the subsequent napi_schedule will always lead to a
     later invocation of wg_packet_rx_poll to pick up the finished
     packet.
 12) However, it appears that a later napi_schedule does /not/ lead to
     a later poll, and thus no wg_packet_rx_poll invocation.
 13) If this happens exactly in the corner case where the decryption
     worker for the first packet is stuck waiting to be scheduled, and
     the network load for WireGuard is very high, then the queue can
     build up to MAX_QUEUED_PACKETS.
 14) Once in that state, no new decryption worker gets scheduled and
     no new napi_schedule is issued to make forward progress.
 15) This means the peer->rx_queue stops processing packets completely;
     they are indefinitely stuck waiting for a new NAPI poll on that
     peer which never happens. New packets for that peer are then
     dropped due to the full queue, as observed on the production
     machines.
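
To make steps 4), 5) and 11) concrete, here is a condensed sketch of
the completion side, based on drivers/net/wireguard/receive.c and
queueing.h (error and refcount details trimmed, not a verbatim copy):

  void wg_packet_decrypt_worker(struct work_struct *work)
  {
          struct crypt_queue *queue =
                  container_of(work, struct multicore_worker, work)->ptr;
          struct sk_buff *skb;

          while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
                  enum packet_state state =
                          likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ?
                                  PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
                  /* Mark the skb done and kick NAPI; the assumption is
                   * that this always results in a later poll (step 11).
                   */
                  wg_queue_enqueue_per_peer_rx(skb, state);
          }
  }

  void wg_queue_enqueue_per_peer_rx(struct sk_buff *skb, enum packet_state state)
  {
          struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb));

          atomic_set_release(&PACKET_CB(skb)->state, state);
          napi_schedule(&peer->napi);
          wg_peer_put(peer);
  }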
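
And the poll side from steps 6) and 10), again condensed from
drivers/net/wireguard/receive.c (counter validation and dead-packet
handling omitted):

  int wg_packet_rx_poll(struct napi_struct *napi, int budget)
  {
          struct wg_peer *peer = container_of(napi, struct wg_peer, napi);
          struct sk_buff *skb;
          int work_done = 0;

          /* Only consume skbs whose head-of-line state has moved past
           * PACKET_STATE_UNCRYPTED, i.e. decryption finished.
           */
          while ((skb = wg_prev_queue_peek(&peer->rx_queue)) != NULL &&
                 atomic_read_acquire(&PACKET_CB(skb)->state) !=
                         PACKET_STATE_UNCRYPTED) {
                  wg_prev_queue_drop_peeked(&peer->rx_queue);
                  /* ... validate counter, hand skb up the stack ... */
                  if (++work_done >= budget)
                          break;
          }

          /* If the head-of-line skb is still UNCRYPTED, we get here
           * with work_done == 0 and declare the poll complete. Forward
           * progress then depends entirely on the napi_schedule from
           * step 11) triggering another poll -- which per step 12)
           * does not reliably happen.
           */
          if (work_done < budget)
                  napi_complete_done(napi, work_done);
          return work_done;
  }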

Technically, the backport of commit db9ae3b6b43c ("wireguard: device:
enable threaded NAPI") to stable should not have happened since it is
more of an optimization than a pure fix, addressing a NAPI situation
when utilizing many WireGuard tunnel devices in parallel. Revert it
from stable given the backport triggers a regression for the mentioned
kernels.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 drivers/net/wireguard/device.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
index 7bf1ec4ccaa9..e5e344af3423 100644
--- a/drivers/net/wireguard/device.c
+++ b/drivers/net/wireguard/device.c
@@ -352,7 +352,6 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
 	if (ret < 0)
 		goto err_free_handshake_queue;
 
-	dev_set_threaded(dev, true);
 	ret = register_netdevice(dev);
 	if (ret < 0)
 		goto err_uninit_ratelimiter;
-- 
2.43.0



* Re: [PATCH stable v5.15,v6.1] Revert "wireguard: device: enable threaded NAPI"
  2026-02-16 21:31 [PATCH stable v5.15,v6.1] Revert "wireguard: device: enable threaded NAPI" Daniel Borkmann
@ 2026-02-16 21:33 ` Jason A. Donenfeld
  2026-02-17 10:33   ` Greg KH
  0 siblings, 1 reply; 5+ messages in thread
From: Jason A. Donenfeld @ 2026-02-16 21:33 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: gregkh, stable, netdev, kuba

On Mon, Feb 16, 2026 at 10:31 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> Technically, the backport of commit db9ae3b6b43c ("wireguard: device:
> enable threaded NAPI") to stable should not have happened since it is
> more of an optimization than a pure fix, addressing a NAPI situation
> when utilizing many WireGuard tunnel devices in parallel.

Indeed.

> Revert it from stable given the backport triggers a regression for
> the mentioned kernels.

Thanks.

Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>

If that helps with Greg queueing this up.

Jason


* Re: [PATCH stable v5.15,v6.1] Revert "wireguard: device: enable threaded NAPI"
  2026-02-16 21:33 ` Jason A. Donenfeld
@ 2026-02-17 10:33   ` Greg KH
  2026-02-17 11:01     ` Daniel Borkmann
  0 siblings, 1 reply; 5+ messages in thread
From: Greg KH @ 2026-02-17 10:33 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: Daniel Borkmann, stable, netdev, kuba

On Mon, Feb 16, 2026 at 10:33:53PM +0100, Jason A. Donenfeld wrote:
> On Mon, Feb 16, 2026 at 10:31 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> > Technically, the backport of commit db9ae3b6b43c ("wireguard: device:
> > enable threaded NAPI") to stable should not have happened since it is
> > more of an optimization than a pure fix, addressing a NAPI situation
> > when utilizing many WireGuard tunnel devices in parallel.
> 
> Indeed.
> 
> > Revert it from stable given the backport triggers a regression for
> > the mentioned kernels.
> 
> Thanks.
> 
> Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
> 
> If that helps with Greg queueing this up.

I'll go queue it up now, thanks for the revert.  But it's ok being in
the 6.6.y and 6.12.y and newer kernels, right?

thanks,

greg k-h


* Re: [PATCH stable v5.15,v6.1] Revert "wireguard: device: enable threaded NAPI"
  2026-02-17 10:33   ` Greg KH
@ 2026-02-17 11:01     ` Daniel Borkmann
  2026-02-17 11:25       ` Greg KH
  0 siblings, 1 reply; 5+ messages in thread
From: Daniel Borkmann @ 2026-02-17 11:01 UTC (permalink / raw)
  To: Greg KH, Jason A. Donenfeld
  Cc: stable, netdev, kuba, Daniel Dao, Ignat Korchagin, Jakub Sitnicki

On 2/17/26 11:33 AM, Greg KH wrote:
> On Mon, Feb 16, 2026 at 10:33:53PM +0100, Jason A. Donenfeld wrote:
>> On Mon, Feb 16, 2026 at 10:31 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>> Technically, the backport of commit db9ae3b6b43c ("wireguard: device:
>>> enable threaded NAPI") to stable should not have happened since it is
>>> more of an optimization than a pure fix, addressing a NAPI situation
>>> when utilizing many WireGuard tunnel devices in parallel.
>>
>> Indeed.
>>
>>> Revert it from stable given the backport triggers a regression for
>>> the mentioned kernels.
>>
>> Thanks.
>>
>> Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
>>
>> If that helps with Greg queueing this up.
> 
> I'll go queue it up now, thanks for the revert.  But it's ok being in
> the 6.6.y and 6.12.y and newer kernels, right?

Great question; from a stable kernel PoV, the commit to enable threaded
NAPI for WireGuard by default went in natively (aka not via backports)
in the linux-6.18.y branch.

Given there have been backports around threaded NAPI and then reverts
again, e.g. [0]-[2], I think it would be best to also queue this revert
for the 6.6.y and 6.12.y stable kernels. The same reasoning applies:
it is more of an optimization than a pure fix.

I've added the CF folks to Cc given they have been testing v6.6 with
[2], which later got reverted again in [1] upon request. Feel free to
holler if you think otherwise.

Thanks,
Daniel

   [0] https://lore.kernel.org/netdev/CA+wXwBTT74RErDGAnj98PqS=wvdh8eM1pi4q6tTdExtjnokKqA@mail.gmail.com/
   [1] https://lore.kernel.org/stable/20260204143848.216983148@linuxfoundation.org/
   [2] https://lore.kernel.org/all/20260120103833.4kssDD1Y@linutronix.de/


* Re: [PATCH stable v5.15,v6.1] Revert "wireguard: device: enable threaded NAPI"
  2026-02-17 11:01     ` Daniel Borkmann
@ 2026-02-17 11:25       ` Greg KH
  0 siblings, 0 replies; 5+ messages in thread
From: Greg KH @ 2026-02-17 11:25 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Jason A. Donenfeld, stable, netdev, kuba, Daniel Dao,
	Ignat Korchagin, Jakub Sitnicki

On Tue, Feb 17, 2026 at 12:01:15PM +0100, Daniel Borkmann wrote:
> On 2/17/26 11:33 AM, Greg KH wrote:
> > On Mon, Feb 16, 2026 at 10:33:53PM +0100, Jason A. Donenfeld wrote:
> > > On Mon, Feb 16, 2026 at 10:31 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> > > > Technically, the backport of commit db9ae3b6b43c ("wireguard: device:
> > > > enable threaded NAPI") to stable should not have happened since it is
> > > > more of an optimization than a pure fix, addressing a NAPI situation
> > > > when utilizing many WireGuard tunnel devices in parallel.
> > > 
> > > Indeed.
> > > 
> > > > Revert it from stable given the backport triggers a regression for
> > > > the mentioned kernels.
> > > 
> > > Thanks.
> > > 
> > > Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
> > > 
> > > If that helps with Greg queueing this up.
> > 
> > I'll go queue it up now, thanks for the revert.  But it's ok being in
> > the 6.6.y and 6.12.y and newer kernels, right?
> Great question; from a stable kernel PoV that commit to enable threaded
> NAPI for wireguard by default went in natively (aka not via backports) in
> linux-6.18.y branch.
> 
> Given there have been backports around threaded NAPI and then reverts again
> e.g. [0-2] I think it would be best to also queue this revert here for 6.6.y
> and 6.12.y stable kernels. The same applies that it's more of an optimization
> rather than a pure fix.
> 
> I've added CF folks to Cc given they have been testing v6.6 with [2] but
> then it later got reverted again in [1] upon request. Feel free to holler
> if you think otherwise.
> 
> Thanks,
> Daniel
> 
>   [0] https://lore.kernel.org/netdev/CA+wXwBTT74RErDGAnj98PqS=wvdh8eM1pi4q6tTdExtjnokKqA@mail.gmail.com/
>   [1] https://lore.kernel.org/stable/20260204143848.216983148@linuxfoundation.org/
>   [2] https://lore.kernel.org/all/20260120103833.4kssDD1Y@linutronix.de/
> 

Ok, to be "safe", I'll queue this revert up for those other kernel
branches as well, thanks!  Worst case, we get someone who asks for it to
come back :)

thanks,

greg k-h

