bpf.vger.kernel.org archive mirror
* [PATCH net] bnxt: properly flush XDP redirect lists
@ 2025-06-23 16:06 Yan Zhai
  2025-06-24  5:59 ` Jesper Dangaard Brouer
  2025-06-25  1:10 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 5+ messages in thread
From: Yan Zhai @ 2025-06-23 16:06 UTC (permalink / raw)
  To: netdev
  Cc: Michael Chan, Pavan Chebbi, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	Stanislav Fomichev, Andy Gospodarek, netdev, linux-kernel, bpf,
	kernel-team

We encountered the following crash when testing an XDP_REDIRECT feature
in production:

[56251.579676] list_add corruption. next->prev should be prev (ffff93120dd40f30), but was ffffb301ef3a6740. (next=ffff93120dd40f30).
[56251.601413] ------------[ cut here ]------------
[56251.611357] kernel BUG at lib/list_debug.c:29!
[56251.621082] Oops: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
[56251.632073] CPU: 111 UID: 0 PID: 0 Comm: swapper/111 Kdump: loaded Tainted: P           O       6.12.33-cloudflare-2025.6.3 #1
[56251.653155] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
[56251.663877] Hardware name: MiTAC GC68B-B8032-G11P6-GPU/S8032GM-HE-CFR, BIOS V7.020.B10-sig 01/22/2025
[56251.682626] RIP: 0010:__list_add_valid_or_report+0x4b/0xa0
[56251.693203] Code: 0e 48 c7 c7 68 e7 d9 97 e8 42 16 fe ff 0f 0b 48 8b 52 08 48 39 c2 74 14 48 89 f1 48 c7 c7 90 e7 d9 97 48 89 c6 e8 25 16 fe ff <0f> 0b 4c 8b 02 49 39 f0 74 14 48 89 d1 48 c7 c7 e8 e7 d9 97 4c 89
[56251.725811] RSP: 0018:ffff93120dd40b80 EFLAGS: 00010246
[56251.736094] RAX: 0000000000000075 RBX: ffffb301e6bba9d8 RCX: 0000000000000000
[56251.748260] RDX: 0000000000000000 RSI: ffff9149afda0b80 RDI: ffff9149afda0b80
[56251.760349] RBP: ffff9131e49c8000 R08: 0000000000000000 R09: ffff93120dd40a18
[56251.772382] R10: ffff9159cf2ce1a8 R11: 0000000000000003 R12: ffff911a80850000
[56251.784364] R13: ffff93120fbc7000 R14: 0000000000000010 R15: ffff9139e7510e40
[56251.796278] FS:  0000000000000000(0000) GS:ffff9149afd80000(0000) knlGS:0000000000000000
[56251.809133] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[56251.819561] CR2: 00007f5e85e6f300 CR3: 00000038b85e2006 CR4: 0000000000770ef0
[56251.831365] PKRU: 55555554
[56251.838653] Call Trace:
[56251.845560]  <IRQ>
[56251.851943]  cpu_map_enqueue.cold+0x5/0xa
[56251.860243]  xdp_do_redirect+0x2d9/0x480
[56251.868388]  bnxt_rx_xdp+0x1d8/0x4c0 [bnxt_en]
[56251.877028]  bnxt_rx_pkt+0x5f7/0x19b0 [bnxt_en]
[56251.885665]  ? cpu_max_write+0x1e/0x100
[56251.893510]  ? srso_alias_return_thunk+0x5/0xfbef5
[56251.902276]  __bnxt_poll_work+0x190/0x340 [bnxt_en]
[56251.911058]  bnxt_poll+0xab/0x1b0 [bnxt_en]
[56251.919041]  ? srso_alias_return_thunk+0x5/0xfbef5
[56251.927568]  ? srso_alias_return_thunk+0x5/0xfbef5
[56251.935958]  ? srso_alias_return_thunk+0x5/0xfbef5
[56251.944250]  __napi_poll+0x2b/0x160
[56251.951155]  bpf_trampoline_6442548651+0x79/0x123
[56251.959262]  __napi_poll+0x5/0x160
[56251.966037]  net_rx_action+0x3d2/0x880
[56251.973133]  ? srso_alias_return_thunk+0x5/0xfbef5
[56251.981265]  ? srso_alias_return_thunk+0x5/0xfbef5
[56251.989262]  ? __hrtimer_run_queues+0x162/0x2a0
[56251.996967]  ? srso_alias_return_thunk+0x5/0xfbef5
[56252.004875]  ? srso_alias_return_thunk+0x5/0xfbef5
[56252.012673]  ? bnxt_msix+0x62/0x70 [bnxt_en]
[56252.019903]  handle_softirqs+0xcf/0x270
[56252.026650]  irq_exit_rcu+0x67/0x90
[56252.032933]  common_interrupt+0x85/0xa0
[56252.039498]  </IRQ>
[56252.044246]  <TASK>
[56252.048935]  asm_common_interrupt+0x26/0x40
[56252.055727] RIP: 0010:cpuidle_enter_state+0xb8/0x420
[56252.063305] Code: dc 01 00 00 e8 f9 79 3b ff e8 64 f7 ff ff 49 89 c5 0f 1f 44 00 00 31 ff e8 a5 32 3a ff 45 84 ff 0f 85 ae 01 00 00 fb 45 85 f6 <0f> 88 88 01 00 00 48 8b 04 24 49 63 ce 4c 89 ea 48 6b f1 68 48 29
[56252.088911] RSP: 0018:ffff93120c97fe98 EFLAGS: 00000202
[56252.096912] RAX: ffff9149afd80000 RBX: ffff9141d3a72800 RCX: 0000000000000000
[56252.106844] RDX: 00003329176c6b98 RSI: ffffffe36db3fdc7 RDI: 0000000000000000
[56252.116733] RBP: 0000000000000002 R08: 0000000000000002 R09: 000000000000004e
[56252.126652] R10: ffff9149afdb30c4 R11: 071c71c71c71c71c R12: ffffffff985ff860
[56252.136637] R13: 00003329176c6b98 R14: 0000000000000002 R15: 0000000000000000
[56252.146667]  ? cpuidle_enter_state+0xab/0x420
[56252.153909]  cpuidle_enter+0x2d/0x40
[56252.160360]  do_idle+0x176/0x1c0
[56252.166456]  cpu_startup_entry+0x29/0x30
[56252.173248]  start_secondary+0xf7/0x100
[56252.179941]  common_startup_64+0x13e/0x141
[56252.186886]  </TASK>

From the crash dump, we found that the cpu_map_flush_list inside the
redirect info is partially corrupted: its list_head->next points to
itself, but list_head->prev still points to a valid list of unflushed bq
entries.

This turned out to be the result of a missed XDP flush on the redirect
lists. Digging into the source code, we found that
commit 7f0a168b0441 ("bnxt_en: Add completion ring pointer in TX and RX
ring structures") incorrectly overwrites the event mask for XDP_REDIRECT
in bnxt_rx_xdp. We can stably reproduce this crash by returning XDP_TX
and XDP_REDIRECT randomly for incoming packets in a naive XDP program.
Properly propagating the XDP_REDIRECT events back fixes the crash.

Fixes: 7f0a168b0441 ("bnxt_en: Add completion ring pointer in TX and RX ring structures")
Tested-by: Andrew Rzeznik <arzeznik@cloudflare.com>
Signed-off-by: Yan Zhai <yan@cloudflare.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 2cb3185c442c..ae89a981e052 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -2989,6 +2989,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 {
 	struct bnxt_napi *bnapi = cpr->bnapi;
 	u32 raw_cons = cpr->cp_raw_cons;
+	bool flush_xdp = false;
 	u32 cons;
 	int rx_pkts = 0;
 	u8 event = 0;
@@ -3042,6 +3043,8 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 			else
 				rc = bnxt_force_rx_discard(bp, cpr, &raw_cons,
 							   &event);
+			if (event & BNXT_REDIRECT_EVENT)
+				flush_xdp = true;
 			if (likely(rc >= 0))
 				rx_pkts += rc;
 			/* Increment rx_pkts when rc is -ENOMEM to count towards
@@ -3066,7 +3069,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 		}
 	}
 
-	if (event & BNXT_REDIRECT_EVENT) {
+	if (flush_xdp) {
 		xdp_do_flush();
 		event &= ~BNXT_REDIRECT_EVENT;
 	}
-- 
2.39.5



^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH net] bnxt: properly flush XDP redirect lists
  2025-06-23 16:06 [PATCH net] bnxt: properly flush XDP redirect lists Yan Zhai
@ 2025-06-24  5:59 ` Jesper Dangaard Brouer
  2025-06-24 18:00   ` Michael Chan
  2025-06-25  1:10 ` patchwork-bot+netdevbpf
  1 sibling, 1 reply; 5+ messages in thread
From: Jesper Dangaard Brouer @ 2025-06-24  5:59 UTC (permalink / raw)
  To: Yan Zhai, netdev
  Cc: Michael Chan, Pavan Chebbi, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Stanislav Fomichev,
	Andy Gospodarek, linux-kernel, bpf, kernel-team



On 23/06/2025 18.06, Yan Zhai wrote:
> We encountered following crash when testing a XDP_REDIRECT feature
> in production:
> 
[...]
> 
>  From the crash dump, we found that the cpu_map_flush_list inside
> redirect info is partially corrupted: its list_head->next points to
> itself, but list_head->prev points to a valid list of unflushed bq
> entries.
> 
> This turned out to be a result of missed XDP flush on redirect lists. By
> digging in the actual source code, we found that
> commit 7f0a168b0441 ("bnxt_en: Add completion ring pointer in TX and RX
> ring structures") incorrectly overwrites the event mask for XDP_REDIRECT
> in bnxt_rx_xdp.

(To Andy + Michael:)
The initial bug was introduced in [1] commit a7559bc8c17c ("bnxt:
support transmit and free of aggregation buffers"), where the XDP_TX
case in bnxt_rx_xdp() zeros *event, which also carries the XDP-redirect
indication.
I'm wondering whether the driver should avoid resetting the *event
value? (No other driver code path does.)


> We can stably reproduce this crash by returning XDP_TX
> and XDP_REDIRECT randomly for incoming packets in a naive XDP program.
> Properly propagate the XDP_REDIRECT events back fixes the crash.
> 
> Fixes: 7f0a168b0441 ("bnxt_en: Add completion ring pointer in TX and RX ring structures")

We should also add:

Fixes: a7559bc8c17c ("bnxt: support transmit and free of aggregation buffers")

  [0] https://git.kernel.org/torvalds/c/7f0a168b0441 - v6.8-rc1
  [1] https://git.kernel.org/torvalds/c/a7559bc8c17c - v5.19-rc1

> Tested-by: Andrew Rzeznik <arzeznik@cloudflare.com>
> Signed-off-by: Yan Zhai <yan@cloudflare.com>

Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>

> ---
>   drivers/net/ethernet/broadcom/bnxt/bnxt.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index 2cb3185c442c..ae89a981e052 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> @@ -2989,6 +2989,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
>   {
>   	struct bnxt_napi *bnapi = cpr->bnapi;
>   	u32 raw_cons = cpr->cp_raw_cons;
> +	bool flush_xdp = false;
>   	u32 cons;
>   	int rx_pkts = 0;
>   	u8 event = 0;
> @@ -3042,6 +3043,8 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
>   			else
>   				rc = bnxt_force_rx_discard(bp, cpr, &raw_cons,
>   							   &event);
> +			if (event & BNXT_REDIRECT_EVENT)
> +				flush_xdp = true;
>   			if (likely(rc >= 0))
>   				rx_pkts += rc;
>   			/* Increment rx_pkts when rc is -ENOMEM to count towards
> @@ -3066,7 +3069,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
>   		}
>   	}
>   
> -	if (event & BNXT_REDIRECT_EVENT) {
> +	if (flush_xdp) {
>   		xdp_do_flush();
>   		event &= ~BNXT_REDIRECT_EVENT;
>   	}

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH net] bnxt: properly flush XDP redirect lists
  2025-06-24  5:59 ` Jesper Dangaard Brouer
@ 2025-06-24 18:00   ` Michael Chan
  2025-06-24 18:31     ` Andy Gospodarek
  0 siblings, 1 reply; 5+ messages in thread
From: Michael Chan @ 2025-06-24 18:00 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Yan Zhai, netdev, Pavan Chebbi, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Stanislav Fomichev,
	Andy Gospodarek, linux-kernel, bpf, kernel-team


On Mon, Jun 23, 2025 at 10:59 PM Jesper Dangaard Brouer <hawk@kernel.org> wrote:
>
> On 23/06/2025 18.06, Yan Zhai wrote:
> > We encountered following crash when testing a XDP_REDIRECT feature
> > in production:
> >
> [...]
> >
> (To Andy + Michael:)
> The initial bug was introduced in [1] commit a7559bc8c17c ("bnxt:
> support transmit and free of aggregation buffers") in bnxt_rx_xdp()
> where case XDP_TX zeros the *event, that also carries the XDP-redirect
> indication.
> I'm wondering if the driver should not reset the *event value?
> (all other drive code paths doesn't)

Resetting *event was only correct before XDP_REDIRECT support was added.

>
>
> > We can stably reproduce this crash by returning XDP_TX
> > and XDP_REDIRECT randomly for incoming packets in a naive XDP program.
> > Properly propagate the XDP_REDIRECT events back fixes the crash.

Thanks for the patch.  The fix is similar to edc0140cc3b7 ("bnxt_en:
Flush XDP for bnxt_poll_nitroa0()'s NAPI").

Somehow that earlier fix was only applied to one chip's poll function
and not the other chips' poll functions.
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
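The general rule behind both fixes can be sketched in kernel-style
pseudocode (not actual bnxt source; process_rx_ring() and redirect_seen
are illustrative stand-ins): any NAPI poll function whose RX path may
call xdp_do_redirect() must call xdp_do_flush() before the poll
returns, so bulk-queued frames never outlive the softirq.

```c
/* Pseudocode sketch, not driver source. */
static int example_napi_poll(struct napi_struct *napi, int budget)
{
	bool redirect_seen = false;
	int work = process_rx_ring(napi, budget, &redirect_seen);

	if (redirect_seen)
		xdp_do_flush();	/* drain cpu_map/dev_map bulk queues */

	if (work < budget)
		napi_complete_done(napi, work);
	return work;
}
```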


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH net] bnxt: properly flush XDP redirect lists
  2025-06-24 18:00   ` Michael Chan
@ 2025-06-24 18:31     ` Andy Gospodarek
  0 siblings, 0 replies; 5+ messages in thread
From: Andy Gospodarek @ 2025-06-24 18:31 UTC (permalink / raw)
  To: Michael Chan
  Cc: Jesper Dangaard Brouer, Yan Zhai, netdev, Pavan Chebbi,
	Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, linux-kernel, bpf, kernel-team

On Tue, Jun 24, 2025 at 2:00 PM Michael Chan <michael.chan@broadcom.com> wrote:
>
> On Mon, Jun 23, 2025 at 10:59 PM Jesper Dangaard Brouer <hawk@kernel.org> wrote:
> >
> > On 23/06/2025 18.06, Yan Zhai wrote:
> > > We encountered following crash when testing a XDP_REDIRECT feature
> > > in production:
> > >
> > [...]
> > >
> > (To Andy + Michael:)
> > The initial bug was introduced in [1] commit a7559bc8c17c ("bnxt:
> > support transmit and free of aggregation buffers") in bnxt_rx_xdp()
> > where case XDP_TX zeros the *event, that also carries the XDP-redirect
> > indication.
> > I'm wondering if the driver should not reset the *event value?
> > (all other drive code paths doesn't)
>
> Resetting *event was only correct before XDP_REDIRECT support was added.
>
> >
> >
> > > We can stably reproduce this crash by returning XDP_TX
> > > and XDP_REDIRECT randomly for incoming packets in a naive XDP program.
> > > Properly propagate the XDP_REDIRECT events back fixes the crash.
>
> Thanks for the patch.  The fix is similar to edc0140cc3b7 ("bnxt_en:
> Flush XDP for bnxt_poll_nitroa0()'s NAPI")
>
> Somehow the fix was only applied to one chip's poll function and not
> the other chips' poll functions.

Odd that we missed this back then.  Thanks for the fix for all other devices.

> Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Reviewed-by: Andy Gospodarek <gospo@broadcom.com>

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH net] bnxt: properly flush XDP redirect lists
  2025-06-23 16:06 [PATCH net] bnxt: properly flush XDP redirect lists Yan Zhai
  2025-06-24  5:59 ` Jesper Dangaard Brouer
@ 2025-06-25  1:10 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 5+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-06-25  1:10 UTC (permalink / raw)
  To: Yan Zhai
  Cc: netdev, michael.chan, pavan.chebbi, andrew+netdev, davem,
	edumazet, kuba, pabeni, ast, daniel, hawk, john.fastabend, sdf,
	andrew.gospodarek, linux-kernel, bpf, kernel-team

Hello:

This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Mon, 23 Jun 2025 09:06:38 -0700 you wrote:
> We encountered following crash when testing a XDP_REDIRECT feature
> in production:
> 
> [...]

Here is the summary with links:
  - [net] bnxt: properly flush XDP redirect lists
    https://git.kernel.org/netdev/net/c/9caca6ac0e26

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2025-06-25  1:09 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-23 16:06 [PATCH net] bnxt: properly flush XDP redirect lists Yan Zhai
2025-06-24  5:59 ` Jesper Dangaard Brouer
2025-06-24 18:00   ` Michael Chan
2025-06-24 18:31     ` Andy Gospodarek
2025-06-25  1:10 ` patchwork-bot+netdevbpf
