* [PATCH net] vmxnet3: Fix tx queue race condition with XDP
@ 2025-01-24 9:02 Sankararaman Jayaraman
2025-01-24 16:45 ` William Tu
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Sankararaman Jayaraman @ 2025-01-24 9:02 UTC (permalink / raw)
To: netdev
Cc: sankararaman.jayaraman, ronak.doshi, bcm-kernel-feedback-list,
andrew+netdev, davem, u9012063, kuba, edumazet, pabeni, ast,
alexandr.lobakin, alexanderduyck, bpf, daniel, hawk,
john.fastabend
If XDP traffic runs on a CPU whose number is greater than or equal
to the number of Tx queues on the NIC, then vmxnet3_xdp_get_tq()
always picks queue 0 for transmission: it uses reciprocal_scale(),
which maps small CPU numbers to 0, instead of a simple modulo
operation.

vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame() then use the
returned queue without any locking, which can lead to race
conditions when multiple XDP xmits run in parallel on different
CPUs.

This patch uses a simple modulo scheme when the current CPU number
equals or exceeds the number of Tx queues on the NIC. It also adds
locking to the vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame()
functions.
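For illustration (not part of the fix), a small userspace sketch of
the two mappings; it assumes 4 Tx queues and mirrors the kernel's
multiply-and-shift reciprocal_scale() helper:

#include <stdint.h>
#include <stdio.h>

/* Same computation as the kernel's reciprocal_scale(). */
static uint32_t reciprocal_scale(uint32_t val, uint32_t ep_ro)
{
	return (uint32_t)(((uint64_t)val * ep_ro) >> 32);
}

int main(void)
{
	uint32_t tq_number = 4;	/* assumed queue count */
	uint32_t cpu;

	for (cpu = 4; cpu < 12; cpu++)
		printf("cpu %2u -> reciprocal_scale: %u, modulo: %u\n",
		       cpu, reciprocal_scale(cpu, tq_number),
		       cpu % tq_number);
	/* reciprocal_scale() yields 0 for every cpu here, because
	 * cpu * tq_number is nowhere near 2^32; cpu % tq_number
	 * cycles through 0..3 and spreads the load.
	 */
	return 0;
}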
Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.")
Signed-off-by: Sankararaman Jayaraman <sankararaman.jayaraman@broadcom.com>
Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com>
---
drivers/net/vmxnet3/vmxnet3_xdp.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
index 1341374a4588..5f177e77cfcb 100644
--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
+++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Linux driver for VMware's vmxnet3 ethernet NIC.
- * Copyright (C) 2008-2023, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2025, VMware, Inc. All Rights Reserved.
* Maintained by: pv-drivers@vmware.com
*
*/
@@ -28,7 +28,7 @@ vmxnet3_xdp_get_tq(struct vmxnet3_adapter *adapter)
if (likely(cpu < tq_number))
tq = &adapter->tx_queue[cpu];
else
- tq = &adapter->tx_queue[reciprocal_scale(cpu, tq_number)];
+ tq = &adapter->tx_queue[cpu % tq_number];
return tq;
}
@@ -123,7 +123,9 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
struct page *page;
u32 buf_size;
u32 dw2;
+ unsigned long irq_flags;
+ spin_lock_irqsave(&tq->tx_lock, irq_flags);
dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
dw2 |= xdpf->len;
ctx.sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;
@@ -134,6 +136,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
if (vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) == 0) {
tq->stats.tx_ring_full++;
+ spin_unlock_irqrestore(&tq->tx_lock, irq_flags);
return -ENOSPC;
}
@@ -142,8 +145,10 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
tbi->dma_addr = dma_map_single(&adapter->pdev->dev,
xdpf->data, buf_size,
DMA_TO_DEVICE);
- if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr))
+ if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) {
+ spin_unlock_irqrestore(&tq->tx_lock, irq_flags);
return -EFAULT;
+ }
tbi->map_type |= VMXNET3_MAP_SINGLE;
} else { /* XDP buffer from page pool */
page = virt_to_page(xdpf->data);
@@ -182,6 +187,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
dma_wmb();
gdesc->dword[2] = cpu_to_le32(le32_to_cpu(gdesc->dword[2]) ^
VMXNET3_TXD_GEN);
+ spin_unlock_irqrestore(&tq->tx_lock, irq_flags);
/* No need to handle the case when tx_num_deferred doesn't reach
* threshold. Backend driver at hypervisor side will poll and reset
@@ -226,6 +232,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
struct vmxnet3_adapter *adapter = netdev_priv(dev);
struct vmxnet3_tx_queue *tq;
int i;
+ struct netdev_queue *nq;
if (unlikely(test_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state)))
return -ENETDOWN;
@@ -236,6 +243,9 @@ vmxnet3_xdp_xmit(struct net_device *dev,
if (tq->stopped)
return -ENETDOWN;
+ nq = netdev_get_tx_queue(adapter->netdev, tq->qid);
+
+ __netif_tx_lock(nq, smp_processor_id());
for (i = 0; i < n; i++) {
if (vmxnet3_xdp_xmit_frame(adapter, frames[i], tq, true)) {
tq->stats.xdp_xmit_err++;
@@ -243,6 +253,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
}
}
tq->stats.xdp_xmit += i;
+ __netif_tx_unlock(nq);
return i;
}
--
2.25.1
* Re: [PATCH net] vmxnet3: Fix tx queue race condition with XDP
2025-01-24 9:02 [PATCH net] vmxnet3: Fix tx queue race condition with XDP Sankararaman Jayaraman
@ 2025-01-24 16:45 ` William Tu
2025-01-27 17:01 ` Simon Horman
2025-01-27 22:36 ` Jakub Kicinski
2 siblings, 0 replies; 7+ messages in thread
From: William Tu @ 2025-01-24 16:45 UTC (permalink / raw)
To: Sankararaman Jayaraman
Cc: netdev, ronak.doshi, bcm-kernel-feedback-list, andrew+netdev,
davem, kuba, edumazet, pabeni, ast, alexandr.lobakin,
alexanderduyck, bpf, daniel, hawk, john.fastabend
On Fri, Jan 24, 2025 at 1:00 AM Sankararaman Jayaraman
<sankararaman.jayaraman@broadcom.com> wrote:
>
> If XDP traffic runs on a CPU whose number is greater than or equal
> to the number of Tx queues on the NIC, then vmxnet3_xdp_get_tq()
> always picks queue 0 for transmission: it uses reciprocal_scale(),
> which maps small CPU numbers to 0, instead of a simple modulo
> operation.
>
> vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame() then use the
> returned queue without any locking, which can lead to race
> conditions when multiple XDP xmits run in parallel on different
> CPUs.
>
> This patch uses a simple modulo scheme when the current CPU number
> equals or exceeds the number of Tx queues on the NIC. It also adds
> locking to the vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame()
> functions.
>
> Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.")
> Signed-off-by: Sankararaman Jayaraman <sankararaman.jayaraman@broadcom.com>
> Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com>
> ---
LGTM
Acked-by: William Tu <u9012063@gmail.com>
* Re: [PATCH net] vmxnet3: Fix tx queue race condition with XDP
2025-01-24 9:02 [PATCH net] vmxnet3: Fix tx queue race condition with XDP Sankararaman Jayaraman
2025-01-24 16:45 ` William Tu
@ 2025-01-27 17:01 ` Simon Horman
2025-01-27 22:36 ` Jakub Kicinski
2 siblings, 0 replies; 7+ messages in thread
From: Simon Horman @ 2025-01-27 17:01 UTC (permalink / raw)
To: Sankararaman Jayaraman
Cc: netdev, ronak.doshi, bcm-kernel-feedback-list, andrew+netdev,
davem, u9012063, kuba, edumazet, pabeni, ast, alexandr.lobakin,
alexanderduyck, bpf, daniel, hawk, john.fastabend
On Fri, Jan 24, 2025 at 02:32:11PM +0530, Sankararaman Jayaraman wrote:
> If XDP traffic runs on a CPU whose number is greater than or equal
> to the number of Tx queues on the NIC, then vmxnet3_xdp_get_tq()
> always picks queue 0 for transmission: it uses reciprocal_scale(),
> which maps small CPU numbers to 0, instead of a simple modulo
> operation.
>
> vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame() then use the
> returned queue without any locking, which can lead to race
> conditions when multiple XDP xmits run in parallel on different
> CPUs.
>
> This patch uses a simple modulo scheme when the current CPU number
> equals or exceeds the number of Tx queues on the NIC. It also adds
> locking to the vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame()
> functions.
>
> Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.")
> Signed-off-by: Sankararaman Jayaraman <sankararaman.jayaraman@broadcom.com>
> Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com>
Reviewed-by: Simon Horman <horms@kernel.org>
* Re: [PATCH net] vmxnet3: Fix tx queue race condition with XDP
2025-01-24 9:02 [PATCH net] vmxnet3: Fix tx queue race condition with XDP Sankararaman Jayaraman
2025-01-24 16:45 ` William Tu
2025-01-27 17:01 ` Simon Horman
@ 2025-01-27 22:36 ` Jakub Kicinski
2025-01-29 17:34 ` Sankararaman Jayaraman
2025-01-29 18:17 ` [PATCH net v2] " Sankararaman Jayaraman
2 siblings, 2 replies; 7+ messages in thread
From: Jakub Kicinski @ 2025-01-27 22:36 UTC (permalink / raw)
To: Sankararaman Jayaraman
Cc: netdev, ronak.doshi, bcm-kernel-feedback-list, andrew+netdev,
davem, u9012063, edumazet, pabeni, ast, alexandr.lobakin,
alexanderduyck, bpf, daniel, hawk, john.fastabend
On Fri, 24 Jan 2025 14:32:11 +0530 Sankararaman Jayaraman wrote:
> + * Copyright (C) 2008-2025, VMware, Inc. All Rights Reserved.
Please don't update copyright dates in a fix.
It increases the size of the patch and the risk of conflicts.
> @@ -123,7 +123,9 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
> struct page *page;
> u32 buf_size;
> u32 dw2;
> + unsigned long irq_flags;
please order variable declaration lines longest to shortest
> + spin_lock_irqsave(&tq->tx_lock, irq_flags);
why _irqsave() ?
--
pw-bot: cr
* [PATCH net] vmxnet3: Fix tx queue race condition with XDP
2025-01-27 22:36 ` Jakub Kicinski
@ 2025-01-29 17:34 ` Sankararaman Jayaraman
2025-01-29 18:17 ` [PATCH net v2] " Sankararaman Jayaraman
1 sibling, 0 replies; 7+ messages in thread
From: Sankararaman Jayaraman @ 2025-01-29 17:34 UTC (permalink / raw)
To: kuba
Cc: alexanderduyck, alexandr.lobakin, andrew+netdev, ast,
bcm-kernel-feedback-list, bpf, daniel, davem, edumazet, hawk,
john.fastabend, netdev, pabeni, ronak.doshi,
sankararaman.jayaraman, u9012063
If XDP traffic runs on a CPU whose number is greater than or equal
to the number of Tx queues on the NIC, then vmxnet3_xdp_get_tq()
always picks queue 0 for transmission: it uses reciprocal_scale(),
which maps small CPU numbers to 0, instead of a simple modulo
operation.

vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame() then use the
returned queue without any locking, which can lead to race
conditions when multiple XDP xmits run in parallel on different
CPUs.

This patch uses a simple modulo scheme when the current CPU number
equals or exceeds the number of Tx queues on the NIC. It also adds
locking to the vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame()
functions.
Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.")
Signed-off-by: Sankararaman Jayaraman <sankararaman.jayaraman@broadcom.com>
Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com>
---
drivers/net/vmxnet3/vmxnet3_xdp.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
index 1341374a4588..e3f94b3374f9 100644
--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
+++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
@@ -28,7 +28,7 @@ vmxnet3_xdp_get_tq(struct vmxnet3_adapter *adapter)
if (likely(cpu < tq_number))
tq = &adapter->tx_queue[cpu];
else
- tq = &adapter->tx_queue[reciprocal_scale(cpu, tq_number)];
+ tq = &adapter->tx_queue[cpu % tq_number];
return tq;
}
@@ -124,6 +124,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
u32 buf_size;
u32 dw2;
+ spin_lock(&tq->tx_lock);
dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
dw2 |= xdpf->len;
ctx.sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;
@@ -134,6 +135,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
if (vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) == 0) {
tq->stats.tx_ring_full++;
+ spin_unlock(&tq->tx_lock);
return -ENOSPC;
}
@@ -142,8 +144,10 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
tbi->dma_addr = dma_map_single(&adapter->pdev->dev,
xdpf->data, buf_size,
DMA_TO_DEVICE);
- if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr))
+ if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) {
+ spin_unlock(&tq->tx_lock);
return -EFAULT;
+ }
tbi->map_type |= VMXNET3_MAP_SINGLE;
} else { /* XDP buffer from page pool */
page = virt_to_page(xdpf->data);
@@ -182,6 +186,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
dma_wmb();
gdesc->dword[2] = cpu_to_le32(le32_to_cpu(gdesc->dword[2]) ^
VMXNET3_TXD_GEN);
+ spin_unlock(&tq->tx_lock);
/* No need to handle the case when tx_num_deferred doesn't reach
* threshold. Backend driver at hypervisor side will poll and reset
@@ -226,6 +231,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
struct vmxnet3_adapter *adapter = netdev_priv(dev);
struct vmxnet3_tx_queue *tq;
int i;
+ struct netdev_queue *nq;
if (unlikely(test_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state)))
return -ENETDOWN;
@@ -236,6 +242,9 @@ vmxnet3_xdp_xmit(struct net_device *dev,
if (tq->stopped)
return -ENETDOWN;
+ nq = netdev_get_tx_queue(adapter->netdev, tq->qid);
+
+ __netif_tx_lock(nq, smp_processor_id());
for (i = 0; i < n; i++) {
if (vmxnet3_xdp_xmit_frame(adapter, frames[i], tq, true)) {
tq->stats.xdp_xmit_err++;
@@ -243,6 +252,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
}
}
tq->stats.xdp_xmit += i;
+ __netif_tx_unlock(nq);
return i;
}
--
2.25.1
* [PATCH net v2] vmxnet3: Fix tx queue race condition with XDP
2025-01-27 22:36 ` Jakub Kicinski
2025-01-29 17:34 ` Sankararaman Jayaraman
@ 2025-01-29 18:17 ` Sankararaman Jayaraman
2025-01-30 1:15 ` Jakub Kicinski
1 sibling, 1 reply; 7+ messages in thread
From: Sankararaman Jayaraman @ 2025-01-29 18:17 UTC (permalink / raw)
To: kuba
Cc: alexanderduyck, alexandr.lobakin, andrew+netdev, ast,
bcm-kernel-feedback-list, bpf, daniel, davem, edumazet, hawk,
john.fastabend, netdev, pabeni, ronak.doshi,
sankararaman.jayaraman, u9012063
If XDP traffic runs on a CPU whose number is greater than or equal
to the number of Tx queues on the NIC, then vmxnet3_xdp_get_tq()
always picks queue 0 for transmission: it uses reciprocal_scale(),
which maps small CPU numbers to 0, instead of a simple modulo
operation.

vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame() then use the
returned queue without any locking, which can lead to race
conditions when multiple XDP xmits run in parallel on different
CPUs.

This patch uses a simple modulo scheme when the current CPU number
equals or exceeds the number of Tx queues on the NIC. It also adds
locking to the vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame()
functions.
Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.")
Signed-off-by: Sankararaman Jayaraman <sankararaman.jayaraman@broadcom.com>
Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com>
Changes v1 -> v2:
Retained the copyright dates as they were.
Used spin_lock()/spin_unlock() instead of spin_lock_irqsave().
---
drivers/net/vmxnet3/vmxnet3_xdp.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
index 1341374a4588..e3f94b3374f9 100644
--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
+++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
@@ -28,7 +28,7 @@ vmxnet3_xdp_get_tq(struct vmxnet3_adapter *adapter)
if (likely(cpu < tq_number))
tq = &adapter->tx_queue[cpu];
else
- tq = &adapter->tx_queue[reciprocal_scale(cpu, tq_number)];
+ tq = &adapter->tx_queue[cpu % tq_number];
return tq;
}
@@ -124,6 +124,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
u32 buf_size;
u32 dw2;
+ spin_lock(&tq->tx_lock);
dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
dw2 |= xdpf->len;
ctx.sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;
@@ -134,6 +135,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
if (vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) == 0) {
tq->stats.tx_ring_full++;
+ spin_unlock(&tq->tx_lock);
return -ENOSPC;
}
@@ -142,8 +144,10 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
tbi->dma_addr = dma_map_single(&adapter->pdev->dev,
xdpf->data, buf_size,
DMA_TO_DEVICE);
- if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr))
+ if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) {
+ spin_unlock(&tq->tx_lock);
return -EFAULT;
+ }
tbi->map_type |= VMXNET3_MAP_SINGLE;
} else { /* XDP buffer from page pool */
page = virt_to_page(xdpf->data);
@@ -182,6 +186,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
dma_wmb();
gdesc->dword[2] = cpu_to_le32(le32_to_cpu(gdesc->dword[2]) ^
VMXNET3_TXD_GEN);
+ spin_unlock(&tq->tx_lock);
/* No need to handle the case when tx_num_deferred doesn't reach
* threshold. Backend driver at hypervisor side will poll and reset
@@ -226,6 +231,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
struct vmxnet3_adapter *adapter = netdev_priv(dev);
struct vmxnet3_tx_queue *tq;
int i;
+ struct netdev_queue *nq;
if (unlikely(test_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state)))
return -ENETDOWN;
@@ -236,6 +242,9 @@ vmxnet3_xdp_xmit(struct net_device *dev,
if (tq->stopped)
return -ENETDOWN;
+ nq = netdev_get_tx_queue(adapter->netdev, tq->qid);
+
+ __netif_tx_lock(nq, smp_processor_id());
for (i = 0; i < n; i++) {
if (vmxnet3_xdp_xmit_frame(adapter, frames[i], tq, true)) {
tq->stats.xdp_xmit_err++;
@@ -243,6 +252,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
}
}
tq->stats.xdp_xmit += i;
+ __netif_tx_unlock(nq);
return i;
}
--
2.25.1
* Re: [PATCH net v2] vmxnet3: Fix tx queue race condition with XDP
2025-01-29 18:17 ` [PATCH net v2] " Sankararaman Jayaraman
@ 2025-01-30 1:15 ` Jakub Kicinski
0 siblings, 0 replies; 7+ messages in thread
From: Jakub Kicinski @ 2025-01-30 1:15 UTC (permalink / raw)
To: Sankararaman Jayaraman
Cc: alexanderduyck, alexandr.lobakin, andrew+netdev, ast,
bcm-kernel-feedback-list, bpf, daniel, davem, edumazet, hawk,
john.fastabend, netdev, pabeni, ronak.doshi, u9012063
On Wed, 29 Jan 2025 23:47:03 +0530 Sankararaman Jayaraman wrote:
> If XDP traffic runs on a CPU whose number is greater than or equal
> to the number of Tx queues on the NIC, then vmxnet3_xdp_get_tq()
> always picks queue 0 for transmission: it uses reciprocal_scale(),
> which maps small CPU numbers to 0, instead of a simple modulo
> operation.
>
> vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame() then use the
> returned queue without any locking, which can lead to race
> conditions when multiple XDP xmits run in parallel on different
> CPUs.
>
> This patch uses a simple modulo scheme when the current CPU number
> equals or exceeds the number of Tx queues on the NIC. It also adds
> locking to the vmxnet3_xdp_xmit() and vmxnet3_xdp_xmit_frame()
> functions.
>
> Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.")
> Signed-off-by: Sankararaman Jayaraman <sankararaman.jayaraman@broadcom.com>
> Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com>
Please add a --- separator between commit message and change log
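For example, an illustrative layout (everything after the first
"---" is dropped by git am, so the change log lives there, above
the diffstat):

Fixes: ...
Signed-off-by: ...
---
Changes v1 -> v2:
- Retained the copyright dates.
- Used spin_lock()/spin_unlock() instead of spin_lock_irqsave().

 drivers/net/vmxnet3/vmxnet3_xdp.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)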
> Changes v1 -> v2:
> Retained the copyright dates as they were.
> Used spin_lock()/spin_unlock() instead of spin_lock_irqsave().
Wrong way around AFAICT. The lock is taken on the xmit path,
and the driver supports netpoll. But this path won't be called
from IRQ context, so the right type of call is very likely _irq().
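For reference, a rough sketch of the three variants, using the
lock from this patch (annotation mine, not code from the thread):

/* spin_lock(): no IRQ protection at all; only safe if the lock
 * can never be taken from an interrupt context that preempts
 * the holder.
 */
spin_lock(&tq->tx_lock);
/* ... fill Tx descriptors ... */
spin_unlock(&tq->tx_lock);

/* spin_lock_irq(): disables local IRQs and unconditionally
 * re-enables them on unlock; correct when the caller is known
 * to run with IRQs enabled, as on this xmit path.
 */
spin_lock_irq(&tq->tx_lock);
/* ... fill Tx descriptors ... */
spin_unlock_irq(&tq->tx_lock);

/* spin_lock_irqsave(): additionally saves and restores the
 * caller's IRQ state; only needed when the caller may already
 * have IRQs disabled, so it is heavier than required here.
 */
spin_lock_irqsave(&tq->tx_lock, flags);
/* ... fill Tx descriptors ... */
spin_unlock_irqrestore(&tq->tx_lock, flags);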
Please do not post next version of the patch in reply to previous
posting. Instead add to the change log a lore link to previous
posting. See:
https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#changes-requested
Also make sure you read at least the tl;dr section.
> @@ -226,6 +231,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
> struct vmxnet3_adapter *adapter = netdev_priv(dev);
> struct vmxnet3_tx_queue *tq;
> int i;
> + struct netdev_queue *nq;
Reverse length order. So:
struct vmxnet3_adapter *adapter = netdev_priv(dev);
struct vmxnet3_tx_queue *tq;
+ struct netdev_queue *nq;
int i;
--
pw-bot: cr