* [PATCH] ibmvnic: fix OOB array access in ibmvnic_xmit on queue count reduction
@ 2026-03-21 3:54 Tyllis Xu
2026-03-23 14:45 ` Simon Horman
0 siblings, 1 reply; 3+ messages in thread
From: Tyllis Xu @ 2026-03-21 3:54 UTC (permalink / raw)
To: netdev
Cc: haren, ricklind, nnac123, sukadev, davem, edumazet, kuba, pabeni,
andrew+netdev, stable, linux-kernel, danisjiang, ychen, Tyllis Xu
When the number of TX queues is reduced (e.g., via ethtool -L), the
Qdisc layer retains previously enqueued skbs with queue mappings from
before the reduction. After the reset completes and tx_queues_active is
set to true, netif_tx_start_all_queues() drains these stale skbs through
ibmvnic_xmit(). The queue index from skb_get_queue_mapping() may exceed
the newly allocated array bounds, causing out-of-bounds reads on
tx_scrq[] and tx_pool[]/tso_pool[], and out-of-bounds writes on
tx_stats_buffers[] in the function's exit path.
The existing tx_queues_active guard does not help here: it is set to
true by __ibmvnic_open() before netif_tx_start_all_queues() restarts
queue draining, so stale skbs pass the check with an invalid queue index.
Add a bounds check against num_active_tx_scrqs immediately after the
tx_queues_active guard. Use a dedicated out_unlock label to skip the
per-queue stats updates (which also index tx_stats_buffers[queue_num])
when the queue index is invalid.
Fixes: 4219196d1f66 ("ibmvnic: fix race between xmit and reset")
Reported-by: Yuhao Jiang <danisjiang@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Tyllis Xu <LivelyCarpet87@gmail.com>
---
drivers/net/ethernet/ibm/ibmvnic.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 5a510eed335e..c939391474cb 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2453,6 +2453,11 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
goto out;
}
+ if (unlikely(queue_num >= adapter->num_active_tx_scrqs)) {
+ dev_kfree_skb_any(skb);
+ goto out_unlock;
+ }
+
tx_scrq = adapter->tx_scrq[queue_num];
txq = netdev_get_tx_queue(netdev, queue_num);
ind_bufp = &tx_scrq->ind_buf;
@@ -2672,6 +2677,9 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
+ return ret;
+out_unlock:
+ rcu_read_unlock();
return ret;
}
--
2.43.0
* Re: [PATCH] ibmvnic: fix OOB array access in ibmvnic_xmit on queue count reduction
2026-03-21 3:54 [PATCH] ibmvnic: fix OOB array access in ibmvnic_xmit on queue count reduction Tyllis Xu
@ 2026-03-23 14:45 ` Simon Horman
2026-03-24 6:16 ` Tyllis Xu
0 siblings, 1 reply; 3+ messages in thread
From: Simon Horman @ 2026-03-23 14:45 UTC (permalink / raw)
To: Tyllis Xu
Cc: netdev, haren, ricklind, nnac123, sukadev, davem, edumazet, kuba,
pabeni, andrew+netdev, stable, linux-kernel, danisjiang, ychen
On Fri, Mar 20, 2026 at 10:54:39PM -0500, Tyllis Xu wrote:
> When the number of TX queues is reduced (e.g., via ethtool -L), the
> Qdisc layer retains previously enqueued skbs with queue mappings from
> before the reduction. After the reset completes and tx_queues_active is
> set to true, netif_tx_start_all_queues() drains these stale skbs through
> ibmvnic_xmit(). The queue index from skb_get_queue_mapping() may exceed
> the newly allocated array bounds, causing out-of-bounds reads on
> tx_scrq[] and tx_pool[]/tso_pool[], and out-of-bounds writes on
> tx_stats_buffers[] in the function's exit path.
>
> The existing tx_queues_active guard does not help here: it is set to
> true by __ibmvnic_open() before netif_tx_start_all_queues() restarts
> queue draining, so stale skbs pass the check with an invalid queue index.
>
> Add a bounds check against num_active_tx_scrqs immediately after the
> tx_queues_active guard. Use a dedicated out_unlock label to skip the
> per-queue stats updates (which also index tx_stats_buffers[queue_num])
> when the queue index is invalid.
>
> Fixes: 4219196d1f66 ("ibmvnic: fix race between xmit and reset")
> Reported-by: Yuhao Jiang <danisjiang@gmail.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Tyllis Xu <LivelyCarpet87@gmail.com>
> ---
> drivers/net/ethernet/ibm/ibmvnic.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 5a510eed335e..c939391474cb 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -2453,6 +2453,11 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
> goto out;
> }
>
> + if (unlikely(queue_num >= adapter->num_active_tx_scrqs)) {
> + dev_kfree_skb_any(skb);
> + goto out_unlock;
> + }
> +
This doesn't seem quite right. Shouldn't it be handled as per the other
blocks in this function that drop packets? In that case it could re-use
the existing handling in the conditional immediately above this hunk.
Also, I don't think unlikely() is in keeping with the existing
implementation of this function.
I'm suggesting something like (completely untested):
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 5a510eed335e..67e1e62631e3 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2457,7 +2457,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
txq = netdev_get_tx_queue(netdev, queue_num);
ind_bufp = &tx_scrq->ind_buf;
- if (ibmvnic_xmit_workarounds(skb, netdev)) {
+ if (ibmvnic_xmit_workarounds(skb, netdev) ||
+ queue_num >= adapter->num_active_tx_scrqs) {
tx_dropped++;
tx_send_failed++;
ret = NETDEV_TX_OK;
Where the next line is:
goto out;
...
> @@ -2672,6 +2677,9 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
> adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
> adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
>
> + return ret;
> +out_unlock:
> + rcu_read_unlock();
> return ret;
> }
My previous comment notwithstanding:
The RCU read side critical section is already enormous.
So perhaps making it slightly better doesn't make a difference.
If so, can we go for this slightly simpler flow here (completely untested):
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 5a510eed335e..1e1cd8c11cf9 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2664,14 +2664,14 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
netif_carrier_off(netdev);
}
out:
- rcu_read_unlock();
adapter->tx_send_failed += tx_send_failed;
adapter->tx_map_failed += tx_map_failed;
adapter->tx_stats_buffers[queue_num].batched_packets += tx_bpackets;
adapter->tx_stats_buffers[queue_num].direct_packets += tx_dpackets;
adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
-
+out_unlock:
+ rcu_read_unlock();
return ret;
}
--
pw-bot: changes-requested
* Re: [PATCH] ibmvnic: fix OOB array access in ibmvnic_xmit on queue count reduction
2026-03-23 14:45 ` Simon Horman
@ 2026-03-24 6:16 ` Tyllis Xu
0 siblings, 0 replies; 3+ messages in thread
From: Tyllis Xu @ 2026-03-24 6:16 UTC (permalink / raw)
To: Simon Horman
Cc: netdev, haren, ricklind, nnac123, sukadev, davem, edumazet, kuba,
pabeni, andrew+netdev, stable, linux-kernel, danisjiang, ychen
I'll try out the suggested changes and use more
of the existing handling to create a new patch.
I'll also remove the unlikely(). Thank you for
your feedback!
On Mon, Mar 23, 2026 at 9:45 AM Simon Horman <horms@kernel.org> wrote:
>
> On Fri, Mar 20, 2026 at 10:54:39PM -0500, Tyllis Xu wrote:
> > When the number of TX queues is reduced (e.g., via ethtool -L), the
> > Qdisc layer retains previously enqueued skbs with queue mappings from
> > before the reduction. After the reset completes and tx_queues_active is
> > set to true, netif_tx_start_all_queues() drains these stale skbs through
> > ibmvnic_xmit(). The queue index from skb_get_queue_mapping() may exceed
> > the newly allocated array bounds, causing out-of-bounds reads on
> > tx_scrq[] and tx_pool[]/tso_pool[], and out-of-bounds writes on
> > tx_stats_buffers[] in the function's exit path.
> >
> > The existing tx_queues_active guard does not help here: it is set to
> > true by __ibmvnic_open() before netif_tx_start_all_queues() restarts
> > queue draining, so stale skbs pass the check with an invalid queue index.
> >
> > Add a bounds check against num_active_tx_scrqs immediately after the
> > tx_queues_active guard. Use a dedicated out_unlock label to skip the
> > per-queue stats updates (which also index tx_stats_buffers[queue_num])
> > when the queue index is invalid.
> >
> > Fixes: 4219196d1f66 ("ibmvnic: fix race between xmit and reset")
> > Reported-by: Yuhao Jiang <danisjiang@gmail.com>
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Tyllis Xu <LivelyCarpet87@gmail.com>
> > ---
> > drivers/net/ethernet/ibm/ibmvnic.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> > index 5a510eed335e..c939391474cb 100644
> > --- a/drivers/net/ethernet/ibm/ibmvnic.c
> > +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> > @@ -2453,6 +2453,11 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
> > goto out;
> > }
> >
> > + if (unlikely(queue_num >= adapter->num_active_tx_scrqs)) {
> > + dev_kfree_skb_any(skb);
> > + goto out_unlock;
> > + }
> > +
>
> This doesn't seem quite right. Shouldn't it be handled as per the other
> blocks in this function that drop packets? In that case it could re-use
> the existing handling in the conditional immediately above this hunk.
>
> Also, I don't think unlikely() is in keeping with the existing
> implementation of this function.
>
> I'm suggesting something like (completely untested):
>
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 5a510eed335e..67e1e62631e3 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -2457,7 +2457,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
> txq = netdev_get_tx_queue(netdev, queue_num);
> ind_bufp = &tx_scrq->ind_buf;
>
> - if (ibmvnic_xmit_workarounds(skb, netdev)) {
> + if (ibmvnic_xmit_workarounds(skb, netdev) ||
> + queue_num >= adapter->num_active_tx_scrqs) {
> tx_dropped++;
> tx_send_failed++;
> ret = NETDEV_TX_OK;
>
> Where the next line is:
>
> goto out;
>
> ...
>
> > @@ -2672,6 +2677,9 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
> > adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
> > adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
> >
> > + return ret;
> > +out_unlock:
> > + rcu_read_unlock();
> > return ret;
> > }
>
> My previous comment notwithstanding:
>
> The RCU read side critical section is already enormous.
> So perhaps making it slightly better doesn't make a difference.
>
> If so, can we go for this slightly simpler flow here (completely untested):
>
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 5a510eed335e..1e1cd8c11cf9 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -2664,14 +2664,14 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
> netif_carrier_off(netdev);
> }
> out:
> - rcu_read_unlock();
> adapter->tx_send_failed += tx_send_failed;
> adapter->tx_map_failed += tx_map_failed;
> adapter->tx_stats_buffers[queue_num].batched_packets += tx_bpackets;
> adapter->tx_stats_buffers[queue_num].direct_packets += tx_dpackets;
> adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
> adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
> -
> +out_unlock:
> + rcu_read_unlock();
> return ret;
> }
>
>
> --
> pw-bot: changes-requested