* [PATCH net-next] ice: fix broken Rx on VFs
@ 2025-11-24 17:07 Alexander Lobakin
From: Alexander Lobakin @ 2025-11-24 17:07 UTC (permalink / raw)
To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni
Cc: Alexander Lobakin, Przemek Kitszel, Tony Nguyen, Jakub Slepecki,
nxne.cnse.osdt.itp.upstreaming, intel-wired-lan, netdev,
linux-kernel
Since the tagged commit, ice has stopped respecting the Rx buffer length
passed from VFs.
At that point, the buffer length was hardcoded in ice, so VFs still
worked up to a point (until, for example, a VF wanted an MTU
larger than its PF's).
The next commit, 93f53db9f9dc ("ice: switch to Page Pool"), broke
Rx on VFs completely: ice started taking per-queue buffer lengths
into account again, but VF queues now always had their length zeroed,
as ice was already ignoring what iavf passed to it.
Restore the line that initializes the buffer length on VF queues
based on the virtchnl messages.
Fixes: 3a4f419f7509 ("ice: drop page splitting and recycling")
Reported-by: Jakub Slepecki <jakub.slepecki@intel.com>
Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
I'd like this to go directly to net-next to quickly unbreak VFs
(the related commits are not in the mainline yet).
---
drivers/net/ethernet/intel/ice/virt/queues.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/intel/ice/virt/queues.c b/drivers/net/ethernet/intel/ice/virt/queues.c
index 7928f4e8e788..f73d5a3e83d4 100644
--- a/drivers/net/ethernet/intel/ice/virt/queues.c
+++ b/drivers/net/ethernet/intel/ice/virt/queues.c
@@ -842,6 +842,9 @@ int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
(qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||
qpi->rxq.databuffer_size < 1024))
goto error_param;
+
+ ring->rx_buf_len = qpi->rxq.databuffer_size;
+
if (qpi->rxq.max_pkt_size > max_frame_size ||
qpi->rxq.max_pkt_size < 64)
goto error_param;
--
2.51.1
* Re: [PATCH net-next] ice: fix broken Rx on VFs
From: Alexander Lobakin @ 2025-11-24 17:17 UTC (permalink / raw)
To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni
Cc: Przemek Kitszel, Tony Nguyen, Jakub Slepecki,
nxne.cnse.osdt.itp.upstreaming, intel-wired-lan, netdev,
linux-kernel
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Date: Mon, 24 Nov 2025 18:07:35 +0100
Oops, missed a tag, sorry...
> Since the tagged commit, ice has stopped respecting the Rx buffer length
> passed from VFs.
> At that point, the buffer length was hardcoded in ice, so VFs still
> worked up to a point (until, for example, a VF wanted an MTU
> larger than its PF's).
> The next commit, 93f53db9f9dc ("ice: switch to Page Pool"), broke
> Rx on VFs completely: ice started taking per-queue buffer lengths
> into account again, but VF queues now always had their length zeroed,
> as ice was already ignoring what iavf passed to it.
>
> Restore the line that initializes the buffer length on VF queues
> based on the virtchnl messages.
>
> Fixes: 3a4f419f7509 ("ice: drop page splitting and recycling")
> Reported-by: Jakub Slepecki <jakub.slepecki@intel.com>
Suggested-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
> I'd like this to go directly to net-next to quickly unbreak VFs
> (the related commits are not in the mainline yet).
Thanks,
Olek
* RE: [Intel-wired-lan] [PATCH net-next] ice: fix broken Rx on VFs
From: Loktionov, Aleksandr @ 2025-11-25 6:32 UTC (permalink / raw)
To: Lobakin, Aleksander, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni
Cc: Kitszel, Przemyslaw, Nguyen, Anthony L, Slepecki, Jakub,
NXNE CNSE OSDT ITP Upstreaming, intel-wired-lan@lists.osuosl.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf
> Of Alexander Lobakin
> Sent: Monday, November 24, 2025 6:18 PM
> To: Andrew Lunn <andrew+netdev@lunn.ch>; David S. Miller
> <davem@davemloft.net>; Eric Dumazet <edumazet@google.com>; Jakub
> Kicinski <kuba@kernel.org>; Paolo Abeni <pabeni@redhat.com>
> Cc: Kitszel, Przemyslaw <przemyslaw.kitszel@intel.com>; Nguyen,
> Anthony L <anthony.l.nguyen@intel.com>; Slepecki, Jakub
> <jakub.slepecki@intel.com>; NXNE CNSE OSDT ITP Upstreaming
> <nxne.cnse.osdt.itp.upstreaming@intel.com>; intel-wired-
> lan@lists.osuosl.org; netdev@vger.kernel.org; linux-
> kernel@vger.kernel.org
> Subject: Re: [Intel-wired-lan] [PATCH net-next] ice: fix broken Rx on
> VFs
>
> From: Alexander Lobakin <aleksander.lobakin@intel.com>
> Date: Mon, 24 Nov 2025 18:07:35 +0100
>
> Oops, missed a tag, sorry...
>
> > Since the tagged commit, ice has stopped respecting the Rx buffer
> > length passed from VFs.
> > At that point, the buffer length was hardcoded in ice, so VFs still
> > worked up to a point (until, for example, a VF wanted an MTU larger
> > than its PF's).
> > The next commit, 93f53db9f9dc ("ice: switch to Page Pool"), broke
> > Rx on VFs completely: ice started taking per-queue buffer lengths
> > into account again, but VF queues now always had their length
> > zeroed, as ice was already ignoring what iavf passed to it.
> >
> > Restore the line that initializes the buffer length on VF queues
> > based on the virtchnl messages.
> >
> > Fixes: 3a4f419f7509 ("ice: drop page splitting and recycling")
> > Reported-by: Jakub Slepecki <jakub.slepecki@intel.com>
>
> Suggested-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
>
> > Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> > Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> > ---
> > I'd like this to go directly to net-next to quickly unbreak VFs (the
> > related commits are not in the mainline yet).
> Thanks,
> Olek
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
* Re: [PATCH net-next] ice: fix broken Rx on VFs
From: Jakub Slepecki @ 2025-11-25 10:59 UTC (permalink / raw)
To: Alexander Lobakin
Cc: Przemek Kitszel, Tony Nguyen, nxne.cnse.osdt.itp.upstreaming,
intel-wired-lan, netdev, linux-kernel, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
Tested-by: Jakub Slepecki <jakub.slepecki@intel.com>
As expected, the issue reproduced with commit 53ffcce6fe91 ("ixd: add
devlink support", 2025-11-17). Applying this patch on top of that commit
allows VFs to receive packets again. Network configuration used:
ip netns add $pf_netns
ip l set $pf netns $pf_netns
ip netns exec $pf_netns ip l set lo up
ip netns exec $pf_netns ip l set $pf address $pf_mac up
ip netns exec $pf_netns ip a add 10.0.0.1/24 dev $pf
ip netns add $vf0_netns
ip l set $vf0 netns $vf0_netns
ip netns exec $vf0_netns ip l set lo up
ip netns exec $vf0_netns ip l set $vf0 up
ip netns exec $vf0_netns ip a add 10.0.0.2/24 dev $vf0
ip netns add $vf1_netns
ip l set $vf1 netns $vf1_netns
ip netns exec $vf1_netns ip l set lo up
ip netns exec $vf1_netns ip l set $vf1 up
ip netns exec $vf1_netns ip a add 10.0.0.3/24 dev $vf1
Assume all variables are defined and the network namespaces are distinct.
An external host was able to successfully ping each of 10.0.0.[123].
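The reachability check described above can be sketched as a small script. run() and DRY_RUN are illustrative additions, not part of the original test setup; with DRY_RUN=1 (the default here) the commands are printed rather than executed against a live system.

```shell
#!/bin/sh
# Sketch of the external-host reachability check: ping the PF and both
# VFs at the addresses assigned in the configuration snippet above.
DRY_RUN=${DRY_RUN:-1}

run() {
	if [ "$DRY_RUN" = 1 ]; then
		echo "+ $*"
	else
		"$@"
	fi
}

for ip in 10.0.0.1 10.0.0.2 10.0.0.3; do
	run ping -c 3 -W 1 "$ip"
done
```

Running it unmodified prints the three ping invocations; setting DRY_RUN=0 on a host that can reach the test setup executes them.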
* Re: [PATCH net-next] ice: fix broken Rx on VFs
From: patchwork-bot+netdevbpf @ 2025-11-26 4:00 UTC (permalink / raw)
To: Alexander Lobakin
Cc: andrew+netdev, davem, edumazet, kuba, pabeni, przemyslaw.kitszel,
anthony.l.nguyen, jakub.slepecki, nxne.cnse.osdt.itp.upstreaming,
intel-wired-lan, netdev, linux-kernel
Hello:
This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Mon, 24 Nov 2025 18:07:35 +0100 you wrote:
> Since the tagged commit, ice has stopped respecting the Rx buffer length
> passed from VFs.
> At that point, the buffer length was hardcoded in ice, so VFs still
> worked up to a point (until, for example, a VF wanted an MTU
> larger than its PF's).
> The next commit, 93f53db9f9dc ("ice: switch to Page Pool"), broke
> Rx on VFs completely: ice started taking per-queue buffer lengths
> into account again, but VF queues now always had their length zeroed,
> as ice was already ignoring what iavf passed to it.
>
> [...]
Here is the summary with links:
- [net-next] ice: fix broken Rx on VFs
https://git.kernel.org/netdev/net-next/c/436fa8e7d1a1
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html