* htb offload on vlan (mlx5)
@ 2023-03-03 18:04 Stanisław Czech
  2023-03-06  9:35 ` Maxim Mikityanskiy
  0 siblings, 1 reply; 4+ messages in thread
From: Stanisław Czech @ 2023-03-03 18:04 UTC (permalink / raw)
  To: netdev; +Cc: Maxim Mikityanskiy

Hi,

I'm trying to use HTB offload on a VLAN interface with a ConnectX-6 card
(01:00.0 Ethernet controller: Mellanox Technologies MT28908 Family
[ConnectX-6]),
but it seems there is no such capability on the VLAN interface?

On a physical interface:

ethtool -k eth0 | grep hw-tc-offload
hw-tc-offload: on

On a VLAN:

ethtool -k eth0.4 | grep hw-tc-offload
hw-tc-offload: off [fixed]

So while there is no problem with:
tc qdisc replace dev eth0 root handle 1:0 htb offload default 2

I can't do:
tc qdisc replace dev eth0.4 root handle 1:0 htb offload default 2
Error: hw-tc-offload ethtool feature flag must be on.
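
For context, the kind of hierarchy I want to offload looks roughly like
this on the PF (class IDs, rates and the filter are only placeholders):

tc qdisc replace dev eth0 root handle 1: htb offload default 2
# leaf classes that should map to dedicated hardware queues
tc class add dev eth0 parent 1: classid 1:1 htb rate 500mbit ceil 1gbit
tc class add dev eth0 parent 1: classid 1:2 htb rate 100mbit ceil 1gbit
# classification into the leaf classes (done in software)
tc filter add dev eth0 parent 1: protocol ip flower ip_proto tcp dst_port 80 classid 1:1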


modinfo mlx5_core
filename:       /lib/modules/6.2.1-1.el9.elrepo.x86_64/kernel/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko.xz
license:        Dual BSD/GPL
description:    Mellanox 5th generation network adapters (ConnectX series) core driver
author:         Eli Cohen <eli@mellanox.com>
srcversion:     59FA0D4A4E95B726AB8900D

Is there a different way to use HTB offload on the VLAN interface?

Greetings,
Stanisław Czech



* Re: htb offload on vlan (mlx5)
  2023-03-03 18:04 htb offload on vlan (mlx5) Stanisław Czech
@ 2023-03-06  9:35 ` Maxim Mikityanskiy
  2023-03-06 13:59   ` Stanisław Czech
  0 siblings, 1 reply; 4+ messages in thread
From: Maxim Mikityanskiy @ 2023-03-06  9:35 UTC (permalink / raw)
  To: Stanisław Czech; +Cc: netdev, Gal Pressman, Tariq Toukan

On Fri, Mar 03, 2023 at 07:04:43PM +0100, Stanisław Czech wrote:
> Hi,
> 
> I'm trying to use HTB offload on a VLAN interface with a ConnectX-6 card
> (01:00.0 Ethernet controller: Mellanox Technologies MT28908 Family
> [ConnectX-6]),
> but it seems there is no such capability on the VLAN interface?
> 
> On a physical interface:
> 
> ethtool -k eth0 | grep hw-tc-offload
> hw-tc-offload: on
> 
> On a vlan:
> 
> ethtool -k eth0.4 | grep hw-tc-offload
> hw-tc-offload: off [fixed]
> 
> so while there is no problem with:
> tc qdisc replace dev eth0 root handle 1:0 htb offload default 2
> 
> I can't do:
> tc qdisc replace dev eth0.4 root handle 1:0 htb offload default 2
> Error: hw-tc-offload ethtool feature flag must be on.

Hi Stanisław,

That's expected, vlan_features doesn't contain NETIF_F_HW_TC, and I
think that's the case for all drivers. Regarding HTB offload, I don't
think the current implementation in mlx5e can be easily modified to
support being attached to a VLAN only, because the current
implementation relies on objects created globally in the NIC.

CCed Nvidia folks in case they have more comments.

> 
> 
> modinfo mlx5_core
> filename: /lib/modules/6.2.1-1.el9.elrepo.x86_64/kernel/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko.xz
> license:        Dual BSD/GPL
> description:    Mellanox 5th generation network adapters (ConnectX series)
> core driver
> author:         Eli Cohen <eli@mellanox.com>
> srcversion:     59FA0D4A4E95B726AB8900D
> 
> Is there a different way to use htb offload on the vlan interface?
> 
> Greetings,
> Stanisław Czech
> 


* Re: htb offload on vlan (mlx5)
  2023-03-06  9:35 ` Maxim Mikityanskiy
@ 2023-03-06 13:59   ` Stanisław Czech
  2023-03-06 14:25     ` Maxim Mikityanskiy
  0 siblings, 1 reply; 4+ messages in thread
From: Stanisław Czech @ 2023-03-06 13:59 UTC (permalink / raw)
  To: Maxim Mikityanskiy; +Cc: netdev, Gal Pressman, Tariq Toukan

06.03.2023  10:35, Maxim Mikityanskiy wrote:
> That's expected, vlan_features doesn't contain NETIF_F_HW_TC, and I
> think that's the case for all drivers. Regarding HTB offload, I don't
> think the current implementation in mlx5e can be easily modified to
> support being attached to a VLAN only, because the current
> implementation relies on objects created globally in the NIC.
>
> CCed Nvidia folks in case they have more comments.
>

Thank you for your answer, Maxim... I tried to use SR-IOV and the HTB
offload functionality on a VF, but it's not possible either:

ethtool -K enp1s0np0 hw-tc-offload  on
echo 7 > /sys/class/infiniband/mlx5_0/device/mlx5_num_vfs
ethtool -K enp1s0f7v6 hw-tc-offload  on

ip l s dev enp1s0np0 name eth0
ip l s dev eth0 vf 6 vlan 4

and ethtool -k eth0 now shows:
hw-tc-offload: on

but still:
Error: mlx5_core: Missing QoS capabilities. Try disabling SRIOV or use a supported device.

So I guess there is no way to use HTB offloading anywhere other than on
the PF device itself...

Anyway, maybe using multiple VFs to support multiple VLANs (a single VF
per VLAN) would be more efficient than plain VLANs on the PF interface
(regarding the qdisc lock problem)? I would like to utilize more CPU
cores, as the VLANs on a single PF interface use only a single CPU core
(the 100% ksoftirqd problem).
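
Roughly what I have in mind, per VF (interface names, rates and the
number of VFs are placeholders, and this is plain, non-offloaded HTB):

echo 4 > /sys/class/net/eth0/device/sriov_numvfs   # or the mlx5_num_vfs path used above
ip link set dev eth0 vf 0 vlan 4
ip link set dev eth0 vf 1 vlan 5
# software HTB on each VF netdev, each with its own qdisc lock
tc qdisc replace dev eth0v0 root handle 1: htb default 10
tc class add dev eth0v0 parent 1: classid 1:10 htb rate 200mbit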

Could this be some workaround?


Greetings,
Stanisław Czech



* Re: htb offload on vlan (mlx5)
  2023-03-06 13:59   ` Stanisław Czech
@ 2023-03-06 14:25     ` Maxim Mikityanskiy
  0 siblings, 0 replies; 4+ messages in thread
From: Maxim Mikityanskiy @ 2023-03-06 14:25 UTC (permalink / raw)
  To: Stanisław Czech; +Cc: netdev, Gal Pressman, Tariq Toukan

On Mon, Mar 06, 2023 at 02:59:40PM +0100, Stanisław Czech wrote:
> 06.03.2023  10:35, Maxim Mikityanskiy wrote:
> > That's expected, vlan_features doesn't contain NETIF_F_HW_TC, and I
> > think that's the case for all drivers. Regarding HTB offload, I don't
> > think the current implementation in mlx5e can be easily modified to
> > support being attached to a VLAN only, because the current
> > implementation relies on objects created globally in the NIC.
> > 
> > CCed Nvidia folks in case they have more comments.
> > 
> 
> Thank you for your answer, Maxim... I tried to use SR-IOV and the HTB
> offload functionality on a VF
> but it's not possible either:
> 
> ethtool -K enp1s0np0 hw-tc-offload  on
> echo 7 > /sys/class/infiniband/mlx5_0/device/mlx5_num_vfs
> ethtool -K enp1s0f7v6 hw-tc-offload  on
> 
> ip l s dev enp1s0np0 name eth0
> ip l s dev eth0 vf 6 vlan 4
> 
> and I see in
> ethtool -k eth0
> hw-tc-offload: on
> 
> but still:
> Error: mlx5_core: Missing QoS capabilities. Try disabling SRIOV or use a
> supported device.
> 
> So I guess there is no way to use HTB offloading anywhere other than
> on the PF device itself...

Yes, as the error message suggests, when SRIOV is enabled, the firmware
doesn't expose the capabilities needed for HTB offload. That means these
two features aren't compatible at the moment, and there is nothing the
driver can do, because the limitation comes from the firmware side.

> 
> Anyway, maybe using multiple VFs to support multiple VLANs (a single
> VF per VLAN) would be more efficient than plain VLANs on the PF
> interface (regarding the qdisc lock problem)?

You mean with non-offloaded HTB? You might try, but there will still be
the lock contention issue in case of multiple queues. There will be
multiple locks, though (one per VF), which might alleviate the
contention, but there are too many variables to guess without actually
testing it. It also depends on how many VLANs you have, because each VF
has its own memory footprint. It may also be worth looking at SFs, which
are lighter than VFs.
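
If you want to try SFs, creating one looks roughly like this (the PCI
address, sfnum and port index are only examples; see the mlx5
subfunction documentation for the details):

devlink port add pci/0000:01:00.0 flavour pcisf pfnum 0 sfnum 1
# the add command prints the new port, e.g. pci/0000:01:00.0/32768
devlink port function set pci/0000:01:00.0/32768 hw_addr 02:11:22:33:44:55
devlink port function set pci/0000:01:00.0/32768 state active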

> I would like to utilize more CPU cores, as the VLANs on a single PF
> interface use only a single CPU core (the 100% ksoftirqd problem)
> 
> Could this be some workaround?
> 
> 
> Greetings,
> Stanisław Czech
> 

