linux-rdma.vger.kernel.org archive mirror
* [PATCH net-next 3/9] net: ethtool: require drivers to opt into the per-RSS ctx RXFH
       [not found] <20250611145949.2674086-1-kuba@kernel.org>
@ 2025-06-11 14:59 ` Jakub Kicinski
  2025-06-12 21:09   ` Joe Damato
From: Jakub Kicinski @ 2025-06-11 14:59 UTC
  To: davem
  Cc: netdev, edumazet, pabeni, andrew+netdev, horms, ecree.xilinx,
	Jakub Kicinski, saeedm, tariqt, leon, linux-rdma,
	linux-net-drivers

RX Flow Hashing supports using a different configuration for each
RSS context. Only two drivers seem to support this. Make sure we
uniformly error out for drivers that don't.
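
For reference, a minimal userspace sketch of the call this check now
gates (illustrative only, not part of the patch; the interface name,
context ID and field mask are made up):

  #include <string.h>
  #include <sys/ioctl.h>
  #include <net/if.h>
  #include <linux/ethtool.h>
  #include <linux/sockios.h>

  /* Set the RX flow hash fields for TCP/IPv4 in RSS context 1.
   * fd is any AF_INET SOCK_DGRAM socket. On a driver that does not
   * set rxfh_per_ctx_fields this now fails with EINVAL instead of
   * being passed through to the driver.
   */
  int set_ctx_hash_fields(int fd, const char *ifname)
  {
  	struct ethtool_rxnfc nfc = {
  		.cmd = ETHTOOL_SRXFH,
  		.flow_type = TCP_V4_FLOW | FLOW_RSS, /* FLOW_RSS: target a context */
  		.rss_context = 1,                    /* additional context, not ctx 0 */
  		.data = RXH_IP_SRC | RXH_IP_DST |    /* hash on the 4-tuple */
  			RXH_L4_B_0_1 | RXH_L4_B_2_3,
  	};
  	struct ifreq ifr = {};

  	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
  	ifr.ifr_data = (void *)&nfc;
  	return ioctl(fd, SIOCETHTOOL, &ifr);
  }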

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: saeedm@nvidia.com
CC: tariqt@nvidia.com
CC: leon@kernel.org
CC: ecree.xilinx@gmail.com
CC: linux-rdma@vger.kernel.org
CC: linux-net-drivers@amd.com
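
As a sketch of the opt-in for any other driver that supports
per-context flow hashing (hypothetical "foo" driver; the rxnfc
handlers are assumed to already exist):

  static const struct ethtool_ops foo_ethtool_ops = {
  	.cap_rss_ctx_supported	= true,
  	.rxfh_per_ctx_fields	= true,	/* opt in to per-context RXFH */
  	.get_rxnfc		= foo_get_rxnfc,	/* existing handlers */
  	.set_rxnfc		= foo_set_rxnfc,
  };

Without the bit, ETHTOOL_{G,S}RXFH with FLOW_RSS and a non-zero
rss_context now fails with -EINVAL before reaching the driver.
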
---
 include/linux/ethtool.h                              | 3 +++
 drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c | 1 +
 drivers/net/ethernet/sfc/ethtool.c                   | 1 +
 net/ethtool/ioctl.c                                  | 8 ++++++++
 4 files changed, 13 insertions(+)

diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
index 5e0dd333ad1f..fc1c2379e7ff 100644
--- a/include/linux/ethtool.h
+++ b/include/linux/ethtool.h
@@ -855,6 +855,8 @@ struct kernel_ethtool_ts_info {
  * @cap_rss_ctx_supported: indicates if the driver supports RSS
  *	contexts via legacy API, drivers implementing @create_rxfh_context
  *	do not have to set this bit.
+ * @rxfh_per_ctx_fields: device supports selecting different header fields
+ *	for Rx hash calculation and RSS for each additional context.
  * @rxfh_per_ctx_key: device supports setting different RSS key for each
  *	additional context. Netlink API should report hfunc, key, and input_xfrm
  *	for every context, not just context 0.
@@ -1084,6 +1086,7 @@ struct ethtool_ops {
 	u32     supported_input_xfrm:8;
 	u32     cap_link_lanes_supported:1;
 	u32     cap_rss_ctx_supported:1;
+	u32	rxfh_per_ctx_fields:1;
 	u32	rxfh_per_ctx_key:1;
 	u32	cap_rss_rxnfc_adds:1;
 	u32	rxfh_indir_space;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
index ea078c9f5d15..90c760057bb6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -2619,6 +2619,7 @@ static void mlx5e_get_ts_stats(struct net_device *netdev,
 const struct ethtool_ops mlx5e_ethtool_ops = {
 	.cap_link_lanes_supported = true,
 	.cap_rss_ctx_supported	= true,
+	.rxfh_per_ctx_fields	= true,
 	.rxfh_per_ctx_key	= true,
 	.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
 				     ETHTOOL_COALESCE_MAX_FRAMES |
diff --git a/drivers/net/ethernet/sfc/ethtool.c b/drivers/net/ethernet/sfc/ethtool.c
index 83d715544f7f..afbedca63b29 100644
--- a/drivers/net/ethernet/sfc/ethtool.c
+++ b/drivers/net/ethernet/sfc/ethtool.c
@@ -262,6 +262,7 @@ const struct ethtool_ops efx_ethtool_ops = {
 	.set_rxnfc		= efx_ethtool_set_rxnfc,
 	.get_rxfh_indir_size	= efx_ethtool_get_rxfh_indir_size,
 	.get_rxfh_key_size	= efx_ethtool_get_rxfh_key_size,
+	.rxfh_per_ctx_fields	= true,
 	.rxfh_per_ctx_key	= true,
 	.cap_rss_rxnfc_adds	= true,
 	.rxfh_priv_size		= sizeof(struct efx_rss_context_priv),
diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
index 330ca99800ce..bd9fd95bb82f 100644
--- a/net/ethtool/ioctl.c
+++ b/net/ethtool/ioctl.c
@@ -1075,6 +1075,10 @@ ethtool_set_rxfh_fields(struct net_device *dev, u32 cmd, void __user *useraddr)
 	if (rc)
 		return rc;
 
+	if (info.flow_type & FLOW_RSS && info.rss_context &&
+	    !ops->rxfh_per_ctx_fields)
+		return -EINVAL;
+
 	if (ops->get_rxfh) {
 		struct ethtool_rxfh_param rxfh = {};
 
@@ -1105,6 +1109,10 @@ ethtool_get_rxfh_fields(struct net_device *dev, u32 cmd, void __user *useraddr)
 	if (ret)
 		return ret;
 
+	if (info.flow_type & FLOW_RSS && info.rss_context &&
+	    !ops->rxfh_per_ctx_fields)
+		return -EINVAL;
+
 	ret = ops->get_rxnfc(dev, &info, NULL);
 	if (ret < 0)
 		return ret;
-- 
2.49.0



* Re: [PATCH net-next 3/9] net: ethtool: require drivers to opt into the per-RSS ctx RXFH
  2025-06-11 14:59 ` [PATCH net-next 3/9] net: ethtool: require drivers to opt into the per-RSS ctx RXFH Jakub Kicinski
@ 2025-06-12 21:09   ` Joe Damato
From: Joe Damato @ 2025-06-12 21:09 UTC
  To: Jakub Kicinski
  Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms,
	ecree.xilinx, saeedm, tariqt, leon, linux-rdma, linux-net-drivers

On Wed, Jun 11, 2025 at 07:59:43AM -0700, Jakub Kicinski wrote:
> RX Flow Hashing supports using a different configuration for each
> RSS context. Only two drivers seem to support this. Make sure we
> uniformly error out for drivers that don't.
> 
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
> CC: saeedm@nvidia.com
> CC: tariqt@nvidia.com
> CC: leon@kernel.org
> CC: ecree.xilinx@gmail.com
> CC: linux-rdma@vger.kernel.org
> CC: linux-net-drivers@amd.com
> ---
>  include/linux/ethtool.h                              | 3 +++
>  drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c | 1 +
>  drivers/net/ethernet/sfc/ethtool.c                   | 1 +
>  net/ethtool/ioctl.c                                  | 8 ++++++++
>  4 files changed, 13 insertions(+)

Reviewed-by: Joe Damato <joe@dama.to>

