netdev.vger.kernel.org archive mirror
* [iwl-next v1 0/3] ice: multiqueue on subfunction
@ 2024-10-31  6:00 Michal Swiatkowski
  2024-10-31  6:00 ` [iwl-next v1 1/3] ice: support max_io_eqs for subfunction Michal Swiatkowski
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Michal Swiatkowski @ 2024-10-31  6:00 UTC (permalink / raw)
  To: intel-wired-lan; +Cc: netdev, sridhar.samudrala

Hi,

This patchset adds support for multiple queues on subfunction devices,
mostly reusing the PF code. The max_io_eqs devlink port function
parameter is used as the maximum Rx/Tx queue count.

Statistics reporting is also added as part of these changes.
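The net effect of the series can be sketched in plain C (a simplified model for illustration, not the driver code; `max_io_eqs` mirrors the new field, everything else here is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the queue-count derivation after this series:
 * if max_io_eqs is set on the (sub)function, it replaces the MSI-X
 * budget as the starting point; the result is then clamped by the
 * online CPU count and the hardware queue capability, as in
 * ice_get_max_txq()/ice_get_max_rxq().
 */
static uint16_t max_queues(uint16_t num_msix, uint32_t max_io_eqs,
			   uint16_t online_cpus, uint16_t hw_num_q)
{
	uint16_t n = num_msix;

	if (max_io_eqs)
		n = (uint16_t)max_io_eqs;
	if (online_cpus < n)
		n = online_cpus;
	if (hw_num_q < n)
		n = hw_num_q;
	return n;
}
```

With `max_io_eqs` left at zero the old PF behaviour (MSI-X budget clamped by CPUs and hardware caps) is preserved.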

Michal Swiatkowski (3):
  ice: support max_io_eqs for subfunction
  ice: ethtool support for SF
  ice: allow changing SF VSI queues number

 drivers/net/ethernet/intel/ice/devlink/port.c | 37 +++++++++++
 drivers/net/ethernet/intel/ice/ice.h          |  2 +
 drivers/net/ethernet/intel/ice/ice_ethtool.c  | 65 +++++++++++++++----
 drivers/net/ethernet/intel/ice/ice_lib.c      | 63 ++++++++++--------
 drivers/net/ethernet/intel/ice/ice_sf_eth.c   |  1 +
 5 files changed, 128 insertions(+), 40 deletions(-)

-- 
2.42.0


^ permalink raw reply	[flat|nested] 5+ messages in thread

* [iwl-next v1 1/3] ice: support max_io_eqs for subfunction
  2024-10-31  6:00 [iwl-next v1 0/3] ice: multiqueue on subfunction Michal Swiatkowski
@ 2024-10-31  6:00 ` Michal Swiatkowski
  2024-11-06  9:53   ` Simon Horman
  2024-10-31  6:00 ` [iwl-next v1 2/3] ice: ethtool support for SF Michal Swiatkowski
  2024-10-31  6:00 ` [iwl-next v1 3/3] ice: allow changing SF VSI queues number Michal Swiatkowski
  2 siblings, 1 reply; 5+ messages in thread
From: Michal Swiatkowski @ 2024-10-31  6:00 UTC (permalink / raw)
  To: intel-wired-lan; +Cc: netdev, sridhar.samudrala

Implement get and set handlers for the maximum number of IO event
queues for a SF. The value is used to derive the maximum number of
Rx/Tx queues on the subfunction device.

If the value isn't set when the port is activated, set it to a low
default value.
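The set/get and activation-time-default semantics can be sketched as follows (a hypothetical simplification, not the driver code; `SF_DEFAULT_EQS` is 8 as in the patch, and `ERR_INVAL` stands in for -EINVAL):

```c
#include <assert.h>
#include <stdint.h>

#define SF_DEFAULT_EQS	8
#define ERR_INVAL	(-22)	/* stand-in for -EINVAL */

struct sf_port {
	uint32_t max_io_eqs;	/* 0 means "not set yet" */
	int active;
};

/* Reject values above the online CPU count, as the patch does. */
static int sf_max_io_eqs_set(struct sf_port *p, uint32_t val,
			     unsigned int online_cpus)
{
	if (val > online_cpus)
		return ERR_INVAL;
	p->max_io_eqs = val;
	return 0;
}

/* On activation, fall back to a low default if nothing was set. */
static int sf_activate(struct sf_port *p, unsigned int online_cpus)
{
	int err;

	if (!p->max_io_eqs) {
		err = sf_max_io_eqs_set(p, SF_DEFAULT_EQS, online_cpus);
		if (err)
			return err;
	}
	p->active = 1;
	return 0;
}
```

Note that in this model, activating on a machine with fewer than `SF_DEFAULT_EQS` online CPUs fails the range check — the interaction Simon Horman asks about later in the thread.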

Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
---
 drivers/net/ethernet/intel/ice/devlink/port.c | 37 +++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice.h          |  2 +
 2 files changed, 39 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/devlink/port.c b/drivers/net/ethernet/intel/ice/devlink/port.c
index 767419a67fef..a723895e4dff 100644
--- a/drivers/net/ethernet/intel/ice/devlink/port.c
+++ b/drivers/net/ethernet/intel/ice/devlink/port.c
@@ -530,6 +530,33 @@ void ice_devlink_destroy_sf_dev_port(struct ice_sf_dev *sf_dev)
 	devl_port_unregister(&sf_dev->priv->devlink_port);
 }
 
+static int
+ice_devlink_port_fn_max_io_eqs_set(struct devlink_port *port, u32 max_io_eqs,
+				   struct netlink_ext_ack *extack)
+{
+	struct ice_dynamic_port *dyn_port = ice_devlink_port_to_dyn(port);
+
+	if (max_io_eqs > num_online_cpus()) {
+		NL_SET_ERR_MSG_MOD(extack, "Supplied value out of range");
+		return -EINVAL;
+	}
+
+	dyn_port->vsi->max_io_eqs = max_io_eqs;
+
+	return 0;
+}
+
+static int
+ice_devlink_port_fn_max_io_eqs_get(struct devlink_port *port, u32 *max_io_eqs,
+				   struct netlink_ext_ack *extack)
+{
+	struct ice_dynamic_port *dyn_port = ice_devlink_port_to_dyn(port);
+
+	*max_io_eqs = dyn_port->vsi->max_io_eqs;
+
+	return 0;
+}
+
 /**
  * ice_activate_dynamic_port - Activate a dynamic port
  * @dyn_port: dynamic port instance to activate
@@ -548,6 +575,14 @@ ice_activate_dynamic_port(struct ice_dynamic_port *dyn_port,
 	if (dyn_port->active)
 		return 0;
 
+	if (!dyn_port->vsi->max_io_eqs) {
+		err = ice_devlink_port_fn_max_io_eqs_set(&dyn_port->devlink_port,
+							 ICE_SF_DEFAULT_EQS,
+							 extack);
+		if (err)
+			return err;
+	}
+
 	err = ice_sf_eth_activate(dyn_port, extack);
 	if (err)
 		return err;
@@ -807,6 +842,8 @@ static const struct devlink_port_ops ice_devlink_port_sf_ops = {
 	.port_fn_hw_addr_set = ice_devlink_port_fn_hw_addr_set,
 	.port_fn_state_get = ice_devlink_port_fn_state_get,
 	.port_fn_state_set = ice_devlink_port_fn_state_set,
+	.port_fn_max_io_eqs_set = ice_devlink_port_fn_max_io_eqs_set,
+	.port_fn_max_io_eqs_get = ice_devlink_port_fn_max_io_eqs_get,
 };
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 70d5294a558c..ca0739625d3b 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -109,6 +109,7 @@
 #define ICE_Q_WAIT_MAX_RETRY	(5 * ICE_Q_WAIT_RETRY_LIMIT)
 #define ICE_MAX_LG_RSS_QS	256
 #define ICE_INVAL_Q_INDEX	0xffff
+#define ICE_SF_DEFAULT_EQS	8
 
 #define ICE_MAX_RXQS_PER_TC		256	/* Used when setting VSI context per TC Rx queues */
 
@@ -443,6 +444,7 @@ struct ice_vsi {
 	u8 old_numtc;
 	u16 old_ena_tc;
 
+	u32 max_io_eqs;
 	/* setup back reference, to which aggregator node this VSI
 	 * corresponds to
 	 */
-- 
2.42.0



* [iwl-next v1 2/3] ice: ethtool support for SF
  2024-10-31  6:00 [iwl-next v1 0/3] ice: multiqueue on subfunction Michal Swiatkowski
  2024-10-31  6:00 ` [iwl-next v1 1/3] ice: support max_io_eqs for subfunction Michal Swiatkowski
@ 2024-10-31  6:00 ` Michal Swiatkowski
  2024-10-31  6:00 ` [iwl-next v1 3/3] ice: allow changing SF VSI queues number Michal Swiatkowski
  2 siblings, 0 replies; 5+ messages in thread
From: Michal Swiatkowski @ 2024-10-31  6:00 UTC (permalink / raw)
  To: intel-wired-lan; +Cc: netdev, sridhar.samudrala

Add initial ethtool support for subfunction devices. They mostly share
the same ethtool ops as the PF; however, define a new ops structure
that exposes only the needed subset of operations.

Define a new function for getting the stats length, as a subfunction
VSI has fewer stats available than a PF one.
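The idea — shared implementations, with the SF stringset dropping the PF-wide and PFC counters — can be sketched like this (illustrative model only; the stat lengths and the ops struct are hypothetical, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stat table sizes: PF-wide counters exist only for the
 * PF, while VSI- and queue-level counters are shared with the SF. */
#define PF_STATS_LEN	42
#define VSI_STATS_LEN	14

static int q_stats_len(int txq, int rxq)
{
	return txq + rxq;	/* say, one u64 counter per queue */
}

static int pf_sset_count(int txq, int rxq)
{
	return PF_STATS_LEN + VSI_STATS_LEN + q_stats_len(txq, rxq);
}

static int sf_sset_count(int txq, int rxq)
{
	return VSI_STATS_LEN + q_stats_len(txq, rxq);
}

/* A cut-down ops table, mirroring how ice_ethtool_sf_ops fills in
 * only the callbacks that make sense for a subfunction. */
struct my_ethtool_ops {
	int (*get_sset_count)(int txq, int rxq);
	int (*get_regs_len)(void);	/* PF-only op, left NULL for SF */
};

static const struct my_ethtool_ops sf_ops = {
	.get_sset_count = sf_sset_count,
};
```

Callbacks left NULL simply make the corresponding ethtool command return -EOPNOTSUPP for the SF netdev.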

Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
---
 drivers/net/ethernet/intel/ice/ice_ethtool.c | 28 ++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_sf_eth.c  |  1 +
 2 files changed, 29 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index b552439fc1f9..9e2f20ed55d5 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -47,6 +47,7 @@ static int ice_q_stats_len(struct net_device *netdev)
 		 / sizeof(u64))
 #define ICE_ALL_STATS_LEN(n)	(ICE_PF_STATS_LEN + ICE_PFC_STATS_LEN + \
 				 ICE_VSI_STATS_LEN + ice_q_stats_len(n))
+#define ICE_SF_STATS_LEN(n)	(ICE_VSI_STATS_LEN + ice_q_stats_len(n))
 
 static const struct ice_stats ice_gstrings_vsi_stats[] = {
 	ICE_VSI_STAT("rx_unicast", eth_stats.rx_unicast),
@@ -4431,6 +4432,16 @@ static int ice_repr_get_sset_count(struct net_device *netdev, int sset)
 	}
 }
 
+static int ice_sf_get_sset_count(struct net_device *netdev, int sset)
+{
+	switch (sset) {
+	case ETH_SS_STATS:
+		return ICE_SF_STATS_LEN(netdev);
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 #define ICE_I2C_EEPROM_DEV_ADDR		0xA0
 #define ICE_I2C_EEPROM_DEV_ADDR2	0xA2
 #define ICE_MODULE_TYPE_SFP		0x03
@@ -4870,6 +4881,23 @@ void ice_set_ethtool_repr_ops(struct net_device *netdev)
 	netdev->ethtool_ops = &ice_ethtool_repr_ops;
 }
 
+static const struct ethtool_ops ice_ethtool_sf_ops = {
+	.get_drvinfo		= ice_get_drvinfo,
+	.get_link		= ethtool_op_get_link,
+	.get_channels		= ice_get_channels,
+	.set_channels		= ice_set_channels,
+	.get_ringparam		= ice_get_ringparam,
+	.set_ringparam		= ice_set_ringparam,
+	.get_strings		= ice_get_strings,
+	.get_ethtool_stats	= ice_get_ethtool_stats,
+	.get_sset_count		= ice_sf_get_sset_count,
+};
+
+void ice_set_ethtool_sf_ops(struct net_device *netdev)
+{
+	netdev->ethtool_ops = &ice_ethtool_sf_ops;
+}
+
 /**
  * ice_set_ethtool_ops - setup netdev ethtool ops
  * @netdev: network interface device structure
diff --git a/drivers/net/ethernet/intel/ice/ice_sf_eth.c b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
index 1a2c94375ca7..d63492c25949 100644
--- a/drivers/net/ethernet/intel/ice/ice_sf_eth.c
+++ b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
@@ -58,6 +58,7 @@ static int ice_sf_cfg_netdev(struct ice_dynamic_port *dyn_port,
 	eth_hw_addr_set(netdev, dyn_port->hw_addr);
 	ether_addr_copy(netdev->perm_addr, dyn_port->hw_addr);
 	netdev->netdev_ops = &ice_sf_netdev_ops;
+	ice_set_ethtool_sf_ops(netdev);
 	SET_NETDEV_DEVLINK_PORT(netdev, devlink_port);
 
 	err = register_netdev(netdev);
-- 
2.42.0



* [iwl-next v1 3/3] ice: allow changing SF VSI queues number
  2024-10-31  6:00 [iwl-next v1 0/3] ice: multiqueue on subfunction Michal Swiatkowski
  2024-10-31  6:00 ` [iwl-next v1 1/3] ice: support max_io_eqs for subfunction Michal Swiatkowski
  2024-10-31  6:00 ` [iwl-next v1 2/3] ice: ethtool support for SF Michal Swiatkowski
@ 2024-10-31  6:00 ` Michal Swiatkowski
  2 siblings, 0 replies; 5+ messages in thread
From: Michal Swiatkowski @ 2024-10-31  6:00 UTC (permalink / raw)
  To: intel-wired-lan; +Cc: netdev, sridhar.samudrala

Move setting the number of Rx and Tx queues into separate helper
functions and use them in the SF case.

Adjust getting the maximum Rx and Tx queue counts for the SF use case.
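The new helpers boil down to: an explicit user request wins outright; otherwise the default budget is clamped by the online CPU count, and RSS-disabled Rx is pinned to one queue. A simplified sketch (not the driver code; note that in the patch the RSS argument is passed as the negation of the RSS-enabled flag):

```c
#include <assert.h>
#include <stdint.h>

static uint16_t min_u16(uint16_t a, uint16_t b)
{
	return a < b ? a : b;
}

/* Mirrors ice_vsi_set_num_txqs(): an explicit request from the user
 * takes precedence; otherwise use the default budget clamped by the
 * number of online CPUs. */
static uint16_t set_num_txqs(uint16_t req_txq, uint16_t def_qs,
			     uint16_t online_cpus)
{
	if (req_txq)
		return req_txq;
	return min_u16(def_qs, online_cpus);
}

/* Mirrors ice_vsi_set_num_rxqs(): with RSS disabled only one Rx
 * queue is usable, regardless of any request or default. */
static uint16_t set_num_rxqs(int rss_disabled, uint16_t req_rxq,
			     uint16_t def_qs, uint16_t online_cpus)
{
	if (rss_disabled)
		return 1;
	return set_num_txqs(req_rxq, def_qs, online_cpus);
}
```

For the SF path the default budget is `min(vsi->max_io_eqs, avail queues)` instead of the PF's MSI-X budget.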

Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
---
 drivers/net/ethernet/intel/ice/ice_ethtool.c | 37 +++++++-----
 drivers/net/ethernet/intel/ice/ice_lib.c     | 63 ++++++++++++--------
 2 files changed, 60 insertions(+), 40 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 9e2f20ed55d5..c68f7796b83e 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -3786,22 +3786,31 @@ ice_get_ts_info(struct net_device *dev, struct kernel_ethtool_ts_info *info)
 
 /**
- * ice_get_max_txq - return the maximum number of Tx queues for in a PF
+ * ice_get_max_txq - return the maximum number of Tx queues for a VSI
- * @pf: PF structure
+ * @vsi: VSI structure
  */
-static int ice_get_max_txq(struct ice_pf *pf)
+static int ice_get_max_txq(struct ice_vsi *vsi)
 {
-	return min3(pf->num_lan_msix, (u16)num_online_cpus(),
-		    (u16)pf->hw.func_caps.common_cap.num_txq);
+	u16 num_queues = vsi->back->num_lan_msix;
+
+	if (vsi->max_io_eqs)
+		num_queues = vsi->max_io_eqs;
+	return min3(num_queues, (u16)num_online_cpus(),
+		    (u16)vsi->back->hw.func_caps.common_cap.num_txq);
 }
 
 /**
- * ice_get_max_rxq - return the maximum number of Rx queues for in a PF
+ * ice_get_max_rxq - return the maximum number of Rx queues for a VSI
- * @pf: PF structure
+ * @vsi: VSI structure
  */
-static int ice_get_max_rxq(struct ice_pf *pf)
+static int ice_get_max_rxq(struct ice_vsi *vsi)
 {
-	return min3(pf->num_lan_msix, (u16)num_online_cpus(),
-		    (u16)pf->hw.func_caps.common_cap.num_rxq);
+	u16 num_queues = vsi->back->num_lan_msix;
+
+	if (vsi->max_io_eqs)
+		num_queues = vsi->max_io_eqs;
+
+	return min3(num_queues, (u16)num_online_cpus(),
+		    (u16)vsi->back->hw.func_caps.common_cap.num_rxq);
 }
 
 /**
@@ -3839,8 +3848,8 @@ ice_get_channels(struct net_device *dev, struct ethtool_channels *ch)
 	struct ice_pf *pf = vsi->back;
 
 	/* report maximum channels */
-	ch->max_rx = ice_get_max_rxq(pf);
-	ch->max_tx = ice_get_max_txq(pf);
+	ch->max_rx = ice_get_max_rxq(vsi);
+	ch->max_tx = ice_get_max_txq(vsi);
 	ch->max_combined = min_t(int, ch->max_rx, ch->max_tx);
 
 	/* report current channels */
@@ -3958,14 +3967,14 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
 			   vsi->tc_cfg.numtc);
 		return -EINVAL;
 	}
-	if (new_rx > ice_get_max_rxq(pf)) {
+	if (new_rx > ice_get_max_rxq(vsi)) {
 		netdev_err(dev, "Maximum allowed Rx channels is %d\n",
-			   ice_get_max_rxq(pf));
+			   ice_get_max_rxq(vsi));
 		return -EINVAL;
 	}
-	if (new_tx > ice_get_max_txq(pf)) {
+	if (new_tx > ice_get_max_txq(vsi)) {
 		netdev_err(dev, "Maximum allowed Tx channels is %d\n",
-			   ice_get_max_txq(pf));
+			   ice_get_max_txq(vsi));
 		return -EINVAL;
 	}
 
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 01220e21cc81..64a6152eaaef 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -157,6 +157,32 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
 	}
 }
 
+static void ice_vsi_set_num_txqs(struct ice_vsi *vsi, u16 def_qs)
+{
+	if (vsi->req_txq) {
+		vsi->alloc_txq = vsi->req_txq;
+		vsi->num_txq = vsi->req_txq;
+	} else {
+		vsi->alloc_txq = min_t(u16, def_qs, (u16)num_online_cpus());
+	}
+}
+
+static void ice_vsi_set_num_rxqs(struct ice_vsi *vsi, bool rss_dis, u16 def_qs)
+{
+	/* only 1 Rx queue when RSS is disabled */
+	if (rss_dis) {
+		vsi->alloc_rxq = 1;
+		return;
+	}
+
+	if (vsi->req_rxq) {
+		vsi->alloc_rxq = vsi->req_rxq;
+		vsi->num_rxq = vsi->req_rxq;
+	} else {
+		vsi->alloc_rxq = min_t(u16, def_qs, (u16)num_online_cpus());
+	}
+}
+
 /**
  * ice_vsi_set_num_qs - Set number of queues, descriptors and vectors for a VSI
  * @vsi: the VSI being configured
@@ -174,31 +200,13 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi)
 
 	switch (vsi_type) {
 	case ICE_VSI_PF:
-		if (vsi->req_txq) {
-			vsi->alloc_txq = vsi->req_txq;
-			vsi->num_txq = vsi->req_txq;
-		} else {
-			vsi->alloc_txq = min3(pf->num_lan_msix,
-					      ice_get_avail_txq_count(pf),
-					      (u16)num_online_cpus());
-		}
-
+		ice_vsi_set_num_txqs(vsi, min(pf->num_lan_msix,
+					      ice_get_avail_txq_count(pf)));
 		pf->num_lan_tx = vsi->alloc_txq;
 
-		/* only 1 Rx queue unless RSS is enabled */
-		if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {
-			vsi->alloc_rxq = 1;
-		} else {
-			if (vsi->req_rxq) {
-				vsi->alloc_rxq = vsi->req_rxq;
-				vsi->num_rxq = vsi->req_rxq;
-			} else {
-				vsi->alloc_rxq = min3(pf->num_lan_msix,
-						      ice_get_avail_rxq_count(pf),
-						      (u16)num_online_cpus());
-			}
-		}
-
+		ice_vsi_set_num_rxqs(vsi, !test_bit(ICE_FLAG_RSS_ENA, pf->flags),
+				     min(pf->num_lan_msix,
+					 ice_get_avail_rxq_count(pf)));
 		pf->num_lan_rx = vsi->alloc_rxq;
 
 		vsi->num_q_vectors = min_t(int, pf->num_lan_msix,
@@ -206,9 +214,12 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi)
 						 vsi->alloc_txq));
 		break;
 	case ICE_VSI_SF:
-		vsi->alloc_txq = 1;
-		vsi->alloc_rxq = 1;
-		vsi->num_q_vectors = 1;
+		ice_vsi_set_num_txqs(vsi, min(vsi->max_io_eqs,
+					      ice_get_avail_txq_count(pf)));
+		ice_vsi_set_num_rxqs(vsi, !test_bit(ICE_FLAG_RSS_ENA, pf->flags),
+				     min(vsi->max_io_eqs,
+					 ice_get_avail_rxq_count(pf)));
+		vsi->num_q_vectors = max_t(int, vsi->alloc_rxq, vsi->alloc_txq);
 		vsi->irq_dyn_alloc = true;
 		break;
 	case ICE_VSI_VF:
-- 
2.42.0



* Re: [iwl-next v1 1/3] ice: support max_io_eqs for subfunction
  2024-10-31  6:00 ` [iwl-next v1 1/3] ice: support max_io_eqs for subfunction Michal Swiatkowski
@ 2024-11-06  9:53   ` Simon Horman
  0 siblings, 0 replies; 5+ messages in thread
From: Simon Horman @ 2024-11-06  9:53 UTC (permalink / raw)
  To: Michal Swiatkowski; +Cc: intel-wired-lan, netdev, sridhar.samudrala

On Thu, Oct 31, 2024 at 07:00:07AM +0100, Michal Swiatkowski wrote:
> Implement get and set handlers for the maximum number of IO event
> queues for a SF. The value is used to derive the maximum number of
> Rx/Tx queues on the subfunction device.
> 
> If the value isn't set when the port is activated, set it to a low
> default value.
> 
> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> ---
>  drivers/net/ethernet/intel/ice/devlink/port.c | 37 +++++++++++++++++++
>  drivers/net/ethernet/intel/ice/ice.h          |  2 +
>  2 files changed, 39 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/ice/devlink/port.c b/drivers/net/ethernet/intel/ice/devlink/port.c

...

> @@ -548,6 +575,14 @@ ice_activate_dynamic_port(struct ice_dynamic_port *dyn_port,
>  	if (dyn_port->active)
>  		return 0;
>  
> +	if (!dyn_port->vsi->max_io_eqs) {
> +		err = ice_devlink_port_fn_max_io_eqs_set(&dyn_port->devlink_port,
> +							 ICE_SF_DEFAULT_EQS,
> +							 extack);

Hi Michal,

I am a little confused about the relationship between this,
where ICE_SF_DEFAULT_EQS is 8, and the following check in
ice_devlink_port_fn_max_io_eqs_set().

	if (max_io_eqs > num_online_cpus()) {
		NL_SET_ERR_MSG_MOD(extack, "Supplied value out of range");
		return -EINVAL;
	}

What is the behaviour on systems with more than 8 online CPUs?

> +		if (err)
> +			return err;
> +	}
> +
>  	err = ice_sf_eth_activate(dyn_port, extack);
>  	if (err)
>  		return err;

...

> diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
> index 70d5294a558c..ca0739625d3b 100644
> --- a/drivers/net/ethernet/intel/ice/ice.h
> +++ b/drivers/net/ethernet/intel/ice/ice.h
> @@ -109,6 +109,7 @@
>  #define ICE_Q_WAIT_MAX_RETRY	(5 * ICE_Q_WAIT_RETRY_LIMIT)
>  #define ICE_MAX_LG_RSS_QS	256
>  #define ICE_INVAL_Q_INDEX	0xffff
> +#define ICE_SF_DEFAULT_EQS	8
>  
>  #define ICE_MAX_RXQS_PER_TC		256	/* Used when setting VSI context per TC Rx queues */
>  

...

