From: Simon Horman <horms@kernel.org>
To: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Cc: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
	sridhar.samudrala@intel.com
Subject: Re: [iwl-next v1 1/3] ice: support max_io_eqs for subfunction
Date: Wed, 6 Nov 2024 09:53:50 +0000
Message-ID: <20241106095350.GJ4507@kernel.org>
In-Reply-To: <20241031060009.38979-2-michal.swiatkowski@linux.intel.com>

On Thu, Oct 31, 2024 at 07:00:07AM +0100, Michal Swiatkowski wrote:
> Implement get and set handlers for the maximum number of IO event
> queues for a subfunction (SF). The value is used to derive the maximum
> number of Rx/Tx queues on the subfunction device.
> 
> If the value isn't set when activating the port, set it to a low
> default value.
> 
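[For context: this is the generic devlink port function attribute, so a
user would set it before activating the SF, roughly as below. The PCI
address and port index here are illustrative only.

	$ devlink port function set pci/0000:31:00.1/1 max_io_eqs 4
	$ devlink port show pci/0000:31:00.1/1
]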
> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> ---
>  drivers/net/ethernet/intel/ice/devlink/port.c | 37 +++++++++++++++++++
>  drivers/net/ethernet/intel/ice/ice.h          |  2 +
>  2 files changed, 39 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/ice/devlink/port.c b/drivers/net/ethernet/intel/ice/devlink/port.c

...

> @@ -548,6 +575,14 @@ ice_activate_dynamic_port(struct ice_dynamic_port *dyn_port,
>  	if (dyn_port->active)
>  		return 0;
>  
> +	if (!dyn_port->vsi->max_io_eqs) {
> +		err = ice_devlink_port_fn_max_io_eqs_set(&dyn_port->devlink_port,
> +							 ICE_SF_DEFAULT_EQS,
> +							 extack);

Hi Michal,

I am a little confused about the relationship between this,
where ICE_SF_DEFAULT_EQS is 8, and the following check in
ice_devlink_port_fn_max_io_eqs_set().

	if (max_io_eqs > num_online_cpus()) {
		NL_SET_ERR_MSG_MOD(extack, "Supplied value out of range");
		return -EINVAL;
	}

What is the behaviour on systems with fewer than 8 online CPUs?
It looks like setting the default of 8 would fail the check above,
and so activation would fail.
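
If the intent is for activation to always succeed, one option could be
to clamp the default before calling the set handler. Completely
untested sketch, assuming the num_online_cpus() check in the set
handler stays as is:

	if (!dyn_port->vsi->max_io_eqs) {
		/* Clamp the default so the range check in
		 * ice_devlink_port_fn_max_io_eqs_set() cannot reject it
		 * on systems with fewer than ICE_SF_DEFAULT_EQS CPUs.
		 */
		u32 eqs = min_t(u32, ICE_SF_DEFAULT_EQS, num_online_cpus());

		err = ice_devlink_port_fn_max_io_eqs_set(&dyn_port->devlink_port,
							 eqs, extack);
		if (err)
			return err;
	}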

> +		if (err)
> +			return err;
> +	}
> +
>  	err = ice_sf_eth_activate(dyn_port, extack);
>  	if (err)
>  		return err;

...

> diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
> index 70d5294a558c..ca0739625d3b 100644
> --- a/drivers/net/ethernet/intel/ice/ice.h
> +++ b/drivers/net/ethernet/intel/ice/ice.h
> @@ -109,6 +109,7 @@
>  #define ICE_Q_WAIT_MAX_RETRY	(5 * ICE_Q_WAIT_RETRY_LIMIT)
>  #define ICE_MAX_LG_RSS_QS	256
>  #define ICE_INVAL_Q_INDEX	0xffff
> +#define ICE_SF_DEFAULT_EQS	8
>  
>  #define ICE_MAX_RXQS_PER_TC		256	/* Used when setting VSI context per TC Rx queues */
>  

...
