* [PATCH iwl-next v2] ice: use netif_get_num_default_rss_queues()
From: Michal Swiatkowski @ 2025-10-28 7:06 UTC
To: intel-wired-lan
Cc: netdev, pmenzel, aleksander.lobakin, przemyslaw.kitszel,
jacob.e.keller, Michal Swiatkowski
On some high-core systems (like AMD EPYC Bergamo or Intel Clearwater
Forest), loading the ice driver with default values can lead to queue/irq
exhaustion, leaving no additional resources for SR-IOV.
In most cases there is no performance reason for using more than half of
num_online_cpus(). Limit the default value to that by using the generic
netif_get_num_default_rss_queues().
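As a rough illustration of the intent only (a simplified, untested sketch;
the real helper lives in net/core/dev.c and additionally accounts for SMT
siblings and kdump kernels):

	/* simplified sketch, not the in-kernel implementation */
	static unsigned int default_rss_queues_sketch(void)
	{
		unsigned int cpus = num_online_cpus();

		/* prefer roughly half of the online CPUs, at least one */
		return max_t(unsigned int, cpus / 2, 1);
	}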
The number of queues can still be raised up to num_online_cpus() with
ethtool, by calling:
$ ethtool -L ethX combined max_cpu
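The currently configured and maximum supported channel counts can be
inspected first (ethX is a placeholder interface name):
$ ethtool -l ethX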
This change affects only the default queue amount.
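As a rough, hypothetical illustration: on a 256-CPU (non-SMT) system with
RDMA enabled, the driver used to request about 2 * 256 data vectors by
default (LAN + RDMA); with this change the default drops to roughly
2 * 128, leaving MSI-X headroom for SR-IOV VFs.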
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
---
v1 --> v2:
* Follow Olek's comment and switch from custom limiting to the generic
netif_...() function.
* Add more info in commit message (Paul)
* Dropping RB tags, as it is different patch now
---
drivers/net/ethernet/intel/ice/ice_irq.c | 5 +++--
drivers/net/ethernet/intel/ice/ice_lib.c | 12 ++++++++----
2 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c
index 30801fd375f0..1d9b2d646474 100644
--- a/drivers/net/ethernet/intel/ice/ice_irq.c
+++ b/drivers/net/ethernet/intel/ice/ice_irq.c
@@ -106,9 +106,10 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf,
#define ICE_RDMA_AEQ_MSIX 1
static int ice_get_default_msix_amount(struct ice_pf *pf)
{
- return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() +
+ return ICE_MIN_LAN_OICR_MSIX + netif_get_num_default_rss_queues() +
(test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX : 0) +
- (ice_is_rdma_ena(pf) ? num_online_cpus() + ICE_RDMA_AEQ_MSIX : 0);
+ (ice_is_rdma_ena(pf) ? netif_get_num_default_rss_queues() +
+ ICE_RDMA_AEQ_MSIX : 0);
}
/**
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index bac481e8140d..e366d089bef9 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -159,12 +159,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
static u16 ice_get_rxq_count(struct ice_pf *pf)
{
- return min(ice_get_avail_rxq_count(pf), num_online_cpus());
+ return min(ice_get_avail_rxq_count(pf),
+ netif_get_num_default_rss_queues());
}
static u16 ice_get_txq_count(struct ice_pf *pf)
{
- return min(ice_get_avail_txq_count(pf), num_online_cpus());
+ return min(ice_get_avail_txq_count(pf),
+ netif_get_num_default_rss_queues());
}
/**
@@ -907,13 +909,15 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi)
if (vsi->type == ICE_VSI_CHNL)
vsi->rss_size = min_t(u16, vsi->num_rxq, max_rss_size);
else
- vsi->rss_size = min_t(u16, num_online_cpus(),
+ vsi->rss_size = min_t(u16,
+ netif_get_num_default_rss_queues(),
max_rss_size);
vsi->rss_lut_type = ICE_LUT_PF;
break;
case ICE_VSI_SF:
vsi->rss_table_size = ICE_LUT_VSI_SIZE;
- vsi->rss_size = min_t(u16, num_online_cpus(), max_rss_size);
+ vsi->rss_size = min_t(u16, netif_get_num_default_rss_queues(),
+ max_rss_size);
vsi->rss_lut_type = ICE_LUT_VSI;
break;
case ICE_VSI_VF:
--
2.49.0
* RE: [Intel-wired-lan] [PATCH iwl-next v2] ice: use netif_get_num_default_rss_queues()
From: Loktionov, Aleksandr @ 2025-10-28 7:48 UTC
To: Michal Swiatkowski, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, pmenzel@molgen.mpg.de,
Lobakin, Aleksander, Kitszel, Przemyslaw, Keller, Jacob E
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf
> Of Michal Swiatkowski
> Sent: Tuesday, October 28, 2025 8:07 AM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; pmenzel@molgen.mpg.de; Lobakin, Aleksander
> <aleksander.lobakin@intel.com>; Kitszel, Przemyslaw
> <przemyslaw.kitszel@intel.com>; Keller, Jacob E
> <jacob.e.keller@intel.com>; Michal Swiatkowski
> <michal.swiatkowski@linux.intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-next v2] ice: use
> netif_get_num_default_rss_queues()
>
> On some high-core systems (like AMD EPYC Bergamo or Intel Clearwater
> Forest), loading the ice driver with default values can lead to queue/irq
> exhaustion, leaving no additional resources for SR-IOV.
>
> In most cases there is no performance reason for using more than half of
> num_online_cpus(). Limit the default value to that by using the generic
> netif_get_num_default_rss_queues().
>
> The number of queues can still be raised up to num_online_cpus() with
> ethtool, by calling:
> $ ethtool -L ethX combined max_cpu
>
It could be nice to use $(nproc)?
$ ethtool -L ethX combined $(nproc)
> This change affects only the default queue amount.
>
> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> ---
> v1 --> v2:
> * Follow Olek's comment and switch from custom limiting to the
> generic
> netif_...() function.
> * Add more info in commit message (Paul)
> * Dropping RB tags, as it is different patch now
> ---
> drivers/net/ethernet/intel/ice/ice_irq.c | 5 +++--
> drivers/net/ethernet/intel/ice/ice_lib.c | 12 ++++++++----
> 2 files changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c
> b/drivers/net/ethernet/intel/ice/ice_irq.c
> index 30801fd375f0..1d9b2d646474 100644
> --- a/drivers/net/ethernet/intel/ice/ice_irq.c
> +++ b/drivers/net/ethernet/intel/ice/ice_irq.c
> @@ -106,9 +106,10 @@ static struct ice_irq_entry
> *ice_get_irq_res(struct ice_pf *pf,
> #define ICE_RDMA_AEQ_MSIX 1
> static int ice_get_default_msix_amount(struct ice_pf *pf)
> {
> - return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() +
> + return ICE_MIN_LAN_OICR_MSIX +
> netif_get_num_default_rss_queues() +
> (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX :
> 0) +
> - (ice_is_rdma_ena(pf) ? num_online_cpus() +
> ICE_RDMA_AEQ_MSIX : 0);
> + (ice_is_rdma_ena(pf) ?
> netif_get_num_default_rss_queues() +
> + ICE_RDMA_AEQ_MSIX : 0);
> }
>
> /**
> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c
> b/drivers/net/ethernet/intel/ice/ice_lib.c
> index bac481e8140d..e366d089bef9 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> @@ -159,12 +159,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi
> *vsi)
>
> static u16 ice_get_rxq_count(struct ice_pf *pf)
> {
> - return min(ice_get_avail_rxq_count(pf), num_online_cpus());
> + return min(ice_get_avail_rxq_count(pf),
> + netif_get_num_default_rss_queues());
> }
min(a, b) resolves to the type of the expression, which here will be int due to netif_get_num_default_rss_queues() returning int.
That implicitly truncates to u16 on return.
What do you think about making this explicit with min_t() to avoid type surprises?
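I.e. something along these lines (an untested sketch of the suggestion):

	return min_t(u16, ice_get_avail_rxq_count(pf),
		     netif_get_num_default_rss_queues());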
>
> static u16 ice_get_txq_count(struct ice_pf *pf)
> {
> - return min(ice_get_avail_txq_count(pf), num_online_cpus());
> + return min(ice_get_avail_txq_count(pf),
> + netif_get_num_default_rss_queues());
> }
Same min_t() ?
Otherwise, fine for me.
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
* Re: [Intel-wired-lan] [PATCH iwl-next v2] ice: use netif_get_num_default_rss_queues()
From: Michal Swiatkowski @ 2025-10-28 9:08 UTC
To: Loktionov, Aleksandr
Cc: Michal Swiatkowski, intel-wired-lan@lists.osuosl.org,
netdev@vger.kernel.org, pmenzel@molgen.mpg.de,
Lobakin, Aleksander, Kitszel, Przemyslaw, Keller, Jacob E
On Tue, Oct 28, 2025 at 07:48:11AM +0000, Loktionov, Aleksandr wrote:
>
>
> > -----Original Message-----
> > From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf
> > Of Michal Swiatkowski
> > Sent: Tuesday, October 28, 2025 8:07 AM
> > To: intel-wired-lan@lists.osuosl.org
> > Cc: netdev@vger.kernel.org; pmenzel@molgen.mpg.de; Lobakin, Aleksander
> > <aleksander.lobakin@intel.com>; Kitszel, Przemyslaw
> > <przemyslaw.kitszel@intel.com>; Keller, Jacob E
> > <jacob.e.keller@intel.com>; Michal Swiatkowski
> > <michal.swiatkowski@linux.intel.com>
> > Subject: [Intel-wired-lan] [PATCH iwl-next v2] ice: use
> > netif_get_num_default_rss_queues()
> >
> > On some high-core systems (like AMD EPYC Bergamo or Intel Clearwater
> > Forest), loading the ice driver with default values can lead to queue/irq
> > exhaustion, leaving no additional resources for SR-IOV.
> >
> > In most cases there is no performance reason for using more than half of
> > num_online_cpus(). Limit the default value to that by using the generic
> > netif_get_num_default_rss_queues().
> >
> > The number of queues can still be raised up to num_online_cpus() with
> > ethtool, by calling:
> > $ ethtool -L ethX combined max_cpu
> >
> It could be nice to use $(nproc)?
> $ ethtool -L ethX combined $(nproc)
Will change
>
> > This change affects only the default queue amount.
> >
> > Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> > ---
> > v1 --> v2:
> > * Follow Olek's comment and switch from custom limiting to the
> > generic
> > netif_...() function.
> > * Add more info in commit message (Paul)
> > * Dropping RB tags, as it is different patch now
> > ---
> > drivers/net/ethernet/intel/ice/ice_irq.c | 5 +++--
> > drivers/net/ethernet/intel/ice/ice_lib.c | 12 ++++++++----
> > 2 files changed, 11 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c
> > b/drivers/net/ethernet/intel/ice/ice_irq.c
> > index 30801fd375f0..1d9b2d646474 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_irq.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_irq.c
> > @@ -106,9 +106,10 @@ static struct ice_irq_entry
> > *ice_get_irq_res(struct ice_pf *pf,
> > #define ICE_RDMA_AEQ_MSIX 1
> > static int ice_get_default_msix_amount(struct ice_pf *pf)
> > {
> > - return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() +
> > + return ICE_MIN_LAN_OICR_MSIX +
> > netif_get_num_default_rss_queues() +
> > (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX :
> > 0) +
> > - (ice_is_rdma_ena(pf) ? num_online_cpus() +
> > ICE_RDMA_AEQ_MSIX : 0);
> > + (ice_is_rdma_ena(pf) ?
> > netif_get_num_default_rss_queues() +
> > + ICE_RDMA_AEQ_MSIX : 0);
> > }
> >
> > /**
> > diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c
> > b/drivers/net/ethernet/intel/ice/ice_lib.c
> > index bac481e8140d..e366d089bef9 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> > @@ -159,12 +159,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi
> > *vsi)
> >
> > static u16 ice_get_rxq_count(struct ice_pf *pf)
> > {
> > - return min(ice_get_avail_rxq_count(pf), num_online_cpus());
> > + return min(ice_get_avail_rxq_count(pf),
> > + netif_get_num_default_rss_queues());
> > }
> min(a, b) resolves to the type of the expression, which here will be int due to netif_get_num_default_rss_queues() returning int.
> That implicitly truncates to u16 on return.
> What do you think about making this explicit with min_t() to avoid type surprises?
We would just hide the truncation in the min_t() call. If we assume that
the CPU count / 2 can be higher than U16_MAX, we should rather add an
explicit check here. Is that needed? (The previous situation was the
same, as num_online_cpus() also returns an int.)
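(For scale: even a hypothetical 8192-CPU system would give only
8192 / 2 = 4096 default queues, far below U16_MAX = 65535.)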
>
> >
> > static u16 ice_get_txq_count(struct ice_pf *pf)
> > {
> > - return min(ice_get_avail_txq_count(pf), num_online_cpus());
> > + return min(ice_get_avail_txq_count(pf),
> > + netif_get_num_default_rss_queues());
> > }
>
> Same min_t() ?
>
> Otherwise, fine for me.
Thanks
>
> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>