From: Jacob Keller <jacob.e.keller@intel.com>
To: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>,
	<intel-wired-lan@lists.osuosl.org>
Cc: <netdev@vger.kernel.org>, <pawel.chmielewski@intel.com>,
	<sridhar.samudrala@intel.com>, <pio.raczynski@gmail.com>,
	<konrad.knitter@intel.com>, <marcin.szycik@intel.com>,
	<wojciech.drewek@intel.com>, <nex.sw.ncis.nat.hpm.dev@intel.com>,
	<przemyslaw.kitszel@intel.com>, <jiri@resnulli.us>,
	<horms@kernel.org>, <David.Laight@ACULAB.COM>
Subject: Re: [Intel-wired-lan] [iwl-next v6 0/9] ice: managing MSI-X in driver
Date: Wed, 30 Oct 2024 13:23:20 -0700
Message-ID: <cd6b6dd9-1d07-4951-b052-b2bc03db4d5f@intel.com>
In-Reply-To: <20241028100341.16631-1-michal.swiatkowski@linux.intel.com>



On 10/28/2024 3:03 AM, Michal Swiatkowski wrote:
> Hi,
> 
> This is another attempt to allow the user to manage the amount of MSI-X
> used for each feature in ice. The first attempt used the devlink
> resources API, but it wasn't accepted upstream. Static MSI-X allocation
> via devlink resources also isn't really user friendly.
> 
> This attempt uses a more dynamic approach: "dynamic" across the whole
> kernel when the platform supports it, and "dynamic" within the driver
> when it doesn't.
> 
> To achieve that, reuse the global devlink parameters pf_msix_max and
> pf_msix_min. This fits how ice hardware counts MSI-X: the amount of
> MSI-X reported on PCI is the whole MSI-X budget for the card (including
> MSI-X for VFs). pf_msix_max lets the user statically set how many MSI-X
> vectors to use on the PF and, by extension, how many remain reserved
> for VFs.
> 
> pf_msix_min sets the minimum number of MSI-X vectors with which the ice
> driver can still probe correctly (an example of setting both parameters
> follows the list below).
> 
> Meaning of these fields for dynamic vs static allocation:
> - on a system with dynamic MSI-X allocation support
>  * allocate pf_msix_min statically; the rest is allocated dynamically
> - on a system without dynamic MSI-X allocation support
>  * try to allocate pf_msix_max statically; the minimum acceptable result
>  is pf_msix_min
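> 
> For illustration, setting both parameters could look like the sketch
> below (the PCI address and the values are placeholders, not taken from
> this series; as driverinit parameters they take effect after a devlink
> reload):
> 
>   devlink dev param set pci/0000:31:00.0 name pf_msix_max \
>           value 128 cmode driverinit
>   devlink dev param set pci/0000:31:00.0 name pf_msix_min \
>           value 2 cmode driverinit
>   devlink dev reload pci/0000:31:00.0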
> 
> As Jesse and Piotr suggested, pf_msix_max and pf_msix_min can (and
> probably should) be stored in NVM. This patchset doesn't implement
> that.
> 
> With the dynamic approach (kernel or driver), splitting MSI-X between
> RDMA and eth on an MSI-X shortage is no longer correct. It could work
> when the dynamic part is only on the driver side, but not when it is on
> the kernel side.
> 
> Let's remove this code and move to allocating MSI-X feature by feature.
> If there are not enough MSI-X vectors left for a feature, the feature
> either works with fewer MSI-X or is turned off.
> 
> There is a regression here. With MSI-X splitting the user can run RDMA
> and eth even on a system without enough MSI-X. Now only eth will work.
> RDMA can be turned on by lowering the number of PF queues and reprobing
> the RDMA driver.
> 
> Example:
> 72 CPUs; eth and RDMA each take one vector per CPU, flow director takes
> 1 MSI-X, 1 MSI-X for the OICR on the PF, and 1 more for RDMA. The card
> uses 1 + 72 + 1 + 72 + 1 = 147.
> 
> We set pf_msix_min = 2, pf_msix_max = 128
> 
> OICR: 1
> eth: 72
> flow director: 1
> RDMA: 128 - 74 = 54
> 
> We can change the number of queues on the PF to 36 and do a devlink
> reinit
> 
> OICR: 1
> eth: 36
> RDMA: 73
> flow director: 1
> 
> We can also turn RDMA off (implemented in "ice: enable_rdma devlink
> param").
> 
> OICR: 1
> eth: 72
> RDMA: 0 (turned off)
> flow director: 1
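> 
> A possible way to turn it off from the command line (the PCI address is
> a placeholder and the exact syntax is an assumption; only the parameter
> name comes from the patch):
> 
>   devlink dev param set pci/0000:31:00.0 name enable_rdma \
>           value false cmode driverinit
>   devlink dev reload pci/0000:31:00.0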
> 
> After these changes we have a static base vector for SR-IOV (and
> probably SIOV in the future). The last patch of this series simplifies
> the VF MSI-X management code based on the static vector.
> 
> Changing the number of queues via ethtool now also changes MSI-X. If
> there are enough MSI-X vectors, the mapping is always one to one. When
> there aren't, there will be more queues than MSI-X. There is currently
> no way to set how many queues should be used per MSI-X vector. Maybe we
> should introduce another ethtool parameter for it, something like
> queues_per_vector?
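> 
> For example (the interface name is a placeholder), lowering the channel
> count with ethtool would now also lower the MSI-X usage:
> 
>   ethtool -L eth0 combined 36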
> 
> v5 --> v6: [5]
>  * set default MSI-X max value based on needs instead of const define
>    (patch 3)
> 
> v4 --> v5: [4]
>  * count combined queues in ethtool for the case where vectors aren't
>    mapped 1:1 to queues (patch 1)
>  * change min_t to min where the casting isn't needed (and can hide
>    problems) (patch 4)
>  * load the msix_max and msix_min values after devlink reload; this was
>    accidentally dropped when removing the loading in the probe path to
>    mitigate errors from devl_param_driverinit...() (patch 2)
>  * add documentation in devlink/ice for the new parameters (patch 2)
> 
> v3 --> v4: [3]
>  * drop unnecessary text in devlink validation comments
>  * assume that devl_param_driverinit...() shouldn't return an error in
>    the normal execution path
> 
> v2 --> v3: [2]
>  * move flow director init before RDMA init
>  * fix unrolling of the RDMA MSI-X allocation
>  * add a comment in the commit message about lowering the control RDMA
>    MSI-X amount
> 
> v1 --> v2: [1]
>  * change the MSI-X parameters' cmode from permanent to driverinit
>  * remove locking during devlink parameter registration (it is now
>    locked for the whole init/deinit part)
> 
> [5] https://lore.kernel.org/netdev/20241024121230.5861-1-michal.swiatkowski@linux.intel.com/T/#t
> [4] https://lore.kernel.org/netdev/20240930120402.3468-1-michal.swiatkowski@linux.intel.com/
> [3] https://lore.kernel.org/netdev/20240808072016.10321-1-michal.swiatkowski@linux.intel.com/
> [2] https://lore.kernel.org/netdev/20240801093115.8553-1-michal.swiatkowski@linux.intel.com/
> [1] https://lore.kernel.org/netdev/20240213073509.77622-1-michal.swiatkowski@linux.intel.com/
> 

This version looks good to me! A lot of great simplification here too.

Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>

Thanks,
Jake

> Michal Swiatkowski (9):
>   ice: count combined queues using Rx/Tx count
>   ice: devlink PF MSI-X max and min parameter
>   ice: remove splitting MSI-X between features
>   ice: get rid of num_lan_msix field
>   ice, irdma: move interrupts code to irdma
>   ice: treat dyn_allowed only as suggestion
>   ice: enable_rdma devlink param
>   ice: simplify VF MSI-X managing
>   ice: init flow director before RDMA
> 
>  Documentation/networking/devlink/ice.rst      |  11 +
>  drivers/infiniband/hw/irdma/hw.c              |   2 -
>  drivers/infiniband/hw/irdma/main.c            |  46 ++-
>  drivers/infiniband/hw/irdma/main.h            |   3 +
>  .../net/ethernet/intel/ice/devlink/devlink.c  | 102 ++++++-
>  drivers/net/ethernet/intel/ice/ice.h          |  21 +-
>  drivers/net/ethernet/intel/ice/ice_base.c     |  10 +-
>  drivers/net/ethernet/intel/ice/ice_ethtool.c  |   9 +-
>  drivers/net/ethernet/intel/ice/ice_idc.c      |  64 +---
>  drivers/net/ethernet/intel/ice/ice_irq.c      | 275 ++++++------------
>  drivers/net/ethernet/intel/ice/ice_irq.h      |  13 +-
>  drivers/net/ethernet/intel/ice/ice_lib.c      |  35 ++-
>  drivers/net/ethernet/intel/ice/ice_main.c     |   6 +-
>  drivers/net/ethernet/intel/ice/ice_sriov.c    | 154 +---------
>  include/linux/net/intel/iidc.h                |   2 +
>  15 files changed, 328 insertions(+), 425 deletions(-)
> 

