From mboxrd@z Thu Jan 1 00:00:00 1970
From: Samudrala, Sridhar
Date: Wed, 23 Jun 2021 09:21:02 -0700
Subject: [Intel-wired-lan] [PATCH net-next v2] ice: Enable configuration of number of qps per VF via devlink
In-Reply-To:
References: <20210426181940.14847-1-sridhar.samudrala@intel.com>
Message-ID:
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: intel-wired-lan@osuosl.org
List-ID:

On 5/17/2021 2:49 PM, Creeley, Brett wrote:
>> -----Original Message-----
>> From: Intel-wired-lan On Behalf Of Sridhar Samudrala
>> Sent: Monday, April 26, 2021 11:20 AM
>> To: intel-wired-lan at lists.osuosl.org; Nguyen, Anthony L ; Samudrala, Sridhar
>>
>> Subject: [Intel-wired-lan] [PATCH net-next v2] ice: Enable configuration of number of qps per VF via devlink
>>
>> Introduce a devlink parameter 'num_qps_per_vf' to allow the user
>> to configure the maximum number of queue pairs given to SR-IOV
>> VFs before they are created.
>>
>> This is currently determined by the driver based on the number
>> of SR-IOV VFs created. In order to keep this behavior by default,
>> the parameter is initialized to 0. To change the default behavior,
>> the user can set the num_qps_per_vf parameter via devlink, and it
>> will be used as the preferred value when determining the queues and
>> vectors assigned per VF.
>
> What if the host administrator wants to give the VF a different number
> of vectors than queues? For example, if the admin knows the VF
> instance will be exercising VF RDMA and the VF needs more vectors
> for RDMA traffic.
>
> Should we have 2 separate values, i.e. "num_qps_per_vf" and
> "num_msix_per_vf"?

I missed responding to this comment. Sure, we can add num_msix_per_vf
in a later patch to enable additional vectors for RDMA.

Tony, can this patch be included in your series when you submit to
netdev? Or do I need to rebase it on the latest net-next?
Thanks,
Sridhar

>
>> USAGE:
>> On a 2 port NIC
>>
>> # devlink dev param show
>> pci/0000:42:00.0:
>>   name num_qps_per_vf type driver-specific
>>   values:
>>     cmode runtime value 0
>> pci/0000:42:00.1:
>>   name num_qps_per_vf type driver-specific
>>   values:
>>     cmode runtime value 0
>>
>> /* Set num_qps_per_vf to 4 */
>> # devlink dev param set pci/0000:42:00.0 name num_qps_per_vf value 4 cmode runtime
>>
>> # devlink dev param show pci/0000:42:00.0 name num_qps_per_vf
>> pci/0000:42:00.0:
>>   name num_qps_per_vf type driver-specific
>>   values:
>>     cmode runtime value 4
>>
>> # echo 8 > /sys/class/net/enp66s0f0/device/sriov_numvfs
>>
>> This will create 8 VFs with 4 queue pairs and 5 vectors per VF,
>> compared to the default behavior of 16 queue pairs and 17 vectors
>> per VF.
>>
>> v2:
>> Fixed kdoc for ice_devlink_num_qps_per_vf_validate()
>>
>> Signed-off-by: Sridhar Samudrala
>> ---
>>  Documentation/networking/devlink/ice.rst     |  23 ++++
>>  drivers/net/ethernet/intel/ice/ice_devlink.c | 110 +++++++++++++++++-
>>  drivers/net/ethernet/intel/ice/ice_main.c    |   3 +
>>  .../net/ethernet/intel/ice/ice_virtchnl_pf.c |   5 +-
>>  4 files changed, 139 insertions(+), 2 deletions(-)
>>
>