From: Jakub Kicinski <kuba@kernel.org>
To: Saeed Mahameed <saeed@kernel.org>
Cc: Yinjun Zhang <yinjun.zhang@corigine.com>,
	Simon Horman <simon.horman@corigine.com>,
	David Miller <davem@davemloft.net>,
	Paolo Abeni <pabeni@redhat.com>,
	Michael Chan <michael.chan@broadcom.com>,
	Andy Gospodarek <andy@greyhouse.net>,
	Gal Pressman <gal@nvidia.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	Edward Cree <ecree.xilinx@gmail.com>,
	Vladimir Oltean <vladimir.oltean@nxp.com>,
	Andrew Lunn <andrew@lunn.ch>,
	Nole Zhang <peng.zhang@corigine.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	oss-drivers <oss-drivers@corigine.com>
Subject: Re: [PATCH net-next 0/3] nfp: support VF multi-queues configuration
Date: Wed, 26 Oct 2022 13:07:41 -0700	[thread overview]
Message-ID: <20221026130741.1b8f118c@kernel.org> (raw)
In-Reply-To: <20221026142221.7vp4pkk6qgbwcrjk@sx1>

On Wed, 26 Oct 2022 15:22:21 +0100 Saeed Mahameed wrote:
> >Sorry, I thought you meant each VF creates a devlink obj. So still one devlink obj
> >and each VF registers a devlink port, right? But the configuration is supposed to
> >be done before VFs are created, so it may not be appropriate to register ports
> >before the relevant VFs are created, I think.
> 
> Usually you create the VFs unbound, configure them, and then bind them.
> Otherwise a query will have to cover every possible VF, which for some vendors
> can be thousands! It's better to work on created but not yet deployed VFs.

And the vendors who need to configure before spawning will do what,
exactly? Let's be practical.

> >> can you provide an example of how you imagine the resource vf-max-queue
> >> api will look?
> >
> >Two options,
> >one is from the VF's perspective, where you configure each VF one by one; very straightforward:
> >```
> >pci/xxxx:xx:xx.x:
> >  name max_q size 128 unit entry
> >    resources:
> >      name VF0 size 1 unit entry size_min 1 size_max 128 size_gran 1
> >      name VF1 size 1 unit entry size_min 1 size_max 128 size_gran 1
> >      ...
> >```
> 
> the above semantics are really weird:
> VF0 can't be a sub-resource of max_q!
> 
> sorry, i can't think of a way the devlink resource semantics can work for
> VF resource allocation.

It's just an API section. New forms of configuration can be added.
In fact they should be, so we can stop having this conversation.
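To make the resource form quoted above concrete, a userspace flow could look like the sketch below. The PCI address, resource names, and sizes are all illustrative, borrowed from the quoted example; `devlink resource set`, `devlink dev reload`, and `devlink resource show` are the existing generic commands:

```shell
# Hypothetical: cap VF1 at 16 queues via the quoted max_q/VF1 resource.
# Resource path and PCI address are illustrative, not a real driver's layout.
devlink resource set pci/0000:82:00.0 path /max_q/VF1 size 16

# devlink resource changes generally take effect on the next reload.
devlink dev reload pci/0000:82:00.0

# Inspect the resulting tree.
devlink resource show pci/0000:82:00.0
```

This is only a sketch of the shape of the API under discussion, not something any driver implements today.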

> Unless a VF becomes a resource and its q_table becomes a sub-resource of that
> VF, which means you will have to register each VF as a resource individually.
> 
> Note that i called the resource "q_table" and not "max_queues",
> since semantically max_queues is a parameter, whereas q_table can be looked at
> as a sub-resource of the VF; the q_table size decides the max_queues a VF
> will accept, so there you go!
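For what it's worth, the q_table variant would presumably render as a tree along these lines. This is entirely hypothetical output, with the same caveats as the example quoted earlier (names and numbers are illustrative only):

```shell
# Hypothetical `devlink resource show` output if each VF were registered as a
# resource with a q_table sub-resource, as suggested above.
$ devlink resource show pci/0000:82:00.0
pci/0000:82:00.0:
  name VF0 size 1
    resources:
      name q_table size 16 unit entry size_min 1 size_max 128 size_gran 1
  name VF1 size 1
    resources:
      name q_table size 16 unit entry size_min 1 size_max 128 size_gran 1
```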

Somewhere along the way you missed the other requirement, to also allow
configuring a guaranteed queue count, which came from brcm at some point.

> arghh weird.. just make it an attribute for devlink port function and name it
> max_q as god intended it to be ;)

Let's not equate what fits the odd world of Melvidia FW with god.
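For comparison, the port-function shape being argued for would presumably read like this. Note that `max_q` is a hypothetical attribute here, not part of the devlink port function API; today `devlink port function set` only knows attributes such as `hw_addr` and `state`:

```shell
# Hypothetical: max_q is NOT an existing devlink port function attribute;
# this is only what the suggestion above would look like if it were added.
devlink port function set pci/0000:82:00.0/1 max_q 16
devlink port show pci/0000:82:00.0/1
```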

> Fix your FW to allow changing VF maxqueue for unbound VFs if needed.

Not every device out there is all FW. Thankfully.


Thread overview: 23+ messages
2022-10-19 14:09 [PATCH net-next 0/3] nfp: support VF multi-queues configuration Simon Horman
2022-10-19 14:09 ` [PATCH net-next 1/3] " Simon Horman
2022-10-19 14:09 ` [PATCH net-next 2/3] devlink: Add new "max_vf_queue" generic device param Simon Horman
2022-10-23 11:28   ` Gal Pressman
2022-10-24  1:47     ` Yinjun Zhang
2022-10-24  8:36       ` Leon Romanovsky
2022-10-19 14:09 ` [PATCH net-next 3/3] nfp: devlink: add the devlink parameter "max_vf_queue" support Simon Horman
2022-10-20  1:01 ` [PATCH net-next 0/3] nfp: support VF multi-queues configuration Jakub Kicinski
2022-10-20  1:35   ` Yinjun Zhang
2022-10-25  7:51     ` Saeed Mahameed
2022-10-25 10:41       ` Yinjun Zhang
2022-10-25 11:05         ` Saeed Mahameed
2022-10-25 11:39           ` Yinjun Zhang
2022-10-26 14:22             ` Saeed Mahameed
2022-10-26 20:07               ` Jakub Kicinski [this message]
2022-10-26 22:25                 ` Saeed Mahameed
2022-10-27  2:11               ` Yinjun Zhang
2022-10-27  5:53                 ` Leon Romanovsky
2022-10-27  6:01                 ` Leon Romanovsky
2022-10-27  8:46                 ` Saeed Mahameed
2022-10-27  9:46                   ` Yinjun Zhang
2022-10-27 10:49                     ` Saeed Mahameed
2022-10-29  3:32                       ` Yinjun Zhang
