From: Jakub Kicinski <kuba@kernel.org>
To: Jiri Pirko <jiri@resnulli.us>
Cc: Tariq Toukan <tariqt@nvidia.com>,
	"David S. Miller" <davem@davemloft.net>,
	Paolo Abeni <pabeni@redhat.com>,
	Eric Dumazet <edumazet@google.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>, Jiri Pirko <jiri@nvidia.com>,
	Cosmin Ratiu <cratiu@nvidia.com>,
	Carolina Jubran <cjubran@nvidia.com>,
	Gal Pressman <gal@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
	Donald Hunter <donald.hunter@gmail.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Leon Romanovsky <leon@kernel.org>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: Re: [PATCH net-next 03/10] devlink: Serialize access to rate domains
Date: Mon, 3 Mar 2025 14:06:23 -0800
Message-ID: <20250303140623.5df9f990@kernel.org>
In-Reply-To: <kmjgcuyao7a7zb2u4554rj724ucpd2xqmf5yru4spdqim7zafk@2ry67hbehjgx>

On Thu, 27 Feb 2025 13:22:25 +0100 Jiri Pirko wrote:
> >> I'm not sure how you imagine getting rid of them. One PCI PF
> >> instantiates one devlink now. There is a lot of configuration (e.g.
> >> params) that is per-PF. You need this instance for that; how else would
> >> you do per-PF things on a shared ASIC instance?
> >
> >There are per-PF ports, right?  
> 
> Depends. On a normal host with SR-IOV, no. On a smartnic where you have
> a PF in the host, yes.

Yet another "great choice" in mlx5 that other drivers foresaw problems
with and avoided.
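
To be concrete, the per-PF configuration in question is the sort of
thing the devlink CLI does today, each command aimed at one PF's own
devlink instance even though both PFs sit on the same ASIC (the PCI
addresses and the param below are purely illustrative; which params
and cmodes apply depends on the driver):

  # PF0 and PF1 of the same board, each exposing its own devlink instance
  devlink dev param set pci/0000:08:00.0 name max_macs value 32 cmode driverinit
  devlink dev param set pci/0000:08:00.1 name max_macs value 64 cmode driverinit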

> >> Creating SFs is a per-PF operation, for example. I didn't do a thorough
> >> analysis, but I'm sure there are a couple of per-PF things like these.
> >
> >Seems like adding a port attribute to SF creation would be a much
> >smaller extension than adding a layer of objects.
> >  
> >> Also, not breaking existing users may be an argument to keep per-PF
> >> instances.
> >
> >We're talking about multi-PF devices only. Besides, I'm pretty sure we
> >moved multiple params and health reporters to be per-port, so IDK
> >what changed now.
> 
> Looks like pretty much all current NICs are multi-PF, aren't they?

Not in a way which requires cross-port state sharing, no.
You should know this.
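
To make the SF-creation point above concrete, adding an SF is driven
through a per-PF devlink instance today, roughly as below (the
addresses, sfnum and resulting port index are only illustrative; see
Documentation/networking/devlink/devlink-port.rst for the reference):

  # add an SF on PF 0 behind this devlink instance
  devlink port add pci/0000:08:00.0 flavour pcisf pfnum 0 sfnum 88
  # configure and activate the new SF port (index is driver-assigned)
  devlink port function set pci/0000:08:00.0/32768 hw_addr 00:00:00:00:88:88 state active

Note that the request already carries a pfnum attribute, which is the
kind of per-PF selector a single per-ASIC devlink instance could reuse
rather than growing another layer of objects.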


Thread overview: 23+ messages
2025-02-13 18:01 [PATCH net-next 00/10] devlink and mlx5: Introduce rate domains Tariq Toukan
2025-02-13 18:01 ` [PATCH net-next 01/10] devlink: Remove unused param of devlink_rate_nodes_check Tariq Toukan
2025-02-18  2:54   ` Kalesh Anakkur Purayil
2025-02-13 18:01 ` [PATCH net-next 02/10] devlink: Store devlink rates in a rate domain Tariq Toukan
2025-02-13 18:01 ` [PATCH net-next 03/10] devlink: Serialize access to rate domains Tariq Toukan
2025-02-14 12:54   ` Jiri Pirko
2025-02-19  2:21     ` Jakub Kicinski
2025-02-25 13:36       ` Jiri Pirko
2025-02-26  1:40         ` Jakub Kicinski
2025-02-26 14:44           ` Jiri Pirko
2025-02-27  2:53             ` Jakub Kicinski
2025-02-27 12:22               ` Jiri Pirko
2025-03-03 22:06                 ` Jakub Kicinski [this message]
2025-03-04 13:11                   ` Jiri Pirko
2025-03-05  0:04                     ` Jakub Kicinski
2025-03-05 11:48                       ` Jiri Pirko
2025-02-13 18:01 ` [PATCH net-next 04/10] devlink: Introduce shared rate domains Tariq Toukan
2025-02-13 18:01 ` [PATCH net-next 05/10] devlink: Allow specifying parent device for rate commands Tariq Toukan
2025-02-13 18:01 ` [PATCH net-next 06/10] devlink: Allow rate node parents from other devlinks Tariq Toukan
2025-02-13 18:01 ` [PATCH net-next 07/10] net/mlx5: qos: Introduce shared esw qos domains Tariq Toukan
2025-02-13 18:01 ` [PATCH net-next 08/10] net/mlx5: qos: Support cross-esw tx scheduling Tariq Toukan
2025-02-13 18:01 ` [PATCH net-next 09/10] net/mlx5: qos: Init shared devlink rate domain Tariq Toukan
2025-02-13 18:01 ` [PATCH net-next 10/10] net/mlx5: Document devlink rates and cross-esw scheduling Tariq Toukan
