From: Jason Gunthorpe <jgg@ziepe.ca>
To: Leon Romanovsky <leon@kernel.org>
Cc: "Danilo Krummrich" <dakr@kernel.org>,
	"Peter Colberg" <pcolberg@redhat.com>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	"Krzysztof Wilczyński" <kwilczynski@kernel.org>,
	"Miguel Ojeda" <ojeda@kernel.org>,
	"Alex Gaynor" <alex.gaynor@gmail.com>,
	"Boqun Feng" <boqun.feng@gmail.com>,
	"Gary Guo" <gary@garyguo.net>,
	"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
	"Benno Lossin" <lossin@kernel.org>,
	"Andreas Hindborg" <a.hindborg@kernel.org>,
	"Alice Ryhl" <aliceryhl@google.com>,
	"Trevor Gross" <tmgross@umich.edu>,
	"Abdiel Janulgue" <abdiel.janulgue@gmail.com>,
	"Daniel Almeida" <daniel.almeida@collabora.com>,
	"Robin Murphy" <robin.murphy@arm.com>,
	"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
	"Dave Ertman" <david.m.ertman@intel.com>,
	"Ira Weiny" <ira.weiny@intel.com>,
	linux-pci@vger.kernel.org, rust-for-linux@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	"Alexandre Courbot" <acourbot@nvidia.com>,
	"Alistair Popple" <apopple@nvidia.com>,
	"Joel Fernandes" <joelagnelf@nvidia.com>,
	"John Hubbard" <jhubbard@nvidia.com>,
	"Zhi Wang" <zhiw@nvidia.com>
Subject: Re: [PATCH 7/8] rust: pci: add physfn(), to return PF device for VF device
Date: Mon, 24 Nov 2025 09:57:25 -0400
Message-ID: <20251124135725.GO233636@ziepe.ca>
In-Reply-To: <20251123111823.GD16619@unreal>

On Sun, Nov 23, 2025 at 01:18:23PM +0200, Leon Romanovsky wrote:
> > >> That sounds a bit odd to me; what exactly do you mean by "reuse the PF for
> > >> VFIO"? What do you do with the PF after driver unload instead? Load another
> > >> driver? If so, why separate ones?
> > >
> > > One of the main use cases for SR-IOV is to provide VM users/customers
> > > with devices that carry the same performance and security promises as
> > > physical ones. In this case, the VFs are created through the PF and not
> > > bound to any driver. Once a customer/user requests a VM, that VF is
> > > bound to the vfio-pci driver and attached to that VM.
> > >
> > > In many cases, the PF is also unbound from its original driver and
> > > attached to some other VM. This allows these VM providers to maximize
> > > utilization of their SR-IOV devices.
> > >
> > > At least PCI spec 6.0.1 states clearly that a PF can be attached to an SI (a VM in spec language):
> > > "Physical Function (PF) - A PF is a PCIe Function that supports the SR-IOV Extended Capability
> > > and is accessible to an SR-PCIM, a VI, or an SI."
> > 
> > Hm, that's possible, but do we have cases of this in practice where we bind and
> > unbind the same PF multiple times, pass it to different VMs, etc.?
> 
> It is a very common case when the goal is to maximize hardware utilization.

It is a somewhat common configuration, but VFIO should be driving the
PF directly using its native SR-IOV support. There is no need to rebind
a driver while SR-IOV is still enabled.
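
That native support is just the sriov_configure() callback on the PF's
driver, which is the same hook patch 6/8 models for Rust. A driver
wiring it up might look roughly like the sketch below; the method names
enable_sriov()/disable_sriov() come from patch 3/8, but the exact
signatures and device context here are my assumption, not merged code:

    // Hypothetical sketch of the series' sriov_configure() callback.
    fn sriov_configure(pdev: &pci::Device<Core>, num_vfs: u32) -> Result<u32> {
        if num_vfs == 0 {
            // Writing 0 to sriov_numvfs disables SR-IOV on the PF.
            pdev.disable_sriov();
            Ok(0)
        } else {
            // Any positive value enables that many VFs.
            pdev.enable_sriov(num_vfs)?;
            Ok(num_vfs)
        }
    }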

> > You're mixing two things here. The driver model lifecycle requires that if
> > driver A calls into driver B - where B accesses its device private data - that B
> > is bound for the full duration of the call.
> 
> I'm aware of this, and we are not talking about the driver model. The whole
> discussion is about whether the PF can be unbound while VFs exist. The answer
> is yes, it can, both from the PCI spec perspective and from the operational one.

This whole discussion highlights my original feeling: while I think it
makes a lot of sense to universally tie the VF lifecycle to the PF
driver binding, there are enough contrary opinions here to rule that
out.
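
To make the tradeoff concrete, in Rust terms the question is what
physfn() is allowed to promise about the PF. A rough sketch of the two
candidate signatures (hypothetical, building on the kernel crate's ARef
and device context types; neither is what the patch necessarily does):

    // Shape 1: no claim about the PF driver. The caller gets a
    // refcounted PF device, but must not touch PF driver private
    // data, because the PF may be unbound while the VF still exists.
    pub fn physfn(&self) -> Option<ARef<pci::Device>>;

    // Shape 2: tie the returned PF to its driver staying bound, which
    // makes PF driver data reachable - but this is exactly what the
    // unbind-the-PF-for-VFIO workflow above rules out.
    pub fn physfn(&self) -> Option<&pci::Device<device::Bound>>;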

> > At least conditionally (as proposed above), it's an improvement for cases
> > where there are PF <-> VF interactions, i.e. why make drivers take care of
> > it if the bus can already do it for them.

Drivers like mlx5 have a sequencing requirement during shutdown: they
want to see SR-IOV turned off before they move on to other teardown
steps. This is probably somewhat common.

So while it is nice for the bus to guarantee it, it probably also
signals there is a bug somewhere if that fallback code actually gets
used.
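
In other words, such a driver should keep disabling SR-IOV itself,
explicitly and first, in its own teardown path. Roughly like this
sketch (not mlx5's actual code; disable_sriov() is from patch 3/8,
the struct and everything else is assumed for illustration):

    // Driver private data; dropped by the bus on unbind.
    struct MyDriverData {
        pdev: ARef<pci::Device>,
        // ... other state ...
    }

    impl Drop for MyDriverData {
        fn drop(&mut self) {
            // Sequencing requirement: turn SR-IOV off before any
            // other teardown step, rather than relying on a late
            // bus-level fallback ever firing.
            self.pdev.disable_sriov();
            // ... remaining teardown ...
        }
    }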

Jason


Thread overview: 36+ messages
2025-11-19 22:19 [PATCH 0/8] rust: pci: add abstractions for SR-IOV capability Peter Colberg
2025-11-19 22:19 ` [PATCH 1/8] rust: pci: add is_virtfn(), to check for VFs Peter Colberg
2025-11-21  3:00   ` kernel test robot
2025-11-21 18:27     ` Peter Colberg
2025-11-19 22:19 ` [PATCH 2/8] rust: pci: add is_physfn(), to check for PFs Peter Colberg
2025-11-21  4:35   ` kernel test robot
2025-11-19 22:19 ` [PATCH 3/8] rust: pci: add {enable,disable}_sriov(), to control SR-IOV capability Peter Colberg
2025-11-21 23:28   ` Jason Gunthorpe
2025-11-19 22:19 ` [PATCH 4/8] rust: pci: add num_vf(), to return number of VFs Peter Colberg
2025-11-19 22:19 ` [PATCH 5/8] rust: pci: add vtable attribute to pci::Driver trait Peter Colberg
2025-11-19 22:19 ` [PATCH 6/8] rust: pci: add bus callback sriov_configure(), to control SR-IOV from sysfs Peter Colberg
2025-11-21  6:00   ` kernel test robot
2025-11-19 22:19 ` [PATCH 7/8] rust: pci: add physfn(), to return PF device for VF device Peter Colberg
2025-11-21  7:57   ` kernel test robot
2025-11-21 23:26   ` Jason Gunthorpe
2025-11-22 10:23     ` Danilo Krummrich
2025-11-22 16:16       ` Jason Gunthorpe
2025-11-22 18:57         ` Leon Romanovsky
2025-11-22 22:26           ` Danilo Krummrich
2025-11-23  6:34             ` Leon Romanovsky
2025-11-23 10:07               ` Danilo Krummrich
2025-11-23 11:18                 ` Leon Romanovsky
2025-11-24 13:57                   ` Jason Gunthorpe [this message]
2025-11-24 14:53                     ` Leon Romanovsky
2025-11-24 15:04                       ` Jason Gunthorpe
2025-11-24 15:11                         ` Leon Romanovsky
2025-11-22 22:43         ` Danilo Krummrich
2025-11-24 13:59           ` Jason Gunthorpe
2025-11-19 22:19 ` [PATCH 8/8] samples: rust: add SR-IOV driver sample Peter Colberg
2025-11-20  6:41   ` Zhi Wang
2025-11-20 15:49     ` Peter Colberg
2025-11-20  6:32 ` [PATCH 0/8] rust: pci: add abstractions for SR-IOV capability Zhi Wang
2025-11-20 15:03   ` Peter Colberg
2025-11-20 18:34     ` Zhi Wang
2025-11-20 21:16       ` Zhi Wang
2025-11-21 17:05       ` Peter Colberg
