From: Jason Gunthorpe <jgg@ziepe.ca>
To: Christoph Hellwig <hch@lst.de>
Cc: Lei Rao <lei.rao@intel.com>,
kbusch@kernel.org, axboe@fb.com, kch@nvidia.com,
sagi@grimberg.me, alex.williamson@redhat.com, cohuck@redhat.com,
yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com,
kevin.tian@intel.com, mjrosato@linux.ibm.com,
linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
kvm@vger.kernel.org, eddie.dong@intel.com, yadong.li@intel.com,
yi.l.liu@intel.com, Konrad.wilk@oracle.com,
stephen@eideticom.com, hang.yuan@intel.com
Subject: Re: [RFC PATCH 1/5] nvme-pci: add function nvme_submit_vf_cmd to issue admin commands for VF driver.
Date: Tue, 13 Dec 2022 13:49:45 -0400 [thread overview]
Message-ID: <Y5i7OWihTNCKXGEJ@ziepe.ca> (raw)
In-Reply-To: <20221213160807.GA626@lst.de>
On Tue, Dec 13, 2022 at 05:08:07PM +0100, Christoph Hellwig wrote:
> On Tue, Dec 13, 2022 at 10:01:03AM -0400, Jason Gunthorpe wrote:
> > > So now we need to write a vfio shim for every function even if there
> > > is absolutely nothing special about that function? Migrating really
> > > is the controlling functions behavior, and writing a new vfio bit
> > > for every controlled thing just does not scale.
> >
> > Huh? "does not scale?" We are looking at boilerplate of around 20-30
> > lines to make a VFIO driver for a real PCI device. Why is that even
> > something we should worry about optimizing?
>
> But we need a new driver for every controlled function now, which
> is very different from the classic VFIO model where we had one
> vfio_pci.
To be fair, mainly vfio_pci had that model. Other uses of VFIO have
device specific drivers already. We have the reset drivers in vfio
platform, and the mdevs already. SIOV drivers are coming and they will
not be general either. I also know of a few upcoming non-migration
VFIO PCI variant drivers that deal with HW issues.
Remember, we did a bunch of work to make this reasonable. Userspace
can properly probe the correct VFIO driver for the HW it wants to use,
just like normal devices. If we spawn the VFIO from the controlling
function then it obviously will bring the correct driver along too.
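(Editor's illustration, not part of the original mail: the "userspace
can properly probe the correct VFIO driver" flow above is the standard
sysfs driver_override mechanism. The driver name "nvme-vfio-pci" and
the PCI address below are hypothetical examples; the sysfs root is a
parameter only so the flow can be exercised outside a real /sys.)

```shell
# Sketch of the sysfs driver_override flow userspace uses to bind one
# device to a specific VFIO variant driver. Assumes the device has
# already been unbound from its previous driver.
bind_vfio_variant() {
    dev="$1"; driver="$2"; sysfs="${3:-/sys}"
    # Pin the match: only this driver may bind this device.
    echo "$driver" > "$sysfs/bus/pci/devices/$dev/driver_override"
    # Ask the driver core to (re)probe the device.
    echo "$dev" > "$sysfs/bus/pci/drivers_probe"
}
```

On real hardware this is the same flow tools like driverctl automate.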
The mental model I have for VFIO is that every vfio_device has a
driver, and we have three "universal" drivers that wildcard match to
many devices (pci, fsl, and platform acpi reset). Otherwise VFIO is
like every other driver subsystem out there, with physical devices and
matching drivers that support them.
Creating drivers for HW is not a problem, that is what a driver
subsystem is for. We already invested effort in VFIO to make this
scalable.
> > And when you get into exciting future devices like SIOV you already
> > need to make a special VFIO driver anyhow.
>
> You need special support for it.  It's probably not another
> Linux driver but part of the parent one, though.
The designs we have done in mlx5 are split. The "parent" has just
enough shim to describe what the SIOV is in terms of a 'slice of the
parents resources' and then we layer another driver, located in the
proper subsystem, to operate that slice. VDPA makes a
/dev/virtio-whatever, VFIO would make a fake PCI function, mlx5 makes
a netdev, etc.
It is not so different from how a PF/VF relationship works, just that
the SIOV is described by a struct auxiliary_device not a struct
pci_dev.
I don't really like implementing VFIO drivers outside drivers/vfio, I
think that has historically had bad outcomes in other subsystems.
> > So far 100% of the drivers that have been presented, including the two
> > RFC ones, have entanglements between live migration and vfio. Shifting
> > things to dev/live_migration doesn't make the "communication problem"
> > go away, it just shifts it into another subsystem.
>
> The main entanglement seems to be that it needs to support a vfio
> interface for live migration while the actual commands go to the
> parent device.
Not at all, that is only a couple of function calls in 4 of the drivers
so far.
The entanglement is that the live migration FSM and the VFIO device
operation are not isolated. I keep repeating this - mlx5 and the two
RFC drivers must trap VFIO operations and relay them to their
migration logic. hns has to mangle its BARs. These are things that
only exist on the VFIO side.
So, you are viewing live migration as orthogonal and separable from
VFIO, and I don't agree with this because I haven't yet seen any proof
in implementations.
Let's go through the nvme spec process and see how it works out. If
NVMe can address things that are tripping up other implementations,
like FLR of the controlled function, then we may have the first
example. If not, then it is just how things are.
FLR is tricky; it is not obvious to me that you want a definition of
migration that isolates controlled-function FLR from the migration
FSM.
There are advantages to having a reliable, universal, way to bring a
function back to a clean slate, including restoring it to full
operation (i.e. canceling any migration operation). The current
definition of FLR provides this.
> > It is worse than just VFIO vs one kernel driver, like mlx5 could spawn
> > a controlled function that is NVMe, VDPA, mlx5, virtio-net, VFIO,
> > etc.
>
> This seems to violate the PCIe spec, which says:
>
> "All VFs associated with a PF must be the same device type as the PF,
> (e.g., the same network device type or the same storage device type.)",
For VFs, a device can expose multiple PFs so that each PF's VFs follow
the above, and for SIOV this language doesn't apply.
It seems the PDS RFC driver does violate this spec requirement, though.
> > When we create the function we really want to tell the device what
> > kind of function it is, and that also tells the kernel what driver
> > should be bound to it.
>
> I'd rather have different ways to probe by passing a "kind" or "type"
> argument along the device IDs during probing. E.g. "driver"
> and "vfio", and then only match for the kind the creator of the device
> added them to the device model for.
Not everything can be done during driver probing. There are certainly
steps at SIOV instantiation time or VF provisioning that impact what
exactly is available on the controlled function. E.g. on mlx5, when we
create a VDPA device it actually is different from a full-function
mlx5 device and that customization was done before any driver was
probed.
In fact, not only is it done before driver binding, but it can be
enforced as a security property from the DPU side when the DPU is the
thing creating the function.
I like the general idea of a type to help specify the driver to
probe. We tried to work on something like that once and it didn't go
far, but I did like the concept.
> > mlx5 even has weird limitations, like a controlled function that is
> > live migration capable has fewer features than a function that is
> > not. So the user must specify what parameters it wants the controlled
> > function to have..
>
> I don't think that is weird. If you want to live migrate, you need to
>
> a) make sure the feature set is compatible with the other side
> b) there is only state that actually is migratable
>
> so I'd expect that for any other sufficiently complex device. NVMe
> for sure will have limits like this.
Oy, this has been pretty hard to define in mlx5 already :( Hopefully
nvme-cli can sort it out for NVMe configurables.
Jason