From: Christoph Hellwig <hch@lst.de>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
Christoph Hellwig <hch@lst.de>,
"Tian, Kevin" <kevin.tian@intel.com>,
Brett Creeley <bcreeley@amd.com>,
Brett Creeley <brett.creeley@amd.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"yishaih@nvidia.com" <yishaih@nvidia.com>,
"shameerali.kolothum.thodi@huawei.com"
<shameerali.kolothum.thodi@huawei.com>,
"horms@kernel.org" <horms@kernel.org>,
"shannon.nelson@amd.com" <shannon.nelson@amd.com>
Subject: Re: [PATCH v14 vfio 6/8] vfio/pds: Add support for dirty page tracking
Date: Sat, 12 Aug 2023 12:49:51 +0200
Message-ID: <20230812104951.GC11480@lst.de>
In-Reply-To: <ZNUcLM/oRaCd7Ig2@nvidia.com>

On Thu, Aug 10, 2023 at 02:19:40PM -0300, Jason Gunthorpe wrote:
> > It's somewhat a strange requirement since we have no expectation of
> > compatibility between vendors for any other device type, but how far
> > are we going to take it? Is it enough that the device table here only
> > includes the Ethernet VF ID or do we want to actively prevent what
> > might be a trivial enabling of migration for another device type
> > because we envision it happening through an industry standard that
> > currently doesn't exist? Sorry if I'm not familiar with the dynamics
> > of the NVMe working group or previous agreements. Thanks,
>
> I don't really have a solid answer. Christoph and others in the NVMe
> space are very firm that NVMe related things must go through
> standards, I think that is their right.

Yes, anything that uses a class code needs a standardized way of
being managed. That is very different from, say, mlx5, which is
obviously controlled by Mellanox.

So I don't think any vfio driver except for the plain passthrough ones
should bind to anything but very specific PCI IDs.
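
To make that concrete, here is a minimal sketch of the difference (the
vendor/device pair below is illustrative, not necessarily the actual
pds one): a vendor variant driver matches exact IDs it controls, while
a class-code match would claim every conforming device from every
vendor, which only makes sense once the behavior is standardized:

#include <linux/pci.h>

/* Bind only to one exact, vendor-controlled VF ID. */
static const struct pci_device_id my_vfio_pci_table[] = {
	/* 0x1dd8:0x1003 used here purely as an example pair */
	{ PCI_DRIVER_OVERRIDE_DEVICE_VFIO(0x1dd8, 0x1003) },
	{ }
};
MODULE_DEVICE_TABLE(pci, my_vfio_pci_table);

/*
 * By contrast, a class-code match like the following would claim
 * every NVMe controller from every vendor, which is only acceptable
 * for a driver implementing standardized behavior:
 *
 *	{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
 */

The VFIO override flag also means such a driver only binds when
userspace explicitly asks for it through driver_override, so it never
races the regular VF driver for the device.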

And AMD really needs to join the NVMe working group, where the passthrough
work is happening right now. If you need help finding the right people
at AMD to work with NVMe, send me a mail offline and I can point you to
them.