From: Alex Williamson <alex.williamson@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>,
"Tian, Kevin" <kevin.tian@intel.com>,
Brett Creeley <bcreeley@amd.com>,
Brett Creeley <brett.creeley@amd.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"yishaih@nvidia.com" <yishaih@nvidia.com>,
"shameerali.kolothum.thodi@huawei.com"
<shameerali.kolothum.thodi@huawei.com>,
"horms@kernel.org" <horms@kernel.org>,
"shannon.nelson@amd.com" <shannon.nelson@amd.com>
Subject: Re: [PATCH v14 vfio 6/8] vfio/pds: Add support for dirty page tracking
Date: Thu, 10 Aug 2023 11:40:08 -0600
Message-ID: <20230810114008.6b038d2a.alex.williamson@redhat.com>
In-Reply-To: <ZNUcLM/oRaCd7Ig2@nvidia.com>
On Thu, 10 Aug 2023 14:19:40 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Thu, Aug 10, 2023 at 10:47:34AM -0600, Alex Williamson wrote:
> > On Thu, 10 Aug 2023 02:47:15 +0000
> > "Tian, Kevin" <kevin.tian@intel.com> wrote:
> >
> > > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > > Sent: Thursday, August 10, 2023 2:06 AM
> > > >
> > > > On Wed, Aug 09, 2023 at 11:33:00AM -0600, Alex Williamson wrote:
> > > >
> > > > > Shameer, Kevin, Jason, Yishai, I'm hoping one or more of you can
> > > > > approve this series as well. Thanks,
> > > >
> > > > I've looked at it a few times now, I think it is OK, aside from the
> > > > nvme issue.
> > > >
> > >
> > > My only concern is the duplication of backing storage management
> > > of the migration file which I didn't take time to review.
> > >
> > > If all others are fine to leave it as is then I will not insist.
> >
> > There's leverage now if you feel strongly about it, but code
> > consolidation could certainly come later.
> >
> > Are either of you willing to provide a R-b?
>
> The code structure is good enough (though I agree with Kevin), so sure:
>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
>
> > What are we looking for relative to NVMe? AIUI, the first couple
> > revisions of this series specified an NVMe device ID, then switched to
> > a wildcard, then settled on an Ethernet device ID, all with no obvious
> > changes that would suggest support is limited to a specific device
> > type. I think we're therefore concerned that migration of an NVMe VF
> > could be enabled by overriding/adding device IDs, whereas we'd like to
> > standardize NVMe migration to avoid incompatible implementations.
>
> Yeah
>
> > It's a somewhat strange requirement since we have no expectation of
> > compatibility between vendors for any other device type, but how far
> > are we going to take it? Is it enough that the device table here only
> > includes the Ethernet VF ID or do we want to actively prevent what
> > might be a trivial enabling of migration for another device type
> > because we envision it happening through an industry standard that
> > currently doesn't exist? Sorry if I'm not familiar with the dynamics
> > of the NVMe working group or previous agreements. Thanks,
>
> I don't really have a solid answer. Christoph and others in the NVMe
> space are very firm that NVMe related things must go through
> standards, I think that is their right.
>
> It does not seem good to allow undermining that approach.
If we wanted to enforce something like this, the probe function could
reject NVMe class devices, roughly as sketched below, but...
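(Rough, untested sketch only; the probe function name is illustrative
and the check could just as well live elsewhere in the bind path.)

        static int pds_vfio_pci_probe(struct pci_dev *pdev,
                                      const struct pci_device_id *id)
        {
                /*
                 * Refuse NVMe class VFs (class code 0x010802), regardless
                 * of what device IDs end up in the match table.
                 */
                if (pdev->class == PCI_CLASS_STORAGE_EXPRESS)
                        return -ENODEV;

                /* ... existing probe path continues here ... */
                return 0;
        }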
> On the flip side, if we are going to allow this driver, why are we not
> letting them enable their full device functionality with all their
> non-compliant VF/PF combinations? They shouldn't have to hide what
> they are actually doing just to get merged.
This. Is it enough that this appears to implement device-type-agnostic
migration support for devices hosted by this distributed services card,
and NVMe happens to be one of those device types? Is that a high enough
bar that this is not simply a vendor-specific NVMe migration
implementation?
> If we want to block anything it should be to block the PCI spec
> non-compliance of having PF/VF IDs that are different.
PCI Express® Base Specification Revision 6.0.1, pg 1461:

  9.3.3.11 VF Device ID (Offset 1Ah)

  This field contains the Device ID that should be presented for every
  VF to the SI.  VF Device ID may be different from the PF Device ID...

That? Thanks,
Alex