From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jason Wang <jasowang@redhat.com>,
Parav Pandit <parav@mellanox.com>,
Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
"davem@davemloft.net" <davem@davemloft.net>,
"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
Dave Ertman <david.m.ertman@intel.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
"nhorman@redhat.com" <nhorman@redhat.com>,
"sassmann@redhat.com" <sassmann@redhat.com>,
Kiran Patil <kiran.patil@intel.com>,
Alex Williamson <alex.williamson@redhat.com>,
"Bie, Tiwei" <tiwei.bie@intel.com>
Subject: Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus
Date: Wed, 20 Nov 2019 02:38:08 -0500
Message-ID: <20191120022141-mutt-send-email-mst@kernel.org>
In-Reply-To: <20191120014653.GR4991@ziepe.ca>
On Tue, Nov 19, 2019 at 09:46:53PM -0400, Jason Gunthorpe wrote:
> On Tue, Nov 19, 2019 at 07:16:21PM -0500, Michael S. Tsirkin wrote:
> > On Tue, Nov 19, 2019 at 07:10:23PM -0400, Jason Gunthorpe wrote:
> > > On Tue, Nov 19, 2019 at 04:33:40PM -0500, Michael S. Tsirkin wrote:
> > > > On Tue, Nov 19, 2019 at 03:15:47PM -0400, Jason Gunthorpe wrote:
> > > > > On Tue, Nov 19, 2019 at 01:58:42PM -0500, Michael S. Tsirkin wrote:
> > > > > > On Tue, Nov 19, 2019 at 12:46:32PM -0400, Jason Gunthorpe wrote:
> > > > > > > As always, this is all very hard to tell without actually seeing real
> > > > > > > accelerated drivers implement this.
> > > > > > >
> > > > > > > Your patch series might be a bit premature in this regard.
> > > > > >
> > > > > > Actually drivers implementing this have been posted, haven't they?
> > > > > > See e.g. https://lwn.net/Articles/804379/
> > > > >
> > > > > Is that a real driver? It looks like another example-quality
> > > > > thing.
> > > > >
> > > > > For instance why do we need any of this if it has '#define
> > > > > IFCVF_MDEV_LIMIT 1' ?
> > > > >
> > > > > Surely for this HW just use vfio over the entire PCI function and be
> > > > > done with it?
> > > >
> > > > What this does is allow using it with unmodified virtio drivers
> > > > within guests. You won't get this with passthrough as it only
> > > > implements parts of virtio in hardware.
> > >
> > > I don't mean use vfio to perform passthrough, I mean to use vfio to
> > > implement the software parts in userspace while vfio to talk to the
> > > hardware.
> >
> > You repeated vfio twice here, so it's hard to decode what you actually meant.
>
> 'while using vfio to talk to the hardware'
Sorry, I still have trouble parsing that.
> > > kernel -> vfio -> user space virtio driver -> qemu -> guest
> >
> > Exactly what has been implemented for the control path.
>
> I do not mean the modified mediated vfio this series proposes, I mean
> vfio-pci, on a full PCI VF, exactly like we have today.
>
> > The interface between vfio and userspace is
> > based on virtio, which is IMHO much better than
> > a vendor specific one. Userspace stays vendor agnostic.
>
> Why is that even a good thing? It is much easier to provide drivers
> via qemu/etc in user space than it is to make kernel upgrades. We've
> learned this lesson many times.
>
> This is why we have had the philosophy that if it doesn't need to be
> in the kernel it should be in userspace.
>
> > > Generally we don't want to see things in the kernel that can be done
> > > in userspace, and to me, at least for this driver, this looks
> > > completely solvable in userspace.
> >
> > I don't think that extends as far as actively encouraging userspace
> > drivers poking at hardware in a vendor specific way.
>
> Yes, it does, if you can implement your user space requirements using
> vfio then why do you need a kernel driver?
People's requirements differ. If you are happy with just passing through
a VF, you can already do that today; case closed. But enough people have
a fixed userspace that vendors have built virtio accelerators for them,
so there is value in supporting that, and a vendor specific userspace
blob does not address that requirement.
> The kernel needs to be involved when there are things only the kernel
> can do. If IFC has such things they should be spelled out to justify
> using a mediated device.
>
> > That has lots of security and portability implications and isn't
> > appropriate for everyone.
>
> This is already using vfio.
It is using the vfio IOMMU parts since those are the portable bits.
And the userspace interface here is vendor-independent as well.
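To spell out what I mean by the portable IOMMU parts, here is a minimal
sketch of the standard vfio type1 setup that any userspace driver does,
vendor specific or not (the group number, sizes and addresses are made
up, and error handling is dropped):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int setup_dma(void)
{
        int container = open("/dev/vfio/vfio", O_RDWR);
        int group = open("/dev/vfio/42", O_RDWR);   /* device's IOMMU group */

        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container); /* attach group */
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);  /* pick type1 */

        /* Map 1MB of anonymous memory at IOVA 0 for device DMA. */
        struct vfio_iommu_type1_dma_map map = {
                .argsz = sizeof(map),
                .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                .vaddr = (__u64)(unsigned long)mmap(NULL, 1 << 20,
                                PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0),
                .iova  = 0,
                .size  = 1 << 20,
        };
        return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}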
> It doesn't make sense to claim that using
> vfio properly is somehow less secure or less portable.
>
> What I find particularly ugly is that this 'IFC VF NIC' driver
> pretends to be a mediated vfio device, but actually bypasses all the
> mediated device ops for managing dma security and just directly plugs
> the system IOMMU for the underlying PCI device into vfio.
>
> I suppose this little hack is what is motivating this abuse of vfio in
> the first place?
>
> Frankly I think a kernel driver touching a PCI function for which vfio
> is now controlling the system IOMMU is a violation of the security
> model, and I'm very surprised AlexW didn't NAK this idea.
>
> Perhaps it is because none of the patches actually describe how the
> DMA security model for this so-called mediated device works? :(
That can be improved, good point.
> Or perhaps it is because this submission is split up so much it is
> hard to see what is being proposed? (I note this IFC driver is the
> first user of the mdev_set_iommu_device() function)
I agree it's hard to follow, but then three people seem to be working on
this at the same time.
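For readers trying to piece the split-up series together, here is my rough
understanding of what that mdev_set_iommu_device() call amounts to. A
sketch only, with made-up names, not the actual IFC driver code:

#include <linux/device.h>
#include <linux/mdev.h>

/* Sketch of an mdev create callback that tells vfio-mdev which physical
 * device backs this mdev's DMA, so the type1 backend ends up programming
 * the system IOMMU for the parent PCI VF directly. */
static int example_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
{
        struct device *parent = mdev_parent_dev(mdev);  /* the PCI VF */

        /* ... vendor specific setup of the VF would go here ... */

        return mdev_set_iommu_device(mdev_dev(mdev), parent);
}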
> > It is the kernel's job to abstract hardware away and present a unified
> > interface as far as possible.
>
> Sure, you could create a virtio accelerator driver framework under the
> new drivers/accel tree I hear was started. That could make some sense,
> if we had HW that actually required/benefited from kernel involvement.
>
> Jason