From: Tiwei Bie <tiwei.bie@intel.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	alex.williamson@redhat.com, maxime.coquelin@redhat.com,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, dan.daly@intel.com,
	cunming.liang@intel.com, zhihong.wang@intel.com,
	lingshan.zhu@intel.com
Subject: Re: [RFC v4 0/3] vhost: introduce mdev based hardware backend
Date: Thu, 19 Sep 2019 23:45:52 +0800
Message-ID: <20190919154552.GA27657@___>
In-Reply-To: <d2efe7e4-cf13-437d-e2dc-e2779fac7d2f@redhat.com>

On Thu, Sep 19, 2019 at 09:08:11PM +0800, Jason Wang wrote:
> On 2019/9/18 at 10:32 PM, Michael S. Tsirkin wrote:
> > > > > So I have some questions:
> > > > > 
> > > > > 1) Compared to method 2, what's the advantage of creating a new vhost char
> > > > > device? I guess it's for keeping API compatibility?
> > > > One benefit is that we can avoid doing vhost ioctls on
> > > > VFIO device fd.
> > > Yes, but is there any benefit from doing this?
> > It does seem a bit more modular, but it's certainly not a big deal.
> 
> Ok, if we go this way, it could be as simple as providing some callbacks to
> vhost; then vhost can just forward the ioctls through parent_ops.
> 
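
(Something like this rough sketch, I guess? The struct names and the
ioctl callback are made up here, just to illustrate the forwarding:)

#include <linux/mdev.h>

struct vhost_mdev_parent_ops {
	long (*ioctl)(struct mdev_device *mdev, unsigned int cmd,
		      unsigned long arg);
};

struct vhost_mdev_device {
	struct mdev_device *mdev;
	const struct vhost_mdev_parent_ops *parent_ops;
};

static long vhost_mdev_ioctl(struct vhost_mdev_device *vdev,
			     unsigned int cmd, unsigned long arg)
{
	/* vhost doesn't interpret the command, it just forwards it
	 * to the vDPA parent through the callback. */
	return vdev->parent_ops->ioctl(vdev->mdev, cmd, arg);
}
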
> > 
> > > > > 2) For method 2, is there any easy way for a user/admin to distinguish,
> > > > > e.g., a vfio-mdev for vhost from an ordinary vfio-mdev?
> > > > I think device-api could be a choice.
> > > Ok.
> > > 
> > > 
> > > > > I saw you introduce
> > > > > an ops matching helper, but it's not friendly to management.
> > > > The ops matching helper is just to check whether a given
> > > > vfio-device is based on an mdev device.
> > > > 
> > > > > 3) A drawback of 1) and 2) is that they must follow vfio_device_ops, which
> > > > > assumes the parameters come from userspace; this prevents supporting
> > > > > kernel virtio drivers.
> > > > > 
> > > > > 4) So comes the idea of method 3: since it registers a new vhost-mdev driver,
> > > > > we can use device-specific ops instead of the VFIO ones; then we can have a
> > > > > common API between the vDPA parent and the vhost-mdev/virtio-mdev drivers.
> > > > As the above draft shows, this requires introducing a new
> > > > VFIO device driver. I think Alex's opinion matters here.
> 
> Just to clarify, a new type of mdev driver that provides dummy
> vfio_device_ops to VFIO, to make the container DMA ioctls work.

I see. Thanks! IIUC, you mean we can provide a very tiny
VFIO device driver in drivers/vhost/mdev.c, e.g.:

#include <linux/module.h>
#include <linux/mdev.h>
#include <linux/vfio.h>

static int vfio_vhost_mdev_open(void *device_data)
{
	/* Nothing to do yet; just pin the module while the device is open. */
	if (!try_module_get(THIS_MODULE))
		return -ENODEV;
	return 0;
}

static void vfio_vhost_mdev_release(void *device_data)
{
	module_put(THIS_MODULE);
}

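/*
 * Dummy ops: only open/release are provided, which is just enough to
 * let userspace use VFIO's container/group DMA ioctls with this device,
 * as suggested above.
 */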
static const struct vfio_device_ops vfio_vhost_mdev_dev_ops = {
	.name		= "vfio-vhost-mdev",
	.open		= vfio_vhost_mdev_open,
	.release	= vfio_vhost_mdev_release,
};

static int vhost_mdev_probe(struct device *dev)
{
	struct mdev_device *mdev = to_mdev_device(dev);

	/*
	 * Check the mdev device_id proposed in
	 * https://lkml.org/lkml/2019/9/12/151
	 */

	return vfio_add_group_dev(dev, &vfio_vhost_mdev_dev_ops, mdev);
}

static void vhost_mdev_remove(struct device *dev)
{
	vfio_del_group_dev(dev);
}

static struct mdev_driver vhost_mdev_driver = {
	.name	= "vhost_mdev",
	.probe	= vhost_mdev_probe,
	.remove	= vhost_mdev_remove,
};
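
For completeness, the module would also need to register this driver
with the mdev core at load time. A minimal sketch, assuming the
mdev_register_driver()/mdev_unregister_driver() API we have today:

static int __init vhost_mdev_init(void)
{
	/* Make the driver available for mdev devices to bind to. */
	return mdev_register_driver(&vhost_mdev_driver, THIS_MODULE);
}
module_init(vhost_mdev_init);

static void __exit vhost_mdev_exit(void)
{
	mdev_unregister_driver(&vhost_mdev_driver);
}
module_exit(vhost_mdev_exit);

MODULE_LICENSE("GPL");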

So we can bind the above mdev driver to virtio-mdev compatible
mdev devices when we want to use vhost-mdev.

After binding the above driver to the mdev device, we can set up the
IOMMU via VFIO, get the VFIO device fd of this mdev device, and pass
it to the vhost fd (/dev/vhost-mdev) with a SET_BACKEND ioctl.
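
On the userspace side, the whole flow could look roughly like the
sketch below. The VFIO calls are the standard ones; the
/dev/vhost-mdev node and the VHOST_MDEV_SET_BACKEND name are
placeholders for whatever UAPI we end up defining:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Sketch only, error handling omitted. group_path is the mdev's IOMMU
 * group (e.g. "/dev/vfio/12"), mdev_uuid is the mdev device name. */
static int setup_vhost_mdev(const char *group_path, const char *mdev_uuid)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open(group_path, O_RDWR);
	int device, vhost;

	/* Standard VFIO container/group setup. */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1v2_IOMMU);

	/* VFIO_IOMMU_MAP_DMA calls for the vring/buffer memory go here. */

	device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, mdev_uuid);

	vhost = open("/dev/vhost-mdev", O_RDWR);
	ioctl(vhost, VHOST_MDEV_SET_BACKEND, &device);	/* placeholder */
	return vhost;
}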

Thanks,
Tiwei

> 
> Thanks
> 
> 
> > > Yes, it is.
> > > 
> > > Thanks
> > > 
> > > 

Thread overview: 17+ messages
2019-09-17  1:02 [RFC v4 0/3] vhost: introduce mdev based hardware backend Tiwei Bie
2019-09-17  1:02 ` [RFC v4 1/3] vfio: support getting vfio device from device fd Tiwei Bie
2019-09-17  1:02 ` [RFC v4 2/3] vfio: support checking vfio driver by device ops Tiwei Bie
2019-09-17  1:02 ` [RFC v4 3/3] vhost: introduce mdev based hardware backend Tiwei Bie
2019-09-17  7:26   ` Jason Wang
2019-09-20  4:21     ` Tiwei Bie
2019-09-17  1:29 ` [RFC v4 0/3] " Jason Wang
2019-09-17  3:32 ` Jason Wang
2019-09-17 10:58   ` Tiwei Bie
2019-09-18  5:51     ` Jason Wang
2019-09-18 14:32       ` Michael S. Tsirkin
2019-09-19 13:08         ` Jason Wang
2019-09-19 15:45           ` Tiwei Bie [this message]
2019-09-20  0:59             ` Jason Wang
2019-09-20  1:30             ` Jason Wang
2019-09-20  2:16               ` Tiwei Bie
2019-09-20  2:36                 ` Jason Wang
