From: Tiwei Bie <tiwei.bie@intel.com>
To: Jason Wang <jasowang@redhat.com>
Cc: mst@redhat.com, alex.williamson@redhat.com,
maxime.coquelin@redhat.com, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, dan.daly@intel.com,
cunming.liang@intel.com, zhihong.wang@intel.com,
lingshan.zhu@intel.com
Subject: Re: [PATCH v2] vhost: introduce mdev based hardware backend
Date: Wed, 23 Oct 2019 15:07:47 +0800 [thread overview]
Message-ID: <20191023070747.GA30533@___> (raw)
In-Reply-To: <ac36f1e3-b972-71ac-fe0c-3db03e016dcf@redhat.com>
On Wed, Oct 23, 2019 at 01:46:23PM +0800, Jason Wang wrote:
> > On 2019/10/23 11:02 AM, Tiwei Bie wrote:
> > On Tue, Oct 22, 2019 at 09:30:16PM +0800, Jason Wang wrote:
> > > On 2019/10/22 5:52 PM, Tiwei Bie wrote:
> > > > This patch introduces an mdev based hardware vhost backend.
> > > > This backend is built on top of the same abstraction used
> > > > in virtio-mdev and provides a generic vhost interface for
> > > > userspace to accelerate the virtio devices in guest.
> > > >
> > > > This backend is implemented as an mdev device driver on top
> > > > of the same mdev device ops used in virtio-mdev but using
> > > > a different mdev class id, and it will register the device
> > > > as a VFIO device for userspace to use. Userspace can setup
> > > > the IOMMU with the existing VFIO container/group APIs and
> > > > then get the device fd with the device name. After getting
> > > > the device fd of this device, userspace can use vhost ioctls
> > > > to setup the backend.
> > > >
> > > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > > > ---
> > > > This patch depends on below series:
> > > > https://lkml.org/lkml/2019/10/17/286
> > > >
> > > > v1 -> v2:
> > > > - Replace _SET_STATE with _SET_STATUS (MST);
> > > > - Check status bits at each step (MST);
> > > > - Report the max ring size and max number of queues (MST);
> > > > - Add missing MODULE_DEVICE_TABLE (Jason);
> > > > - Only support the network backend w/o multiqueue for now;
> > >
> > > Any idea on how to extend it to support devices other than net? I think we
> > > want a generic API or an API that could be made generic in the future.
> > >
> > > Do we want to e.g. have a generic vhost-mdev for all kinds of devices, or
> > > introduce e.g. vhost-net-mdev and vhost-scsi-mdev?
> > One possible way is to do what vhost-user does. I.e. apart from
> > the generic ring, features, ... related ioctls, we also introduce
> > device-specific ioctls when we need them. As vhost-mdev just needs
> > to forward configs between the parent and userspace and won't even
> > cache any info when possible,
>
>
> So it looks to me this is only possible if we expose e.g. set_config and
> get_config to userspace.
The set_config and get_config interfaces don't cover all of the
device-specific settings. We also have the ctrlq in virtio-net.
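(To illustrate why config access alone isn't enough: virtio-net's control
virtqueue carries commands whose format only a net-aware module understands.
Below is a minimal sketch of the command header, following the two-byte
layout of struct virtio_net_ctrl_hdr from the virtio spec; the struct and
field comments here are illustrative, not kernel UAPI.)

```c
#include <stdint.h>

/* Control virtqueue command header, mirroring the layout of
 * struct virtio_net_ctrl_hdr from the virtio spec: one class
 * byte and one command byte, followed by command-specific data. */
struct net_ctrl_hdr {
	uint8_t class;	/* e.g. VIRTIO_NET_CTRL_MQ */
	uint8_t cmd;	/* e.g. VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET */
} __attribute__((packed));
```

Parsing commands like MQ_VQ_PAIRS_SET out of this header is inherently
net-specific work, which is the point being made about the ctrlq.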
>
>
> > I think it might be better to do
> > this in one generic vhost-mdev module.
>
>
> Looking at the definitions of VhostUserRequest in QEMU, it mixes generic
> APIs with device-specific APIs. If we want to go this way (a generic
> vhost-mdev), more questions need to be answered:
>
> 1) How could userspace know which type of vhost it would use? Do we need to
> expose the virtio subsystem device to userspace in this case?
>
> 2) That generic vhost-mdev module still needs to filter out unsupported
> ioctls for a specific type. E.g. if it probes a net device, it should refuse
> APIs for other types. This is in fact a vhost-mdev-net, just not modularized
> on top of vhost-mdev.
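(A hypothetical sketch of that consequence: a "generic" vhost-mdev core
would still need per-type knowledge to refuse mismatched requests. The type
names and request numbers below are illustrative only; the numbers echo the
nr fields of the existing VHOST_NET_SET_BACKEND and VHOST_SCSI_SET_ENDPOINT
ioctls, but this is not real kernel API.)

```c
#include <stdbool.h>

/* Illustrative device types a vhost-mdev core might probe. */
enum vhost_mdev_type { VHOST_MDEV_NET, VHOST_MDEV_SCSI };

/* Hypothetical device-specific request numbers. */
#define NET_SET_BACKEND_NR	0x30
#define SCSI_SET_ENDPOINT_NR	0x40

/* Per-type filtering that a generic module would still have to embed. */
static bool ioctl_allowed(enum vhost_mdev_type type, unsigned int nr)
{
	switch (nr) {
	case NET_SET_BACKEND_NR:
		return type == VHOST_MDEV_NET;
	case SCSI_SET_ENDPOINT_NR:
		return type == VHOST_MDEV_SCSI;
	default:
		/* Generic ring/feature requests pass through. */
		return true;
	}
}
```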
>
>
> >
> > >
> > > > - Some minor fixes and improvements;
> > > > - Rebase on top of virtio-mdev series v4;
[...]
> > > > +
> > > > +static long vhost_mdev_get_features(struct vhost_mdev *m, u64 __user *featurep)
> > > > +{
> > > > + if (copy_to_user(featurep, &m->features, sizeof(m->features)))
> > > > + return -EFAULT;
> > >
> > > As discussed in the previous version, do we need to filter out the MQ feature here?
> > I think it's more straightforward to let the parent drivers filter
> > out the unsupported features. Otherwise it would be tricky
> > when we want to add more features to the vhost-mdev module,
>
>
> It's as simple as removing the feature from the blacklist?
It's not really that easy. It may break the old drivers.
>
>
> > i.e. if
> > the parent drivers may expose unsupported features and rely on
> > vhost-mdev to filter them out, these features will be exposed
> > to userspace automatically when they are enabled in vhost-mdev
> > in the future.
>
>
> The issue is, only vhost-mdev knows its own limitations. E.g. in
> this patch, vhost-mdev only implements a subset of the transport API, but
> the parent doesn't know about that.
>
> Taking MQ as an example again, there's no way (or no need) for the parent
> to know that vhost-mdev does not support MQ.
The mdev is an MDEV_CLASS_ID_VHOST mdev device. When the parent
driver is being developed, it should know the features currently
supported by vhost-mdev.
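(A concrete sketch of the position being debated: if vhost-mdev kept its own
feature whitelist, a parent offering VIRTIO_NET_F_MQ would have that bit
masked before userspace sees it. The feature bit numbers below are the ones
from the virtio spec; the whitelist itself is purely hypothetical.)

```c
#include <stdint.h>

/* Feature bit numbers as defined by the virtio spec. */
#define VIRTIO_NET_F_MQ_BIT	22
#define VIRTIO_F_VERSION_1_BIT	32

/* Hypothetical whitelist of features a vhost-mdev core understands;
 * MQ is deliberately absent, mirroring the discussion above. */
static const uint64_t vhost_mdev_known_features =
	1ULL << VIRTIO_F_VERSION_1_BIT;

/* What userspace would see if vhost-mdev masked the parent's features. */
static uint64_t filter_features(uint64_t parent_features)
{
	return parent_features & vhost_mdev_known_features;
}
```

The alternative argued for in this mail is that the parent never offers MQ
in the first place, so no such mask is needed in the core.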
> And this allows old kernels to work with new
> parent drivers.
The new drivers should provide things like VIRTIO_MDEV_F_VERSION_1
to be compatible with the old kernels. When VIRTIO_MDEV_F_VERSION_1
is provided/negotiated, the behaviours should be consistent.
>
> So basically we have three choices here:
>
> 1) Implement what vhost-user did and implement a generic vhost-mdev (but it
> may still have lots of device-specific code). To support advanced features
> which require access to the config, lots of APIs still need to be added.
>
> 2) Implement what vhost-kernel did: have a generic vhost-mdev driver and a
> vhost bus on top for matching device-specific modules, e.g. vhost-mdev-net.
> We still have device-specific APIs but limit them to the device-specific
> modules. This still requires new ioctls for advanced features like MQ.
>
> 3) Simply expose all virtio-mdev transport to userspace.
Currently, the virtio-mdev transport is a set of function callbacks
defined in the kernel. How could we simply expose the virtio-mdev
transport to userspace?
> A generic module
> without any type-specific code (like virtio-mdev). No dedicated API is
> needed for e.g. MQ. But then the API would look much different from what
> vhost currently does.
>
> Considering the limitation of 1), I tend to choose 2 or 3. What's your opinion?
>
>
Thread overview: 20+ messages
2019-10-22 9:52 [PATCH v2] vhost: introduce mdev based hardware backend Tiwei Bie
2019-10-22 13:30 ` Jason Wang
2019-10-23 3:02 ` Tiwei Bie
2019-10-23 5:46 ` Jason Wang
2019-10-23 7:07 ` Tiwei Bie [this message]
2019-10-23 7:25 ` Jason Wang
2019-10-23 10:11 ` Tiwei Bie
2019-10-23 10:29 ` Jason Wang
2019-10-24 4:21 ` Tiwei Bie
2019-10-24 8:03 ` Jason Wang
2019-10-24 8:32 ` Jason Wang
2019-10-24 9:18 ` Tiwei Bie
2019-10-24 10:42 ` Jason Wang
2019-10-25 9:54 ` Jason Wang
2019-10-25 12:16 ` Michael S. Tsirkin
2019-10-28 1:58 ` Tiwei Bie
2019-10-28 3:50 ` Jason Wang
2019-10-29 9:57 ` Tiwei Bie
2019-10-29 10:48 ` Jason Wang
2019-10-30 1:27 ` Tiwei Bie