From: "Michael S. Tsirkin" <mst@redhat.com>
To: "Longpeng (Mike,
Cloud Infrastructure Service Product Dept.)"
<longpeng2@huawei.com>
Cc: "jasowang@redhat.com" <jasowang@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
Yechuan <yechuan@huawei.com>,
"xieyongji@bytedance.com" <xieyongji@bytedance.com>,
"Gonglei \(Arei\)" <arei.gonglei@huawei.com>,
"parav@nvidia.com" <parav@nvidia.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
"sgarzare@redhat.com" <sgarzare@redhat.com>
Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
Date: Sun, 12 Dec 2021 04:30:02 -0500
Message-ID: <20211212042818-mutt-send-email-mst@kernel.org>
In-Reply-To: <721bbc1c27f545babdfbd17e1461e9f2@huawei.com>
On Sat, Dec 11, 2021 at 03:00:27AM +0000, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) wrote:
>
>
> > -----Original Message-----
> > From: Stefan Hajnoczi [mailto:stefanha@redhat.com]
> > Sent: Thursday, December 9, 2021 5:17 PM
> > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > <longpeng2@huawei.com>
> > Cc: jasowang@redhat.com; mst@redhat.com; parav@nvidia.com;
> > xieyongji@bytedance.com; sgarzare@redhat.com; Yechuan <yechuan@huawei.com>;
> > Gonglei (Arei) <arei.gonglei@huawei.com>; qemu-devel@nongnu.org
> > Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
> >
> > On Wed, Dec 08, 2021 at 01:20:10PM +0800, Longpeng(Mike) wrote:
> > > From: Longpeng <longpeng2@huawei.com>
> > >
> > > Hi guys,
> > >
> > > This patch introduces vhost-vdpa-net device, which is inspired
> > > by vhost-user-blk and the proposal of vhost-vdpa-blk device [1].
> > >
> > > I've tested this patch on Huawei's offload card:
> > > ./x86_64-softmmu/qemu-system-x86_64 \
> > > -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0
> > >
> > > For virtio hardware offloading, the most important requirement for us
> > > is to support live migration between offloading cards from different
> > > vendors. The combination of netdev and virtio-net seems too heavy; we
> > > prefer a lightweight way.
> > >
> > > Maybe we could support both in the future? Such as:
> > >
> > > * Lightweight
> > > Net: vhost-vdpa-net
> > > Storage: vhost-vdpa-blk
> > >
> > > * Heavy but more powerful
> > > Net: netdev + virtio-net + vhost-vdpa
> > > Storage: bdrv + virtio-blk + vhost-vdpa
> > >
> > > [1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html
> >
> > Stefano presented a plan for vdpa-blk at KVM Forum 2021:
> > https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat
> >
> > It's closer to today's virtio-net + vhost-net approach than the
> > vhost-vdpa-blk device you have mentioned. The idea is to treat vDPA as
> > an offload feature rather than a completely separate code path that
> > needs to be maintained and tested. That way QEMU's block layer features
> > and live migration work with vDPA devices and re-use the virtio-blk
> > code. The key functionality that has not been implemented yet is a "fast
> > path" mechanism that allows the QEMU virtio-blk device's virtqueue to be
> > offloaded to vDPA.
> >
> > The unified vdpa-blk architecture should deliver the same performance
> > as the vhost-vdpa-blk device you mentioned but with more features, so I
> > wonder what aspects of the vhost-vdpa-blk idea are important to you?
> >
> > QEMU already has vhost-user-blk, which takes a similar approach as the
> > vhost-vdpa-blk device you are proposing. I'm not against the
> > vhost-vdpa-blk approach in principle, but would like to understand your
> > requirements and see if there is a way to collaborate on one vdpa-blk
> > implementation instead of dividing our efforts between two.
> >
>
> We prefer a simple way in the virtio hardware offloading case; it would reduce
> our maintenance workload, since we would no longer need to maintain virtio-net,
> netdev, virtio-blk, bdrv, and so on. If we need to support other vdpa devices
> (such as virtio-crypto or virtio-fs) in the future, would we also need to
> maintain the corresponding device emulation code?
>
> For the virtio hardware offloading case, we usually use the vfio-pci framework;
> it saves a lot of maintenance work in QEMU because we don't need to touch the
> device types. Inspired by Jason, what we really prefer is "vhost-vdpa-pci/mmio",
> used instead of vfio-pci: it could provide the same performance as vfio-pci,
> but it would be *possible* to support live migration between offloading cards
> from different vendors.
OK, so the feature you are dropping would be migration between
vdpa, vhost, and virtio backends. I think that, given vhost-vdpa-blk, this seems
fair enough... What do others think?
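
For reference, here is a rough sketch of the two invocations being compared.
The vhost-vdpa-net-pci syntax is taken from the patch under discussion; the
netdev form follows QEMU's existing vhost-vdpa netdev support, so treat the
exact option names as indicative rather than authoritative:

  # Lightweight path proposed in this RFC (single device, no netdev):
  qemu-system-x86_64 \
      -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0

  # Existing "heavy" path: vhost-vdpa netdev backend + virtio-net device:
  qemu-system-x86_64 \
      -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
      -device virtio-net-pci,netdev=vdpa0

The latter keeps the virtio-net device model (and its migration and feature
negotiation code) in QEMU, which is essentially the part the lightweight path
drops.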
> > Stefan