From: Yuval Shaia <yuval.shaia@oracle.com>
To: Devesh Sharma <devesh.sharma@broadcom.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>, "mst@redhat.com" <mst@redhat.com>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	Cornelia Huck <cohuck@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"virtualization@lists.linux-foundation.org" <virtualization@lists.linux-foundation.org>
Subject: Re: [Qemu-devel] [RFC 0/3] VirtIO RDMA
Date: Mon, 15 Apr 2019 13:27:39 +0300
Message-ID: <20190415102738.GB6145@lap1>
In-Reply-To: <CANjDDBj0rqZEmHzMH+2461_DvjV5K4hT=hJ_usBuucV4Xwh84g@mail.gmail.com>

On Fri, Apr 12, 2019 at 03:21:56PM +0530, Devesh Sharma wrote:
> On Thu, Apr 11, 2019 at 11:11 PM Yuval Shaia <yuval.shaia@oracle.com> wrote:
> >
> > On Thu, Apr 11, 2019 at 08:34:20PM +0300, Yuval Shaia wrote:
> > > On Thu, Apr 11, 2019 at 05:24:08PM +0000, Jason Gunthorpe wrote:
> > > > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > > > > On Thu, 11 Apr 2019 14:01:54 +0300
> > > > > Yuval Shaia <yuval.shaia@oracle.com> wrote:
> > > > >
> > > > > > Data center backends use more and more RDMA or RoCE devices, and
> > > > > > more and more software runs in virtualized environments.
> > > > > > There is a need for a standard to enable RDMA/RoCE on Virtual
> > > > > > Machines.
> > > > > >
> > > > > > Virtio is the optimal solution since it is the de-facto
> > > > > > para-virtualization technology, and also because the Virtio
> > > > > > specification allows Hardware Vendors to support the Virtio
> > > > > > protocol natively in order to achieve bare-metal performance.
> > > > > >
> > > > > > This RFC is an effort to address the challenges in defining the
> > > > > > RDMA/RoCE Virtio Specification, and a look forward at possible
> > > > > > implementation techniques.
> > > > > >
> > > > > > Open issues/Todo list:
> > > > > > The list is huge; this is only the starting point of the project.
> > > > > > Anyway, here is one example of an item on the list:
> > > > > > - Multi VirtQ: Every QP has two rings and every CQ has one. This
> > > > > >   means that in order to support, for example, 32K QPs we will
> > > > > >   need 64K VirtQs. Not sure that this is reasonable, so one option
> > > > > >   is to have one for all and multiplex the traffic on it. This is
> > > > > >   not a good approach, as by design it introduces potential
> > > > > >   starvation. Another approach would be multiple queues and
> > > > > >   round-robin (for example) between them.
> > > > > >
> > > > > > Expectations from this posting:
> > > > > > In general, any comment is welcome, starting from "hey, drop this
> > > > > > as it is a very bad idea" to "yeah, go ahead, we really want it".
> > > > > > The idea here is that since it is not a minor effort, I first want
> > > > > > to know if there is some sort of interest in the community for
> > > > > > such a device.
> > > > >
> > > > > My first reaction is: Sounds sensible, but it would be good to have
> > > > > a spec for this :)
> > > >
> > > > I'm unclear why you'd want to have a virtio queue for anything other
> > > > than some kind of command channel.
> > > >
> > > > I'm not sure a QP or CQ benefits from this??
> > >
> > > A virtqueue is a standard mechanism to pass data from guest to host. By
> >
> > And vice versa (CQ?)
> >
> > > saying that - it really sounds like QP send and recv rings. So my
> > > thought is to use a standard way for rings. As I've learned, this is
> > > how it is used by other virtio devices, e.g. virtio-net.
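
To make this concrete, here is a rough sketch of what post_send could look
like in a guest driver that maps a QP's send ring onto a virtqueue. This is
an illustration only: struct virtio_rdma_sq_req, struct virtio_rdma_qp and
virtio_rdma_post_send() are made-up names, and the request layout is
invented; only virtqueue_add_sgs() and virtqueue_kick() are the existing
virtio core API such a driver would actually call.

#include <linux/virtio.h>
#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

/* Hypothetical request header the device would parse; layout is made up */
struct virtio_rdma_sq_req {
	__le32 opcode;		/* IB_WR_SEND, IB_WR_RDMA_WRITE, ... */
	__le32 send_flags;
	__le64 wr_id;
};

struct virtio_rdma_qp {
	struct virtqueue *sq;	/* QP send ring, mapped to a virtqueue */
	struct virtqueue *rq;	/* QP recv ring, mapped to a virtqueue */
};

static int virtio_rdma_post_send(struct virtio_rdma_qp *qp,
				 const struct ib_send_wr *wr,
				 struct virtio_rdma_sq_req *req,
				 void *payload, unsigned int len)
{
	struct scatterlist hdr, data;
	struct scatterlist *sgs[] = { &hdr, &data };
	int rc;

	req->opcode = cpu_to_le32(wr->opcode);
	req->send_flags = cpu_to_le32(wr->send_flags);
	req->wr_id = cpu_to_le64(wr->wr_id);

	sg_init_one(&hdr, req, sizeof(*req));
	sg_init_one(&data, payload, len);

	/* Two driver->device buffers, no device->driver buffers */
	rc = virtqueue_add_sgs(qp->sq, sgs, 2, 0, req, GFP_ATOMIC);
	if (rc)
		return rc;

	virtqueue_kick(qp->sq);	/* notify the device, i.e. the doorbell */
	return 0;
}

A completion could then come back the same way on a per-CQ virtqueue, with
the device filling in a device->driver buffer - which is the "and vice
versa" point above.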
> > > >
> > > > Jason
> >
> I would like to ask a more basic question: how will a virtio queue glue
> to actual h/w QPs? I may be too naive though.

Have to admit - I have no idea. This work is based on an emulated device,
so in my case the emulated device is creating the virtqueue. I guess that
a HW device will create a QP and expose a virtqueue interface to it. The
same driver should serve both the SW and HW devices.

One of the objectives of this RFC is to start a collaborative effort and
to collect implementation notes/ideas from HW vendors.

>
> -Regards
> Devesh
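
Regarding the multi-VirtQ item in the cover letter, here is a strawman of
the round-robin option. Again, this is an illustration only and every name
in it is made up; atomic_fetch_inc() is the existing kernel primitive.

/*
 * Instead of a sq/rq virtqueue pair per QP, the device exposes a fixed
 * pool of send virtqueues and the driver assigns each new QP to one of
 * them, round-robin, at QP creation time.
 */
#define VIRTIO_RDMA_NUM_SQ 16

struct virtio_rdma_dev {
	struct virtqueue *sq_pool[VIRTIO_RDMA_NUM_SQ];
	atomic_t next_sq;
};

/* Called once when a QP is created */
static struct virtqueue *virtio_rdma_pick_sq(struct virtio_rdma_dev *dev)
{
	unsigned int i = atomic_fetch_inc(&dev->next_sq);

	return dev->sq_pool[i % VIRTIO_RDMA_NUM_SQ];
}

This caps the virtqueue count regardless of the number of QPs, at the cost
of QPs sharing (and locking) a virtqueue - which is exactly the
fairness/starvation trade-off mentioned in the open-issues list.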