From: Jason Gunthorpe <jgg@mellanox.com>
To: Cornelia Huck <cohuck@redhat.com>
Cc: Yuval Shaia <yuval.shaia@oracle.com>,
"virtualization@lists.linux-foundation.org"
<virtualization@lists.linux-foundation.org>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"mst@redhat.com" <mst@redhat.com>,
"marcel.apfelbaum@gmail.com" <marcel.apfelbaum@gmail.com>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>
Subject: Re: [Qemu-devel] [RFC 0/3] VirtIO RDMA
Date: Thu, 11 Apr 2019 17:24:08 +0000 [thread overview]
Message-ID: <20190411172402.GA14509@mellanox.com> (raw)
In-Reply-To: <20190411190215.2163572e.cohuck@redhat.com>
On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> On Thu, 11 Apr 2019 14:01:54 +0300
> Yuval Shaia <yuval.shaia@oracle.com> wrote:
>
> > Data center backends use more and more RDMA or RoCE devices, and more and
> > more software runs in virtualized environments.
> > There is a need for a standard to enable RDMA/RoCE in virtual machines.
> >
> > Virtio is the optimal solution, since it is the de-facto para-virtualization
> > technology, and also because the Virtio specification allows hardware
> > vendors to support the Virtio protocol natively in order to achieve
> > bare-metal performance.
> >
> > This RFC is an effort to address the challenges in defining the RDMA/RoCE
> > Virtio specification, and a look ahead at possible implementation
> > techniques.
> >
> > Open issues/TODO list:
> > The list is huge; this is only the starting point of the project.
> > Anyway, here is one example of an item on the list:
> > - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
> >   in order to support, for example, 32K QPs we would need 64K virtqueues. It
> >   is not clear that this is reasonable, so one option is to have a single
> >   virtqueue for everything and multiplex the traffic on it. This is not a
> >   good approach, as by design it introduces potential starvation. Another
> >   approach would be multiple queues with round-robin (for example) between
> >   them; a rough sketch of that option follows below the quoted text.
> >
> > Expectations from this posting:
> > In general, any comment is welcome, ranging from "hey, drop this, it is a
> > very bad idea" to "yeah, go ahead, we really want it".
> > The idea here is that since this is not a minor effort, I first want to know
> > whether there is some interest in the community for such a device.
>
> My first reaction is: Sounds sensible, but it would be good to have a
> spec for this :)
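
To make the round-robin option from the "Multi VirtQ" item above concrete,
here is a minimal sketch of binding each new QP to one virtqueue from a fixed,
shared pool. All structure and function names are hypothetical and not taken
from the RFC patches:

/*
 * Sketch of the "multiple queues + round-robin" option: instead of two
 * virtqueues per QP, the device exposes a fixed pool of send virtqueues
 * and each new QP is bound to one of them in round-robin order.
 */
#include <stdatomic.h>
#include <stdint.h>

#define VIRTIO_RDMA_NUM_SQ_VQS 64u      /* pool size chosen by the device */

struct virtqueue;                       /* opaque here; real type in the driver */

struct virtio_rdma_dev {
    struct virtqueue *sq_vqs[VIRTIO_RDMA_NUM_SQ_VQS];
    atomic_uint next_sq;                /* round-robin cursor */
};

struct virtio_rdma_qp {
    uint32_t qpn;
    struct virtqueue *sq;               /* shared virtqueue this QP posts sends on */
};

/* Bind a newly created QP to one of the shared send virtqueues. */
static void virtio_rdma_bind_qp(struct virtio_rdma_dev *dev,
                                struct virtio_rdma_qp *qp)
{
    unsigned int idx = atomic_fetch_add(&dev->next_sq, 1)
                       % VIRTIO_RDMA_NUM_SQ_VQS;

    qp->sq = dev->sq_vqs[idx];
}

The pool size bounds the number of virtqueues regardless of how many QPs the
guest creates; the trade-off is that QPs sharing a virtqueue can still delay
one another.
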
I'm unclear why you'd want to have a virtio queue for anything other
than some kind of command channel.
I'm not sure a QP or CQ benefits from this??
Jason
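
For comparison, the command-channel-only layout Jason suggests could look
roughly like the sketch below: one control virtqueue carries create/destroy
commands, while the QP and CQ rings themselves are ordinary guest memory that
the backend maps and accesses directly, so no per-QP virtqueues are needed.
The names and fields are hypothetical, not taken from the RFC patches:

#include <stdint.h>

enum virtio_rdma_ctrl_cmd {
    VIRTIO_RDMA_CMD_CREATE_QP,
    VIRTIO_RDMA_CMD_DESTROY_QP,
    VIRTIO_RDMA_CMD_CREATE_CQ,
    VIRTIO_RDMA_CMD_DESTROY_CQ,
};

/* One request placed on the control virtqueue by the guest driver. */
struct virtio_rdma_ctrl_req {
    uint32_t cmd;          /* enum virtio_rdma_ctrl_cmd */
    uint32_t handle;       /* QP/CQ number for destroy commands */
    uint64_t ring_gpa;     /* guest-physical address of the QP/CQ ring */
    uint32_t ring_entries; /* ring size in entries */
};

/* Device response written back through the same virtqueue. */
struct virtio_rdma_ctrl_resp {
    uint32_t status;       /* 0 on success */
    uint32_t handle;       /* QP/CQ number assigned by the device on create */
};
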