qemu-devel.nongnu.org archive mirror
From: Raphael Norwitz <raphael.norwitz@nutanix.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	John Levon <john.levon@nutanix.com>,
	Thanos Makatos <thanos.makatos@nutanix.com>,
	Swapnil Ingle <swapnil.ingle@nutanix.com>,
	"alexis.lescout@nutanix.com" <alexis.lescout@nutanix.com>,
	Felipe Franciosi <felipe@nutanix.com>,
	"mst@redhat.com" <mst@redhat.com>
Subject: Re: Accelerating non-standard disk types
Date: Thu, 19 May 2022 18:34:35 +0000	[thread overview]
Message-ID: <20220516182215.GA13470@raphael-debian-dev> (raw)
In-Reply-To: <YoO/TdP1ArazkpVX@stefanha-x1.localdomain>

On Tue, May 17, 2022 at 04:29:17PM +0100, Stefan Hajnoczi wrote:
> On Mon, May 16, 2022 at 05:38:31PM +0000, Raphael Norwitz wrote:
> > Hey Stefan,
> > 
> > We've been thinking about ways to accelerate other disk types such as
> > SATA and IDE rather than translating to SCSI and using QEMU's iSCSI
> > driver, with existing and more performant backends such as SPDK. We
> > think there are some options worth exploring:
> > 
> > [1] Keep using the SCSI translation in QEMU but back vDisks with a
> > vhost-user-scsi or vhost-user-blk backend device.
> 
> If I understand correctly the idea is to have a QEMU Block Driver that
> connects to SPDK using vhost-user-scsi/blk?
>

Yes - the idea would be to introduce logic that translates SATA/IDE
commands into SCSI or block requests and sends them via
vhost-user-{scsi,blk} to SPDK or any other vhost-user backend. Our
thought is that this is doable today, whereas we may have to wait for
QEMU to formally adopt libblkio before proceeding with [3]. Depending on
timelines it may make sense to implement [1] first and then switch over
to [3] later. Thoughts?
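To make the translation in [1] concrete, here is a minimal sketch of
mapping a guest ATA read/write into a byte-addressed block request that
could then be forwarded over vhost-user-blk. The struct and function
names (BlockReq, ide_to_block_req) are illustrative only, not QEMU or
SPDK API:

```c
#include <stdint.h>

#define ATA_SECTOR_SIZE 512

/* Hypothetical generic block request, as a vhost-user-blk backend
 * would consume it: byte offset + length rather than LBA + sectors. */
typedef struct {
    uint64_t offset;   /* byte offset into the backing disk */
    uint64_t len;      /* request length in bytes */
    int      is_write; /* 0 = read, 1 = write */
} BlockReq;

/* Convert an ATA LBA + sector count into a byte-addressed request,
 * e.g. ide_to_block_req(2048, 8, 0) covers 4 KiB starting at 1 MiB. */
static inline BlockReq ide_to_block_req(uint64_t lba, uint32_t nsectors,
                                        int is_write)
{
    BlockReq req = {
        .offset   = lba * ATA_SECTOR_SIZE,
        .len      = (uint64_t)nsectors * ATA_SECTOR_SIZE,
        .is_write = is_write,
    };
    return req;
}
```

The real work in option [1] is of course the command/DMA plumbing around
this, but the request-level mapping itself is mechanical, which is why
it seems doable today.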

> > [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> > client?).
> 
> This is definitely the option with the lowest overhead. I'm not sure if
> implementing SATA and IDE emulation in SPDK is worth the effort for
> saving the last few cycles.
>

Agreed - it's probably not worth exploring given the amount of work
involved. One argument in its favor is that it could improve security in
the multiprocess QEMU world, but to me that does not seem strong enough
to justify the effort, so I suggest we drop option [2].

> > [3] We've also been looking at your libblkio library. From your
> > description in
> > https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> > sounds like it may definitely play a role here, and possibly provide the
necessary abstractions to back I/O from these emulated disks to any
> > backends we may want?
> 
> Kevin Wolf has contributed a vhost-user-blk driver for libblkio. This
> lets you achieve #1 using QEMU's libblkio Block Driver. The guest still
> sees IDE or SATA but instead of translating to iSCSI the I/O requests
> are sent over vhost-user-blk.
> 
> I suggest joining the libblkio chat and we can discuss how to set this
> up (the QEMU libblkio BlockDriver is not yet in qemu.git):
> https://matrix.to/#/#libblkio:matrix.org

Great - I have joined and will follow up there.
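For anyone following along, once the libblkio BlockDriver lands, wiring
an IDE disk to a vhost-user-blk backend might look something like the
sketch below. The blockdev driver name and properties are assumptions on
my part, since the BlockDriver is not yet in qemu.git:

```shell
# Sketch only: driver/property names are guesses pending the libblkio
# BlockDriver merge into qemu.git.

# SPDK (or another vhost-user-blk backend) exports a UNIX socket,
# e.g. /var/tmp/vhost.sock. QEMU then connects via the libblkio
# blockdev driver, while the guest still sees a plain IDE disk:
qemu-system-x86_64 \
  -blockdev driver=virtio-blk-vhost-user,node-name=disk0,path=/var/tmp/vhost.sock \
  -device ide-hd,drive=disk0
```

This is exactly the shape of option [1]: the IDE-to-block translation
happens inside QEMU, and the resulting requests travel over
vhost-user-blk instead of the iSCSI driver.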

> 
> > We are planning to start a review of these options internally to survey
> > tradeoffs, potential timelines and practicality for these approaches. We
> > were also considering putting a submission together for KVM forum
> > describing our findings. Would you see any value in that?
> 
> I think it's always interesting to see performance results. I wonder if
> you have more cutting-edge optimizations or performance results you want
> to share at KVM Forum because IDE and SATA are more legacy/niche
> nowadays?
>

I realize I over-emphasized performance in my question - our larger goal
here is to unify the data path across all disk types. We have some hope
that SATA can be sped up a bit, but it's entirely possible that the MMIO
overhead will far outweigh any disk I/O improvements. Our thought was to
present a "Roadmap for offloading alternate disk types", but given your
and Paolo's responses it seems there isn't enough material to warrant a
KVM Forum talk, and we should instead invest the time in prototyping and
evaluating solutions.

> Stefan


Thread overview:
2022-05-16 17:38 Accelerating non-standard disk types Raphael Norwitz
2022-05-17 13:53 ` Paolo Bonzini
2022-05-19 18:39   ` Raphael Norwitz
2022-05-25 16:00     ` Stefan Hajnoczi
2022-05-31  3:06       ` Raphael Norwitz
2022-06-01 13:06         ` Stefan Hajnoczi
2022-05-17 15:29 ` Stefan Hajnoczi
2022-05-19 18:34   ` Raphael Norwitz [this message]
