From: Raphael Norwitz <raphael.norwitz@nutanix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>,
"stefanha@redhat.com" <stefanha@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
John Levon <john.levon@nutanix.com>,
Thanos Makatos <thanos.makatos@nutanix.com>,
Swapnil Ingle <swapnil.ingle@nutanix.com>,
Alexis Lescouet <alexis.lescouet@nutanix.com>,
Felipe Franciosi <felipe@nutanix.com>,
"mst@redhat.com" <mst@redhat.com>
Subject: Re: Accelerating non-standard disk types
Date: Thu, 19 May 2022 18:39:39 +0000 [thread overview]
Message-ID: <20220519183938.GB13470@raphael-debian-dev> (raw)
In-Reply-To: <fb522282-c750-2652-2e27-87c68819100b@redhat.com>
On Tue, May 17, 2022 at 03:53:52PM +0200, Paolo Bonzini wrote:
> On 5/16/22 19:38, Raphael Norwitz wrote:
> > [1] Keep using the SCSI translation in QEMU but back vDisks with a
> > vhost-user-scsi or vhost-user-blk backend device.
> > [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> > client?).
> > [3] We've also been looking at your libblkio library. From your
> > description in
> > https://lists.gnu.org/archive/html/qemu-devel/2021-04/msg06146.html it
> > sounds like it may well play a role here, and possibly provide the
> > necessary abstractions to back I/O from these emulated disks to any
> > backend we may want?
>
> First of all: have you benchmarked it? How much time is spent on MMIO vs.
> disk I/O?
>
Good point - we haven't benchmarked the emulation, exit, and translation
overheads, so it is quite possible that speeding up disk I/O would not
have a huge impact. We would definitely benchmark this before seriously
exploring any of the options, but as you rightly note, performance is
not the only motivation here.
> Of the options above, the most interesting to me is to implement a
> vhost-user-blk/vhost-user-scsi backend in QEMU, similar to the NVMe one,
> that would translate I/O submissions to virtqueue (including polling and the
> like) and could be used with SATA.
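>
> To make the translation step concrete, here is a minimal sketch of how a
> decoded IDE/SATA read could be mapped onto a virtio-blk request header
> before being placed on a vhost-user virtqueue. The virtio_blk_outhdr
> layout follows the VIRTIO 1.1 spec; the ata_read struct and
> translate_read() helper are hypothetical names for illustration, not
> QEMU code:
>
> ```c
> /* Sketch: translating an emulated ATA READ into a virtio-blk request.
>  * The header layout matches VIRTIO 1.1 sec. 5.2.6; ata_read and
>  * translate_read() are hypothetical, for illustration only. */
> #include <assert.h>
> #include <stdint.h>
> #include <stdio.h>
>
> #define VIRTIO_BLK_T_IN  0u   /* read (device-to-driver), per VIRTIO spec */
> #define VIRTIO_BLK_T_OUT 1u   /* write (driver-to-device) */
>
> struct virtio_blk_outhdr {    /* virtio-blk request header */
>     uint32_t type;
>     uint32_t ioprio;
>     uint64_t sector;          /* always in 512-byte units */
> };
>
> /* Hypothetical decoded ATA READ DMA command from the IDE emulation. */
> struct ata_read {
>     uint64_t lba;             /* logical block address, 512-byte sectors */
>     uint16_t count;           /* number of sectors */
> };
>
> static struct virtio_blk_outhdr translate_read(const struct ata_read *cmd)
> {
>     struct virtio_blk_outhdr hdr = {
>         .type   = VIRTIO_BLK_T_IN,
>         .ioprio = 0,
>         .sector = cmd->lba,   /* both sides use 512-byte sectors */
>     };
>     return hdr;
> }
>
> int main(void)
> {
>     struct ata_read cmd = { .lba = 2048, .count = 8 };
>     struct virtio_blk_outhdr hdr = translate_read(&cmd);
>     assert(hdr.type == VIRTIO_BLK_T_IN);
>     assert(hdr.sector == 2048);
>     printf("virtio-blk hdr: type=%u sector=%llu (%u sectors)\n",
>            hdr.type, (unsigned long long)hdr.sector, cmd.count);
>     return 0;
> }
> ```
>
> The real backend would also have to chain the data buffers and the
> one-byte status descriptor onto the virtqueue, and handle polling as
> Paolo mentions; the sketch only covers the command translation itself.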
>
We were certainly eyeing [1] as the most viable option in the immediate
future. That said, since a vhost-user-blk driver has been added to
libblkio, [3] also sounds like a strong option. Do you see any long-term
benefit to translating SATA/IDE submissions to virtqueues once libblkio
is adopted?
> For IDE specifically, I'm not sure how much it can be sped up since it has
> only 1 in-flight operation. I think using KVM coalesced I/O could provide
> an interesting boost (assuming instant or near-instant reply from the
> backend). If all you're interested in however is not really performance,
> but rather having a single "connection" to your back end, vhost-user is
> certainly an option.
>
Interesting - I will take a look at KVM coalesced I/O.

You're totally right though: performance is not our main interest for
these disk types. I should have emphasized offload rather than
acceleration. We would prefer to QA and support as few data paths as
possible, and a vhost-user offload mechanism would let us use the same
path for all I/O. I imagine other QEMU users who offload to backends
like SPDK and use SATA/IDE disk types may feel similarly?
> Paolo
Thread overview: 8+ messages
2022-05-16 17:38 Accelerating non-standard disk types Raphael Norwitz
2022-05-17 13:53 ` Paolo Bonzini
2022-05-19 18:39 ` Raphael Norwitz [this message]
2022-05-25 16:00 ` Stefan Hajnoczi
2022-05-31 3:06 ` Raphael Norwitz
2022-06-01 13:06 ` Stefan Hajnoczi
2022-05-17 15:29 ` Stefan Hajnoczi
2022-05-19 18:34 ` Raphael Norwitz