From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Alberto Faria <afaria@redhat.com>,
	qemu-devel@nongnu.org, Fam Zheng <fam@euphon.net>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org
Subject: Re: [PATCH 2/2] scsi-disk: Add native FUA support
Date: Tue, 25 Mar 2025 11:54:48 -0400
Message-ID: <20250325155448.GA137279@fedora>
In-Reply-To: <Z-KmIyWX8hxRNe2u@redhat.com>

On Tue, Mar 25, 2025 at 01:48:35PM +0100, Kevin Wolf wrote:
> On 06.03.2025 at 11:33, Kevin Wolf wrote:
> > On 04.03.2025 at 16:52, Alberto Faria wrote:
> > > Avoid emulating FUA when the driver supports it natively. This should
> > > provide better performance than a full flush after the write.
> > > 
> > > Signed-off-by: Alberto Faria <afaria@redhat.com>
> > 
> > Did you check whether the performance improvements show up in practice?
> > It's always nice to have numbers in the commit message for patches that
> > promise performance improvements.
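
To make the distinction in the commit message concrete: emulated FUA
issues the write and then a separate flush, while native FUA lets a
single write request carry the "make it stable" semantics down the
stack. Here is a self-contained sketch of the two patterns at the Linux
syscall level (an illustration with made-up helper names, not the QEMU
code; pwritev2() with RWF_DSYNC, available since Linux 4.7, is the
per-write counterpart of FUA):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

/* Emulated FUA: a plain write followed by a separate flush request. */
static int write_fua_emulated(int fd, const void *buf, size_t len, off_t off)
{
    if (pwrite(fd, buf, len, off) != (ssize_t)len) {
        return -1;
    }
    return fdatasync(fd); /* second round trip to drain the write cache */
}

/* Native FUA: a single write that is stable by the time it completes. */
static int write_fua_native(int fd, const void *buf, size_t len, off_t off)
{
    struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };

    /* RWF_DSYNC gives just this one write O_DSYNC semantics. */
    return pwritev2(fd, &iov, 1, off, RWF_DSYNC) == (ssize_t)len ? 0 : -1;
}

int main(void)
{
    char buf[512] = "data";
    int fd = open("fua-demo.img", O_WRONLY | O_CREAT, 0644);

    if (fd < 0 || write_fua_emulated(fd, buf, sizeof(buf), 0) < 0 ||
        write_fua_native(fd, buf, sizeof(buf), 512) < 0) {
        perror("fua-demo");
        return 1;
    }
    close(fd);
    return 0;
}

(The analogue inside QEMU's block layer is the BDRV_REQ_FUA request
flag; the point of the patch is to pass the guest's FUA bit down as
that flag instead of emulating it with the extra flush.)
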
> 
> I was curious to see how this plays out with virtio-scsi together with
> the recent series from Stefan (virtio-scsi multiqueue) and from me (FUA
> on the backend + polling improvements), so I ran some fio benchmarks
> with sync=1 to compare:
> 
> iops bs=4k cache=none           |    virtio-scsi    |     virtio-blk    |
> O_SYNC workload                 |   qd 1  |  qd 16  |   qd 1  |  qd 16  |
> --------------------------------+---------+---------+---------+---------+
> master                          |   21296 |  109747 |   25762 |  130576 |
> + virtio-scsi multiqueue        |   28798 |  121170 |       - |       - |
> + FUA in scsi-disk              |   51893 |  204199 |       - |       - |
> --------------------------------+---------+---------+---------+---------+
> Total change                    | +143.7% |  +86.1% |       - |       - |
> 
> (No new numbers for virtio-blk because the virtio-scsi patches
> obviously don't change anything for it. Also no numbers for FUA in
> file-posix because it's unused with cache=none.)
> 
> iops bs=4k cache=directsync     |    virtio-scsi    |     virtio-blk    |
> O_SYNC workload                 |   qd 1  |  qd 16  |   qd 1  |  qd 16  |
> --------------------------------+---------+---------+---------+---------+
> master                          |   32223 |  109748 |   45583 |  258416 |
> + FUA in file-posix + polling   |   32148 |  198665 |   58601 |  320190 |
> + virtio-scsi multiqueue        |   51739 |  225031 |       - |       - |
> + FUA in scsi-disk              |   56061 |  227535 |       - |       - |
> --------------------------------+---------+---------+---------+---------+
> Total change                    |  +74.0% | +107.3% |  +28.6% |  +23.9% |
> 
> Of course, the huge improvements on the virtio-scsi side only show how
> bad it was before. By most measures it is still behind virtio-blk even
> after all three patch series (apart from cache=none, where the
> availability of FUA on the device side makes a big difference; I expect
> virtio-blk to improve similarly once we implement FUA there).
> 
> Also note that the virtio-scsi multiqueue numbers were still taken with
> a single iothread, i.e. I wasn't even making use of the new feature per
> se. I assume much of the improvement comes from enabling polling: the
> series moved the event queue handling to the main loop, and that
> handling was what had prevented polling for virtio-scsi before. The
> series also got rid of an extra coroutine per request for the
> blk_is_available() call in virtio_scsi_ctx_check(), which might play a
> role, too.
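
For readers who haven't looked at the polling code: the idea is that
the iothread busy-polls the queues for a short budget before falling
back to sleeping on the notifier, so fast completions are picked up
without paying the wakeup latency. A toy sketch of that mechanism
(illustrative only, not QEMU's adaptive polling implementation):

#define _GNU_SOURCE
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <time.h>
#include <unistd.h>

static atomic_bool work_ready;

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Busy-poll for up to poll_ns before blocking on the eventfd. Finding
 * work while polling skips the whole notification path (eventfd write,
 * poll() wakeup, rescheduling), which is where much of the low queue
 * depth latency goes. */
static void wait_for_work(int efd, int64_t poll_ns)
{
    int64_t deadline = now_ns() + poll_ns;
    uint64_t val;

    while (now_ns() < deadline) {
        if (atomic_load_explicit(&work_ready, memory_order_acquire)) {
            return; /* found work without ever going to sleep */
        }
    }
    (void)read(efd, &val, sizeof(val)); /* no luck, sleep until notified */
}

int main(void)
{
    int efd = eventfd(0, 0);
    uint64_t one = 1;

    /* A real producer would be another thread or the guest. */
    atomic_store_explicit(&work_ready, true, memory_order_release);
    (void)write(efd, &one, sizeof(one));

    wait_for_work(efd, 50 * 1000); /* poll for up to 50us */
    close(efd);
    printf("done\n");
    return 0;
}
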
> 
> Anyway, I like these numbers for FUA in scsi-disk. It makes writeback
> cache modes almost catch up to writethrough with O_SYNC workloads. We
> should definitely get this merged and do the same for virtio-blk.

Thanks for sharing! Nice IOPS improvements across the board.

Stefan
