qemu-devel.nongnu.org archive mirror
From: Sagi Grimberg <sagi@grimberg.me>
To: Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>
Cc: Qemu Developers <qemu-devel@nongnu.org>
Subject: virtio-blk using a single iothread
Date: Thu, 8 Jun 2023 10:40:57 +0300	[thread overview]
Message-ID: <c206fa1d-077d-ae9b-476f-f43eec36a187@grimberg.me> (raw)

Hey Stefan, Paolo,

I just had a report from a user experiencing lower virtio-blk
performance than he expected. This user is running virtio-blk on top of
an nvme-tcp device. The guest has 12 CPU cores.

The guest read/write throughput is capped at around 30% of what the
host can achieve (~800MB/s from the guest vs. ~2800MB/s from the host,
over a 25Gb/s NIC). The workload running on the guest is a
multi-threaded fio workload.
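For reference, the workload is along these lines (a sketch only; the
exact job parameters in the report differ, and the device path, block
size and queue depth here are illustrative):

```shell
# multi-threaded random-read job against the virtio-blk device in the guest
fio --name=vblk-test --filename=/dev/vda \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=64k \
    --iodepth=32 --numjobs=12 \
    --runtime=60 --time_based --group_reporting
```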

What we observe is that virtio-blk uses a single disk-wide iothread to
process all of the vqs. nvme-tcp specifically (like other TCP-based
protocols) is negatively impacted by the lack of thread concurrency
that could distribute I/O requests across different TCP connections.
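For context, the configuration in question is the standard
single-iothread setup, roughly as follows (device/file names and the
queue count are illustrative):

```shell
qemu-system-x86_64 ... \
    -object iothread,id=iothread0 \
    -blockdev driver=host_device,filename=/dev/nvme0n1,node-name=disk0,cache.direct=on \
    -device virtio-blk-pci,drive=disk0,iothread=iothread0,num-queues=12
```

Even with num-queues matching the guest CPU count, all 12 vqs are still
serviced by the single iothread0 context.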

We also attempted to move the iothread to a dedicated core, however
that did not yield any meaningful performance improvement. The reason
appears to be less about CPU utilization on the iothread core and more
about serialization over a single TCP connection.
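(For completeness, the pinning was done along these lines; the domain
name, iothread id, QMP socket path and core numbers below are
illustrative:)

```shell
# with libvirt: pin the device's iothread to a dedicated host core
virsh iothreadpin guest1 1 5

# with bare QEMU: look up the iothread's TID via QMP, then pin it
echo '{"execute":"qmp_capabilities"}
{"execute":"query-iothreads"}' | nc -U /tmp/qmp.sock
taskset -cp 5 <thread-id>
```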

Moving to io=threads does increase the throughput, however at a
significant cost in latency.
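(i.e. switching the drive's aio backend to the host thread pool,
roughly as follows; filenames are illustrative:)

```shell
    -blockdev driver=host_device,filename=/dev/nvme0n1,node-name=disk0,aio=threads
```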

So the user finds themselves with spare host CPUs and TCP connections
that could easily be used to reach maximum throughput, but without the
ability to leverage them. True, other guests will use different
threads/contexts, however the goal here is to allow full performance
from a single device.

I've seen several discussions and attempts in the past to let a
virtio-blk device leverage multiple iothreads, but the discussions
paused around 2 years ago. So I wanted to ask: are there any plans, or
anything in the works, to address this limitation?

I've seen that the spdk folks are heading in this direction with their
vhost-blk implementation:
https://review.spdk.io/gerrit/c/spdk/spdk/+/16068

Thanks,



Thread overview: 6+ messages
2023-06-08  7:40 Sagi Grimberg [this message]
2023-06-08 16:08 ` virtio-blk using a single iothread Stefan Hajnoczi
2023-06-11 12:27   ` Sagi Grimberg
2023-06-21 12:23     ` Stefan Hajnoczi
2023-07-27 15:11     ` Stefan Hajnoczi
2023-07-31 15:51     ` Stefan Hajnoczi
