From: Stefan Hajnoczi <stefanha@redhat.com>
To: virtio-fs@redhat.com, qemu-devel@nongnu.org
Cc: Liu Bo <bo.liu@linux.alibaba.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 0/4] virtiofsd: multithreading preparation part 3
Date: Wed, 7 Aug 2019 19:03:55 +0100
Message-ID: <20190807180355.GA22758@stefanha-x1.localdomain>
In-Reply-To: <20190801165409.20121-1-stefanha@redhat.com>
On Thu, Aug 01, 2019 at 05:54:05PM +0100, Stefan Hajnoczi wrote:
> Performance
> -----------
> Please try these patches out and share your results.
Here are the performance numbers:
Threadpool | iodepth | iodepth
size       |       1 |      64
-----------+---------+--------
None       |    4451 |    4876
1          |    4360 |    4858
64         |    4359 |  33,266
A graph is available here:
https://vmsplice.net/~stefan/virtiofsd-threadpool-performance.png
Summary:
* iodepth=64 performance is increased by 6.8 times (33,266 vs 4,876).
* iodepth=1 performance degrades by about 2% (4,359-4,360 vs 4,451).
* DAX is bottlenecked by QEMU's single-threaded
  VHOST_USER_SLAVE_FS_MAP/UNMAP handler.
Threadpool size "none" is virtiofsd commit 813a824b707 ("virtiofsd: use
fuse_lowlevel_is_virtio() in fuse_session_destroy()") without any of the
multithreading preparation patches. I benchmarked this to check whether
the patches introduce a regression for iodepth=1. They do, but it's
only around 2%.
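
For a concrete picture of what the "Threadpool size" column changes:
with no pool every request is handled inline on the virtqueue thread,
while with a pool requests are handed off to worker threads. The
following is only a rough illustration of that shape using GLib's
GThreadPool, not code from this series; process_one_request() and the
request numbering are placeholders:

/*
 * Illustrative sketch only -- not the actual virtiofsd patch code.
 * It shows the shape of the comparison in the table above: requests
 * handled inline on a single thread ("None") vs. pushed to a thread
 * pool of a given size.  process_one_request() stands in for the real
 * FUSE request handling.
 *
 * Build (assumed): gcc sketch.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>

static void process_one_request(gpointer req)
{
    /* Placeholder: parse the FUSE request, do the I/O, send the reply. */
    g_usleep(1000);
    g_print("handled request %d\n", GPOINTER_TO_INT(req));
}

static void worker_fn(gpointer data, gpointer user_data)
{
    process_one_request(data);
}

/* thread_pool_size == 0 corresponds to the "None" row in the table. */
static void handle_requests(int nr_requests, int thread_pool_size)
{
    GThreadPool *pool = NULL;

    if (thread_pool_size > 0) {
        pool = g_thread_pool_new(worker_fn, NULL, thread_pool_size,
                                 FALSE, NULL);
    }

    for (int i = 1; i <= nr_requests; i++) {
        if (pool) {
            g_thread_pool_push(pool, GINT_TO_POINTER(i), NULL); /* worker */
        } else {
            process_one_request(GINT_TO_POINTER(i));            /* inline */
        }
    }

    if (pool) {
        g_thread_pool_free(pool, FALSE, TRUE); /* wait for workers to drain */
    }
}

int main(void)
{
    handle_requests(64, 0);   /* "None": everything on one thread */
    handle_requests(64, 64);  /* 64 worker threads */
    return 0;
}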
I also ran with DAX but found there was not much difference between
iodepth=1 and iodepth=64. This might be because the host mmap(2)
syscall becomes the bottleneck and a serialization point. QEMU only
processes one VHOST_USER_SLAVE_FS_MAP/UNMAP at a time. If we want to
accelerate DAX it may be necessary to parallelize mmap, assuming the
host kernel can do them in parallel on a single file. This performance
optimization is future work and not directly related to this patch
series.
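
As an aside, whether the host kernel really can service mmap(2) on a
single file from several threads without serializing them could be
checked with a small standalone microbenchmark along these lines. This
is just a sketch of that experiment, not something from this series;
the file path, mapping size, iteration and thread counts are arbitrary:

/*
 * Sketch of a host-side experiment (not part of this series): time a
 * burst of mmap(2)/munmap(2) calls against one file from 1 thread and
 * from several threads.  If the multi-threaded run takes roughly N
 * times longer, the kernel is serializing the maps; if it takes about
 * the same time, they run in parallel.
 *
 * Build (assumed): gcc -O2 -pthread mmap-bench.c -o mmap-bench
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define NR_THREADS 4
#define NR_ITERS   10000
#define MAP_LEN    (2UL * 1024 * 1024)

static int fd;

static void *map_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NR_ITERS; i++) {
        void *p = mmap(NULL, MAP_LEN, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }
        munmap(p, MAP_LEN);
    }
    return NULL;
}

static double run(int nr_threads)
{
    pthread_t tids[NR_THREADS];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < nr_threads; i++) {
        pthread_create(&tids[i], NULL, map_worker, NULL);
    }
    for (int i = 0; i < nr_threads; i++) {
        pthread_join(tids[i], NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-to-mmap>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    printf("1 thread:  %.2f s (%d maps)\n", run(1), NR_ITERS);
    printf("%d threads: %.2f s (%d maps each)\n",
           NR_THREADS, run(NR_THREADS), NR_ITERS);

    close(fd);
    return 0;
}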
The following fio job was run with cache=none and no DAX:
[global]
runtime=60
ramp_time=30
filename=/var/tmp/fio.dat
direct=1
rw=randread
bs=4k
size=4G
ioengine=libaio
iodepth=1
[read]
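
(To read the job file: the empty [read] section simply inherits all of
the [global] parameters; the file shows the iodepth=1 case, with
iodepth set to 64 for the second column of the table above.)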
Guest configuration:
1 vCPU
4 GB RAM
Linux 5.1 (vivek-aug-06-2019)
Host configuration:
Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz (2 cores x 2 threads)
8 GB RAM
Linux 5.1.20-300.fc30.x86_64
XFS + dm-thin + dm-crypt
Toshiba THNSFJ256GDNU (256 GB SATA SSD)
Stefan