qemu-devel.nongnu.org archive mirror
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: virtio-fs@redhat.com,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [Virtio-fs] [PATCH 0/4] virtiofsd: multithreading preparation part 3
Date: Fri, 9 Aug 2019 09:23:56 +0100	[thread overview]
Message-ID: <20190809082356.GC25286@stefanha-x1.localdomain> (raw)
In-Reply-To: <20190808125320.GB3147@redhat.com>


On Thu, Aug 08, 2019 at 08:53:20AM -0400, Vivek Goyal wrote:
> On Thu, Aug 08, 2019 at 10:53:16AM +0100, Dr. David Alan Gilbert wrote:
> > * Stefan Hajnoczi (stefanha@redhat.com) wrote:
> > > On Wed, Aug 07, 2019 at 04:57:15PM -0400, Vivek Goyal wrote:
> > > > The kernel also serializes MAP/UNMAP on one inode. So you will need to
> > > > run multiple jobs operating on different inodes to see parallel MAP/UNMAP
> > > > (at least from the kernel's point of view).
> > > 
> > > Okay, there is still room to experiment with how MAP and UNMAP are
> > > handled by virtiofsd and QEMU even if the host kernel ultimately becomes
> > > the bottleneck.
> > > 
> > > One possible optimization is to eliminate REMOVEMAPPING requests when
> > > the guest driver knows a SETUPMAPPING will follow immediately.  I see
> > > the following request pattern in a fio randread iodepth=64 job:
> > > 
> > >   unique: 995348, opcode: SETUPMAPPING (48), nodeid: 135, insize: 80, pid: 1351
> > >   lo_setupmapping(ino=135, fi=0x(nil), foffset=3860856832, len=2097152, moffset=859832320, flags=0)
> > >      unique: 995348, success, outsize: 16
> > >   unique: 995350, opcode: REMOVEMAPPING (49), nodeid: 135, insize: 60, pid: 12
> > >      unique: 995350, success, outsize: 16
> > >   unique: 995352, opcode: SETUPMAPPING (48), nodeid: 135, insize: 80, pid: 1351
> > >   lo_setupmapping(ino=135, fi=0x(nil), foffset=16777216, len=2097152, moffset=861929472, flags=0)
> > >      unique: 995352, success, outsize: 16
> > >   unique: 995354, opcode: REMOVEMAPPING (49), nodeid: 135, insize: 60, pid: 12
> > >      unique: 995354, success, outsize: 16
> > >   virtio_send_msg: elem 9: with 1 in desc of length 16
> > >   unique: 995356, opcode: SETUPMAPPING (48), nodeid: 135, insize: 80, pid: 1351
> > >   lo_setupmapping(ino=135, fi=0x(nil), foffset=383778816, len=2097152, moffset=864026624, flags=0)
> > >      unique: 995356, success, outsize: 16
> > >   unique: 995358, opcode: REMOVEMAPPING (49), nodeid: 135, insize: 60, pid: 12
> > > 
> > > The REMOVEMAPPING requests are unnecessary since we can map over the top
> > > of the old mapping instead of taking the extra step of removing it
> > > first.
> > 
> > Yep, those should go - I think Vivek likes to keep them for testing
> > since they make things fail more completely if there's a screwup.
> 
> I like to keep them because otherwise they keep resources busy on the
> host. If the DAX range is reused immediately, then this optimization
> makes more sense. I will keep this in mind.

Skipping all unmaps has drawbacks, as you've said.  I'm just thinking
about the case where a mapping is replaced with a new one.

> > 
> > > Some more questions to consider for DAX performance optimization:
> > > 
> > > 1. Is FUSE_READ/FUSE_WRITE more efficient than DAX for some I/O patterns?
> > 
> > Probably for cases where the data is only accessed once, and you can't
> > preemptively map.
> > Another variant on (1) is whether we could do read/writes while the mmap
> > is happening to absorb the latency.
> 
> For small random I/O, DAX might not be very effective. The overhead of
> setting up a mapping and tearing it down is significant.

Plus there is still an EPT violation and the host page cache needs to be
filled if we haven't prefetched it.  So I imagine FUSE_READ/FUSE_WRITE
will be faster than DAX here.  DAX will be better for repeated,
long-lived accesses.

Stefan



Thread overview: 30+ messages
2019-08-01 16:54 [Qemu-devel] [PATCH 0/4] virtiofsd: multithreading preparation part 3 Stefan Hajnoczi
2019-08-01 16:54 ` [Qemu-devel] [PATCH 1/4] virtiofsd: process requests in a thread pool Stefan Hajnoczi
2019-08-05 12:02   ` Dr. David Alan Gilbert
2019-08-07  9:35     ` Stefan Hajnoczi
2019-08-01 16:54 ` [Qemu-devel] [PATCH 2/4] virtiofsd: prevent FUSE_INIT/FUSE_DESTROY races Stefan Hajnoczi
2019-08-05 12:26   ` Dr. David Alan Gilbert
2019-08-01 16:54 ` [Qemu-devel] [PATCH 3/4] virtiofsd: fix lo_destroy() resource leaks Stefan Hajnoczi
2019-08-05 15:17   ` Dr. David Alan Gilbert
2019-08-05 18:57     ` Dr. David Alan Gilbert
2019-08-06 18:58       ` Dr. David Alan Gilbert
2019-08-07  9:41       ` Stefan Hajnoczi
2019-08-01 16:54 ` [Qemu-devel] [PATCH 4/4] virtiofsd: add --thread-pool-size=NUM option Stefan Hajnoczi
2019-08-05  2:52 ` [Qemu-devel] [Virtio-fs] [PATCH 0/4] virtiofsd: multithreading preparation part 3 piaojun
2019-08-05  8:01   ` Stefan Hajnoczi
2019-08-05  9:40     ` piaojun
2019-08-07 18:03 ` [Qemu-devel] " Stefan Hajnoczi
2019-08-07 20:57   ` [Qemu-devel] [Virtio-fs] " Vivek Goyal
2019-08-08  9:02     ` Stefan Hajnoczi
2019-08-08  9:53       ` Dr. David Alan Gilbert
2019-08-08 12:53         ` Vivek Goyal
2019-08-09  8:23           ` Stefan Hajnoczi [this message]
2019-08-10 21:35           ` Liu Bo
2019-08-09  8:21         ` Stefan Hajnoczi
2019-08-10 21:34           ` Liu Bo
2019-08-11  2:26           ` piaojun
2019-08-12 10:05             ` Stefan Hajnoczi
2019-08-12 11:58               ` piaojun
2019-08-12 12:51                 ` Dr. David Alan Gilbert
2019-08-08  8:10   ` piaojun
2019-08-08  9:53     ` Stefan Hajnoczi
