From: Peter Xu <peterx@redhat.com>
To: James Houghton <jthoughton@google.com>
Cc: David Matlack <dmatlack@google.com>,
	Axel Rasmussen <axelrasmussen@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	kvm list <kvm@vger.kernel.org>,
	Sean Christopherson <seanjc@google.com>,
	Oliver Upton <oupton@google.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Frank van der Linden <fvdl@google.com>
Subject: Re: RFC: A KVM-specific alternative to UserfaultFD
Date: Tue, 7 Nov 2023 12:24:29 -0500
Message-ID: <ZUpyzWOuhFDTXiAW@x1n>
In-Reply-To: <CADrL8HUHO12Bxrx94_VoS8AsN5uEO1qYM2SCF7Tgw-=vsRUwBA@mail.gmail.com>

On Tue, Nov 07, 2023 at 08:11:09AM -0800, James Houghton wrote:
> This extra ~8 bytes per page overhead is real, and it is the
> theoretical maximum additional overhead that userfaultfd would require
> over a KVM-based demand paging alternative when we are using
> hugepages. Consider the case where we are using THPs and have just
> finished post-copy, and we haven't done any collapsing yet:
> 
> For userfaultfd: because we demand-fetched at 4K, we have UFFDIO_COPY'd
> or UFFDIO_CONTINUE'd at 4K, so the userspace page tables are entirely
> shattered. KVM has no choice but to maintain an entirely shattered
> second-stage page table as well.
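
For concreteness, the 4K demand-fetch path described above amounts to one
UFFDIO_COPY per fault.  A minimal sketch (uffd setup via UFFDIO_API and
UFFDIO_REGISTER plus error handling elided; resolve_fault_4k and its
arguments are illustrative, not from James's mail):

#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <stdint.h>

/* Install one 4 KiB page at the faulting address.  Each such copy
 * populates a single PTE, which is why the userspace (and hence the
 * second-stage) mapping ends up fully shattered after postcopy. */
static int resolve_fault_4k(int uffd, uint64_t fault_addr, void *src_page)
{
	struct uffdio_copy copy = {
		.dst  = fault_addr & ~0xfffULL,	/* 4 KiB-aligned dest */
		.src  = (uint64_t)(uintptr_t)src_page,
		.len  = 4096,
		.mode = 0,
	};
	return ioctl(uffd, UFFDIO_COPY, &copy);
}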
> 
> For KVM demand paging: the userspace page tables can remain entirely
> populated, so we get PMD mappings here. KVM, though, uses 4K SPTEs
> because we have only just finished post-copy and haven't started
> collapsing yet.
> 
> So both systems end up with a shattered second-stage page table, but
> userfaultfd has a shattered userspace page table as well (+8 bytes/4K
> if using THP, +another 8 bytes/2M if using HugeTLB-1G, etc.) and that
> is where the extra overhead comes from.
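
To spell out that arithmetic: a fully shattered PTE level costs one 4 KiB
page-table page (512 x 8-byte entries) per 2 MiB mapped, and shattering
1 GiB pages additionally costs one 8-byte PMD entry per 2 MiB:

  8 bytes / 4 KiB  =  8/4096     ~= 0.195%   (PTE level)
  8 bytes / 2 MiB  =  8/2097152  ~= 0.0004%  (PMD level, HugeTLB-1G case)

so the worst case is dominated by the PTE level, roughly 0.2% of guest
memory.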
> 
> The second mapping of guest memory that we use today (through which we
> install memory), given that we are using hugepages, will use PMDs and
> PUDs, so the overhead is minimal.
> 
> Hope that clears things up!

Ah I see, thanks James.  Though, is this a real concern in production use,
considering the worst case is 0.2% overhead (all THP backed), and it only
exists during postcopy, and only on the destination host?

In any case, I agree that's still a valid point, compared to the constant
1/32k consumption of a bitmap.
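
Spelling out that comparison: a bitmap costs

  1 bit / 4 KiB = 1/(4096 * 8) = 1/32768 ~= 0.003%

of guest memory, versus the ~0.2% (8 bytes / 4 KiB) worst case for the
shattered page tables.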

Thanks,

-- 
Peter Xu


Thread overview: 34+ messages
2023-11-06 18:25 RFC: A KVM-specific alternative to UserfaultFD David Matlack
2023-11-06 20:23 ` Peter Xu
2023-11-06 22:24   ` Axel Rasmussen
2023-11-06 23:03     ` Peter Xu
2023-11-06 23:22       ` David Matlack
2023-11-07 14:21         ` Peter Xu
2023-11-07 16:11           ` James Houghton
2023-11-07 17:24             ` Peter Xu [this message]
2023-11-07 19:08               ` James Houghton
2023-11-07 16:25   ` Paolo Bonzini
2023-11-07 20:04     ` David Matlack
2023-11-07 21:10       ` Oliver Upton
2023-11-07 21:34         ` David Matlack
2023-11-08  1:27           ` Oliver Upton
2023-11-08 16:56             ` David Matlack
2023-11-08 17:34               ` Peter Xu
2023-11-08 20:10                 ` Sean Christopherson
2023-11-08 20:36                   ` Peter Xu
2023-11-08 20:47                   ` Axel Rasmussen
2023-11-08 21:05                     ` David Matlack
2023-11-08 20:49                 ` David Matlack
2023-11-08 20:33               ` Paolo Bonzini
2023-11-08 20:43                 ` David Matlack
2023-11-07 22:29     ` Peter Xu
2023-11-09 16:41       ` David Matlack
2023-11-09 17:58         ` Sean Christopherson
2023-11-09 18:33           ` David Matlack
2023-11-09 22:44             ` David Matlack
2023-11-09 23:54               ` Sean Christopherson
2023-11-09 19:20           ` Peter Xu
2023-11-11 16:23             ` David Matlack
2023-11-11 17:30               ` Peter Xu
2023-11-13 16:43                 ` David Matlack
2023-11-20 18:32                   ` James Houghton
