public inbox for kvm@vger.kernel.org
From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "seanjc@google.com" <seanjc@google.com>
Cc: "Huang, Kai" <kai.huang@intel.com>,
	"federico.parola@polito.it" <federico.parola@polito.it>,
	"isaku.yamahata@gmail.com" <isaku.yamahata@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"dmatlack@google.com" <dmatlack@google.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"Yamahata, Isaku" <isaku.yamahata@intel.com>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>,
	"michael.roth@amd.com" <michael.roth@amd.com>
Subject: Re: [PATCH v2 07/10] KVM: x86: Always populate L1 GPA for KVM_MAP_MEMORY
Date: Mon, 15 Apr 2024 21:36:11 +0000	[thread overview]
Message-ID: <8959c330e47aa78b97bdca6e8beae11697c15908.camel@intel.com> (raw)
In-Reply-To: <Zh2ZTt4tXXg0f0d9@google.com>

On Mon, 2024-04-15 at 14:17 -0700, Sean Christopherson wrote:
> > But doesn't the fault handler need the vCPU state?
> 
> Ignoring guest MTRRs, which will hopefully soon be a non-issue, no.  There are
> only six possible roots if TDP is enabled:
> 
>       1. 4-level !SMM !guest_mode
>       2. 4-level  SMM !guest_mode
>       3. 5-level !SMM !guest_mode
>       4. 5-level  SMM !guest_mode
>       5. 4-level !SMM guest_mode
>       6. 5-level !SMM guest_mode
> 
> 4-level vs. 5-level is a guest MAXPHYADDR thing, and swapping the MMU
> eliminates the SMM and guest_mode issues.  If there is per-vCPU state that
> makes its way into the TDP page tables, then we have problems, because it
> means that there is per-vCPU state in per-VM structures that isn't accounted
> for.
> 
> There are a few edge cases where KVM treads carefully, e.g. if the fault is to
> the vCPU's APIC-access page, but KVM manually handles those to avoid consuming
> per-vCPU state.
> 
> That said, I think this option is effectively 1b, because dropping the SMM vs.
> guest_mode state has the same uAPI problems as forcibly swapping the MMU, it's
> just a different way of doing so.
> 
> The first question to answer is, do we want to return an error or "silently"
> install mappings for !SMM, !guest_mode.  And so this option becomes relevant
> only _if_ we want to unconditionally install mappings for the "base" mode.

Ah, I thought there was some logic around CR0.CD.

> 
> > > - Return error on guest mode or SMM mode:  Without this patch.
> > >   Pros: No additional patch.
> > >   Cons: Difficult to use.
> > 
> > Hmm... For the non-TDX use cases this is just an optimization, right? For
> > TDX there shouldn't be an issue. If so, maybe this last one is not so
> > horrible.
> 
> And the fact that there are so many variables to control (MAXPHYADDR, SMM,
> and guest_mode) basically invalidates the argument that returning an error
> makes the ioctl() hard to use.  I can imagine it might be hard to squeeze
> this ioctl() into QEMU's existing code, but I don't buy that the ioctl()
> itself is hard to use.
> 
> Literally the only thing userspace needs to do is set CPUID to implicitly
> select between 4-level and 5-level paging.  If userspace wants to pre-map
> memory during live migration, or when jump-starting the guest with
> pre-defined state, simply pre-map memory before stuffing guest state.  In and
> of itself, that doesn't seem difficult, e.g. at a quick glance, QEMU could
> add a hook somewhere in kvm_vcpu_thread_fn() without too much trouble (though
> that comes with a huge disclaimer that I only know enough about how QEMU
> manages vCPUs to be dangerous).
> 
> I would describe the overall cons for this patch versus returning an error
> differently.  Switching MMU state puts the complexity in the kernel.
> Returning an error punts any complexity to userspace.  Specifically, anything
> that KVM can do regarding vCPU state to get the right MMU, userspace can do
> too.
>
> Add on that silently doing things that effectively ignore guest state usually
> ends badly, and I don't see a good argument for this patch (or any variant
> thereof).

Great.


Thread overview: 42+ messages
2024-04-10 22:07 [PATCH v2 00/10] KVM: Guest Memory Pre-Population API isaku.yamahata
2024-04-10 22:07 ` [PATCH v2 01/10] KVM: Document KVM_MAP_MEMORY ioctl isaku.yamahata
2024-04-15 23:27   ` Edgecombe, Rick P
2024-04-15 23:47     ` Isaku Yamahata
2024-04-17 11:56     ` Paolo Bonzini
2024-04-10 22:07 ` [PATCH v2 02/10] KVM: Add KVM_MAP_MEMORY vcpu ioctl to pre-populate guest memory isaku.yamahata
2024-04-16 14:20   ` Edgecombe, Rick P
2024-04-10 22:07 ` [PATCH v2 03/10] KVM: x86/mmu: Extract __kvm_mmu_do_page_fault() isaku.yamahata
2024-04-16  8:22   ` Chao Gao
2024-04-16 23:43     ` Isaku Yamahata
2024-04-16 14:36   ` Edgecombe, Rick P
2024-04-16 23:52     ` Isaku Yamahata
2024-04-17 15:41       ` Paolo Bonzini
2024-04-10 22:07 ` [PATCH v2 04/10] KVM: x86/mmu: Make __kvm_mmu_do_page_fault() return mapped level isaku.yamahata
2024-04-16 14:40   ` Edgecombe, Rick P
2024-04-16 23:59     ` Isaku Yamahata
2024-04-10 22:07 ` [PATCH v2 05/10] KVM: x86/mmu: Introduce kvm_tdp_map_page() to populate guest memory isaku.yamahata
2024-04-16 14:46   ` Edgecombe, Rick P
2024-04-17 18:39     ` Isaku Yamahata
2024-04-17  7:04   ` Chao Gao
2024-04-17 18:44     ` Isaku Yamahata
2024-04-10 22:07 ` [PATCH v2 06/10] KVM: x86: Implement kvm_arch_vcpu_map_memory() isaku.yamahata
2024-04-16 15:12   ` Edgecombe, Rick P
2024-04-17  7:20   ` Chao Gao
2024-04-17 12:18   ` Paolo Bonzini
2024-04-10 22:07 ` [PATCH v2 07/10] KVM: x86: Always populate L1 GPA for KVM_MAP_MEMORY isaku.yamahata
2024-04-15 19:12   ` Edgecombe, Rick P
2024-04-15 21:17     ` Sean Christopherson
2024-04-15 21:36       ` Edgecombe, Rick P [this message]
2024-04-15 22:59         ` Sean Christopherson
2024-04-16  1:49       ` Isaku Yamahata
2024-04-16 14:22         ` Sean Christopherson
2024-04-16 21:41       ` Paolo Bonzini
2024-04-16 23:00         ` Sean Christopherson
2024-04-17 10:28           ` Paolo Bonzini
2024-04-15 19:37   ` Edgecombe, Rick P
2024-04-16 17:11   ` Edgecombe, Rick P
2024-04-10 22:07 ` [PATCH v2 08/10] KVM: x86: Add a hook in kvm_arch_vcpu_map_memory() isaku.yamahata
2024-04-16 14:57   ` Edgecombe, Rick P
2024-04-17 12:26   ` Paolo Bonzini
2024-04-10 22:07 ` [PATCH v2 09/10] KVM: SVM: Implement pre_mmu_map_page() to refuse KVM_MAP_MEMORY isaku.yamahata
2024-04-10 22:07 ` [PATCH v2 10/10] KVM: selftests: x86: Add test for KVM_MAP_MEMORY isaku.yamahata
