From: Paolo Bonzini <pbonzini@redhat.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Maxim Levitsky <mlevitsk@redhat.com>,
Ben Gardon <bgardon@google.com>,
David Matlack <dmatlack@google.com>
Subject: Re: [PATCH] KVM: x86/mmu: Do not create SPTEs for GFNs that exceed host.MAXPHYADDR
Date: Fri, 29 Apr 2022 16:50:02 +0200
Message-ID: <e5864cb4-cce8-bd32-04b0-ecb60c058d0b@redhat.com>
In-Reply-To: <Ymv5TR76RNvFBQhz@google.com>
On 4/29/22 16:42, Sean Christopherson wrote:
> On Fri, Apr 29, 2022, Paolo Bonzini wrote:
>> On 4/29/22 16:24, Sean Christopherson wrote:
>>> I don't love the divergent memslot behavior, but it's technically correct, so I
>>> can't really argue. Do we want to "officially" document the memslot behavior?
>>>
>>
>> I don't know what you mean by officially document,
>
> Something in kvm/api.rst under KVM_SET_USER_MEMORY_REGION.
I'm not sure the API documentation is the best place, because userspace
does not know whether shadow paging is in use (except perhaps indirectly
through other capabilities).
It could even be programmatic, such as returning 52 for
CPUID[0x80000008]. A nested KVM on L1 would not be able to use the
#PF(RSVD) trick to detect MMIO faults. That's not a big price to pay,
but I'm not sure it's a good idea in general...
Paolo
>
>> but at least I have relied on it to test KVM's MAXPHYADDR=52 cases before
>> such hardware existed. :)
>
> Ah, that's a very good reason to support this for shadow paging. Maybe throw
> something about testing in the changelog? Without considering the testing angle,
> it looks like KVM supports max=52 for !TDP just because it can, because practically
> speaking there's unlikely to be a use case for exposing that much memory to a
> guest when using shadow paging.
>