From: Jeremi Piotrowski <jpiotrowski@linux.microsoft.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Tianyu Lan <ltykernel@gmail.com>,
"Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: Re: "KVM: x86/mmu: Overhaul TDP MMU zapping and flushing" breaks SVM on Hyper-V
Date: Mon, 13 Feb 2023 19:05:12 +0100
Message-ID: <21b1ee26-dfd4-923d-72da-d8ded3dd819c@linux.microsoft.com>
In-Reply-To: <88a89319-a71e-fa90-0dbb-00cf8a549380@redhat.com>

On 13/02/2023 13:50, Paolo Bonzini wrote:
>
> On 2/13/23 13:44, Jeremi Piotrowski wrote:
>> Just built a kernel from that tree, and it displays the same behavior. The problem
>> is not that the addresses are wrong, but that the flushes are issued at the wrong
>> time now. At least for what "enlightened NPT TLB flush" requires.
>
> It is not clear to me why HvCallFlushGuestPhysicalAddressSpace or HvCallFlushGuestPhysicalAddressList would have stricter requirements than a "regular" TLB shootdown using INVEPT.
>
> Can you clarify what you mean by wrong time, preferably with some kind of sequence of events?
>
> That is, something like
>
> CPU 0 Modify EPT from ... to ...
> CPU 0 call_rcu() to free page table
> CPU 1 ... which is invalid because ...
> CPU 0 HvCallFlushGuestPhysicalAddressSpace
>
> Paolo

So I looked at the ftrace output (all kvm & kvmmu events plus the hyperv_nested_* events), and I see the following.

With tdp_mmu=0:
  kvm_exit
  sequence of kvm_mmu_prepare_zap_page
  hyperv_nested_flush_guest_mapping (always follows every sequence of kvm_mmu_prepare_zap_page)
  kvm_entry

With tdp_mmu=1 I see:
kvm_mmu_prepare_zap_page and kvm_tdp_mmu_spte_changed events from a kworker context, but
they are not followed by hyperv_nested_flush_guest_mapping. The only hyperv_nested_flush_guest_mapping
events I see happen from the qemu process context.

Also, the number of flush hypercalls is significantly lower: a 7-second run through OVMF with tdp_mmu=0 produces ~270 flush hypercalls, while in the traces with tdp_mmu=1 I now see at most 3.
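
In case anyone wants to reproduce the trace capture, roughly the following should do (assuming trace-cmd is available; I'm enabling the whole kvm, kvmmu and hyperv event subsystems rather than listing individual events):

  trace-cmd record -e kvm -e kvmmu -e hyperv
  # run the guest for a few seconds, then stop recording with Ctrl-C
  trace-cmd report | grep -c hyperv_nested_flush_guest_mapping
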

So this might be easier to diagnose than I thought: the HvCallFlushGuestPhysicalAddressSpace calls are missing now.
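
For context, my mental model of the enlightened flush path that the tdp_mmu=0 traces exercise is roughly this (a simplified sketch of what lives in arch/x86/kvm/kvm_onhyperv.c and arch/x86/hyperv/nested.c, not the verbatim kernel code -- the real hv_remote_flush_tlb also deals with vCPUs running on different roots):

/*
 * Hyper-V's "enlightened" flush replaces a remote TLB shootdown of the
 * nested page tables with a hypercall; KVM wires this up as the
 * tlb_remote_flush hook when running on Hyper-V.
 */
static int hv_remote_flush_tlb(struct kvm *kvm)
{
        /*
         * With a single NPT root shared by all vCPUs, one
         * HvCallFlushGuestPhysicalAddressSpace call covers the whole
         * guest physical address space.
         */
        return hyperv_flush_guest_mapping(kvm->arch.hv_root_tdp);
}

With tdp_mmu=1, the zaps done from the kworker apparently never reach this hook, which would line up with the missing hypercalls.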