From: Mingwei Zhang <mizhang@google.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Jim Mattson <jmattson@google.com>,
kvm@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>,
Like Xu <likexu@tencent.com>, Roman Kagan <rkagan@amazon.de>,
Kan Liang <kan.liang@intel.com>,
Dapeng1 Mi <dapeng1.mi@intel.com>
Subject: Re: [PATCH 1/2] KVM: x86: Synthesize at most one PMI per VM-exit
Date: Fri, 22 Sep 2023 21:02:24 +0000 [thread overview]
Message-ID: <ZQ4A4KaSyygKHDUI@google.com> (raw)
In-Reply-To: <CAL715WL8KN1fceDhKxCfeGjbctx=vz2pAbw607pFYP6bw9N0_w@mail.gmail.com>
On Fri, Sep 22, 2023, Mingwei Zhang wrote:
> On Fri, Sep 22, 2023 at 1:34 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Fri, Sep 22, 2023, Mingwei Zhang wrote:
> > > On Fri, Sep 22, 2023 at 12:21 PM Sean Christopherson <seanjc@google.com> wrote:
> > > >
> > > > On Fri, Sep 22, 2023, Jim Mattson wrote:
> > > > > On Fri, Sep 22, 2023 at 11:46 AM Sean Christopherson <seanjc@google.com> wrote:
> > > > > >
> > > > > > On Fri, Sep 01, 2023, Jim Mattson wrote:
> > > > > > > When the irq_work callback, kvm_pmi_trigger_fn(), is invoked during a
> > > > > > > VM-exit that also invokes __kvm_perf_overflow() as a result of
> > > > > > > instruction emulation, kvm_pmu_deliver_pmi() will be called twice
> > > > > > > before the next VM-entry.
> > > > > > >
> > > > > > > That shouldn't be a problem. The local APIC is supposed to
> > > > > > > automatically set the mask flag in LVTPC when it handles a PMI, so the
> > > > > > > second PMI should be inhibited. However, KVM's local APIC emulation
> > > > > > > fails to set the mask flag in LVTPC when it handles a PMI, so two PMIs
> > > > > > > are delivered via the local APIC. In the common case, where LVTPC is
> > > > > > > configured to deliver an NMI, the first NMI is vectored through the
> > > > > > > guest IDT, and the second one is held pending. When the NMI handler
> > > > > > > returns, the second NMI is vectored through the IDT. For Linux guests,
> > > > > > > this results in the "dazed and confused" spurious NMI message.
> > > > > > >
> > > > > > > Though the obvious fix is to set the mask flag in LVTPC when handling
> > > > > > > a PMI, KVM's logic around synthesizing a PMI is unnecessarily
> > > > > > > convoluted.
> > > > > >
> > > > > > To address Like's question about whether or not this is necessary, I think we should
> > > > > > rephrase this to explicitly state this is a bug irrespective of the whole LVTPC
> > > > > > masking thing.
> > > > > >
> > > > > > And I think it makes sense to swap the order of the two patches. The LVTPC masking
> > > > > > fix is a clearcut architectural violation. This is a bit more of a grey area,
> > > > > > though still blatantly buggy.
> > > > >
> > > > > The reason I ordered the patches as I did is that when this patch
> > > > > comes first, it actually fixes the problem that was introduced in
> > > > > commit 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring
> > > > > instructions"). If this patch comes second, it's less clear that it
> > > > > fixes a bug, since the other patch renders this one essentially moot.
> > > >
> > > > Yeah, but as Like pointed out, the way the changelog is worded just raises the
> > > > question of why this change is necessary.
> > > >
> > > > I think we should tag them both for stable. They're both bug fixes, regardless
> > > > of the ordering.
> > >
> > > Agree. Both patches are fixing the general potential buggy situation
> > > of multiple PMI injection on one vm entry: one software level defense
> > > (forcing the usage of KVM_REQ_PMI) and one hardware level defense
> > > (preventing PMI injection using mask).
> > >
> > > Although neither patch in this series fixes the root cause of this
> > > specific double PMI injection bug, I don't see a reason why we cannot
> > > add a "Fixes" tag to them, since the root cause could be fixed now
> > > and reintroduced later.
> > >
> > > I am currently working on it and testing my patch. Please give me some
> > > time, I think I could try sending out one version today. Once that is
> > > done, I will combine mine with the existing patch and send it out as a
> > > series.
> >
> > Me confused, what patch? And what does this patch have to do with Jim's series?
> > Unless I've missed something, Jim's patches are good to go with my nits addressed.
>
> Let me step back.
>
> We have the following problem when we run perf inside guest:
>
> [ 1437.487320] Uhhuh. NMI received for unknown reason 20 on CPU 3.
> [ 1437.487330] Dazed and confused, but trying to continue
>
> This means the guest receives more NMIs than its PMI handler can
> account for. So there are potentially two approaches to solve the
> problem: 1) fix the PMI injection issue so that only one PMI can be
> injected; 2) fix the code that causes the (incorrect) multiple PMI
> injections in the first place.
>
> I am working on the 2nd one. So, the property of the 2nd one is:
> without patches in 1) (Jim's patches), we could still avoid the above
> warning messages.
>
> Thanks.
> -Mingwei
This is my draft version. If you don't have full-width counter support, this
patch needs to be placed on top of this one:
https://lore.kernel.org/all/20230504120042.785651-1-rkagan@amazon.de/
My initial testing on both QEMU and our GCP testing environment shows no
"Uhhuh..." messages in the guest dmesg.
Please take a look...
From 47e629269d8b0ff65c242334f068300216cb7f91 Mon Sep 17 00:00:00 2001
From: Mingwei Zhang <mizhang@google.com>
Date: Fri, 22 Sep 2023 17:13:55 +0000
Subject: [PATCH] KVM: x86/pmu: Fix emulated counter increment due to
instruction emulation
Fix the emulated counter increment performed for instruction emulation.
While a counter is running, KVM's pmc->counter is only a stale snapshot
and does not reflect the counter's actual value, so it is inappropriate
to compare it against other counter values. The existing code compares
pmc->prev_counter against pmc->counter directly, but pmc->prev_counter
is itself a snapshot assigned from pmc->counter while the counter may
still be running. This comparison in reprogram_counter() therefore
triggers spurious calls to __kvm_perf_overflow(in_pmi=false) and
generates duplicate PMI injection requests.

Fix this by adding an emulated_counter field that accumulates emulated
events, and folding it into pmc->counter only after the perf event has
been paused.
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/pmu.c | 15 ++++++++-------
arch/x86/kvm/svm/pmu.c | 1 +
arch/x86/kvm/vmx/pmu_intel.c | 2 ++
4 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1a4def36d5bb..47bbfbc0aa35 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -494,6 +494,7 @@ struct kvm_pmc {
bool intr;
u64 counter;
u64 prev_counter;
+ u64 emulated_counter;
u64 eventsel;
struct perf_event *perf_event;
struct kvm_vcpu *vcpu;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index edb89b51b383..47acf3a2b077 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -240,12 +240,13 @@ static void pmc_pause_counter(struct kvm_pmc *pmc)
{
u64 counter = pmc->counter;
- if (!pmc->perf_event || pmc->is_paused)
- return;
-
/* update counter, reset event value to avoid redundant accumulation */
- counter += perf_event_pause(pmc->perf_event, true);
- pmc->counter = counter & pmc_bitmask(pmc);
+ if (pmc->perf_event && !pmc->is_paused)
+ counter += perf_event_pause(pmc->perf_event, true);
+
+ pmc->prev_counter = counter & pmc_bitmask(pmc);
+ pmc->counter = (counter + pmc->emulated_counter) & pmc_bitmask(pmc);
+ pmc->emulated_counter = 0;
pmc->is_paused = true;
}
@@ -452,6 +453,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
reprogram_complete:
clear_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->reprogram_pmi);
pmc->prev_counter = 0;
+ pmc->emulated_counter = 0;
}
void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
@@ -725,8 +727,7 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
{
- pmc->prev_counter = pmc->counter;
- pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
+ pmc->emulated_counter += 1;
kvm_pmu_request_counter_reprogram(pmc);
}
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index a25b91ff9aea..b88fab4ae1d7 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -243,6 +243,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
pmc_stop_counter(pmc);
pmc->counter = pmc->prev_counter = pmc->eventsel = 0;
+ pmc->emulated_counter = 0;
}
pmu->global_ctrl = pmu->global_status = 0;
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 626df5fdf542..d03c4ec7273d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -641,6 +641,7 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
pmc_stop_counter(pmc);
pmc->counter = pmc->prev_counter = pmc->eventsel = 0;
+ pmc->emulated_counter = 0;
}
for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
@@ -648,6 +649,7 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
pmc_stop_counter(pmc);
pmc->counter = pmc->prev_counter = 0;
+ pmc->emulated_counter = 0;
}
pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;
--
2.42.0.515.g380fc7ccd1-goog