From: Sean Christopherson <seanjc@google.com>
To: Maxim Levitsky <mlevitsk@redhat.com>
Cc: kvm@vger.kernel.org, Sandipan Das <sandipan.das@amd.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Jim Mattson <jmattson@google.com>,
Peter Zijlstra <peterz@infradead.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Borislav Petkov <bp@alien8.de>,
Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>,
Josh Poimboeuf <jpoimboe@kernel.org>,
Daniel Sneddon <daniel.sneddon@linux.intel.com>,
Jiaxi Chen <jiaxi.chen@linux.intel.com>,
Babu Moger <babu.moger@amd.com>,
linux-kernel@vger.kernel.org, Jing Liu <jing2.liu@intel.com>,
Wyes Karny <wyes.karny@amd.com>,
x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 07/11] KVM: x86: add a delayed hardware NMI injection interface
Date: Tue, 31 Jan 2023 22:28:38 +0000 [thread overview]
Message-ID: <Y9mWFlGdzoa8ZDW7@google.com> (raw)
In-Reply-To: <20221129193717.513824-8-mlevitsk@redhat.com>
On Tue, Nov 29, 2022, Maxim Levitsky wrote:
> @@ -5191,9 +5191,12 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>
>  	vcpu->arch.nmi_injected = events->nmi.injected;
>  	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
> -		vcpu->arch.nmi_pending = events->nmi.pending;
> +		atomic_add(events->nmi.pending, &vcpu->arch.nmi_queued);
> +
>  	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
>
> +	process_nmi(vcpu);
Argh, having two process_nmi() calls is ugly (not blaming your code, it's KVM's
ABI that's ugly). E.g. if we collapse this down, it becomes:
	process_nmi(vcpu);

	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
		<blah blah blah>
	}

	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);

	process_nmi(vcpu);
And the second mess is that V_NMI needs to be cleared.
The first process_nmi() effectively exists to (a) purge nmi_queued and (b) keep
nmi_pending if KVM_VCPUEVENT_VALID_NMI_PENDING is not set. I think we can just
replace that with a direct set of nmi_queued, i.e.
	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
		vcpu->arch.nmi_pending = 0;
		atomic_set(&vcpu->arch.nmi_queued, events->nmi.pending);
		process_nmi(vcpu);
	}
because if nmi_queued is non-zero in the !KVM_VCPUEVENT_VALID_NMI_PENDING case, then
there should already be a pending KVM_REQ_NMI. Alternatively, replace process_nmi()
with a KVM_REQ_NMI request (that probably has my vote?).
If that works, can you do that in a separate patch? Then this patch can tack on
a call to clear V_NMI.
>  	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR &&
>  	    lapic_in_kernel(vcpu))
>  		vcpu->arch.apic->sipi_vector = events->sipi_vector;
> @@ -10008,6 +10011,10 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
>  static void process_nmi(struct kvm_vcpu *vcpu)
>  {
>  	unsigned limit = 2;
> +	int nmi_to_queue = atomic_xchg(&vcpu->arch.nmi_queued, 0);
> +
> +	if (!nmi_to_queue)
> +		return;
>
>  	/*
>  	 * x86 is limited to one NMI running, and one NMI pending after it.
> @@ -10015,13 +10022,34 @@ static void process_nmi(struct kvm_vcpu *vcpu)
>  	 * Otherwise, allow two (and we'll inject the first one immediately).
>  	 */
>  	if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected)
> -		limit = 1;
> +		limit--;
> +
> +	/* Also if there is already a NMI hardware queued to be injected,
> +	 * decrease the limit again
> +	 */
> +	if (static_call(kvm_x86_get_hw_nmi_pending)(vcpu))
> +		limit--;
I don't think this is correct. If a vNMI is pending and NMIs are blocked, then
limit will end up '0' and KVM will fail to pend the additional NMI in software.
After much fiddling, and factoring in the above, how about this?
	unsigned limit = 2;

	/*
	 * x86 is limited to one NMI running, and one NMI pending after it.
	 * If an NMI is already in progress, limit further NMIs to just one.
	 * Otherwise, allow two (and we'll inject the first one immediately).
	 */
	if (vcpu->arch.nmi_injected) {
		/* vNMI counts as the "one pending NMI". */
		if (static_call(kvm_x86_is_vnmi_pending)(vcpu))
			limit = 0;
		else
			limit = 1;
	} else if (static_call(kvm_x86_get_nmi_mask)(vcpu) ||
		   static_call(kvm_x86_is_vnmi_pending)(vcpu)) {
		limit = 1;
	}

	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);

	if (vcpu->arch.nmi_pending &&
	    static_call(kvm_x86_set_vnmi_pending)(vcpu))
		vcpu->arch.nmi_pending--;

	if (vcpu->arch.nmi_pending)
		kvm_make_request(KVM_REQ_EVENT, vcpu);
With the KVM_REQ_EVENT change in a separate prep patch:
--
From: Sean Christopherson <seanjc@google.com>
Date: Tue, 31 Jan 2023 13:33:02 -0800
Subject: [PATCH] KVM: x86: Raise an event request when processing NMIs iff an
NMI is pending
Don't raise KVM_REQ_EVENT if no NMIs are pending at the end of
process_nmi(). Finishing process_nmi() without a pending NMI will become
much more likely when KVM gains support for AMD's vNMI, which allows
pending vNMIs in hardware, i.e. doesn't require explicit injection.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 508074e47bc0..030136b6ebbd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10134,7 +10134,9 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
 	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
-	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
+	if (vcpu->arch.nmi_pending)
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
base-commit: 916b54a7688b0b9a1c48c708b848e4348c3ae2ab
--