public inbox for kvm@vger.kernel.org
* [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support
@ 2024-11-27 22:55 Melody Wang
  2024-11-27 22:55 ` [PATCH v3 1/7] x86/sev: Define the #HV doorbell page structure Melody Wang
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Melody Wang @ 2024-11-27 22:55 UTC (permalink / raw)
  To: kvm, linux-kernel, x86
  Cc: Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta, Melody Wang

Hi all,

This is v3 of the Restricted Injection hypervisor support patches.

The previous version (v2) was submitted here:

https://lore.kernel.org/r/cover.1725945912.git.huibo.wang@amd.com

Since then, the following changes have been made:

1. Remove an unused field in struct hvdb_events.

2. Add comments explaining how sev_snp_blocked() uses no_further_signal and
   vector: the hypervisor presents only a single interrupt vector in the
   doorbell page at a time, and the next interrupt is presented only after
   the guest acknowledges the current one.

3. Add a warning in sev_snp_cancel_injection() when the vector is not HV_VECTOR.

4. Update map_hvdb() failure handling: when mapping fails, no interrupt is
   injected.

5. Remove the sev_snp_queue_exception() call from
   svm_update_soft_interrupt_rip(): soft interrupts are limited to BP_VECTOR
   and OF_VECTOR, for which it always returns false, so
   sev_snp_queue_exception() is always invoked from svm_inject_exception()
   instead.

6. Add the new #HV IPI feature in XAPIC and X2APIC modes; test each mode with
   three IPI types: broadcast, self-IPI, and all-but-self.


Changelog:
----------
v2

Hi all,

This is v2 of the restricted injection hypervisor support patches.
The previous version was submitted here:
https://lore.kernel.org/r/cover.1722989996.git.huibo.wang@amd.com

Since the previous submission, an issue reported by the kernel test robot has
been fixed.

All comments and review feedback are appreciated.

Thanks. 

v1

Operating systems may not handle unexpected interrupt or exception sequences.
A malicious hypervisor can inject arbitrary interrupt or exception sequences,
putting guest drivers or the guest OS kernel into an unexpected state, which
could lead to security issues.

To address this concern, SEV-SNP restricts the injection of interrupts and
exceptions to only those allowed by the guest. Restricted Injection disables
all hypervisor-based interrupt queuing and event injection for all vectors,
allowing only a single vector, #HV (28), which is reserved for SNP guest use
and is never generated by hardware. #HV may only be injected into VMSAs that
execute with Restricted Injection.

Guests operating with Restricted Injection are expected to communicate with the
hypervisor about events via a software-managed para-virtualization interface.
This interface can use #HV injection as a doorbell to inform the guest that
new events have occurred. This patch set implements the KVM side of Restricted
Injection, injecting directly into VMPL0.

Overview:

The GHCB 2.0 specification [1] defines the #HV doorbell page, and the #HV
doorbell page NAE event allows an SEV-SNP guest to register a doorbell page
for use with the hypervisor injection exception (#HV). When Restricted
Injection is active, only #HV exceptions can be injected into the guest, and
the hypervisor follows the GHCB #HV doorbell protocol to deliver the exception
or interrupt. Restricted Injection is enabled by setting the
SVM_SEV_FEAT_RESTRICTED_INJECTION bit in vmsa_features.

The patchset is based on kvm/next (commit 15e1c3d65975524c5c792fcd59f7d89f00402261).

Testing:

The patchset has been tested with an SEV-SNP guest, using OVMF and QEMU builds
that support restricted injection.

Four test sets:
1. ls -lr /
2. apt update
3. fio
4. perf

Thanks
Melody


[1] https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/specifications/56421.pdf

Melody Wang (7):
  x86/sev: Define the #HV doorbell page structure
  KVM: SVM: Add support for the SEV-SNP #HV doorbell page NAE event
  KVM: SVM: Inject #HV when restricted injection is active
  KVM: SVM: Inject NMIs when restricted injection is active
  KVM: SVM: Inject MCEs when restricted injection is active
  KVM: SVM: Add support for the SEV-SNP #HV IPI NAE event
  KVM: SVM: Enable restricted injection for an SEV-SNP guest

 arch/x86/include/asm/cpufeatures.h |   1 +
 arch/x86/include/asm/kvm-x86-ops.h |   1 +
 arch/x86/include/asm/kvm_host.h    |   1 +
 arch/x86/include/asm/sev-common.h  |   1 +
 arch/x86/include/asm/svm.h         |  33 +++
 arch/x86/include/uapi/asm/kvm.h    |   1 +
 arch/x86/include/uapi/asm/svm.h    |   6 +
 arch/x86/kvm/lapic.c               |  24 ++-
 arch/x86/kvm/lapic.h               |   2 +
 arch/x86/kvm/svm/sev.c             | 318 ++++++++++++++++++++++++++++-
 arch/x86/kvm/svm/svm.c             |  41 +++-
 arch/x86/kvm/svm/svm.h             |  26 ++-
 arch/x86/kvm/vmx/main.c            |   1 +
 arch/x86/kvm/vmx/vmx.c             |   5 +
 arch/x86/kvm/vmx/x86_ops.h         |   1 +
 arch/x86/kvm/x86.c                 |   7 +
 16 files changed, 463 insertions(+), 6 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v3 1/7] x86/sev: Define the #HV doorbell page structure
  2024-11-27 22:55 [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support Melody Wang
@ 2024-11-27 22:55 ` Melody Wang
  2024-11-27 22:55 ` [PATCH v3 2/7] KVM: SVM: Add support for the SEV-SNP #HV doorbell page NAE event Melody Wang
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Melody Wang @ 2024-11-27 22:55 UTC (permalink / raw)
  To: kvm, linux-kernel, x86
  Cc: Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta, Melody Wang

Restricted Injection is a feature that enforces additional interrupt and event
injection security protections for an SEV-SNP guest. It disables all
hypervisor-based interrupt queuing and event injection for all vectors except
a new exception vector, #HV (28), which is reserved for SNP guest use but is
never generated by hardware. #HV may only be injected into VMSAs that execute
with Restricted Injection.

Guests running with SNP Restricted Injection active limit the host to ringing
a doorbell via a #HV exception.

Define two fields in the #HV doorbell page: a pending events field and an
EOI assist field.

Create the structure definition for the #HV doorbell page as per the GHCB
specification.

Co-developed-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Melody Wang <huibo.wang@amd.com>
---
 arch/x86/include/asm/svm.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 2b59b9951c90..95cb9a62f477 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -239,6 +239,39 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_TSC_RATIO_MAX	0x000000ffffffffffULL
 #define SVM_TSC_RATIO_DEFAULT	0x0100000000ULL
 
+/*
+ * Hypervisor doorbell page:
+ *
+ * Used when restricted injection is enabled for a VM. One page in size that
+ * is shared between the guest and hypervisor to communicate exception and
+ * interrupt events.
+ */
+struct hvdb_events {
+	/* First 64 bytes of HV doorbell page defined in GHCB specification */
+	union {
+		struct {
+			/* Non-maskable event indicators */
+			u16 vector:		8,
+			    nmi:		1,
+			    mce:		1,
+			    reserved2:		5,
+			    no_further_signal:	1;
+		};
+
+		u16 pending_events;
+	};
+
+	u8 no_eoi_required;
+
+	u8 reserved3[61];
+};
+
+struct hvdb {
+	struct hvdb_events events;
+
+	/* Remainder of the page is for software use */
+	u8 reserved[PAGE_SIZE - sizeof(struct hvdb_events)];
+};
 
 /* AVIC */
 #define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK	(0xFFULL)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v3 2/7] KVM: SVM: Add support for the SEV-SNP #HV doorbell page NAE event
  2024-11-27 22:55 [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support Melody Wang
  2024-11-27 22:55 ` [PATCH v3 1/7] x86/sev: Define the #HV doorbell page structure Melody Wang
@ 2024-11-27 22:55 ` Melody Wang
  2024-11-27 22:55 ` [PATCH v3 3/7] KVM: SVM: Inject #HV when restricted injection is active Melody Wang
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Melody Wang @ 2024-11-27 22:55 UTC (permalink / raw)
  To: kvm, linux-kernel, x86
  Cc: Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta, Melody Wang

To support the SEV-SNP Restricted Injection feature, the SEV-SNP guest must
register a #HV doorbell page for use with the #HV exception.

The #HV doorbell page NAE event allows the guest to register a #HV doorbell
page. The NAE event consists of four actions: GET_PREFERRED, SET, QUERY, and
CLEAR. Implement the NAE event as per the GHCB specification.

Co-developed-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Melody Wang <huibo.wang@amd.com>
---
 arch/x86/include/uapi/asm/svm.h |  5 +++
 arch/x86/kvm/svm/sev.c          | 73 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.h          |  2 +
 3 files changed, 80 insertions(+)

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 1814b413fd57..7905c9be44d1 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -115,6 +115,11 @@
 #define SVM_VMGEXIT_AP_CREATE_ON_INIT		0
 #define SVM_VMGEXIT_AP_CREATE			1
 #define SVM_VMGEXIT_AP_DESTROY			2
+#define SVM_VMGEXIT_HVDB_PAGE                   0x80000014
+#define SVM_VMGEXIT_HVDB_GET_PREFERRED          0
+#define SVM_VMGEXIT_HVDB_SET                    1
+#define SVM_VMGEXIT_HVDB_QUERY                  2
+#define SVM_VMGEXIT_HVDB_CLEAR                  3
 #define SVM_VMGEXIT_SNP_RUN_VMPL		0x80000018
 #define SVM_VMGEXIT_HV_FEATURES			0x8000fffd
 #define SVM_VMGEXIT_TERM_REQUEST		0x8000fffe
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 72674b8825c4..7cd1c0652d15 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3413,6 +3413,10 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
 		    control->exit_info_1 == control->exit_info_2)
 			goto vmgexit_err;
 		break;
+	case SVM_VMGEXIT_HVDB_PAGE:
+		if (!sev_snp_guest(vcpu->kvm))
+			goto vmgexit_err;
+		break;
 	default:
 		reason = GHCB_ERR_INVALID_EVENT;
 		goto vmgexit_err;
@@ -4129,6 +4133,66 @@ static int snp_handle_ext_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t r
 	return 1; /* resume guest */
 }
 
+static int sev_snp_hv_doorbell_page(struct vcpu_svm *svm)
+{
+	struct kvm_vcpu *vcpu = &svm->vcpu;
+	struct kvm_host_map hvdb_map;
+	gpa_t hvdb_gpa;
+	u64 request;
+
+	if (!sev_snp_guest(vcpu->kvm))
+		return -EINVAL;
+
+	request = svm->vmcb->control.exit_info_1;
+	hvdb_gpa = svm->vmcb->control.exit_info_2;
+
+	switch (request) {
+	case SVM_VMGEXIT_HVDB_GET_PREFERRED:
+		ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, ~0ULL);
+		break;
+	case SVM_VMGEXIT_HVDB_SET:
+		svm->sev_es.hvdb_gpa = INVALID_PAGE;
+
+		if (!PAGE_ALIGNED(hvdb_gpa)) {
+			vcpu_unimpl(vcpu, "vmgexit: unaligned #HV doorbell page address [%#llx] from guest\n",
+				    hvdb_gpa);
+			return -EINVAL;
+		}
+
+		if (!page_address_valid(vcpu, hvdb_gpa)) {
+			vcpu_unimpl(vcpu, "vmgexit: invalid #HV doorbell page address [%#llx] from guest\n",
+				    hvdb_gpa);
+			return -EINVAL;
+		}
+
+		/* Map and unmap the GPA just to be sure the GPA is valid */
+		if (kvm_vcpu_map(vcpu, gpa_to_gfn(hvdb_gpa), &hvdb_map)) {
+			/* Unable to map #HV doorbell page from guest */
+			vcpu_unimpl(vcpu, "vmgexit: error mapping #HV doorbell page [%#llx] from guest\n",
+				    hvdb_gpa);
+			return -EINVAL;
+		}
+		kvm_vcpu_unmap(vcpu, &hvdb_map);
+
+		svm->sev_es.hvdb_gpa = hvdb_gpa;
+		fallthrough;
+	case SVM_VMGEXIT_HVDB_QUERY:
+		ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, svm->sev_es.hvdb_gpa);
+		break;
+	case SVM_VMGEXIT_HVDB_CLEAR:
+		svm->sev_es.hvdb_gpa = INVALID_PAGE;
+		break;
+	default:
+		svm->sev_es.hvdb_gpa = INVALID_PAGE;
+
+		vcpu_unimpl(vcpu, "vmgexit: invalid #HV doorbell page request [%#llx] from guest\n",
+			    request);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -4409,6 +4473,14 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
 	case SVM_VMGEXIT_EXT_GUEST_REQUEST:
 		ret = snp_handle_ext_guest_req(svm, control->exit_info_1, control->exit_info_2);
 		break;
+	case SVM_VMGEXIT_HVDB_PAGE:
+		if (sev_snp_hv_doorbell_page(svm)) {
+			ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 2);
+			ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, GHCB_ERR_INVALID_INPUT);
+		}
+
+		ret = 1;
+		break;
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		vcpu_unimpl(vcpu,
 			    "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
@@ -4576,6 +4648,7 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
 					    sev_enc_bit));
 
 	mutex_init(&svm->sev_es.snp_vmsa_mutex);
+	svm->sev_es.hvdb_gpa = INVALID_PAGE;
 }
 
 void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 43fa6a16eb19..161bd32b87ad 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -239,6 +239,8 @@ struct vcpu_sev_es_state {
 	gpa_t snp_vmsa_gpa;
 	bool snp_ap_waiting_for_reset;
 	bool snp_has_guest_vmsa;
+
+	gpa_t hvdb_gpa;
 };
 
 struct vcpu_svm {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v3 3/7] KVM: SVM: Inject #HV when restricted injection is active
  2024-11-27 22:55 [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support Melody Wang
  2024-11-27 22:55 ` [PATCH v3 1/7] x86/sev: Define the #HV doorbell page structure Melody Wang
  2024-11-27 22:55 ` [PATCH v3 2/7] KVM: SVM: Add support for the SEV-SNP #HV doorbell page NAE event Melody Wang
@ 2024-11-27 22:55 ` Melody Wang
  2024-11-27 22:55 ` [PATCH v3 4/7] KVM: SVM: Inject NMIs " Melody Wang
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Melody Wang @ 2024-11-27 22:55 UTC (permalink / raw)
  To: kvm, linux-kernel, x86
  Cc: Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta, Melody Wang

When restricted injection is active, only #HV exceptions can be injected into
the SEV-SNP guest.

Detect that the restricted injection feature is active for the guest, then
follow the #HV doorbell protocol from the GHCB specification to inject the
interrupt or exception.

Co-developed-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Melody Wang <huibo.wang@amd.com>
---
 arch/x86/include/uapi/asm/kvm.h |   1 +
 arch/x86/kvm/svm/sev.c          | 165 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c          |  16 +++-
 arch/x86/kvm/svm/svm.h          |  21 +++-
 4 files changed, 199 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 88585c1de416..ec82ab4ef70c 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -35,6 +35,7 @@
 #define MC_VECTOR 18
 #define XM_VECTOR 19
 #define VE_VECTOR 20
+#define HV_VECTOR 28
 
 /* Select x86 specific features in <linux/kvm.h> */
 #define __KVM_HAVE_PIT
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7cd1c0652d15..77dbc7dea974 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -5028,3 +5028,168 @@ int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
 
 	return level;
 }
+
+static void prepare_hv_injection(struct vcpu_svm *svm, struct hvdb *hvdb)
+{
+	if (hvdb->events.no_further_signal)
+		return;
+
+	svm->vmcb->control.event_inj = HV_VECTOR |
+				       SVM_EVTINJ_TYPE_EXEPT |
+				       SVM_EVTINJ_VALID;
+	svm->vmcb->control.event_inj_err = 0;
+
+	hvdb->events.no_further_signal = 1;
+}
+
+static void unmap_hvdb(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
+{
+	kvm_vcpu_unmap(vcpu, map);
+}
+
+static struct hvdb *map_hvdb(struct kvm_vcpu *vcpu, struct kvm_host_map *map)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (!VALID_PAGE(svm->sev_es.hvdb_gpa))
+		return NULL;
+
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->sev_es.hvdb_gpa), map)) {
+		/* Unable to map #HV doorbell page from guest */
+		vcpu_unimpl(vcpu, "snp: error mapping #HV doorbell page [%#llx] from guest\n",
+			    svm->sev_es.hvdb_gpa);
+
+		return NULL;
+	}
+
+	return map->hva;
+}
+
+static void __sev_snp_inject(enum inject_type type, struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct kvm_host_map hvdb_map;
+	struct hvdb *hvdb;
+
+	hvdb = map_hvdb(vcpu, &hvdb_map);
+	if (!hvdb) {
+		WARN_ONCE(1, "restricted injection enabled, hvdb page mapping failed\n");
+		return;
+	}
+
+	hvdb->events.vector = vcpu->arch.interrupt.nr;
+
+	prepare_hv_injection(svm, hvdb);
+
+	unmap_hvdb(vcpu, &hvdb_map);
+}
+
+bool sev_snp_queue_exception(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (!sev_snp_is_rinj_active(vcpu))
+		return false;
+
+	/*
+	 * Restricted injection is enabled, only #HV is supported.
+	 * If the vector is not HV_VECTOR, do not inject the exception,
+	 * then return true to skip the original injection path.
+	 */
+	if (WARN_ONCE(vcpu->arch.exception.vector != HV_VECTOR,
+		      "restricted injection enabled, exception vector %u injection not supported\n",
+		      vcpu->arch.exception.vector))
+		return true;
+
+	/*
+	 * An intercept likely occurred during #HV delivery, so re-inject it
+	 * using the current HVDB pending event values.
+	 */
+	svm->vmcb->control.event_inj = HV_VECTOR |
+				       SVM_EVTINJ_TYPE_EXEPT |
+				       SVM_EVTINJ_VALID;
+	svm->vmcb->control.event_inj_err = 0;
+
+	return true;
+}
+
+bool sev_snp_inject(enum inject_type type, struct kvm_vcpu *vcpu)
+{
+	if (!sev_snp_is_rinj_active(vcpu))
+		return false;
+
+	__sev_snp_inject(type, vcpu);
+
+	return true;
+}
+
+void sev_snp_cancel_injection(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct kvm_host_map hvdb_map;
+	struct hvdb *hvdb;
+
+	if (!sev_snp_is_rinj_active(vcpu))
+		return;
+
+	if (!svm->vmcb->control.event_inj)
+		return;
+
+	if (WARN_ONCE((svm->vmcb->control.event_inj & SVM_EVTINJ_VEC_MASK) != HV_VECTOR,
+			"restricted injection enabled, vector %u not supported\n",
+			svm->vmcb->control.event_inj & SVM_EVTINJ_VEC_MASK))
+		return;
+
+	/*
+	 * Copy the information in the doorbell page into the event injection
+	 * fields to complete the cancellation flow.
+	 */
+	hvdb = map_hvdb(vcpu, &hvdb_map);
+	if (!hvdb)
+		return;
+
+	if (!hvdb->events.pending_events) {
+		/* No pending events, then event_inj field should be 0 */
+		WARN_ON_ONCE(svm->vmcb->control.event_inj);
+		goto out;
+	}
+
+	/* Copy info back into event_inj field (replaces #HV) */
+	svm->vmcb->control.event_inj = SVM_EVTINJ_VALID;
+
+	if (hvdb->events.vector)
+		svm->vmcb->control.event_inj |= hvdb->events.vector |
+						SVM_EVTINJ_TYPE_INTR;
+
+	hvdb->events.pending_events = 0;
+
+out:
+	unmap_hvdb(vcpu, &hvdb_map);
+}
+
+/*
+ * sev_snp_blocked() is called for each event type (interrupt, NMI, MCE) to
+ * check whether the guest has acknowledged the previously presented event,
+ * so hvdb->events.vector is used for the check. In contrast,
+ * no_further_signal indicates to the guest that a #HV has already been
+ * presented by the hypervisor, so it is checked only when a new #HV needs
+ * to be presented to the guest.
+ */
+bool sev_snp_blocked(enum inject_type type, struct kvm_vcpu *vcpu)
+{
+	struct kvm_host_map hvdb_map;
+	struct hvdb *hvdb;
+	bool blocked;
+
+	/* Indicate interrupts are blocked if doorbell page can't be mapped */
+	hvdb = map_hvdb(vcpu, &hvdb_map);
+	if (!hvdb)
+		return true;
+
+	/* Indicate interrupts blocked based on guest acknowledgment */
+	blocked = !!hvdb->events.vector;
+
+	unmap_hvdb(vcpu, &hvdb_map);
+
+	return blocked;
+}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dd15cc635655..99f35a54b6ad 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -467,6 +467,9 @@ static void svm_inject_exception(struct kvm_vcpu *vcpu)
 	    svm_update_soft_interrupt_rip(vcpu))
 		return;
 
+	if (sev_snp_queue_exception(vcpu))
+		return;
+
 	svm->vmcb->control.event_inj = ex->vector
 		| SVM_EVTINJ_VALID
 		| (ex->has_error_code ? SVM_EVTINJ_VALID_ERR : 0)
@@ -3679,10 +3682,12 @@ static void svm_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
 
 	trace_kvm_inj_virq(vcpu->arch.interrupt.nr,
 			   vcpu->arch.interrupt.soft, reinjected);
-	++vcpu->stat.irq_injections;
 
-	svm->vmcb->control.event_inj = vcpu->arch.interrupt.nr |
-				       SVM_EVTINJ_VALID | type;
+	if (!sev_snp_inject(INJECT_IRQ, vcpu))
+		svm->vmcb->control.event_inj = vcpu->arch.interrupt.nr |
+						SVM_EVTINJ_VALID | type;
+
+	++vcpu->stat.irq_injections;
 }
 
 void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
@@ -3827,6 +3832,9 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
 	if (!gif_set(svm))
 		return true;
 
+	if (sev_snp_is_rinj_active(vcpu))
+		return sev_snp_blocked(INJECT_IRQ, vcpu);
+
 	if (is_guest_mode(vcpu)) {
 		/* As long as interrupts are being delivered...  */
 		if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK)
@@ -4145,6 +4153,8 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb_control_area *control = &svm->vmcb->control;
 
+	sev_snp_cancel_injection(vcpu);
+
 	control->exit_int_info = control->event_inj;
 	control->exit_int_info_err = control->event_inj_err;
 	control->event_inj = 0;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 161bd32b87ad..724e0b197b2c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -55,6 +55,10 @@ extern bool x2avic_enabled;
 extern bool vnmi;
 extern int lbrv;
 
+enum inject_type {
+	INJECT_IRQ,
+};
+
 /*
  * Clean bits in VMCB.
  * VMCB_ALL_CLEAN_MASK might also need to
@@ -765,6 +769,17 @@ void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
 int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
 int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn);
+bool sev_snp_queue_exception(struct kvm_vcpu *vcpu);
+bool sev_snp_inject(enum inject_type type, struct kvm_vcpu *vcpu);
+void sev_snp_cancel_injection(struct kvm_vcpu *vcpu);
+bool sev_snp_blocked(enum inject_type type, struct kvm_vcpu *vcpu);
+static inline bool sev_snp_is_rinj_active(struct kvm_vcpu *vcpu)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
+
+	return sev_snp_guest(vcpu->kvm) &&
+		(sev->vmsa_features & SVM_SEV_FEAT_RESTRICTED_INJECTION);
+};
 #else
 static inline struct page *snp_safe_alloc_page_node(int node, gfp_t gfp)
 {
@@ -795,7 +810,11 @@ static inline int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
 {
 	return 0;
 }
-
+static inline bool sev_snp_queue_exception(struct kvm_vcpu *vcpu) { return false; }
+static inline bool sev_snp_inject(enum inject_type type, struct kvm_vcpu *vcpu) { return false; }
+static inline void sev_snp_cancel_injection(struct kvm_vcpu *vcpu) {}
+static inline bool sev_snp_blocked(enum inject_type type, struct kvm_vcpu *vcpu) { return false; }
+static inline bool sev_snp_is_rinj_active(struct kvm_vcpu *vcpu) { return false; }
 #endif
 
 /* vmenter.S */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v3 4/7] KVM: SVM: Inject NMIs when restricted injection is active
  2024-11-27 22:55 [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support Melody Wang
                   ` (2 preceding siblings ...)
  2024-11-27 22:55 ` [PATCH v3 3/7] KVM: SVM: Inject #HV when restricted injection is active Melody Wang
@ 2024-11-27 22:55 ` Melody Wang
  2024-11-27 22:55 ` [PATCH v3 5/7] KVM: SVM: Inject MCEs " Melody Wang
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Melody Wang @ 2024-11-27 22:55 UTC (permalink / raw)
  To: kvm, linux-kernel, x86
  Cc: Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta, Melody Wang

When restricted injection is active, only #HV exceptions can be injected into
the SEV-SNP guest.

Detect that the restricted injection feature is active for the guest, then
follow the #HV doorbell protocol from the GHCB specification to inject NMIs.

Co-developed-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Melody Wang <huibo.wang@amd.com>
---
 arch/x86/kvm/svm/sev.c | 19 ++++++++++++++++---
 arch/x86/kvm/svm/svm.c |  8 ++++++++
 arch/x86/kvm/svm/svm.h |  1 +
 3 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 77dbc7dea974..00d1f620d14a 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -5077,7 +5077,10 @@ static void __sev_snp_inject(enum inject_type type, struct kvm_vcpu *vcpu)
 		return;
 	}
 
-	hvdb->events.vector = vcpu->arch.interrupt.nr;
+	if (type == INJECT_NMI)
+		hvdb->events.nmi = 1;
+	else
+		hvdb->events.vector = vcpu->arch.interrupt.nr;
 
 	prepare_hv_injection(svm, hvdb);
 
@@ -5157,10 +5160,17 @@ void sev_snp_cancel_injection(struct kvm_vcpu *vcpu)
 	/* Copy info back into event_inj field (replaces #HV) */
 	svm->vmcb->control.event_inj = SVM_EVTINJ_VALID;
 
+	/*
+	 * KVM only injects a single event each time (prepare_hv_injection),
+	 * so when events.nmi is true, the vector will be zero
+	 */
 	if (hvdb->events.vector)
 		svm->vmcb->control.event_inj |= hvdb->events.vector |
 						SVM_EVTINJ_TYPE_INTR;
 
+	if (hvdb->events.nmi)
+		svm->vmcb->control.event_inj |= SVM_EVTINJ_TYPE_NMI;
+
 	hvdb->events.pending_events = 0;
 
 out:
@@ -5186,8 +5196,11 @@ bool sev_snp_blocked(enum inject_type type, struct kvm_vcpu *vcpu)
 	if (!hvdb)
 		return true;
 
-	/* Indicate interrupts blocked based on guest acknowledgment */
-	blocked = !!hvdb->events.vector;
+	/* Indicate NMIs and interrupts blocked based on guest acknowledgment */
+	if (type == INJECT_NMI)
+		blocked = hvdb->events.nmi;
+	else
+		blocked = !!hvdb->events.vector;
 
 	unmap_hvdb(vcpu, &hvdb_map);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 99f35a54b6ad..91bf17684bc8 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3616,6 +3616,9 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	if (sev_snp_inject(INJECT_NMI, vcpu))
+		goto status;
+
 	svm->vmcb->control.event_inj = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI;
 
 	if (svm->nmi_l1_to_l2)
@@ -3630,6 +3633,8 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
 		svm->nmi_masked = true;
 		svm_set_iret_intercept(svm);
 	}
+
+status:
 	++vcpu->stat.nmi_injections;
 }
 
@@ -3800,6 +3805,9 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
 	if (!gif_set(svm))
 		return true;
 
+	if (sev_snp_is_rinj_active(vcpu))
+		return sev_snp_blocked(INJECT_NMI, vcpu);
+
 	if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
 		return false;
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 724e0b197b2c..b6e833f455ae 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -57,6 +57,7 @@ extern int lbrv;
 
 enum inject_type {
 	INJECT_IRQ,
+	INJECT_NMI,
 };
 
 /*
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v3 5/7] KVM: SVM: Inject MCEs when restricted injection is active
  2024-11-27 22:55 [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support Melody Wang
                   ` (3 preceding siblings ...)
  2024-11-27 22:55 ` [PATCH v3 4/7] KVM: SVM: Inject NMIs " Melody Wang
@ 2024-11-27 22:55 ` Melody Wang
  2024-11-28 13:41   ` kernel test robot
  2024-11-27 22:55 ` [PATCH v3 6/7] KVM: SVM: Add support for the SEV-SNP #HV IPI NAE event Melody Wang
  2024-11-27 22:55 ` [PATCH v3 7/7] KVM: SVM: Enable restricted injection for an SEV-SNP guest Melody Wang
  6 siblings, 1 reply; 11+ messages in thread
From: Melody Wang @ 2024-11-27 22:55 UTC (permalink / raw)
  To: kvm, linux-kernel, x86
  Cc: Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta, Melody Wang

When restricted injection is active, only #HV exceptions can be injected into
the SEV-SNP guest.

Detect that the restricted injection feature is active for the guest, then
follow the #HV doorbell protocol from the GHCB specification to inject MCEs.

Co-developed-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Melody Wang <huibo.wang@amd.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  1 +
 arch/x86/kvm/svm/sev.c             | 16 ++++++++++++++--
 arch/x86/kvm/svm/svm.c             | 17 +++++++++++++++++
 arch/x86/kvm/svm/svm.h             |  2 ++
 arch/x86/kvm/vmx/main.c            |  1 +
 arch/x86/kvm/vmx/vmx.c             |  5 +++++
 arch/x86/kvm/vmx/x86_ops.h         |  1 +
 arch/x86/kvm/x86.c                 |  7 +++++++
 9 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 5aff7222e40f..07fb1e2f59cb 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -77,6 +77,7 @@ KVM_X86_OP(inject_exception)
 KVM_X86_OP(cancel_injection)
 KVM_X86_OP(interrupt_allowed)
 KVM_X86_OP(nmi_allowed)
+KVM_X86_OP_OPTIONAL(mce_allowed)
 KVM_X86_OP(get_nmi_mask)
 KVM_X86_OP(set_nmi_mask)
 KVM_X86_OP(enable_nmi_window)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e159e44a6a1b..288b826e384c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1717,6 +1717,7 @@ struct kvm_x86_ops {
 	void (*cancel_injection)(struct kvm_vcpu *vcpu);
 	int (*interrupt_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
 	int (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
+	int (*mce_allowed)(struct kvm_vcpu *vcpu);
 	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
 	void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
 	/* Whether or not a virtual NMI is pending in hardware. */
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 00d1f620d14a..19fcb0ddcff0 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -5079,6 +5079,8 @@ static void __sev_snp_inject(enum inject_type type, struct kvm_vcpu *vcpu)
 
 	if (type == INJECT_NMI)
 		hvdb->events.nmi = 1;
+	else if (type == INJECT_MCE)
+		hvdb->events.mce = 1;
 	else
 		hvdb->events.vector = vcpu->arch.interrupt.nr;
 
@@ -5094,6 +5096,11 @@ bool sev_snp_queue_exception(struct kvm_vcpu *vcpu)
 	if (!sev_snp_is_rinj_active(vcpu))
 		return false;
 
+	if (vcpu->arch.exception.vector == MC_VECTOR) {
+		__sev_snp_inject(INJECT_MCE, vcpu);
+		return true;
+	}
+
 	/*
 	 * Restricted injection is enabled, only #HV is supported.
 	 * If the vector is not HV_VECTOR, do not inject the exception,
@@ -5162,7 +5169,7 @@ void sev_snp_cancel_injection(struct kvm_vcpu *vcpu)
 
 	/*
 	 * KVM only injects a single event each time (prepare_hv_injection),
-	 * so when events.nmi is true, the vector will be zero
+	 * so when events.nmi is true, the mce and vector will be zero
 	 */
 	if (hvdb->events.vector)
 		svm->vmcb->control.event_inj |= hvdb->events.vector |
@@ -5171,6 +5178,9 @@ void sev_snp_cancel_injection(struct kvm_vcpu *vcpu)
 	if (hvdb->events.nmi)
 		svm->vmcb->control.event_inj |= SVM_EVTINJ_TYPE_NMI;
 
+	if (hvdb->events.mce)
+		svm->vmcb->control.event_inj |= MC_VECTOR | SVM_EVTINJ_TYPE_EXEPT;
+
 	hvdb->events.pending_events = 0;
 
 out:
@@ -5196,9 +5206,11 @@ bool sev_snp_blocked(enum inject_type type, struct kvm_vcpu *vcpu)
 	if (!hvdb)
 		return true;
 
-	/* Indicate NMIs and interrupts blocked based on guest acknowledgment */
+	/* Indicate NMIs, MCEs and interrupts blocked based on guest acknowledgment */
 	if (type == INJECT_NMI)
 		blocked = hvdb->events.nmi;
+	else if (type == INJECT_MCE)
+		blocked = hvdb->events.mce;
 	else
 		blocked = !!hvdb->events.vector;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 91bf17684bc8..696653269c55 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3881,6 +3881,22 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 	return 1;
 }
 
+bool svm_mce_blocked(struct kvm_vcpu *vcpu)
+{
+	if (sev_snp_is_rinj_active(vcpu))
+		return sev_snp_blocked(INJECT_MCE, vcpu);
+
+	return false;
+}
+
+static int svm_mce_allowed(struct kvm_vcpu *vcpu)
+{
+	if (svm_mce_blocked(vcpu))
+		return 0;
+
+	return 1;
+}
+
 static void svm_enable_irq_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -5091,6 +5107,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.cancel_injection = svm_cancel_injection,
 	.interrupt_allowed = svm_interrupt_allowed,
 	.nmi_allowed = svm_nmi_allowed,
+	.mce_allowed = svm_mce_allowed,
 	.get_nmi_mask = svm_get_nmi_mask,
 	.set_nmi_mask = svm_set_nmi_mask,
 	.enable_nmi_window = svm_enable_nmi_window,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index b6e833f455ae..9c71bf01729b 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -58,6 +58,7 @@ extern int lbrv;
 enum inject_type {
 	INJECT_IRQ,
 	INJECT_NMI,
+	INJECT_MCE,
 };
 
 /*
@@ -616,6 +617,7 @@ void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 void disable_nmi_singlestep(struct vcpu_svm *svm);
 bool svm_smi_blocked(struct kvm_vcpu *vcpu);
 bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
+bool svm_mce_blocked(struct kvm_vcpu *vcpu);
 bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
 void svm_set_gif(struct vcpu_svm *svm, bool value);
 int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code);
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 92d35cc6cd15..036f750c53c5 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -87,6 +87,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.cancel_injection = vmx_cancel_injection,
 	.interrupt_allowed = vmx_interrupt_allowed,
 	.nmi_allowed = vmx_nmi_allowed,
+	.mce_allowed = vmx_mce_allowed,
 	.get_nmi_mask = vmx_get_nmi_mask,
 	.set_nmi_mask = vmx_set_nmi_mask,
 	.enable_nmi_window = vmx_enable_nmi_window,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 893366e53732..afa6d126324c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5103,6 +5103,11 @@ int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 	return !vmx_interrupt_blocked(vcpu);
 }
 
+int vmx_mce_allowed(struct kvm_vcpu *vcpu)
+{
+	return 1;
+}
+
 int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
 {
 	void __user *ret;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index a55981c5216e..8607ef20897d 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -93,6 +93,7 @@ void vmx_inject_exception(struct kvm_vcpu *vcpu);
 void vmx_cancel_injection(struct kvm_vcpu *vcpu);
 int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection);
 int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int vmx_mce_allowed(struct kvm_vcpu *vcpu);
 bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
 void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
 void vmx_enable_nmi_window(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2e713480933a..a76ce35c5b93 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10357,12 +10357,19 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
 			}
 		}
 
+		if (vcpu->arch.exception.vector == MC_VECTOR) {
+			r = static_call(kvm_x86_mce_allowed)(vcpu);
+			if (!r)
+				goto out_except;
+		}
+
 		kvm_inject_exception(vcpu);
 
 		vcpu->arch.exception.pending = false;
 		vcpu->arch.exception.injected = true;
 
 		can_inject = false;
+out_except:
 	}
 
 	/* Don't inject interrupts if the user asked to avoid doing so */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v3 6/7] KVM: SVM: Add support for the SEV-SNP #HV IPI NAE event
  2024-11-27 22:55 [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support Melody Wang
                   ` (4 preceding siblings ...)
  2024-11-27 22:55 ` [PATCH v3 5/7] KVM: SVM: Inject MCEs " Melody Wang
@ 2024-11-27 22:55 ` Melody Wang
  2024-11-27 22:55 ` [PATCH v3 7/7] KVM: SVM: Enable restricted injection for an SEV-SNP guest Melody Wang
  6 siblings, 0 replies; 11+ messages in thread
From: Melody Wang @ 2024-11-27 22:55 UTC (permalink / raw)
  To: kvm, linux-kernel, x86
  Cc: Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta, Melody Wang

The #HV IPI NAE event allows a guest to send an IPI to other vCPUs in the
guest when the Restricted Injection feature is enabled. Implement the NAE
event as described in the GHCB specification.

Co-developed-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Melody Wang <huibo.wang@amd.com>
---
 arch/x86/include/uapi/asm/svm.h |  1 +
 arch/x86/kvm/lapic.c            | 24 +++++++++++++++++++++++-
 arch/x86/kvm/lapic.h            |  2 ++
 arch/x86/kvm/svm/sev.c          | 29 +++++++++++++++++++++++++++++
 4 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 7905c9be44d1..7a3a599d3df8 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -120,6 +120,7 @@
 #define SVM_VMGEXIT_HVDB_SET                    1
 #define SVM_VMGEXIT_HVDB_QUERY                  2
 #define SVM_VMGEXIT_HVDB_CLEAR                  3
+#define SVM_VMGEXIT_HV_IPI                      0x80000015
 #define SVM_VMGEXIT_SNP_RUN_VMPL		0x80000018
 #define SVM_VMGEXIT_HV_FEATURES			0x8000fffd
 #define SVM_VMGEXIT_TERM_REQUEST		0x8000fffe
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 3c83951c619e..99a45ba1b637 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2417,7 +2417,7 @@ static int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
 static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
 			    gpa_t address, int len, const void *data)
 {
-	struct kvm_lapic *apic = to_lapic(this);
+	struct kvm_lapic *apic = this ? to_lapic(this) : vcpu->arch.apic;
 	unsigned int offset = address - apic->base_address;
 	u32 val;
 
@@ -3416,3 +3416,25 @@ void kvm_lapic_exit(void)
 	static_key_deferred_flush(&apic_sw_disabled);
 	WARN_ON(static_branch_unlikely(&apic_sw_disabled.key));
 }
+
+/* Send an IPI by writing the ICR: an MSR write in x2APIC mode, an MMIO write in xAPIC mode */
+int kvm_xapic_x2apic_send_ipi(struct kvm_vcpu *vcpu, u64 data)
+{
+	u32 icr_msr_addr = APIC_BASE_MSR + (APIC_ICR >> 4);
+	struct kvm_lapic *apic = vcpu->arch.apic;
+	gpa_t gpa = apic->base_address + APIC_ICR;
+
+	if (!kvm_lapic_enabled(vcpu))
+		return 1;
+
+	if (vcpu->arch.apic_base & X2APIC_ENABLE) {
+		if (!kvm_x2apic_msr_write(vcpu, icr_msr_addr, data))
+			return 0;
+	} else {
+		if (!apic_mmio_write(vcpu, NULL, gpa, 4, &data))
+			return 0;
+	}
+
+	return 1;
+}
+EXPORT_SYMBOL_GPL(kvm_xapic_x2apic_send_ipi);
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 24add38beaf0..29c55f35f889 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -140,6 +140,8 @@ int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
 int kvm_lapic_set_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len);
 void kvm_lapic_exit(void);
 
+int kvm_xapic_x2apic_send_ipi(struct kvm_vcpu *vcpu, u64 data);
+
 u64 kvm_lapic_readable_reg_mask(struct kvm_lapic *apic);
 
 #define VEC_POS(v) ((v) & (32 - 1))
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 19fcb0ddcff0..5e8fc8cf2d0d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -34,6 +34,7 @@
 #include "svm_ops.h"
 #include "cpuid.h"
 #include "trace.h"
+#include "lapic.h"
 
 #define GHCB_VERSION_MAX	2ULL
 #define GHCB_VERSION_DEFAULT	2ULL
@@ -3417,6 +3418,10 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
 		if (!sev_snp_guest(vcpu->kvm))
 			goto vmgexit_err;
 		break;
+	case SVM_VMGEXIT_HV_IPI:
+		if (!sev_snp_guest(vcpu->kvm))
+			goto vmgexit_err;
+		break;
 	default:
 		reason = GHCB_ERR_INVALID_EVENT;
 		goto vmgexit_err;
@@ -4193,6 +4198,22 @@ static int sev_snp_hv_doorbell_page(struct vcpu_svm *svm)
 	return 0;
 }
 
+static int sev_snp_hv_ipi(struct vcpu_svm *svm)
+{
+	struct kvm_vcpu *vcpu = &svm->vcpu;
+	u64 icr_info;
+
+	if (!sev_snp_guest(vcpu->kvm))
+		return -EINVAL;
+
+	icr_info = svm->vmcb->control.exit_info_1;
+
+	if (kvm_xapic_x2apic_send_ipi(vcpu, icr_info))
+		return -EINVAL;
+
+	return 0;
+}
+
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -4479,6 +4500,14 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
 			ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, GHCB_ERR_INVALID_INPUT);
 		}
 
+		ret = 1;
+		break;
+	case SVM_VMGEXIT_HV_IPI:
+		if (sev_snp_hv_ipi(svm)) {
+			ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 2);
+			ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, GHCB_ERR_INVALID_INPUT);
+		}
+
 		ret = 1;
 		break;
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v3 7/7] KVM: SVM: Enable restricted injection for an SEV-SNP guest
  2024-11-27 22:55 [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support Melody Wang
                   ` (5 preceding siblings ...)
  2024-11-27 22:55 ` [PATCH v3 6/7] KVM: SVM: Add support for the SEV-SNP #HV IPI NAE event Melody Wang
@ 2024-11-27 22:55 ` Melody Wang
  6 siblings, 0 replies; 11+ messages in thread
From: Melody Wang @ 2024-11-27 22:55 UTC (permalink / raw)
  To: kvm, linux-kernel, x86
  Cc: Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta, Melody Wang

Enable restricted injection in an SEV-SNP guest by setting the restricted
injection bit in the VMSA SEV features field (SEV_FEATURES[3]) from QEMU.

Add restricted injection support to the hypervisor advertised features.

Co-developed-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Melody Wang <huibo.wang@amd.com>
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/include/asm/sev-common.h  |  1 +
 arch/x86/kvm/svm/sev.c             | 26 +++++++++++++++++++++++++-
 3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index d96277dceabf..c0a409ac1ea3 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -450,6 +450,7 @@
 #define X86_FEATURE_SEV_SNP		(19*32+ 4) /* "sev_snp" AMD Secure Encrypted Virtualization - Secure Nested Paging */
 #define X86_FEATURE_V_TSC_AUX		(19*32+ 9) /* Virtual TSC_AUX */
 #define X86_FEATURE_SME_COHERENT	(19*32+10) /* AMD hardware-enforced cache coherency */
+#define X86_FEATURE_RESTRICTED_INJECTION	(19*32+12) /* AMD SEV Restricted Injection */
 #define X86_FEATURE_DEBUG_SWAP		(19*32+14) /* "debug_swap" AMD SEV-ES full debug state swap support */
 #define X86_FEATURE_SVSM		(19*32+28) /* "svsm" SVSM present */
 
diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 98726c2b04f8..f409893ad1a5 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -136,6 +136,7 @@ enum psc_op {
 
 #define GHCB_HV_FT_SNP			BIT_ULL(0)
 #define GHCB_HV_FT_SNP_AP_CREATION	BIT_ULL(1)
+#define GHCB_HV_FT_SNP_RINJ		(BIT_ULL(2) | GHCB_HV_FT_SNP_AP_CREATION)
 #define GHCB_HV_FT_SNP_MULTI_VMPL	BIT_ULL(5)
 
 /*
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 5e8fc8cf2d0d..d2a1b4304e41 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -40,7 +40,9 @@
 #define GHCB_VERSION_DEFAULT	2ULL
 #define GHCB_VERSION_MIN	1ULL
 
-#define GHCB_HV_FT_SUPPORTED	(GHCB_HV_FT_SNP | GHCB_HV_FT_SNP_AP_CREATION)
+#define GHCB_HV_FT_SUPPORTED	(GHCB_HV_FT_SNP |		\
+				 GHCB_HV_FT_SNP_AP_CREATION |	\
+				 GHCB_HV_FT_SNP_RINJ)
 
 /* enable/disable SEV support */
 static bool sev_enabled = true;
@@ -57,6 +59,10 @@ module_param_named(sev_snp, sev_snp_enabled, bool, 0444);
 /* enable/disable SEV-ES DebugSwap support */
 static bool sev_es_debug_swap_enabled = true;
 module_param_named(debug_swap, sev_es_debug_swap_enabled, bool, 0444);
+
+/* enable/disable SEV-SNP Restricted Injection support */
+static bool sev_snp_restricted_injection_enabled = true;
+module_param_named(restricted_injection, sev_snp_restricted_injection_enabled, bool, 0444);
 static u64 sev_supported_vmsa_features;
 
 #define AP_RESET_HOLD_NONE		0
@@ -3083,6 +3089,12 @@ void __init sev_hardware_setup(void)
 	sev_supported_vmsa_features = 0;
 	if (sev_es_debug_swap_enabled)
 		sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
+
+	if (!sev_snp_enabled || !cpu_feature_enabled(X86_FEATURE_RESTRICTED_INJECTION))
+		sev_snp_restricted_injection_enabled = false;
+
+	if (sev_snp_restricted_injection_enabled)
+		sev_supported_vmsa_features |= SVM_SEV_FEAT_RESTRICTED_INJECTION;
 }
 
 void sev_hardware_unsetup(void)
@@ -4589,6 +4601,15 @@ void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm)
 		sev_es_vcpu_after_set_cpuid(svm);
 }
 
+static void sev_snp_init_vmcb(struct vcpu_svm *svm)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
+
+	/* V_NMI is not supported when Restricted Injection is enabled */
+	if (sev->vmsa_features & SVM_SEV_FEAT_RESTRICTED_INJECTION)
+		svm->vmcb->control.int_ctl &= ~V_NMI_ENABLE_MASK;
+}
+
 static void sev_es_init_vmcb(struct vcpu_svm *svm)
 {
 	struct vmcb *vmcb = svm->vmcb01.ptr;
@@ -4646,6 +4667,9 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
 	/* Clear intercepts on selected MSRs */
 	set_msr_interception(vcpu, svm->msrpm, MSR_EFER, 1, 1);
 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_CR_PAT, 1, 1);
+
+	if (sev_snp_guest(vcpu->kvm))
+		sev_snp_init_vmcb(svm);
 }
 
 void sev_init_vmcb(struct vcpu_svm *svm)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v3 5/7] KVM: SVM: Inject MCEs when restricted injection is active
  2024-11-27 22:55 ` [PATCH v3 5/7] KVM: SVM: Inject MCEs " Melody Wang
@ 2024-11-28 13:41   ` kernel test robot
  2024-11-30 21:02     ` Melody (Huibo) Wang
  0 siblings, 1 reply; 11+ messages in thread
From: kernel test robot @ 2024-11-28 13:41 UTC (permalink / raw)
  To: Melody Wang, kvm, linux-kernel, x86
  Cc: llvm, oe-kbuild-all, Sean Christopherson, Paolo Bonzini,
	Tom Lendacky, Neeraj Upadhyay, Ashish Kalra, Michael Roth,
	Pankaj Gupta, Melody Wang

Hi Melody,

kernel test robot noticed the following build warnings:

[auto build test WARNING on kvm/queue]
[also build test WARNING on mst-vhost/linux-next tip/x86/core linus/master v6.12 next-20241128]
[cannot apply to kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Melody-Wang/x86-sev-Define-the-HV-doorbell-page-structure/20241128-122321
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
patch link:    https://lore.kernel.org/r/20241127225539.5567-6-huibo.wang%40amd.com
patch subject: [PATCH v3 5/7] KVM: SVM: Inject MCEs when restricted injection is active
config: x86_64-kexec (https://download.01.org/0day-ci/archive/20241128/202411282157.6f84J7Wh-lkp@intel.com/config)
compiler: clang version 19.1.3 (https://github.com/llvm/llvm-project ab51eccf88f5321e7c60591c5546b254b6afab99)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241128/202411282157.6f84J7Wh-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202411282157.6f84J7Wh-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from arch/x86/kvm/x86.c:20:
   In file included from include/linux/kvm_host.h:16:
   In file included from include/linux/mm.h:2213:
   include/linux/vmstat.h:504:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
     504 |         return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~ ^
     505 |                            item];
         |                            ~~~~
   include/linux/vmstat.h:511:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
     511 |         return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~ ^
     512 |                            NR_VM_NUMA_EVENT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~~
   include/linux/vmstat.h:518:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
     518 |         return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
         |                               ~~~~~~~~~~~ ^ ~~~
   include/linux/vmstat.h:524:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
     524 |         return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~ ^
     525 |                            NR_VM_NUMA_EVENT_ITEMS +
         |                            ~~~~~~~~~~~~~~~~~~~~~~
>> arch/x86/kvm/x86.c:10373:2: warning: label at end of compound statement is a C23 extension [-Wc23-extensions]
    10373 |         }
          |         ^
   5 warnings generated.


vim +10373 arch/x86/kvm/x86.c

b97f074583736c Maxim Levitsky      2021-02-25  10214  
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10215  /*
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10216   * Check for any event (interrupt or exception) that is ready to be injected,
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10217   * and if there is at least one event, inject the event with the highest
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10218   * priority.  This handles both "pending" events, i.e. events that have never
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10219   * been injected into the guest, and "injected" events, i.e. events that were
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10220   * injected as part of a previous VM-Enter, but weren't successfully delivered
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10221   * and need to be re-injected.
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10222   *
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10223   * Note, this is not guaranteed to be invoked on a guest instruction boundary,
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10224   * i.e. doesn't guarantee that there's an event window in the guest.  KVM must
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10225   * be able to inject exceptions in the "middle" of an instruction, and so must
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10226   * also be able to re-inject NMIs and IRQs in the middle of an instruction.
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10227   * I.e. for exceptions and re-injected events, NOT invoking this on instruction
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10228   * boundaries is necessary and correct.
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10229   *
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10230   * For simplicity, KVM uses a single path to inject all events (except events
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10231   * that are injected directly from L1 to L2) and doesn't explicitly track
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10232   * instruction boundaries for asynchronous events.  However, because VM-Exits
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10233   * that can occur during instruction execution typically result in KVM skipping
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10234   * the instruction or injecting an exception, e.g. instruction and exception
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10235   * intercepts, and because pending exceptions have higher priority than pending
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10236   * interrupts, KVM still honors instruction boundaries in most scenarios.
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10237   *
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10238   * But, if a VM-Exit occurs during instruction execution, and KVM does NOT skip
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10239   * the instruction or inject an exception, then KVM can incorrecty inject a new
54aa699e8094ef Bjorn Helgaas       2024-01-02  10240   * asynchronous event if the event became pending after the CPU fetched the
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10241   * instruction (in the guest).  E.g. if a page fault (#PF, #NPF, EPT violation)
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10242   * occurs and is resolved by KVM, a coincident NMI, SMI, IRQ, etc... can be
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10243   * injected on the restarted instruction instead of being deferred until the
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10244   * instruction completes.
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10245   *
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10246   * In practice, this virtualization hole is unlikely to be observed by the
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10247   * guest, and even less likely to cause functional problems.  To detect the
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10248   * hole, the guest would have to trigger an event on a side effect of an early
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10249   * phase of instruction execution, e.g. on the instruction fetch from memory.
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10250   * And for it to be a functional problem, the guest would need to depend on the
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10251   * ordering between that side effect, the instruction completing, _and_ the
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10252   * delivery of the asynchronous event.
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10253   */
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10254  static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
e746c1f1b94ac9 Sean Christopherson 2022-08-30  10255  				       bool *req_immediate_exit)
95ba82731374eb Gleb Natapov        2009-04-21  10256  {
28360f88706837 Sean Christopherson 2022-08-30  10257  	bool can_inject;
b6b8a1451fc404 Jan Kiszka          2014-03-07  10258  	int r;
b6b8a1451fc404 Jan Kiszka          2014-03-07  10259  
6c593b5276e6ce Sean Christopherson 2022-08-30  10260  	/*
54aa699e8094ef Bjorn Helgaas       2024-01-02  10261  	 * Process nested events first, as nested VM-Exit supersedes event
6c593b5276e6ce Sean Christopherson 2022-08-30  10262  	 * re-injection.  If there's an event queued for re-injection, it will
6c593b5276e6ce Sean Christopherson 2022-08-30  10263  	 * be saved into the appropriate vmc{b,s}12 fields on nested VM-Exit.
6c593b5276e6ce Sean Christopherson 2022-08-30  10264  	 */
6c593b5276e6ce Sean Christopherson 2022-08-30  10265  	if (is_guest_mode(vcpu))
6c593b5276e6ce Sean Christopherson 2022-08-30  10266  		r = kvm_check_nested_events(vcpu);
6c593b5276e6ce Sean Christopherson 2022-08-30  10267  	else
6c593b5276e6ce Sean Christopherson 2022-08-30  10268  		r = 0;
b59bb7bdf08ee9 Gleb Natapov        2009-07-09  10269  
664f8e26b00c76 Wanpeng Li          2017-08-24  10270  	/*
6c593b5276e6ce Sean Christopherson 2022-08-30  10271  	 * Re-inject exceptions and events *especially* if immediate entry+exit
6c593b5276e6ce Sean Christopherson 2022-08-30  10272  	 * to/from L2 is needed, as any event that has already been injected
6c593b5276e6ce Sean Christopherson 2022-08-30  10273  	 * into L2 needs to complete its lifecycle before injecting a new event.
6c593b5276e6ce Sean Christopherson 2022-08-30  10274  	 *
6c593b5276e6ce Sean Christopherson 2022-08-30  10275  	 * Don't re-inject an NMI or interrupt if there is a pending exception.
6c593b5276e6ce Sean Christopherson 2022-08-30  10276  	 * This collision arises if an exception occurred while vectoring the
6c593b5276e6ce Sean Christopherson 2022-08-30  10277  	 * injected event, KVM intercepted said exception, and KVM ultimately
6c593b5276e6ce Sean Christopherson 2022-08-30  10278  	 * determined the fault belongs to the guest and queues the exception
6c593b5276e6ce Sean Christopherson 2022-08-30  10279  	 * for injection back into the guest.
6c593b5276e6ce Sean Christopherson 2022-08-30  10280  	 *
6c593b5276e6ce Sean Christopherson 2022-08-30  10281  	 * "Injected" interrupts can also collide with pending exceptions if
6c593b5276e6ce Sean Christopherson 2022-08-30  10282  	 * userspace ignores the "ready for injection" flag and blindly queues
6c593b5276e6ce Sean Christopherson 2022-08-30  10283  	 * an interrupt.  In that case, prioritizing the exception is correct,
6c593b5276e6ce Sean Christopherson 2022-08-30  10284  	 * as the exception "occurred" before the exit to userspace.  Trap-like
6c593b5276e6ce Sean Christopherson 2022-08-30  10285  	 * exceptions, e.g. most #DBs, have higher priority than interrupts.
6c593b5276e6ce Sean Christopherson 2022-08-30  10286  	 * And while fault-like exceptions, e.g. #GP and #PF, are the lowest
6c593b5276e6ce Sean Christopherson 2022-08-30  10287  	 * priority, they're only generated (pended) during instruction
6c593b5276e6ce Sean Christopherson 2022-08-30  10288  	 * execution, and interrupts are recognized at instruction boundaries.
6c593b5276e6ce Sean Christopherson 2022-08-30  10289  	 * Thus a pending fault-like exception means the fault occurred on the
6c593b5276e6ce Sean Christopherson 2022-08-30  10290  	 * *previous* instruction and must be serviced prior to recognizing any
6c593b5276e6ce Sean Christopherson 2022-08-30  10291  	 * new events in order to fully complete the previous instruction.
6c593b5276e6ce Sean Christopherson 2022-08-30  10292  	 */
6c593b5276e6ce Sean Christopherson 2022-08-30  10293  	if (vcpu->arch.exception.injected)
b97f074583736c Maxim Levitsky      2021-02-25  10294  		kvm_inject_exception(vcpu);
7709aba8f71613 Sean Christopherson 2022-08-30  10295  	else if (kvm_is_exception_pending(vcpu))
6c593b5276e6ce Sean Christopherson 2022-08-30  10296  		; /* see above */
6c593b5276e6ce Sean Christopherson 2022-08-30  10297  	else if (vcpu->arch.nmi_injected)
896046474f8d2e Wei Wang            2024-05-07  10298  		kvm_x86_call(inject_nmi)(vcpu);
6c593b5276e6ce Sean Christopherson 2022-08-30  10299  	else if (vcpu->arch.interrupt.injected)
896046474f8d2e Wei Wang            2024-05-07  10300  		kvm_x86_call(inject_irq)(vcpu, true);
b6b8a1451fc404 Jan Kiszka          2014-03-07  10301  
6c593b5276e6ce Sean Christopherson 2022-08-30  10302  	/*
6c593b5276e6ce Sean Christopherson 2022-08-30  10303  	 * Exceptions that morph to VM-Exits are handled above, and pending
6c593b5276e6ce Sean Christopherson 2022-08-30  10304  	 * exceptions on top of injected exceptions that do not VM-Exit should
6c593b5276e6ce Sean Christopherson 2022-08-30  10305  	 * either morph to #DF or, sadly, override the injected exception.
6c593b5276e6ce Sean Christopherson 2022-08-30  10306  	 */
3b82b8d7fdf7c1 Sean Christopherson 2020-04-22  10307  	WARN_ON_ONCE(vcpu->arch.exception.injected &&
3b82b8d7fdf7c1 Sean Christopherson 2020-04-22  10308  		     vcpu->arch.exception.pending);
3b82b8d7fdf7c1 Sean Christopherson 2020-04-22  10309  
1a680e355c9477 Liran Alon          2018-03-23  10310  	/*
6c593b5276e6ce Sean Christopherson 2022-08-30  10311  	 * Bail if immediate entry+exit to/from the guest is needed to complete
6c593b5276e6ce Sean Christopherson 2022-08-30  10312  	 * nested VM-Enter or event re-injection so that a different pending
6c593b5276e6ce Sean Christopherson 2022-08-30  10313  	 * event can be serviced (or if KVM needs to exit to userspace).
6c593b5276e6ce Sean Christopherson 2022-08-30  10314  	 *
6c593b5276e6ce Sean Christopherson 2022-08-30  10315  	 * Otherwise, continue processing events even if VM-Exit occurred.  The
6c593b5276e6ce Sean Christopherson 2022-08-30  10316  	 * VM-Exit will have cleared exceptions that were meant for L2, but
6c593b5276e6ce Sean Christopherson 2022-08-30  10317  	 * there may now be events that can be injected into L1.
1a680e355c9477 Liran Alon          2018-03-23  10318  	 */
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10319  	if (r < 0)
a5f6909a71f922 Jim Mattson         2021-06-04  10320  		goto out;
95ba82731374eb Gleb Natapov        2009-04-21  10321  
7709aba8f71613 Sean Christopherson 2022-08-30  10322  	/*
7709aba8f71613 Sean Christopherson 2022-08-30  10323  	 * A pending exception VM-Exit should either result in nested VM-Exit
7709aba8f71613 Sean Christopherson 2022-08-30  10324  	 * or force an immediate re-entry and exit to/from L2, and exception
7709aba8f71613 Sean Christopherson 2022-08-30  10325  	 * VM-Exits cannot be injected (flag should _never_ be set).
7709aba8f71613 Sean Christopherson 2022-08-30  10326  	 */
7709aba8f71613 Sean Christopherson 2022-08-30  10327  	WARN_ON_ONCE(vcpu->arch.exception_vmexit.injected ||
7709aba8f71613 Sean Christopherson 2022-08-30  10328  		     vcpu->arch.exception_vmexit.pending);
7709aba8f71613 Sean Christopherson 2022-08-30  10329  
28360f88706837 Sean Christopherson 2022-08-30  10330  	/*
28360f88706837 Sean Christopherson 2022-08-30  10331  	 * New events, other than exceptions, cannot be injected if KVM needs
28360f88706837 Sean Christopherson 2022-08-30  10332  	 * to re-inject a previous event.  See above comments on re-injecting
28360f88706837 Sean Christopherson 2022-08-30  10333  	 * for why pending exceptions get priority.
28360f88706837 Sean Christopherson 2022-08-30  10334  	 */
28360f88706837 Sean Christopherson 2022-08-30  10335  	can_inject = !kvm_event_needs_reinjection(vcpu);
28360f88706837 Sean Christopherson 2022-08-30  10336  
664f8e26b00c76 Wanpeng Li          2017-08-24  10337  	if (vcpu->arch.exception.pending) {
5623f751bd9c43 Sean Christopherson 2022-08-30  10338  		/*
5623f751bd9c43 Sean Christopherson 2022-08-30  10339  		 * Fault-class exceptions, except #DBs, set RF=1 in the RFLAGS
5623f751bd9c43 Sean Christopherson 2022-08-30  10340  		 * value pushed on the stack.  Trap-like exception and all #DBs
5623f751bd9c43 Sean Christopherson 2022-08-30  10341  		 * leave RF as-is (KVM follows Intel's behavior in this regard;
5623f751bd9c43 Sean Christopherson 2022-08-30  10342  		 * AMD states that code breakpoint #DBs excplitly clear RF=0).
5623f751bd9c43 Sean Christopherson 2022-08-30  10343  		 *
5623f751bd9c43 Sean Christopherson 2022-08-30  10344  		 * Note, most versions of Intel's SDM and AMD's APM incorrectly
5623f751bd9c43 Sean Christopherson 2022-08-30  10345  		 * describe the behavior of General Detect #DBs, which are
5623f751bd9c43 Sean Christopherson 2022-08-30  10346  		 * fault-like.  They do _not_ set RF, a la code breakpoints.
5623f751bd9c43 Sean Christopherson 2022-08-30  10347  		 */
d4963e319f1f78 Sean Christopherson 2022-08-30  10348  		if (exception_type(vcpu->arch.exception.vector) == EXCPT_FAULT)
664f8e26b00c76 Wanpeng Li          2017-08-24  10349  			__kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) |
664f8e26b00c76 Wanpeng Li          2017-08-24  10350  					     X86_EFLAGS_RF);
664f8e26b00c76 Wanpeng Li          2017-08-24  10351  
d4963e319f1f78 Sean Christopherson 2022-08-30  10352  		if (vcpu->arch.exception.vector == DB_VECTOR) {
d4963e319f1f78 Sean Christopherson 2022-08-30  10353  			kvm_deliver_exception_payload(vcpu, &vcpu->arch.exception);
f10c729ff96528 Jim Mattson         2018-10-16  10354  			if (vcpu->arch.dr7 & DR7_GD) {
664f8e26b00c76 Wanpeng Li          2017-08-24  10355  				vcpu->arch.dr7 &= ~DR7_GD;
664f8e26b00c76 Wanpeng Li          2017-08-24  10356  				kvm_update_dr7(vcpu);
664f8e26b00c76 Wanpeng Li          2017-08-24  10357  			}
f10c729ff96528 Jim Mattson         2018-10-16  10358  		}
664f8e26b00c76 Wanpeng Li          2017-08-24  10359  
8b59e4b9d4bdfd Melody Wang         2024-11-27  10360  		if (vcpu->arch.exception.vector == MC_VECTOR) {
8b59e4b9d4bdfd Melody Wang         2024-11-27  10361  			r = static_call(kvm_x86_mce_allowed)(vcpu);
8b59e4b9d4bdfd Melody Wang         2024-11-27  10362  			if (!r)
8b59e4b9d4bdfd Melody Wang         2024-11-27  10363  				goto out_except;
8b59e4b9d4bdfd Melody Wang         2024-11-27  10364  		}
8b59e4b9d4bdfd Melody Wang         2024-11-27  10365  
b97f074583736c Maxim Levitsky      2021-02-25  10366  		kvm_inject_exception(vcpu);
a61d7c5432ac5a Sean Christopherson 2022-05-02  10367  
a61d7c5432ac5a Sean Christopherson 2022-05-02  10368  		vcpu->arch.exception.pending = false;
a61d7c5432ac5a Sean Christopherson 2022-05-02  10369  		vcpu->arch.exception.injected = true;
a61d7c5432ac5a Sean Christopherson 2022-05-02  10370  
c6b22f59d694d0 Paolo Bonzini       2020-05-26  10371  		can_inject = false;
8b59e4b9d4bdfd Melody Wang         2024-11-27  10372  out_except:
1a680e355c9477 Liran Alon          2018-03-23 @10373  	}
1a680e355c9477 Liran Alon          2018-03-23  10374  
61e5f69ef08379 Maxim Levitsky      2021-08-11  10375  	/* Don't inject interrupts if the user asked to avoid doing so */
61e5f69ef08379 Maxim Levitsky      2021-08-11  10376  	if (vcpu->guest_debug & KVM_GUESTDBG_BLOCKIRQ)
61e5f69ef08379 Maxim Levitsky      2021-08-11  10377  		return 0;
61e5f69ef08379 Maxim Levitsky      2021-08-11  10378  
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10379  	/*
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10380  	 * Finally, inject interrupt events.  If an event cannot be injected
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10381  	 * due to architectural conditions (e.g. IF=0) a window-open exit
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10382  	 * will re-request KVM_REQ_EVENT.  Sometimes however an event is pending
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10383  	 * and can architecturally be injected, but we cannot do it right now:
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10384  	 * an interrupt could have arrived just now and we have to inject it
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10385  	 * as a vmexit, or there could already be an event in the queue, which is
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10386  	 * indicated by can_inject.  In that case we request an immediate exit
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10387  	 * in order to make progress and get back here for another iteration.
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10388  	 * The kvm_x86_ops hooks communicate this by returning -EBUSY.
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10389  	 */
31e83e21cf00fe Paolo Bonzini       2022-09-29  10390  #ifdef CONFIG_KVM_SMM
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10391  	if (vcpu->arch.smi_pending) {
896046474f8d2e Wei Wang            2024-05-07  10392  		r = can_inject ? kvm_x86_call(smi_allowed)(vcpu, true) :
896046474f8d2e Wei Wang            2024-05-07  10393  				 -EBUSY;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10394  		if (r < 0)
a5f6909a71f922 Jim Mattson         2021-06-04  10395  			goto out;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10396  		if (r) {
c43203cab1e2e1 Paolo Bonzini       2016-06-01  10397  			vcpu->arch.smi_pending = false;
52797bf9a875c4 Liran Alon          2017-11-15  10398  			++vcpu->arch.smi_count;
ee2cd4b7555e3a Paolo Bonzini       2016-06-01  10399  			enter_smm(vcpu);
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10400  			can_inject = false;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10401  		} else
896046474f8d2e Wei Wang            2024-05-07  10402  			kvm_x86_call(enable_smi_window)(vcpu);
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10403  	}
31e83e21cf00fe Paolo Bonzini       2022-09-29  10404  #endif
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10405  
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10406  	if (vcpu->arch.nmi_pending) {
896046474f8d2e Wei Wang            2024-05-07  10407  		r = can_inject ? kvm_x86_call(nmi_allowed)(vcpu, true) :
896046474f8d2e Wei Wang            2024-05-07  10408  				 -EBUSY;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10409  		if (r < 0)
a5f6909a71f922 Jim Mattson         2021-06-04  10410  			goto out;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10411  		if (r) {
7460fb4a340033 Avi Kivity          2011-09-20  10412  			--vcpu->arch.nmi_pending;
95ba82731374eb Gleb Natapov        2009-04-21  10413  			vcpu->arch.nmi_injected = true;
896046474f8d2e Wei Wang            2024-05-07  10414  			kvm_x86_call(inject_nmi)(vcpu);
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10415  			can_inject = false;
896046474f8d2e Wei Wang            2024-05-07  10416  			WARN_ON(kvm_x86_call(nmi_allowed)(vcpu, true) < 0);
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10417  		}
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10418  		if (vcpu->arch.nmi_pending)
896046474f8d2e Wei Wang            2024-05-07  10419  			kvm_x86_call(enable_nmi_window)(vcpu);
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10420  	}
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10421  
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10422  	if (kvm_cpu_has_injectable_intr(vcpu)) {
896046474f8d2e Wei Wang            2024-05-07  10423  		r = can_inject ? kvm_x86_call(interrupt_allowed)(vcpu, true) :
896046474f8d2e Wei Wang            2024-05-07  10424  				 -EBUSY;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10425  		if (r < 0)
a5f6909a71f922 Jim Mattson         2021-06-04  10426  			goto out;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10427  		if (r) {
bf672720e83cf0 Maxim Levitsky      2023-07-26  10428  			int irq = kvm_cpu_get_interrupt(vcpu);
bf672720e83cf0 Maxim Levitsky      2023-07-26  10429  
bf672720e83cf0 Maxim Levitsky      2023-07-26  10430  			if (!WARN_ON_ONCE(irq == -1)) {
bf672720e83cf0 Maxim Levitsky      2023-07-26  10431  				kvm_queue_interrupt(vcpu, irq, false);
896046474f8d2e Wei Wang            2024-05-07  10432  				kvm_x86_call(inject_irq)(vcpu, false);
896046474f8d2e Wei Wang            2024-05-07  10433  				WARN_ON(kvm_x86_call(interrupt_allowed)(vcpu, true) < 0);
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10434  			}
bf672720e83cf0 Maxim Levitsky      2023-07-26  10435  		}
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10436  		if (kvm_cpu_has_injectable_intr(vcpu))
896046474f8d2e Wei Wang            2024-05-07  10437  			kvm_x86_call(enable_irq_window)(vcpu);
95ba82731374eb Gleb Natapov        2009-04-21  10438  	}
ee2cd4b7555e3a Paolo Bonzini       2016-06-01  10439  
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10440  	if (is_guest_mode(vcpu) &&
5b4ac1a1b71373 Paolo Bonzini       2022-09-21  10441  	    kvm_x86_ops.nested_ops->has_events &&
32f55e475ce2c4 Sean Christopherson 2024-06-07  10442  	    kvm_x86_ops.nested_ops->has_events(vcpu, true))
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10443  		*req_immediate_exit = true;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10444  
dea0d5a2fde622 Sean Christopherson 2022-09-30  10445  	/*
dea0d5a2fde622 Sean Christopherson 2022-09-30  10446  	 * KVM must never queue a new exception while injecting an event; KVM
dea0d5a2fde622 Sean Christopherson 2022-09-30  10447  	 * is done emulating and should only propagate the to-be-injected event
dea0d5a2fde622 Sean Christopherson 2022-09-30  10448  	 * to the VMCS/VMCB.  Queueing a new exception can put the vCPU into an
dea0d5a2fde622 Sean Christopherson 2022-09-30  10449  	 * infinite loop as KVM will bail from VM-Enter to inject the pending
dea0d5a2fde622 Sean Christopherson 2022-09-30  10450  	 * exception and start the cycle all over.
dea0d5a2fde622 Sean Christopherson 2022-09-30  10451  	 *
dea0d5a2fde622 Sean Christopherson 2022-09-30  10452  	 * Exempt triple faults as they have special handling and won't put the
dea0d5a2fde622 Sean Christopherson 2022-09-30  10453  	 * vCPU into an infinite loop.  Triple fault can be queued when running
dea0d5a2fde622 Sean Christopherson 2022-09-30  10454  	 * VMX without unrestricted guest, as that requires KVM to emulate Real
dea0d5a2fde622 Sean Christopherson 2022-09-30  10455  	 * Mode events (see kvm_inject_realmode_interrupt()).
dea0d5a2fde622 Sean Christopherson 2022-09-30  10456  	 */
dea0d5a2fde622 Sean Christopherson 2022-09-30  10457  	WARN_ON_ONCE(vcpu->arch.exception.pending ||
dea0d5a2fde622 Sean Christopherson 2022-09-30  10458  		     vcpu->arch.exception_vmexit.pending);
a5f6909a71f922 Jim Mattson         2021-06-04  10459  	return 0;
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10460  
a5f6909a71f922 Jim Mattson         2021-06-04  10461  out:
a5f6909a71f922 Jim Mattson         2021-06-04  10462  	if (r == -EBUSY) {
c9d40913ac5a21 Paolo Bonzini       2020-05-22  10463  		*req_immediate_exit = true;
a5f6909a71f922 Jim Mattson         2021-06-04  10464  		r = 0;
a5f6909a71f922 Jim Mattson         2021-06-04  10465  	}
a5f6909a71f922 Jim Mattson         2021-06-04  10466  	return r;
95ba82731374eb Gleb Natapov        2009-04-21  10467  }
95ba82731374eb Gleb Natapov        2009-04-21  10468  
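For readers following the flow above, here is a minimal standalone sketch of the injection-priority protocol, including the #MC gate this series adds at 10360-10364. All names (struct vcpu, inject_events, mce_allowed) are hypothetical stand-ins, not KVM's real API; it only models the ordering (exception, then NMI, then IRQ) and the "request an immediate exit and retry" behavior that the real code expresses via -EBUSY:

```c
#include <assert.h>
#include <stdbool.h>

#define MC_VECTOR 18

/* Simplified model of a vCPU's pending/injected event state. */
struct vcpu {
	bool exception_pending, exception_injected;
	int  exception_vector;
	bool nmi_pending, nmi_injected;
	bool irq_pending, irq_injected;
	bool mce_allowed;	/* stand-in for the kvm_x86_mce_allowed() hook */
};

/*
 * Mirrors the priority order above: a pending exception goes first,
 * and once any event is injected (can_inject == false), lower-priority
 * events must wait; the caller is asked for an immediate exit so the
 * loop can come back and try again (the r == -EBUSY path).
 */
static int inject_events(struct vcpu *v, bool *req_immediate_exit)
{
	bool can_inject = !(v->exception_injected || v->nmi_injected ||
			    v->irq_injected);

	if (v->exception_pending) {
		/* New in this series: #MC is only delivered when the
		 * backend says the guest can take it right now;
		 * otherwise it stays pending. */
		if (v->exception_vector != MC_VECTOR || v->mce_allowed) {
			v->exception_pending = false;
			v->exception_injected = true;
		}
		can_inject = false;
	}

	if (v->nmi_pending) {
		if (!can_inject)
			goto busy;
		v->nmi_pending = false;
		v->nmi_injected = true;
		can_inject = false;
	}

	if (v->irq_pending) {
		if (!can_inject)
			goto busy;
		v->irq_pending = false;
		v->irq_injected = true;
	}
	return 0;

busy:
	*req_immediate_exit = true;	/* retry after the next VM-Enter */
	return 0;
}
```

Under restricted injection, a blocked #MC behaves like the busy case: nothing is delivered this iteration, and the event is re-evaluated on the next pass through the loop.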

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v3 5/7] KVM: SVM: Inject MCEs when restricted injection is active
  2024-11-28 13:41   ` kernel test robot
@ 2024-11-30 21:02     ` Melody (Huibo) Wang
  2024-12-03 13:38       ` Philip Li
  0 siblings, 1 reply; 11+ messages in thread
From: Melody (Huibo) Wang @ 2024-11-30 21:02 UTC (permalink / raw)
  To: kernel test robot, kvm, linux-kernel, x86
  Cc: llvm, oe-kbuild-all, Sean Christopherson, Paolo Bonzini,
	Tom Lendacky, Neeraj Upadhyay, Ashish Kalra, Michael Roth,
	Pankaj Gupta

Hi,

On 11/28/2024 5:41 AM, kernel test robot wrote:
> Hi Melody,
> 
> kernel test robot noticed the following build warnings:
> 
> [auto build test WARNING on kvm/queue]
> [also build test WARNING on mst-vhost/linux-next tip/x86/core linus/master v6.12 next-20241128]
> [cannot apply to kvm/linux-next]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
The base of my tree is 

4d911c7abee5 ("Merge tag 'kvm-riscv-6.13-2' of https://github.com/kvm-riscv/linux into HEAD")

which is the current kvm/next branch. 

Thanks,
Melody


* Re: [PATCH v3 5/7] KVM: SVM: Inject MCEs when restricted injection is active
  2024-11-30 21:02     ` Melody (Huibo) Wang
@ 2024-12-03 13:38       ` Philip Li
  0 siblings, 0 replies; 11+ messages in thread
From: Philip Li @ 2024-12-03 13:38 UTC (permalink / raw)
  To: Melody (Huibo) Wang
  Cc: kernel test robot, kvm, linux-kernel, x86, llvm, oe-kbuild-all,
	Sean Christopherson, Paolo Bonzini, Tom Lendacky, Neeraj Upadhyay,
	Ashish Kalra, Michael Roth, Pankaj Gupta

On Sat, Nov 30, 2024 at 01:02:08PM -0800, Melody (Huibo) Wang wrote:
> Hi,
> 
> On 11/28/2024 5:41 AM, kernel test robot wrote:
> > Hi Melody,
> > 
> > kernel test robot noticed the following build warnings:
> > 
> > [auto build test WARNING on kvm/queue]
> > [also build test WARNING on mst-vhost/linux-next tip/x86/core linus/master v6.12 next-20241128]
> > [cannot apply to kvm/linux-next]
> > [If your patch is applied to the wrong git tree, kindly drop us a note.
> The base of my tree is 
> 
> 4d911c7abee5 ("Merge tag 'kvm-riscv-6.13-2' of https://github.com/kvm-riscv/linux into HEAD")
> 
> which is the current kvm/next branch. 

Got it, thanks for the info. We will configure this as the base candidate.

> 
> Thanks,
> Melody
> 


end of thread, other threads:[~2024-12-03 13:38 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-11-27 22:55 [PATCH v3 0/7] Add SEV-SNP restricted injection hypervisor support Melody Wang
2024-11-27 22:55 ` [PATCH v3 1/7] x86/sev: Define the #HV doorbell page structure Melody Wang
2024-11-27 22:55 ` [PATCH v3 2/7] KVM: SVM: Add support for the SEV-SNP #HV doorbell page NAE event Melody Wang
2024-11-27 22:55 ` [PATCH v3 3/7] KVM: SVM: Inject #HV when restricted injection is active Melody Wang
2024-11-27 22:55 ` [PATCH v3 4/7] KVM: SVM: Inject NMIs " Melody Wang
2024-11-27 22:55 ` [PATCH v3 5/7] KVM: SVM: Inject MCEs " Melody Wang
2024-11-28 13:41   ` kernel test robot
2024-11-30 21:02     ` Melody (Huibo) Wang
2024-12-03 13:38       ` Philip Li
2024-11-27 22:55 ` [PATCH v3 6/7] KVM: SVM: Add support for the SEV-SNP #HV IPI NAE event Melody Wang
2024-11-27 22:55 ` [PATCH v3 7/7] KVM: SVM: Enable restricted injection for an SEV-SNP guest Melody Wang
