linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/4] Emulate VMXON region correctly
@ 2014-05-06  6:19 Bandan Das
  2014-05-06  6:19 ` [PATCH v2 1/4] KVM: nVMX: rearrange get_vmx_mem_address Bandan Das
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Bandan Das @ 2014-05-06  6:19 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Gleb Natapov, Jan Kiszka, Marcelo Tosatti,
	linux-kernel

Reference: https://bugzilla.kernel.org/show_bug.cgi?id=54521

The vmxon region is unused by nVMX, but adding these checks
is probably harmless and may detect buggy L1 hypervisors in
the future!

v2:
1/4
  - Commit message change to reflect addition of new function
2/4
  - Use cpuid_maxphyaddr()
  - Fix a leak with kunmap()
  - Remove unnecessary braces around comparisons
  - Move all checks into a common function, this will be later
    used by handle_vmptrld and handle_vmclear in 4/4
4/4
  - New patch - use common function to perform checks on
    vmptr

Bandan Das (4):
  KVM: nVMX: rearrange get_vmx_mem_address
  KVM: nVMX: additional checks on vmxon region
  KVM: nVMX: fail on invalid vmclear/vmptrld pointer
  KVM: nVMX: move vmclear and vmptrld pre-checks to
    nested_vmx_check_vmptr

 arch/x86/kvm/cpuid.c |   1 +
 arch/x86/kvm/vmx.c   | 240 +++++++++++++++++++++++++++++++++------------------
 2 files changed, 156 insertions(+), 85 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v2 1/4] KVM: nVMX: rearrange get_vmx_mem_address
  2014-05-06  6:19 [PATCH v2 0/4] Emulate VMXON region correctly Bandan Das
@ 2014-05-06  6:19 ` Bandan Das
  2014-05-06  6:19 ` [PATCH v2 2/4] KVM: nVMX: additional checks on vmxon region Bandan Das
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Bandan Das @ 2014-05-06  6:19 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Gleb Natapov, Jan Kiszka, Marcelo Tosatti,
	linux-kernel

Our common function for vmptr checks (in 2/4) needs to fetch
the memory address.

Signed-off-by: Bandan Das <bsd@redhat.com>
---
 arch/x86/kvm/vmx.c | 106 ++++++++++++++++++++++++++---------------------------
 1 file changed, 53 insertions(+), 53 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 1f68c58..c18fe9a4 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5775,6 +5775,59 @@ static enum hrtimer_restart vmx_preemption_timer_fn(struct hrtimer *timer)
 }
 
 /*
+ * Decode the memory-address operand of a vmx instruction, as recorded on an
+ * exit caused by such an instruction (run by a guest hypervisor).
+ * On success, returns 0. When the operand is invalid, returns 1 and throws
+ * #UD or #GP.
+ */
+static int get_vmx_mem_address(struct kvm_vcpu *vcpu,
+				 unsigned long exit_qualification,
+				 u32 vmx_instruction_info, gva_t *ret)
+{
+	/*
+	 * According to Vol. 3B, "Information for VM Exits Due to Instruction
+	 * Execution", on an exit, vmx_instruction_info holds most of the
+	 * addressing components of the operand. Only the displacement part
+	 * is put in exit_qualification (see 3B, "Basic VM-Exit Information").
+	 * For how an actual address is calculated from all these components,
+	 * refer to Vol. 1, "Operand Addressing".
+	 */
+	int  scaling = vmx_instruction_info & 3;
+	int  addr_size = (vmx_instruction_info >> 7) & 7;
+	bool is_reg = vmx_instruction_info & (1u << 10);
+	int  seg_reg = (vmx_instruction_info >> 15) & 7;
+	int  index_reg = (vmx_instruction_info >> 18) & 0xf;
+	bool index_is_valid = !(vmx_instruction_info & (1u << 22));
+	int  base_reg       = (vmx_instruction_info >> 23) & 0xf;
+	bool base_is_valid  = !(vmx_instruction_info & (1u << 27));
+
+	if (is_reg) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
+		return 1;
+	}
+
+	/* Addr = segment_base + offset */
+	/* offset = base + [index * scale] + displacement */
+	*ret = vmx_get_segment_base(vcpu, seg_reg);
+	if (base_is_valid)
+		*ret += kvm_register_read(vcpu, base_reg);
+	if (index_is_valid)
+		*ret += kvm_register_read(vcpu, index_reg)<<scaling;
+	*ret += exit_qualification; /* holds the displacement */
+
+	if (addr_size == 1) /* 32 bit */
+		*ret &= 0xffffffff;
+
+	/*
+	 * TODO: throw #GP (and return 1) in various cases that the VM*
+	 * instructions require it - e.g., offset beyond segment limit,
+	 * unusable or unreadable/unwritable segment, non-canonical 64-bit
+	 * address, and so on. Currently these are not checked.
+	 */
+	return 0;
+}
+
+/*
  * Emulate the VMXON instruction.
  * Currently, we just remember that VMX is active, and do not save or even
  * inspect the argument to VMXON (the so-called "VMXON pointer") because we
@@ -5934,59 +5987,6 @@ static int handle_vmoff(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-/*
- * Decode the memory-address operand of a vmx instruction, as recorded on an
- * exit caused by such an instruction (run by a guest hypervisor).
- * On success, returns 0. When the operand is invalid, returns 1 and throws
- * #UD or #GP.
- */
-static int get_vmx_mem_address(struct kvm_vcpu *vcpu,
-				 unsigned long exit_qualification,
-				 u32 vmx_instruction_info, gva_t *ret)
-{
-	/*
-	 * According to Vol. 3B, "Information for VM Exits Due to Instruction
-	 * Execution", on an exit, vmx_instruction_info holds most of the
-	 * addressing components of the operand. Only the displacement part
-	 * is put in exit_qualification (see 3B, "Basic VM-Exit Information").
-	 * For how an actual address is calculated from all these components,
-	 * refer to Vol. 1, "Operand Addressing".
-	 */
-	int  scaling = vmx_instruction_info & 3;
-	int  addr_size = (vmx_instruction_info >> 7) & 7;
-	bool is_reg = vmx_instruction_info & (1u << 10);
-	int  seg_reg = (vmx_instruction_info >> 15) & 7;
-	int  index_reg = (vmx_instruction_info >> 18) & 0xf;
-	bool index_is_valid = !(vmx_instruction_info & (1u << 22));
-	int  base_reg       = (vmx_instruction_info >> 23) & 0xf;
-	bool base_is_valid  = !(vmx_instruction_info & (1u << 27));
-
-	if (is_reg) {
-		kvm_queue_exception(vcpu, UD_VECTOR);
-		return 1;
-	}
-
-	/* Addr = segment_base + offset */
-	/* offset = base + [index * scale] + displacement */
-	*ret = vmx_get_segment_base(vcpu, seg_reg);
-	if (base_is_valid)
-		*ret += kvm_register_read(vcpu, base_reg);
-	if (index_is_valid)
-		*ret += kvm_register_read(vcpu, index_reg)<<scaling;
-	*ret += exit_qualification; /* holds the displacement */
-
-	if (addr_size == 1) /* 32 bit */
-		*ret &= 0xffffffff;
-
-	/*
-	 * TODO: throw #GP (and return 1) in various cases that the VM*
-	 * instructions require it - e.g., offset beyond segment limit,
-	 * unusable or unreadable/unwritable segment, non-canonical 64-bit
-	 * address, and so on. Currently these are not checked.
-	 */
-	return 0;
-}
-
 /* Emulate the VMCLEAR instruction */
 static int handle_vmclear(struct kvm_vcpu *vcpu)
 {
-- 
1.8.3.1



* [PATCH v2 2/4] KVM: nVMX: additional checks on vmxon region
  2014-05-06  6:19 [PATCH v2 0/4] Emulate VMXON region correctly Bandan Das
  2014-05-06  6:19 ` [PATCH v2 1/4] KVM: nVMX: rearrange get_vmx_mem_address Bandan Das
@ 2014-05-06  6:19 ` Bandan Das
  2014-05-06  6:19 ` [PATCH v2 3/4] KVM: nVMX: fail on invalid vmclear/vmptrld pointer Bandan Das
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Bandan Das @ 2014-05-06  6:19 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Gleb Natapov, Jan Kiszka, Marcelo Tosatti,
	linux-kernel

Currently, the vmxon region isn't used in the nested case.
However, according to the spec, the vmxon instruction performs
additional sanity checks on this region and the associated
pointer. Modify emulated vmxon to adhere more closely to the
spec requirements.

Signed-off-by: Bandan Das <bsd@redhat.com>
---
 arch/x86/kvm/cpuid.c |  1 +
 arch/x86/kvm/vmx.c   | 67 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f47a104..da9894b 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -726,6 +726,7 @@ int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
 not_found:
 	return 36;
 }
+EXPORT_SYMBOL_GPL(cpuid_maxphyaddr);
 
 /*
  * If no match is found, check whether we exceed the vCPU's limit
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c18fe9a4..059906a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -354,6 +354,7 @@ struct vmcs02_list {
 struct nested_vmx {
 	/* Has the level1 guest done vmxon? */
 	bool vmxon;
+	gpa_t vmxon_ptr;
 
 	/* The guest-physical address of the current VMCS L1 keeps for L2 */
 	gpa_t current_vmptr;
@@ -5828,6 +5829,68 @@ static int get_vmx_mem_address(struct kvm_vcpu *vcpu,
 }
 
 /*
+ * This function performs the various checks including
+ * - if it's 4KB aligned
+ * - No bits beyond the physical address width are set
+ * - Returns 0 on success or else 1
+ */
+static int nested_vmx_check_vmptr(struct kvm_vcpu *vcpu, int exit_reason)
+{
+	gva_t gva;
+	gpa_t vmptr;
+	struct x86_exception e;
+	struct page *page;
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	int maxphyaddr = cpuid_maxphyaddr(vcpu);
+
+	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
+			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
+		return 1;
+
+	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmptr,
+				sizeof(vmptr), &e)) {
+		kvm_inject_page_fault(vcpu, &e);
+		return 1;
+	}
+
+	switch (exit_reason) {
+	case EXIT_REASON_VMON:
+		/*
+		 * SDM 3: 24.11.5
+		 * The first 4 bytes of VMXON region contain the supported
+		 * VMCS revision identifier
+		 *
+		 * Note - IA32_VMX_BASIC[48] will never be 1
+		 * for the nested case;
+		 * which replaces physical address width with 32
+		 *
+		 */
+		if (!IS_ALIGNED(vmptr, PAGE_SIZE) || (vmptr >> maxphyaddr)) {
+			nested_vmx_failInvalid(vcpu);
+			skip_emulated_instruction(vcpu);
+			return 1;
+		}
+
+		page = nested_get_page(vcpu, vmptr);
+		if (page == NULL ||
+		    *(u32 *)kmap(page) != VMCS12_REVISION) {
+			nested_vmx_failInvalid(vcpu);
+			kunmap(page);
+			skip_emulated_instruction(vcpu);
+			return 1;
+		}
+		kunmap(page);
+		vmx->nested.vmxon_ptr = vmptr;
+		break;
+
+	default:
+		return 1; /* shouldn't happen */
+	}
+
+	return 0;
+}
+
+/*
  * Emulate the VMXON instruction.
  * Currently, we just remember that VMX is active, and do not save or even
  * inspect the argument to VMXON (the so-called "VMXON pointer") because we
@@ -5865,6 +5928,10 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 		kvm_inject_gp(vcpu, 0);
 		return 1;
 	}
+
+	if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMON))
+		return 1;
+
 	if (vmx->nested.vmxon) {
 		nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
 		skip_emulated_instruction(vcpu);
-- 
1.8.3.1



* [PATCH v2 3/4] KVM: nVMX: fail on invalid vmclear/vmptrld pointer
  2014-05-06  6:19 [PATCH v2 0/4] Emulate VMXON region correctly Bandan Das
  2014-05-06  6:19 ` [PATCH v2 1/4] KVM: nVMX: rearrange get_vmx_mem_address Bandan Das
  2014-05-06  6:19 ` [PATCH v2 2/4] KVM: nVMX: additional checks on vmxon region Bandan Das
@ 2014-05-06  6:19 ` Bandan Das
  2014-05-06  6:19 ` [PATCH v2 4/4] KVM: nVMX: move vmclear and vmptrld pre-checks to nested_vmx_check_vmptr Bandan Das
  2014-05-06 17:01 ` [PATCH v2 0/4] Emulate VMXON region correctly Paolo Bonzini
  4 siblings, 0 replies; 8+ messages in thread
From: Bandan Das @ 2014-05-06  6:19 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Gleb Natapov, Jan Kiszka, Marcelo Tosatti,
	linux-kernel

The spec mandates that if the vmptrld or vmclear
address is equal to the vmxon region pointer, the
instruction should fail with error "VMPTRLD with
VMXON pointer" or "VMCLEAR with VMXON pointer".

Signed-off-by: Bandan Das <bsd@redhat.com>
---
 arch/x86/kvm/vmx.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 059906a..6c125ff 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6083,6 +6083,12 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
+	if (vmptr == vmx->nested.vmxon_ptr) {
+		nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_VMXON_POINTER);
+		skip_emulated_instruction(vcpu);
+		return 1;
+	}
+
 	if (vmptr == vmx->nested.current_vmptr) {
 		nested_release_vmcs12(vmx);
 		vmx->nested.current_vmptr = -1ull;
@@ -6426,6 +6432,12 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
+	if (vmptr == vmx->nested.vmxon_ptr) {
+		nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_VMXON_POINTER);
+		skip_emulated_instruction(vcpu);
+		return 1;
+	}
+
 	if (vmx->nested.current_vmptr != vmptr) {
 		struct vmcs12 *new_vmcs12;
 		struct page *page;
-- 
1.8.3.1



* [PATCH v2 4/4] KVM: nVMX: move vmclear and vmptrld pre-checks to nested_vmx_check_vmptr
  2014-05-06  6:19 [PATCH v2 0/4] Emulate VMXON region correctly Bandan Das
                   ` (2 preceding siblings ...)
  2014-05-06  6:19 ` [PATCH v2 3/4] KVM: nVMX: fail on invalid vmclear/vmptrld pointer Bandan Das
@ 2014-05-06  6:19 ` Bandan Das
  2014-05-06 17:01 ` [PATCH v2 0/4] Emulate VMXON region correctly Paolo Bonzini
  4 siblings, 0 replies; 8+ messages in thread
From: Bandan Das @ 2014-05-06  6:19 UTC (permalink / raw)
  To: kvm; +Cc: Paolo Bonzini, Gleb Natapov, Jan Kiszka, Marcelo Tosatti,
	linux-kernel

Some checks are common to all, and moreover,
according to the spec, the check for whether any bits
beyond the physical address width are set is also
applicable to all of them.

Signed-off-by: Bandan Das <bsd@redhat.com>
---
 arch/x86/kvm/vmx.c | 83 ++++++++++++++++++++++++------------------------------
 1 file changed, 37 insertions(+), 46 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6c125ff..9b36057 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5833,8 +5833,10 @@ static int get_vmx_mem_address(struct kvm_vcpu *vcpu,
  * - if it's 4KB aligned
  * - No bits beyond the physical address width are set
  * - Returns 0 on success or else 1
+ * (Intel SDM Section 30.3)
  */
-static int nested_vmx_check_vmptr(struct kvm_vcpu *vcpu, int exit_reason)
+static int nested_vmx_check_vmptr(struct kvm_vcpu *vcpu, int exit_reason,
+				  gpa_t *vmpointer)
 {
 	gva_t gva;
 	gpa_t vmptr;
@@ -5882,11 +5884,42 @@ static int nested_vmx_check_vmptr(struct kvm_vcpu *vcpu, int exit_reason)
 		kunmap(page);
 		vmx->nested.vmxon_ptr = vmptr;
 		break;
+	case EXIT_REASON_VMCLEAR:
+		if (!IS_ALIGNED(vmptr, PAGE_SIZE) || (vmptr >> maxphyaddr)) {
+			nested_vmx_failValid(vcpu,
+					     VMXERR_VMCLEAR_INVALID_ADDRESS);
+			skip_emulated_instruction(vcpu);
+			return 1;
+		}
 
+		if (vmptr == vmx->nested.vmxon_ptr) {
+			nested_vmx_failValid(vcpu,
+					     VMXERR_VMCLEAR_VMXON_POINTER);
+			skip_emulated_instruction(vcpu);
+			return 1;
+		}
+		break;
+	case EXIT_REASON_VMPTRLD:
+		if (!IS_ALIGNED(vmptr, PAGE_SIZE) || (vmptr >> maxphyaddr)) {
+			nested_vmx_failValid(vcpu,
+					     VMXERR_VMPTRLD_INVALID_ADDRESS);
+			skip_emulated_instruction(vcpu);
+			return 1;
+		}
+
+		if (vmptr == vmx->nested.vmxon_ptr) {
+			nested_vmx_failValid(vcpu,
+					     VMXERR_VMCLEAR_VMXON_POINTER);
+			skip_emulated_instruction(vcpu);
+			return 1;
+		}
+		break;
 	default:
 		return 1; /* shouldn't happen */
 	}
 
+	if (vmpointer)
+		*vmpointer = vmptr;
 	return 0;
 }
 
@@ -5929,7 +5962,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
-	if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMON))
+	if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMON, NULL))
 		return 1;
 
 	if (vmx->nested.vmxon) {
@@ -6058,37 +6091,16 @@ static int handle_vmoff(struct kvm_vcpu *vcpu)
 static int handle_vmclear(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	gva_t gva;
 	gpa_t vmptr;
 	struct vmcs12 *vmcs12;
 	struct page *page;
-	struct x86_exception e;
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
-			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
+	if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMCLEAR, &vmptr))
 		return 1;
 
-	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmptr,
-				sizeof(vmptr), &e)) {
-		kvm_inject_page_fault(vcpu, &e);
-		return 1;
-	}
-
-	if (!IS_ALIGNED(vmptr, PAGE_SIZE)) {
-		nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_INVALID_ADDRESS);
-		skip_emulated_instruction(vcpu);
-		return 1;
-	}
-
-	if (vmptr == vmx->nested.vmxon_ptr) {
-		nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_VMXON_POINTER);
-		skip_emulated_instruction(vcpu);
-		return 1;
-	}
-
 	if (vmptr == vmx->nested.current_vmptr) {
 		nested_release_vmcs12(vmx);
 		vmx->nested.current_vmptr = -1ull;
@@ -6408,35 +6420,14 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 static int handle_vmptrld(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	gva_t gva;
 	gpa_t vmptr;
-	struct x86_exception e;
 	u32 exec_control;
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
-			vmcs_read32(VMX_INSTRUCTION_INFO), &gva))
-		return 1;
-
-	if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmptr,
-				sizeof(vmptr), &e)) {
-		kvm_inject_page_fault(vcpu, &e);
-		return 1;
-	}
-
-	if (!IS_ALIGNED(vmptr, PAGE_SIZE)) {
-		nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_INVALID_ADDRESS);
-		skip_emulated_instruction(vcpu);
+	if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMPTRLD, &vmptr))
 		return 1;
-	}
-
-	if (vmptr == vmx->nested.vmxon_ptr) {
-		nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_VMXON_POINTER);
-		skip_emulated_instruction(vcpu);
-		return 1;
-	}
 
 	if (vmx->nested.current_vmptr != vmptr) {
 		struct vmcs12 *new_vmcs12;
-- 
1.8.3.1



* Re: [PATCH v2 0/4] Emulate VMXON region correctly
  2014-05-06  6:19 [PATCH v2 0/4] Emulate VMXON region correctly Bandan Das
                   ` (3 preceding siblings ...)
  2014-05-06  6:19 ` [PATCH v2 4/4] KVM: nVMX: move vmclear and vmptrld pre-checks to nested_vmx_check_vmptr Bandan Das
@ 2014-05-06 17:01 ` Paolo Bonzini
  2014-06-03 20:11   ` Bandan Das
  4 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2014-05-06 17:01 UTC (permalink / raw)
  To: Bandan Das, kvm; +Cc: Gleb Natapov, Jan Kiszka, Marcelo Tosatti, linux-kernel

Il 06/05/2014 08:19, Bandan Das ha scritto:
> Reference: https://bugzilla.kernel.org/show_bug.cgi?id=54521
>
> The vmxon region is unused by nVMX, but adding these checks
> is probably harmless and may detect buggy L1 hypervisors in
> the future!

Applied to kvm/queue, unit test patches are welcome (hint, hint!).

Paolo


* Re: [PATCH v2 0/4] Emulate VMXON region correctly
  2014-05-06 17:01 ` [PATCH v2 0/4] Emulate VMXON region correctly Paolo Bonzini
@ 2014-06-03 20:11   ` Bandan Das
  2014-06-04  7:18     ` Jan Kiszka
  0 siblings, 1 reply; 8+ messages in thread
From: Bandan Das @ 2014-06-03 20:11 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Gleb Natapov, Jan Kiszka, Marcelo Tosatti, linux-kernel

Paolo Bonzini <pbonzini@redhat.com> writes:

> Il 06/05/2014 08:19, Bandan Das ha scritto:
>> Reference: https://bugzilla.kernel.org/show_bug.cgi?id=54521
>>
>> The vmxon region is unused by nVMX, but adding these checks
>> is probably harmless and may detect buggy L1 hypervisors in
>> the future!
>
> Applied to kvm/queue, unit test patches are welcome (hint, hint!).

Jan and Paolo,

Ok, I thought a little bit about this and I am wondering what
kind of tests we could have here. All the changes are pure
checks and no data is returned back to the guest. The
vmxon code in kvm-unit-tests already does the right thing, i.e.,
it sets the revision identifier, allocates a page for the region, etc.
> Paolo


* Re: [PATCH v2 0/4] Emulate VMXON region correctly
  2014-06-03 20:11   ` Bandan Das
@ 2014-06-04  7:18     ` Jan Kiszka
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Kiszka @ 2014-06-04  7:18 UTC (permalink / raw)
  To: Bandan Das, Paolo Bonzini
  Cc: kvm, Gleb Natapov, Marcelo Tosatti, linux-kernel

On 2014-06-03 22:11, Bandan Das wrote:
> Paolo Bonzini <pbonzini@redhat.com> writes:
> 
>> Il 06/05/2014 08:19, Bandan Das ha scritto:
>>> Reference: https://bugzilla.kernel.org/show_bug.cgi?id=54521
>>>
>>> The vmxon region is unused by nVMX, but adding these checks
>>> is probably harmless and may detect buggy L1 hypervisors in
>>> the future!
>>
>> Applied to kvm/queue, unit test patches are welcome (hint, hint!).
> 
> Jan and Paolo,
> 
> Ok, I thought a little bit about this and I am wondering what 
> kind of tests we could have here. All the changes are pure 
> checks and no data is returned back to the guest. The
> vmxon code in kvm-unit-tests already does the right thing i.e
> sets the revision identifier, allocs a page for the region etc

Then try to do the wrong thing in the unit test and validate that you
get the right error report from the emulated VMX hardware.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux

