* Xen Security Advisory 220 (CVE-2017-10916) - x86: PKRU and BND* leakage between vCPU-s
From: Xen.org security team @ 2017-07-07 13:54 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-10916 / XSA-220
version 3
x86: PKRU and BND* leakage between vCPU-s
UPDATES IN VERSION 3
====================
CVE assigned.
ISSUE DESCRIPTION
=================
Memory Protection Extensions (MPX) and Protection Keys (PKU) are features in
newer processors, whose state is intended to be per-thread and context
switched along with all other XSAVE state.
Xen's vCPU context switch code would save and restore the state only
if the guest had set the relevant XSTATE enable bits. However,
surprisingly, the use of these features is not dependent (PKU) or may
not be dependent (MPX) on having the relevant XSTATE bits enabled.
VMs which use MPX or PKU, and which context switch the state manually
rather than via XSAVE, will have the state leak between vCPUs (possibly
between vCPUs in different guests). This in turn corrupts state in
the destination vCPU, and hence may lead to weakened protections.
Experimentally, MPX appears not to interact with BND* state if
BNDCFGS.EN is set but XCR0.BND{CSR,REGS} are clear. However, the SDM
is not clear on this case; MPX is therefore included in this advisory
as a precaution.
IMPACT
======
There is an information leak of control information containing
pointers into guest address space; this may weaken address space
randomisation and make other attacks easier.
When an innocent guest acquires leaked state, it will run with
incorrect protection state. This could weaken the protection intended
by the MPX or PKU features, making easier other attacks which would
otherwise be excluded; and the incorrect state could also cause a
denial of service by preventing legitimate accesses.
VULNERABLE SYSTEMS
==================
Xen 4.4 and earlier are not vulnerable, as they do not use or expose
MPX or PKU to guests. Xen 4.5 and later expose MPX to guests. Xen
4.7 and later expose PKU to guests. Therefore, Xen 4.5 and later are
vulnerable.
Only x86 hardware implementing the MPX or PKU features is vulnerable.
At the time of writing, these are Intel Skylake (and later) processors
for MPX, and Intel Skylake Server (and later) processors for PKU.
ARM hardware is not vulnerable.
The vulnerability is only exposed to HVM guests. PV guests cannot
exploit the vulnerability.
Vulnerable guest operating systems
----------------------------------
Guests which use XSAVE for context switching PKU and MPX state are not
vulnerable to inbound corruption caused by another malicious domain.
With respect to PKU, the remaining outbound information leak is of no
conceivable consequence. And, experimentally, MPX does not appear to
have a real vulnerability, even though the CPU documentation is not
clear.
Therefore we think that these guests (those which use XSAVE) are not
vulnerable.
Linux uses XSAVE, and is therefore not vulnerable.
MITIGATION
==========
Passing "pku=0" on the hypervisor command line will avoid the PKU
vulnerability (by not advertising the feature to guests).
There is no corresponding option for the probably-theoretical MPX
vulnerability.
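For example (a hypothetical GRUB2 configuration; the variable name and
file location vary by distribution), the option would be appended to
the hypervisor's own boot entry, not to the dom0 kernel command line:

```shell
# /etc/default/grub (Debian-style layout; adjust for your distribution)
# Append pku=0 to the options passed to the Xen hypervisor itself.
GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT pku=0"
# Then regenerate the bootloader configuration, e.g.:
#   update-grub
```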
CREDITS
=======
This issue was discovered by Andrew Cooper of Citrix.
RESOLUTION
==========
Applying the appropriate attached patch resolves this issue.
xsa220.patch xen-unstable
xsa220-4.8.patch Xen 4.8
xsa220-4.7.patch Xen 4.7
xsa220-4.6.patch Xen 4.6
xsa220-4.5.patch Xen 4.5
$ sha256sum xsa220*
8b86d9a284c0b14717467e672e63aebfc2bce201658493a54c64fb7c1863ce49 xsa220.patch
4b53ad5748313fb92c68eac1160b00d1bf7310019657028122a455855334252b xsa220-4.5.patch
befe5ca5321d903428fc496abeee3a3b5eb0cee27a382e20d3caf8cc7bdfced2 xsa220-4.6.patch
555fa741348909943393aaf73571bc7817b30eafcff73dbfcd73911113db5d7f xsa220-4.7.patch
7a41ad9c6f9d46536abae051c517456bdfa3564278e98f80222a904df749fb0c xsa220-4.8.patch
$
DEPLOYMENT DURING EMBARGO
=========================
Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.
But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).
Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.
(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable. This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)
For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJZX5IqAAoJEIP+FMlX6CvZFiQH/2iqblUF6Qb0sGpYgJxsw6IN
uS8grqZsLyMR5ftpHA1F+NQufs5kQkhK88cJdSmHu7FwpFkUnH0BM6ufVoe7dRSH
Nobe0epkhV0tLwX1Hz5zJUE4ufaWF0VHHZIG/BzFgUk1lUUjEyG7SHh8GhTdEBG+
MGL2GSBYXpYIyXHwRUIs7+p9Vf92m7J9JXCQWOK7tRKE+j8lahJ21eQITgFRZWW8
44zdXFk5/I6kiJZJPfLkVuVgWQLgozr/R+qO3lkCc/+47a+LwPxgap4t/rDJrkEl
U/YyPMdLg4KZMr8aCgciREOO7TwxR6ndJFD3bj8Iwjt981uhbVNL18TqaUdC68c=
=Ybk9
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa220.patch --]
From: Jan Beulich <jbeulich@suse.com>
Subject: x86: avoid leaking PKRU and BND* between vCPU-s
PKRU is explicitly "XSAVE-managed but not XSAVE-enabled", so guests
might access the register (via {RD,WR}PKRU) without setting XCR0.PKRU.
Force context switching as well as migrating the register as soon as
CR4.PKE is being set the first time.
For MPX (BND<n>, BNDCFGU, and BNDSTATUS) the situation is less clear,
and the SDM has not entirely consistent information for that case.
While experimentally the instructions don't change register state as
long as the two XCR0 bits aren't both 1, be on the safe side and enable
both if BNDCFGS.EN is being set the first time.
This is XSA-220.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -307,10 +307,39 @@ int hvm_set_guest_pat(struct vcpu *v, u6
bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
{
- return hvm_funcs.set_guest_bndcfgs &&
- is_canonical_address(val) &&
- !(val & IA32_BNDCFGS_RESERVED) &&
- hvm_funcs.set_guest_bndcfgs(v, val);
+ if ( !hvm_funcs.set_guest_bndcfgs ||
+ !is_canonical_address(val) ||
+ (val & IA32_BNDCFGS_RESERVED) )
+ return false;
+
+ /*
+ * While MPX instructions are supposed to be gated on XCR0.BND*, let's
+ * nevertheless force the relevant XCR0 bits on when the feature is being
+ * enabled in BNDCFGS.
+ */
+ if ( (val & IA32_BNDCFGS_ENABLE) &&
+ !(v->arch.xcr0_accum & (XSTATE_BNDREGS | XSTATE_BNDCSR)) )
+ {
+ uint64_t xcr0 = get_xcr0();
+ int rc;
+
+ if ( v != current )
+ return false;
+
+ rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ xcr0 | XSTATE_BNDREGS | XSTATE_BNDCSR);
+
+ if ( rc )
+ {
+ HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.BND*: %d", rc);
+ return false;
+ }
+
+ if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
+ /* nothing, best effort only */;
+ }
+
+ return hvm_funcs.set_guest_bndcfgs(v, val);
}
/*
@@ -2335,6 +2364,27 @@ int hvm_set_cr4(unsigned long value, boo
paging_update_paging_modes(v);
}
+ /*
+ * {RD,WR}PKRU are not gated on XCR0.PKRU and hence an oddly behaving
+ * guest may enable the feature in CR4 without enabling it in XCR0. We
+ * need to context switch / migrate PKRU nevertheless.
+ */
+ if ( (value & X86_CR4_PKE) && !(v->arch.xcr0_accum & XSTATE_PKRU) )
+ {
+ int rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ get_xcr0() | XSTATE_PKRU);
+
+ if ( rc )
+ {
+ HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.PKRU: %d", rc);
+ return X86EMUL_EXCEPTION;
+ }
+
+ if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ get_xcr0() & ~XSTATE_PKRU) )
+ /* nothing, best effort only */;
+ }
+
return X86EMUL_OKAY;
}
[-- Attachment #3: xsa220-4.5.patch --]
From: Jan Beulich <jbeulich@suse.com>
Subject: x86: avoid leaking BND* between vCPU-s
For MPX (BND<n>, BNDCFGU, and BNDSTATUS) the situation is less clear,
and the SDM has not entirely consistent information for that case.
While experimentally the instructions don't change register state as
long as the two XCR0 bits aren't both 1, be on the safe side and enable
both if BNDCFGS.EN is being set the first time.
This is XSA-220.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -32,7 +32,7 @@
#include <asm/regs.h>
#include <asm/cpufeature.h>
#include <asm/processor.h>
-#include <asm/types.h>
+#include <asm/xstate.h>
#include <asm/debugreg.h>
#include <asm/msr.h>
#include <asm/spinlock.h>
@@ -588,6 +588,45 @@ static int vmx_load_vmcs_ctxt(struct vcp
return 0;
}
+static bool_t vmx_set_guest_bndcfgs(struct vcpu *v, u64 val)
+{
+ if ( !cpu_has_mpx || !cpu_has_vmx_mpx ||
+ !is_canonical_address(val) ||
+ (val & IA32_BNDCFGS_RESERVED) )
+ return 0;
+
+ /*
+ * While MPX instructions are supposed to be gated on XCR0.BND*, let's
+ * nevertheless force the relevant XCR0 bits on when the feature is being
+ * enabled in BNDCFGS.
+ */
+ if ( (val & IA32_BNDCFGS_ENABLE) &&
+ !(v->arch.xcr0_accum & (XSTATE_BNDREGS | XSTATE_BNDCSR)) )
+ {
+ uint64_t xcr0 = get_xcr0();
+ int rc;
+
+ if ( v != current )
+ return 0;
+
+ rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ xcr0 | XSTATE_BNDREGS | XSTATE_BNDCSR);
+
+ if ( rc )
+ {
+ HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.BND*: %d", rc);
+ return 0;
+ }
+
+ if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
+ /* nothing, best effort only */;
+ }
+
+ __vmwrite(GUEST_BNDCFGS, val);
+
+ return 1;
+}
+
static unsigned int __init vmx_init_msr(void)
{
return cpu_has_mpx && cpu_has_vmx_mpx;
@@ -619,11 +658,8 @@ static int vmx_load_msr(struct vcpu *v,
switch ( ctxt->msr[i].index )
{
case MSR_IA32_BNDCFGS:
- if ( cpu_has_mpx && cpu_has_vmx_mpx &&
- is_canonical_address(ctxt->msr[i].val) &&
- !(ctxt->msr[i].val & IA32_BNDCFGS_RESERVED) )
- __vmwrite(GUEST_BNDCFGS, ctxt->msr[i].val);
- else
+ if ( !vmx_set_guest_bndcfgs(v, ctxt->msr[i].val) &&
+ ctxt->msr[i].val )
err = -ENXIO;
break;
default:
@@ -2327,11 +2363,8 @@ static int vmx_msr_write_intercept(unsig
break;
}
case MSR_IA32_BNDCFGS:
- if ( !cpu_has_mpx || !cpu_has_vmx_mpx ||
- !is_canonical_address(msr_content) ||
- (msr_content & IA32_BNDCFGS_RESERVED) )
+ if ( !vmx_set_guest_bndcfgs(v, msr_content) )
goto gp_fault;
- __vmwrite(GUEST_BNDCFGS, msr_content);
break;
case IA32_FEATURE_CONTROL_MSR:
case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_TRUE_ENTRY_CTLS:
[-- Attachment #4: xsa220-4.6.patch --]
From: Jan Beulich <jbeulich@suse.com>
Subject: x86: avoid leaking BND* between vCPU-s
For MPX (BND<n>, BNDCFGU, and BNDSTATUS) the situation is less clear,
and the SDM has not entirely consistent information for that case.
While experimentally the instructions don't change register state as
long as the two XCR0 bits aren't both 1, be on the safe side and enable
both if BNDCFGS.EN is being set the first time.
This is XSA-220.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -31,6 +31,7 @@
#include <asm/regs.h>
#include <asm/cpufeature.h>
#include <asm/processor.h>
+#include <asm/xstate.h>
#include <asm/guest_access.h>
#include <asm/debugreg.h>
#include <asm/msr.h>
@@ -625,6 +626,45 @@ static int vmx_load_vmcs_ctxt(struct vcp
return 0;
}
+static bool_t vmx_set_guest_bndcfgs(struct vcpu *v, u64 val)
+{
+ if ( !cpu_has_mpx || !cpu_has_vmx_mpx ||
+ !is_canonical_address(val) ||
+ (val & IA32_BNDCFGS_RESERVED) )
+ return 0;
+
+ /*
+ * While MPX instructions are supposed to be gated on XCR0.BND*, let's
+ * nevertheless force the relevant XCR0 bits on when the feature is being
+ * enabled in BNDCFGS.
+ */
+ if ( (val & IA32_BNDCFGS_ENABLE) &&
+ !(v->arch.xcr0_accum & (XSTATE_BNDREGS | XSTATE_BNDCSR)) )
+ {
+ uint64_t xcr0 = get_xcr0();
+ int rc;
+
+ if ( v != current )
+ return 0;
+
+ rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ xcr0 | XSTATE_BNDREGS | XSTATE_BNDCSR);
+
+ if ( rc )
+ {
+ HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.BND*: %d", rc);
+ return 0;
+ }
+
+ if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
+ /* nothing, best effort only */;
+ }
+
+ __vmwrite(GUEST_BNDCFGS, val);
+
+ return 1;
+}
+
static unsigned int __init vmx_init_msr(void)
{
return cpu_has_mpx && cpu_has_vmx_mpx;
@@ -656,11 +696,8 @@ static int vmx_load_msr(struct vcpu *v,
switch ( ctxt->msr[i].index )
{
case MSR_IA32_BNDCFGS:
- if ( cpu_has_mpx && cpu_has_vmx_mpx &&
- is_canonical_address(ctxt->msr[i].val) &&
- !(ctxt->msr[i].val & IA32_BNDCFGS_RESERVED) )
- __vmwrite(GUEST_BNDCFGS, ctxt->msr[i].val);
- else
+ if ( !vmx_set_guest_bndcfgs(v, ctxt->msr[i].val) &&
+ ctxt->msr[i].val )
err = -ENXIO;
break;
default:
@@ -2552,11 +2589,8 @@ static int vmx_msr_write_intercept(unsig
break;
}
case MSR_IA32_BNDCFGS:
- if ( !cpu_has_mpx || !cpu_has_vmx_mpx ||
- !is_canonical_address(msr_content) ||
- (msr_content & IA32_BNDCFGS_RESERVED) )
+ if ( !vmx_set_guest_bndcfgs(v, msr_content) )
goto gp_fault;
- __vmwrite(GUEST_BNDCFGS, msr_content);
break;
case IA32_FEATURE_CONTROL_MSR:
case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_TRUE_ENTRY_CTLS:
[-- Attachment #5: xsa220-4.7.patch --]
From: Jan Beulich <jbeulich@suse.com>
Subject: x86: avoid leaking PKRU and BND* between vCPU-s
PKRU is explicitly "XSAVE-managed but not XSAVE-enabled", so guests
might access the register (via {RD,WR}PKRU) without setting XCR0.PKRU.
Force context switching as well as migrating the register as soon as
CR4.PKE is being set the first time.
For MPX (BND<n>, BNDCFGU, and BNDSTATUS) the situation is less clear,
and the SDM has not entirely consistent information for that case.
While experimentally the instructions don't change register state as
long as the two XCR0 bits aren't both 1, be on the safe side and enable
both if BNDCFGS.EN is being set the first time.
This is XSA-220.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2452,6 +2452,27 @@ int hvm_set_cr4(unsigned long value, boo
paging_update_paging_modes(v);
}
+ /*
+ * {RD,WR}PKRU are not gated on XCR0.PKRU and hence an oddly behaving
+ * guest may enable the feature in CR4 without enabling it in XCR0. We
+ * need to context switch / migrate PKRU nevertheless.
+ */
+ if ( (value & X86_CR4_PKE) && !(v->arch.xcr0_accum & XSTATE_PKRU) )
+ {
+ int rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ get_xcr0() | XSTATE_PKRU);
+
+ if ( rc )
+ {
+ HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.PKRU: %d", rc);
+ goto gpf;
+ }
+
+ if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ get_xcr0() & ~XSTATE_PKRU) )
+ /* nothing, best effort only */;
+ }
+
return X86EMUL_OKAY;
gpf:
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -31,6 +31,7 @@
#include <asm/regs.h>
#include <asm/cpufeature.h>
#include <asm/processor.h>
+#include <asm/xstate.h>
#include <asm/guest_access.h>
#include <asm/debugreg.h>
#include <asm/msr.h>
@@ -783,6 +784,45 @@ static int vmx_load_vmcs_ctxt(struct vcp
return 0;
}
+static bool_t vmx_set_guest_bndcfgs(struct vcpu *v, u64 val)
+{
+ if ( !cpu_has_mpx || !cpu_has_vmx_mpx ||
+ !is_canonical_address(val) ||
+ (val & IA32_BNDCFGS_RESERVED) )
+ return 0;
+
+ /*
+ * While MPX instructions are supposed to be gated on XCR0.BND*, let's
+ * nevertheless force the relevant XCR0 bits on when the feature is being
+ * enabled in BNDCFGS.
+ */
+ if ( (val & IA32_BNDCFGS_ENABLE) &&
+ !(v->arch.xcr0_accum & (XSTATE_BNDREGS | XSTATE_BNDCSR)) )
+ {
+ uint64_t xcr0 = get_xcr0();
+ int rc;
+
+ if ( v != current )
+ return 0;
+
+ rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ xcr0 | XSTATE_BNDREGS | XSTATE_BNDCSR);
+
+ if ( rc )
+ {
+ HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.BND*: %d", rc);
+ return 0;
+ }
+
+ if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
+ /* nothing, best effort only */;
+ }
+
+ __vmwrite(GUEST_BNDCFGS, val);
+
+ return 1;
+}
+
static unsigned int __init vmx_init_msr(void)
{
return (cpu_has_mpx && cpu_has_vmx_mpx) +
@@ -822,11 +862,8 @@ static int vmx_load_msr(struct vcpu *v,
switch ( ctxt->msr[i].index )
{
case MSR_IA32_BNDCFGS:
- if ( cpu_has_mpx && cpu_has_vmx_mpx &&
- is_canonical_address(ctxt->msr[i].val) &&
- !(ctxt->msr[i].val & IA32_BNDCFGS_RESERVED) )
- __vmwrite(GUEST_BNDCFGS, ctxt->msr[i].val);
- else if ( ctxt->msr[i].val )
+ if ( !vmx_set_guest_bndcfgs(v, ctxt->msr[i].val) &&
+ ctxt->msr[i].val )
err = -ENXIO;
break;
case MSR_IA32_XSS:
@@ -2878,11 +2915,8 @@ static int vmx_msr_write_intercept(unsig
break;
}
case MSR_IA32_BNDCFGS:
- if ( !cpu_has_mpx || !cpu_has_vmx_mpx ||
- !is_canonical_address(msr_content) ||
- (msr_content & IA32_BNDCFGS_RESERVED) )
+ if ( !vmx_set_guest_bndcfgs(v, msr_content) )
goto gp_fault;
- __vmwrite(GUEST_BNDCFGS, msr_content);
break;
case IA32_FEATURE_CONTROL_MSR:
case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_TRUE_ENTRY_CTLS:
[-- Attachment #6: xsa220-4.8.patch --]
From: Jan Beulich <jbeulich@suse.com>
Subject: x86: avoid leaking PKRU and BND* between vCPU-s
PKRU is explicitly "XSAVE-managed but not XSAVE-enabled", so guests
might access the register (via {RD,WR}PKRU) without setting XCR0.PKRU.
Force context switching as well as migrating the register as soon as
CR4.PKE is being set the first time.
For MPX (BND<n>, BNDCFGU, and BNDSTATUS) the situation is less clear,
and the SDM has not entirely consistent information for that case.
While experimentally the instructions don't change register state as
long as the two XCR0 bits aren't both 1, be on the safe side and enable
both if BNDCFGS.EN is being set the first time.
This is XSA-220.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -311,10 +311,39 @@ int hvm_set_guest_pat(struct vcpu *v, u6
bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
{
- return hvm_funcs.set_guest_bndcfgs &&
- is_canonical_address(val) &&
- !(val & IA32_BNDCFGS_RESERVED) &&
- hvm_funcs.set_guest_bndcfgs(v, val);
+ if ( !hvm_funcs.set_guest_bndcfgs ||
+ !is_canonical_address(val) ||
+ (val & IA32_BNDCFGS_RESERVED) )
+ return false;
+
+ /*
+ * While MPX instructions are supposed to be gated on XCR0.BND*, let's
+ * nevertheless force the relevant XCR0 bits on when the feature is being
+ * enabled in BNDCFGS.
+ */
+ if ( (val & IA32_BNDCFGS_ENABLE) &&
+ !(v->arch.xcr0_accum & (XSTATE_BNDREGS | XSTATE_BNDCSR)) )
+ {
+ uint64_t xcr0 = get_xcr0();
+ int rc;
+
+ if ( v != current )
+ return false;
+
+ rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ xcr0 | XSTATE_BNDREGS | XSTATE_BNDCSR);
+
+ if ( rc )
+ {
+ HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.BND*: %d", rc);
+ return false;
+ }
+
+ if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
+ /* nothing, best effort only */;
+ }
+
+ return hvm_funcs.set_guest_bndcfgs(v, val);
}
/*
@@ -2477,6 +2506,27 @@ int hvm_set_cr4(unsigned long value, boo
paging_update_paging_modes(v);
}
+ /*
+ * {RD,WR}PKRU are not gated on XCR0.PKRU and hence an oddly behaving
+ * guest may enable the feature in CR4 without enabling it in XCR0. We
+ * need to context switch / migrate PKRU nevertheless.
+ */
+ if ( (value & X86_CR4_PKE) && !(v->arch.xcr0_accum & XSTATE_PKRU) )
+ {
+ int rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ get_xcr0() | XSTATE_PKRU);
+
+ if ( rc )
+ {
+ HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.PKRU: %d", rc);
+ goto gpf;
+ }
+
+ if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+ get_xcr0() & ~XSTATE_PKRU) )
+ /* nothing, best effort only */;
+ }
+
return X86EMUL_OKAY;
gpf: