From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v8 10/17] x86/hvm: Permit guests direct access to MSR_{SPEC_CTRL, PRED_CMD}
Date: Fri, 12 Jan 2018 18:01:00 +0000
Message-ID: <1515780067-31735-11-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1515780067-31735-1-git-send-email-andrew.cooper3@citrix.com>
For performance reasons, HVM guests should have direct access to these MSRs
when possible.
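
As an illustration, the interception decisions implemented in the
vmx_cpuid_policy_changed() hunk below reduce to the following.  This is a
standalone sketch rather than Xen code: "struct policy" and its boolean
fields are stand-ins for the relevant struct cpuid_policy bits.

    /*
     * Illustrative model only (not Xen code): the pass-through decisions
     * made for VMX below, with the relevant cpuid_policy bits reduced to
     * plain booleans.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct policy { bool ibrsb, stibp, ibpb; };

    /* SPEC_CTRL may only be passed through if the guest knows every bit in it. */
    static bool spec_ctrl_direct(const struct policy *p)
    {
        return p->ibrsb && p->stibp;
    }

    /* PRED_CMD is safe to pass through once the guest has been told about it. */
    static bool pred_cmd_direct(const struct policy *p)
    {
        return p->ibrsb || p->ibpb;
    }

    int main(void)
    {
        struct policy p = { .ibrsb = true, .stibp = false, .ibpb = true };

        printf("SPEC_CTRL pass-through: %d\n", spec_ctrl_direct(&p)); /* 0 */
        printf("PRED_CMD  pass-through: %d\n", pred_cmd_direct(&p));  /* 1 */

        return 0;
    }

MSR_SPEC_CTRL is only passed through when the guest knows about every
defined bit in it, as Xen may otherwise be fixing the value up behind the
guest's back; MSR_PRED_CMD is safe to pass through as soon as the guest
has been told about it.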
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v7:
* Drop excess brackets
---
xen/arch/x86/domctl.c | 19 +++++++++++++++++++
xen/arch/x86/hvm/svm/svm.c | 5 +++++
xen/arch/x86/hvm/vmx/vmx.c | 18 ++++++++++++++++++
xen/arch/x86/msr.c | 3 ++-
xen/include/asm-x86/msr.h | 5 ++++-
5 files changed, 48 insertions(+), 2 deletions(-)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 72b4489..e5fdde7 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -53,6 +53,7 @@ static int update_domain_cpuid_info(struct domain *d,
struct cpuid_policy *p = d->arch.cpuid;
const struct cpuid_leaf leaf = { ctl->eax, ctl->ebx, ctl->ecx, ctl->edx };
int old_vendor = p->x86_vendor;
+ unsigned int old_7d0 = p->feat.raw[0].d, old_e8b = p->extd.raw[8].b;
bool call_policy_changed = false; /* Avoid for_each_vcpu() unnecessarily */
/*
@@ -218,6 +219,14 @@ static int update_domain_cpuid_info(struct domain *d,
d->arch.pv_domain.cpuidmasks->_7ab0 = mask;
}
+
+ /*
+ * If the IBRSB/STIBP policy has changed, we need to recalculate the
+ * MSR interception bitmaps and STIBP protection default.
+ */
+ call_policy_changed = ((old_7d0 ^ p->feat.raw[0].d) &
+ (cpufeat_mask(X86_FEATURE_IBRSB) |
+ cpufeat_mask(X86_FEATURE_STIBP)));
break;
case 0xa:
@@ -292,6 +301,16 @@ static int update_domain_cpuid_info(struct domain *d,
d->arch.pv_domain.cpuidmasks->e1cd = mask;
}
break;
+
+ case 0x80000008:
+ /*
+ * If the IBPB policy has changed, we need to recalculate the MSR
+ * interception bitmaps.
+ */
+ call_policy_changed = (is_hvm_domain(d) &&
+ ((old_e8b ^ p->extd.raw[8].b) &
+ cpufeat_mask(X86_FEATURE_IBPB)));
+ break;
}
if ( call_policy_changed )
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index c48fdfa..ee47508 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -617,6 +617,7 @@ static void svm_cpuid_policy_changed(struct vcpu *v)
{
struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
struct vmcb_struct *vmcb = arch_svm->vmcb;
+ const struct cpuid_policy *cp = v->domain->arch.cpuid;
u32 bitmap = vmcb_get_exception_intercepts(vmcb);
if ( opt_hvm_fep ||
@@ -626,6 +627,10 @@ static void svm_cpuid_policy_changed(struct vcpu *v)
bitmap &= ~(1U << TRAP_invalid_op);
vmcb_set_exception_intercepts(vmcb, bitmap);
+
+ /* Give access to MSR_PRED_CMD if the guest has been told about it. */
+ svm_intercept_msr(v, MSR_PRED_CMD,
+ cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
}
static void svm_sync_vmcb(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e036303..8609de3 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -656,6 +656,8 @@ void vmx_update_exception_bitmap(struct vcpu *v)
static void vmx_cpuid_policy_changed(struct vcpu *v)
{
+ const struct cpuid_policy *cp = v->domain->arch.cpuid;
+
if ( opt_hvm_fep ||
(v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
v->arch.hvm_vmx.exception_bitmap |= (1U << TRAP_invalid_op);
@@ -665,6 +667,22 @@ static void vmx_cpuid_policy_changed(struct vcpu *v)
vmx_vmcs_enter(v);
vmx_update_exception_bitmap(v);
vmx_vmcs_exit(v);
+
+ /*
+ * We can only pass through MSR_SPEC_CTRL if the guest knows about all bits
+ * in it. Otherwise, Xen may be fixing up in the background.
+ */
+ v->arch.msr->spec_ctrl.direct_access = cp->feat.ibrsb && cp->feat.stibp;
+ if ( v->arch.msr->spec_ctrl.direct_access )
+ vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+ else
+ vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+
+ /* MSR_PRED_CMD is safe to pass through if the guest knows about it. */
+ if ( cp->feat.ibrsb || cp->extd.ibpb )
+ vmx_clear_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
+ else
+ vmx_set_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
}
int vmx_guest_x86_mode(struct vcpu *v)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 02a7b49..697cc6e 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -132,7 +132,8 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
case MSR_SPEC_CTRL:
if ( !cp->feat.ibrsb )
goto gp_fault;
- *val = vp->spec_ctrl.guest;
+ *val = (vp->spec_ctrl.direct_access
+ ? vp->spec_ctrl.host : vp->spec_ctrl.guest);
break;
case MSR_INTEL_PLATFORM_INFO:
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 3d0012d..007e966 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -229,10 +229,13 @@ struct msr_vcpu_policy
* Only the bottom two bits are defined, so no need to waste space
* with uint64_t at the moment. We maintain the guest's idea of the
* value it wrote, and a value to install into hardware (extended to
- * uint32_t to simplify the asm) which might be different.
+ * uint32_t to simplify the asm) which might be different. HVM guests
+ * might be given direct access to the MSRs, at which point the
+ * 'guest' value becomes stale.
*/
uint32_t host;
uint8_t guest;
+ bool direct_access;
} spec_ctrl;
/* 0x00000140 MSR_INTEL_MISC_FEATURES_ENABLES */
--
2.1.4