* [PATCH for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Andrew Cooper @ 2017-11-25 18:15 UTC
To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Wei Liu, Jan Beulich
Xen 4.8 and later virtualises CPUID Faulting support for guests. However, the
value of MSR_MISC_FEATURES_ENABLES is omitted from the vcpu state, meaning
that the current cpuid faulting setting is lost on migrate/suspend/resume.
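(For context, a minimal sketch of the guest-visible semantics at stake.  The
MSR indexes and bit positions are per the Intel SDM; rdmsrl()/wrmsrl() are
assumed Linux-style MSR accessors, and none of this is part of the patch
itself.)

#define MSR_PLATFORM_INFO              0x000000ce
#define  PLATFORM_INFO_CPUID_FAULTING  (1ULL << 31)
#define MSR_MISC_FEATURES_ENABLES      0x00000140
#define  MISC_FEATURES_CPUID_FAULTING  (1ULL <<  0)

static bool enable_cpuid_faulting(void)
{
    uint64_t val;

    /* Faulting support is advertised via MSR_PLATFORM_INFO bit 31. */
    rdmsrl(MSR_PLATFORM_INFO, val);
    if ( !(val & PLATFORM_INFO_CPUID_FAULTING) )
        return false;

    /* Setting bit 0 makes CPUID at CPL > 0 raise #GP(0). */
    rdmsrl(MSR_MISC_FEATURES_ENABLES, val);
    wrmsrl(MSR_MISC_FEATURES_ENABLES, val | MISC_FEATURES_CPUID_FAULTING);

    return true;
}

It is this enable bit which silently reverts to 0 across a migrate without
the fix below.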
To move this MSR, use the new guest_{rd,wr}msr() infrastructure. This avoids
duplicating or opencoding the feature check and value logic, as well as
abstracting away the internal value representation. One small adjustment to
guest_wrmsr() is required to cope with being called in toolstack context.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Julien Grall <julien.grall@arm.com>
This needs backporting to 4.8 and later, and therefore should be considered
for 4.10 at this point.
---
xen/arch/x86/domctl.c | 46 +++++++++++++++++++++++++++++++++++++++++++++-
xen/arch/x86/hvm/hvm.c | 40 +++++++++++++++++++++++++++++++++++++++-
xen/arch/x86/msr.c | 3 ++-
xen/include/asm-x86/msr.h | 3 +++
4 files changed, 89 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 80b4df9..e98c4a3 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1286,7 +1286,7 @@ long arch_do_domctl(
struct xen_domctl_vcpu_msrs *vmsrs = &domctl->u.vcpu_msrs;
struct xen_domctl_vcpu_msr msr;
struct vcpu *v;
- uint32_t nr_msrs = 0;
+ uint32_t nr_msrs = 1;
ret = -ESRCH;
if ( (vmsrs->vcpu >= d->max_vcpus) ||
@@ -1311,10 +1311,49 @@ long arch_do_domctl(
vmsrs->msr_count = nr_msrs;
else
{
+ static const uint32_t msrs[] = {
+ MSR_INTEL_MISC_FEATURES_ENABLES,
+ };
+ unsigned int j;
+
i = 0;
vcpu_pause(v);
+ for ( j = 0; j < ARRAY_SIZE(msrs); ++j )
+ {
+ uint64_t val;
+ int rc = guest_rdmsr(v, msrs[j], &val);
+
+ /*
+                     * It is the programmer's responsibility to ensure that
+                     * msrs[] contains generally-readable MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature.
+ */
+ if ( rc == X86EMUL_EXCEPTION )
+ continue;
+
+ if ( rc != X86EMUL_OKAY )
+ {
+ ASSERT_UNREACHABLE();
+ ret = -ENXIO;
+ break;
+ }
+
+ if ( !val )
+ continue; /* Skip empty MSRs. */
+
+ if ( i < vmsrs->msr_count && !ret )
+ {
+ msr.index = msrs[j];
+ msr.reserved = 0;
+ msr.value = val;
+ if ( copy_to_guest_offset(vmsrs->msrs, i, &msr, 1) )
+ ret = -EFAULT;
+ }
+ ++i;
+ }
+
if ( boot_cpu_has(X86_FEATURE_DBEXT) )
{
unsigned int j;
@@ -1375,6 +1414,11 @@ long arch_do_domctl(
switch ( msr.index )
{
+ case MSR_INTEL_MISC_FEATURES_ENABLES:
+ if ( guest_wrmsr(v, msr.index, msr.value) != X86EMUL_OKAY )
+ break;
+ continue;
+
case MSR_AMD64_DR0_ADDRESS_MASK:
if ( !boot_cpu_has(X86_FEATURE_DBEXT) ||
(msr.value >> 32) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c765a5e..7f18f3b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1322,11 +1322,14 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
}
#define HVM_CPU_MSR_SIZE(cnt) offsetof(struct hvm_msr, msr[cnt])
-static unsigned int __read_mostly msr_count_max;
+static unsigned int __read_mostly msr_count_max = 1;
static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
+ static const uint32_t msrs[] = {
+ MSR_INTEL_MISC_FEATURES_ENABLES,
+ };
for_each_vcpu ( d, v )
{
@@ -1340,6 +1343,32 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
ctxt = (struct hvm_msr *)&h->data[h->cur];
ctxt->count = 0;
+ for ( i = 0; i < ARRAY_SIZE(msrs); ++i )
+ {
+ uint64_t val;
+ int rc = guest_rdmsr(v, msrs[i], &val);
+
+ /*
+             * It is the programmer's responsibility to ensure that msrs[]
+             * contains generally-readable MSRs.  X86EMUL_EXCEPTION here
+ * implies a missing feature.
+ */
+ if ( rc == X86EMUL_EXCEPTION )
+ continue;
+
+ if ( rc != X86EMUL_OKAY )
+ {
+ ASSERT_UNREACHABLE();
+ return -ENXIO;
+ }
+
+ if ( !val )
+ continue; /* Skip empty MSRs. */
+
+ ctxt->msr[ctxt->count].index = msrs[i];
+ ctxt->msr[ctxt->count++].val = val;
+ }
+
if ( hvm_funcs.save_msr )
hvm_funcs.save_msr(v, ctxt);
@@ -1426,6 +1455,15 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
{
switch ( ctxt->msr[i].index )
{
+ int rc;
+
+ case MSR_INTEL_MISC_FEATURES_ENABLES:
+ rc = guest_wrmsr(v, ctxt->msr[i].index, ctxt->msr[i].val);
+
+ if ( rc != X86EMUL_OKAY )
+ err = -ENXIO;
+ break;
+
default:
if ( !ctxt->msr[i]._rsvd )
err = -ENXIO;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index baba44f..31983ed 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -150,6 +150,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
{
+ const struct vcpu *curr = current;
struct domain *d = v->domain;
struct msr_domain_policy *dp = d->arch.msr;
struct msr_vcpu_policy *vp = v->arch.msr;
@@ -176,7 +177,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
vp->misc_features_enables.cpuid_faulting =
val & MSR_MISC_FEATURES_CPUID_FAULTING;
- if ( is_hvm_domain(d) && cpu_has_cpuid_faulting &&
+ if ( v == curr && is_hvm_domain(d) && cpu_has_cpuid_faulting &&
(old_cpuid_faulting ^ vp->misc_features_enables.cpuid_faulting) )
ctxt_switch_levelling(v);
break;
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 751fa25..41732a4 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -231,6 +231,9 @@ int init_vcpu_msr_policy(struct vcpu *v);
* not (yet) handled by it and must be processed by legacy handlers. Such
* behaviour is needed for transition period until all rd/wrmsr are handled
* by the new MSR infrastructure.
+ *
+ * These functions are also used by the migration logic, so need to cope with
+ * being used outside of v's context.
*/
int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val);
int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val);
--
2.1.4
* Re: [PATCH for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Jan Beulich @ 2017-11-27 9:53 UTC
To: Andrew Cooper; +Cc: Julien Grall, Wei Liu, Xen-devel
>>> On 25.11.17 at 19:15, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -1286,7 +1286,7 @@ long arch_do_domctl(
> struct xen_domctl_vcpu_msrs *vmsrs = &domctl->u.vcpu_msrs;
> struct xen_domctl_vcpu_msr msr;
> struct vcpu *v;
> - uint32_t nr_msrs = 0;
> + uint32_t nr_msrs = 1;
I think this ought to be ARRAY_SIZE(<whatever>), to avoid a
possible update of one but not the other. The price to pay (moving
the array outwards) is reasonable imo. However, ...
> @@ -1311,10 +1311,49 @@ long arch_do_domctl(
> vmsrs->msr_count = nr_msrs;
> else
> {
> + static const uint32_t msrs[] = {
> + MSR_INTEL_MISC_FEATURES_ENABLES,
... is this really a non-optional MSR? In particular,
calculate_hvm_max_policy() ties it to INTEL_PLATFORM_INFO,
which in turn is being tied to running on an Intel CPU.
calculate_pv_max_policy() is even more picky. I think you want
to introduce a function in msr.c complementing guest_rdmsr() /
guest_wrmsr(), similar to HVM's .init_msr() hook.
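One possible shape for such a helper, sketched purely for illustration (the
name and signature are invented here and exist nowhere in Xen):

/*
 * Hypothetical complement to guest_rdmsr()/guest_wrmsr(): report the MSRs
 * (and values) which this particular vcpu needs to have migrated, applying
 * the same policy checks as the accessors themselves.
 */
int guest_msr_list(const struct vcpu *v, struct xen_domctl_vcpu_msr *buf,
                   unsigned int max, unsigned int *nr)
{
    const struct msr_vcpu_policy *vp = v->arch.msr;
    unsigned int n = 0;

    if ( vp->misc_features_enables.cpuid_faulting )
    {
        if ( n == max )
            return -ENOBUFS;
        buf[n].index = MSR_INTEL_MISC_FEATURES_ENABLES;
        buf[n].reserved = 0;
        buf[n++].value = MSR_MISC_FEATURES_CPUID_FAULTING;
    }

    *nr = n;
    return 0;
}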
> + };
> + unsigned int j;
> +
> i = 0;
>
> vcpu_pause(v);
>
> + for ( j = 0; j < ARRAY_SIZE(msrs); ++j )
> + {
> + uint64_t val;
> + int rc = guest_rdmsr(v, msrs[j], &val);
> +
> + /*
> +                     * It is the programmer's responsibility to ensure that
> +                     * msrs[] contains generally-readable MSRs.
> + * X86EMUL_EXCEPTION here implies a missing feature.
> + */
> + if ( rc == X86EMUL_EXCEPTION )
> + continue;
> +
> + if ( rc != X86EMUL_OKAY )
> + {
> + ASSERT_UNREACHABLE();
> + ret = -ENXIO;
> + break;
> + }
> +
> + if ( !val )
> + continue; /* Skip empty MSRs. */
> +
> + if ( i < vmsrs->msr_count && !ret )
> + {
> + msr.index = msrs[j];
> + msr.reserved = 0;
> + msr.value = val;
> + if ( copy_to_guest_offset(vmsrs->msrs, i, &msr, 1) )
> + ret = -EFAULT;
> + }
> + ++i;
> + }
> +
> if ( boot_cpu_has(X86_FEATURE_DBEXT) )
> {
> unsigned int j;
With the j you introduce in the next outer scope, I think this one
should be removed.
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1322,11 +1322,14 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> }
>
> #define HVM_CPU_MSR_SIZE(cnt) offsetof(struct hvm_msr, msr[cnt])
> -static unsigned int __read_mostly msr_count_max;
> +static unsigned int __read_mostly msr_count_max = 1;
>
> static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
> {
> struct vcpu *v;
> + static const uint32_t msrs[] = {
> + MSR_INTEL_MISC_FEATURES_ENABLES,
> + };
Same comments as above.
Jan
* Re: [PATCH for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Andrew Cooper @ 2017-11-27 11:37 UTC
To: Jan Beulich; +Cc: Julien Grall, Wei Liu, Xen-devel
On 27/11/17 09:53, Jan Beulich wrote:
>>>> On 25.11.17 at 19:15, <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/arch/x86/domctl.c
>> +++ b/xen/arch/x86/domctl.c
>> @@ -1286,7 +1286,7 @@ long arch_do_domctl(
>> struct xen_domctl_vcpu_msrs *vmsrs = &domctl->u.vcpu_msrs;
>> struct xen_domctl_vcpu_msr msr;
>> struct vcpu *v;
>> - uint32_t nr_msrs = 0;
>> + uint32_t nr_msrs = 1;
> I think this ought to be ARRAY_SIZE(<whatever>), to avoid a
> possible update of one but not the other. The price to pay (moving
> the array outwards) is reasonable imo.
Ok.
> However, ...
>
>> @@ -1311,10 +1311,49 @@ long arch_do_domctl(
>> vmsrs->msr_count = nr_msrs;
>> else
>> {
>> + static const uint32_t msrs[] = {
>> + MSR_INTEL_MISC_FEATURES_ENABLES,
> ... is this really a non-optional MSR? In particular,
> calculate_hvm_max_policy() ties it to INTEL_PLATFORM_INFO,
> which in turn is being tied to running on an Intel CPU.
> calculate_pv_max_policy() is even more picky. I think you want
> to introduce a function in msr.c complementing guest_rdmsr() /
> guest_wrmsr(), similar to HVM's .init_msr() hook.
Whether an MSR should be migrated depends on whether the guest is able
to use it.
I'm specifically looking to remove the concept of optional MSRs to avoid
duplicating the MSR policy logic, and risking it being different. In
reality, what this means is that the migration code will be expected to
cope with the union of all possible MSRs, and only actually get a subset
to put into the stream.
Also, this MSR specifically is about to become common, but that is
post-4.10 at this point.
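(Condensed, the model being argued for looks like this; record_msr() is a
hypothetical stand-in for the domctl/HVM-context copy-out in the patch, and
error handling is elided:)

static const uint32_t msrs_to_send[] = {
    MSR_INTEL_MISC_FEATURES_ENABLES,
    /* Future MSRs are simply appended; no per-feature logic needed. */
};

for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
{
    uint64_t val;

    switch ( guest_rdmsr(v, msrs_to_send[i], &val) )
    {
    case X86EMUL_EXCEPTION:
        continue;                 /* Policy: this guest can't use the MSR. */

    case X86EMUL_OKAY:
        if ( val )                /* Non-default value: put in the stream. */
            record_msr(msrs_to_send[i], val);
        continue;

    default:
        ASSERT_UNREACHABLE();     /* Error handling elided in this sketch. */
        break;
    }
}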
>
>> + };
>> + unsigned int j;
>> +
>> i = 0;
>>
>> vcpu_pause(v);
>>
>> + for ( j = 0; j < ARRAY_SIZE(msrs); ++j )
>> + {
>> + uint64_t val;
>> + int rc = guest_rdmsr(v, msrs[j], &val);
>> +
>> + /*
>> +                     * It is the programmer's responsibility to ensure that
>> +                     * msrs[] contains generally-readable MSRs.
>> + * X86EMUL_EXCEPTION here implies a missing feature.
>> + */
>> + if ( rc == X86EMUL_EXCEPTION )
>> + continue;
>> +
>> + if ( rc != X86EMUL_OKAY )
>> + {
>> + ASSERT_UNREACHABLE();
>> + ret = -ENXIO;
>> + break;
>> + }
>> +
>> + if ( !val )
>> + continue; /* Skip empty MSRs. */
>> +
>> + if ( i < vmsrs->msr_count && !ret )
>> + {
>> + msr.index = msrs[j];
>> + msr.reserved = 0;
>> + msr.value = val;
>> + if ( copy_to_guest_offset(vmsrs->msrs, i, &msr, 1) )
>> + ret = -EFAULT;
>> + }
>> + ++i;
>> + }
>> +
>> if ( boot_cpu_has(X86_FEATURE_DBEXT) )
>> {
>> unsigned int j;
> With the j you introduce in the next outer scope, I think this one
> should be removed.
Oh, indeed. I hadn't even noticed this one here.
~Andrew
* Re: [PATCH for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Jan Beulich @ 2017-11-27 13:19 UTC
To: Andrew Cooper; +Cc: Julien Grall, Wei Liu, Xen-devel
>>> On 27.11.17 at 12:37, <andrew.cooper3@citrix.com> wrote:
> On 27/11/17 09:53, Jan Beulich wrote:
>>>>> On 25.11.17 at 19:15, <andrew.cooper3@citrix.com> wrote:
>>> @@ -1311,10 +1311,49 @@ long arch_do_domctl(
>>> vmsrs->msr_count = nr_msrs;
>>> else
>>> {
>>> + static const uint32_t msrs[] = {
>>> + MSR_INTEL_MISC_FEATURES_ENABLES,
>> ... is this really a non-optional MSR? In particular,
>> calculate_hvm_max_policy() ties it to INTEL_PLATFORM_INFO,
>> which in turn is being tied to running on an Intel CPU.
>> calculate_pv_max_policy() is even more picky. I think you want
>> to introduce a function in msr.c complementing guest_rdmsr() /
>> guest_wrmsr(), similar to HVM's .init_msr() hook.
>
> Whether an MSR should be migrated depends on whether the guest is able
> to use it.
Indeed, and the MSR here is unusable by a guest as long as Xen doesn't
run on an Intel CPU, regardless of the virtual CPU vendor.
> I'm specifically looking to remove the concept of optional MSRs to avoid
> duplicating the MSR policy logic, and risking it being different. In
> reality, what this means is that the migration code will be expected to
> cope with the union of all possible MSRs, and only actually get a subset
> to put into the stream.
If that's the goal, I think it would help if you said so in the patch
description.
Jan
* [PATCH v2 for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Andrew Cooper @ 2017-11-27 13:02 UTC
To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Wei Liu, Jan Beulich
Xen 4.8 and later virtualises CPUID Faulting support for guests. However, the
value of MSR_MISC_FEATURES_ENABLES is omitted from the vcpu state, meaning
that the current cpuid faulting setting is lost on migrate/suspend/resume.
To move this MSR, use the new guest_{rd,wr}msr() infrastructure. This avoids
duplicating or opencoding the feature check and value logic, as well as
abstracting away the internal value representation. One small adjustment to
guest_wrmsr() is required to cope with being called in toolstack context.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Julien Grall <julien.grall@arm.com>
v2:
* Move the msrs[] array to an outer scope and derive the number to send with
ARRAY_SIZE()
* Don't create a shadowed j variable in arch_do_domctl()
This needs backporting to 4.8 and later, and therefore should be considered
for 4.10 at this point.
---
xen/arch/x86/domctl.c | 49 ++++++++++++++++++++++++++++++++++++++++++++---
xen/arch/x86/hvm/hvm.c | 41 ++++++++++++++++++++++++++++++++++++++-
xen/arch/x86/msr.c | 3 ++-
xen/include/asm-x86/msr.h | 3 +++
4 files changed, 91 insertions(+), 5 deletions(-)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 80b4df9..1ddd3d0 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1286,7 +1286,10 @@ long arch_do_domctl(
struct xen_domctl_vcpu_msrs *vmsrs = &domctl->u.vcpu_msrs;
struct xen_domctl_vcpu_msr msr;
struct vcpu *v;
- uint32_t nr_msrs = 0;
+ static const uint32_t msrs_to_send[] = {
+ MSR_INTEL_MISC_FEATURES_ENABLES,
+ };
+ uint32_t nr_msrs = ARRAY_SIZE(msrs_to_send);
ret = -ESRCH;
if ( (vmsrs->vcpu >= d->max_vcpus) ||
@@ -1311,14 +1314,49 @@ long arch_do_domctl(
vmsrs->msr_count = nr_msrs;
else
{
+ unsigned int j;
+
i = 0;
vcpu_pause(v);
- if ( boot_cpu_has(X86_FEATURE_DBEXT) )
+ for ( j = 0; j < ARRAY_SIZE(msrs_to_send); ++j )
{
- unsigned int j;
+ uint64_t val;
+ int rc = guest_rdmsr(v, msrs_to_send[j], &val);
+
+ /*
+                     * It is the programmer's responsibility to ensure that
+                     * msrs_to_send[] contains generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and
+ * that the guest doesn't have access to the MSR.
+ */
+ if ( rc == X86EMUL_EXCEPTION )
+ continue;
+
+ if ( rc != X86EMUL_OKAY )
+ {
+ ASSERT_UNREACHABLE();
+ ret = -ENXIO;
+ break;
+ }
+
+ if ( !val )
+                    continue; /* Skip empty MSRs. */
+
+ if ( i < vmsrs->msr_count && !ret )
+ {
+ msr.index = msrs_to_send[j];
+ msr.reserved = 0;
+ msr.value = val;
+ if ( copy_to_guest_offset(vmsrs->msrs, i, &msr, 1) )
+ ret = -EFAULT;
+ }
+ ++i;
+ }
+
+ if ( boot_cpu_has(X86_FEATURE_DBEXT) )
+ {
if ( v->arch.pv_vcpu.dr_mask[0] )
{
if ( i < vmsrs->msr_count && !ret )
@@ -1375,6 +1413,11 @@ long arch_do_domctl(
switch ( msr.index )
{
+ case MSR_INTEL_MISC_FEATURES_ENABLES:
+ if ( guest_wrmsr(v, msr.index, msr.value) != X86EMUL_OKAY )
+ break;
+ continue;
+
case MSR_AMD64_DR0_ADDRESS_MASK:
if ( !boot_cpu_has(X86_FEATURE_DBEXT) ||
(msr.value >> 32) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c765a5e..ec3dc48 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1322,7 +1322,10 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
}
#define HVM_CPU_MSR_SIZE(cnt) offsetof(struct hvm_msr, msr[cnt])
-static unsigned int __read_mostly msr_count_max;
+static const uint32_t msrs_to_send[] = {
+ MSR_INTEL_MISC_FEATURES_ENABLES,
+};
+static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
{
@@ -1340,6 +1343,33 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
ctxt = (struct hvm_msr *)&h->data[h->cur];
ctxt->count = 0;
+ for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+ {
+ uint64_t val;
+ int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+
+ /*
+             * It is the programmer's responsibility to ensure that
+             * msrs_to_send[] contains generally-read/write MSRs.
+             * X86EMUL_EXCEPTION here implies a missing feature, and that
+             * the guest doesn't have access to the MSR.
+ */
+ if ( rc == X86EMUL_EXCEPTION )
+ continue;
+
+ if ( rc != X86EMUL_OKAY )
+ {
+ ASSERT_UNREACHABLE();
+ return -ENXIO;
+ }
+
+ if ( !val )
+ continue; /* Skip empty MSRs. */
+
+ ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ ctxt->msr[ctxt->count++].val = val;
+ }
+
if ( hvm_funcs.save_msr )
hvm_funcs.save_msr(v, ctxt);
@@ -1426,6 +1456,15 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
{
switch ( ctxt->msr[i].index )
{
+ int rc;
+
+ case MSR_INTEL_MISC_FEATURES_ENABLES:
+ rc = guest_wrmsr(v, ctxt->msr[i].index, ctxt->msr[i].val);
+
+ if ( rc != X86EMUL_OKAY )
+ err = -ENXIO;
+ break;
+
default:
if ( !ctxt->msr[i]._rsvd )
err = -ENXIO;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index baba44f..31983ed 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -150,6 +150,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
{
+ const struct vcpu *curr = current;
struct domain *d = v->domain;
struct msr_domain_policy *dp = d->arch.msr;
struct msr_vcpu_policy *vp = v->arch.msr;
@@ -176,7 +177,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
vp->misc_features_enables.cpuid_faulting =
val & MSR_MISC_FEATURES_CPUID_FAULTING;
- if ( is_hvm_domain(d) && cpu_has_cpuid_faulting &&
+ if ( v == curr && is_hvm_domain(d) && cpu_has_cpuid_faulting &&
(old_cpuid_faulting ^ vp->misc_features_enables.cpuid_faulting) )
ctxt_switch_levelling(v);
break;
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index 751fa25..41732a4 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -231,6 +231,9 @@ int init_vcpu_msr_policy(struct vcpu *v);
* not (yet) handled by it and must be processed by legacy handlers. Such
* behaviour is needed for transition period until all rd/wrmsr are handled
* by the new MSR infrastructure.
+ *
+ * These functions are also used by the migration logic, so need to cope with
+ * being used outside of v's context.
*/
int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val);
int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val);
--
2.1.4
* Re: [PATCH v2 for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Jan Beulich @ 2017-11-27 14:41 UTC
To: Andrew Cooper; +Cc: Julien Grall, Wei Liu, Xen-devel
>>> On 27.11.17 at 14:02, <andrew.cooper3@citrix.com> wrote:
> Xen 4.8 and later virtualises CPUID Faulting support for guests. However, the
> value of MSR_MISC_FEATURES_ENABLES is omitted from the vcpu state, meaning
> that the current cpuid faulting setting is lost on migrate/suspend/resume.
>
> To move this MSR, use the new guest_{rd,wr}msr() infrastructure. This avoids
> duplicating or opencoding the feature check and value logic, as well as
> abstracting away the internal value representation. One small adjustment to
> guest_wrmsr() is required to cope with being called in toolstack context.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
With the further intentions mentioned in the description (as a
justification for some of the earlier requested changes to not
be done), as indicated in a late response to v1
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan
* Re: [PATCH v2 for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Andrew Cooper @ 2017-11-30 18:54 UTC
To: Jan Beulich; +Cc: Julien Grall, Wei Liu, Xen-devel
On 27/11/17 14:41, Jan Beulich wrote:
>>>> On 27.11.17 at 14:02, <andrew.cooper3@citrix.com> wrote:
>> Xen 4.8 and later virtualises CPUID Faulting support for guests. However, the
>> value of MSR_MISC_FEATURES_ENABLES is omitted from the vcpu state, meaning
>> that the current cpuid faulting setting is lost on migrate/suspend/resume.
>>
>> To move this MSR, use the new guest_{rd,wr}msr() infrastructure. This avoids
>> duplicating or opencoding the feature check and value logic, as well as
>> abstracting away the internal value representation. One small adjustment to
>> guest_wrmsr() is required to cope with being called in toolstack context.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> With the further intentions mentioned in the description (as a
> justification for some of the earlier requested changes to not
> be done), as indicated in a late response to v1
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
I thought that was already clear from the second paragraph. Either way,
how about this?
Xen 4.8 and later virtualises CPUID Faulting support for guests.  However,
the value of MSR_MISC_FEATURES_ENABLES is omitted from the vcpu state,
meaning that the current cpuid faulting setting is lost on
migrate/suspend/resume.

Instead of following the MSR status quo, take the opportunity to make the
logic more generic, and in particular, trivial to extend for future MSRs.

This is done by discarding the notion of optional MSRs, and requiring the
toolstack to be prepared to move all of the MSRs, although only a subset
will typically need to move.

This allows for the use of guest_{rd,wr}msr() alone to evaluate whether an
MSR needs moving.  This is a benefit because it means there is a single
piece of logic responsible for evaluating whether a guest can use an MSR,
and which values are acceptable.

One small adjustment to guest_wrmsr() is required to cope with being called
in toolstack context.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
* Re: [PATCH v2 for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Jan Beulich @ 2017-12-01 11:21 UTC
To: Andrew Cooper; +Cc: Julien Grall, Wei Liu, Xen-devel
>>> On 30.11.17 at 19:54, <andrew.cooper3@citrix.com> wrote:
> On 27/11/17 14:41, Jan Beulich wrote:
>>>>> On 27.11.17 at 14:02, <andrew.cooper3@citrix.com> wrote:
>>> Xen 4.8 and later virtualises CPUID Faulting support for guests.
>>> However, the value of MSR_MISC_FEATURES_ENABLES is omitted from the vcpu
>>> state, meaning that the current cpuid faulting setting is lost on
>>> migrate/suspend/resume.
>>>
>>> To move this MSR, use the new guest_{rd,wr}msr() infrastructure.  This
>>> avoids duplicating or opencoding the feature check and value logic, as
>>> well as abstracting away the internal value representation.  One small
>>> adjustment to guest_wrmsr() is required to cope with being called in
>>> toolstack context.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> With the further intentions mentioned in the description (as a
>> justification for some of the earlier requested changes to not
>> be done), as indicated in a late response to v1
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> I thought that was already clear from the second paragraph. Either way,
> how about this?
Yes, I like this new version better. Thanks.
Jan
> Xen 4.8 and later virtualises CPUID Faulting support for guests.  However,
> the value of MSR_MISC_FEATURES_ENABLES is omitted from the vcpu state,
> meaning that the current cpuid faulting setting is lost on
> migrate/suspend/resume.
>
> Instead of following the MSR status quo, take the opportunity to make the
> logic more generic, and in particular, trivial to extend for future MSRs.
>
> This is done by discarding the notion of optional MSRs, and requiring the
> toolstack to be prepared to move all of the MSRs, although only a subset
> will typically need to move.
>
> This allows for the use of guest_{rd,wr}msr() alone to evaluate whether an
> MSR needs moving.  This is a benefit because it means there is a single
> piece of logic responsible for evaluating whether a guest can use an MSR,
> and which values are acceptable.
>
> One small adjustment to guest_wrmsr() is required to cope with being
> called in toolstack context.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
* Re: [PATCH v2 for-4.10] x86: Avoid corruption on migrate for vcpus using CPUID Faulting
From: Julien Grall @ 2017-12-01 16:15 UTC
To: Jan Beulich, Andrew Cooper; +Cc: Julien Grall, Wei Liu, Xen-devel
Hi Jan,
On 01/12/17 11:21, Jan Beulich wrote:
>>>> On 30.11.17 at 19:54, <andrew.cooper3@citrix.com> wrote:
>> On 27/11/17 14:41, Jan Beulich wrote:
>>>>>> On 27.11.17 at 14:02, <andrew.cooper3@citrix.com> wrote:
>>>> Xen 4.8 and later virtualises CPUID Faulting support for guests.
>>>> However, the value of MSR_MISC_FEATURES_ENABLES is omitted from the
>>>> vcpu state, meaning that the current cpuid faulting setting is lost on
>>>> migrate/suspend/resume.
>>>>
>>>> To move this MSR, use the new guest_{rd,wr}msr() infrastructure.  This
>>>> avoids duplicating or opencoding the feature check and value logic, as
>>>> well as abstracting away the internal value representation.  One small
>>>> adjustment to guest_wrmsr() is required to cope with being called in
>>>> toolstack context.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> With the further intentions mentioned in the description (as a
>>> justification for some of the earlier requested changes to not
>>> be done), as indicated in a late response to v1
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> I thought that was already clear from the second paragraph. Either way,
>> how about this?
>
> Yes, I like this new version better. Thanks.
Release-acked-by: Julien Grall <julien.grall@linaro.org>
Cheers,
--
Julien Grall