xen-devel.lists.xenproject.org archive mirror
* [PATCH v2 0/3] x86/AMD: feature masking adjustments
@ 2014-04-07  9:36 Jan Beulich
  2014-04-07  9:41 ` [PATCH v2 1/3] x86/AMD: feature masking is unavailable on Fam11 Jan Beulich
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Jan Beulich @ 2014-04-07  9:36 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Aravind Gopalakrishnan, suravee.suthikulpanit

1: feature masking is unavailable on Fam11
2: support further feature masking MSRs
3: clean up pre-canned family/revision handling for CPUID masking

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Patch 1 is new, the other two patches are merely rebased on top of it.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v2 1/3] x86/AMD: feature masking is unavailable on Fam11
  2014-04-07  9:36 [PATCH v2 0/3] x86/AMD: feature masking adjustments Jan Beulich
@ 2014-04-07  9:41 ` Jan Beulich
  2014-04-07 10:14   ` Andrew Cooper
  2014-04-07  9:43 ` [PATCH v2 2/3] x86/AMD: support further feature masking MSRs Jan Beulich
  2014-04-07  9:43 ` [PATCH v2 3/3] x86/AMD: clean up pre-canned family/revision handling for CPUID masking Jan Beulich
  2 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2014-04-07  9:41 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Aravind Gopalakrishnan, suravee.suthikulpanit

[-- Attachment #1: Type: text/plain, Size: 887 bytes --]

Reported-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -107,6 +107,10 @@ static void __devinit set_cpuidmask(cons
 	ASSERT((status == not_parsed) && (smp_processor_id() == 0));
 	status = no_mask;
 
+	/* Fam11 doesn't support masking at all. */
+	if (c->x86 == 0x11)
+		return;
+
 	if (~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx &
 	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx)) {
 		feat_ecx = opt_cpuid_mask_ecx;
@@ -176,7 +180,6 @@ static void __devinit set_cpuidmask(cons
 	       extfeat_ecx, extfeat_edx);
 
  setmask:
-	/* FIXME check if processor supports CPUID masking */
 	/* AMD processors prior to family 10h required a 32-bit password */
 	if (c->x86 >= 0x10) {
 		wrmsr(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);

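The logic of the patch is a single early bail-out: family 11h parts have no feature-mask MSRs, so set_cpuidmask() must return before parsing any options for them. A minimal stand-alone sketch of that check (the struct and helper here are hypothetical illustrations, not Xen code):

```c
#include <stdbool.h>

/* Hypothetical stand-in for the relevant field of Xen's cpuinfo_x86;
 * only the family number matters for this check. */
struct cpuinfo {
	unsigned int x86; /* CPU family, e.g. 0x0f, 0x10, 0x11, 0x15 */
};

/* Returns true when the CPU family supports AMD's CPUID feature-mask
 * MSRs.  Family 11h (0x11) has no masking support at all, which is why
 * the patch adds "if (c->x86 == 0x11) return;" before any parsing. */
static bool supports_cpuid_masking(const struct cpuinfo *c)
{
	return c->x86 != 0x11;
}
```

Placing the check before `status = no_mask` parsing also means the FIXME comment at the `setmask` label becomes stale, hence its removal in the same hunk.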



[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-07  9:36 [PATCH v2 0/3] x86/AMD: feature masking adjustments Jan Beulich
  2014-04-07  9:41 ` [PATCH v2 1/3] x86/AMD: feature masking is unavailable on Fam11 Jan Beulich
@ 2014-04-07  9:43 ` Jan Beulich
  2014-04-07 10:23   ` Andrew Cooper
  2014-04-07 15:21   ` Boris Ostrovsky
  2014-04-07  9:43 ` [PATCH v2 3/3] x86/AMD: clean up pre-canned family/revision handling for CPUID masking Jan Beulich
  2 siblings, 2 replies; 19+ messages in thread
From: Jan Beulich @ 2014-04-07  9:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Aravind Gopalakrishnan, suravee.suthikulpanit

[-- Attachment #1: Type: text/plain, Size: 6917 bytes --]

Newer AMD CPUs also allow masking CPUID leaf 6 ECX and CPUID leaf 7
sub-leaf 0 EAX and EBX.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
---
v2: Adjust for new prior patch. Apply leaf 6 mask only on Fam15.

--- 2014-03-17.orig/docs/misc/xen-command-line.markdown	2014-04-04 11:27:32.000000000 +0200
+++ 2014-03-17/docs/misc/xen-command-line.markdown	2014-03-18 14:37:31.000000000 +0100
@@ -331,24 +331,42 @@ Indicate where the responsibility for dr
 ### cpuid\_mask\_cpu (AMD only)
 > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
 
-If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
-(unspecified on the command line), specify a pre-canned cpuid mask to
-mask the current processor down to appear as the specified processor.
-It is important to ensure that all hosts in a pool appear the same to
-guests to allow successful live migration.
+If the other **cpuid\_mask\_{,ext\_,thermal\_,l7s0\_}e{a,b,c,d}x**
+options are fully set (unspecified on the command line), specify a
+pre-canned cpuid mask to mask the current processor down to appear as
+the specified processor. It is important to ensure that all hosts in a
+pool appear the same to guests to allow successful live migration.
 
-### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
+### cpuid\_mask\_{{,ext\_}ecx,edx}
 > `= <integer>`
 
 > Default: `~0` (all bits set)
 
-These five command line parameters are used to specify cpuid masks to
+These four command line parameters are used to specify cpuid masks to
 help with cpuid levelling across a pool of hosts.  Setting a bit in
 the mask indicates that the feature should be enabled, while clearing
 a bit in the mask indicates that the feature should be disabled.  It
 is important to ensure that all hosts in a pool appear the same to
 guests to allow successful live migration.
 
+### cpuid\_mask\_xsave\_eax (Intel only)
+> `= <integer>`
+
+> Default: `~0` (all bits set)
+
+This command line parameter is also used to specify a cpuid mask to
+help with cpuid levelling across a pool of hosts.  See the description
+of the other respective options above.
+
+### cpuid\_mask\_{l7s0\_{eax,ebx},thermal\_ecx} (AMD only)
+> `= <integer>`
+
+> Default: `~0` (all bits set)
+
+These three command line parameters are also used to specify cpuid
+masks to help with cpuid levelling across a pool of hosts.  See the
+description of the other respective options above.
+
 ### cpuidle
 > `= <boolean>`
 
--- 2014-03-17.orig/xen/arch/x86/cpu/amd.c	2014-04-04 11:31:51.000000000 +0200
+++ 2014-03-17/xen/arch/x86/cpu/amd.c	2014-04-04 11:35:31.000000000 +0200
@@ -30,9 +30,17 @@
  * "fam_10_rev_c"
  * "fam_11_rev_b"
  */
-static char opt_famrev[14];
+static char __initdata opt_famrev[14];
 string_param("cpuid_mask_cpu", opt_famrev);
 
+static unsigned int __initdata opt_cpuid_mask_l7s0_eax = ~0u;
+integer_param("cpuid_mask_l7s0_eax", opt_cpuid_mask_l7s0_eax);
+static unsigned int __initdata opt_cpuid_mask_l7s0_ebx = ~0u;
+integer_param("cpuid_mask_l7s0_ebx", opt_cpuid_mask_l7s0_ebx);
+
+static unsigned int __initdata opt_cpuid_mask_thermal_ecx = ~0u;
+integer_param("cpuid_mask_thermal_ecx", opt_cpuid_mask_thermal_ecx);
+
 /* 1 = allow, 0 = don't allow guest creation, -1 = don't allow boot */
 s8 __read_mostly opt_allow_unsafe;
 boolean_param("allow_unsafe", opt_allow_unsafe);
@@ -96,7 +104,11 @@ static void __devinit set_cpuidmask(cons
 {
 	static unsigned int feat_ecx, feat_edx;
 	static unsigned int extfeat_ecx, extfeat_edx;
+	static unsigned int l7s0_eax, l7s0_ebx;
+	static unsigned int thermal_ecx;
+	static bool_t skip_l7s0_eax_ebx, skip_thermal_ecx;
 	static enum { not_parsed, no_mask, set_mask } status;
+	unsigned int eax, ebx, ecx, edx;
 
 	if (status == no_mask)
 		return;
@@ -104,7 +116,7 @@ static void __devinit set_cpuidmask(cons
 	if (status == set_mask)
 		goto setmask;
 
-	ASSERT((status == not_parsed) && (smp_processor_id() == 0));
+	ASSERT((status == not_parsed) && (c == &boot_cpu_data));
 	status = no_mask;
 
 	/* Fam11 doesn't support masking at all. */
@@ -112,11 +124,16 @@ static void __devinit set_cpuidmask(cons
 		return;
 
 	if (~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx &
-	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx)) {
+	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx &
+	      opt_cpuid_mask_l7s0_eax & opt_cpuid_mask_l7s0_ebx &
+	      opt_cpuid_mask_thermal_ecx)) {
 		feat_ecx = opt_cpuid_mask_ecx;
 		feat_edx = opt_cpuid_mask_edx;
 		extfeat_ecx = opt_cpuid_mask_ext_ecx;
 		extfeat_edx = opt_cpuid_mask_ext_edx;
+		l7s0_eax = opt_cpuid_mask_l7s0_eax;
+		l7s0_ebx = opt_cpuid_mask_l7s0_ebx;
+		thermal_ecx = opt_cpuid_mask_thermal_ecx;
 	} else if (*opt_famrev == '\0') {
 		return;
 	} else if (!strcmp(opt_famrev, "fam_0f_rev_c")) {
@@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
 	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n", 
 	       extfeat_ecx, extfeat_edx);
 
+	if (c->cpuid_level >= 7)
+		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
+	else
+		ebx = eax = 0;
+	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
+		if (l7s0_eax > eax)
+			l7s0_eax = eax;
+		l7s0_ebx &= ebx;
+		printk("Writing CPUID leaf 7 subleaf 0 feature mask EAX:EBX -> %08Xh:%08Xh\n",
+		       l7s0_eax, l7s0_ebx);
+	} else
+		skip_l7s0_eax_ebx = 1;
+
+	/* Only Fam15 has the respective MSR. */
+	ecx = c->x86 == 0x15 && c->cpuid_level >= 6 ? cpuid_ecx(6) : 0;
+	if (ecx && ~thermal_ecx) {
+		thermal_ecx &= ecx;
+		printk("Writing CPUID thermal/power feature mask ECX -> %08Xh\n",
+		       thermal_ecx);
+	} else
+		skip_thermal_ecx = 1;
+
  setmask:
 	/* AMD processors prior to family 10h required a 32-bit password */
 	if (c->x86 >= 0x10) {
 		wrmsr(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
 		wrmsr(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
+		if (!skip_l7s0_eax_ebx)
+			wrmsr(MSR_AMD_L7S0_FEATURE_MASK, l7s0_ebx, l7s0_eax);
+		if (!skip_thermal_ecx) {
+			rdmsr(MSR_AMD_THRM_FEATURE_MASK, eax, edx);
+			wrmsr(MSR_AMD_THRM_FEATURE_MASK, thermal_ecx, edx);
+		}
 	} else {
 		wrmsr_amd(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
 		wrmsr_amd(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
--- 2014-03-17.orig/xen/include/asm-x86/msr-index.h	2014-03-10 11:36:00.000000000 +0100
+++ 2014-03-17/xen/include/asm-x86/msr-index.h	2014-03-18 12:19:10.000000000 +0100
@@ -192,6 +192,8 @@
 #define MSR_AMD_FAM15H_EVNTSEL5		0xc001020a
 #define MSR_AMD_FAM15H_PERFCTR5		0xc001020b
 
+#define MSR_AMD_L7S0_FEATURE_MASK	0xc0011002
+#define MSR_AMD_THRM_FEATURE_MASK	0xc0011003
 #define MSR_K8_FEATURE_MASK		0xc0011004
 #define MSR_K8_EXT_FEATURE_MASK		0xc0011005
 

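Note the asymmetry in how the patch combines the command-line masks with what the hardware reports for CPUID leaf 7 sub-leaf 0: EAX there enumerates the maximum sub-leaf number, so it is clamped rather than bitwise-ANDed, while EBX is a feature bitmap and is ANDed. A hedged stand-alone sketch of that combining step (helper name is hypothetical):

```c
/* Combine user-supplied masks for CPUID leaf 7 sub-leaf 0 with the
 * values the hardware actually reports, mirroring the patch's
 * "if (l7s0_eax > eax) l7s0_eax = eax; l7s0_ebx &= ebx;" logic. */
static void combine_l7s0(unsigned int *mask_eax, unsigned int *mask_ebx,
                         unsigned int hw_eax, unsigned int hw_ebx)
{
	if (*mask_eax > hw_eax)
		*mask_eax = hw_eax; /* never advertise more sub-leaves than exist */
	*mask_ebx &= hw_ebx;        /* never advertise features the CPU lacks */
}
```

The thermal/power mask (leaf 6 ECX) is handled similarly but with a plain AND, and its MSR write preserves the register's EDX half via a prior rdmsr, since only ECX is being masked.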



^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v2 3/3] x86/AMD: clean up pre-canned family/revision handling for CPUID masking
  2014-04-07  9:36 [PATCH v2 0/3] x86/AMD: feature masking adjustments Jan Beulich
  2014-04-07  9:41 ` [PATCH v2 1/3] x86/AMD: feature masking is unavailable on Fam11 Jan Beulich
  2014-04-07  9:43 ` [PATCH v2 2/3] x86/AMD: support further feature masking MSRs Jan Beulich
@ 2014-04-07  9:43 ` Jan Beulich
  2014-04-07 10:48   ` Andrew Cooper
  2 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2014-04-07  9:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Aravind Gopalakrishnan, suravee.suthikulpanit

[-- Attachment #1: Type: text/plain, Size: 4187 bytes --]

Make the pre-canned mask handling easier to extend, and move the parsing code/data
into .init.* sections.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -91,6 +91,51 @@ static inline int wrmsr_amd_safe(unsigne
 	return err;
 }
 
+static const struct cpuidmask {
+	uint16_t fam;
+	char rev[2];
+	unsigned int ecx, edx, ext_ecx, ext_edx;
+} pre_canned[] __initconst = {
+#define CAN(fam, id, rev) { \
+		fam, #rev, \
+		AMD_FEATURES_##id##_REV_##rev##_ECX, \
+		AMD_FEATURES_##id##_REV_##rev##_EDX, \
+		AMD_EXTFEATURES_##id##_REV_##rev##_ECX, \
+		AMD_EXTFEATURES_##id##_REV_##rev##_EDX \
+	}
+#define CAN_FAM(fam, rev) CAN(0x##fam, FAM##fam##h, rev)
+#define CAN_K8(rev)       CAN(0x0f,    K8,          rev)
+	CAN_FAM(11, B),
+	CAN_FAM(10, C),
+	CAN_FAM(10, B),
+	CAN_K8(G),
+	CAN_K8(F),
+	CAN_K8(E),
+	CAN_K8(D),
+	CAN_K8(C)
+#undef CAN
+};
+
+static const struct cpuidmask *__init noinline get_cpuidmask(const char *opt)
+{
+	unsigned long fam;
+	char rev;
+	unsigned int i;
+
+	if (strncmp(opt, "fam_", 4))
+		return NULL;
+	fam = simple_strtoul(opt + 4, &opt, 16);
+	if (strncmp(opt, "_rev_", 5))
+		return NULL;
+	rev = toupper(opt[5]);
+
+	for (i = 0; i < ARRAY_SIZE(pre_canned); ++i)
+		if (fam == pre_canned[i].fam && rev == *pre_canned[i].rev)
+			return &pre_canned[i];
+
+	return NULL;
+}
+
 /*
  * Mask the features and extended features returned by CPUID.  Parameters are
  * set from the boot line via two methods:
@@ -136,50 +181,18 @@ static void __devinit set_cpuidmask(cons
 		thermal_ecx = opt_cpuid_mask_thermal_ecx;
 	} else if (*opt_famrev == '\0') {
 		return;
-	} else if (!strcmp(opt_famrev, "fam_0f_rev_c")) {
-		feat_ecx = AMD_FEATURES_K8_REV_C_ECX;
-		feat_edx = AMD_FEATURES_K8_REV_C_EDX;
-		extfeat_ecx = AMD_EXTFEATURES_K8_REV_C_ECX;
-		extfeat_edx = AMD_EXTFEATURES_K8_REV_C_EDX;
-	} else if (!strcmp(opt_famrev, "fam_0f_rev_d")) {
-		feat_ecx = AMD_FEATURES_K8_REV_D_ECX;
-		feat_edx = AMD_FEATURES_K8_REV_D_EDX;
-		extfeat_ecx = AMD_EXTFEATURES_K8_REV_D_ECX;
-		extfeat_edx = AMD_EXTFEATURES_K8_REV_D_EDX;
-	} else if (!strcmp(opt_famrev, "fam_0f_rev_e")) {
-		feat_ecx = AMD_FEATURES_K8_REV_E_ECX;
-		feat_edx = AMD_FEATURES_K8_REV_E_EDX;
-		extfeat_ecx = AMD_EXTFEATURES_K8_REV_E_ECX;
-		extfeat_edx = AMD_EXTFEATURES_K8_REV_E_EDX;
-	} else if (!strcmp(opt_famrev, "fam_0f_rev_f")) {
-		feat_ecx = AMD_FEATURES_K8_REV_F_ECX;
-		feat_edx = AMD_FEATURES_K8_REV_F_EDX;
-		extfeat_ecx = AMD_EXTFEATURES_K8_REV_F_ECX;
-		extfeat_edx = AMD_EXTFEATURES_K8_REV_F_EDX;
-	} else if (!strcmp(opt_famrev, "fam_0f_rev_g")) {
-		feat_ecx = AMD_FEATURES_K8_REV_G_ECX;
-		feat_edx = AMD_FEATURES_K8_REV_G_EDX;
-		extfeat_ecx = AMD_EXTFEATURES_K8_REV_G_ECX;
-		extfeat_edx = AMD_EXTFEATURES_K8_REV_G_EDX;
-	} else if (!strcmp(opt_famrev, "fam_10_rev_b")) {
-		feat_ecx = AMD_FEATURES_FAM10h_REV_B_ECX;
-		feat_edx = AMD_FEATURES_FAM10h_REV_B_EDX;
-		extfeat_ecx = AMD_EXTFEATURES_FAM10h_REV_B_ECX;
-		extfeat_edx = AMD_EXTFEATURES_FAM10h_REV_B_EDX;
-	} else if (!strcmp(opt_famrev, "fam_10_rev_c")) {
-		feat_ecx = AMD_FEATURES_FAM10h_REV_C_ECX;
-		feat_edx = AMD_FEATURES_FAM10h_REV_C_EDX;
-		extfeat_ecx = AMD_EXTFEATURES_FAM10h_REV_C_ECX;
-		extfeat_edx = AMD_EXTFEATURES_FAM10h_REV_C_EDX;
-	} else if (!strcmp(opt_famrev, "fam_11_rev_b")) {
-		feat_ecx = AMD_FEATURES_FAM11h_REV_B_ECX;
-		feat_edx = AMD_FEATURES_FAM11h_REV_B_EDX;
-		extfeat_ecx = AMD_EXTFEATURES_FAM11h_REV_B_ECX;
-		extfeat_edx = AMD_EXTFEATURES_FAM11h_REV_B_EDX;
 	} else {
-		printk("Invalid processor string: %s\n", opt_famrev);
-		printk("CPUID will not be masked\n");
-		return;
+		const struct cpuidmask *m = get_cpuidmask(opt_famrev);
+
+		if (!m) {
+			printk("Invalid processor string: %s\n", opt_famrev);
+			printk("CPUID will not be masked\n");
+			return;
+		}
+		feat_ecx = m->ecx;
+		feat_edx = m->edx;
+		extfeat_ecx = m->ext_ecx;
+		extfeat_edx = m->ext_edx;
 	}
 
         /* Setting bits in the CPUID mask MSR that are not set in the

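The replacement parser accepts any string of the form "fam_<hex>_rev_<letter>" and then looks the pair up in the pre-canned table, instead of strcmp'ing each known string. A simplified, self-contained sketch of just the parsing step (not the table lookup; the helper name is hypothetical):

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Parse "fam_<hex>_rev_<letter>", e.g. "fam_0f_rev_c", into a family
 * number and an upper-cased revision letter, mirroring the patch's
 * get_cpuidmask() prefix handling.  Returns 0 on success, -1 on a
 * malformed string. */
static int parse_famrev(const char *opt, unsigned long *fam, char *rev)
{
	char *end;

	if (strncmp(opt, "fam_", 4))
		return -1;
	*fam = strtoul(opt + 4, &end, 16);
	if (strncmp(end, "_rev_", 5))
		return -1;
	*rev = (char)toupper((unsigned char)end[5]);
	return 0;
}
```

With parsing separated out, adding a new family/revision becomes a one-line addition to the `pre_canned[]` table via the CAN_FAM()/CAN_K8() macros, rather than another four-assignment else-if branch.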



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/3] x86/AMD: feature masking is unavailable on Fam11
  2014-04-07  9:41 ` [PATCH v2 1/3] x86/AMD: feature masking is unavailable on Fam11 Jan Beulich
@ 2014-04-07 10:14   ` Andrew Cooper
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2014-04-07 10:14 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit


[-- Attachment #1.1: Type: text/plain, Size: 1144 bytes --]

On 07/04/14 10:41, Jan Beulich wrote:
> Reported-by: Aravind Gopalakrishnan<aravind.gopalakrishnan@amd.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -107,6 +107,10 @@ static void __devinit set_cpuidmask(cons
>  	ASSERT((status == not_parsed) && (smp_processor_id() == 0));
>  	status = no_mask;
>  
> +	/* Fam11 doesn't support masking at all. */
> +	if (c->x86 == 0x11)
> +		return;
> +
>  	if (~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx &
>  	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx)) {
>  		feat_ecx = opt_cpuid_mask_ecx;
> @@ -176,7 +180,6 @@ static void __devinit set_cpuidmask(cons
>  	       extfeat_ecx, extfeat_edx);
>  
>   setmask:
> -	/* FIXME check if processor supports CPUID masking */
>  	/* AMD processors prior to family 10h required a 32-bit password */
>  	if (c->x86 >= 0x10) {
>  		wrmsr(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
>
>
>
>
>




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-07  9:43 ` [PATCH v2 2/3] x86/AMD: support further feature masking MSRs Jan Beulich
@ 2014-04-07 10:23   ` Andrew Cooper
  2014-04-07 11:53     ` Jan Beulich
  2014-04-07 15:21   ` Boris Ostrovsky
  1 sibling, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2014-04-07 10:23 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit


[-- Attachment #1.1: Type: text/plain, Size: 7874 bytes --]

On 07/04/14 10:43, Jan Beulich wrote:
> Newer AMD CPUs also allow masking CPUID leaf 6 ECX and CPUID leaf 7
> sub-leaf 0 EAX and EBX.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Aravind Gopalakrishnan<aravind.gopalakrishnan@amd.com>
> ---
> v2: Adjust for new prior patch. Apply leaf 6 mask only on Fam15.
>
> --- 2014-03-17.orig/docs/misc/xen-command-line.markdown	2014-04-04 11:27:32.000000000 +0200
> +++ 2014-03-17/docs/misc/xen-command-line.markdown	2014-03-18 14:37:31.000000000 +0100
> @@ -331,24 +331,42 @@ Indicate where the responsibility for dr
>  ### cpuid\_mask\_cpu (AMD only)
>  > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
>  
> -If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
> -(unspecified on the command line), specify a pre-canned cpuid mask to
> -mask the current processor down to appear as the specified processor.
> -It is important to ensure that all hosts in a pool appear the same to
> -guests to allow successful live migration.
> +If the other **cpuid\_mask\_{,ext\_,thermal\_,l7s0\_}e{a,b,c,d}x**
> +options are fully set (unspecified on the command line), specify a
> +pre-canned cpuid mask to mask the current processor down to appear as
> +the specified processor. It is important to ensure that all hosts in a
> +pool appear the same to guests to allow successful live migration.
>  
> -### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
> +### cpuid\_mask\_{{,ext\_}ecx,edx}
>  > `= <integer>`
>  
>  > Default: `~0` (all bits set)
>  
> -These five command line parameters are used to specify cpuid masks to
> +These four command line parameters are used to specify cpuid masks to
>  help with cpuid levelling across a pool of hosts.  Setting a bit in
>  the mask indicates that the feature should be enabled, while clearing
>  a bit in the mask indicates that the feature should be disabled.  It
>  is important to ensure that all hosts in a pool appear the same to
>  guests to allow successful live migration.
>  
> +### cpuid\_mask\_xsave\_eax (Intel only)
> +> `= <integer>`
> +
> +> Default: `~0` (all bits set)
> +
> +This command line parameter is also used to specify a cpuid mask to
> +help with cpuid levelling across a pool of hosts.  See the description
> +of the other respective options above.
> +
> +### cpuid\_mask\_{l7s0\_{eax,ebx},thermal\_ecx} (AMD only)
> +> `= <integer>`
> +
> +> Default: `~0` (all bits set)
> +
> +These three command line parameters are also used to specify cpuid
> +masks to help with cpuid levelling across a pool of hosts.  See the
> +description of the other respective options above.
> +
>  ### cpuidle
>  > `= <boolean>`
>  
> --- 2014-03-17.orig/xen/arch/x86/cpu/amd.c	2014-04-04 11:31:51.000000000 +0200
> +++ 2014-03-17/xen/arch/x86/cpu/amd.c	2014-04-04 11:35:31.000000000 +0200
> @@ -30,9 +30,17 @@
>   * "fam_10_rev_c"
>   * "fam_11_rev_b"
>   */
> -static char opt_famrev[14];
> +static char __initdata opt_famrev[14];
>  string_param("cpuid_mask_cpu", opt_famrev);
>  
> +static unsigned int __initdata opt_cpuid_mask_l7s0_eax = ~0u;
> +integer_param("cpuid_mask_l7s0_eax", opt_cpuid_mask_l7s0_eax);
> +static unsigned int __initdata opt_cpuid_mask_l7s0_ebx = ~0u;
> +integer_param("cpuid_mask_l7s0_ebx", opt_cpuid_mask_l7s0_ebx);
> +
> +static unsigned int __initdata opt_cpuid_mask_thermal_ecx = ~0u;
> +integer_param("cpuid_mask_thermal_ecx", opt_cpuid_mask_thermal_ecx);
> +
>  /* 1 = allow, 0 = don't allow guest creation, -1 = don't allow boot */
>  s8 __read_mostly opt_allow_unsafe;
>  boolean_param("allow_unsafe", opt_allow_unsafe);
> @@ -96,7 +104,11 @@ static void __devinit set_cpuidmask(cons
>  {
>  	static unsigned int feat_ecx, feat_edx;
>  	static unsigned int extfeat_ecx, extfeat_edx;
> +	static unsigned int l7s0_eax, l7s0_ebx;
> +	static unsigned int thermal_ecx;
> +	static bool_t skip_l7s0_eax_ebx, skip_thermal_ecx;
>  	static enum { not_parsed, no_mask, set_mask } status;
> +	unsigned int eax, ebx, ecx, edx;
>  
>  	if (status == no_mask)
>  		return;
> @@ -104,7 +116,7 @@ static void __devinit set_cpuidmask(cons
>  	if (status == set_mask)
>  		goto setmask;
>  
> -	ASSERT((status == not_parsed) && (smp_processor_id() == 0));
> +	ASSERT((status == not_parsed) && (c == &boot_cpu_data));
>  	status = no_mask;
>  
>  	/* Fam11 doesn't support masking at all. */
> @@ -112,11 +124,16 @@ static void __devinit set_cpuidmask(cons
>  		return;
>  
>  	if (~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx &
> -	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx)) {
> +	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx &
> +	      opt_cpuid_mask_l7s0_eax & opt_cpuid_mask_l7s0_ebx &
> +	      opt_cpuid_mask_thermal_ecx)) {
>  		feat_ecx = opt_cpuid_mask_ecx;
>  		feat_edx = opt_cpuid_mask_edx;
>  		extfeat_ecx = opt_cpuid_mask_ext_ecx;
>  		extfeat_edx = opt_cpuid_mask_ext_edx;
> +		l7s0_eax = opt_cpuid_mask_l7s0_eax;
> +		l7s0_ebx = opt_cpuid_mask_l7s0_ebx;
> +		thermal_ecx = opt_cpuid_mask_thermal_ecx;
>  	} else if (*opt_famrev == '\0') {
>  		return;
>  	} else if (!strcmp(opt_famrev, "fam_0f_rev_c")) {
> @@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
>  	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n", 
>  	       extfeat_ecx, extfeat_edx);
>  
> +	if (c->cpuid_level >= 7)
> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
> +	else
> +		ebx = eax = 0;
> +	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
> +		if (l7s0_eax > eax)
> +			l7s0_eax = eax;
> +		l7s0_ebx &= ebx;
> +		printk("Writing CPUID leaf 7 subleaf 0 feature mask EAX:EBX -> %08Xh:%08Xh\n",
> +		       l7s0_eax, l7s0_ebx);
> +	} else
> +		skip_l7s0_eax_ebx = 1;
> +
> +	/* Only Fam15 has the respective MSR. */
> +	ecx = c->x86 == 0x15 && c->cpuid_level >= 6 ? cpuid_ecx(6) : 0;
> +	if (ecx && ~thermal_ecx) {
> +		thermal_ecx &= ecx;
> +		printk("Writing CPUID thermal/power feature mask ECX -> %08Xh\n",
> +		       thermal_ecx);
> +	} else
> +		skip_thermal_ecx = 1;
> +
>   setmask:
>  	/* AMD processors prior to family 10h required a 32-bit password */
>  	if (c->x86 >= 0x10) {
>  		wrmsr(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
>  		wrmsr(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
> +		if (!skip_l7s0_eax_ebx)
> +			wrmsr(MSR_AMD_L7S0_FEATURE_MASK, l7s0_ebx, l7s0_eax);
> +		if (!skip_thermal_ecx) {
> +			rdmsr(MSR_AMD_THRM_FEATURE_MASK, eax, edx);
> +			wrmsr(MSR_AMD_THRM_FEATURE_MASK, thermal_ecx, edx);
> +		}
>  	} else {
>  		wrmsr_amd(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
>  		wrmsr_amd(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);

While editing this, can we remove this crazy split between wrmsr and
wrmsr_amd?  It is safe to use wrmsr_amd in all cases where wrmsr is needed.

I already did this as part of the prep work for per-VM masking, but in
combination with the above, the following should be fine:

wrmsr_amd(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
wrmsr_amd(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
if (!skip_l7s0_eax_ebx)
    wrmsr(MSR_AMD_L7S0_FEATURE_MASK, l7s0_ebx, l7s0_eax);
if (!skip_thermal_ecx) {
    rdmsr(MSR_AMD_THRM_FEATURE_MASK, eax, edx);
    wrmsr(MSR_AMD_THRM_FEATURE_MASK, thermal_ecx, edx);
}

~Andrew

> --- 2014-03-17.orig/xen/include/asm-x86/msr-index.h	2014-03-10 11:36:00.000000000 +0100
> +++ 2014-03-17/xen/include/asm-x86/msr-index.h	2014-03-18 12:19:10.000000000 +0100
> @@ -192,6 +192,8 @@
>  #define MSR_AMD_FAM15H_EVNTSEL5		0xc001020a
>  #define MSR_AMD_FAM15H_PERFCTR5		0xc001020b
>  
> +#define MSR_AMD_L7S0_FEATURE_MASK	0xc0011002
> +#define MSR_AMD_THRM_FEATURE_MASK	0xc0011003
>  #define MSR_K8_FEATURE_MASK		0xc0011004
>  #define MSR_K8_EXT_FEATURE_MASK		0xc0011005
>  
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



* Re: [PATCH v2 3/3] x86/AMD: clean up pre-canned family/revision handling for CPUID masking
  2014-04-07  9:43 ` [PATCH v2 3/3] x86/AMD: clean up pre-canned family/revision handling for CPUID masking Jan Beulich
@ 2014-04-07 10:48   ` Andrew Cooper
  2014-04-07 11:55     ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2014-04-07 10:48 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit


On 07/04/14 10:43, Jan Beulich wrote:
> Make it so this is easier to extend, and move the parsing code/data
> into .init.* sections.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
>
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -91,6 +91,51 @@ static inline int wrmsr_amd_safe(unsigne
>  	return err;
>  }
>  
> +static const struct cpuidmask {
> +	uint16_t fam;
> +	char rev[2];
> +	unsigned int ecx, edx, ext_ecx, ext_edx;
> +} pre_canned[] __initconst = {

possibly name the structure amd_precanned_featuremasks or similar, to
make it a little more descriptive than just "pre_canned"?

> +#define CAN(fam, id, rev) { \
> +		fam, #rev, \
> +		AMD_FEATURES_##id##_REV_##rev##_ECX, \
> +		AMD_FEATURES_##id##_REV_##rev##_EDX, \
> +		AMD_EXTFEATURES_##id##_REV_##rev##_ECX, \
> +		AMD_EXTFEATURES_##id##_REV_##rev##_EDX \
> +	}
> +#define CAN_FAM(fam, rev) CAN(0x##fam, FAM##fam##h, rev)
> +#define CAN_K8(rev)       CAN(0x0f,    K8,          rev)
> +	CAN_FAM(11, B),
> +	CAN_FAM(10, C),
> +	CAN_FAM(10, B),
> +	CAN_K8(G),
> +	CAN_K8(F),
> +	CAN_K8(E),
> +	CAN_K8(D),
> +	CAN_K8(C)
> +#undef CAN
> +};
> +
> +static const struct cpuidmask *__init noinline get_cpuidmask(const char *opt)
> +{
> +	unsigned long fam;
> +	char rev;
> +	unsigned int i;
> +
> +	if (strncmp(opt, "fam_", 4))
> +		return NULL;
> +	fam = simple_strtoul(opt + 4, &opt, 16);
> +	if (strncmp(opt, "_rev_", 5))
> +		return NULL;
> +	rev = toupper(opt[5]);

if ( opt[6] != '\0' )
   return NULL;

Otherwise this code will start accepting malformed values it would
previously discard.
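
With that trailing-NUL check folded in, the whole parse might look like
the following sketch (illustrative only - it uses the standard library's
strtoul() and toupper() in place of Xen's simple_strtoul(), and is not
the actual patch code):

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/*
 * Parse "fam_<hex>_rev_<letter>" strictly: exactly one revision
 * character, nothing trailing.  Returns 1 on success, 0 on malformed
 * input (which the pre-canned table lookup would then reject).
 */
static int parse_famrev(const char *opt, unsigned long *fam, char *rev)
{
    char *end;

    if (strncmp(opt, "fam_", 4))
        return 0;
    *fam = strtoul(opt + 4, &end, 16);
    if (strncmp(end, "_rev_", 5))
        return 0;
    if (end[5] == '\0' || end[6] != '\0')   /* exactly one revision char */
        return 0;
    *rev = toupper((unsigned char)end[5]);
    return 1;
}
```

So "fam_0f_rev_c" parses to family 0x0f, revision 'C', while strings
with trailing garbage such as "fam_0f_rev_cc" are rejected as before.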

~Andrew

> +
> +	for (i = 0; i < ARRAY_SIZE(pre_canned); ++i)
> +		if (fam == pre_canned[i].fam && rev == *pre_canned[i].rev)
> +			return &pre_canned[i];
> +
> +	return NULL;
> +}
> +
>  /*
>   * Mask the features and extended features returned by CPUID.  Parameters are
>   * set from the boot line via two methods:
> @@ -136,50 +181,18 @@ static void __devinit set_cpuidmask(cons
>  		thermal_ecx = opt_cpuid_mask_thermal_ecx;
>  	} else if (*opt_famrev == '\0') {
>  		return;
> -	} else if (!strcmp(opt_famrev, "fam_0f_rev_c")) {
> -		feat_ecx = AMD_FEATURES_K8_REV_C_ECX;
> -		feat_edx = AMD_FEATURES_K8_REV_C_EDX;
> -		extfeat_ecx = AMD_EXTFEATURES_K8_REV_C_ECX;
> -		extfeat_edx = AMD_EXTFEATURES_K8_REV_C_EDX;
> -	} else if (!strcmp(opt_famrev, "fam_0f_rev_d")) {
> -		feat_ecx = AMD_FEATURES_K8_REV_D_ECX;
> -		feat_edx = AMD_FEATURES_K8_REV_D_EDX;
> -		extfeat_ecx = AMD_EXTFEATURES_K8_REV_D_ECX;
> -		extfeat_edx = AMD_EXTFEATURES_K8_REV_D_EDX;
> -	} else if (!strcmp(opt_famrev, "fam_0f_rev_e")) {
> -		feat_ecx = AMD_FEATURES_K8_REV_E_ECX;
> -		feat_edx = AMD_FEATURES_K8_REV_E_EDX;
> -		extfeat_ecx = AMD_EXTFEATURES_K8_REV_E_ECX;
> -		extfeat_edx = AMD_EXTFEATURES_K8_REV_E_EDX;
> -	} else if (!strcmp(opt_famrev, "fam_0f_rev_f")) {
> -		feat_ecx = AMD_FEATURES_K8_REV_F_ECX;
> -		feat_edx = AMD_FEATURES_K8_REV_F_EDX;
> -		extfeat_ecx = AMD_EXTFEATURES_K8_REV_F_ECX;
> -		extfeat_edx = AMD_EXTFEATURES_K8_REV_F_EDX;
> -	} else if (!strcmp(opt_famrev, "fam_0f_rev_g")) {
> -		feat_ecx = AMD_FEATURES_K8_REV_G_ECX;
> -		feat_edx = AMD_FEATURES_K8_REV_G_EDX;
> -		extfeat_ecx = AMD_EXTFEATURES_K8_REV_G_ECX;
> -		extfeat_edx = AMD_EXTFEATURES_K8_REV_G_EDX;
> -	} else if (!strcmp(opt_famrev, "fam_10_rev_b")) {
> -		feat_ecx = AMD_FEATURES_FAM10h_REV_B_ECX;
> -		feat_edx = AMD_FEATURES_FAM10h_REV_B_EDX;
> -		extfeat_ecx = AMD_EXTFEATURES_FAM10h_REV_B_ECX;
> -		extfeat_edx = AMD_EXTFEATURES_FAM10h_REV_B_EDX;
> -	} else if (!strcmp(opt_famrev, "fam_10_rev_c")) {
> -		feat_ecx = AMD_FEATURES_FAM10h_REV_C_ECX;
> -		feat_edx = AMD_FEATURES_FAM10h_REV_C_EDX;
> -		extfeat_ecx = AMD_EXTFEATURES_FAM10h_REV_C_ECX;
> -		extfeat_edx = AMD_EXTFEATURES_FAM10h_REV_C_EDX;
> -	} else if (!strcmp(opt_famrev, "fam_11_rev_b")) {
> -		feat_ecx = AMD_FEATURES_FAM11h_REV_B_ECX;
> -		feat_edx = AMD_FEATURES_FAM11h_REV_B_EDX;
> -		extfeat_ecx = AMD_EXTFEATURES_FAM11h_REV_B_ECX;
> -		extfeat_edx = AMD_EXTFEATURES_FAM11h_REV_B_EDX;
>  	} else {
> -		printk("Invalid processor string: %s\n", opt_famrev);
> -		printk("CPUID will not be masked\n");
> -		return;
> +		const struct cpuidmask *m = get_cpuidmask(opt_famrev);
> +
> +		if (!m) {
> +			printk("Invalid processor string: %s\n", opt_famrev);
> +			printk("CPUID will not be masked\n");
> +			return;
> +		}
> +		feat_ecx = m->ecx;
> +		feat_edx = m->edx;
> +		extfeat_ecx = m->ext_ecx;
> +		extfeat_edx = m->ext_edx;
>  	}
>  
>          /* Setting bits in the CPUID mask MSR that are not set in the
>



* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-07 10:23   ` Andrew Cooper
@ 2014-04-07 11:53     ` Jan Beulich
  2014-04-07 12:09       ` Andrew Cooper
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2014-04-07 11:53 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

>>> On 07.04.14 at 12:23, <andrew.cooper3@citrix.com> wrote:
> On 07/04/14 10:43, Jan Beulich wrote:
>>  	/* AMD processors prior to family 10h required a 32-bit password */
>>  	if (c->x86 >= 0x10) {
>>  		wrmsr(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
>>  		wrmsr(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
>> +		if (!skip_l7s0_eax_ebx)
>> +			wrmsr(MSR_AMD_L7S0_FEATURE_MASK, l7s0_ebx, l7s0_eax);
>> +		if (!skip_thermal_ecx) {
>> +			rdmsr(MSR_AMD_THRM_FEATURE_MASK, eax, edx);
>> +			wrmsr(MSR_AMD_THRM_FEATURE_MASK, thermal_ecx, edx);
>> +		}
>>  	} else {
>>  		wrmsr_amd(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
>>  		wrmsr_amd(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
> 
> While editing this, can we remove this crazy split between wrmsr and
> wrmsr_amd ?  It is safe to use wrmsr_amd in all cases where wrmsr is needed.

I'm against this - the way it is now makes it very explicit where the
extra input is required.

Jan


* Re: [PATCH v2 3/3] x86/AMD: clean up pre-canned family/revision handling for CPUID masking
  2014-04-07 10:48   ` Andrew Cooper
@ 2014-04-07 11:55     ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2014-04-07 11:55 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

>>> On 07.04.14 at 12:48, <andrew.cooper3@citrix.com> wrote:
> On 07/04/14 10:43, Jan Beulich wrote:
>> +static const struct cpuidmask *__init noinline get_cpuidmask(const char *opt)
>> +{
>> +	unsigned long fam;
>> +	char rev;
>> +	unsigned int i;
>> +
>> +	if (strncmp(opt, "fam_", 4))
>> +		return NULL;
>> +	fam = simple_strtoul(opt + 4, &opt, 16);
>> +	if (strncmp(opt, "_rev_", 5))
>> +		return NULL;
>> +	rev = toupper(opt[5]);
> 
> if ( opt[6] != '\0' )
>    return NULL;
> 
> Otherwise this code will start accepting malformed values it would
> previously discard.

Oh yes, of course.

Jan


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-07 11:53     ` Jan Beulich
@ 2014-04-07 12:09       ` Andrew Cooper
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2014-04-07 12:09 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

On 07/04/14 12:53, Jan Beulich wrote:
>>>> On 07.04.14 at 12:23, <andrew.cooper3@citrix.com> wrote:
>> On 07/04/14 10:43, Jan Beulich wrote:
>>>  	/* AMD processors prior to family 10h required a 32-bit password */
>>>  	if (c->x86 >= 0x10) {
>>>  		wrmsr(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
>>>  		wrmsr(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
>>> +		if (!skip_l7s0_eax_ebx)
>>> +			wrmsr(MSR_AMD_L7S0_FEATURE_MASK, l7s0_ebx, l7s0_eax);
>>> +		if (!skip_thermal_ecx) {
>>> +			rdmsr(MSR_AMD_THRM_FEATURE_MASK, eax, edx);
>>> +			wrmsr(MSR_AMD_THRM_FEATURE_MASK, thermal_ecx, edx);
>>> +		}
>>>  	} else {
>>>  		wrmsr_amd(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
>>>  		wrmsr_amd(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
>> While editing this, can we remove this crazy split between wrmsr and
>> wrmsr_amd ?  It is safe to use wrmsr_amd in all cases where wrmsr is needed.
> I'm against this - the way it is now makes it very explicit where the
> extra input is required.
>
> Jan
>

It bloats the function, which would be less bad if it were an __init
function.

A comment can easily say /* Password required prior to fam 10h */

~Andrew


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-07  9:43 ` [PATCH v2 2/3] x86/AMD: support further feature masking MSRs Jan Beulich
  2014-04-07 10:23   ` Andrew Cooper
@ 2014-04-07 15:21   ` Boris Ostrovsky
  2014-04-08  7:15     ` Jan Beulich
  1 sibling, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2014-04-07 15:21 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

On 04/07/2014 05:43 AM, Jan Beulich wrote:

@@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
  	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n",
  	       extfeat_ecx, extfeat_edx);
  
+	if (c->cpuid_level >= 7)
+		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
+	else
+		ebx = eax = 0;
+	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
+		if (l7s0_eax > eax)
+			l7s0_eax = eax;
+		l7s0_ebx &= ebx;


Can you explain why eax is treated differently here (i.e. not ANDing it 
as is done with ebx)?

-boris


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-07 15:21   ` Boris Ostrovsky
@ 2014-04-08  7:15     ` Jan Beulich
  2014-04-08 13:50       ` Boris Ostrovsky
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2014-04-08  7:15 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

>>> On 07.04.14 at 17:21, <boris.ostrovsky@oracle.com> wrote:
> On 04/07/2014 05:43 AM, Jan Beulich wrote:
> 
> @@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
>   	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n",
>   	       extfeat_ecx, extfeat_edx);
>   
> +	if (c->cpuid_level >= 7)
> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
> +	else
> +		ebx = eax = 0;
> +	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
> +		if (l7s0_eax > eax)
> +			l7s0_eax = eax;
> +		l7s0_ebx &= ebx;
> 
> 
> Can you explain why eax is treated differently here (i.e. not ANDing it 
> as is done with ebx)?

Generally I think code like this implies that you know the specification:
eax here represents the maximum supported subleaf, and hence
needs to be limited rather than masked.
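
To illustrate the limit-vs-mask distinction, a minimal standalone
sketch (illustrative C only, not the actual Xen code):

```c
#include <assert.h>

/*
 * Leaf 7 sub-leaf 0: EAX reports the maximum supported sub-leaf (a
 * count), while EBX is a bitmask of feature flags.  A count is clamped
 * to what hardware supports; a bitmask is ANDed with it.
 */
static unsigned int combine_l7s0_eax(unsigned int hw_eax, unsigned int opt_eax)
{
    return opt_eax > hw_eax ? hw_eax : opt_eax;  /* limit, don't mask */
}

static unsigned int combine_l7s0_ebx(unsigned int hw_ebx, unsigned int opt_ebx)
{
    return hw_ebx & opt_ebx;                     /* plain bitwise mask */
}
```

ANDing EAX instead of clamping it could produce a nonsensical value,
e.g. hardware max 1 masked with 2 would yield 0 rather than 1.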

Jan


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-08  7:15     ` Jan Beulich
@ 2014-04-08 13:50       ` Boris Ostrovsky
  2014-04-08 14:02         ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2014-04-08 13:50 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

On 04/08/2014 03:15 AM, Jan Beulich wrote:
>>>> On 07.04.14 at 17:21, <boris.ostrovsky@oracle.com> wrote:
>> On 04/07/2014 05:43 AM, Jan Beulich wrote:
>>
>> @@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
>>    	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n",
>>    	       extfeat_ecx, extfeat_edx);
>>    
>> +	if (c->cpuid_level >= 7)
>> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
>> +	else
>> +		ebx = eax = 0;
>> +	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
>> +		if (l7s0_eax > eax)
>> +			l7s0_eax = eax;
>> +		l7s0_ebx &= ebx;
>>
>>
>> Can you explain why eax is treated differently here (i.e. not ANDing it
>> as is done with ebx)?
> Generally I think code like this implies that you know the specification:
> eax here represents the maximum supported subleaf, and hence
> needs to be limited rather than masked.

All specs that I have say that bits of CPUID Fn0000_0007_EAX_x0 are 
reserved.

-boris


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-08 13:50       ` Boris Ostrovsky
@ 2014-04-08 14:02         ` Jan Beulich
  2014-04-08 14:14           ` Boris Ostrovsky
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2014-04-08 14:02 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

>>> On 08.04.14 at 15:50, <boris.ostrovsky@oracle.com> wrote:
> On 04/08/2014 03:15 AM, Jan Beulich wrote:
>>>>> On 07.04.14 at 17:21, <boris.ostrovsky@oracle.com> wrote:
>>> On 04/07/2014 05:43 AM, Jan Beulich wrote:
>>>
>>> @@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
>>>    	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n",
>>>    	       extfeat_ecx, extfeat_edx);
>>>    
>>> +	if (c->cpuid_level >= 7)
>>> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
>>> +	else
>>> +		ebx = eax = 0;
>>> +	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
>>> +		if (l7s0_eax > eax)
>>> +			l7s0_eax = eax;
>>> +		l7s0_ebx &= ebx;
>>>
>>>
>>> Can you explain why eax is treated differently here (i.e. not ANDing it
>>> as is done with ebx)?
>> Generally I think code like this implies that you know the specification:
>> eax here represents the maximum supported subleaf, and hence
>> needs to be limited rather than masked.
> 
> All specs that I have say that bits of CPUID Fn0000_0007_EAX_x0 are 
> reserved.

Intel's SDM Vol 2 rev 49 (325383-049US) says "07H Sub-leaf 0 (Input
ECX = 0). EAX Bits 31-00: Reports the maximum input value for
supported leaf 7 sub-leaves." Not sure what other specs you might
be looking at.

Jan


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-08 14:02         ` Jan Beulich
@ 2014-04-08 14:14           ` Boris Ostrovsky
  2014-04-08 14:33             ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2014-04-08 14:14 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

On 04/08/2014 10:02 AM, Jan Beulich wrote:
>>>> On 08.04.14 at 15:50, <boris.ostrovsky@oracle.com> wrote:
>> On 04/08/2014 03:15 AM, Jan Beulich wrote:
>>>>>> On 07.04.14 at 17:21, <boris.ostrovsky@oracle.com> wrote:
>>>> On 04/07/2014 05:43 AM, Jan Beulich wrote:
>>>>
>>>> @@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
>>>>     	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n",
>>>>     	       extfeat_ecx, extfeat_edx);
>>>>     
>>>> +	if (c->cpuid_level >= 7)
>>>> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
>>>> +	else
>>>> +		ebx = eax = 0;
>>>> +	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
>>>> +		if (l7s0_eax > eax)
>>>> +			l7s0_eax = eax;
>>>> +		l7s0_ebx &= ebx;
>>>>
>>>>
>>>> Can you explain why eax is treated differently here (i.e. not ANDing it
>>>> as is done with ebx)?
>>> Generally I think code like this implies that you know the specification:
>>> eax here represents the maximum supported subleaf, and hence
>>> needs to be limited rather than masked.
>> All specs that I have say that bits of CPUID Fn0000_0007_EAX_x0 are
>> reserved.
> Intel's SDM Vol 2 rev 49 (325383-049US) says "07H Sub-leaf 0 (Input
> ECX = 0). EAX Bits 31-00: Reports the maximum input value for
> supported leaf 7 sub-leaves." Not sure what other specs you might
> be looking at.

This is a patch to xen/arch/x86/cpu/amd.c so I was looking at AMD's 
BKDGs.  Why is Intel's definition relevant here?

-boris


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-08 14:14           ` Boris Ostrovsky
@ 2014-04-08 14:33             ` Jan Beulich
  2014-04-08 15:14               ` Boris Ostrovsky
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2014-04-08 14:33 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

>>> On 08.04.14 at 16:14, <boris.ostrovsky@oracle.com> wrote:
> On 04/08/2014 10:02 AM, Jan Beulich wrote:
>>>>> On 08.04.14 at 15:50, <boris.ostrovsky@oracle.com> wrote:
>>> On 04/08/2014 03:15 AM, Jan Beulich wrote:
>>>>>>> On 07.04.14 at 17:21, <boris.ostrovsky@oracle.com> wrote:
>>>>> On 04/07/2014 05:43 AM, Jan Beulich wrote:
>>>>>
>>>>> @@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
>>>>>     	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n",
>>>>>     	       extfeat_ecx, extfeat_edx);
>>>>>     
>>>>> +	if (c->cpuid_level >= 7)
>>>>> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
>>>>> +	else
>>>>> +		ebx = eax = 0;
>>>>> +	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
>>>>> +		if (l7s0_eax > eax)
>>>>> +			l7s0_eax = eax;
>>>>> +		l7s0_ebx &= ebx;
>>>>>
>>>>>
>>>>> Can you explain why eax is treated differently here (i.e. not ANDing it
>>>>> as is done with ebx)?
>>>> Generally I think code like this implies that you know the specification:
>>>> eax here represents the maximum supported subleaf, and hence
>>>> needs to be limited rather than masked.
>>> All specs that I have say that bits of CPUID Fn0000_0007_EAX_x0 are
>>> reserved.
>> Intel's SDM Vol 2 rev 49 (325383-049US) says "07H Sub-leaf 0 (Input
>> ECX = 0). EAX Bits 31-00: Reports the maximum input value for
>> supported leaf 7 sub-leaves." Not sure what other specs you might
>> be looking at.
> 
> This is a patch to xen/arch/x86/cpu/amd.c so I was looking at AMD's 
> BKDGs.  Why is Intel's definition relevant here?

Because leaf 7 is (largely) Intel-defined (AMD adds their extensions
usually at leaves 800000xx).

Jan


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-08 14:33             ` Jan Beulich
@ 2014-04-08 15:14               ` Boris Ostrovsky
  2014-04-09 15:39                 ` Aravind Gopalakrishnan
  0 siblings, 1 reply; 19+ messages in thread
From: Boris Ostrovsky @ 2014-04-08 15:14 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, Aravind Gopalakrishnan,
	suravee.suthikulpanit

On 04/08/2014 10:33 AM, Jan Beulich wrote:
>>>> On 08.04.14 at 16:14, <boris.ostrovsky@oracle.com> wrote:
>> On 04/08/2014 10:02 AM, Jan Beulich wrote:
>>>>>> On 08.04.14 at 15:50, <boris.ostrovsky@oracle.com> wrote:
>>>> On 04/08/2014 03:15 AM, Jan Beulich wrote:
>>>>>>>> On 07.04.14 at 17:21, <boris.ostrovsky@oracle.com> wrote:
>>>>>> On 04/07/2014 05:43 AM, Jan Beulich wrote:
>>>>>>
>>>>>> @@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
>>>>>>      	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n",
>>>>>>      	       extfeat_ecx, extfeat_edx);
>>>>>>      
>>>>>> +	if (c->cpuid_level >= 7)
>>>>>> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
>>>>>> +	else
>>>>>> +		ebx = eax = 0;
>>>>>> +	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
>>>>>> +		if (l7s0_eax > eax)
>>>>>> +			l7s0_eax = eax;
>>>>>> +		l7s0_ebx &= ebx;
>>>>>>
>>>>>>
>>>>>> Can you explain why eax is treated differently here (i.e. not ANDing it
>>>>>> as is done with ebx)?
>>>>> Generally I think code like this implies that you know the specification:
>>>>> eax here represents the maximum supported subleaf, and hence
>>>>> needs to be limited rather than masked.
>>>> All specs that I have say that bits of CPUID Fn0000_0007_EAX_x0 are
>>>> reserved.
>>> Intel's SDM Vol 2 rev 49 (325383-049US) says "07H Sub-leaf 0 (Input
>>> ECX = 0). EAX Bits 31-00: Reports the maximum input value for
>>> supported leaf 7 sub-leaves." Not sure what other specs you might
>>> be looking at.
>> This is a patch to xen/arch/x86/cpu/amd.c so I was looking at AMD's
>> BKDGs.  Why is Intel's definition relevant here?
> Because leaf 7 is (largely) Intel-defined (AMD adds their extensions
> usually at leaves 800000xx).

I don't know whether we can assume that AMD will follow the same 
definitions. They usually try not to diverge from Intel deliberately 
there, but it is never guaranteed.

-boris


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-08 15:14               ` Boris Ostrovsky
@ 2014-04-09 15:39                 ` Aravind Gopalakrishnan
  2014-04-09 15:50                   ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Aravind Gopalakrishnan @ 2014-04-09 15:39 UTC (permalink / raw)
  To: Boris Ostrovsky, Jan Beulich
  Cc: xen-devel, Keir Fraser, suravee.suthikulpanit

On 4/8/2014 10:14 AM, Boris Ostrovsky wrote:
> On 04/08/2014 10:33 AM, Jan Beulich wrote:
>>>>> On 08.04.14 at 16:14, <boris.ostrovsky@oracle.com> wrote:
>>> On 04/08/2014 10:02 AM, Jan Beulich wrote:
>>>>>>> On 08.04.14 at 15:50, <boris.ostrovsky@oracle.com> wrote:
>>>>> On 04/08/2014 03:15 AM, Jan Beulich wrote:
>>>>>>>>> On 07.04.14 at 17:21, <boris.ostrovsky@oracle.com> wrote:
>>>>>>> On 04/07/2014 05:43 AM, Jan Beulich wrote:
>>>>>>>
>>>>>>> @@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
>>>>>>>          printk("Writing CPUID extended feature mask ECX:EDX -> 
>>>>>>> %08Xh:%08Xh\n",
>>>>>>>                 extfeat_ecx, extfeat_edx);
>>>>>>>      +    if (c->cpuid_level >= 7)
>>>>>>> +        cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
>>>>>>> +    else
>>>>>>> +        ebx = eax = 0;
>>>>>>> +    if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
>>>>>>> +        if (l7s0_eax > eax)
>>>>>>> +            l7s0_eax = eax;
>>>>>>> +        l7s0_ebx &= ebx;
>>>>>>>
>>>>>>>
>>>>>>> Can you explain why eax is treated differently here (i.e. not 
>>>>>>> ANDing it
>>>>>>> as is done with ebx)?
>>>>>> Generally I think code like this implies that you know the 
>>>>>> specification:
>>>>>> eax here represents the maximum supported subleaf, and hence
>>>>>> needs to be limited rather than masked.
>>>>> All specs that I have say that bits of CPUID Fn0000_0007_EAX_x0 are
>>>>> reserved.
>>>> Intel's SDM Vol 2 rev 49 (325383-049US) says "07H Sub-leaf 0 (Input
>>>> ECX = 0). EAX Bits 31-00: Reports the maximum input value for
>>>> supported leaf 7 sub-leaves." Not sure what other specs you might
>>>> be looking at.
>>> This is a patch to xen/arch/x86/cpu/amd.c so I was looking at AMD's
>>> BKDGs.  Why is Intel's definition relevant here?
>> Because leaf 7 is (largely) Intel-defined (AMD adds their extensions
>> usually at leaves 800000xx).
>
> I don't know whether we can assume that AMD will follow the same 
> definitions. They usually try not to deliberately do something 
> differently there but this is never guaranteed.
>


Hmm. Boris is right. All BKDGs I can refer to also say cpuid[7,0].eax 
is reserved.

So, we should be OK with allowing the user to mask only cpuid[7,0].ebx?


-Aravind.


* Re: [PATCH v2 2/3] x86/AMD: support further feature masking MSRs
  2014-04-09 15:39                 ` Aravind Gopalakrishnan
@ 2014-04-09 15:50                   ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2014-04-09 15:50 UTC (permalink / raw)
  To: Aravind Gopalakrishnan
  Cc: xen-devel, Boris Ostrovsky, Keir Fraser, suravee.suthikulpanit

>>> On 09.04.14 at 17:39, <aravind.gopalakrishnan@amd.com> wrote:
> Hmm. Boris is right.. All BKDG's I can refer to also say cpuid[7,0].eax 
> is reserved.
> 
> So, we should be OK with allowing user to mask only cpuid[7,0].ebx ?

You have an MSR to mask eax output here, yet you don't know what
this register is going to be used for? Unless there's a reasonable level
of certainty that you'll intentionally break compatibility with Intel here,
I think we're well advised to treat the register the way Intel specifies
it. (Furthermore, as long as it's reserved, it's only ever going to
return 0 from hardware anyway, and hence we'd limit masked output to 0 - no
problem at all.)

Jan


end of thread, other threads:[~2014-04-09 15:50 UTC | newest]

Thread overview: 19+ messages
2014-04-07  9:36 [PATCH v2 0/3] x86/AMD: feature masking adjustments Jan Beulich
2014-04-07  9:41 ` [PATCH v2 1/3] x86/AMD: feature masking is unavailable on Fam11 Jan Beulich
2014-04-07 10:14   ` Andrew Cooper
2014-04-07  9:43 ` [PATCH v2 2/3] x86/AMD: support further feature masking MSRs Jan Beulich
2014-04-07 10:23   ` Andrew Cooper
2014-04-07 11:53     ` Jan Beulich
2014-04-07 12:09       ` Andrew Cooper
2014-04-07 15:21   ` Boris Ostrovsky
2014-04-08  7:15     ` Jan Beulich
2014-04-08 13:50       ` Boris Ostrovsky
2014-04-08 14:02         ` Jan Beulich
2014-04-08 14:14           ` Boris Ostrovsky
2014-04-08 14:33             ` Jan Beulich
2014-04-08 15:14               ` Boris Ostrovsky
2014-04-09 15:39                 ` Aravind Gopalakrishnan
2014-04-09 15:50                   ` Jan Beulich
2014-04-07  9:43 ` [PATCH v2 3/3] x86/AMD: clean up pre-canned family/revision handling for CPUID masking Jan Beulich
2014-04-07 10:48   ` Andrew Cooper
2014-04-07 11:55     ` Jan Beulich
