xen-devel.lists.xenproject.org archive mirror
* [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup
@ 2014-02-25 10:21 Jan Beulich
  2014-02-25 10:28 ` [PATCH 1/5] x86/hvm: refine the judgment on IDENT_PT for EMT Jan Beulich
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Jan Beulich @ 2014-02-25 10:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Yang Z Zhang, Keir Fraser, Dongxiao Xu, Eddie Dong, Jun Nakajima

1: x86/hvm: refine the judgment on IDENT_PT for EMT
2: x86/HVM: fix memory type merging in epte_get_entry_emt()
3: x86/HVM: consolidate passthrough handling in epte_get_entry_emt()
4: x86/HVM: use manifest constants / enumerators for memory types
5: x86/HVM: adjust data definitions in mtrr.c

With this series in place (or actually just the first three patches,
as the rest is cleanup), two open points remain. One is the need to
fully drop the dependency on HVM_PARAM_IDENT_PT (see the discussion
started at
http://lists.xenproject.org/archives/html/xen-devel/2014-02/msg02150.html).
The other main question is whether the dependency on iommu_snoop is
really correct: I don't see why the IOMMU's snooping capability should
affect the cachability of memory accesses - especially in the GPU
passthrough case, RAM pages may need to be mapped UC/WC if the GPU is
permitted direct access to them, and uniformly using WB here seems to
be asking for problems.

Signed-off-by: Jan Beulich <jbeulich@suse.com>


* [PATCH 1/5] x86/hvm: refine the judgment on IDENT_PT for EMT
  2014-02-25 10:21 [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Jan Beulich
@ 2014-02-25 10:28 ` Jan Beulich
  2014-02-25 10:29 ` [PATCH 2/5] x86/HVM: fix memory type merging in epte_get_entry_emt() Jan Beulich
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2014-02-25 10:28 UTC (permalink / raw)
  To: xen-devel
  Cc: Yang Z Zhang, Keir Fraser, Dongxiao Xu, Eddie Dong, Jun Nakajima

When determining the EPT EMT type, the check of HVM_PARAM_IDENT_PT is
not correct: it causes WB to always be returned as long as the
parameter is not set. Remove the related code.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>

We can't fully drop the dependency yet, but we should certainly avoid
overriding cases that are already handled properly. The reason is that
the guest sets up its MTRRs only _after_ the EPT tables have already
been constructed, and no code is in place to propagate such changes to
the EPT code. Without this check we would be forcing the guest to run
with all of its memory uncachable until something happens to re-write
every single EPT entry. Of course this can only be a temporary
solution.

In the same spirit, the "very early" override (used while the guest is
still being constructed and has no vCPU yet) should be deferred to the
last possible point.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -689,13 +689,8 @@ uint8_t epte_get_entry_emt(struct domain
 
     *ipat = 0;
 
-    if ( (current->domain != d) &&
-         ((d->vcpu == NULL) || ((v = d->vcpu[0]) == NULL)) )
-        return MTRR_TYPE_WRBACK;
-
-    if ( !is_pvh_vcpu(v) &&
-         !v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] )
-        return MTRR_TYPE_WRBACK;
+    if ( v->domain != d )
+        v = d->vcpu ? d->vcpu[0] : NULL;
 
     if ( !mfn_valid(mfn_x(mfn)) )
         return MTRR_TYPE_UNCACHABLE;
@@ -718,7 +713,8 @@ uint8_t epte_get_entry_emt(struct domain
         return MTRR_TYPE_WRBACK;
     }
 
-    gmtrr_mtype = is_hvm_vcpu(v) ?
+    gmtrr_mtype = is_hvm_domain(d) && v &&
+                  d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] ?
                   get_mtrr_type(&v->arch.hvm_vcpu.mtrr, (gfn << PAGE_SHIFT)) :
                   MTRR_TYPE_WRBACK;
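
As an aside for readers following along outside the Xen tree, the
following standalone sketch mimics the revised fallback logic: the
guest's MTRR-derived type is consulted only once HVM_PARAM_IDENT_PT
has been set and a vCPU is available, with WB used otherwise. All
structures and helpers below are simplified stand-ins invented for
illustration; they are not the actual Xen interfaces.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { MTRR_TYPE_UNCACHABLE = 0, MTRR_TYPE_WRBACK = 6 };

struct vcpu;
struct domain {
    struct vcpu **vcpu;        /* vcpu[0] may still be NULL early on */
    bool is_hvm;
    uint64_t ident_pt_param;   /* models HVM_PARAM_IDENT_PT */
    uint8_t guest_mtrr_type;   /* models the get_mtrr_type() result */
};
struct vcpu { struct domain *domain; };

static uint8_t guest_emt(struct domain *d, struct vcpu *curr)
{
    struct vcpu *v = curr;

    /* If the caller isn't a vCPU of d, fall back to d's vCPU 0 (if any). */
    if ( v == NULL || v->domain != d )
        v = d->vcpu ? d->vcpu[0] : NULL;

    /* Consult the guest MTRRs only once IDENT_PT is set and a vCPU exists. */
    if ( d->is_hvm && v && d->ident_pt_param )
        return d->guest_mtrr_type;

    return MTRR_TYPE_WRBACK;   /* guest still being constructed: default to WB */
}

int main(void)
{
    struct vcpu v0;
    struct vcpu *vcpus[1] = { &v0 };
    struct domain d = { vcpus, true, 0, MTRR_TYPE_UNCACHABLE };

    v0.domain = &d;
    printf("%u\n", (unsigned)guest_emt(&d, &v0)); /* 6 (WB): IDENT_PT unset */
    d.ident_pt_param = 1;
    printf("%u\n", (unsigned)guest_emt(&d, &v0)); /* 0 (UC): guest MTRRs used */
    return 0;
}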
 





* [PATCH 2/5] x86/HVM: fix memory type merging in epte_get_entry_emt()
  2014-02-25 10:21 [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Jan Beulich
  2014-02-25 10:28 ` [PATCH 1/5] x86/hvm: refine the judgment on IDENT_PT for EMT Jan Beulich
@ 2014-02-25 10:29 ` Jan Beulich
  2014-02-25 10:30 ` [PATCH 3/5] x86/HVM: consolidate passthrough handling " Jan Beulich
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2014-02-25 10:29 UTC (permalink / raw)
  To: xen-devel
  Cc: Yang Z Zhang, Keir Fraser, Dongxiao Xu, Eddie Dong, Jun Nakajima

Using the minimum of the numeric values of the guest and host
specified memory types is too simplistic - it produces the correct
result only for a subset of type combinations. In particular,
conflicting combinations involving WC need to be converted to UC,
while a conflicting WT/WP pair resolves to WP.
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -719,5 +719,35 @@ uint8_t epte_get_entry_emt(struct domain
                   MTRR_TYPE_WRBACK;
 
     hmtrr_mtype = get_mtrr_type(&mtrr_state, (mfn_x(mfn) << PAGE_SHIFT));
-    return ((gmtrr_mtype <= hmtrr_mtype) ? gmtrr_mtype : hmtrr_mtype);
+
+    /* If both types match we're fine. */
+    if ( likely(gmtrr_mtype == hmtrr_mtype) )
+        return hmtrr_mtype;
+
+    /* If either type is UC, we have to go with that one. */
+    if ( gmtrr_mtype == MTRR_TYPE_UNCACHABLE ||
+         hmtrr_mtype == MTRR_TYPE_UNCACHABLE )
+        return MTRR_TYPE_UNCACHABLE;
+
+    /* If either type is WB, we have to go with the other one. */
+    if ( gmtrr_mtype == MTRR_TYPE_WRBACK )
+        return hmtrr_mtype;
+    if ( hmtrr_mtype == MTRR_TYPE_WRBACK )
+        return gmtrr_mtype;
+
+    /*
+     * At this point we have disagreeing WC, WT, or WP types. The only
+     * combination that can be cleanly resolved is WT:WP. The ones involving
+     * WC need to be converted to UC, both due to the memory ordering
+     * differences and because WC disallows reads to be cached (WT and WP
+     * permit this), while WT and WP require writes to go straight to memory
+     * (WC can buffer them).
+     */
+    if ( (gmtrr_mtype == MTRR_TYPE_WRTHROUGH &&
+          hmtrr_mtype == MTRR_TYPE_WRPROT) ||
+         (gmtrr_mtype == MTRR_TYPE_WRPROT &&
+          hmtrr_mtype == MTRR_TYPE_WRTHROUGH) )
+        return MTRR_TYPE_WRPROT;
+
+    return MTRR_TYPE_UNCACHABLE;
 }
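
To make the new merge rules easy to experiment with, here is a
self-contained C rendition of just the type-combining logic added
above, together with a tiny test. The enumerator values follow the
architectural MTRR encodings; everything else (the function name and
the test values) is invented purely for illustration.

#include <stdint.h>
#include <stdio.h>

enum {
    MTRR_TYPE_UNCACHABLE = 0,
    MTRR_TYPE_WRCOMB     = 1,
    MTRR_TYPE_WRTHROUGH  = 4,
    MTRR_TYPE_WRPROT     = 5,
    MTRR_TYPE_WRBACK     = 6,
};

/* Merge guest and host MTRR-derived types per the rules in the patch. */
static uint8_t merge_mtrr_types(uint8_t gmtrr, uint8_t hmtrr)
{
    if ( gmtrr == hmtrr )                       /* identical types: fine */
        return hmtrr;
    if ( gmtrr == MTRR_TYPE_UNCACHABLE || hmtrr == MTRR_TYPE_UNCACHABLE )
        return MTRR_TYPE_UNCACHABLE;            /* UC always wins */
    if ( gmtrr == MTRR_TYPE_WRBACK )            /* WB defers to the other */
        return hmtrr;
    if ( hmtrr == MTRR_TYPE_WRBACK )
        return gmtrr;
    if ( (gmtrr == MTRR_TYPE_WRTHROUGH && hmtrr == MTRR_TYPE_WRPROT) ||
         (gmtrr == MTRR_TYPE_WRPROT && hmtrr == MTRR_TYPE_WRTHROUGH) )
        return MTRR_TYPE_WRPROT;                /* WT:WP resolves to WP */
    return MTRR_TYPE_UNCACHABLE;                /* any conflict involving WC */
}

int main(void)
{
    /* A few examples: WB:WT -> WT, WC:WT -> UC, WT:WP -> WP. */
    printf("%u %u %u\n",
           merge_mtrr_types(MTRR_TYPE_WRBACK, MTRR_TYPE_WRTHROUGH),
           merge_mtrr_types(MTRR_TYPE_WRCOMB, MTRR_TYPE_WRTHROUGH),
           merge_mtrr_types(MTRR_TYPE_WRTHROUGH, MTRR_TYPE_WRPROT));
    return 0;
}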


* [PATCH 3/5] x86/HVM: consolidate passthrough handling in epte_get_entry_emt()
  2014-02-25 10:21 [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Jan Beulich
  2014-02-25 10:28 ` [PATCH 1/5] x86/hvm: refine the judgment on IDENT_PT for EMT Jan Beulich
  2014-02-25 10:29 ` [PATCH 2/5] x86/HVM: fix memory type merging in epte_get_entry_emt() Jan Beulich
@ 2014-02-25 10:30 ` Jan Beulich
  2014-02-25 10:30 ` [PATCH 4/5] x86/HVM: use manifest constants / enumerators for memory types Jan Beulich
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2014-02-25 10:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Yang Z Zhang, Keir Fraser, Dongxiao Xu, Eddie Dong, Jun Nakajima

It is inconsistent to depend on iommu_enabled alone: for a guest
without devices passed through to it, it is of no concern whether the
IOMMU is enabled.

There's one rather special case to take care of: the VMX code marks
the LAPIC access page as MMIO. The added assertion needs to take this
into consideration, and the subsequent handling of the direct MMIO
case was inconsistent too: that page would have been WB in the absence
of an IOMMU, but UC in its presence, while in fact the cachability of
this page is entirely unrelated to an IOMMU being in use.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2090,9 +2090,9 @@ static int vmx_alloc_vlapic_mapping(stru
     if ( apic_va == NULL )
         return -ENOMEM;
     share_xen_page_with_guest(virt_to_page(apic_va), d, XENSHARE_writable);
+    d->arch.hvm_domain.vmx.apic_access_mfn = virt_to_mfn(apic_va);
     set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE),
         _mfn(virt_to_mfn(apic_va)));
-    d->arch.hvm_domain.vmx.apic_access_mfn = virt_to_mfn(apic_va);
 
     return 0;
 }
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -698,14 +698,20 @@ uint8_t epte_get_entry_emt(struct domain
     if ( hvm_get_mem_pinned_cacheattr(d, gfn, &type) )
         return type;
 
-    if ( !iommu_enabled )
+    if ( !iommu_enabled ||
+         (rangeset_is_empty(d->iomem_caps) &&
+          rangeset_is_empty(d->arch.ioport_caps) &&
+          !has_arch_pdevs(d)) )
     {
+        ASSERT(!direct_mmio ||
+               mfn_x(mfn) == d->arch.hvm_domain.vmx.apic_access_mfn);
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
     }
 
     if ( direct_mmio )
-        return MTRR_TYPE_UNCACHABLE;
+        return mfn_x(mfn) != d->arch.hvm_domain.vmx.apic_access_mfn
+               ? MTRR_TYPE_UNCACHABLE : MTRR_TYPE_WRBACK;
 
     if ( iommu_snoop )
     {
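
The condition being added boils down to "is the IOMMU enabled, and
does this guest actually have any direct device or I/O access?". A
minimal standalone sketch of that predicate follows; the rangeset and
domain structures here are toy stand-ins loosely modelled on the Xen
helpers of the same names, not the real definitions.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for Xen's rangeset and domain structures. */
struct rangeset { unsigned int nr_ranges; };

static bool rangeset_is_empty(const struct rangeset *r)
{
    return r->nr_ranges == 0;
}

struct domain {
    struct rangeset iomem_caps;    /* MMIO ranges the guest may map */
    struct rangeset ioport_caps;   /* I/O ports the guest may access */
    bool has_arch_pdevs;           /* any PCI device assigned to the guest? */
};

/*
 * Guest-controlled memory types only matter when the IOMMU is enabled
 * *and* the guest has some form of direct device/IO access; otherwise
 * WB can safely be forced (with the "ignore PAT" bit set).
 */
static bool needs_guest_controlled_emt(const struct domain *d,
                                       bool iommu_enabled)
{
    return iommu_enabled &&
           (!rangeset_is_empty(&d->iomem_caps) ||
            !rangeset_is_empty(&d->ioport_caps) ||
            d->has_arch_pdevs);
}

int main(void)
{
    struct domain plain = { { 0 }, { 0 }, false };
    struct domain gpu   = { { 1 }, { 0 }, true };

    printf("%d %d\n",
           needs_guest_controlled_emt(&plain, true),  /* 0: force WB */
           needs_guest_controlled_emt(&gpu, true));   /* 1: honour guest types */
    return 0;
}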





* [PATCH 4/5] x86/HVM: use manifest constants / enumerators for memory types
  2014-02-25 10:21 [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Jan Beulich
                   ` (2 preceding siblings ...)
  2014-02-25 10:30 ` [PATCH 3/5] x86/HVM: consolidate passthrough handling " Jan Beulich
@ 2014-02-25 10:30 ` Jan Beulich
  2014-03-05 16:34   ` [PATCH v2 " Jan Beulich
  2014-02-25 10:31 ` [PATCH 5/5] x86/HVM: adjust data definitions in mtrr.c Jan Beulich
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2014-02-25 10:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Yang Z Zhang, Keir Fraser, Dongxiao Xu, Eddie Dong, Jun Nakajima

... instead of literal numbers, thus making it possible for the reader
to understand the code without having to look up the meaning of these
numbers.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -235,9 +235,16 @@ int hvm_set_guest_pat(struct vcpu *v, u6
     uint8_t *value = (uint8_t *)&guest_pat;
 
     for ( i = 0; i < 8; i++ )
-        if ( unlikely(!(value[i] == 0 || value[i] == 1 ||
-                        value[i] == 4 || value[i] == 5 ||
-                        value[i] == 6 || value[i] == 7)) ) {
+        switch ( value[i] )
+        {
+        case PAT_TYPE_UC_MINUS:
+        case PAT_TYPE_UNCACHABLE:
+        case PAT_TYPE_WRBACK:
+        case PAT_TYPE_WRCOMB:
+        case PAT_TYPE_WRPROT:
+        case PAT_TYPE_WRTHROUGH:
+            break;
+        default:
             HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid guest PAT: %"PRIx64"\n",
                         guest_pat); 
             return 0;
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -42,22 +42,28 @@ static const uint8_t pat_entry_2_pte_fla
 
 /* Effective mm type lookup table, according to MTRR and PAT. */
 static const uint8_t mm_type_tbl[MTRR_NUM_TYPES][PAT_TYPE_NUMS] = {
-/********PAT(UC,WC,RS,RS,WT,WP,WB,UC-)*/
-/* RS means reserved type(2,3), and type is hardcoded here */
- /*MTRR(UC):(UC,WC,RS,RS,UC,UC,UC,UC)*/
-            {0, 1, 2, 2, 0, 0, 0, 0},
- /*MTRR(WC):(UC,WC,RS,RS,UC,UC,WC,WC)*/
-            {0, 1, 2, 2, 0, 0, 1, 1},
- /*MTRR(RS):(RS,RS,RS,RS,RS,RS,RS,RS)*/
-            {2, 2, 2, 2, 2, 2, 2, 2},
- /*MTRR(RS):(RS,RS,RS,RS,RS,RS,RS,RS)*/
-            {2, 2, 2, 2, 2, 2, 2, 2},
- /*MTRR(WT):(UC,WC,RS,RS,WT,WP,WT,UC)*/
-            {0, 1, 2, 2, 4, 5, 4, 0},
- /*MTRR(WP):(UC,WC,RS,RS,WT,WP,WP,WC)*/
-            {0, 1, 2, 2, 4, 5, 5, 1},
- /*MTRR(WB):(UC,WC,RS,RS,WT,WP,WB,UC)*/
-            {0, 1, 2, 2, 4, 5, 6, 0}
+#define RS MEMORY_NUM_TYPES
+#define UC MTRR_TYPE_UNCACHABLE
+#define WB MTRR_TYPE_WRBACK
+#define WC MTRR_TYPE_WRCOMB
+#define WP MTRR_TYPE_WRPROT
+#define WT MTRR_TYPE_WRTHROUGH
+
+/*          PAT(UC, WC, RS, RS, WT, WP, WB, UC-) */
+/* MTRR(UC) */ {UC, WC, RS, RS, UC, UC, UC, UC},
+/* MTRR(WC) */ {UC, WC, RS, RS, UC, UC, WC, WC},
+/* MTRR(RS) */ {RS, RS, RS, RS, RS, RS, RS, RS},
+/* MTRR(RS) */ {RS, RS, RS, RS, RS, RS, RS, RS},
+/* MTRR(WT) */ {UC, WC, RS, RS, WT, WP, WT, UC},
+/* MTRR(WP) */ {UC, WC, RS, RS, WT, WP, WP, WC},
+/* MTRR(WB) */ {UC, WC, RS, RS, WT, WP, WB, UC}
+
+#undef UC
+#undef WC
+#undef WT
+#undef WP
+#undef WB
+#undef RS
 };
 
 /*
@@ -403,13 +409,26 @@ uint32_t get_pat_flags(struct vcpu *v,
     return pat_type_2_pte_flags(pat_entry_value);
 }
 
+static inline bool_t valid_mtrr_type(uint8_t type)
+{
+    switch ( type )
+    {
+    case MTRR_TYPE_UNCACHABLE:
+    case MTRR_TYPE_WRBACK:
+    case MTRR_TYPE_WRCOMB:
+    case MTRR_TYPE_WRPROT:
+    case MTRR_TYPE_WRTHROUGH:
+        return 1;
+    }
+    return 0;
+}
+
 bool_t mtrr_def_type_msr_set(struct mtrr_state *m, uint64_t msr_content)
 {
     uint8_t def_type = msr_content & 0xff;
     uint8_t enabled = (msr_content >> 10) & 0x3;
 
-    if ( unlikely(!(def_type == 0 || def_type == 1 || def_type == 4 ||
-                    def_type == 5 || def_type == 6)) )
+    if ( unlikely(!valid_mtrr_type(def_type)) )
     {
          HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid MTRR def type:%x\n", def_type);
          return 0;
@@ -436,15 +455,11 @@ bool_t mtrr_fix_range_msr_set(struct mtr
     if ( fixed_range_base[row] != msr_content )
     {
         uint8_t *range = (uint8_t*)&msr_content;
-        int32_t i, type;
+        unsigned int i;
 
         for ( i = 0; i < 8; i++ )
-        {
-            type = range[i];
-            if ( unlikely(!(type == 0 || type == 1 ||
-                            type == 4 || type == 5 || type == 6)) )
+            if ( unlikely(!valid_mtrr_type(range[i])) )
                 return 0;
-        }
 
         fixed_range_base[row] = msr_content;
     }
@@ -455,7 +470,7 @@ bool_t mtrr_fix_range_msr_set(struct mtr
 bool_t mtrr_var_range_msr_set(
     struct domain *d, struct mtrr_state *m, uint32_t msr, uint64_t msr_content)
 {
-    uint32_t index, type, phys_addr, eax, ebx, ecx, edx;
+    uint32_t index, phys_addr, eax, ebx, ecx, edx;
     uint64_t msr_mask;
     uint64_t *var_range_base = (uint64_t*)m->var_ranges;
 
@@ -463,9 +478,7 @@ bool_t mtrr_var_range_msr_set(
     if ( var_range_base[index] == msr_content )
         return 1;
 
-    type = (uint8_t)msr_content;
-    if ( unlikely(!(type == 0 || type == 1 ||
-                    type == 4 || type == 5 || type == 6)) )
+    if ( unlikely(!valid_mtrr_type((uint8_t)msr_content)) )
         return 0;
 
     phys_addr = 36;
--- a/xen/include/asm-x86/mtrr.h
+++ b/xen/include/asm-x86/mtrr.h
@@ -20,7 +20,6 @@
 enum {
     PAT_TYPE_UNCACHABLE=0,
     PAT_TYPE_WRCOMB=1,
-    PAT_TYPE_RESERVED=2,
     PAT_TYPE_WRTHROUGH=4,
     PAT_TYPE_WRPROT=5,
     PAT_TYPE_WRBACK=6,
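
For reference, this is roughly what the new valid_mtrr_type() helper
amounts to when lifted out of Xen into a self-contained example. The
enumerator values are the architectural MTRR type encodings, and the
main() merely mimics the per-byte validation done for a fixed-range
MTRR MSR; it is an illustration, not the Xen code itself.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum {
    MTRR_TYPE_UNCACHABLE = 0,
    MTRR_TYPE_WRCOMB     = 1,
    /* 2 and 3 are reserved encodings */
    MTRR_TYPE_WRTHROUGH  = 4,
    MTRR_TYPE_WRPROT     = 5,
    MTRR_TYPE_WRBACK     = 6,
};

static bool valid_mtrr_type(uint8_t type)
{
    switch ( type )
    {
    case MTRR_TYPE_UNCACHABLE:
    case MTRR_TYPE_WRCOMB:
    case MTRR_TYPE_WRTHROUGH:
    case MTRR_TYPE_WRPROT:
    case MTRR_TYPE_WRBACK:
        return true;
    }
    return false;              /* 2, 3, and 7+ are invalid for MTRRs */
}

int main(void)
{
    /* Validate each byte of a fixed-range MTRR MSR value. */
    uint64_t msr = 0x0606060606060606ULL;   /* all WB: valid */
    for ( unsigned int i = 0; i < 8; i++ )
        if ( !valid_mtrr_type((uint8_t)(msr >> (i * 8))) )
            return 1;
    puts("all MTRR type fields valid");
    return 0;
}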




* [PATCH 5/5] x86/HVM: adjust data definitions in mtrr.c
  2014-02-25 10:21 [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Jan Beulich
                   ` (3 preceding siblings ...)
  2014-02-25 10:30 ` [PATCH 4/5] x86/HVM: use manifest constants / enumerators for memory types Jan Beulich
@ 2014-02-25 10:31 ` Jan Beulich
  2014-03-03  8:35 ` [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Xu, Dongxiao
  2014-03-07  9:08 ` Keir Fraser
  6 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2014-02-25 10:31 UTC (permalink / raw)
  To: xen-devel
  Cc: Yang Z Zhang, Keir Fraser, Dongxiao Xu, Eddie Dong, Jun Nakajima

- use proper section attributes
- use initializers where possible
- clean up pat_type_2_pte_flags()

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -70,10 +70,14 @@ static const uint8_t mm_type_tbl[MTRR_NU
  * Reverse lookup table, to find a pat type according to MTRR and effective
  * memory type. This table is dynamically generated.
  */
-static uint8_t mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES];
+static uint8_t __read_mostly mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES] =
+    { [0 ... MTRR_NUM_TYPES-1] =
+        { [0 ... MEMORY_NUM_TYPES-1] = INVALID_MEM_TYPE }
+    };
 
 /* Lookup table for PAT entry of a given PAT value in host PAT. */
-static uint8_t pat_entry_tbl[PAT_TYPE_NUMS];
+static uint8_t __read_mostly pat_entry_tbl[PAT_TYPE_NUMS] =
+    { [0 ... PAT_TYPE_NUMS-1] = INVALID_MEM_TYPE };
 
 static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
                            uint64_t *base, uint64_t *end)
@@ -149,23 +153,21 @@ bool_t is_var_mtrr_overlapped(struct mtr
 #define MTRRphysBase_MSR(reg) (0x200 + 2 * (reg))
 #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)
 
-static int hvm_mtrr_pat_init(void)
+static int __init hvm_mtrr_pat_init(void)
 {
     unsigned int i, j, phys_addr;
 
-    memset(&mtrr_epat_tbl, INVALID_MEM_TYPE, sizeof(mtrr_epat_tbl));
     for ( i = 0; i < MTRR_NUM_TYPES; i++ )
     {
         for ( j = 0; j < PAT_TYPE_NUMS; j++ )
         {
-            int32_t tmp = mm_type_tbl[i][j];
-            if ( (tmp >= 0) && (tmp < MEMORY_NUM_TYPES) )
+            unsigned int tmp = mm_type_tbl[i][j];
+
+            if ( tmp < MEMORY_NUM_TYPES )
                 mtrr_epat_tbl[i][tmp] = j;
         }
     }
 
-    memset(&pat_entry_tbl, INVALID_MEM_TYPE,
-           PAT_TYPE_NUMS * sizeof(pat_entry_tbl[0]));
     for ( i = 0; i < PAT_TYPE_NUMS; i++ )
     {
         for ( j = 0; j < PAT_TYPE_NUMS; j++ )
@@ -190,16 +192,16 @@ __initcall(hvm_mtrr_pat_init);
 
 uint8_t pat_type_2_pte_flags(uint8_t pat_type)
 {
-    int32_t pat_entry = pat_entry_tbl[pat_type];
+    unsigned int pat_entry = pat_entry_tbl[pat_type];
 
-    /* INVALID_MEM_TYPE, means doesn't find the pat_entry in host pat for
-     * a given pat_type. If host pat covers all the pat types,
-     * it can't happen.
+    /*
+     * INVALID_MEM_TYPE, means doesn't find the pat_entry in host PAT for a
+     * given pat_type. If host PAT covers all the PAT types, it can't happen.
      */
-    if ( likely(pat_entry != INVALID_MEM_TYPE) )
-        return pat_entry_2_pte_flags[pat_entry];
+    if ( unlikely(pat_entry == INVALID_MEM_TYPE) )
+        pat_entry = pat_entry_tbl[PAT_TYPE_UNCACHABLE];
 
-    return pat_entry_2_pte_flags[pat_entry_tbl[PAT_TYPE_UNCACHABLE]];
+    return pat_entry_2_pte_flags[pat_entry];
 }
 
 int hvm_vcpu_cacheattr_init(struct vcpu *v)
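
The key trick in the data definition changes is the GNU C designated
range initializer ([first ... last] = value), which lets the tables be
pre-filled at build time instead of via memset() at boot. A toy
example follows, with made-up table sizes and INVALID_MEM_TYPE value;
it compiles with GCC or Clang (the range designator is an extension).

#include <stdio.h>
#include <stdint.h>

#define MTRR_NUM_TYPES   8
#define MEMORY_NUM_TYPES 8
#define INVALID_MEM_TYPE 0xff

/* Every entry starts out invalid; an init function then fills in valid ones. */
static uint8_t mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES] =
    { [0 ... MTRR_NUM_TYPES - 1] =
        { [0 ... MEMORY_NUM_TYPES - 1] = INVALID_MEM_TYPE }
    };

int main(void)
{
    /* Confirm the whole table is pre-filled without any memset(). */
    printf("%#x %#x\n",
           (unsigned)mtrr_epat_tbl[0][0],
           (unsigned)mtrr_epat_tbl[MTRR_NUM_TYPES - 1][MEMORY_NUM_TYPES - 1]);
    return 0;
}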





* Re: [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup
  2014-02-25 10:21 [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Jan Beulich
                   ` (4 preceding siblings ...)
  2014-02-25 10:31 ` [PATCH 5/5] x86/HVM: adjust data definitions in mtrr.c Jan Beulich
@ 2014-03-03  8:35 ` Xu, Dongxiao
  2014-03-04  8:12   ` Jan Beulich
  2014-03-07  9:08 ` Keir Fraser
  6 siblings, 1 reply; 11+ messages in thread
From: Xu, Dongxiao @ 2014-03-03  8:35 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Zhang, Yang Z, Keir Fraser, Dong, Eddie, Nakajima, Jun

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, February 25, 2014 6:21 PM
> To: xen-devel
> Cc: Xu, Dongxiao; Dong, Eddie; Nakajima, Jun; Zhang, Yang Z; Keir Fraser
> Subject: [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup
> 
> 1: x86/hvm: refine the judgment on IDENT_PT for EMT
> 2: x86/HVM: fix memory type merging in epte_get_entry_emt()
> 3: x86/HVM: consolidate passthrough handling in epte_get_entry_emt()
> 4: x86/HVM: use manifest constants / enumerators for memory types
> 5: x86/HVM: adjust data definitions in mtrr.c

For 1-3 (related to logic change), they look good to me.

Thanks,
Dongxiao

> 
> With this series in place (or actually the first three patches thereof,
> as the rest is cleanup), apart from the need to fully drop the
> dependency on HVM_PARAM_IDENT_PT (see the discussion started
> at http://lists.xenproject.org/archives/html/xen-devel/2014-02/msg02150.html)
> the other main question is whether the dependency on iommu_snoop
> is really correct: I don't see why the IOMMU's snooping capability
> would affect the cachability of memory accesses - especially in the
> GPU passthrough case, RAM pages may need mapping as UC/WC
> if the GPU is permitted direct access to them - uniformly using WB
> here seems to be calling for problems.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>


* Re: [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup
  2014-03-03  8:35 ` [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Xu, Dongxiao
@ 2014-03-04  8:12   ` Jan Beulich
  2014-03-10  4:18     ` Xu, Dongxiao
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2014-03-04  8:12 UTC (permalink / raw)
  To: Dongxiao Xu; +Cc: Yang Z Zhang, xen-devel, Keir Fraser, Eddie Dong, Jun Nakajima

>>> On 03.03.14 at 09:35, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Tuesday, February 25, 2014 6:21 PM
>> To: xen-devel
>> Cc: Xu, Dongxiao; Dong, Eddie; Nakajima, Jun; Zhang, Yang Z; Keir Fraser
>> Subject: [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup
>> 
>> 1: x86/hvm: refine the judgment on IDENT_PT for EMT
>> 2: x86/HVM: fix memory type merging in epte_get_entry_emt()
>> 3: x86/HVM: consolidate passthrough handling in epte_get_entry_emt()
>> 4: x86/HVM: use manifest constants / enumerators for memory types
>> 5: x86/HVM: adjust data definitions in mtrr.c
> 
> For 1-3 (related to logic change), they look good to me.

Please be explicit here - I would _think_ that I can translate this to
a Reviewed-by, but I'm not sure.

Further, with you commenting on 1-3 only, do you have any
reservations towards 4 and/or 5?

Jan

>> With this series in place (or actually the first three patches thereof,
>> as the rest is cleanup), apart from the need to fully drop the
>> dependency on HVM_PARAM_IDENT_PT (see the discussion started
>> at http://lists.xenproject.org/archives/html/xen-devel/2014-02/msg02150.html)
>> the other main question is whether the dependency on iommu_snoop
>> is really correct: I don't see why the IOMMU's snooping capability
>> would affect the cachability of memory accesses - especially in the
>> GPU passthrough case, RAM pages may need mapping as UC/WC
>> if the GPU is permitted direct access to them - uniformly using WB
>> here seems to be calling for problems.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>


* [PATCH v2 4/5] x86/HVM: use manifest constants / enumerators for memory types
  2014-02-25 10:30 ` [PATCH 4/5] x86/HVM: use manifest constants / enumerators for memory types Jan Beulich
@ 2014-03-05 16:34   ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2014-03-05 16:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Yang Z Zhang, Keir Fraser, Dongxiao Xu, Eddie Dong, Jun Nakajima

... instead of literal numbers, thus making it possible for the reader
to understand the code without having to look up the meaning of these
numbers.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Spotted two more cases (in get_mtrr_type()).

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -235,9 +235,16 @@ int hvm_set_guest_pat(struct vcpu *v, u6
     uint8_t *value = (uint8_t *)&guest_pat;
 
     for ( i = 0; i < 8; i++ )
-        if ( unlikely(!(value[i] == 0 || value[i] == 1 ||
-                        value[i] == 4 || value[i] == 5 ||
-                        value[i] == 6 || value[i] == 7)) ) {
+        switch ( value[i] )
+        {
+        case PAT_TYPE_UC_MINUS:
+        case PAT_TYPE_UNCACHABLE:
+        case PAT_TYPE_WRBACK:
+        case PAT_TYPE_WRCOMB:
+        case PAT_TYPE_WRPROT:
+        case PAT_TYPE_WRTHROUGH:
+            break;
+        default:
             HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid guest PAT: %"PRIx64"\n",
                         guest_pat); 
             return 0;
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -42,22 +42,28 @@ static const uint8_t pat_entry_2_pte_fla
 
 /* Effective mm type lookup table, according to MTRR and PAT. */
 static const uint8_t mm_type_tbl[MTRR_NUM_TYPES][PAT_TYPE_NUMS] = {
-/********PAT(UC,WC,RS,RS,WT,WP,WB,UC-)*/
-/* RS means reserved type(2,3), and type is hardcoded here */
- /*MTRR(UC):(UC,WC,RS,RS,UC,UC,UC,UC)*/
-            {0, 1, 2, 2, 0, 0, 0, 0},
- /*MTRR(WC):(UC,WC,RS,RS,UC,UC,WC,WC)*/
-            {0, 1, 2, 2, 0, 0, 1, 1},
- /*MTRR(RS):(RS,RS,RS,RS,RS,RS,RS,RS)*/
-            {2, 2, 2, 2, 2, 2, 2, 2},
- /*MTRR(RS):(RS,RS,RS,RS,RS,RS,RS,RS)*/
-            {2, 2, 2, 2, 2, 2, 2, 2},
- /*MTRR(WT):(UC,WC,RS,RS,WT,WP,WT,UC)*/
-            {0, 1, 2, 2, 4, 5, 4, 0},
- /*MTRR(WP):(UC,WC,RS,RS,WT,WP,WP,WC)*/
-            {0, 1, 2, 2, 4, 5, 5, 1},
- /*MTRR(WB):(UC,WC,RS,RS,WT,WP,WB,UC)*/
-            {0, 1, 2, 2, 4, 5, 6, 0}
+#define RS MEMORY_NUM_TYPES
+#define UC MTRR_TYPE_UNCACHABLE
+#define WB MTRR_TYPE_WRBACK
+#define WC MTRR_TYPE_WRCOMB
+#define WP MTRR_TYPE_WRPROT
+#define WT MTRR_TYPE_WRTHROUGH
+
+/*          PAT(UC, WC, RS, RS, WT, WP, WB, UC-) */
+/* MTRR(UC) */ {UC, WC, RS, RS, UC, UC, UC, UC},
+/* MTRR(WC) */ {UC, WC, RS, RS, UC, UC, WC, WC},
+/* MTRR(RS) */ {RS, RS, RS, RS, RS, RS, RS, RS},
+/* MTRR(RS) */ {RS, RS, RS, RS, RS, RS, RS, RS},
+/* MTRR(WT) */ {UC, WC, RS, RS, WT, WP, WT, UC},
+/* MTRR(WP) */ {UC, WC, RS, RS, WT, WP, WP, WC},
+/* MTRR(WB) */ {UC, WC, RS, RS, WT, WP, WB, UC}
+
+#undef UC
+#undef WC
+#undef WT
+#undef WP
+#undef WB
+#undef RS
 };
 
 /*
@@ -301,11 +307,12 @@ static uint8_t get_mtrr_type(struct mtrr
         */
        return overlap_mtrr_pos;
 
-   if ( overlap_mtrr & 0x1 )
+   if ( overlap_mtrr & (1 << MTRR_TYPE_UNCACHABLE) )
        /* Two or more match, one is UC. */
        return MTRR_TYPE_UNCACHABLE;
 
-   if ( !(overlap_mtrr & 0xaf) )
+   if ( !(overlap_mtrr &
+          ~((1 << MTRR_TYPE_WRTHROUGH) | (1 << MTRR_TYPE_WRBACK))) )
        /* Two or more match, WT and WB. */
        return MTRR_TYPE_WRTHROUGH;
 
@@ -403,13 +410,26 @@ uint32_t get_pat_flags(struct vcpu *v,
     return pat_type_2_pte_flags(pat_entry_value);
 }
 
+static inline bool_t valid_mtrr_type(uint8_t type)
+{
+    switch ( type )
+    {
+    case MTRR_TYPE_UNCACHABLE:
+    case MTRR_TYPE_WRBACK:
+    case MTRR_TYPE_WRCOMB:
+    case MTRR_TYPE_WRPROT:
+    case MTRR_TYPE_WRTHROUGH:
+        return 1;
+    }
+    return 0;
+}
+
 bool_t mtrr_def_type_msr_set(struct mtrr_state *m, uint64_t msr_content)
 {
     uint8_t def_type = msr_content & 0xff;
     uint8_t enabled = (msr_content >> 10) & 0x3;
 
-    if ( unlikely(!(def_type == 0 || def_type == 1 || def_type == 4 ||
-                    def_type == 5 || def_type == 6)) )
+    if ( unlikely(!valid_mtrr_type(def_type)) )
     {
          HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid MTRR def type:%x\n", def_type);
          return 0;
@@ -436,15 +456,11 @@ bool_t mtrr_fix_range_msr_set(struct mtr
     if ( fixed_range_base[row] != msr_content )
     {
         uint8_t *range = (uint8_t*)&msr_content;
-        int32_t i, type;
+        unsigned int i;
 
         for ( i = 0; i < 8; i++ )
-        {
-            type = range[i];
-            if ( unlikely(!(type == 0 || type == 1 ||
-                            type == 4 || type == 5 || type == 6)) )
+            if ( unlikely(!valid_mtrr_type(range[i])) )
                 return 0;
-        }
 
         fixed_range_base[row] = msr_content;
     }
@@ -455,7 +471,7 @@ bool_t mtrr_fix_range_msr_set(struct mtr
 bool_t mtrr_var_range_msr_set(
     struct domain *d, struct mtrr_state *m, uint32_t msr, uint64_t msr_content)
 {
-    uint32_t index, type, phys_addr, eax, ebx, ecx, edx;
+    uint32_t index, phys_addr, eax, ebx, ecx, edx;
     uint64_t msr_mask;
     uint64_t *var_range_base = (uint64_t*)m->var_ranges;
 
@@ -463,9 +479,7 @@ bool_t mtrr_var_range_msr_set(
     if ( var_range_base[index] == msr_content )
         return 1;
 
-    type = (uint8_t)msr_content;
-    if ( unlikely(!(type == 0 || type == 1 ||
-                    type == 4 || type == 5 || type == 6)) )
+    if ( unlikely(!valid_mtrr_type((uint8_t)msr_content)) )
         return 0;
 
     phys_addr = 36;
--- a/xen/include/asm-x86/mtrr.h
+++ b/xen/include/asm-x86/mtrr.h
@@ -20,7 +20,6 @@
 enum {
     PAT_TYPE_UNCACHABLE=0,
     PAT_TYPE_WRCOMB=1,
-    PAT_TYPE_RESERVED=2,
     PAT_TYPE_WRTHROUGH=4,
     PAT_TYPE_WRPROT=5,
     PAT_TYPE_WRBACK=6,
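
The two additional get_mtrr_type() cases in v2 replace the magic masks
0x1 and 0xaf with expressions built from the type enumerators. As a
standalone illustration of that overlap-resolution step (the bitmask
has bit N set when a variable range of type N overlaps the address),
one might write the following; the function and test values are
invented for illustration only.

#include <stdint.h>
#include <stdio.h>

enum {
    MTRR_TYPE_UNCACHABLE = 0,
    MTRR_TYPE_WRCOMB     = 1,
    MTRR_TYPE_WRTHROUGH  = 4,
    MTRR_TYPE_WRPROT     = 5,
    MTRR_TYPE_WRBACK     = 6,
};

static int resolve_overlap(uint8_t overlap_mtrr)
{
    /* Precondition: two or more different types overlap the address. */
    if ( overlap_mtrr & (1u << MTRR_TYPE_UNCACHABLE) )
        return MTRR_TYPE_UNCACHABLE;        /* one of them is UC */

    /* ~((1 << WT) | (1 << WB)) is the 0xaf the old code open-coded. */
    if ( !(overlap_mtrr &
           ~((1u << MTRR_TYPE_WRTHROUGH) | (1u << MTRR_TYPE_WRBACK))) )
        return MTRR_TYPE_WRTHROUGH;         /* only WT and WB overlap */

    return -1;                              /* other mixes aren't well-defined */
}

int main(void)
{
    printf("%d %d\n",
           resolve_overlap((1u << MTRR_TYPE_WRTHROUGH) |
                           (1u << MTRR_TYPE_WRBACK)),       /* 4 (WT) */
           resolve_overlap((1u << MTRR_TYPE_UNCACHABLE) |
                           (1u << MTRR_TYPE_WRBACK)));      /* 0 (UC) */
    return 0;
}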




* Re: [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup
  2014-02-25 10:21 [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Jan Beulich
                   ` (5 preceding siblings ...)
  2014-03-03  8:35 ` [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup Xu, Dongxiao
@ 2014-03-07  9:08 ` Keir Fraser
  6 siblings, 0 replies; 11+ messages in thread
From: Keir Fraser @ 2014-03-07  9:08 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, Eddie Dong, Dongxiao Xu, Jun Nakajima, Yang Z Zhang,
	xen-devel


Jan Beulich wrote:
>
> 1: x86/hvm: refine the judgment on IDENT_PT for EMT
> 2: x86/HVM: fix memory type merging in epte_get_entry_emt()
> 3: x86/HVM: consolidate passthrough handling in epte_get_entry_emt()
> 4: x86/HVM: use manifest constants / enumerators for memory types
> 5: x86/HVM: adjust data definitions in mtrr.c
>
> With this series in place (or actually the first three patches thereof,
> as the rest is cleanup), apart from the need to fully drop the
> dependency on HVM_PARAM_IDENT_PT (see the discussion started
> at 
> http://lists.xenproject.org/archives/html/xen-devel/2014-02/msg02150.html)
> the other main question is whether the dependency on iommu_snoop
> is really correct: I don't see why the IOMMU's snooping capability
> would affect the cachability of memory accesses - especially in the
> GPU passthrough case, RAM pages may need mapping as UC/WC
> if the GPU is permitted direct access to them - uniformly using WB
> here seems to be calling for problems.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>


Acked-by: Keir Fraser <keir@xen.org>


* Re: [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup
  2014-03-04  8:12   ` Jan Beulich
@ 2014-03-10  4:18     ` Xu, Dongxiao
  0 siblings, 0 replies; 11+ messages in thread
From: Xu, Dongxiao @ 2014-03-10  4:18 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Zhang, Yang Z, xen-devel, Keir Fraser, Dong, Eddie, Nakajima, Jun

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan Beulich
> Sent: Tuesday, March 04, 2014 4:13 PM
> To: Xu, Dongxiao
> Cc: Zhang, Yang Z; xen-devel; KeirFraser; Dong, Eddie; Nakajima, Jun
> Subject: Re: [Xen-devel] [PATCH 0/5] x86: EPT/MTRR interaction adjustments and
> cleanup
> 
> >>> On 03.03.14 at 09:35, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Tuesday, February 25, 2014 6:21 PM
> >> To: xen-devel
> >> Cc: Xu, Dongxiao; Dong, Eddie; Nakajima, Jun; Zhang, Yang Z; Keir Fraser
> >> Subject: [PATCH 0/5] x86: EPT/MTRR interaction adjustments and cleanup
> >>
> >> 1: x86/hvm: refine the judgment on IDENT_PT for EMT
> >> 2: x86/HVM: fix memory type merging in epte_get_entry_emt()
> >> 3: x86/HVM: consolidate passthrough handling in epte_get_entry_emt()
> >> 4: x86/HVM: use manifest constants / enumerators for memory types
> >> 5: x86/HVM: adjust data definitions in mtrr.c
> >
> > For 1-3 (related to logic change), they look good to me.
> 
> Please be explicit here - I would _think_ that I can translate this to
> a Reviewed-by, but I'm not sure.

Yes, sure.

> 
> Further, with you commenting on 1-3 only, do you have any
> reservations towards 4 and/or 5?

No reservations on 4 and 5. Those two patches are mainly code cleanup, so I didn't review them in detail.

Thanks,
Dongxiao

> 
> Jan
> 
> >> With this series in place (or actually the first three patches thereof,
> >> as the rest is cleanup), apart from the need to fully drop the
> >> dependency on HVM_PARAM_IDENT_PT (see the discussion started
> >> at
> http://lists.xenproject.org/archives/html/xen-devel/2014-02/msg02150.html)
> >> the other main question is whether the dependency on iommu_snoop
> >> is really correct: I don't see why the IOMMU's snooping capability
> >> would affect the cachability of memory accesses - especially in the
> >> GPU passthrough case, RAM pages may need mapping as UC/WC
> >> if the GPU is permitted direct access to them - uniformly using WB
> >> here seems to be calling for problems.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
