xen-devel.lists.xenproject.org archive mirror
* [PATCH v2] x86/VMX: sanitize VM86 TSS handling
@ 2017-02-17 12:03 Jan Beulich
  2017-02-20 11:01 ` Tim Deegan
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Jan Beulich @ 2017-02-17 12:03 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Tim Deegan, Jun Nakajima,
	Roger Pau Monne

[-- Attachment #1: Type: text/plain, Size: 10328 bytes --]

The present way of setting this up is flawed: Leaving the I/O bitmap
pointer at zero means that the interrupt redirection bitmap lives
outside (ahead of) the allocated space of the TSS. Similarly setting a
TSS limit of 255 when only 128 bytes get allocated means that 128 extra
bytes may be accessed by the CPU during I/O port access processing.

Introduce a new HVM param to set the allocated size of the TSS, and
have the hypervisor itself take care of setting up the TSS contents,
in particular the I/O bitmap pointer. Both this and the segment limit
now take the allocated size into account.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Instead of HVM_PARAM_VM86_TSS_SIZE, introduce
    HVM_PARAM_VM86_TSS_SIZED, which requires the old parameter to no
    longer be saved in libxc's write_hvm_params(). Only initialize the
    TSS once after the param was set. Request only 384 bytes (and
    128-byte alignment) for the TSS. Add padding byte to capping value.
    Add comment to hvm_copy_to_guest_phys() invocations.

--- a/tools/firmware/hvmloader/hvmloader.c
+++ b/tools/firmware/hvmloader/hvmloader.c
@@ -177,18 +177,29 @@ static void cmos_write_memory_size(void)
 }
 
 /*
- * Set up an empty TSS area for virtual 8086 mode to use. 
- * The only important thing is that it musn't have any bits set 
- * in the interrupt redirection bitmap, so all zeros will do.
+ * Set up an empty TSS area for virtual 8086 mode to use. Its content is
+ * going to be managed by Xen, but zero fill it just in case.
  */
 static void init_vm86_tss(void)
 {
+/*
+ * Have the TSS cover the ISA port range, which makes it
+ * - 104 bytes base structure
+ * - 32 bytes interrupt redirection bitmap
+ * - 128 bytes I/O bitmap
+ * - one trailing byte
+ * or a total of 265 bytes. As it needs to be a multiple of the requested
+ * alignment, this ends up requiring 384 bytes.
+ */
+#define TSS_SIZE (3 * 128)
     void *tss;
 
-    tss = mem_alloc(128, 128);
-    memset(tss, 0, 128);
-    hvm_param_set(HVM_PARAM_VM86_TSS, virt_to_phys(tss));
+    tss = mem_alloc(TSS_SIZE, 128);
+    memset(tss, 0, TSS_SIZE);
+    hvm_param_set(HVM_PARAM_VM86_TSS_SIZED,
+                  ((uint64_t)TSS_SIZE << 32) | virt_to_phys(tss));
     printf("vm86 TSS at %08lx\n", virt_to_phys(tss));
+#undef TSS_SIZE
 }
 
 static void apic_setup(void)
--- a/tools/libxc/xc_sr_save_x86_hvm.c
+++ b/tools/libxc/xc_sr_save_x86_hvm.c
@@ -67,7 +67,7 @@ static int write_hvm_params(struct xc_sr
         HVM_PARAM_PAGING_RING_PFN,
         HVM_PARAM_MONITOR_RING_PFN,
         HVM_PARAM_SHARING_RING_PFN,
-        HVM_PARAM_VM86_TSS,
+        HVM_PARAM_VM86_TSS_SIZED,
         HVM_PARAM_CONSOLE_PFN,
         HVM_PARAM_ACPI_IOPORTS_LOCATION,
         HVM_PARAM_VIRIDIAN,
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2849,6 +2849,50 @@ static int hvm_load_segment_selector(
     return 1;
 }
 
+struct tss32 {
+    uint16_t back_link, :16;
+    uint32_t esp0;
+    uint16_t ss0, :16;
+    uint32_t esp1;
+    uint16_t ss1, :16;
+    uint32_t esp2;
+    uint16_t ss2, :16;
+    uint32_t cr3, eip, eflags, eax, ecx, edx, ebx, esp, ebp, esi, edi;
+    uint16_t es, :16, cs, :16, ss, :16, ds, :16, fs, :16, gs, :16, ldt, :16;
+    uint16_t trace /* :1 */, iomap;
+};
+
+void hvm_prepare_vm86_tss(struct vcpu *v, uint32_t base, uint32_t limit)
+{
+    /*
+     * If the provided area is large enough to cover at least the ISA port
+     * range, keep the bitmaps outside the base structure. For rather small
+     * areas (namely relevant for guests having been migrated from older
+     * Xen versions), maximize interrupt vector and port coverage by pointing
+     * the I/O bitmap at 0x20 (which puts the interrupt redirection bitmap
+     * right at zero), accepting accesses to port 0x235 (represented by bit 5
+     * of byte 0x46) to trigger #GP (which will simply result in the access
+     * being handled by the emulator via a slightly different path than it
+     * would be anyway). Be sure to include one extra byte at the end of the
+     * I/O bitmap (hence the missing "- 1" in the comparison is not an
+     * off-by-one mistake), which we deliberately don't fill with all ones.
+     */
+    uint16_t iomap = (limit >= sizeof(struct tss32) + (0x100 / 8) + (0x400 / 8)
+                      ? sizeof(struct tss32) : 0) + (0x100 / 8);
+
+    ASSERT(limit >= sizeof(struct tss32) - 1);
+    /*
+     * Strictly speaking we'd have to use hvm_copy_to_guest_linear() below,
+     * but since the guest is (supposed to be, unless it corrupts that setup
+     * itself, which would harm only itself) running on an identmap, we can
+     * use the less overhead variant below, which also allows passing a vCPU
+     * argument.
+     */
+    hvm_copy_to_guest_phys(base, NULL, limit + 1, v);
+    hvm_copy_to_guest_phys(base + offsetof(struct tss32, iomap),
+                           &iomap, sizeof(iomap), v);
+}
+
 void hvm_task_switch(
     uint16_t tss_sel, enum hvm_task_switch_reason taskswitch_reason,
     int32_t errcode)
@@ -2861,18 +2905,7 @@ void hvm_task_switch(
     unsigned int eflags;
     pagefault_info_t pfinfo;
     int exn_raised, rc;
-    struct {
-        u16 back_link,__blh;
-        u32 esp0;
-        u16 ss0, _0;
-        u32 esp1;
-        u16 ss1, _1;
-        u32 esp2;
-        u16 ss2, _2;
-        u32 cr3, eip, eflags, eax, ecx, edx, ebx, esp, ebp, esi, edi;
-        u16 es, _3, cs, _4, ss, _5, ds, _6, fs, _7, gs, _8, ldt, _9;
-        u16 trace, iomap;
-    } tss;
+    struct tss32 tss;
 
     hvm_get_segment_register(v, x86_seg_gdtr, &gdt);
     hvm_get_segment_register(v, x86_seg_tr, &prev_tr);
@@ -3973,6 +4006,7 @@ static int hvm_allow_set_param(struct do
     /* The following parameters can be set by the guest. */
     case HVM_PARAM_CALLBACK_IRQ:
     case HVM_PARAM_VM86_TSS:
+    case HVM_PARAM_VM86_TSS_SIZED:
     case HVM_PARAM_ACPI_IOPORTS_LOCATION:
     case HVM_PARAM_VM_GENERATION_ID_ADDR:
     case HVM_PARAM_STORE_EVTCHN:
@@ -4181,6 +4215,40 @@ static int hvmop_set_param(
         }
         d->arch.x87_fip_width = a.value;
         break;
+
+    case HVM_PARAM_VM86_TSS:
+        /* Hardware would silently truncate high bits. */
+        if ( a.value != (uint32_t)a.value )
+        {
+            if ( d == curr_d )
+                domain_crash(d);
+            rc = -EINVAL;
+        }
+        /* Old hvmloader binaries hardcode the size to 128 bytes. */
+        if ( a.value )
+            a.value |= (128ULL << 32) | VM86_TSS_UPDATED;
+        a.index = HVM_PARAM_VM86_TSS_SIZED;
+        break;
+
+    case HVM_PARAM_VM86_TSS_SIZED:
+        if ( (a.value >> 32) < sizeof(struct tss32) )
+        {
+            if ( d == curr_d )
+                domain_crash(d);
+            rc = -EINVAL;
+        }
+        /*
+         * Cap at the theoretically useful maximum (base structure plus
+         * 256 bits interrupt redirection bitmap + 64k bits I/O bitmap
+         * plus one padding byte).
+         */
+        if ( (a.value >> 32) > sizeof(struct tss32) +
+                               (0x100 / 8) + (0x10000 / 8) + 1 )
+            a.value = (uint32_t)a.value |
+                      ((sizeof(struct tss32) + (0x100 / 8) +
+                                               (0x10000 / 8) + 1) << 32);
+        a.value |= VM86_TSS_UPDATED;
+        break;
     }
 
     if ( rc != 0 )
@@ -4210,6 +4278,7 @@ static int hvm_allow_get_param(struct do
     /* The following parameters can be read by the guest. */
     case HVM_PARAM_CALLBACK_IRQ:
     case HVM_PARAM_VM86_TSS:
+    case HVM_PARAM_VM86_TSS_SIZED:
     case HVM_PARAM_ACPI_IOPORTS_LOCATION:
     case HVM_PARAM_VM_GENERATION_ID_ADDR:
     case HVM_PARAM_STORE_PFN:
@@ -4267,6 +4336,12 @@ static int hvmop_get_param(
     case HVM_PARAM_ACPI_S_STATE:
         a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
         break;
+
+    case HVM_PARAM_VM86_TSS:
+        a.value = (uint32_t)d->arch.hvm_domain.params
+                                [HVM_PARAM_VM86_TSS_SIZED];
+        break;
+
     case HVM_PARAM_X87_FIP_WIDTH:
         a.value = d->arch.x87_fip_width;
         break;
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1119,12 +1119,21 @@ static void vmx_set_segment_register(str
         
         if ( seg == x86_seg_tr ) 
         {
-            if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VM86_TSS] )
+            const struct domain *d = v->domain;
+            uint64_t val = d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED];
+
+            if ( val )
             {
                 sel = 0;
                 attr = vm86_tr_attr;
-                limit = 0xff;
-                base = v->domain->arch.hvm_domain.params[HVM_PARAM_VM86_TSS];
+                limit = ((val & ~VM86_TSS_UPDATED) >> 32) - 1;
+                base = (uint32_t)val;
+                if ( val & VM86_TSS_UPDATED )
+                {
+                    hvm_prepare_vm86_tss(v, base, limit);
+                    cmpxchg(&d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED],
+                            val, val & ~VM86_TSS_UPDATED);
+                }
                 v->arch.hvm_vmx.vm86_segment_mask &= ~(1u << seg);
             }
             else
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -110,6 +110,9 @@ int hvm_do_hypercall(struct cpu_user_reg
 void hvm_hlt(unsigned int eflags);
 void hvm_triple_fault(void);
 
+#define VM86_TSS_UPDATED (1ULL << 63)
+void hvm_prepare_vm86_tss(struct vcpu *v, uint32_t base, uint32_t limit);
+
 void hvm_rdtsc_intercept(struct cpu_user_regs *regs);
 
 int __must_check hvm_handle_xsetbv(u32 index, u64 new_bv);
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -253,6 +253,12 @@
  */
 #define HVM_PARAM_X87_FIP_WIDTH 36
 
-#define HVM_NR_PARAMS 37
+/*
+ * TSS (and its size) used on Intel when CR0.PE=0. The address occupies
+ * the low 32 bits, while the size is in the high 32 ones.
+ */
+#define HVM_PARAM_VM86_TSS_SIZED 37
+
+#define HVM_NR_PARAMS 38
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [PATCH v2] x86/VMX: sanitize VM86 TSS handling
  2017-02-17 12:03 [PATCH v2] x86/VMX: sanitize VM86 TSS handling Jan Beulich
@ 2017-02-20 11:01 ` Tim Deegan
  2017-02-20 11:17   ` Jan Beulich
  2017-02-21  3:20 ` Tian, Kevin
  2017-02-22 10:45 ` Andrew Cooper
  2 siblings, 1 reply; 7+ messages in thread
From: Tim Deegan @ 2017-02-20 11:01 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Jun Nakajima, xen-devel,
	Roger Pau Monne

At 05:03 -0700 on 17 Feb (1487307837), Jan Beulich wrote:
> The present way of setting this up is flawed: Leaving the I/O bitmap
> pointer at zero means that the interrupt redirection bitmap lives
> outside (ahead of) the allocated space of the TSS. Similarly setting a
> TSS limit of 255 when only 128 bytes get allocated means that 128 extra
> bytes may be accessed by the CPU during I/O port access processing.
> 
> Introduce a new HVM param to set the allocated size of the TSS, and
> have the hypervisor actually take care of setting namely the I/O bitmap
> pointer. Both this and the segment limit now take the allocated size
> into account.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Instead of HVM_PARAM_VM86_TSS_SIZE, introduce
>     HVM_PARAM_VM86_TSS_SIZED, which requires the old parameter to no
>     longer be saved in libxc's write_hvm_params(). Only initialize the
>     TSS once after the param was set. Request only 384 bytes (and
>     128-byte alignment) for the TSS. Add padding byte to capping value.
>     Add comment to hvm_copy_to_guest_phys() invocations.

This still seems like it has too many moving parts -- why not just
declare the top half of the existing param to be the size, interpret
size==0 as size==128, and init the contents when the param is written?

Cheers,

Tim.


* Re: [PATCH v2] x86/VMX: sanitize VM86 TSS handling
  2017-02-20 11:01 ` Tim Deegan
@ 2017-02-20 11:17   ` Jan Beulich
  2017-02-20 11:22     ` Tim Deegan
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2017-02-20 11:17 UTC (permalink / raw)
  To: Tim Deegan
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Jun Nakajima, xen-devel,
	Roger Pau Monne

>>> On 20.02.17 at 12:01, <tim@xen.org> wrote:
> At 05:03 -0700 on 17 Feb (1487307837), Jan Beulich wrote:
>> The present way of setting this up is flawed: Leaving the I/O bitmap
>> pointer at zero means that the interrupt redirection bitmap lives
>> outside (ahead of) the allocated space of the TSS. Similarly setting a
>> TSS limit of 255 when only 128 bytes get allocated means that 128 extra
>> bytes may be accessed by the CPU during I/O port access processing.
>> 
>> Introduce a new HVM param to set the allocated size of the TSS, and
>> have the hypervisor actually take care of setting namely the I/O bitmap
>> pointer. Both this and the segment limit now take the allocated size
>> into account.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Instead of HVM_PARAM_VM86_TSS_SIZE, introduce
>>     HVM_PARAM_VM86_TSS_SIZED, which requires the old parameter to no
>>     longer be saved in libxc's write_hvm_params(). Only initialize the
>>     TSS once after the param was set. Request only 384 bytes (and
>>     128-byte alignment) for the TSS. Add padding byte to capping value.
>>     Add comment to hvm_copy_to_guest_phys() invocations.
> 
> This still seems like it has too many moving parts -- why not just
> declare the top half of the existing param to be the size, interpret
> size==0 as size==128, and init the contents when the param is written?

I would have done that if the parameters and their hypercall function
were tools only (and hence we could freely change their behavior).
Also, since we wouldn't be able to tell size 128 from size 0 with what
you propose, "get" would then possibly return a value different from
what was passed to "set".

Jan



* Re: [PATCH v2] x86/VMX: sanitize VM86 TSS handling
  2017-02-20 11:17   ` Jan Beulich
@ 2017-02-20 11:22     ` Tim Deegan
  0 siblings, 0 replies; 7+ messages in thread
From: Tim Deegan @ 2017-02-20 11:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, Jun Nakajima, xen-devel,
	Roger Pau Monne

At 04:17 -0700 on 20 Feb (1487564245), Jan Beulich wrote:
> >>> On 20.02.17 at 12:01, <tim@xen.org> wrote:
> > At 05:03 -0700 on 17 Feb (1487307837), Jan Beulich wrote:
> >> The present way of setting this up is flawed: Leaving the I/O bitmap
> >> pointer at zero means that the interrupt redirection bitmap lives
> >> outside (ahead of) the allocated space of the TSS. Similarly setting a
> >> TSS limit of 255 when only 128 bytes get allocated means that 128 extra
> >> bytes may be accessed by the CPU during I/O port access processing.
> >> 
> >> Introduce a new HVM param to set the allocated size of the TSS, and
> >> have the hypervisor actually take care of setting namely the I/O bitmap
> >> pointer. Both this and the segment limit now take the allocated size
> >> into account.
> >> 
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> v2: Instead of HVM_PARAM_VM86_TSS_SIZE, introduce
> >>     HVM_PARAM_VM86_TSS_SIZED, which requires the old parameter to no
> >>     longer be saved in libxc's write_hvm_params(). Only initialize the
> >>     TSS once after the param was set. Request only 384 bytes (and
> >>     128-byte alignment) for the TSS. Add padding byte to capping value.
> >>     Add comment to hvm_copy_to_guest_phys() invocations.
> > 
> > This still seems like it has too many moving parts -- why not just
> > declare the top half of the existing param to be the size, interpret
> > size==0 as size==128, and init the contents when the param is written?
> 
> I would have done that if the parameters and their hypercall function
> were tools only (and hence we could freely change their behavior).
> Also, since we wouldn't be able to tell size 128 from size 0 with what
> you propose, "get" would then possibly return a value different from
> what was passed to "set".

Sure you could - just keep the client's version and s/0/128/ when
setting up the TSS.  But let's not bikeshed this any further.

Reviewed-by: Tim Deegan <tim@xen.org>


* Re: [PATCH v2] x86/VMX: sanitize VM86 TSS handling
  2017-02-17 12:03 [PATCH v2] x86/VMX: sanitize VM86 TSS handling Jan Beulich
  2017-02-20 11:01 ` Tim Deegan
@ 2017-02-21  3:20 ` Tian, Kevin
  2017-02-22 10:45 ` Andrew Cooper
  2 siblings, 0 replies; 7+ messages in thread
From: Tian, Kevin @ 2017-02-21  3:20 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Tim Deegan, Nakajima, Jun, Roger Pau Monne

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Friday, February 17, 2017 8:04 PM
> 
> The present way of setting this up is flawed: Leaving the I/O bitmap
> pointer at zero means that the interrupt redirection bitmap lives
> outside (ahead of) the allocated space of the TSS. Similarly setting a
> TSS limit of 255 when only 128 bytes get allocated means that 128 extra
> bytes may be accessed by the CPU during I/O port access processing.
> 
> Introduce a new HVM param to set the allocated size of the TSS, and
> have the hypervisor actually take care of setting namely the I/O bitmap
> pointer. Both this and the segment limit now take the allocated size
> into account.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v2] x86/VMX: sanitize VM86 TSS handling
  2017-02-17 12:03 [PATCH v2] x86/VMX: sanitize VM86 TSS handling Jan Beulich
  2017-02-20 11:01 ` Tim Deegan
  2017-02-21  3:20 ` Tian, Kevin
@ 2017-02-22 10:45 ` Andrew Cooper
  2017-02-22 11:28   ` Jan Beulich
  2 siblings, 1 reply; 7+ messages in thread
From: Andrew Cooper @ 2017-02-22 10:45 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap,
	Tim Deegan, Ian Jackson, Jun Nakajima, Roger Pau Monne

On 17/02/17 12:03, Jan Beulich wrote:
> @@ -4267,6 +4336,12 @@ static int hvmop_get_param(
>      case HVM_PARAM_ACPI_S_STATE:
>          a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
>          break;
> +
> +    case HVM_PARAM_VM86_TSS:
> +        a.value = (uint32_t)d->arch.hvm_domain.params
> +                                [HVM_PARAM_VM86_TSS_SIZED];
> +        break;

HVM_PARAM_VM86_TSS_SIZED needs to have VM86_TSS_UPDATED masked out on a
read, or the guest and toolstack will observe a crazy size if they read
the param back before CR0.PE is cleared.

Otherwise, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> +
>      case HVM_PARAM_X87_FIP_WIDTH:
>          a.value = d->arch.x87_fip_width;
>          break;



* Re: [PATCH v2] x86/VMX: sanitize VM86 TSS handling
  2017-02-22 10:45 ` Andrew Cooper
@ 2017-02-22 11:28   ` Jan Beulich
  0 siblings, 0 replies; 7+ messages in thread
From: Jan Beulich @ 2017-02-22 11:28 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap, Tim Deegan,
	Ian Jackson, Jun Nakajima, xen-devel, Roger Pau Monne

>>> On 22.02.17 at 11:45, <andrew.cooper3@citrix.com> wrote:
> On 17/02/17 12:03, Jan Beulich wrote:
>> @@ -4267,6 +4336,12 @@ static int hvmop_get_param(
>>      case HVM_PARAM_ACPI_S_STATE:
>>          a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
>>          break;
>> +
>> +    case HVM_PARAM_VM86_TSS:
>> +        a.value = (uint32_t)d->arch.hvm_domain.params
>> +                                [HVM_PARAM_VM86_TSS_SIZED];
>> +        break;
> 
> HVM_PARAM_VM86_TSS_SIZED needs to have VM86_TSS_UPDATED masked out on a
> read, or the guest and toolstack will observe a crazy size if they read
> the param back before CR0.PE is cleared.

Oops, of course. My mental note to do this must have gone lost
with the various other adjustments that were needed from v1.

> Otherwise, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks, Jan


