From: Xen.org security team <security@xen.org>
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
Cc: "Xen.org security team" <security@xen.org>
Subject: Xen Security Advisory 187 (CVE-2016-7094) - x86 HVM: Overflow of sh_ctxt->seg_reg[]
Date: Thu, 08 Sep 2016 12:05:00 +0000
Message-ID: <E1bhy4i-0001oa-Fi@xenbits.xenproject.org>

[-- Attachment #1: Type: text/plain, Size: 5795 bytes --]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

            Xen Security Advisory CVE-2016-7094 / XSA-187
                              version 3

                x86 HVM: Overflow of sh_ctxt->seg_reg[]

UPDATES IN VERSION 3
====================

Fix the backports xsa187-4.6-0002-*.patch and xsa187-4.4-0002-*.patch.
In v1 and v2 these did not compile in debug builds.  (Debug builds
should not be used in production.)

Public release.

ISSUE DESCRIPTION
=================

x86 HVM guests running with shadow paging use a subset of the x86 emulator to
handle the guest writing to its own pagetables.  A guest can provoke
situations in which this emulation indexes beyond the space allocated
for its internal state (the sh_ctxt->seg_reg[] segment register cache).
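
The cache is indexed by an x86_segment enumeration value, and some
enumeration values (notably x86_seg_none) lie beyond the end of the
array.  The following standalone C sketch illustrates the
unchecked-index pattern and the bounds check which the fix introduces;
the types and enumeration values are simplified stand-ins, not Xen's
actual definitions:

  #include <stdio.h>
  #include <stddef.h>

  /* Simplified stand-ins for Xen's types; the real definitions differ. */
  enum x86_segment {
      x86_seg_es, x86_seg_cs, x86_seg_ss, x86_seg_ds, x86_seg_fs,
      x86_seg_gs,
      /* ... system segments elided ... */
      x86_seg_none = 10,   /* "no segment": the address is already linear */
  };

  struct segment_register { unsigned long base; };

  struct sh_emulate_ctxt {
      struct segment_register seg_reg[6];   /* user segments only */
  };

  /* Pre-fix shape of the lookup: nothing stops x86_seg_none (10) from
   * indexing past the end of the 6-entry cache. */
  static struct segment_register *get_seg_reg_unchecked(
      struct sh_emulate_ctxt *ctxt, enum x86_segment seg)
  {
      return &ctxt->seg_reg[seg];
  }

  /* Post-fix shape: reject any segment outside the cache. */
  static struct segment_register *get_seg_reg_checked(
      struct sh_emulate_ctxt *ctxt, enum x86_segment seg)
  {
      if ( seg < 0 ||
           (size_t)seg >= sizeof(ctxt->seg_reg) / sizeof(ctxt->seg_reg[0]) )
          return NULL;
      return &ctxt->seg_reg[seg];
  }

  int main(void)
  {
      struct sh_emulate_ctxt ctxt = { { { 0 } } };

      (void)get_seg_reg_unchecked(&ctxt, x86_seg_cs);  /* in range: fine */

      printf("x86_seg_none -> %s\n",
             get_seg_reg_checked(&ctxt, x86_seg_none) ? "cached entry"
                                                      : "rejected");
      return 0;
  }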


IMPACT
======

A malicious HVM guest administrator can cause Xen to fail a bug check,
resulting in a denial of service to the host.


VULNERABLE SYSTEMS
==================

All versions of Xen are vulnerable.

The vulnerability is only exposed to HVM guests on x86 hardware which
are configured to run with shadow paging.

The vulnerability is not exposed to x86 PV guests, x86 HVM guests running with
hardware assisted paging, or ARM guests.


x86 HVM guests run in HAP mode by default on modern CPUs.

To discover whether your HVM guests are using HAP, or shadow page
tables: request debug key `q' (from the Xen console, or with
`xl debug-keys q').  This will print (to the console, and visible in
`xl dmesg') debug information for every domain, containing something
like this:

  (XEN) General information for domain 2:
  (XEN)     refcnt=1 dying=2 pause_count=2
  (XEN)     nr_pages=2 xenheap_pages=0 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=262400
  (XEN)     handle=ef58ef1a-784d-4e59-8079-42bdee87f219 vm_assist=00000000
  (XEN)     paging assistance: hap refcounts translate external
                               ^^^
The presence of `hap' here indicates that the host is not
vulnerable to this domain.  For an HVM domain the presence of `shadow'
indicates that the domain can exploit the vulnerability.
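
For example (assuming a standard xl installation), the relevant line
can be printed for every domain with:

  xl debug-keys q
  xl dmesg | grep 'paging assistance'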


MITIGATION
==========

Running only PV guests will avoid this vulnerability.

On hardware which supports Hardware Assisted Paging, configuring
guests not to run with shadow paging will avoid this vulnerability.
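
With the xl toolstack, for example, this corresponds to the hap
option in the guest configuration file (an illustrative fragment;
consult xl.cfg(5) for the authoritative syntax):

  builder = "hvm"
  hap = 1        # use Hardware Assisted Paging rather than shadow paging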


CREDITS
=======

This issue was discovered by Andrew Cooper of Citrix.

RESOLUTION
==========

Applying the first patch will resolve this issue.

The second patch provides additional assurance that the vulnerability
is truly eliminated and that there are no related problems.

If hotpatching, applying only the first patch is recommended since the
second patch is awkward for hotpatching.  If deploying new builds,
applying both patches is recommended.

Xen version     First patch               Second patch
 xen-unstable:   xsa187-0001-*.patch       xsa187-0002-*.patch
 Xen 4.7.x:      xsa187-4.7-0001-*.patch   xsa187-4.7-0002-*.patch
 Xen 4.6.x:      xsa187-4.7-0001-*.patch   xsa187-4.6-0002-*.patch
 Xen 4.5.x:      xsa187-4.7-0001-*.patch   xsa187-4.6-0002-*.patch
 Xen 4.4.x:      xsa187-4.7-0001-*.patch   xsa187-4.4-0002-*.patch

$ sha256sum xsa187*
65205ee195699d65884af04083ffb86c6ddbc96cbca3141c87f6b2d671de45a3  xsa187-0001-x86-shadow-Avoid-overflowing-sh_ctxt-seg_reg.patch
f90e6d13385fb9219e1e26e3a148d1670aefc7130e0639415d08bbb6a1d9efee  xsa187-0002-x86-segment-Bounds-check-accesses-to-emulation-ctxt-.patch
727b18ae83001f7ea04613aa7199ada3e6a84939aa44516f7c426e609d383b2a  xsa187-4.4-0002-x86-segment-Bounds-check-accesses-to-emulation-ctx.patch
b96731379ea77d49ffff31d969f4742dde985ef7a86af9422dcac8327c2a1916  xsa187-4.6-0002-x86-segment-Bounds-check-accesses-to-emulation-ctx.patch
be9fe85d36c2c1fbca246c1f4d834c3ef11b6ab3d5467da0ac8c079aa5a68de9  xsa187-4.7-0001-x86-shadow-Avoid-overflowing-sh_ctxt-seg.patch
36b22d6a168be39f31a1c1304f708269a2a10fe5105f7da4a06877d6059f1cd6  xsa187-4.7-0002-x86-segment-Bounds-check-accesses-to-emulation-ctx.patch
$
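
As an illustration (assuming a Xen 4.7.x source tree; for other
versions substitute the file names from the table above), the patches
can be applied in order with:

  $ patch -p1 < xsa187-4.7-0001-x86-shadow-Avoid-overflowing-sh_ctxt-seg.patch
  $ patch -p1 < xsa187-4.7-0002-x86-segment-Bounds-check-accesses-to-emulation-ctx.patch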


DEPLOYMENT DURING EMBARGO
=========================

Deployment of the "reconfigure to use HAP" MITIGATION is NOT permitted
(except where all the affected systems and VMs are administered and
used only by organisations which are members of the Xen Project
Security Issues Predisclosure List).  Specifically, deployment on
public cloud systems is NOT permitted.

This is because the mitigation results in guest-visible changes.

Deployment of this mitigation is permitted only AFTER the embargo
ends.


Deployment of the PATCHES described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).


Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJX0VPlAAoJEIP+FMlX6CvZeIQIALJEH1stiHZLs2Kc8AKTTbUh
ZM3pGSUYC9AFMpn3ovuXgOJjYfX1YY89nAOKlDKuiPaaUz01mmqWXGBDBmqZ1IU0
9BjPfnnMYoqeHcrjz0+KajSO6kf/iepUdUx1IiduA48k/7VUvqU8P/s/UiutXerF
YvHcc+a6lLIPPcXjWW6ftSEagGHUmB+qcHh4aptxVq/xEdIQysAGtUU7MMG9ihsN
VY3MN6aQiIFRr56ICZJ7K+s9Rw+xWfOVXj8FZAoq2mDPns6pQT0obyCrIbfQcywA
6GMXWXVf4vSR/dgQm8xD95/3gFPd1IfuwxjOvHkfJlqxxCfa8JT6oVsQfZskhhs=
=/Mbl
-----END PGP SIGNATURE-----

[-- Attachment #2: xsa187-0001-x86-shadow-Avoid-overflowing-sh_ctxt-seg_reg.patch --]
[-- Type: application/octet-stream, Size: 1887 bytes --]

From 5eb7bdc1598f8fdd6d53ec8e54ee5ac894192e25 Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 1 Jul 2016 01:02:04 +0100
Subject: [PATCH 1/2] x86/shadow: Avoid overflowing sh_ctxt->seg_reg[]

hvm_get_seg_reg() does not perform a range check on its input segment, calls
hvm_get_segment_register() and writes straight into sh_ctxt->seg_reg[].

x86_seg_none is outside the bounds of sh_ctxt->seg_reg[], and will hit a BUG()
in {vmx,svm}_get_segment_register().

HVM guests running with shadow paging can end up performing a virtual to
linear translation with x86_seg_none.  This is used for addresses which are
already linear.  However, none of this is a legitimate pagetable update, so
fail the emulation in such a case.

This is XSA-187

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/shadow/common.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index c22362f..b40bf71 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -140,9 +140,18 @@ static int hvm_translate_linear_addr(
     struct sh_emulate_ctxt *sh_ctxt,
     unsigned long *paddr)
 {
-    struct segment_register *reg = hvm_get_seg_reg(seg, sh_ctxt);
+    const struct segment_register *reg;
     int okay;
 
+    /*
+     * Can arrive here with non-user segments.  However, no such circumstance
+     * is part of a legitimate pagetable update, so fail the emulation.
+     */
+    if ( !is_x86_user_segment(seg) )
+        return X86EMUL_UNHANDLEABLE;
+
+    reg = hvm_get_seg_reg(seg, sh_ctxt);
+
     okay = hvm_virtual_to_linear_addr(
         seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, paddr);
 
-- 
2.1.4


[-- Attachment #3: xsa187-0002-x86-segment-Bounds-check-accesses-to-emulation-ctxt-.patch --]
[-- Type: application/octet-stream, Size: 6481 bytes --]

From ddf9b1bf26a1a442b61f29a17343745d6dddb4de Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 1 Jul 2016 01:02:04 +0100
Subject: [PATCH 2/2] x86/segment: Bounds check accesses to emulation
 ctxt->seg_reg[]

HVM HAP codepaths have space for all segment registers in the seg_reg[]
cache (with x86_seg_none still risking an array overrun), while the shadow
codepaths only have space for the user segments.

Range check the input segment of *_get_seg_reg() against the size of the array
used to cache the results, to avoid overruns in the case that the callers
don't filter their input suitably.

Subsume the is_x86_user_segment(seg) checks from the shadow code, which were
an incomplete attempt at range checking, and are now superseded.  Make
hvm_get_seg_reg() static, as it is not used outside of shadow/common.c.

No functional change, but far easier to reason that no overflow is possible.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/emulate.c        | 16 ++++++++++++++++
 xen/arch/x86/mm/shadow/common.c   | 27 ++++++++++++++-------------
 xen/arch/x86/mm/shadow/private.h  |  2 --
 xen/include/asm-x86/hvm/emulate.h |  1 +
 4 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index c55ad7b..0eb7a4d 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -535,6 +535,8 @@ static int hvmemul_virtual_to_linear(
     *reps = min_t(unsigned long, *reps, max_reps);
 
     reg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
+    if ( IS_ERR(reg) )
+        return -PTR_ERR(reg);
 
     if ( (hvmemul_ctxt->ctxt.regs->eflags & X86_EFLAGS_DF) && (*reps > 1) )
     {
@@ -1430,6 +1432,10 @@ static int hvmemul_read_segment(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct segment_register *sreg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
+
+    if ( IS_ERR(sreg) )
+         return -PTR_ERR(sreg);
+
     memcpy(reg, sreg, sizeof(struct segment_register));
     return X86EMUL_OKAY;
 }
@@ -1443,6 +1449,9 @@ static int hvmemul_write_segment(
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct segment_register *sreg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
 
+    if ( IS_ERR(sreg) )
+         return -PTR_ERR(sreg);
+
     memcpy(sreg, reg, sizeof(struct segment_register));
     __set_bit(seg, &hvmemul_ctxt->seg_reg_dirty);
 
@@ -1995,10 +2004,17 @@ void hvm_emulate_writeback(
     }
 }
 
+/*
+ * Callers which pass a known in-range x86_segment can rely on the return
+ * pointer being valid.  Other callers must explicitly check for errors.
+ */
 struct segment_register *hvmemul_get_seg_reg(
     enum x86_segment seg,
     struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
+    if ( seg < 0 || seg >= ARRAY_SIZE(hvmemul_ctxt->seg_reg) )
+        return ERR_PTR(-X86EMUL_UNHANDLEABLE);
+
     if ( !__test_and_set_bit(seg, &hvmemul_ctxt->seg_reg_accessed) )
         hvm_get_segment_register(current, seg, &hvmemul_ctxt->seg_reg[seg]);
     return &hvmemul_ctxt->seg_reg[seg];
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index b40bf71..91c114d 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -123,10 +123,19 @@ __initcall(shadow_audit_key_init);
 /* x86 emulator support for the shadow code
  */
 
-struct segment_register *hvm_get_seg_reg(
+/*
+ * Callers which pass a known in-range x86_segment can rely on the return
+ * pointer being valid.  Other callers must explicitly check for errors.
+ */
+static struct segment_register *hvm_get_seg_reg(
     enum x86_segment seg, struct sh_emulate_ctxt *sh_ctxt)
 {
-    struct segment_register *seg_reg = &sh_ctxt->seg_reg[seg];
+    struct segment_register *seg_reg;
+
+    if ( seg < 0 || seg >= ARRAY_SIZE(sh_ctxt->seg_reg) )
+        return ERR_PTR(-X86EMUL_UNHANDLEABLE);
+
+    seg_reg = &sh_ctxt->seg_reg[seg];
     if ( !__test_and_set_bit(seg, &sh_ctxt->valid_seg_regs) )
         hvm_get_segment_register(current, seg, seg_reg);
     return seg_reg;
@@ -143,14 +152,9 @@ static int hvm_translate_linear_addr(
     const struct segment_register *reg;
     int okay;
 
-    /*
-     * Can arrive here with non-user segments.  However, no such circumstance
-     * is part of a legitimate pagetable update, so fail the emulation.
-     */
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     reg = hvm_get_seg_reg(seg, sh_ctxt);
+    if ( IS_ERR(reg) )
+        return -PTR_ERR(reg);
 
     okay = hvm_virtual_to_linear_addr(
         seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, paddr);
@@ -253,9 +257,6 @@ hvm_emulate_write(enum x86_segment seg,
     unsigned long addr;
     int rc;
 
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     /* How many emulations could we save if we unshadowed on stack writes? */
     if ( seg == x86_seg_ss )
         perfc_incr(shadow_fault_emulate_stack);
@@ -283,7 +284,7 @@ hvm_emulate_cmpxchg(enum x86_segment seg,
     unsigned long addr, old, new;
     int rc;
 
-    if ( !is_x86_user_segment(seg) || bytes > sizeof(long) )
+    if ( bytes > sizeof(long) )
         return X86EMUL_UNHANDLEABLE;
 
     rc = hvm_translate_linear_addr(
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 824796f..f0b0ed4 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -740,8 +740,6 @@ const struct x86_emulate_ops *shadow_init_emulation(
     struct sh_emulate_ctxt *sh_ctxt, struct cpu_user_regs *regs);
 void shadow_continue_emulation(
     struct sh_emulate_ctxt *sh_ctxt, struct cpu_user_regs *regs);
-struct segment_register *hvm_get_seg_reg(
-    enum x86_segment seg, struct sh_emulate_ctxt *sh_ctxt);
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB)
 /**************************************************************************/
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 142d1b6..3aabcbe 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -13,6 +13,7 @@
 #define __ASM_X86_HVM_EMULATE_H__
 
 #include <xen/config.h>
+#include <xen/err.h>
 #include <asm/hvm/hvm.h>
 #include <asm/x86_emulate.h>
 
-- 
2.1.4
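
For reference, the <xen/err.h> interface used by this patch follows
the Linux kernel idiom of encoding a small negative error code in the
pointer value itself, so that a single return value carries either a
valid pointer or an error.  A minimal standalone sketch of the
pattern, with simplified definitions rather than Xen's actual ones:

  #include <stdio.h>
  #include <stdint.h>

  /* Error codes occupy the top few kilobytes of the address space,
   * which no valid pointer can point into, so the two cases stay
   * disjoint. */
  #define MAX_ERRNO 4095

  static inline void *ERR_PTR(long error)    { return (void *)error; }
  static inline long  PTR_ERR(const void *p) { return (long)p; }
  static inline int   IS_ERR(const void *p)
  {
      return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
  }

  /* A lookup in the style of hvmemul_get_seg_reg(): out-of-range
   * input yields an encoded error instead of a stray pointer. */
  static int table[4];

  static int *lookup(int idx)
  {
      if ( idx < 0 || idx >= 4 )
          return ERR_PTR(-22);          /* illustrative error code */
      return &table[idx];
  }

  int main(void)
  {
      int *p = lookup(10);

      if ( IS_ERR(p) )
          printf("lookup failed, error %ld\n", -PTR_ERR(p));
      return 0;
  }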


[-- Attachment #4: xsa187-4.4-0002-x86-segment-Bounds-check-accesses-to-emulation-ctx.patch --]
[-- Type: application/octet-stream, Size: 5065 bytes --]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/segment: Bounds check accesses to emulation ctxt->seg_reg[]

HVM HAP codepaths have space for all segment registers in the seg_reg[]
cache (with x86_seg_none still risking an array overrun), while the shadow
codepaths only have space for the user segments.

Range check the input segment of *_get_seg_reg() against the size of the array
used to cache the results, to avoid overruns in the case that the callers
don't filter their input suitably.

Subsume the is_x86_user_segment(seg) checks from the shadow code, which were
an incomplete attempt at range checking, and are now superseded.  Make
hvm_get_seg_reg() static, as it is not used outside of shadow/common.c.

No functional change, but far easier to reason that no overflow is possible.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -436,6 +436,8 @@ static int hvmemul_virtual_to_linear(
     *reps = min_t(unsigned long, *reps, 4096);
 
     reg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
+    if ( IS_ERR(reg) )
+        return -PTR_ERR(reg);
 
     if ( (hvmemul_ctxt->ctxt.regs->eflags & X86_EFLAGS_DF) && (*reps > 1) )
     {
@@ -926,6 +928,10 @@ static int hvmemul_read_segment(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct segment_register *sreg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
+
+    if ( IS_ERR(sreg) )
+         return -PTR_ERR(sreg);
+
     memcpy(reg, sreg, sizeof(struct segment_register));
     return X86EMUL_OKAY;
 }
@@ -939,6 +945,9 @@ static int hvmemul_write_segment(
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct segment_register *sreg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
 
+    if ( IS_ERR(sreg) )
+         return -PTR_ERR(sreg);
+
     memcpy(sreg, reg, sizeof(struct segment_register));
     __set_bit(seg, &hvmemul_ctxt->seg_reg_dirty);
 
@@ -1302,10 +1311,17 @@ void hvm_emulate_writeback(
     }
 }
 
+/*
+ * Callers which pass a known in-range x86_segment can rely on the return
+ * pointer being valid.  Other callers must explicitly check for errors.
+ */
 struct segment_register *hvmemul_get_seg_reg(
     enum x86_segment seg,
     struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
+    if ( seg < 0 || seg >= ARRAY_SIZE(hvmemul_ctxt->seg_reg) )
+        return ERR_PTR(-X86EMUL_UNHANDLEABLE);
+
     if ( !__test_and_set_bit(seg, &hvmemul_ctxt->seg_reg_accessed) )
         hvm_get_segment_register(current, seg, &hvmemul_ctxt->seg_reg[seg]);
     return &hvmemul_ctxt->seg_reg[seg];
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -120,10 +120,19 @@ __initcall(shadow_audit_key_init);
 /* x86 emulator support for the shadow code
  */
 
+/*
+ * Callers which pass a known in-range x86_segment can rely on the return
+ * pointer being valid.  Other callers must explicitly check for errors.
+ */
 struct segment_register *hvm_get_seg_reg(
     enum x86_segment seg, struct sh_emulate_ctxt *sh_ctxt)
 {
-    struct segment_register *seg_reg = &sh_ctxt->seg_reg[seg];
+    struct segment_register *seg_reg;
+
+    if ( seg < 0 || seg >= ARRAY_SIZE(sh_ctxt->seg_reg) )
+        return ERR_PTR(-X86EMUL_UNHANDLEABLE);
+
+    seg_reg = &sh_ctxt->seg_reg[seg];
     if ( !__test_and_set_bit(seg, &sh_ctxt->valid_seg_regs) )
         hvm_get_segment_register(current, seg, seg_reg);
     return seg_reg;
@@ -140,14 +149,9 @@ static int hvm_translate_linear_addr(
     struct segment_register *reg;
     int okay;
 
-    /*
-     * Can arrive here with non-user segments.  However, no such circumstance
-     * is part of a legitimate pagetable update, so fail the emulation.
-     */
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     reg = hvm_get_seg_reg(seg, sh_ctxt);
+    if ( IS_ERR(reg) )
+        return -PTR_ERR(reg);
 
     okay = hvm_virtual_to_linear_addr(
         seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, paddr);
@@ -249,9 +253,6 @@ hvm_emulate_write(enum x86_segment seg,
     unsigned long addr;
     int rc;
 
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     /* How many emulations could we save if we unshadowed on stack writes? */
     if ( seg == x86_seg_ss )
         perfc_incr(shadow_fault_emulate_stack);
@@ -279,9 +280,6 @@ hvm_emulate_cmpxchg(enum x86_segment seg
     unsigned long addr, old[2], new[2];
     int rc;
 
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     rc = hvm_translate_linear_addr(
         seg, offset, bytes, hvm_access_write, sh_ctxt, &addr);
     if ( rc )
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -13,6 +13,7 @@
 #define __ASM_X86_HVM_EMULATE_H__
 
 #include <xen/config.h>
+#include <xen/err.h>
 #include <asm/x86_emulate.h>
 
 struct hvm_emulate_ctxt {

[-- Attachment #5: xsa187-4.6-0002-x86-segment-Bounds-check-accesses-to-emulation-ctx.patch --]
[-- Type: application/octet-stream, Size: 5061 bytes --]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/segment: Bounds check accesses to emulation ctxt->seg_reg[]

HVM HAP codepaths have space for all segment registers in the seg_reg[]
cache (with x86_seg_none still risking an array overrun), while the shadow
codepaths only have space for the user segments.

Range check the input segment of *_get_seg_reg() against the size of the array
used to cache the results, to avoid overruns in the case that the callers
don't filter their input suitably.

Subsume the is_x86_user_segment(seg) checks from the shadow code, which were
an incomplete attempt at range checking, and are now superseded.  Make
hvm_get_seg_reg() static, as it is not used outside of shadow/common.c.

No functional change, but far easier to reason that no overflow is possible.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -526,6 +526,8 @@ static int hvmemul_virtual_to_linear(
                            ? 1 : 4096);
 
     reg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
+    if ( IS_ERR(reg) )
+        return -PTR_ERR(reg);
 
     if ( (hvmemul_ctxt->ctxt.regs->eflags & X86_EFLAGS_DF) && (*reps > 1) )
     {
@@ -1360,6 +1362,10 @@ static int hvmemul_read_segment(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct segment_register *sreg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
+
+    if ( IS_ERR(sreg) )
+         return -PTR_ERR(sreg);
+
     memcpy(reg, sreg, sizeof(struct segment_register));
     return X86EMUL_OKAY;
 }
@@ -1373,6 +1379,9 @@ static int hvmemul_write_segment(
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct segment_register *sreg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
 
+    if ( IS_ERR(sreg) )
+         return -PTR_ERR(sreg);
+
     memcpy(sreg, reg, sizeof(struct segment_register));
     __set_bit(seg, &hvmemul_ctxt->seg_reg_dirty);
 
@@ -1911,10 +1920,17 @@ void hvm_emulate_writeback(
     }
 }
 
+/*
+ * Callers which pass a known in-range x86_segment can rely on the return
+ * pointer being valid.  Other callers must explicitly check for errors.
+ */
 struct segment_register *hvmemul_get_seg_reg(
     enum x86_segment seg,
     struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
+    if ( seg < 0 || seg >= ARRAY_SIZE(hvmemul_ctxt->seg_reg) )
+        return ERR_PTR(-X86EMUL_UNHANDLEABLE);
+
     if ( !__test_and_set_bit(seg, &hvmemul_ctxt->seg_reg_accessed) )
         hvm_get_segment_register(current, seg, &hvmemul_ctxt->seg_reg[seg]);
     return &hvmemul_ctxt->seg_reg[seg];
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -125,10 +125,19 @@ __initcall(shadow_audit_key_init);
 /* x86 emulator support for the shadow code
  */
 
+/*
+ * Callers which pass a known in-range x86_segment can rely on the return
+ * pointer being valid.  Other callers must explicitly check for errors.
+ */
 struct segment_register *hvm_get_seg_reg(
     enum x86_segment seg, struct sh_emulate_ctxt *sh_ctxt)
 {
-    struct segment_register *seg_reg = &sh_ctxt->seg_reg[seg];
+    struct segment_register *seg_reg;
+
+    if ( seg < 0 || seg >= ARRAY_SIZE(sh_ctxt->seg_reg) )
+        return ERR_PTR(-X86EMUL_UNHANDLEABLE);
+
+    seg_reg = &sh_ctxt->seg_reg[seg];
     if ( !__test_and_set_bit(seg, &sh_ctxt->valid_seg_regs) )
         hvm_get_segment_register(current, seg, seg_reg);
     return seg_reg;
@@ -145,14 +154,9 @@ static int hvm_translate_linear_addr(
     struct segment_register *reg;
     int okay;
 
-    /*
-     * Can arrive here with non-user segments.  However, no such circumstance
-     * is part of a legitimate pagetable update, so fail the emulation.
-     */
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     reg = hvm_get_seg_reg(seg, sh_ctxt);
+    if ( IS_ERR(reg) )
+        return -PTR_ERR(reg);
 
     okay = hvm_virtual_to_linear_addr(
         seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, paddr);
@@ -254,9 +258,6 @@ hvm_emulate_write(enum x86_segment seg,
     unsigned long addr;
     int rc;
 
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     /* How many emulations could we save if we unshadowed on stack writes? */
     if ( seg == x86_seg_ss )
         perfc_incr(shadow_fault_emulate_stack);
@@ -284,9 +285,6 @@ hvm_emulate_cmpxchg(enum x86_segment seg
     unsigned long addr, old[2], new[2];
     int rc;
 
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     rc = hvm_translate_linear_addr(
         seg, offset, bytes, hvm_access_write, sh_ctxt, &addr);
     if ( rc )
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -13,6 +13,7 @@
 #define __ASM_X86_HVM_EMULATE_H__
 
 #include <xen/config.h>
+#include <xen/err.h>
 #include <asm/hvm/hvm.h>
 #include <asm/x86_emulate.h>
 

[-- Attachment #6: xsa187-4.7-0001-x86-shadow-Avoid-overflowing-sh_ctxt-seg.patch --]
[-- Type: application/octet-stream, Size: 1538 bytes --]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/shadow: Avoid overflowing sh_ctxt->seg_reg[]

hvm_get_seg_reg() does not perform a range check on its input segment, calls
hvm_get_segment_register() and writes straight into sh_ctxt->seg_reg[].

x86_seg_none is outside the bounds of sh_ctxt->seg_reg[], and will hit a BUG()
in {vmx,svm}_get_segment_register().

HVM guests running with shadow paging can end up performing a virtual to
linear translation with x86_seg_none.  This is used for addresses which are
already linear.  However, none of this is a legitimate pagetable update, so
fail the emulation in such a case.

This is XSA-187

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Tim Deegan <tim@xen.org>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -140,9 +140,18 @@ static int hvm_translate_linear_addr(
     struct sh_emulate_ctxt *sh_ctxt,
     unsigned long *paddr)
 {
-    struct segment_register *reg = hvm_get_seg_reg(seg, sh_ctxt);
+    struct segment_register *reg;
     int okay;
 
+    /*
+     * Can arrive here with non-user segments.  However, no such circumstance
+     * is part of a legitimate pagetable update, so fail the emulation.
+     */
+    if ( !is_x86_user_segment(seg) )
+        return X86EMUL_UNHANDLEABLE;
+
+    reg = hvm_get_seg_reg(seg, sh_ctxt);
+
     okay = hvm_virtual_to_linear_addr(
         seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, paddr);
 

[-- Attachment #7: xsa187-4.7-0002-x86-segment-Bounds-check-accesses-to-emulation-ctx.patch --]
[-- Type: application/octet-stream, Size: 5632 bytes --]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/segment: Bounds check accesses to emulation ctxt->seg_reg[]

HVM HAP codepaths have space for all segment registers in the seg_reg[]
cache (with x86_seg_none still risking an array overrun), while the shadow
codepaths only have space for the user segments.

Range check the input segment of *_get_seg_reg() against the size of the array
used to cache the results, to avoid overruns in the case that the callers
don't filter their input suitably.

Subsume the is_x86_user_segment(seg) checks from the shadow code, which were
an incomplete attempt at range checking, and are now superseded.  Make
hvm_get_seg_reg() static, as it is not used outside of shadow/common.c.

No functional change, but far easier to reason that no overflow is possible.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -534,6 +534,8 @@ static int hvmemul_virtual_to_linear(
     *reps = min_t(unsigned long, *reps, max_reps);
 
     reg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
+    if ( IS_ERR(reg) )
+        return -PTR_ERR(reg);
 
     if ( (hvmemul_ctxt->ctxt.regs->eflags & X86_EFLAGS_DF) && (*reps > 1) )
     {
@@ -1369,6 +1371,10 @@ static int hvmemul_read_segment(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct segment_register *sreg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
+
+    if ( IS_ERR(sreg) )
+         return -PTR_ERR(sreg);
+
     memcpy(reg, sreg, sizeof(struct segment_register));
     return X86EMUL_OKAY;
 }
@@ -1382,6 +1388,9 @@ static int hvmemul_write_segment(
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct segment_register *sreg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
 
+    if ( IS_ERR(sreg) )
+         return -PTR_ERR(sreg);
+
     memcpy(sreg, reg, sizeof(struct segment_register));
     __set_bit(seg, &hvmemul_ctxt->seg_reg_dirty);
 
@@ -1934,10 +1943,17 @@ void hvm_emulate_writeback(
     }
 }
 
+/*
+ * Callers which pass a known in-range x86_segment can rely on the return
+ * pointer being valid.  Other callers must explicitly check for errors.
+ */
 struct segment_register *hvmemul_get_seg_reg(
     enum x86_segment seg,
     struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
+    if ( seg < 0 || seg >= ARRAY_SIZE(hvmemul_ctxt->seg_reg) )
+        return ERR_PTR(-X86EMUL_UNHANDLEABLE);
+
     if ( !__test_and_set_bit(seg, &hvmemul_ctxt->seg_reg_accessed) )
         hvm_get_segment_register(current, seg, &hvmemul_ctxt->seg_reg[seg]);
     return &hvmemul_ctxt->seg_reg[seg];
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -123,10 +123,19 @@ __initcall(shadow_audit_key_init);
 /* x86 emulator support for the shadow code
  */
 
-struct segment_register *hvm_get_seg_reg(
+/*
+ * Callers which pass a known in-range x86_segment can rely on the return
+ * pointer being valid.  Other callers must explicitly check for errors.
+ */
+static struct segment_register *hvm_get_seg_reg(
     enum x86_segment seg, struct sh_emulate_ctxt *sh_ctxt)
 {
-    struct segment_register *seg_reg = &sh_ctxt->seg_reg[seg];
+    struct segment_register *seg_reg;
+
+    if ( seg < 0 || seg >= ARRAY_SIZE(sh_ctxt->seg_reg) )
+        return ERR_PTR(-X86EMUL_UNHANDLEABLE);
+
+    seg_reg = &sh_ctxt->seg_reg[seg];
     if ( !__test_and_set_bit(seg, &sh_ctxt->valid_seg_regs) )
         hvm_get_segment_register(current, seg, seg_reg);
     return seg_reg;
@@ -143,14 +152,9 @@ static int hvm_translate_linear_addr(
     struct segment_register *reg;
     int okay;
 
-    /*
-     * Can arrive here with non-user segments.  However, no such circumstance
-     * is part of a legitimate pagetable update, so fail the emulation.
-     */
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     reg = hvm_get_seg_reg(seg, sh_ctxt);
+    if ( IS_ERR(reg) )
+        return -PTR_ERR(reg);
 
     okay = hvm_virtual_to_linear_addr(
         seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, paddr);
@@ -253,9 +257,6 @@ hvm_emulate_write(enum x86_segment seg,
     unsigned long addr;
     int rc;
 
-    if ( !is_x86_user_segment(seg) )
-        return X86EMUL_UNHANDLEABLE;
-
     /* How many emulations could we save if we unshadowed on stack writes? */
     if ( seg == x86_seg_ss )
         perfc_incr(shadow_fault_emulate_stack);
@@ -283,7 +284,7 @@ hvm_emulate_cmpxchg(enum x86_segment seg
     unsigned long addr, old, new;
     int rc;
 
-    if ( !is_x86_user_segment(seg) || bytes > sizeof(long) )
+    if ( bytes > sizeof(long) )
         return X86EMUL_UNHANDLEABLE;
 
     rc = hvm_translate_linear_addr(
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -740,8 +740,6 @@ const struct x86_emulate_ops *shadow_ini
     struct sh_emulate_ctxt *sh_ctxt, struct cpu_user_regs *regs);
 void shadow_continue_emulation(
     struct sh_emulate_ctxt *sh_ctxt, struct cpu_user_regs *regs);
-struct segment_register *hvm_get_seg_reg(
-    enum x86_segment seg, struct sh_emulate_ctxt *sh_ctxt);
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB)
 /**************************************************************************/
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -13,6 +13,7 @@
 #define __ASM_X86_HVM_EMULATE_H__
 
 #include <xen/config.h>
+#include <xen/err.h>
 #include <asm/hvm/hvm.h>
 #include <asm/x86_emulate.h>
 
