From: Xen.org security team <security@xen.org>
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
Cc: "Xen.org security team" <security@xen.org>
Subject: Xen Security Advisory 170 (CVE-2016-2271) - VMX: guest user mode may crash guest with non-canonical RIP
Date: Wed, 17 Feb 2016 12:28:04 +0000	[thread overview]
Message-ID: <E1aW1DA-00065m-A8@xenbits.xen.org> (raw)

[-- Attachment #1: Type: text/plain, Size: 3609 bytes --]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

            Xen Security Advisory CVE-2016-2271 / XSA-170
                              version 3

      VMX: guest user mode may crash guest with non-canonical RIP

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

VMX refuses attempts to enter a guest with an instruction pointer which
doesn't satisfy certain requirements.  In particular, the instruction
pointer needs to be canonical when entering a guest currently in 64-bit
mode.  This is the case even if the VM entry information specifies an
exception to be injected immediately (in which case the bad instruction
pointer might never be used for anything other than being pushed onto
the exception handler's stack).  Provided the guest OS allows user mode to
map the virtual memory space immediately below the canonical/non-
canonical address boundary, a non-canonical instruction pointer can
result even from normal user mode execution. VM entry failure, however,
is fatal to the guest.
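
For illustration, here is a minimal standalone sketch (not part of the
advisory or the patches) of what "canonical" means for a 64-bit
instruction pointer, assuming 48 implemented virtual address bits
(VADDR_BITS == 48) as on the affected hardware.  The helper name is
illustrative only and merely mirrors the intent of Xen's
is_canonical_address():

/* Build on any LP64 host; prints "1 0". */
#include <stdint.h>
#include <stdio.h>

#define VADDR_BITS 48

static int is_canonical(uint64_t addr)
{
    /* Canonical means bits 63:47 are all equal, i.e. sign-extending the
     * low VADDR_BITS bits reproduces the original value. */
    return (uint64_t)((int64_t)(addr << (64 - VADDR_BITS)) >>
                      (64 - VADDR_BITS)) == addr;
}

int main(void)
{
    uint64_t ok  = 0x00007fffffffffffULL; /* last canonical user address */
    uint64_t bad = 0x0000800000000000ULL; /* first address in the hole   */

    /* A guest mapping code right up to 'ok' can end up with RIP at
     * 'bad' after sequential execution, which VMX refuses on VM entry. */
    printf("%d %d\n", is_canonical(ok), is_canonical(bad));
    return 0;
}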

IMPACT
======

Malicious HVM guest user mode code may be able to crash the guest.

VULNERABLE SYSTEMS
==================

All Xen versions are affected.

Only systems using Intel or Cyrix CPUs are affected. ARM and AMD
systems are unaffected.

Only HVM guests are affected.

MITIGATION
==========

Running only PV guests will avoid this vulnerability.

Running HVM guests only on AMD hardware will also avoid this
vulnerability.

CREDITS
=======

This issue was discovered by Ling Liu of Qihoo 360 Inc.

RESOLUTION
==========

Applying the appropriate attached patch works around this issue.  Note
that it does so in a way which isn't architecturally correct, but no
better solution has been found (nor suggested by Intel).

xsa170.patch           xen-unstable, Xen 4.6.x
xsa170-4.5.patch       Xen 4.5.x, Xen 4.4.x
xsa170-4.3.patch       Xen 4.3.x

$ sha256sum xsa170*
77b4b14b2c93da5f68e724cf74e1616f7df2e78305f66d164b3de2d980221a9a  xsa170.patch
b35679bf7a35615d827efafff8d13c35ceec1184212e3c8ba110722b9ae8426f  xsa170-4.3.patch
1df068fb439c7edc1e86dfa9ea3b9ae99b58cdc3ac874b96cdf63b26ef9a6b98  xsa170-4.5.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJWxGa0AAoJEIP+FMlX6CvZ3rkIAIo+pvKqkNbHjalgGpP4BVe7
+7tuVnL74wt5Dt4AuOFyPLnEaHbp5UkIKK++eP/urFCz5+/LbOqcWnfiQdWMLQ/t
17NX2CMSYUCwUAkMMjvbKvGM3W8AJ85naIQho9KQSPbY1/Q51jDS5bLT06B2iRr4
njML2ii2OhOTGAvC2XmnidFNvLGQxlfeeC75O9dbCFENSYn5WbdmHonTnK8qm22H
eEvLlzg4D6yAmEaqHHZJ3bz1qtTw5FDNm/0tdZ1LO7lMuK01nMHSMmWG/Agc7219
lQH22N0+YTtgQKf65QciEThEnvTeDpeq84m64GqVhwzwssl1JrywrSsVkaQOnKA=
=Ca+d
-----END PGP SIGNATURE-----

[-- Attachment #2: xsa170.patch --]
[-- Type: application/octet-stream, Size: 3187 bytes --]

x86/VMX: sanitize rIP before re-entering guest

... to prevent guest user mode arranging for a guest crash (due to
failed VM entry). (On the AMD system I checked, hardware is doing
exactly the canonicalization being added here.)

Note that fixing this in an architecturally correct way would be quite
a bit more involved: Making the x86 instruction emulator check all
branch targets for validity, plus dealing with invalid rIP resulting
from update_guest_eip() or incoming directly during a VM exit. The only
way to get the latter right would be by not having hardware do the
injection.

Note further that there are two early returns from
vmx_vmexit_handler(): One (through vmx_failed_vmentry()) leads to
domain_crash() anyway, and the other covers real mode only and can
neither occur with a non-canonical rIP nor result in an altered rIP,
so we don't need to force those paths through the checking logic.

This is XSA-170.

Reported-by: 刘令 <liuling-it@360.cn>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2968,7 +2968,7 @@ static int vmx_handle_apic_write(void)
 void vmx_vmexit_handler(struct cpu_user_regs *regs)
 {
     unsigned long exit_qualification, exit_reason, idtv_info, intr_info = 0;
-    unsigned int vector = 0;
+    unsigned int vector = 0, mode;
     struct vcpu *v = current;
 
     __vmread(GUEST_RIP,    &regs->rip);
@@ -3566,6 +3566,41 @@ void vmx_vmexit_handler(struct cpu_user_
 out:
     if ( nestedhvm_vcpu_in_guestmode(v) )
         nvmx_idtv_handling();
+
+    /*
+     * VM entry will fail (causing the guest to get crashed) if rIP (and
+     * rFLAGS, but we don't have an issue there) doesn't meet certain
+     * criteria. As we must not allow less than fully privileged mode to have
+     * such an effect on the domain, we correct rIP in that case (accepting
+     * this not being architecturally correct behavior, as the injected #GP
+     * fault will then not see the correct [invalid] return address).
+     * And since we know the guest will crash, we crash it right away if it
+     * already is in most privileged mode.
+     */
+    mode = vmx_guest_x86_mode(v);
+    if ( mode == 8 ? !is_canonical_address(regs->rip)
+                   : regs->rip != regs->_eip )
+    {
+        struct segment_register ss;
+
+        gprintk(XENLOG_WARNING, "Bad rIP %lx for mode %u\n", regs->rip, mode);
+
+        vmx_get_segment_register(v, x86_seg_ss, &ss);
+        if ( ss.attr.fields.dpl )
+        {
+            __vmread(VM_ENTRY_INTR_INFO, &intr_info);
+            if ( !(intr_info & INTR_INFO_VALID_MASK) )
+                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            /* Need to fix rIP nevertheless. */
+            if ( mode == 8 )
+                regs->rip = (long)(regs->rip << (64 - VADDR_BITS)) >>
+                            (64 - VADDR_BITS);
+            else
+                regs->rip = regs->_eip;
+        }
+        else
+            domain_crash(v->domain);
+    }
 }
 
 void vmx_vmenter_helper(const struct cpu_user_regs *regs)

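The core of the fix above is the shift pair that folds a bad rIP back
into canonical form.  The following standalone sketch (illustrative
only, not part of any of the attached patches, and using a hypothetical
helper name) shows the same sign-extension trick with VADDR_BITS == 48:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define VADDR_BITS 48

/* Replicates bit 47 of 'rip' through bits 63:48 -- the same operation
 * the patch performs in place with
 * "(long)(regs->rip << (64 - VADDR_BITS)) >> (64 - VADDR_BITS)". */
static uint64_t canonicalize(uint64_t rip)
{
    return (uint64_t)((int64_t)(rip << (64 - VADDR_BITS)) >>
                      (64 - VADDR_BITS));
}

int main(void)
{
    uint64_t bad = 0x0000800000000123ULL;   /* non-canonical */

    /* Bit 47 is set, so the value is sign-extended upwards:
     * prints 0x800000000123 -> 0xffff800000000123. */
    printf("%#" PRIx64 " -> %#" PRIx64 "\n", bad, canonicalize(bad));
    return 0;
}

This also shows why, as the comment in the patch notes, the injected #GP
fault sees a canonicalized rather than the original (invalid) return
address.
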
[-- Attachment #3: xsa170-4.3.patch --]
[-- Type: application/octet-stream, Size: 3139 bytes --]

x86/VMX: sanitize rIP before re-entering guest

... to prevent guest user mode arranging for a guest crash (due to
failed VM entry). (On the AMD system I checked, hardware is doing
exactly the canonicalization being added here.)

Note that fixing this in an architecturally correct way would be quite
a bit more involved: Making the x86 instruction emulator check all
branch targets for validity, plus dealing with invalid rIP resulting
from update_guest_eip() or incoming directly during a VM exit. The only
way to get the latter right would be by not having hardware do the
injection.

Note further that there are two early returns from
vmx_vmexit_handler(): One (through vmx_failed_vmentry()) leads to
domain_crash() anyway, and the other covers real mode only and can
neither occur with a non-canonical rIP nor result in an altered rIP,
so we don't need to force those paths through the checking logic.

This is XSA-170.

Reported-by: 刘令 <liuling-it@360.cn>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2486,7 +2486,7 @@ void vmx_handle_EOI_induced_exit(struct
 
 void vmx_vmexit_handler(struct cpu_user_regs *regs)
 {
-    unsigned int exit_reason, idtv_info, intr_info = 0, vector = 0;
+    unsigned int exit_reason, idtv_info, intr_info = 0, vector = 0, mode;
     unsigned long exit_qualification, inst_len = 0;
     struct vcpu *v = current;
 
@@ -2998,6 +2998,40 @@ void vmx_vmexit_handler(struct cpu_user_
 out:
     if ( nestedhvm_vcpu_in_guestmode(v) )
         nvmx_idtv_handling();
+
+    /*
+     * VM entry will fail (causing the guest to get crashed) if rIP (and
+     * rFLAGS, but we don't have an issue there) doesn't meet certain
+     * criteria. As we must not allow less than fully privileged mode to have
+     * such an effect on the domain, we correct rIP in that case (accepting
+     * this not being architecturally correct behavior, as the injected #GP
+     * fault will then not see the correct [invalid] return address).
+     * And since we know the guest will crash, we crash it right away if it
+     * already is in most privileged mode.
+     */
+    mode = vmx_guest_x86_mode(v);
+    if ( mode == 8 ? !is_canonical_address(regs->rip)
+                   : regs->rip != regs->_eip )
+    {
+        struct segment_register ss;
+
+        gdprintk(XENLOG_WARNING, "Bad rIP %lx for mode %u\n", regs->rip, mode);
+
+        vmx_get_segment_register(v, x86_seg_ss, &ss);
+        if ( ss.attr.fields.dpl )
+        {
+            if ( !(__vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK) )
+                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            /* Need to fix rIP nevertheless. */
+            if ( mode == 8 )
+                regs->rip = (long)(regs->rip << (64 - VADDR_BITS)) >>
+                            (64 - VADDR_BITS);
+            else
+                regs->rip = regs->_eip;
+        }
+        else
+            domain_crash(v->domain);
+    }
 }
 
 void vmx_vmenter_helper(void)

[-- Attachment #4: xsa170-4.5.patch --]
[-- Type: application/octet-stream, Size: 3189 bytes --]

x86/VMX: sanitize rIP before re-entering guest

... to prevent guest user mode arranging for a guest crash (due to
failed VM entry). (On the AMD system I checked, hardware is doing
exactly the canonicalization being added here.)

Note that fixing this in an architecturally correct way would be quite
a bit more involved: Making the x86 instruction emulator check all
branch targets for validity, plus dealing with invalid rIP resulting
from update_guest_eip() or incoming directly during a VM exit. The only
way to get the latter right would be by not having hardware do the
injection.

Note further that there are two early returns from
vmx_vmexit_handler(): One (through vmx_failed_vmentry()) leads to
domain_crash() anyway, and the other covers real mode only and can
neither occur with a non-canonical rIP nor result in an altered rIP,
so we don't need to force those paths through the checking logic.

This is XSA-170.

Reported-by: 刘令 <liuling-it@360.cn>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2675,7 +2675,7 @@ void vmx_handle_EOI_induced_exit(struct
 void vmx_vmexit_handler(struct cpu_user_regs *regs)
 {
     unsigned long exit_qualification, exit_reason, idtv_info, intr_info = 0;
-    unsigned int vector = 0;
+    unsigned int vector = 0, mode;
     struct vcpu *v = current;
 
     __vmread(GUEST_RIP,    &regs->rip);
@@ -3219,6 +3219,41 @@ void vmx_vmexit_handler(struct cpu_user_
 out:
     if ( nestedhvm_vcpu_in_guestmode(v) )
         nvmx_idtv_handling();
+
+    /*
+     * VM entry will fail (causing the guest to get crashed) if rIP (and
+     * rFLAGS, but we don't have an issue there) doesn't meet certain
+     * criteria. As we must not allow less than fully privileged mode to have
+     * such an effect on the domain, we correct rIP in that case (accepting
+     * this not being architecturally correct behavior, as the injected #GP
+     * fault will then not see the correct [invalid] return address).
+     * And since we know the guest will crash, we crash it right away if it
+     * already is in most privileged mode.
+     */
+    mode = vmx_guest_x86_mode(v);
+    if ( mode == 8 ? !is_canonical_address(regs->rip)
+                   : regs->rip != regs->_eip )
+    {
+        struct segment_register ss;
+
+        gdprintk(XENLOG_WARNING, "Bad rIP %lx for mode %u\n", regs->rip, mode);
+
+        vmx_get_segment_register(v, x86_seg_ss, &ss);
+        if ( ss.attr.fields.dpl )
+        {
+            __vmread(VM_ENTRY_INTR_INFO, &intr_info);
+            if ( !(intr_info & INTR_INFO_VALID_MASK) )
+                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            /* Need to fix rIP nevertheless. */
+            if ( mode == 8 )
+                regs->rip = (long)(regs->rip << (64 - VADDR_BITS)) >>
+                            (64 - VADDR_BITS);
+            else
+                regs->rip = regs->_eip;
+        }
+        else
+            domain_crash(v->domain);
+    }
 }
 
 void vmx_vmenter_helper(const struct cpu_user_regs *regs)

[-- Attachment #5: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
