* Xen Security Advisory 216 (CVE-2017-10911) - blkif responses leak backend stack data
@ 2017-07-07 13:54 Xen.org security team
From: Xen.org security team @ 2017-07-07 13:54 UTC (permalink / raw)
To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team
[-- Attachment #1: Type: text/plain, Size: 4544 bytes --]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Xen Security Advisory CVE-2017-10911 / XSA-216
version 5
blkif responses leak backend stack data
UPDATES IN VERSION 5
====================
CVE assigned.
ISSUE DESCRIPTION
=================
The block interface response structure has some discontiguous fields.
Certain backends populate the structure fields of an otherwise
uninitialized instance of this structure on their stacks, leaking
data through the (internal or trailing) padding field.
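The following is a minimal, self-contained C sketch (not part of the advisory)
illustrating the problem, using the x86_64 flavour of the response structure as
it appears in the patches below: only the named fields of a stack-local instance
are written, and a memcpy() of the whole structure then also copies the
uninitialized padding bytes into memory the guest can read (a local array stands
in for the shared ring slot here).

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Field layout copied from the (now removed) x86_64 response structure. */
struct blkif_x86_64_response {
    uint64_t __attribute__((__aligned__(8))) id;
    uint8_t  operation;             /* copied from request */
    int16_t  status;                /* BLKIF_RSP_??? */
};

int main(void)
{
    struct blkif_x86_64_response resp;   /* uninitialized, as in the old code */
    uint8_t ring_slot[sizeof(resp)];     /* stands in for the shared ring slot */

    /* 1 byte of internal padding between 'operation' and 'status' ... */
    printf("internal padding: %zu byte(s)\n",
           offsetof(struct blkif_x86_64_response, status)
           - offsetof(struct blkif_x86_64_response, operation) - sizeof(uint8_t));
    /* ... plus 4 bytes of trailing padding up to the 16-byte structure size. */
    printf("trailing padding: %zu byte(s)\n",
           sizeof(struct blkif_x86_64_response)
           - offsetof(struct blkif_x86_64_response, status) - sizeof(int16_t));

    /* The vulnerable pattern: assign the named fields only ... */
    resp.id        = 0;
    resp.operation = 0;
    resp.status    = 0;
    /* ... then copy all 16 bytes; the padding still holds stale stack data. */
    memcpy(ring_slot, &resp, sizeof(resp));
    (void)ring_slot;
    return 0;
}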
IMPACT
======
A malicious unprivileged guest may be able to obtain sensitive
information from the host or other guests.
VULNERABLE SYSTEMS
==================
All Linux versions supporting the xen-blkback, blkback, or blktap
drivers are vulnerable.
FreeBSD, NetBSD and Windows (with or without PV drivers) are not
vulnerable (either because they do not have backends at all, or
because they use a different implementation technique which does not
suffer from this problem).
All qemu versions supporting the Xen block backend are vulnerable. The
qemu-xen-traditional code base does not include such code, so is not
vulnerable. Note that an instance of qemu will be spawned to provide
the backend for most non-raw-format disks; so you may need to apply the
patch to qemu even if you use only PV guests.
MITIGATION
==========
There is no mitigation available for x86 PV and ARM guests.
For x86 HVM guests it may be possible to change the guest
configuration such that a fully virtualized disk is made available
instead. However, this would normally entail changes inside the
guest itself.
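As a purely illustrative sketch (not taken from the advisory; the path and
device name are hypothetical, and the exact syntax depends on the toolstack
version), an xl disk stanza exposing an emulated IDE disk to an HVM guest
might look like:

disk = [ 'format=raw, vdev=hda, access=rw, target=/var/lib/xen/images/guest.img' ]

Even then, the guest would normally also have to be configured not to attach
its PV block frontend (e.g. xen-blkfront on Linux), which is the in-guest
change referred to above.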
CREDITS
=======
This issue was discovered by Anthony Perard of Citrix.
RESOLUTION
==========
Applying the appropriate attached patch resolves this issue.
xsa216-linux-4.11.patch Linux 4.5 ... 4.11
xsa216-linux-4.4.patch Linux 3.3 ... 4.4
xsa216-qemuu.patch qemu-upstream master, 4.8
xsa216-qemuu-4.7.patch qemu-upstream 4.7, 4.6
xsa216-qemuu-4.5.patch qemu-upstream 4.5
xsa216-linux-2.6.18-xen.patch linux-2.6.18-xen.hg
$ sha256sum xsa216*
d316e16f8da2078966e9d7d516dd0a9ed5a29c3bc479974374c8fa778859913d xsa216-linux-2.6.18-xen.patch
4440fe324b61baf0f3f5a73352c4d9ac6f94917e216d8421263a5e67445852db xsa216-linux-4.4.patch
eb24bfc0303e13e08fd3710463aea139a92a3f83db7f35119c4d3831154a6453 xsa216-linux-4.11.patch
b4b8f68fa05d718c5be7023c84d942e43725bcc563ea15556ee9646f6f9bf7e7 xsa216-qemuu.patch
4fc3665ff07ec79fb31ac66a3fd360a45b7ec546c549c04284f0128ad0c5beba xsa216-qemuu-4.5.patch
a0e0dfd5ea2643ae14c220124194388017a3656db3e6ce430913cda800c43aad xsa216-qemuu-4.7.patch
$
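As an illustration only (pick the patch matching your tree, per the list
above), a patch such as the Linux one can typically be applied from the top
of the source tree with:

$ patch -p1 --dry-run < xsa216-linux-4.4.patch   # check that it applies cleanly
$ patch -p1 < xsa216-linux-4.4.patch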
DEPLOYMENT DURING EMBARGO
=========================
Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.
However, deployment of the mitigation is NOT permitted (except where
all the affected systems and VMs are administered and used only by
organisations which are members of the Xen Project Security Issues
Predisclosure List). Specifically, deployment on public cloud systems
is NOT permitted. This is because this produces a guest-visible
change which will indicate which component contains the vulnerability.
Additionally, distribution of updated software is prohibited (except to other
members of the predisclosure list).
Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.
(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable. This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)
For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBCAAGBQJZX5IiAAoJEIP+FMlX6CvZdK8IALydeCfUgLpTzeVaRidXkO9M
dlChA1fXn5ZRlQxvGGIzatkl2Em99+JfIyW21AoVqFAyIYbYkbV7zmp82HpHAZfB
Ib5tFUS4ki1paXXcBtQSvgsz7Sxh5obZnCzyguOcSthZ0/Ude5mh9ImsnKepNxQi
GbMBY9xsBv+tclRLiaGUIBgKwtNc0AXpQhWAkbAEWjdYSN2CGsS37Z9Hi0GOoID/
Z49g7/shKDyrHxR1ph0uFqZOkCW8Um3qpORzwHIwpsqleY7Y5E9Ib/QXDOV7wJ1m
IDhkSmYf6kXjJ1yhwjRw4UgsGWj/TDyi9d6HxYU9DVHY1b5lWuNjbbyeMuVpR8A=
=18b8
-----END PGP SIGNATURE-----
[-- Attachment #2: xsa216-linux-2.6.18-xen.patch --]
[-- Type: application/octet-stream, Size: 5698 bytes --]
From: Jan Beulich <jbeulich@suse.com>
Subject: blkback/blktap: don't leak stack data via response ring
Rather than constructing a local structure instance on the stack, fill
the fields directly on the shared ring, just like other backends do.
Build on the fact that all response structure flavors are actually
identical (the old code did make this assumption too).
This is XSA-216.
Reported-by: Anthony Perard <anthony.perard@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
--- a/drivers/xen/blkback/blkback.c
+++ b/drivers/xen/blkback/blkback.c
@@ -606,33 +606,34 @@ static void _dispatch_rw_block_io(blkif_
static void make_response(blkif_t *blkif, u64 id,
unsigned short op, int st)
{
- blkif_response_t resp;
+ blkif_response_t *resp;
unsigned long flags;
blkif_back_rings_t *blk_rings = &blkif->blk_rings;
int notify;
- resp.id = id;
- resp.operation = op;
- resp.status = st;
-
spin_lock_irqsave(&blkif->blk_ring_lock, flags);
/* Place on the response ring for the relevant domain. */
switch (blkif->blk_protocol) {
case BLKIF_PROTOCOL_NATIVE:
- memcpy(RING_GET_RESPONSE(&blk_rings->native, blk_rings->native.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->native,
+ blk_rings->native.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_32:
- memcpy(RING_GET_RESPONSE(&blk_rings->x86_32, blk_rings->x86_32.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->x86_32,
+ blk_rings->x86_32.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_64:
- memcpy(RING_GET_RESPONSE(&blk_rings->x86_64, blk_rings->x86_64.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->x86_64,
+ blk_rings->x86_64.rsp_prod_pvt);
break;
default:
BUG();
}
+
+ resp->id = id;
+ resp->operation = op;
+ resp->status = st;
+
blk_rings->common.rsp_prod_pvt++;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, notify);
spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
--- a/drivers/xen/blktap/blktap.c
+++ b/drivers/xen/blktap/blktap.c
@@ -1678,36 +1678,34 @@ static void dispatch_rw_block_io(blkif_t
static void make_response(blkif_t *blkif, u64 id,
unsigned short op, int st)
{
- blkif_response_t resp;
+ blkif_response_t *resp;
unsigned long flags;
blkif_back_rings_t *blk_rings = &blkif->blk_rings;
int notify;
- resp.id = id;
- resp.operation = op;
- resp.status = st;
-
spin_lock_irqsave(&blkif->blk_ring_lock, flags);
/* Place on the response ring for the relevant domain. */
switch (blkif->blk_protocol) {
case BLKIF_PROTOCOL_NATIVE:
- memcpy(RING_GET_RESPONSE(&blk_rings->native,
- blk_rings->native.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->native,
+ blk_rings->native.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_32:
- memcpy(RING_GET_RESPONSE(&blk_rings->x86_32,
- blk_rings->x86_32.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->x86_32,
+ blk_rings->x86_32.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_64:
- memcpy(RING_GET_RESPONSE(&blk_rings->x86_64,
- blk_rings->x86_64.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->x86_64,
+ blk_rings->x86_64.rsp_prod_pvt);
break;
default:
BUG();
}
+
+ resp->id = id;
+ resp->operation = op;
+ resp->status = st;
+
blk_rings->common.rsp_prod_pvt++;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, notify);
--- a/include/xen/blkif.h
+++ b/include/xen/blkif.h
@@ -32,9 +32,6 @@
struct blkif_common_request {
char dummy;
};
-struct blkif_common_response {
- char dummy;
-};
/* i386 protocol version */
#pragma pack(push, 4)
@@ -46,13 +43,7 @@ union blkif_x86_32_union {
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
-struct blkif_x86_32_response {
- uint64_t id; /* copied from request */
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
typedef struct blkif_x86_32_request blkif_x86_32_request_t;
-typedef struct blkif_x86_32_response blkif_x86_32_response_t;
#pragma pack(pop)
/* x86_64 protocol version */
@@ -64,18 +55,15 @@ union blkif_x86_64_union {
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
-struct blkif_x86_64_response {
- uint64_t __attribute__((__aligned__(8))) id;
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
typedef struct blkif_x86_64_request blkif_x86_64_request_t;
-typedef struct blkif_x86_64_response blkif_x86_64_response_t;
#define blkif_native_sring blkif_sring
-DEFINE_RING_TYPES(blkif_common, struct blkif_common_request, struct blkif_common_response);
-DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request, struct blkif_x86_32_response);
-DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request, struct blkif_x86_64_response);
+DEFINE_RING_TYPES(blkif_common, struct blkif_common_request,
+ struct blkif_response);
+DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
+ struct blkif_response __attribute__((__packed__)));
+DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
+ struct blkif_response);
union blkif_back_rings {
blkif_back_ring_t native;
[-- Attachment #3: xsa216-linux-4.4.patch --]
[-- Type: application/octet-stream, Size: 3699 bytes --]
xen-blkback: don't leak stack data via response ring
Rather than constructing a local structure instance on the stack, fill
the fields directly on the shared ring, just like other backends do.
Build on the fact that all response structure flavors are actually
identical (the old code did make this assumption too).
This is XSA-216.
Reported-by: Anthony Perard <anthony.perard@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -1410,33 +1410,34 @@ static int dispatch_rw_block_io(struct x
static void make_response(struct xen_blkif *blkif, u64 id,
unsigned short op, int st)
{
- struct blkif_response resp;
+ struct blkif_response *resp;
unsigned long flags;
union blkif_back_rings *blk_rings = &blkif->blk_rings;
int notify;
- resp.id = id;
- resp.operation = op;
- resp.status = st;
-
spin_lock_irqsave(&blkif->blk_ring_lock, flags);
/* Place on the response ring for the relevant domain. */
switch (blkif->blk_protocol) {
case BLKIF_PROTOCOL_NATIVE:
- memcpy(RING_GET_RESPONSE(&blk_rings->native, blk_rings->native.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->native,
+ blk_rings->native.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_32:
- memcpy(RING_GET_RESPONSE(&blk_rings->x86_32, blk_rings->x86_32.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->x86_32,
+ blk_rings->x86_32.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_64:
- memcpy(RING_GET_RESPONSE(&blk_rings->x86_64, blk_rings->x86_64.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->x86_64,
+ blk_rings->x86_64.rsp_prod_pvt);
break;
default:
BUG();
}
+
+ resp->id = id;
+ resp->operation = op;
+ resp->status = st;
+
blk_rings->common.rsp_prod_pvt++;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, notify);
spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -74,9 +74,8 @@ extern unsigned int xen_blkif_max_ring_o
struct blkif_common_request {
char dummy;
};
-struct blkif_common_response {
- char dummy;
-};
+
+/* i386 protocol version */
struct blkif_x86_32_request_rw {
uint8_t nr_segments; /* number of segments */
@@ -128,14 +127,6 @@ struct blkif_x86_32_request {
} u;
} __attribute__((__packed__));
-/* i386 protocol version */
-#pragma pack(push, 4)
-struct blkif_x86_32_response {
- uint64_t id; /* copied from request */
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
-#pragma pack(pop)
/* x86_64 protocol version */
struct blkif_x86_64_request_rw {
@@ -192,18 +183,12 @@ struct blkif_x86_64_request {
} u;
} __attribute__((__packed__));
-struct blkif_x86_64_response {
- uint64_t __attribute__((__aligned__(8))) id;
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
-
DEFINE_RING_TYPES(blkif_common, struct blkif_common_request,
- struct blkif_common_response);
+ struct blkif_response);
DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
- struct blkif_x86_32_response);
+ struct blkif_response __packed);
DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
- struct blkif_x86_64_response);
+ struct blkif_response);
union blkif_back_rings {
struct blkif_back_ring native;
[-- Attachment #4: xsa216-linux-4.11.patch --]
[-- Type: application/octet-stream, Size: 3764 bytes --]
From: Jan Beulich <jbeulich@suse.com>
Subject: xen-blkback: don't leak stack data via response ring
Rather than constructing a local structure instance on the stack, fill
the fields directly on the shared ring, just like other backends do.
Build on the fact that all response structure flavors are actually
identical (the old code did make this assumption too).
This is XSA-216.
Reported-by: Anthony Perard <anthony.perard@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -1436,34 +1436,35 @@ static int dispatch_rw_block_io(struct x
static void make_response(struct xen_blkif_ring *ring, u64 id,
unsigned short op, int st)
{
- struct blkif_response resp;
+ struct blkif_response *resp;
unsigned long flags;
union blkif_back_rings *blk_rings;
int notify;
- resp.id = id;
- resp.operation = op;
- resp.status = st;
-
spin_lock_irqsave(&ring->blk_ring_lock, flags);
blk_rings = &ring->blk_rings;
/* Place on the response ring for the relevant domain. */
switch (ring->blkif->blk_protocol) {
case BLKIF_PROTOCOL_NATIVE:
- memcpy(RING_GET_RESPONSE(&blk_rings->native, blk_rings->native.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->native,
+ blk_rings->native.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_32:
- memcpy(RING_GET_RESPONSE(&blk_rings->x86_32, blk_rings->x86_32.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->x86_32,
+ blk_rings->x86_32.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_64:
- memcpy(RING_GET_RESPONSE(&blk_rings->x86_64, blk_rings->x86_64.rsp_prod_pvt),
- &resp, sizeof(resp));
+ resp = RING_GET_RESPONSE(&blk_rings->x86_64,
+ blk_rings->x86_64.rsp_prod_pvt);
break;
default:
BUG();
}
+
+ resp->id = id;
+ resp->operation = op;
+ resp->status = st;
+
blk_rings->common.rsp_prod_pvt++;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, notify);
spin_unlock_irqrestore(&ring->blk_ring_lock, flags);
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -75,9 +75,8 @@ extern unsigned int xenblk_max_queues;
struct blkif_common_request {
char dummy;
};
-struct blkif_common_response {
- char dummy;
-};
+
+/* i386 protocol version */
struct blkif_x86_32_request_rw {
uint8_t nr_segments; /* number of segments */
@@ -129,14 +128,6 @@ struct blkif_x86_32_request {
} u;
} __attribute__((__packed__));
-/* i386 protocol version */
-#pragma pack(push, 4)
-struct blkif_x86_32_response {
- uint64_t id; /* copied from request */
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
-#pragma pack(pop)
/* x86_64 protocol version */
struct blkif_x86_64_request_rw {
@@ -193,18 +184,12 @@ struct blkif_x86_64_request {
} u;
} __attribute__((__packed__));
-struct blkif_x86_64_response {
- uint64_t __attribute__((__aligned__(8))) id;
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
-
DEFINE_RING_TYPES(blkif_common, struct blkif_common_request,
- struct blkif_common_response);
+ struct blkif_response);
DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
- struct blkif_x86_32_response);
+ struct blkif_response __packed);
DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
- struct blkif_x86_64_response);
+ struct blkif_response);
union blkif_back_rings {
struct blkif_back_ring native;
[-- Attachment #5: xsa216-qemuu.patch --]
[-- Type: application/octet-stream, Size: 4455 bytes --]
From: Jan Beulich <jbeulich@suse.com>
Subject: xen/disk: don't leak stack data via response ring
Rather than constructing a local structure instance on the stack, fill
the fields directly on the shared ring, just like other (Linux)
backends do. Build on the fact that all response structure flavors are
actually identical (the old code did make this assumption too).
This is XSA-216.
Reported-by: Anthony Perard <anthony.perard@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
v2: Add QEMU_PACKED to fix handling 32-bit guests by 64-bit qemu.
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -14,9 +14,6 @@
struct blkif_common_request {
char dummy;
};
-struct blkif_common_response {
- char dummy;
-};
/* i386 protocol version */
#pragma pack(push, 4)
@@ -36,13 +33,7 @@ struct blkif_x86_32_request_discard {
blkif_sector_t sector_number; /* start sector idx on disk (r/w only) */
uint64_t nr_sectors; /* # of contiguous sectors to discard */
};
-struct blkif_x86_32_response {
- uint64_t id; /* copied from request */
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
typedef struct blkif_x86_32_request blkif_x86_32_request_t;
-typedef struct blkif_x86_32_response blkif_x86_32_response_t;
#pragma pack(pop)
/* x86_64 protocol version */
@@ -62,20 +53,14 @@ struct blkif_x86_64_request_discard {
blkif_sector_t sector_number; /* start sector idx on disk (r/w only) */
uint64_t nr_sectors; /* # of contiguous sectors to discard */
};
-struct blkif_x86_64_response {
- uint64_t __attribute__((__aligned__(8))) id;
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
typedef struct blkif_x86_64_request blkif_x86_64_request_t;
-typedef struct blkif_x86_64_response blkif_x86_64_response_t;
DEFINE_RING_TYPES(blkif_common, struct blkif_common_request,
- struct blkif_common_response);
+ struct blkif_response);
DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
- struct blkif_x86_32_response);
+ struct blkif_response QEMU_PACKED);
DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
- struct blkif_x86_64_response);
+ struct blkif_response);
union blkif_back_rings {
blkif_back_ring_t native;
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -769,31 +769,30 @@ static int blk_send_response_one(struct
struct XenBlkDev *blkdev = ioreq->blkdev;
int send_notify = 0;
int have_requests = 0;
- blkif_response_t resp;
- void *dst;
-
- resp.id = ioreq->req.id;
- resp.operation = ioreq->req.operation;
- resp.status = ioreq->status;
+ blkif_response_t *resp;
/* Place on the response ring for the relevant domain. */
switch (blkdev->protocol) {
case BLKIF_PROTOCOL_NATIVE:
- dst = RING_GET_RESPONSE(&blkdev->rings.native, blkdev->rings.native.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.native,
+ blkdev->rings.native.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_32:
- dst = RING_GET_RESPONSE(&blkdev->rings.x86_32_part,
- blkdev->rings.x86_32_part.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.x86_32_part,
+ blkdev->rings.x86_32_part.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_64:
- dst = RING_GET_RESPONSE(&blkdev->rings.x86_64_part,
- blkdev->rings.x86_64_part.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.x86_64_part,
+ blkdev->rings.x86_64_part.rsp_prod_pvt);
break;
default:
- dst = NULL;
return 0;
}
- memcpy(dst, &resp, sizeof(resp));
+
+ resp->id = ioreq->req.id;
+ resp->operation = ioreq->req.operation;
+ resp->status = ioreq->status;
+
blkdev->rings.common.rsp_prod_pvt++;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blkdev->rings.common, send_notify);
[-- Attachment #6: xsa216-qemuu-4.5.patch --]
[-- Type: application/octet-stream, Size: 4431 bytes --]
From: Jan Beulich <jbeulich@suse.com>
Subject: xen/disk: don't leak stack data via response ring
Rather than constructing a local structure instance on the stack, fill
the fields directly on the shared ring, just like other (Linux)
backends do. Build on the fact that all response structure flavors are
actually identical (the old code did make this assumption too).
This is XSA-216.
Reported-by: Anthony Perard <anthony.perard@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -12,9 +12,6 @@
struct blkif_common_request {
char dummy;
};
-struct blkif_common_response {
- char dummy;
-};
/* i386 protocol version */
#pragma pack(push, 4)
@@ -26,13 +23,7 @@ struct blkif_x86_32_request {
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
-struct blkif_x86_32_response {
- uint64_t id; /* copied from request */
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
typedef struct blkif_x86_32_request blkif_x86_32_request_t;
-typedef struct blkif_x86_32_response blkif_x86_32_response_t;
#pragma pack(pop)
/* x86_64 protocol version */
@@ -44,17 +35,14 @@ struct blkif_x86_64_request {
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
-struct blkif_x86_64_response {
- uint64_t __attribute__((__aligned__(8))) id;
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
typedef struct blkif_x86_64_request blkif_x86_64_request_t;
-typedef struct blkif_x86_64_response blkif_x86_64_response_t;
-DEFINE_RING_TYPES(blkif_common, struct blkif_common_request, struct blkif_common_response);
-DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request, struct blkif_x86_32_response);
-DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request, struct blkif_x86_64_response);
+DEFINE_RING_TYPES(blkif_common, struct blkif_common_request,
+ struct blkif_response);
+DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
+ struct blkif_response QEMU_PACKED);
+DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
+ struct blkif_response);
union blkif_back_rings {
blkif_back_ring_t native;
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -609,30 +609,30 @@ static int blk_send_response_one(struct
struct XenBlkDev *blkdev = ioreq->blkdev;
int send_notify = 0;
int have_requests = 0;
- blkif_response_t resp;
- void *dst;
-
- resp.id = ioreq->req.id;
- resp.operation = ioreq->req.operation;
- resp.status = ioreq->status;
+ blkif_response_t *resp;
/* Place on the response ring for the relevant domain. */
switch (blkdev->protocol) {
case BLKIF_PROTOCOL_NATIVE:
- dst = RING_GET_RESPONSE(&blkdev->rings.native, blkdev->rings.native.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.native,
+ blkdev->rings.native.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_32:
- dst = RING_GET_RESPONSE(&blkdev->rings.x86_32_part,
- blkdev->rings.x86_32_part.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.x86_32_part,
+ blkdev->rings.x86_32_part.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_64:
- dst = RING_GET_RESPONSE(&blkdev->rings.x86_64_part,
- blkdev->rings.x86_64_part.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.x86_64_part,
+ blkdev->rings.x86_64_part.rsp_prod_pvt);
break;
default:
- dst = NULL;
+ return 0;
}
- memcpy(dst, &resp, sizeof(resp));
+
+ resp->id = ioreq->req.id;
+ resp->operation = ioreq->req.operation;
+ resp->status = ioreq->status;
+
blkdev->rings.common.rsp_prod_pvt++;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blkdev->rings.common, send_notify);
[-- Attachment #7: xsa216-qemuu-4.7.patch --]
[-- Type: application/octet-stream, Size: 4431 bytes --]
From: Jan Beulich <jbeulich@suse.com>
Subject: xen/disk: don't leak stack data via response ring
Rather than constructing a local structure instance on the stack, fill
the fields directly on the shared ring, just like other (Linux)
backends do. Build on the fact that all response structure flavors are
actually identical (the old code did make this assumption too).
This is XSA-216.
Reported-by: Anthony Perard <anthony.perard@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -12,9 +12,6 @@
struct blkif_common_request {
char dummy;
};
-struct blkif_common_response {
- char dummy;
-};
/* i386 protocol version */
#pragma pack(push, 4)
@@ -26,13 +23,7 @@ struct blkif_x86_32_request {
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
-struct blkif_x86_32_response {
- uint64_t id; /* copied from request */
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
typedef struct blkif_x86_32_request blkif_x86_32_request_t;
-typedef struct blkif_x86_32_response blkif_x86_32_response_t;
#pragma pack(pop)
/* x86_64 protocol version */
@@ -44,17 +35,14 @@ struct blkif_x86_64_request {
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
-struct blkif_x86_64_response {
- uint64_t __attribute__((__aligned__(8))) id;
- uint8_t operation; /* copied from request */
- int16_t status; /* BLKIF_RSP_??? */
-};
typedef struct blkif_x86_64_request blkif_x86_64_request_t;
-typedef struct blkif_x86_64_response blkif_x86_64_response_t;
-DEFINE_RING_TYPES(blkif_common, struct blkif_common_request, struct blkif_common_response);
-DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request, struct blkif_x86_32_response);
-DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request, struct blkif_x86_64_response);
+DEFINE_RING_TYPES(blkif_common, struct blkif_common_request,
+ struct blkif_response);
+DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
+ struct blkif_response QEMU_PACKED);
+DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
+ struct blkif_response);
union blkif_back_rings {
blkif_back_ring_t native;
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -614,31 +614,30 @@ static int blk_send_response_one(struct
struct XenBlkDev *blkdev = ioreq->blkdev;
int send_notify = 0;
int have_requests = 0;
- blkif_response_t resp;
- void *dst;
-
- resp.id = ioreq->req.id;
- resp.operation = ioreq->req.operation;
- resp.status = ioreq->status;
+ blkif_response_t *resp;
/* Place on the response ring for the relevant domain. */
switch (blkdev->protocol) {
case BLKIF_PROTOCOL_NATIVE:
- dst = RING_GET_RESPONSE(&blkdev->rings.native, blkdev->rings.native.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.native,
+ blkdev->rings.native.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_32:
- dst = RING_GET_RESPONSE(&blkdev->rings.x86_32_part,
- blkdev->rings.x86_32_part.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.x86_32_part,
+ blkdev->rings.x86_32_part.rsp_prod_pvt);
break;
case BLKIF_PROTOCOL_X86_64:
- dst = RING_GET_RESPONSE(&blkdev->rings.x86_64_part,
- blkdev->rings.x86_64_part.rsp_prod_pvt);
+ resp = RING_GET_RESPONSE(&blkdev->rings.x86_64_part,
+ blkdev->rings.x86_64_part.rsp_prod_pvt);
break;
default:
- dst = NULL;
return 0;
}
- memcpy(dst, &resp, sizeof(resp));
+
+ resp->id = ioreq->req.id;
+ resp->operation = ioreq->req.operation;
+ resp->status = ioreq->status;
+
blkdev->rings.common.rsp_prod_pvt++;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blkdev->rings.common, send_notify);