Subject: Xen Security Advisory 155 (CVE-2015-8550) - paravirtualized drivers incautious about shared memory contents
From: Xen.org security team
Date: 2015-12-17 12:42 UTC
To: xen-announce, xen-devel, xen-users, oss-security
Cc: Xen.org security team

[-- Attachment #1: Type: text/plain, Size: 8058 bytes --]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

            Xen Security Advisory CVE-2015-8550 / XSA-155
                              version 5

    paravirtualized drivers incautious about shared memory contents

UPDATES IN VERSION 5
====================

Public release.

ISSUE DESCRIPTION
=================

Compiler optimizations in the PV backend drivers can give rise to
double fetch vulnerabilities: memory shared between the frontend and
the backend may be fetched twice (during which time the frontend can
alter the contents), possibly leading to arbitrary code execution in
the backend.
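
As an illustration of the pattern (a hypothetical sketch, not code
from any affected driver; READ_ONCE() is the Linux kernel helper used
by the patches below, and the struct and function names are made up):

    #include <string.h>

    struct req {                        /* lives in a shared ring page */
            unsigned int len;
            unsigned char data[64];
    };

    /* Vulnerable: the compiler may load req->len once for the bounds
     * check and again for the memcpy(); the frontend can change it in
     * between. */
    void handle(struct req *req, unsigned char *buf, size_t buflen)
    {
            if (req->len > buflen)                /* fetch #1: check */
                    return;
            memcpy(buf, req->data, req->len);     /* fetch #2: use   */
    }

    /* Fixed: snapshot the field once; check and use the same value. */
    void handle_fixed(struct req *req, unsigned char *buf, size_t buflen)
    {
            unsigned int len = READ_ONCE(req->len);

            if (len > buflen)
                    return;
            memcpy(buf, req->data, len);
    }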

IMPACT
======

Malicious guest administrators can cause denial of service.  If driver
domains are not in use, the impact can be a host crash or privilege
escalation.

VULNERABLE SYSTEMS
==================

Systems running PV or HVM guests are vulnerable.

ARM and x86 systems are vulnerable.

All OSes providing PV backends are susceptible; this includes
Linux and NetBSD.  Linux distributions compile kernels with
optimizations by default.

MITIGATION
==========

There is no mitigation.

CREDITS
=======

This issue was discovered by Felix Wilhelm of ERNW.

RESOLUTION
==========

Applying the appropriate attached patches should fix the problem for
PV backends.  Note that only PV backends are fixed; PV frontend
patches will be developed and released (publicly) after the embargo
date.

Please note that there is a bug in some versions of gcc
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145) which can cause
the construct used in RING_COPY_REQUEST() to be ineffective in some
circumstances.  We have determined that this is only the case when the
structure being copied consists purely of bitfields.  The Xen PV
protocols updated here do not use bitfields in this way and therefore
these patches are not subject to that bug.  However, authors of third
party PV protocols should take this into consideration.
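
For reference, a minimal sketch of the kind of layout that would be
affected (hypothetical; none of the protocols patched here look like
this):

    /* A request consisting purely of bitfields.  On a gcc affected by
     * bug 58145, the volatile copy performed by RING_COPY_REQUEST()
     * on such a struct may not be honoured as a single fetch. */
    struct bitfield_only_req {
            unsigned int op    : 4;
            unsigned int flags : 4;
            unsigned int len   : 24;
    };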

Linux v4.4:
xsa155-linux-xsa155-0001-xen-Add-RING_COPY_REQUEST.patch
xsa155-linux-xsa155-0002-xen-netback-don-t-use-last-request-to-determine-mini.patch
xsa155-linux-xsa155-0003-xen-netback-use-RING_COPY_REQUEST-throughout.patch
xsa155-linux-xsa155-0004-xen-blkback-only-read-request-operation-from-shared-.patch
xsa155-linux-xsa155-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
xsa155-linux-xsa155-0006-xen-scsiback-safely-copy-requests.patch
xsa155-linux-xsa155-0007-xen-pciback-Save-xen_pci_op-commands-before-processi.patch
Linux v4.0, v4.1, v4.2, v4.3:
All the above patches except #5 will apply; please use:
xsa155-linux43-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
Linux v3.19:
All the above patches except #5 and #6 will apply; please use:
xsa155-linux43-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
xsa155-linux319-xsa155-0006-xen-scsiback-safely-copy-requests.patch

qemu-xen:
xsa155-qemu-qdisk-double-access.patch
xsa155-qemu-xenfb.patch

qemu-traditional:
xsa155-qemut-qdisk-double-access.patch
xsa155-qemut-xenfb.patch

NetBSD 7.0:
xsa155-netbsd-xsa155-0001-netbsd-xen-Add-RING_COPY_REQUEST.patch
xsa155-netbsd-xsa155-0002-netbsd-netback-Use-RING_COPY_REQUEST-instead-of-RING.patch
xsa155-netbsd-xsa155-0003-netbsd-ring-Add-barrier-to-provide-an-compiler-barri.patch
xsa155-netbsd-xsa155-0004-netbsd-block-only-read-request-operation-from-shared.patch
xsa155-netbsd-xsa155-0005-netbsd-pciback-Operate-on-local-version-of-xen_pci_o.patch

xen:
xsa155-xen-0001-xen-Add-RING_COPY_REQUEST.patch
xsa155-xen-0002-blktap2-Use-RING_COPY_REQUEST.patch
xsa155-xen-0003-libvchan-Read-prod-cons-only-once.patch

xen 4.4:
All patches except #3 will apply; please use:
xsa155-xen44-0003-libvchan-Read-prod-cons-only-once.patch

$ sha256sum xsa155*
d9fbc104ab2ae797971e351ee0e04e7b7e9c7c33385309bb406c7941dc9a33b4  xsa155-linux319-xsa155-0006-xen-scsiback-safely-copy-requests.patch
590656d83ad7b6052b54659eccb3469658b3942c0dc1366423a66f2f5ac643e1  xsa155-linux43-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
2bd18632178e09394c5cd06aded2c14bcc6b6e360ad6e81827d24860fe3e8ca4  xsa155-linux-xsa155-0001-xen-Add-RING_COPY_REQUEST.patch
cecdeccb8e2551252c81fc5f164a8298005df714a574a7ba18b84e8ed5f2bb70  xsa155-linux-xsa155-0002-xen-netback-don-t-use-last-request-to-determine-mini.patch
3916b847243047f0e1053233ade742c14a7f29243584e60bf5db4842a8068855  xsa155-linux-xsa155-0003-xen-netback-use-RING_COPY_REQUEST-throughout.patch
746c8eb0aeb200d76156c88dfbbd49db79f567b88b07eda70f7c7d095721f05a  xsa155-linux-xsa155-0004-xen-blkback-only-read-request-operation-from-shared-.patch
18517a184a02f7441065b8d3423086320ec4c2345c00d551231f7976381767f5  xsa155-linux-xsa155-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
2e6d556d25b1cc16e71afde665ae3908f4fa8eab7e0d96283fc78400301baf92  xsa155-linux-xsa155-0006-xen-scsiback-safely-copy-requests.patch
5e130d8b61906015c6a94f8edd3cce97b172f96a265d97ecf370e7b45125b73d  xsa155-linux-xsa155-0007-xen-pciback-Save-xen_pci_op-commands-before-processi.patch
08c2d0f95dcc215165afbce623b6972b81dd45b091b5f40017579b00c8612e03  xsa155-netbsd-xsa155-0001-netbsd-xen-Add-RING_COPY_REQUEST.patch
0a66010f736092f91f70bb0fd220685e4395efef1db6d23a3d1eace31d144f51  xsa155-netbsd-xsa155-0002-netbsd-netback-Use-RING_COPY_REQUEST-instead-of-RING.patch
5e913a8427cab6b4d384d1246e05116afc301eb117edd838101eb53a82c2f2ff  xsa155-netbsd-xsa155-0003-netbsd-ring-Add-barrier-to-provide-an-compiler-barri.patch
3b8f14eafaed3a7bc66245753a37af4249acf8129fbedb70653192252dc47dc9  xsa155-netbsd-xsa155-0004-netbsd-block-only-read-request-operation-from-shared.patch
81ae5fa998243a78dad749fc561be647dc1dc1be799e8f18484fdf0989469705  xsa155-netbsd-xsa155-0005-netbsd-pciback-Operate-on-local-version-of-xen_pci_o.patch
044ff74fa048df820d528f64f2791ec9cb3940bd313c1179020bd49a6cde2ca3  xsa155-qemu-qdisk-double-access.patch
1150504589eb7bfa108c80ce63395e57d0e627b12d9201219d968fdd026919a6  xsa155-qemut-qdisk-double-access.patch
63186246ab6913b54bfef5f09f33e815935ac40ff821c27a3efda62339bbbd5f  xsa155-qemut-xenfb.patch
e53b4ac298648cde79344192d5a58ca8d8724344f5105bec7c09eef095c668f6  xsa155-qemu-xenfb.patch
e52467fcec73bcc86d3e96d06f8ca8085ae56a83d2c42a30c16bc3dc630d8f8a  xsa155-xen-0001-xen-Add-RING_COPY_REQUEST.patch
eae34c8ccc096ad93a74190506b3d55020a88afb0cc504a3a514590e9fd746fd  xsa155-xen-0002-blktap2-Use-RING_COPY_REQUEST.patch
42780265014085a4221ad32b026214693d751789eb5219e2e83862c0006c66f4  xsa155-xen-0003-libvchan-Read-prod-cons-only-once.patch
dfcaddb8a908a4fc1b048a43187e885117e67dc566f5c841037ee366dcd437d1  xsa155-xen44-0003-libvchan-Read-prod-cons-only-once.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJWcqy6AAoJEIP+FMlX6CvZeBQH/ReZbtQjtRmlvHyu72GPZfGm
fI3Ji5NMczuAu/2aopqOl+dUudO91lHEDmKNuBKHFAb2hOjTd003mCig0JP2D3js
0Ca8ab7VDgSlNKTl99XAizKFYMJEDRdAxYHktNj+1ok9381e7xquEJ77GfSk2S1e
gKDoSYkseSEcrThsgsohYiEvIe/odf8gn4gKq7CTK2sAf45wxWwP/QtgbAidJR3s
hQKuv++cyf11csSuVBX4cp0YN8lRWPmygD1si6D/y2TUvn3sAw2EzDkdSfryvtFV
/PJTtaQKtyvwOu3kJedguPL0yYmdAPQLAwYWum/NfSBB4g94ydxJ30amp3q37lY=
=9VP6
-----END PGP SIGNATURE-----

[-- Attachment #2: xsa155-linux319-xsa155-0006-xen-scsiback-safely-copy-requests.patch --]
[-- Type: application/octet-stream, Size: 1159 bytes --]

From f9c71e892d5142a314481df6baa26b34e6a6ba45 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 16 Nov 2015 18:02:32 +0000
Subject: [PATCH] xen-scsiback: safely copy requests

The copy of the ring request was lacking a following barrier(),
potentially allowing the compiler to optimize the copy away.

Use RING_COPY_REQUEST() to ensure the request is copied to local
memory.

This is XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: This is against v3.19
---
 drivers/xen/xen-scsiback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index e999496e..d86f6e1 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -734,7 +734,7 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info)
 		if (!pending_req)
 			return 1;
 
-		ring_req = RING_GET_REQUEST(ring, rc);
+		RING_COPY_REQUEST(ring, rc, &ring_req);
 		ring->req_cons = ++rc;
 
 		act = ring_req->act;
-- 
2.1.0


[-- Attachment #3: xsa155-linux43-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch --]
[-- Type: application/octet-stream, Size: 2339 bytes --]

From d52f00960c1070c683809faddd35a2223e2b8a6e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Date: Tue, 3 Nov 2015 16:40:43 +0000
Subject: [PATCH 6/7] xen-blkback: read from indirect descriptors only once
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Since indirect descriptors are in memory shared with the frontend, the
frontend could alter the first_sect and last_sect values after they have
been validated but before they are recorded in the request.  This may
result in I/O requests that overflow the foreign page, possibly
overwriting local pages when the I/O request is executed.

When parsing indirect descriptors, only read first_sect and last_sect
once.

This is part of XSA155.

CC: stable@vger.kernel.org
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: This is against v4.3
---
 drivers/block/xen-blkback/blkback.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6a685ae..f2e7a38 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -950,6 +950,8 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 		goto unmap;
 
 	for (n = 0, i = 0; n < nseg; n++) {
+		uint8_t first_sect, last_sect;
+
 		if ((n % SEGS_PER_INDIRECT_FRAME) == 0) {
 			/* Map indirect segments */
 			if (segments)
@@ -958,14 +960,14 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 		}
 		i = n % SEGS_PER_INDIRECT_FRAME;
 		pending_req->segments[n]->gref = segments[i].gref;
-		seg[n].nsec = segments[i].last_sect -
-			segments[i].first_sect + 1;
-		seg[n].offset = (segments[i].first_sect << 9);
-		if ((segments[i].last_sect >= (PAGE_SIZE >> 9)) ||
-		    (segments[i].last_sect < segments[i].first_sect)) {
+		first_sect = READ_ONCE(segments[i].first_sect);
+		last_sect = READ_ONCE(segments[i].last_sect);
+		if (last_sect >= (PAGE_SIZE >> 9) || last_sect < first_sect) {
 			rc = -EINVAL;
 			goto unmap;
 		}
+		seg[n].nsec = last_sect - first_sect + 1;
+		seg[n].offset = first_sect << 9;
 		preq->nr_sects += seg[n].nsec;
 	}
 
-- 
2.1.0


[-- Attachment #4: xsa155-linux-xsa155-0001-xen-Add-RING_COPY_REQUEST.patch --]
[-- Type: application/octet-stream, Size: 2057 bytes --]

From 4e2bc423e0cef0a42f93d989c0980301df1bd462 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 30 Oct 2015 14:58:08 +0000
Subject: [PATCH 1/7] xen: Add RING_COPY_REQUEST()

Using RING_GET_REQUEST() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected).  Safe usage of a request
generally requires taking a local copy.

Provide a RING_COPY_REQUEST() macro to use instead of
RING_GET_REQUEST() and an open-coded memcpy().  This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.

Use a volatile source to prevent the compiler from reordering or
omitting the copy.

This is part of XSA155.

CC: stable@vger.kernel.org
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: Update about GCC and bitfields.
---
 include/xen/interface/io/ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7d28aff..7dc685b 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -181,6 +181,20 @@ struct __name##_back_ring {						\
 #define RING_GET_REQUEST(_r, _idx)					\
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
 
+/*
+ * Get a local copy of a request.
+ *
+ * Use this in preference to RING_GET_REQUEST() so all processing is
+ * done on a local copy that cannot be modified by the other end.
+ *
+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+ * to be ineffective where _req is a struct which consists of only bitfields.
+ */
+#define RING_COPY_REQUEST(_r, _idx, _req) do {				\
+	/* Use volatile to force the copy into _req. */			\
+	*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx);	\
+} while (0)
+
 #define RING_GET_RESPONSE(_r, _idx)					\
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
 
-- 
2.1.0

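The conversion pattern that the following backend patches apply, shown
in isolation (a sketch; consume_one() and process_request() are
made-up names, the ring types are those of io/ring.h and io/blkif.h):

    /* Before the fix, a backend would typically do:
     *
     *     struct blkif_request *r =
     *             RING_GET_REQUEST(ring, ring->req_cons);
     *
     * leaving every r-> access a fresh fetch from frontend-writable
     * memory.  The fixed pattern snapshots the slot exactly once. */
    extern void process_request(struct blkif_request *req);

    static void consume_one(struct blkif_back_ring *ring)
    {
            struct blkif_request req;       /* private local copy */

            RING_COPY_REQUEST(ring, ring->req_cons, &req);
            ring->req_cons++;

            process_request(&req);          /* uses the copy only */
    }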

[-- Attachment #5: xsa155-linux-xsa155-0002-xen-netback-don-t-use-last-request-to-determine-mini.patch --]
[-- Type: application/octet-stream, Size: 1568 bytes --]

From 100ac372a0e07ccc8c508c3884fa9020cfe08094 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 30 Oct 2015 15:16:01 +0000
Subject: [PATCH 2/7] xen-netback: don't use last request to determine minimum
 Tx credit

The last request transmitted by the guest gives no indication about
the minimum amount of credit that the guest might need to send a
packet, since the last packet might have been a small one.

Instead, allow for the worst-case 128 KiB packet.

This is part of XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/net/xen-netback/netback.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e481f37..b683581 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -679,9 +679,7 @@ static void tx_add_credit(struct xenvif_queue *queue)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
-	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, queue->credit_bytes);
+	max_burst = max(131072UL, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
 	max_credit = queue->remaining_credit + queue->credit_bytes;
-- 
2.1.0


[-- Attachment #6: xsa155-linux-xsa155-0003-xen-netback-use-RING_COPY_REQUEST-throughout.patch --]
[-- Type: application/octet-stream, Size: 4383 bytes --]

From 4127e9ccae0eda622421d21132846abdf74f66ed Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 30 Oct 2015 15:17:06 +0000
Subject: [PATCH 3/7] xen-netback: use RING_COPY_REQUEST() throughout

Instead of open-coding memcpy()s and directly accessing Tx and Rx
requests, use the new RING_COPY_REQUEST() that ensures the local copy
is correct.

This is more than is strictly necessary for guest Rx requests since
only the id and gref fields are used and it is harmless if the
frontend modifies these.

This is part of XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/net/xen-netback/netback.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index b683581..1049c34 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -258,18 +258,18 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
-	struct xen_netif_rx_request *req;
+	struct xen_netif_rx_request req;
 
-	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
+	RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
 	meta->gso_size = 0;
 	meta->size = 0;
-	meta->id = req->id;
+	meta->id = req.id;
 
 	npo->copy_off = 0;
-	npo->copy_gref = req->gref;
+	npo->copy_gref = req.gref;
 
 	return meta;
 }
@@ -424,7 +424,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
 	int i;
-	struct xen_netif_rx_request *req;
+	struct xen_netif_rx_request req;
 	struct xenvif_rx_meta *meta;
 	unsigned char *data;
 	int head = 1;
@@ -443,15 +443,15 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
+		RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = skb_shinfo(skb)->gso_size;
 		meta->size = 0;
-		meta->id = req->id;
+		meta->id = req.id;
 	}
 
-	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
+	RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -463,9 +463,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	}
 
 	meta->size = 0;
-	meta->id = req->id;
+	meta->id = req.id;
 	npo->copy_off = 0;
-	npo->copy_gref = req->gref;
+	npo->copy_gref = req.gref;
 
 	data = skb->data;
 	while (data < skb_tail_pointer(skb)) {
@@ -709,7 +709,7 @@ static void xenvif_tx_err(struct xenvif_queue *queue,
 		spin_unlock_irqrestore(&queue->response_lock, flags);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&queue->tx, cons++);
+		RING_COPY_REQUEST(&queue->tx, cons++, txp);
 	} while (1);
 	queue->tx.req_cons = cons;
 }
@@ -776,8 +776,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
-		       sizeof(*txp));
+		RING_COPY_REQUEST(&queue->tx, cons + slots, txp);
 
 		/* If the guest submitted a frame >= 64 KiB then
 		 * first->size overflowed and following slots will
@@ -1110,8 +1109,7 @@ static int xenvif_get_extras(struct xenvif_queue *queue,
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
-		       sizeof(extra));
+		RING_COPY_REQUEST(&queue->tx, cons, &extra);
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
 			queue->tx.req_cons = ++cons;
@@ -1320,7 +1318,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
+		RING_COPY_REQUEST(&queue->tx, idx, &txreq);
 
 		/* Credit-based scheduling. */
 		if (txreq.size > queue->remaining_credit &&
-- 
2.1.0


[-- Attachment #7: xsa155-linux-xsa155-0004-xen-blkback-only-read-request-operation-from-shared-.patch --]
[-- Type: application/octet-stream, Size: 1925 bytes --]

From 084b8c2e77f1ac07e4a3a121ff957c49a9379385 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Date: Tue, 3 Nov 2015 16:34:09 +0000
Subject: [PATCH 4/7] xen-blkback: only read request operation from shared ring
 once
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

A compiler may load a switch statement value multiple times, which could
be bad when the value is in memory shared with the frontend.

When converting a non-native request to a native one, ensure that
src->operation is only loaded once by using READ_ONCE().

This is part of XSA155.

CC: stable@vger.kernel.org
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/block/xen-blkback/common.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 68e87a0..c929ae2 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -408,8 +408,8 @@ static inline void blkif_get_x86_32_req(struct blkif_request *dst,
 					struct blkif_x86_32_request *src)
 {
 	int i, n = BLKIF_MAX_SEGMENTS_PER_REQUEST, j;
-	dst->operation = src->operation;
-	switch (src->operation) {
+	dst->operation = READ_ONCE(src->operation);
+	switch (dst->operation) {
 	case BLKIF_OP_READ:
 	case BLKIF_OP_WRITE:
 	case BLKIF_OP_WRITE_BARRIER:
@@ -456,8 +456,8 @@ static inline void blkif_get_x86_64_req(struct blkif_request *dst,
 					struct blkif_x86_64_request *src)
 {
 	int i, n = BLKIF_MAX_SEGMENTS_PER_REQUEST, j;
-	dst->operation = src->operation;
-	switch (src->operation) {
+	dst->operation = READ_ONCE(src->operation);
+	switch (dst->operation) {
 	case BLKIF_OP_READ:
 	case BLKIF_OP_WRITE:
 	case BLKIF_OP_WRITE_BARRIER:
-- 
2.1.0

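The hazard closed by the two hunks above, sketched with hypothetical
names: a switch on a value living in shared memory may be compiled
into a bounds check plus an indirect jump, each re-loading the
selector:

    struct shared_op {                  /* resides in shared memory */
            unsigned int cmd;
    };

    extern int do_read(struct shared_op *op);
    extern int do_write(struct shared_op *op);

    static int dispatch(struct shared_op *src)
    {
            /* Fetch the selector exactly once; switching directly on
             * src->cmd would let the frontend change it between the
             * bounds check and the indirect jump. */
            unsigned int cmd = READ_ONCE(src->cmd);

            switch (cmd) {
            case 0:
                    return do_read(src);
            case 1:
                    return do_write(src);
            default:
                    return -1;
            }
    }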

[-- Attachment #8: xsa155-linux-xsa155-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch --]
[-- Type: application/octet-stream, Size: 2395 bytes --]

From 06ee7c7beb0b5245b1d879c9753faa2cf5ad9891 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Date: Tue, 3 Nov 2015 16:40:43 +0000
Subject: [PATCH 5/7] xen-blkback: read from indirect descriptors only once
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Since indirect descriptors are in memory shared with the frontend, the
frontend could alter the first_sect and last_sect values after they have
been validated but before they are recorded in the request.  This may
result in I/O requests that overflow the foreign page, possibly
overwriting local pages when the I/O request is executed.

When parsing indirect descriptors, only read first_sect and last_sect
once.

This is part of XSA155.

CC: stable@vger.kernel.org
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/block/xen-blkback/blkback.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index f909994..41fb1a9 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -950,6 +950,8 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 		goto unmap;
 
 	for (n = 0, i = 0; n < nseg; n++) {
+		uint8_t first_sect, last_sect;
+
 		if ((n % SEGS_PER_INDIRECT_FRAME) == 0) {
 			/* Map indirect segments */
 			if (segments)
@@ -957,15 +959,18 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 			segments = kmap_atomic(pages[n/SEGS_PER_INDIRECT_FRAME]->page);
 		}
 		i = n % SEGS_PER_INDIRECT_FRAME;
+
 		pending_req->segments[n]->gref = segments[i].gref;
-		seg[n].nsec = segments[i].last_sect -
-			segments[i].first_sect + 1;
-		seg[n].offset = (segments[i].first_sect << 9);
-		if ((segments[i].last_sect >= (XEN_PAGE_SIZE >> 9)) ||
-		    (segments[i].last_sect < segments[i].first_sect)) {
+
+		first_sect = READ_ONCE(segments[i].first_sect);
+		last_sect = READ_ONCE(segments[i].last_sect);
+		if (last_sect >= (XEN_PAGE_SIZE >> 9) || last_sect < first_sect) {
 			rc = -EINVAL;
 			goto unmap;
 		}
+
+		seg[n].nsec = last_sect - first_sect + 1;
+		seg[n].offset = first_sect << 9;
 		preq->nr_sects += seg[n].nsec;
 	}
 
-- 
2.1.0


[-- Attachment #9: xsa155-linux-xsa155-0006-xen-scsiback-safely-copy-requests.patch --]
[-- Type: application/octet-stream, Size: 1176 bytes --]

From 89739c14c72e5c1626a5cd5e09cbb2efeaadb6d8 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 16 Nov 2015 18:02:32 +0000
Subject: [PATCH 6/7] xen-scsiback: safely copy requests

The copy of the ring request was lacking a following barrier(),
potentially allowing the compiler to optimize the copy away.

Use RING_COPY_REQUEST() to ensure the request is copied to local
memory.

This is part of XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/xen-scsiback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 43bcae8..ad4eb10 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -726,7 +726,7 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info)
 		if (!pending_req)
 			return 1;
 
-		ring_req = *RING_GET_REQUEST(ring, rc);
+		RING_COPY_REQUEST(ring, rc, &ring_req);
 		ring->req_cons = ++rc;
 
 		err = prepare_pending_reqs(info, &ring_req, pending_req);
-- 
2.1.0


[-- Attachment #10: xsa155-linux-xsa155-0007-xen-pciback-Save-xen_pci_op-commands-before-processi.patch --]
[-- Type: application/octet-stream, Size: 2855 bytes --]

From f6f4388c917ce96b075a239a4535b8efc6064d14 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 16 Nov 2015 12:40:48 -0500
Subject: [PATCH 7/7] xen/pciback: Save xen_pci_op commands before processing
 it

Double fetch vulnerabilities happen when a variable is
fetched twice from shared memory but a security check is only
performed the first time.

The xen_pcibk_do_op function performs a switch statement on the
op->cmd value, which is stored in shared memory.  This can result
in a double fetch vulnerability depending on the compiler
optimizations performed.

This patch fixes it by saving the xen_pci_op command before
processing it.  We also use 'barrier' to make sure that the
compiler does not optimize accesses across it.

This is part of XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/xen-pciback/pciback.h     |  1 +
 drivers/xen/xen-pciback/pciback_ops.c | 15 ++++++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/xen-pciback/pciback.h b/drivers/xen/xen-pciback/pciback.h
index 58e38d5..4d529f3 100644
--- a/drivers/xen/xen-pciback/pciback.h
+++ b/drivers/xen/xen-pciback/pciback.h
@@ -37,6 +37,7 @@ struct xen_pcibk_device {
 	struct xen_pci_sharedinfo *sh_info;
 	unsigned long flags;
 	struct work_struct op_work;
+	struct xen_pci_op op;
 };
 
 struct xen_pcibk_dev_data {
diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
index c4a0666..a0e0e3e 100644
--- a/drivers/xen/xen-pciback/pciback_ops.c
+++ b/drivers/xen/xen-pciback/pciback_ops.c
@@ -298,9 +298,11 @@ void xen_pcibk_do_op(struct work_struct *data)
 		container_of(data, struct xen_pcibk_device, op_work);
 	struct pci_dev *dev;
 	struct xen_pcibk_dev_data *dev_data = NULL;
-	struct xen_pci_op *op = &pdev->sh_info->op;
+	struct xen_pci_op *op = &pdev->op;
 	int test_intx = 0;
 
+	*op = pdev->sh_info->op;
+	barrier();
 	dev = xen_pcibk_get_pci_dev(pdev, op->domain, op->bus, op->devfn);
 
 	if (dev == NULL)
@@ -342,6 +344,17 @@ void xen_pcibk_do_op(struct work_struct *data)
 		if ((dev_data->enable_intx != test_intx))
 			xen_pcibk_control_isr(dev, 0 /* no reset */);
 	}
+	pdev->sh_info->op.err = op->err;
+	pdev->sh_info->op.value = op->value;
+#ifdef CONFIG_PCI_MSI
+	if (op->cmd == XEN_PCI_OP_enable_msix && op->err == 0) {
+		unsigned int i;
+
+		for (i = 0; i < op->value; i++)
+			pdev->sh_info->op.msix_entries[i].vector =
+				op->msix_entries[i].vector;
+	}
+#endif
 	/* Tell the driver domain that we're done. */
 	wmb();
 	clear_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags);
-- 
2.1.0

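The discipline the pciback patch above applies, stated compactly (a
sketch; the op fields follow the patch, everything else is
hypothetical): copy the whole op in, process only the copy, and copy
out no more than the response fields:

    /* Copy in: one snapshot of the shared op, then a compiler barrier
     * so no later access is serviced from the shared page. */
    *op = pdev->sh_info->op;
    barrier();

    /* ... validate and process using only '*op' ... */

    /* Copy out: write back only the response fields the frontend is
     * entitled to see, never the whole op. */
    pdev->sh_info->op.err   = op->err;
    pdev->sh_info->op.value = op->value;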

[-- Attachment #11: xsa155-netbsd-xsa155-0001-netbsd-xen-Add-RING_COPY_REQUEST.patch --]
[-- Type: application/octet-stream, Size: 2150 bytes --]

From a56456ac3df28432fff44a9a9623e2ddfc826106 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 24 Nov 2015 02:51:56 +0000
Subject: [PATCH 1/5] netbsd/xen: Add RING_COPY_REQUEST()

Using RING_GET_REQUEST() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected).  Safe usage of a request
generally requires taking a local copy.

Provide a RING_COPY_REQUEST() macro to use instead of
RING_GET_REQUEST() and an open-coded memcpy().  This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.

Use a volatile source to prevent the compiler from reordering or
omitting the copy.

This is part of XSA155.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: Update comment about GCC bug.
---
 arch/xen/include/xen-public/io/ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/xen/include/xen-public/io/ring.h b/arch/xen/include/xen-public/io/ring.h
index 09c186c..630b80e 100644
--- a/arch/xen/include/xen-public/io/ring.h
+++ b/arch/xen/include/xen-public/io/ring.h
@@ -236,6 +236,20 @@ typedef struct __name##_back_ring __name##_back_ring_t
 #define RING_GET_REQUEST(_r, _idx)                                      \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
 
+/*
+ * Get a local copy of a request.
+ *
+ * Use this in preference to RING_GET_REQUEST() so all processing is
+ * done on a local copy that cannot be modified by the other end.
+ *
+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+ * to be ineffective where _req is a struct which consists of only bitfields.
+ */
+#define RING_COPY_REQUEST(_r, _idx, _req) do {				\
+	/* Use volatile to force the copy into _req. */			\
+	*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx);	\
+} while (0)
+
 #define RING_GET_RESPONSE(_r, _idx)                                     \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
 
-- 
2.5.2


[-- Attachment #12: xsa155-netbsd-xsa155-0002-netbsd-netback-Use-RING_COPY_REQUEST-instead-of-RING.patch --]
[-- Type: application/octet-stream, Size: 8263 bytes --]

From 1c697ca76a670b0883cd6a203828c33ccf4ecb1e Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 24 Nov 2015 02:53:16 +0000
Subject: [PATCH 2/5] netbsd/netback: Use RING_COPY_REQUEST instead of
 RING_GET_REQUEST

This way we operate on a local copy of the guest Rx request. This is
more than is strictly necessary, as only the id and gref fields are
used and it is harmless if the frontend modifies these.

For the TX we also copy the request and make sure to use only the
local copy.

This is based on the Linux 'xen-netback: use RING_COPY_REQUEST()
throughout' patch.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/xen/xen/xennetback_xenbus.c | 78 ++++++++++++++++++++--------------------
 1 file changed, 40 insertions(+), 38 deletions(-)

diff --git a/arch/xen/xen/xennetback_xenbus.c b/arch/xen/xen/xennetback_xenbus.c
index 7cc14af..0ef2353 100644
--- a/arch/xen/xen/xennetback_xenbus.c
+++ b/arch/xen/xen/xennetback_xenbus.c
@@ -715,7 +715,7 @@ xennetback_evthandler(void *arg)
 {
 	struct xnetback_instance *xneti = arg;
 	struct ifnet *ifp = &xneti->xni_if;
-	netif_tx_request_t *txreq;
+	netif_tx_request_t txreq;
 	struct xni_pkt *pkt;
 	vaddr_t pkt_va;
 	struct mbuf *m;
@@ -733,36 +733,36 @@ xennetback_evthandler(void *arg)
 		    receive_pending);
 		if (receive_pending == 0)
 			break;
-		txreq = RING_GET_REQUEST(&xneti->xni_txring, req_cons);
+		RING_COPY_REQUEST(&xneti->xni_txring, req_cons, &txreq);
 		xen_rmb();
 		XENPRINTF(("%s pkt size %d\n", xneti->xni_if.if_xname,
-		    txreq->size));
+		    txreq.size));
 		req_cons++;
 		if (__predict_false((ifp->if_flags & (IFF_UP | IFF_RUNNING)) !=
 		    (IFF_UP | IFF_RUNNING))) {
 			/* interface not up, drop */
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_DROPPED);
 			continue;
 		}
 		/*
 		 * Do some sanity checks, and map the packet's page.
 		 */
-		if (__predict_false(txreq->size < ETHER_HDR_LEN ||
-		   txreq->size > (ETHER_MAX_LEN - ETHER_CRC_LEN))) {
+		if (__predict_false(txreq.size < ETHER_HDR_LEN ||
+		   txreq.size > (ETHER_MAX_LEN - ETHER_CRC_LEN))) {
 			printf("%s: packet size %d too big\n",
-			    ifp->if_xname, txreq->size);
-			xennetback_tx_response(xneti, txreq->id,
+			    ifp->if_xname, txreq.size);
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_ERROR);
 			ifp->if_ierrors++;
 			continue;
 		}
 		/* don't cross page boundaries */
 		if (__predict_false(
-		    txreq->offset + txreq->size > PAGE_SIZE)) {
+		    txreq.offset + txreq.size > PAGE_SIZE)) {
 			printf("%s: packet cross page boundary\n",
 			    ifp->if_xname);
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_ERROR);
 			ifp->if_ierrors++;
 			continue;
@@ -774,15 +774,15 @@ xennetback_evthandler(void *arg)
 			if (ratecheck(&lasttime, &xni_pool_errintvl))
 				printf("%s: mbuf alloc failed\n",
 				    ifp->if_xname);
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_DROPPED);
 			ifp->if_ierrors++;
 			continue;
 		}
 
 		XENPRINTF(("%s pkt offset %d size %d id %d req_cons %d\n",
-		    xneti->xni_if.if_xname, txreq->offset,
-		    txreq->size, txreq->id, MASK_NETIF_TX_IDX(req_cons)));
+		    xneti->xni_if.if_xname, txreq.offset,
+		    txreq.size, txreq.id, MASK_NETIF_TX_IDX(req_cons)));
 		
 		pkt = pool_get(&xni_pkt_pool, PR_NOWAIT);
 		if (__predict_false(pkt == NULL)) {
@@ -790,16 +790,16 @@ xennetback_evthandler(void *arg)
 			if (ratecheck(&lasttime, &xni_pool_errintvl))
 				printf("%s: xnbpkt alloc failed\n",
 				    ifp->if_xname);
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_DROPPED);
 			ifp->if_ierrors++;
 			m_freem(m);
 			continue;
 		}
-		err = xen_shm_map(1, xneti->xni_domid, &txreq->gref, &pkt_va,
+		err = xen_shm_map(1, xneti->xni_domid, &txreq.gref, &pkt_va,
 		    &pkt->pkt_handle, XSHM_RO);
 		if (__predict_false(err == ENOMEM)) {
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_DROPPED);
 			ifp->if_ierrors++;
 			pool_put(&xni_pkt_pool, pkt);
@@ -810,7 +810,7 @@ xennetback_evthandler(void *arg)
 		if (__predict_false(err)) {
 			printf("%s: mapping foreing page failed: %d\n",
 			    xneti->xni_if.if_xname, err);
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_ERROR);
 			ifp->if_ierrors++;
 			pool_put(&xni_pkt_pool, pkt);
@@ -820,13 +820,13 @@ xennetback_evthandler(void *arg)
 
 		if ((ifp->if_flags & IFF_PROMISC) == 0) {
 			struct ether_header *eh =
-			    (void*)(pkt_va + txreq->offset);
+			    (void*)(pkt_va + txreq.offset);
 			if (ETHER_IS_MULTICAST(eh->ether_dhost) == 0 &&
 			    memcmp(CLLADDR(ifp->if_sadl), eh->ether_dhost,
 			    ETHER_ADDR_LEN) != 0) {
 				xni_pkt_unmap(pkt, pkt_va);
 				m_freem(m);
-				xennetback_tx_response(xneti, txreq->id,
+				xennetback_tx_response(xneti, txreq.id,
 				    NETIF_RSP_OKAY);
 				continue; /* packet is not for us */
 			}
@@ -845,31 +845,31 @@ so always copy for now.
 			 * ack it. Delaying it until the mbuf is
 			 * freed will stall transmit.
 			 */
-			m->m_len = min(MHLEN, txreq->size);
+			m->m_len = min(MHLEN, txreq.size);
 			m->m_pkthdr.len = 0;
-			m_copyback(m, 0, txreq->size,
-			    (void *)(pkt_va + txreq->offset));
+			m_copyback(m, 0, txreq.size,
+			    (void *)(pkt_va + txreq.offset));
 			xni_pkt_unmap(pkt, pkt_va);
-			if (m->m_pkthdr.len < txreq->size) {
+			if (m->m_pkthdr.len < txreq.size) {
 				ifp->if_ierrors++;
 				m_freem(m);
-				xennetback_tx_response(xneti, txreq->id,
+				xennetback_tx_response(xneti, txreq.id,
 				    NETIF_RSP_DROPPED);
 				continue;
 			}
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_OKAY);
 		} else {
 
-			pkt->pkt_id = txreq->id;
+			pkt->pkt_id = txreq.id;
 			pkt->pkt_xneti = xneti;
 
-			MEXTADD(m, pkt_va + txreq->offset,
-			    txreq->size, M_DEVBUF, xennetback_tx_free, pkt);
-			m->m_pkthdr.len = m->m_len = txreq->size;
+			MEXTADD(m, pkt_va + txreq.offset,
+			    txreq.size, M_DEVBUF, xennetback_tx_free, pkt);
+			m->m_pkthdr.len = m->m_len = txreq.size;
 			m->m_flags |= M_EXT_ROMAP;
 		}
-		if ((txreq->flags & NETTXF_csum_blank) != 0) {
+		if ((txreq.flags & NETTXF_csum_blank) != 0) {
 			xennet_checksum_fill(&m);
 			if (m == NULL) {
 				ifp->if_ierrors++;
@@ -953,6 +953,7 @@ xennetback_ifsoftstart_transfer(void *arg)
 	mmu_update_t *mmup;
 	multicall_entry_t *mclp;
 	netif_rx_response_t *rxresp;
+	netif_rx_request_t rxreq;
 	RING_IDX req_prod, resp_prod;
 	int do_event = 0;
 	gnttab_transfer_t *gop;
@@ -1028,10 +1029,10 @@ xennetback_ifsoftstart_transfer(void *arg)
 				nppitems++;
 			}
 			/* start filling ring */
-			gop->ref = RING_GET_REQUEST(&xneti->xni_rxring,
-			    xneti->xni_rxring.req_cons)->gref;
-			id = RING_GET_REQUEST(&xneti->xni_rxring,
-			    xneti->xni_rxring.req_cons)->id;
+			RING_COPY_REQUEST(&xneti->xni_rxring,
+			    xneti->xni_rxring.req_cons, &rxreq);
+			gop->ref = rxreq.gref;
+			id = rxreq.id;
 			xen_rmb();
 			xneti->xni_rxring.req_cons++;
 			rxresp = RING_GET_RESPONSE(&xneti->xni_rxring,
@@ -1198,6 +1199,7 @@ xennetback_ifsoftstart_copy(void *arg)
 	paddr_t xmit_ma;
 	int i, j;
 	netif_rx_response_t *rxresp;
+	netif_rx_request_t rxreq;
 	RING_IDX req_prod, resp_prod;
 	int do_event = 0;
 	gnttab_copy_t *gop;
@@ -1309,16 +1311,16 @@ xennetback_ifsoftstart_copy(void *arg)
 			gop->source.domid = DOMID_SELF;
 			gop->source.u.gmfn = xmit_ma >> PAGE_SHIFT;
 
-			gop->dest.u.ref = RING_GET_REQUEST(&xneti->xni_rxring,
-			    xneti->xni_rxring.req_cons)->gref;
+			RING_COPY_REQUEST(&xneti->xni_rxring,
+			    xneti->xni_rxring.req_cons, &rxreq);
+			gop->dest.u.ref = rxreq.gref;
 			gop->dest.offset = 0;
 			gop->dest.domid = xneti->xni_domid;
 
 			gop->len = m->m_pkthdr.len;
 			gop++;
 
-			id = RING_GET_REQUEST(&xneti->xni_rxring,
-			    xneti->xni_rxring.req_cons)->id;
+			id = rxreq.id;
 			xen_rmb();
 			xneti->xni_rxring.req_cons++;
 			rxresp = RING_GET_RESPONSE(&xneti->xni_rxring,
-- 
2.5.2


[-- Attachment #13: xsa155-netbsd-xsa155-0003-netbsd-ring-Add-barrier-to-provide-an-compiler-barri.patch --]
[-- Type: application/octet-stream, Size: 1272 bytes --]

From b367cdba0cc3e2de4237ca74f31043141deda892 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 24 Nov 2015 02:54:45 +0000
Subject: [PATCH 3/5] netbsd/ring: Add 'barrier' to provide a compiler
 barrier.

We need a mechanism to keep the compiler from optimizing too
aggressively. Using the 'barrier' macro will make the compiler not
optimize variables past the 'barrier' (as in, re-using registers
or reading only part of a value from memory).

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/xen/include/xen-public/io/ring.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/xen/include/xen-public/io/ring.h b/arch/xen/include/xen-public/io/ring.h
index 316bceb..5278d06 100644
--- a/arch/xen/include/xen-public/io/ring.h
+++ b/arch/xen/include/xen-public/io/ring.h
@@ -35,6 +35,7 @@
 #define xen_mb()  mb()
 #define xen_rmb() rmb()
 #define xen_wmb() wmb()
+#define barrier()     __asm__ __volatile__("": : :"memory")
 #endif
 #endif
 
@@ -42,6 +43,7 @@
 #define xen_mb()  x86_mfence()
 #define xen_rmb() x86_lfence()
 #define xen_wmb() x86_sfence()
+#define barrier()     __asm__ __volatile__("": : :"memory")
 #endif
 
 typedef unsigned int RING_IDX;
-- 
2.5.2


[-- Attachment #14: xsa155-netbsd-xsa155-0004-netbsd-block-only-read-request-operation-from-shared.patch --]
[-- Type: application/octet-stream, Size: 1182 bytes --]

From a0c5282aff51d5e6520caa904207b973567d920d Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 24 Nov 2015 02:56:07 +0000
Subject: [PATCH 4/5] netbsd/block: only read request operation from shared
 ring once

The compiler may load a switch statement's value multiple times from
the shared space. This could let the frontend manipulate the backend
into unforeseen branches.

We want to ensure that the req->operation is only read once, and we
do that by using a compiler barrier.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/xen/xen/xbdback_xenbus.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/xen/xen/xbdback_xenbus.c b/arch/xen/xen/xbdback_xenbus.c
index 9ee0758..3d18021 100644
--- a/arch/xen/xen/xbdback_xenbus.c
+++ b/arch/xen/xen/xbdback_xenbus.c
@@ -1022,6 +1022,7 @@ xbdback_co_main_loop(struct xbdback_instance *xbdi, void *obj)
 			req->sector_number = req64->sector_number;
 			break;
 		}
+		barrier();
 		XENPRINTF(("xbdback op %d req_cons 0x%x req_prod 0x%x "
 		    "resp_prod 0x%x id %" PRIu64 "\n", req->operation,
 			xbdi->xbdi_ring.ring_n.req_cons,
-- 
2.5.2


[-- Attachment #15: xsa155-netbsd-xsa155-0005-netbsd-pciback-Operate-on-local-version-of-xen_pci_o.patch --]
[-- Type: application/octet-stream, Size: 2391 bytes --]

From 3f39e051b234b4bd8e36b820a932591afd6413b1 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 24 Nov 2015 02:57:22 +0000
Subject: [PATCH 5/5] netbsd/pciback: Operate on local version of xen_pci_op

Double fetch vulnerabilities happen when a variable is
fetched twice from shared memory but a security check is only
performed the first time.

The pciback_xenbus_evthandler function performs a switch statement on
the op->size and op->cmd values, which are stored in shared memory.
This can result in a double fetch vulnerability depending on
the compiler optimizations performed.

This patch fixes it by saving the xen_pci_op command before
processing it. We also use 'barrier' to make sure that the
compiler does not optimize accesses across it.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/xen/xen/pciback.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/xen/xen/pciback.c b/arch/xen/xen/pciback.c
index 042c8c9..46c821c 100644
--- a/arch/xen/xen/pciback.c
+++ b/arch/xen/xen/pciback.c
@@ -188,6 +188,7 @@ struct pb_xenbus_instance {
 	/* communication with the domU */
         unsigned int pbx_evtchn; /* our even channel */
         struct xen_pci_sharedinfo *pbx_sh_info;
+        struct xen_pci_op op;
         grant_handle_t pbx_shinfo_handle; /* to unmap shared page */
 };
 
@@ -721,13 +722,16 @@ pciback_xenbus_evthandler(void * arg)
 {
 	struct pb_xenbus_instance *pbxi = arg;
 	struct pciback_pci_dev *pbd;
-	struct xen_pci_op *op = &pbxi->pbx_sh_info->op;
+	struct xen_pci_op *op = &pbxi->op;
 	u_int bus, dev, func;
 
 	hypervisor_clear_event(pbxi->pbx_evtchn);
 	if (xen_atomic_test_bit(&pbxi->pbx_sh_info->flags,
 	    _XEN_PCIF_active) == 0)
 		return 0;
+
+	memcpy(op, &pbxi->pbx_sh_info->op, sizeof (struct xen_pci_op));
+	barrier();
 	if (op->domain != 0) {
 		aprint_error("pciback: domain %d != 0", op->domain);
 		op->err = XEN_PCI_ERR_dev_not_found;
@@ -794,6 +798,8 @@ pciback_xenbus_evthandler(void * arg)
 		aprint_error("pciback: unknown cmd %d\n", op->cmd);
 		op->err = XEN_PCI_ERR_not_implemented;
 	}
+	pbxi->pbx_sh_info->op.value = op->value;
+	pbxi->pbx_sh_info->op.err = op->err;
 end:
 	xen_atomic_clear_bit(&pbxi->pbx_sh_info->flags, _XEN_PCIF_active);
 	hypervisor_notify_via_evtchn(pbxi->pbx_evtchn);
-- 
2.5.2


[-- Attachment #16: xsa155-qemu-qdisk-double-access.patch --]
[-- Type: application/octet-stream, Size: 1402 bytes --]

xen/blkif: Avoid double access to src->nr_segments

src is stored in shared memory and src->nr_segments is dereferenced
twice at the end of the function.  If a compiler decides to compile this
into two separate memory accesses then the size limitation could be
bypassed.

Fix it by removing the double access to src->nr_segments.

This is part of XSA-155.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
index 711b692..9e71e00 100644
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -85,8 +85,10 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
 		d->nr_sectors = s->nr_sectors;
 		return;
 	}
-	if (n > src->nr_segments)
-		n = src->nr_segments;
+	/* prevent the compiler from optimizing the code and using src->nr_segments instead */
+	barrier();
+	if (n > dst->nr_segments)
+		n = dst->nr_segments;
 	for (i = 0; i < n; i++)
 		dst->seg[i] = src->seg[i];
 }
@@ -106,8 +108,10 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
 		d->nr_sectors = s->nr_sectors;
 		return;
 	}
-	if (n > src->nr_segments)
-		n = src->nr_segments;
+	/* prevent the compiler from optimizing the code and using src->nr_segments instead */
+	barrier();
+	if (n > dst->nr_segments)
+		n = dst->nr_segments;
 	for (i = 0; i < n; i++)
 		dst->seg[i] = src->seg[i];
 }

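The clamp in the patch above is safe because the value tested is the
local copy, not the shared original (a sketch; as in the real
function, dst->nr_segments was copied from src earlier on):

    /* barrier() keeps the compiler from substituting the shared
     * src->nr_segments for the already-copied dst->nr_segments in
     * the bounds test below. */
    barrier();
    if (n > dst->nr_segments)
            n = dst->nr_segments;
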
[-- Attachment #17: xsa155-qemut-qdisk-double-access.patch --]
[-- Type: application/octet-stream, Size: 1804 bytes --]

From 27942b0cb2327e93deb12326bbe7b36c81f9fa7b Mon Sep 17 00:00:00 2001
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 20 Nov 2015 10:56:00 -0500
Subject: [PATCH] blkif: Avoid double access to src->nr_segments

src is stored in shared memory and src->nr_segments is dereferenced
twice at the end of the function.  If a compiler decides to compile this
into two separate memory accesses then the size limitation could be
bypassed.

Fix it by removing the double access to src->nr_segments.

This is part of XSA-155.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 hw/xen_blkif.h | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/xen_blkif.h b/hw/xen_blkif.h
index ca3a65b..eb29cb1 100644
--- a/hw/xen_blkif.h
+++ b/hw/xen_blkif.h
@@ -79,8 +79,10 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
-	if (n > src->nr_segments)
-		n = src->nr_segments;
+	/* prevent the compiler from optimizing the code and using src->nr_segments instead */
+	xen_mb();
+	if (n > dst->nr_segments)
+		n = dst->nr_segments;
 	for (i = 0; i < n; i++)
 		dst->seg[i] = src->seg[i];
 }
@@ -94,8 +96,10 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
-	if (n > src->nr_segments)
-		n = src->nr_segments;
+	/* prevent the compiler from optimizing the code and using src->nr_segments instead */
+	xen_mb();
+	if (n > dst->nr_segments)
+		n = dst->nr_segments;
 	for (i = 0; i < n; i++)
 		dst->seg[i] = src->seg[i];
 }
-- 
2.4.3


[-- Attachment #18: xsa155-qemut-xenfb.patch --]
[-- Type: application/octet-stream, Size: 1563 bytes --]

From 0ffd4547665d2fec648ab2c9ff856c5d9db9b07c Mon Sep 17 00:00:00 2001
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 20 Nov 2015 10:37:08 -0500
Subject: [PATCH 2/2] xenfb: avoid reading twice the same fields from the
 shared page

Reading the same field twice could give the guest a window of
opportunity for an attack. In the case of event->type, gcc could
compile the switch statement into a jump table, effectively ending up
reading the type field multiple times.

This is part of XSA-155.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 hw/xenfb.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/hw/xenfb.c b/hw/xenfb.c
index 75b2bc2..369d45d 100644
--- a/hw/xenfb.c
+++ b/hw/xenfb.c
@@ -827,18 +827,20 @@ static void xenfb_invalidate(void *opaque)
 
 static void xenfb_handle_events(struct XenFB *xenfb)
 {
-    uint32_t prod, cons;
+    uint32_t prod, cons, out_cons;
     struct xenfb_page *page = xenfb->c.page;
 
     prod = page->out_prod;
-    if (prod == page->out_cons)
+    out_cons = page->out_cons;
+    if (prod == out_cons)
 	return;
     xen_rmb();		/* ensure we see ring contents up to prod */
-    for (cons = page->out_cons; cons != prod; cons++) {
+    for (cons = out_cons; cons != prod; cons++) {
 	union xenfb_out_event *event = &XENFB_OUT_RING_REF(page, cons);
+        uint8_t type = event->type;
 	int x, y, w, h;
 
-	switch (event->type) {
+	switch (type) {
 	case XENFB_TYPE_UPDATE:
 	    if (xenfb->up_count == UP_QUEUE)
 		xenfb->up_fullscreen = 1;
-- 
2.1.0


[-- Attachment #19: xsa155-qemu-xenfb.patch --]
[-- Type: application/octet-stream, Size: 1313 bytes --]

xenfb: avoid reading twice the same fields from the shared page

Reading the same field twice could give the guest a window of
opportunity for an attack. In the case of event->type, gcc could
compile the switch statement into a jump table, effectively ending up
reading the type field multiple times.

This is part of XSA-155.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 5e324ef..4e2a27a 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -784,18 +784,20 @@ static void xenfb_invalidate(void *opaque)
 
 static void xenfb_handle_events(struct XenFB *xenfb)
 {
-    uint32_t prod, cons;
+    uint32_t prod, cons, out_cons;
     struct xenfb_page *page = xenfb->c.page;
 
     prod = page->out_prod;
-    if (prod == page->out_cons)
+    out_cons = page->out_cons;
+    if (prod == out_cons)
 	return;
     xen_rmb();		/* ensure we see ring contents up to prod */
-    for (cons = page->out_cons; cons != prod; cons++) {
+    for (cons = out_cons; cons != prod; cons++) {
 	union xenfb_out_event *event = &XENFB_OUT_RING_REF(page, cons);
+        uint8_t type = event->type;
 	int x, y, w, h;
 
-	switch (event->type) {
+	switch (type) {
 	case XENFB_TYPE_UPDATE:
 	    if (xenfb->up_count == UP_QUEUE)
 		xenfb->up_fullscreen = 1;

[-- Attachment #20: xsa155-xen-0001-xen-Add-RING_COPY_REQUEST.patch --]
[-- Type: application/octet-stream, Size: 2095 bytes --]

From 12b11658a9d6a654a1e7acbf2f2d56ce9a396c86 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 20 Nov 2015 11:59:05 -0500
Subject: [PATCH 1/3] xen: Add RING_COPY_REQUEST()

Using RING_GET_REQUEST() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected).  Safe usage of a request
generally requires taking a local copy.

Provide a RING_COPY_REQUEST() macro to use instead of
RING_GET_REQUEST() and an open-coded memcpy().  This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.

Use a volatile source to prevent the compiler from reordering or
omitting the copy.

This is part of XSA155.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: Add comment about GCC bug.
---
 xen/include/public/io/ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/include/public/io/ring.h b/xen/include/public/io/ring.h
index ba9401b..801c0da 100644
--- a/xen/include/public/io/ring.h
+++ b/xen/include/public/io/ring.h
@@ -212,6 +212,20 @@ typedef struct __name##_back_ring __name##_back_ring_t
 #define RING_GET_REQUEST(_r, _idx)                                      \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
 
+/*
+ * Get a local copy of a request.
+ *
+ * Use this in preference to RING_GET_REQUEST() so all processing is
+ * done on a local copy that cannot be modified by the other end.
+ *
+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+ * to be ineffective where _req is a struct which consists of only bitfields.
+ */
+#define RING_COPY_REQUEST(_r, _idx, _req) do {				\
+	/* Use volatile to force the copy into _req. */			\
+	*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx);	\
+} while (0)
+
 #define RING_GET_RESPONSE(_r, _idx)                                     \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
 
-- 
2.1.0


[-- Attachment #21: xsa155-xen-0002-blktap2-Use-RING_COPY_REQUEST.patch --]
[-- Type: application/octet-stream, Size: 2439 bytes --]

From 851ffb4eea917e2708c912291dea4d133026c0ac Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 20 Nov 2015 12:16:02 -0500
Subject: [PATCH 2/3] blktap2: Use RING_COPY_REQUEST

Instead of RING_GET_REQUEST. Using a local copy of the
ring request (and also with proper memory barriers) means
we do not have to worry about the compiler optimizing
the code and doing a double fetch from the shared memory space.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
v2: Fix compile issues with tapdisk-vbd
---
 tools/blktap2/drivers/block-log.c   | 3 ++-
 tools/blktap2/drivers/tapdisk-vbd.c | 8 ++++----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/tools/blktap2/drivers/block-log.c b/tools/blktap2/drivers/block-log.c
index 5330cdc..5f3bd35 100644
--- a/tools/blktap2/drivers/block-log.c
+++ b/tools/blktap2/drivers/block-log.c
@@ -494,11 +494,12 @@ static int ctl_kick(struct tdlog_state* s, int fd)
   reqstart = s->bring.req_cons;
   reqend = s->sring->req_prod;
 
+  xen_mb();
   BDPRINTF("ctl: ring kicked (start = %u, end = %u)", reqstart, reqend);
 
   while (reqstart != reqend) {
     /* XXX actually submit these! */
-    memcpy(&req, RING_GET_REQUEST(&s->bring, reqstart), sizeof(req));
+    RING_COPY_REQUEST(&s->bring, reqstart, &req);
     BDPRINTF("ctl: read request %"PRIu64":%u", req.sector, req.count);
     s->bring.req_cons = ++reqstart;
 
diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index 6d1d94a..89ef9ed 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1555,7 +1555,7 @@ tapdisk_vbd_pull_ring_requests(td_vbd_t *vbd)
 	int idx;
 	RING_IDX rp, rc;
 	td_ring_t *ring;
-	blkif_request_t *req;
+	blkif_request_t req;
 	td_vbd_request_t *vreq;
 
 	ring = &vbd->ring;
@@ -1566,16 +1566,16 @@ tapdisk_vbd_pull_ring_requests(td_vbd_t *vbd)
 	xen_rmb();
 
 	for (rc = ring->fe_ring.req_cons; rc != rp; rc++) {
-		req = RING_GET_REQUEST(&ring->fe_ring, rc);
+		RING_COPY_REQUEST(&ring->fe_ring, rc, &req);
 		++ring->fe_ring.req_cons;
 
-		idx  = req->id;
+		idx  = req.id;
 		vreq = &vbd->request_list[idx];
 
 		ASSERT(list_empty(&vreq->next));
 		ASSERT(vreq->secs_pending == 0);
 
-		memcpy(&vreq->req, req, sizeof(blkif_request_t));
+		memcpy(&vreq->req, &req, sizeof(blkif_request_t));
 		vbd->received++;
 		vreq->vbd = vbd;
 
-- 
2.1.4


[-- Attachment #22: xsa155-xen-0003-libvchan-Read-prod-cons-only-once.patch --]
[-- Type: application/octet-stream, Size: 1578 bytes --]

From c1fce65e2b720684ea6ba76ae59921542bd154bb Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 20 Nov 2015 12:22:14 -0500
Subject: [PATCH 3/3] libvchan: Read prod/cons only once.

We must ensure that the prod/cons indices are only read once and
that the compiler won't try to optimize the reads, that is, split
the read of these values into multiple instructions that influence
the later branch code. As such, insert barriers when fetching the
cons and prod index.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 tools/libvchan/io.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libvchan/io.c b/tools/libvchan/io.c
index 8a9629b..381cc05 100644
--- a/tools/libvchan/io.c
+++ b/tools/libvchan/io.c
@@ -117,6 +117,7 @@ static inline int send_notify(struct libxenvchan *ctrl, uint8_t bit)
 static inline int raw_get_data_ready(struct libxenvchan *ctrl)
 {
 	uint32_t ready = rd_prod(ctrl) - rd_cons(ctrl);
+	xen_mb(); /* Ensure 'ready' is read only once. */
 	if (ready > rd_ring_size(ctrl))
 		/* We have no way to return errors.  Locking up the ring is
 		 * better than the alternatives. */
@@ -158,6 +159,7 @@ int libxenvchan_data_ready(struct libxenvchan *ctrl)
 static inline int raw_get_buffer_space(struct libxenvchan *ctrl)
 {
 	uint32_t ready = wr_ring_size(ctrl) - (wr_prod(ctrl) - wr_cons(ctrl));
+	xen_mb(); /* Ensure 'ready' is read only once. */
 	if (ready > wr_ring_size(ctrl))
 		/* We have no way to return errors.  Locking up the ring is
 		 * better than the alternatives. */
-- 
2.1.0
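
The hazard these barriers close, in sketch form: rd_prod() and rd_cons()
expand to loads from the shared page, so without a barrier nothing stops the
compiler from re-deriving 'ready' from fresh loads at the point of use (a
simplified sketch; fail_ring() and consume() are hypothetical names):

    uint32_t ready = rd_prod(ctrl) - rd_cons(ctrl);
    /* Without xen_mb() here, the compiler may re-load prod/cons below. */
    if (ready > rd_ring_size(ctrl))
        fail_ring(ctrl);        /* check done against one pair of loads */
    consume(ctrl, ready);       /* use could see newer, guest-chosen
                                 * values; xen_mb() pins a single 'ready' */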


[-- Attachment #23: xsa155-xen44-0003-libvchan-Read-prod-cons-only-once.patch --]
[-- Type: application/octet-stream, Size: 1575 bytes --]

From ef86ad0b60fe179b1a6fa390e05c339fb44b9cc9 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 20 Nov 2015 12:22:14 -0500
Subject: [PATCH] libvchan: Read prod/cons only once.

We must ensure that the prod/cons indices are only read once and
that the compiler won't try to optimize the reads, that is, split
the read of these values into multiple instructions that influence
the later branch code. As such, insert barriers when fetching the
cons and prod index.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 tools/libvchan/io.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libvchan/io.c b/tools/libvchan/io.c
index 804c63c..8b33f40 100644
--- a/tools/libvchan/io.c
+++ b/tools/libvchan/io.c
@@ -118,6 +118,7 @@ static inline int send_notify(struct libxenvchan *ctrl, uint8_t bit)
 static inline int raw_get_data_ready(struct libxenvchan *ctrl)
 {
 	uint32_t ready = rd_prod(ctrl) - rd_cons(ctrl);
+	xen_mb(); /* Ensure 'ready' is read only once. */
 	if (ready >= rd_ring_size(ctrl))
 		/* We have no way to return errors.  Locking up the ring is
 		 * better than the alternatives. */
@@ -159,6 +160,7 @@ int libxenvchan_data_ready(struct libxenvchan *ctrl)
 static inline int raw_get_buffer_space(struct libxenvchan *ctrl)
 {
 	uint32_t ready = wr_ring_size(ctrl) - (wr_prod(ctrl) - wr_cons(ctrl));
+	xen_mb(); /* Ensure 'ready' is read only once. */
 	if (ready > wr_ring_size(ctrl))
 		/* We have no way to return errors.  Locking up the ring is
 		 * better than the alternatives. */
-- 
2.1.4


[-- Attachment #24: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 2+ messages in thread

* Xen Security Advisory 155 (CVE-2015-8550) - paravirtualized drivers incautious about shared memory contents
@ 2015-12-17 13:36 Xen.org security team
  0 siblings, 0 replies; 2+ messages in thread
From: Xen.org security team @ 2015-12-17 13:36 UTC (permalink / raw)
  To: xen-announce, xen-devel, xen-users, oss-security; +Cc: Xen.org security team

[-- Attachment #1: Type: text/plain, Size: 8106 bytes --]

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

            Xen Security Advisory CVE-2015-8550 / XSA-155
                              version 6

    paravirtualized drivers incautious about shared memory contents

UPDATES IN VERSION 6
====================

Correct CREDITS section.

ISSUE DESCRIPTION
=================

The compiler can apply optimizations to the PV backend drivers which
can lead to double fetch vulnerabilities. Specifically, the shared
memory between the frontend and backend can be fetched twice (during
which time the frontend can alter the contents), possibly leading to
arbitrary code execution in the backend.
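
In sketch form, the bug class looks like this (hypothetical backend code and
field names, not taken from any of the patched drivers):

    /* 'req' points into a ring page shared with the untrusted frontend. */
    uint8_t buf[64];

    if (req->size <= sizeof(buf))             /* fetch #1: the check passes */
        memcpy(buf, req->payload, req->size); /* fetch #2: the frontend may
                                               * have enlarged 'size' between
                                               * the two loads */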

IMPACT
======

Malicious guest administrators can cause denial of service.  If driver
domains are not in use, the impact can be a host crash or privilege
escalation.

VULNERABLE SYSTEMS
==================

Systems running PV or HVM guests are vulnerable.

ARM and x86 systems are vulnerable.

All OSes providing PV backends are susceptible; this includes
Linux and NetBSD. By default, Linux distributions compile kernels
with optimizations.

MITIGATION
==========

There is no mitigation.

CREDITS
=======

This issue was discovered by Felix Wilhelm (ERNW Research, KIT /
Operating Systems Group).

RESOLUTION
==========

Applying the appropriate attached patches should fix the problem for
PV backends.  Note that only PV backends are fixed; PV frontend
patches will be developed and released (publicly) after the embargo
date.

Please note that there is a bug in some versions of gcc,
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145, which can cause
the construct used in RING_COPY_REQUEST() to be ineffective in some
circumstances. We have determined that this is only the case when the
structure being copied consists purely of bitfields. The Xen PV
protocols updated here do not use bitfields in this way, and therefore
these patches are not subject to that bug. However, authors of
third-party PV protocols should take this into consideration.
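
For illustration, the kind of layout the gcc bug affects (a hypothetical
struct; as stated above, none of the Xen PV protocols updated here looks
like this):

    struct all_bitfields_req {
        unsigned int op  : 4;     /* every member is a bitfield, so gcc */
        unsigned int len : 28;    /* bug 58145 may lose the volatile */
    };                            /* qualifier when copying the struct */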

Linux v4.4:
xsa155-linux-xsa155-0001-xen-Add-RING_COPY_REQUEST.patch
xsa155-linux-xsa155-0002-xen-netback-don-t-use-last-request-to-determine-mini.patch
xsa155-linux-xsa155-0003-xen-netback-use-RING_COPY_REQUEST-throughout.patch
xsa155-linux-xsa155-0004-xen-blkback-only-read-request-operation-from-shared-.patch
xsa155-linux-xsa155-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
xsa155-linux-xsa155-0006-xen-scsiback-safely-copy-requests.patch
xsa155-linux-xsa155-0007-xen-pciback-Save-xen_pci_op-commands-before-processi.patch
Linux v4.[0,1,2,3]
All the above patches except #5 will apply, please use:
xsa155-linux43-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
Linux v3.19:
All the above patches except #5 and #6 will apply, please use:
xsa155-linux43-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
xsa155-linux319-0006-xen-scsiback-safely-copy-requests.patch

qemu-xen:
xsa155-qemu-qdisk-double-access.patch
xsa155-qemu-xenfb.patch

qemu-traditional:
xsa155-qemut-qdisk-double-access.patch
xsa155-qemut-xenfb.patch

NetBSD 7.0:
xsa155-netbsd-xsa155-0001-netbsd-xen-Add-RING_COPY_REQUEST.patch
xsa155-netbsd-xsa155-0002-netbsd-netback-Use-RING_COPY_REQUEST-instead-of-RING.patch
xsa155-netbsd-xsa155-0003-netbsd-ring-Add-barrier-to-provide-an-compiler-barri.patch
xsa155-netbsd-xsa155-0004-netbsd-block-only-read-request-operation-from-shared.patch
xsa155-netbsd-xsa155-0005-netbsd-pciback-Operate-on-local-version-of-xen_pci_o.patch

xen:
xsa155-xen-0001-xen-Add-RING_COPY_REQUEST.patch
xsa155-xen-0002-blktap2-Use-RING_COPY_REQUEST.patch
xsa155-xen-0003-libvchan-Read-prod-cons-only-once.patch

xen 4.4:
All patches except #3 will apply, please use:
xsa155-xen44-0003-libvchan-Read-prod-cons-only-once.patch

$ sha256sum xsa155*
d9fbc104ab2ae797971e351ee0e04e7b7e9c7c33385309bb406c7941dc9a33b4  xsa155-linux319-xsa155-0006-xen-scsiback-safely-copy-requests.patch
590656d83ad7b6052b54659eccb3469658b3942c0dc1366423a66f2f5ac643e1  xsa155-linux43-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
2bd18632178e09394c5cd06aded2c14bcc6b6e360ad6e81827d24860fe3e8ca4  xsa155-linux-xsa155-0001-xen-Add-RING_COPY_REQUEST.patch
cecdeccb8e2551252c81fc5f164a8298005df714a574a7ba18b84e8ed5f2bb70  xsa155-linux-xsa155-0002-xen-netback-don-t-use-last-request-to-determine-mini.patch
3916b847243047f0e1053233ade742c14a7f29243584e60bf5db4842a8068855  xsa155-linux-xsa155-0003-xen-netback-use-RING_COPY_REQUEST-throughout.patch
746c8eb0aeb200d76156c88dfbbd49db79f567b88b07eda70f7c7d095721f05a  xsa155-linux-xsa155-0004-xen-blkback-only-read-request-operation-from-shared-.patch
18517a184a02f7441065b8d3423086320ec4c2345c00d551231f7976381767f5  xsa155-linux-xsa155-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch
2e6d556d25b1cc16e71afde665ae3908f4fa8eab7e0d96283fc78400301baf92  xsa155-linux-xsa155-0006-xen-scsiback-safely-copy-requests.patch
5e130d8b61906015c6a94f8edd3cce97b172f96a265d97ecf370e7b45125b73d  xsa155-linux-xsa155-0007-xen-pciback-Save-xen_pci_op-commands-before-processi.patch
08c2d0f95dcc215165afbce623b6972b81dd45b091b5f40017579b00c8612e03  xsa155-netbsd-xsa155-0001-netbsd-xen-Add-RING_COPY_REQUEST.patch
0a66010f736092f91f70bb0fd220685e4395efef1db6d23a3d1eace31d144f51  xsa155-netbsd-xsa155-0002-netbsd-netback-Use-RING_COPY_REQUEST-instead-of-RING.patch
5e913a8427cab6b4d384d1246e05116afc301eb117edd838101eb53a82c2f2ff  xsa155-netbsd-xsa155-0003-netbsd-ring-Add-barrier-to-provide-an-compiler-barri.patch
3b8f14eafaed3a7bc66245753a37af4249acf8129fbedb70653192252dc47dc9  xsa155-netbsd-xsa155-0004-netbsd-block-only-read-request-operation-from-shared.patch
81ae5fa998243a78dad749fc561be647dc1dc1be799e8f18484fdf0989469705  xsa155-netbsd-xsa155-0005-netbsd-pciback-Operate-on-local-version-of-xen_pci_o.patch
044ff74fa048df820d528f64f2791ec9cb3940bd313c1179020bd49a6cde2ca3  xsa155-qemu-qdisk-double-access.patch
1150504589eb7bfa108c80ce63395e57d0e627b12d9201219d968fdd026919a6  xsa155-qemut-qdisk-double-access.patch
63186246ab6913b54bfef5f09f33e815935ac40ff821c27a3efda62339bbbd5f  xsa155-qemut-xenfb.patch
e53b4ac298648cde79344192d5a58ca8d8724344f5105bec7c09eef095c668f6  xsa155-qemu-xenfb.patch
e52467fcec73bcc86d3e96d06f8ca8085ae56a83d2c42a30c16bc3dc630d8f8a  xsa155-xen-0001-xen-Add-RING_COPY_REQUEST.patch
eae34c8ccc096ad93a74190506b3d55020a88afb0cc504a3a514590e9fd746fd  xsa155-xen-0002-blktap2-Use-RING_COPY_REQUEST.patch
42780265014085a4221ad32b026214693d751789eb5219e2e83862c0006c66f4  xsa155-xen-0003-libvchan-Read-prod-cons-only-once.patch
dfcaddb8a908a4fc1b048a43187e885117e67dc566f5c841037ee366dcd437d1  xsa155-xen44-0003-libvchan-Read-prod-cons-only-once.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJWcrpdAAoJEIP+FMlX6CvZ9soIALqQ/GHP6bZn2LqJTD9DIzsm
zVB4yCPiVfDqHSOq9QNCzBzqpvOX+RhKTzRH1jsZczr8CSnkePxaCrmZgH8SAygB
hFcF9xJGlJDjs647sgpQmYs++3mgD/57uml7IW/8NX46tXUelVByW7muNgUN2xlm
kjeD8auJEs+jK1iwpt/hOmYe4moRx3+3ujfgqMCNAWtqZz9D9wM5tao+p6yKYlhM
u8hSi1V3b7sAbf92mwzpzfpbwdgg25xeHtZ/oJxp/ZY0FhqDEsTxV+h8HjD/Eink
GwqPS19O77tMmz9fUUTyJDSsU7ayFRI0HyYmXju4eJktJkhXagjAdCSyGky9z5g=
=FlX2
-----END PGP SIGNATURE-----

[-- Attachment #2: xsa155-linux319-xsa155-0006-xen-scsiback-safely-copy-requests.patch --]
[-- Type: application/octet-stream, Size: 1159 bytes --]

From f9c71e892d5142a314481df6baa26b34e6a6ba45 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 16 Nov 2015 18:02:32 +0000
Subject: [PATCH] xen-scsiback: safely copy requests

The copy of the ring request was lacking a following barrier(),
potentially allowing the compiler to optimize the copy away.

Use RING_COPY_REQUEST() to ensure the request is copied to local
memory.

This is XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: This is against v3.19
---
 drivers/xen/xen-scsiback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index e999496e..d86f6e1 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -734,7 +734,7 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info)
 		if (!pending_req)
 			return 1;
 
-		ring_req = RING_GET_REQUEST(ring, rc);
+		RING_COPY_REQUEST(ring, rc, &ring_req);
 		ring->req_cons = ++rc;
 
 		act = ring_req->act;
-- 
2.1.0
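
The failure mode named above, sketched (simplified from the driver): a plain
structure assignment from the shared slot carries no volatile qualifier, so
the compiler may elide the copy and instead read the fields through the
shared pointer again at each later use:

    struct vscsiif_request ring_req;
    uint8_t act;

    ring_req = *RING_GET_REQUEST(ring, rc); /* copy may be optimized away... */
    act = ring_req.act;                     /* ...letting this compile into a
                                             * second load from the shared
                                             * page */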


[-- Attachment #3: xsa155-linux43-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch --]
[-- Type: application/octet-stream, Size: 2339 bytes --]

From d52f00960c1070c683809faddd35a2223e2b8a6e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Date: Tue, 3 Nov 2015 16:40:43 +0000
Subject: [PATCH 6/7] xen-blkback: read from indirect descriptors only once
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Since indirect descriptors are in memory shared with the frontend, the
frontend could alter the first_sect and last_sect values after they have
been validated but before they are recorded in the request.  This may
result in I/O requests that overflow the foreign page, possibly
overwriting local pages when the I/O request is executed.

When parsing indirect descriptors, only read first_sect and last_sect
once.

This is part of XSA155.

CC: stable@vger.kernel.org
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: This is against v4.3
---
 drivers/block/xen-blkback/blkback.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6a685ae..f2e7a38 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -950,6 +950,8 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 		goto unmap;
 
 	for (n = 0, i = 0; n < nseg; n++) {
+		uint8_t first_sect, last_sect;
+
 		if ((n % SEGS_PER_INDIRECT_FRAME) == 0) {
 			/* Map indirect segments */
 			if (segments)
@@ -958,14 +960,14 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 		}
 		i = n % SEGS_PER_INDIRECT_FRAME;
 		pending_req->segments[n]->gref = segments[i].gref;
-		seg[n].nsec = segments[i].last_sect -
-			segments[i].first_sect + 1;
-		seg[n].offset = (segments[i].first_sect << 9);
-		if ((segments[i].last_sect >= (PAGE_SIZE >> 9)) ||
-		    (segments[i].last_sect < segments[i].first_sect)) {
+		first_sect = READ_ONCE(segments[i].first_sect);
+		last_sect = READ_ONCE(segments[i].last_sect);
+		if (last_sect >= (PAGE_SIZE >> 9) || last_sect < first_sect) {
 			rc = -EINVAL;
 			goto unmap;
 		}
+		seg[n].nsec = last_sect - first_sect + 1;
+		seg[n].offset = first_sect << 9;
 		preq->nr_sects += seg[n].nsec;
 	}
 
-- 
2.1.0


[-- Attachment #4: xsa155-linux-xsa155-0001-xen-Add-RING_COPY_REQUEST.patch --]
[-- Type: application/octet-stream, Size: 2057 bytes --]

From 4e2bc423e0cef0a42f93d989c0980301df1bd462 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 30 Oct 2015 14:58:08 +0000
Subject: [PATCH 1/7] xen: Add RING_COPY_REQUEST()

RING_GET_REQUEST() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected).  Safe usage of a request
generally requires taking a local copy.

Provide a RING_COPY_REQUEST() macro to use instead of
RING_GET_REQUEST() and an open-coded memcpy().  This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.

Use a volatile source to prevent the compiler from reordering or
omitting the copy.

This is part of XSA155.

CC: stable@vger.kernel.org
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: Update about GCC and bitfields.
---
 include/xen/interface/io/ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7d28aff..7dc685b 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -181,6 +181,20 @@ struct __name##_back_ring {						\
 #define RING_GET_REQUEST(_r, _idx)					\
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
 
+/*
+ * Get a local copy of a request.
+ *
+ * Use this in preference to RING_GET_REQUEST() so all processing is
+ * done on a local copy that cannot be modified by the other end.
+ *
+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+ * to be ineffective where _req is a struct which consists of only bitfields.
+ */
+#define RING_COPY_REQUEST(_r, _idx, _req) do {				\
+	/* Use volatile to force the copy into _req. */			\
+	*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx);	\
+} while (0)
+
 #define RING_GET_RESPONSE(_r, _idx)					\
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
 
-- 
2.1.0
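
The difference from the open-coded memcpy() it replaces, sketched with names
from the netback patch later in this series: memcpy() from a non-volatile
source is an ordinary copy the compiler may inline and optimize across, while
the volatile source in RING_COPY_REQUEST() pins the loads to this point:

    /* Before: nothing marks the source as externally modifiable. */
    memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));

    /* After: the volatile source forces the shared slot to be read
     * here, once, with the result really landing in 'txreq'. */
    RING_COPY_REQUEST(&queue->tx, idx, &txreq);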


[-- Attachment #5: xsa155-linux-xsa155-0002-xen-netback-don-t-use-last-request-to-determine-mini.patch --]
[-- Type: application/octet-stream, Size: 1568 bytes --]

From 100ac372a0e07ccc8c508c3884fa9020cfe08094 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 30 Oct 2015 15:16:01 +0000
Subject: [PATCH 2/7] xen-netback: don't use last request to determine minimum
 Tx credit

The last from guest transmitted request gives no indication about the
minimum amount of credit that the guest might need to send a packet
since the last packet might have been a small one.

Instead allow for the worst case 128 KiB packet.

This is part of XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/net/xen-netback/netback.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e481f37..b683581 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -679,9 +679,7 @@ static void tx_add_credit(struct xenvif_queue *queue)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
-	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, queue->credit_bytes);
+	max_burst = max(131072UL, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
 	max_credit = queue->remaining_credit + queue->credit_bytes;
-- 
2.1.0


[-- Attachment #6: xsa155-linux-xsa155-0003-xen-netback-use-RING_COPY_REQUEST-throughout.patch --]
[-- Type: application/octet-stream, Size: 4383 bytes --]

From 4127e9ccae0eda622421d21132846abdf74f66ed Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 30 Oct 2015 15:17:06 +0000
Subject: [PATCH 3/7] xen-netback: use RING_COPY_REQUEST() throughout

Instead of open-coding memcpy()s and directly accessing Tx and Rx
requests, use the new RING_COPY_REQUEST() that ensures the local copy
is correct.

This is more than is strictly necessary for guest Rx requests since
only the id and gref fields are used and it is harmless if the
frontend modifies these.

This is part of XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/net/xen-netback/netback.c | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index b683581..1049c34 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -258,18 +258,18 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
-	struct xen_netif_rx_request *req;
+	struct xen_netif_rx_request req;
 
-	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
+	RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
 	meta->gso_size = 0;
 	meta->size = 0;
-	meta->id = req->id;
+	meta->id = req.id;
 
 	npo->copy_off = 0;
-	npo->copy_gref = req->gref;
+	npo->copy_gref = req.gref;
 
 	return meta;
 }
@@ -424,7 +424,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
 	int i;
-	struct xen_netif_rx_request *req;
+	struct xen_netif_rx_request req;
 	struct xenvif_rx_meta *meta;
 	unsigned char *data;
 	int head = 1;
@@ -443,15 +443,15 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
+		RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = skb_shinfo(skb)->gso_size;
 		meta->size = 0;
-		meta->id = req->id;
+		meta->id = req.id;
 	}
 
-	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
+	RING_COPY_REQUEST(&queue->rx, queue->rx.req_cons++, &req);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -463,9 +463,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	}
 
 	meta->size = 0;
-	meta->id = req->id;
+	meta->id = req.id;
 	npo->copy_off = 0;
-	npo->copy_gref = req->gref;
+	npo->copy_gref = req.gref;
 
 	data = skb->data;
 	while (data < skb_tail_pointer(skb)) {
@@ -709,7 +709,7 @@ static void xenvif_tx_err(struct xenvif_queue *queue,
 		spin_unlock_irqrestore(&queue->response_lock, flags);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&queue->tx, cons++);
+		RING_COPY_REQUEST(&queue->tx, cons++, txp);
 	} while (1);
 	queue->tx.req_cons = cons;
 }
@@ -776,8 +776,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
-		       sizeof(*txp));
+		RING_COPY_REQUEST(&queue->tx, cons + slots, txp);
 
 		/* If the guest submitted a frame >= 64 KiB then
 		 * first->size overflowed and following slots will
@@ -1110,8 +1109,7 @@ static int xenvif_get_extras(struct xenvif_queue *queue,
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
-		       sizeof(extra));
+		RING_COPY_REQUEST(&queue->tx, cons, &extra);
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
 			queue->tx.req_cons = ++cons;
@@ -1320,7 +1318,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
+		RING_COPY_REQUEST(&queue->tx, idx, &txreq);
 
 		/* Credit-based scheduling. */
 		if (txreq.size > queue->remaining_credit &&
-- 
2.1.0


[-- Attachment #7: xsa155-linux-xsa155-0004-xen-blkback-only-read-request-operation-from-shared-.patch --]
[-- Type: application/octet-stream, Size: 1925 bytes --]

From 084b8c2e77f1ac07e4a3a121ff957c49a9379385 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Date: Tue, 3 Nov 2015 16:34:09 +0000
Subject: [PATCH 4/7] xen-blkback: only read request operation from shared ring
 once
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

A compiler may load a switch statement's controlling value multiple
times, which could be bad when the value is in memory shared with the
frontend.

When converting a non-native request to a native one, ensure that
src->operation is only loaded once by using READ_ONCE().

This is part of XSA155.

CC: stable@vger.kernel.org
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/block/xen-blkback/common.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 68e87a0..c929ae2 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -408,8 +408,8 @@ static inline void blkif_get_x86_32_req(struct blkif_request *dst,
 					struct blkif_x86_32_request *src)
 {
 	int i, n = BLKIF_MAX_SEGMENTS_PER_REQUEST, j;
-	dst->operation = src->operation;
-	switch (src->operation) {
+	dst->operation = READ_ONCE(src->operation);
+	switch (dst->operation) {
 	case BLKIF_OP_READ:
 	case BLKIF_OP_WRITE:
 	case BLKIF_OP_WRITE_BARRIER:
@@ -456,8 +456,8 @@ static inline void blkif_get_x86_64_req(struct blkif_request *dst,
 					struct blkif_x86_64_request *src)
 {
 	int i, n = BLKIF_MAX_SEGMENTS_PER_REQUEST, j;
-	dst->operation = src->operation;
-	switch (src->operation) {
+	dst->operation = READ_ONCE(src->operation);
+	switch (dst->operation) {
 	case BLKIF_OP_READ:
 	case BLKIF_OP_WRITE:
 	case BLKIF_OP_WRITE_BARRIER:
-- 
2.1.0
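
The jump-table hazard, sketched (handler names are hypothetical): with the
operand left in shared memory, the generated code may load it once for the
jump table's bounds check and again for the indexed jump, and the frontend
can change it in between. READ_ONCE() is a volatile load, so the operand is
fetched exactly once:

    uint8_t op = READ_ONCE(src->operation);  /* single volatile load */

    switch (op) {                  /* bounds check and indexed jump both
                                    * use the local value */
    case BLKIF_OP_READ:
    case BLKIF_OP_WRITE:
        handle_io(op);             /* hypothetical helper */
        break;
    default:
        handle_unknown(op);        /* hypothetical helper */
        break;
    }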


[-- Attachment #8: xsa155-linux-xsa155-0005-xen-blkback-read-from-indirect-descriptors-only-once.patch --]
[-- Type: application/octet-stream, Size: 2395 bytes --]

From 06ee7c7beb0b5245b1d879c9753faa2cf5ad9891 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Date: Tue, 3 Nov 2015 16:40:43 +0000
Subject: [PATCH 5/7] xen-blkback: read from indirect descriptors only once
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Since indirect descriptors are in memory shared with the frontend, the
frontend could alter the first_sect and last_sect values after they have
been validated but before they are recorded in the request.  This may
result in I/O requests that overflow the foreign page, possibly
overwriting local pages when the I/O request is executed.

When parsing indirect descriptors, only read first_sect and last_sect
once.

This is part of XSA155.

CC: stable@vger.kernel.org
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/block/xen-blkback/blkback.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index f909994..41fb1a9 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -950,6 +950,8 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 		goto unmap;
 
 	for (n = 0, i = 0; n < nseg; n++) {
+		uint8_t first_sect, last_sect;
+
 		if ((n % SEGS_PER_INDIRECT_FRAME) == 0) {
 			/* Map indirect segments */
 			if (segments)
@@ -957,15 +959,18 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 			segments = kmap_atomic(pages[n/SEGS_PER_INDIRECT_FRAME]->page);
 		}
 		i = n % SEGS_PER_INDIRECT_FRAME;
+
 		pending_req->segments[n]->gref = segments[i].gref;
-		seg[n].nsec = segments[i].last_sect -
-			segments[i].first_sect + 1;
-		seg[n].offset = (segments[i].first_sect << 9);
-		if ((segments[i].last_sect >= (XEN_PAGE_SIZE >> 9)) ||
-		    (segments[i].last_sect < segments[i].first_sect)) {
+
+		first_sect = READ_ONCE(segments[i].first_sect);
+		last_sect = READ_ONCE(segments[i].last_sect);
+		if (last_sect >= (XEN_PAGE_SIZE >> 9) || last_sect < first_sect) {
 			rc = -EINVAL;
 			goto unmap;
 		}
+
+		seg[n].nsec = last_sect - first_sect + 1;
+		seg[n].offset = first_sect << 9;
 		preq->nr_sects += seg[n].nsec;
 	}
 
-- 
2.1.0
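
A worked instance of the race (numbers assume 4 KiB pages, so valid sector
offsets are 0..7): the frontend can arrange for the validation to read
first_sect = 0, last_sect = 7, which passes, while the reads that fill in
the segment see last_sect = 255, yielding nsec = 255 - 0 + 1 = 256 sectors,
i.e. 128 KiB of I/O against a 4 KiB grant. With both fields latched once via
READ_ONCE(), the checked and recorded values are necessarily the same.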


[-- Attachment #9: xsa155-linux-xsa155-0006-xen-scsiback-safely-copy-requests.patch --]
[-- Type: application/octet-stream, Size: 1176 bytes --]

From 89739c14c72e5c1626a5cd5e09cbb2efeaadb6d8 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 16 Nov 2015 18:02:32 +0000
Subject: [PATCH 6/7] xen-scsiback: safely copy requests

The copy of the ring request was lacking a following barrier(),
potentially allowing the compiler to optimize the copy away.

Use RING_COPY_REQUEST() to ensure the request is copied to local
memory.

This is part of XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/xen-scsiback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index 43bcae8..ad4eb10 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -726,7 +726,7 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info)
 		if (!pending_req)
 			return 1;
 
-		ring_req = *RING_GET_REQUEST(ring, rc);
+		RING_COPY_REQUEST(ring, rc, &ring_req);
 		ring->req_cons = ++rc;
 
 		err = prepare_pending_reqs(info, &ring_req, pending_req);
-- 
2.1.0


[-- Attachment #10: xsa155-linux-xsa155-0007-xen-pciback-Save-xen_pci_op-commands-before-processi.patch --]
[-- Type: application/octet-stream, Size: 2855 bytes --]

From f6f4388c917ce96b075a239a4535b8efc6064d14 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 16 Nov 2015 12:40:48 -0500
Subject: [PATCH 7/7] xen/pciback: Save xen_pci_op commands before processing
 it

Double fetch vulnerabilities happen when a variable is
fetched twice from shared memory but a security check is only
performed on the first fetched value.

The xen_pcibk_do_op function performs a switch statement on the op->cmd
value, which is stored in shared memory. Interestingly, this can result
in a double fetch vulnerability depending on the compiler optimizations
performed.

This patch fixes it by saving the xen_pci_op command before
processing it. We also use 'barrier' to make sure that the
compiler does not perform any optimization.

This is part of XSA155.

CC: stable@vger.kernel.org
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/xen-pciback/pciback.h     |  1 +
 drivers/xen/xen-pciback/pciback_ops.c | 15 ++++++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/xen-pciback/pciback.h b/drivers/xen/xen-pciback/pciback.h
index 58e38d5..4d529f3 100644
--- a/drivers/xen/xen-pciback/pciback.h
+++ b/drivers/xen/xen-pciback/pciback.h
@@ -37,6 +37,7 @@ struct xen_pcibk_device {
 	struct xen_pci_sharedinfo *sh_info;
 	unsigned long flags;
 	struct work_struct op_work;
+	struct xen_pci_op op;
 };
 
 struct xen_pcibk_dev_data {
diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
index c4a0666..a0e0e3e 100644
--- a/drivers/xen/xen-pciback/pciback_ops.c
+++ b/drivers/xen/xen-pciback/pciback_ops.c
@@ -298,9 +298,11 @@ void xen_pcibk_do_op(struct work_struct *data)
 		container_of(data, struct xen_pcibk_device, op_work);
 	struct pci_dev *dev;
 	struct xen_pcibk_dev_data *dev_data = NULL;
-	struct xen_pci_op *op = &pdev->sh_info->op;
+	struct xen_pci_op *op = &pdev->op;
 	int test_intx = 0;
 
+	*op = pdev->sh_info->op;
+	barrier();
 	dev = xen_pcibk_get_pci_dev(pdev, op->domain, op->bus, op->devfn);
 
 	if (dev == NULL)
@@ -342,6 +344,17 @@ void xen_pcibk_do_op(struct work_struct *data)
 		if ((dev_data->enable_intx != test_intx))
 			xen_pcibk_control_isr(dev, 0 /* no reset */);
 	}
+	pdev->sh_info->op.err = op->err;
+	pdev->sh_info->op.value = op->value;
+#ifdef CONFIG_PCI_MSI
+	if (op->cmd == XEN_PCI_OP_enable_msix && op->err == 0) {
+		unsigned int i;
+
+		for (i = 0; i < op->value; i++)
+			pdev->sh_info->op.msix_entries[i].vector =
+				op->msix_entries[i].vector;
+	}
+#endif
 	/* Tell the driver domain that we're done. */
 	wmb();
 	clear_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags);
-- 
2.1.0
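
The resulting pattern, reduced to a sketch: one copy in, all validation and
work on the private copy, and only the reply fields copied back out:

    struct xen_pci_op *op = &pdev->op;   /* driver-private storage */

    *op = pdev->sh_info->op;             /* single copy from shared memory */
    barrier();                           /* forbid re-reads of sh_info */

    /* ... validate and execute using 'op' only ... */

    pdev->sh_info->op.err   = op->err;   /* publish only the results */
    pdev->sh_info->op.value = op->value;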


[-- Attachment #11: xsa155-netbsd-xsa155-0001-netbsd-xen-Add-RING_COPY_REQUEST.patch --]
[-- Type: application/octet-stream, Size: 2150 bytes --]

From a56456ac3df28432fff44a9a9623e2ddfc826106 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 24 Nov 2015 02:51:56 +0000
Subject: [PATCH 1/5] netbsd/xen: Add RING_COPY_REQUEST()

RING_GET_REQUEST() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected).  Safe usage of a request
generally requires taking a local copy.

Provide a RING_COPY_REQUEST() macro to use instead of
RING_GET_REQUEST() and an open-coded memcpy().  This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.

Use a volatile source to prevent the compiler from reordering or
omitting the copy.

This is part of XSA155.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: Update comment about GCC bug.
---
 arch/xen/include/xen-public/io/ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/xen/include/xen-public/io/ring.h b/arch/xen/include/xen-public/io/ring.h
index 09c186c..630b80e 100644
--- a/arch/xen/include/xen-public/io/ring.h
+++ b/arch/xen/include/xen-public/io/ring.h
@@ -236,6 +236,20 @@ typedef struct __name##_back_ring __name##_back_ring_t
 #define RING_GET_REQUEST(_r, _idx)                                      \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
 
+/*
+ * Get a local copy of a request.
+ *
+ * Use this in preference to RING_GET_REQUEST() so all processing is
+ * done on a local copy that cannot be modified by the other end.
+ *
+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+ * to be ineffective where _req is a struct which consists of only bitfields.
+ */
+#define RING_COPY_REQUEST(_r, _idx, _req) do {				\
+	/* Use volatile to force the copy into _req. */			\
+	*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx);	\
+} while (0)
+
 #define RING_GET_RESPONSE(_r, _idx)                                     \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
 
-- 
2.5.2


[-- Attachment #12: xsa155-netbsd-xsa155-0002-netbsd-netback-Use-RING_COPY_REQUEST-instead-of-RING.patch --]
[-- Type: application/octet-stream, Size: 8263 bytes --]

From 1c697ca76a670b0883cd6a203828c33ccf4ecb1e Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 24 Nov 2015 02:53:16 +0000
Subject: [PATCH 2/5] netbsd/netback: Use RING_COPY_REQUEST instead of
 RING_GET_REQUEST

This way we operate on a local copy of the guest Rx request. This is
more than strictly necessary, as only the id and gref fields are used
and it is harmless if the frontend modifies these.

For the TX path we also copy the request and make sure to use only the
local copy.

This is based on the Linux 'xen-netback: use RING_COPY_REQUEST()
throughout' patch.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/xen/xen/xennetback_xenbus.c | 78 ++++++++++++++++++++--------------------
 1 file changed, 40 insertions(+), 38 deletions(-)

diff --git a/arch/xen/xen/xennetback_xenbus.c b/arch/xen/xen/xennetback_xenbus.c
index 7cc14af..0ef2353 100644
--- a/arch/xen/xen/xennetback_xenbus.c
+++ b/arch/xen/xen/xennetback_xenbus.c
@@ -715,7 +715,7 @@ xennetback_evthandler(void *arg)
 {
 	struct xnetback_instance *xneti = arg;
 	struct ifnet *ifp = &xneti->xni_if;
-	netif_tx_request_t *txreq;
+	netif_tx_request_t txreq;
 	struct xni_pkt *pkt;
 	vaddr_t pkt_va;
 	struct mbuf *m;
@@ -733,36 +733,36 @@ xennetback_evthandler(void *arg)
 		    receive_pending);
 		if (receive_pending == 0)
 			break;
-		txreq = RING_GET_REQUEST(&xneti->xni_txring, req_cons);
+		RING_COPY_REQUEST(&xneti->xni_txring, req_cons, &txreq);
 		xen_rmb();
 		XENPRINTF(("%s pkt size %d\n", xneti->xni_if.if_xname,
-		    txreq->size));
+		    txreq.size));
 		req_cons++;
 		if (__predict_false((ifp->if_flags & (IFF_UP | IFF_RUNNING)) !=
 		    (IFF_UP | IFF_RUNNING))) {
 			/* interface not up, drop */
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_DROPPED);
 			continue;
 		}
 		/*
 		 * Do some sanity checks, and map the packet's page.
 		 */
-		if (__predict_false(txreq->size < ETHER_HDR_LEN ||
-		   txreq->size > (ETHER_MAX_LEN - ETHER_CRC_LEN))) {
+		if (__predict_false(txreq.size < ETHER_HDR_LEN ||
+		   txreq.size > (ETHER_MAX_LEN - ETHER_CRC_LEN))) {
 			printf("%s: packet size %d too big\n",
-			    ifp->if_xname, txreq->size);
-			xennetback_tx_response(xneti, txreq->id,
+			    ifp->if_xname, txreq.size);
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_ERROR);
 			ifp->if_ierrors++;
 			continue;
 		}
 		/* don't cross page boundaries */
 		if (__predict_false(
-		    txreq->offset + txreq->size > PAGE_SIZE)) {
+		    txreq.offset + txreq.size > PAGE_SIZE)) {
 			printf("%s: packet cross page boundary\n",
 			    ifp->if_xname);
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_ERROR);
 			ifp->if_ierrors++;
 			continue;
@@ -774,15 +774,15 @@ xennetback_evthandler(void *arg)
 			if (ratecheck(&lasttime, &xni_pool_errintvl))
 				printf("%s: mbuf alloc failed\n",
 				    ifp->if_xname);
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_DROPPED);
 			ifp->if_ierrors++;
 			continue;
 		}
 
 		XENPRINTF(("%s pkt offset %d size %d id %d req_cons %d\n",
-		    xneti->xni_if.if_xname, txreq->offset,
-		    txreq->size, txreq->id, MASK_NETIF_TX_IDX(req_cons)));
+		    xneti->xni_if.if_xname, txreq.offset,
+		    txreq.size, txreq.id, MASK_NETIF_TX_IDX(req_cons)));
 		
 		pkt = pool_get(&xni_pkt_pool, PR_NOWAIT);
 		if (__predict_false(pkt == NULL)) {
@@ -790,16 +790,16 @@ xennetback_evthandler(void *arg)
 			if (ratecheck(&lasttime, &xni_pool_errintvl))
 				printf("%s: xnbpkt alloc failed\n",
 				    ifp->if_xname);
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_DROPPED);
 			ifp->if_ierrors++;
 			m_freem(m);
 			continue;
 		}
-		err = xen_shm_map(1, xneti->xni_domid, &txreq->gref, &pkt_va,
+		err = xen_shm_map(1, xneti->xni_domid, &txreq.gref, &pkt_va,
 		    &pkt->pkt_handle, XSHM_RO);
 		if (__predict_false(err == ENOMEM)) {
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_DROPPED);
 			ifp->if_ierrors++;
 			pool_put(&xni_pkt_pool, pkt);
@@ -810,7 +810,7 @@ xennetback_evthandler(void *arg)
 		if (__predict_false(err)) {
 			printf("%s: mapping foreing page failed: %d\n",
 			    xneti->xni_if.if_xname, err);
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_ERROR);
 			ifp->if_ierrors++;
 			pool_put(&xni_pkt_pool, pkt);
@@ -820,13 +820,13 @@ xennetback_evthandler(void *arg)
 
 		if ((ifp->if_flags & IFF_PROMISC) == 0) {
 			struct ether_header *eh =
-			    (void*)(pkt_va + txreq->offset);
+			    (void*)(pkt_va + txreq.offset);
 			if (ETHER_IS_MULTICAST(eh->ether_dhost) == 0 &&
 			    memcmp(CLLADDR(ifp->if_sadl), eh->ether_dhost,
 			    ETHER_ADDR_LEN) != 0) {
 				xni_pkt_unmap(pkt, pkt_va);
 				m_freem(m);
-				xennetback_tx_response(xneti, txreq->id,
+				xennetback_tx_response(xneti, txreq.id,
 				    NETIF_RSP_OKAY);
 				continue; /* packet is not for us */
 			}
@@ -845,31 +845,31 @@ so always copy for now.
 			 * ack it. Delaying it until the mbuf is
 			 * freed will stall transmit.
 			 */
-			m->m_len = min(MHLEN, txreq->size);
+			m->m_len = min(MHLEN, txreq.size);
 			m->m_pkthdr.len = 0;
-			m_copyback(m, 0, txreq->size,
-			    (void *)(pkt_va + txreq->offset));
+			m_copyback(m, 0, txreq.size,
+			    (void *)(pkt_va + txreq.offset));
 			xni_pkt_unmap(pkt, pkt_va);
-			if (m->m_pkthdr.len < txreq->size) {
+			if (m->m_pkthdr.len < txreq.size) {
 				ifp->if_ierrors++;
 				m_freem(m);
-				xennetback_tx_response(xneti, txreq->id,
+				xennetback_tx_response(xneti, txreq.id,
 				    NETIF_RSP_DROPPED);
 				continue;
 			}
-			xennetback_tx_response(xneti, txreq->id,
+			xennetback_tx_response(xneti, txreq.id,
 			    NETIF_RSP_OKAY);
 		} else {
 
-			pkt->pkt_id = txreq->id;
+			pkt->pkt_id = txreq.id;
 			pkt->pkt_xneti = xneti;
 
-			MEXTADD(m, pkt_va + txreq->offset,
-			    txreq->size, M_DEVBUF, xennetback_tx_free, pkt);
-			m->m_pkthdr.len = m->m_len = txreq->size;
+			MEXTADD(m, pkt_va + txreq.offset,
+			    txreq.size, M_DEVBUF, xennetback_tx_free, pkt);
+			m->m_pkthdr.len = m->m_len = txreq.size;
 			m->m_flags |= M_EXT_ROMAP;
 		}
-		if ((txreq->flags & NETTXF_csum_blank) != 0) {
+		if ((txreq.flags & NETTXF_csum_blank) != 0) {
 			xennet_checksum_fill(&m);
 			if (m == NULL) {
 				ifp->if_ierrors++;
@@ -953,6 +953,7 @@ xennetback_ifsoftstart_transfer(void *arg)
 	mmu_update_t *mmup;
 	multicall_entry_t *mclp;
 	netif_rx_response_t *rxresp;
+	netif_rx_request_t rxreq;
 	RING_IDX req_prod, resp_prod;
 	int do_event = 0;
 	gnttab_transfer_t *gop;
@@ -1028,10 +1029,10 @@ xennetback_ifsoftstart_transfer(void *arg)
 				nppitems++;
 			}
 			/* start filling ring */
-			gop->ref = RING_GET_REQUEST(&xneti->xni_rxring,
-			    xneti->xni_rxring.req_cons)->gref;
-			id = RING_GET_REQUEST(&xneti->xni_rxring,
-			    xneti->xni_rxring.req_cons)->id;
+			RING_COPY_REQUEST(&xneti->xni_rxring,
+			    xneti->xni_rxring.req_cons, &rxreq);
+			gop->ref = rxreq.gref;
+			id = rxreq.id;
 			xen_rmb();
 			xneti->xni_rxring.req_cons++;
 			rxresp = RING_GET_RESPONSE(&xneti->xni_rxring,
@@ -1198,6 +1199,7 @@ xennetback_ifsoftstart_copy(void *arg)
 	paddr_t xmit_ma;
 	int i, j;
 	netif_rx_response_t *rxresp;
+	netif_rx_request_t rxreq;
 	RING_IDX req_prod, resp_prod;
 	int do_event = 0;
 	gnttab_copy_t *gop;
@@ -1309,16 +1311,16 @@ xennetback_ifsoftstart_copy(void *arg)
 			gop->source.domid = DOMID_SELF;
 			gop->source.u.gmfn = xmit_ma >> PAGE_SHIFT;
 
-			gop->dest.u.ref = RING_GET_REQUEST(&xneti->xni_rxring,
-			    xneti->xni_rxring.req_cons)->gref;
+			RING_COPY_REQUEST(&xneti->xni_rxring,
+			    xneti->xni_rxring.req_cons, &rxreq);
+			gop->dest.u.ref = rxreq.gref;
 			gop->dest.offset = 0;
 			gop->dest.domid = xneti->xni_domid;
 
 			gop->len = m->m_pkthdr.len;
 			gop++;
 
-			id = RING_GET_REQUEST(&xneti->xni_rxring,
-			    xneti->xni_rxring.req_cons)->id;
+			id = rxreq.id;
 			xen_rmb();
 			xneti->xni_rxring.req_cons++;
 			rxresp = RING_GET_RESPONSE(&xneti->xni_rxring,
-- 
2.5.2


[-- Attachment #13: xsa155-netbsd-xsa155-0003-netbsd-ring-Add-barrier-to-provide-an-compiler-barri.patch --]
[-- Type: application/octet-stream, Size: 1272 bytes --]

From b367cdba0cc3e2de4237ca74f31043141deda892 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 24 Nov 2015 02:54:45 +0000
Subject: [PATCH 3/5] netbsd/ring: Add 'barrier' to provide a compiler
 barrier.

We need a mechanism to keep the compiler from optimizing too
aggressively. Using the 'barrier' macro will make the compiler not
optimize accesses past the 'barrier' (as in, re-use registers
or read only part of a value from memory).

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/xen/include/xen-public/io/ring.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/xen/include/xen-public/io/ring.h b/arch/xen/include/xen-public/io/ring.h
index 316bceb..5278d06 100644
--- a/arch/xen/include/xen-public/io/ring.h
+++ b/arch/xen/include/xen-public/io/ring.h
@@ -35,6 +35,7 @@
 #define xen_mb()  mb()
 #define xen_rmb() rmb()
 #define xen_wmb() wmb()
+#define barrier()     __asm__ __volatile__("": : :"memory")
 #endif
 #endif
 
@@ -42,6 +43,7 @@
 #define xen_mb()  x86_mfence()
 #define xen_rmb() x86_lfence()
 #define xen_wmb() x86_sfence()
+#define barrier()     __asm__ __volatile__("": : :"memory")
 #endif
 
 typedef unsigned int RING_IDX;
-- 
2.5.2


[-- Attachment #14: xsa155-netbsd-xsa155-0004-netbsd-block-only-read-request-operation-from-shared.patch --]
[-- Type: application/octet-stream, Size: 1182 bytes --]

From a0c5282aff51d5e6520caa904207b973567d920d Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 24 Nov 2015 02:56:07 +0000
Subject: [PATCH 4/5] netbsd/block: only read request operation from shared
 ring once

The compiler may load a switch statement's value multiple times from the
shared space. This could lead to the frontend manipulating the backend
into unforeseen branches.

We want to ensure that req->operation is only read once, and we
do that by using a compiler barrier.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/xen/xen/xbdback_xenbus.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/xen/xen/xbdback_xenbus.c b/arch/xen/xen/xbdback_xenbus.c
index 9ee0758..3d18021 100644
--- a/arch/xen/xen/xbdback_xenbus.c
+++ b/arch/xen/xen/xbdback_xenbus.c
@@ -1022,6 +1022,7 @@ xbdback_co_main_loop(struct xbdback_instance *xbdi, void *obj)
 			req->sector_number = req64->sector_number;
 			break;
 		}
+		barrier();
 		XENPRINTF(("xbdback op %d req_cons 0x%x req_prod 0x%x "
 		    "resp_prod 0x%x id %" PRIu64 "\n", req->operation,
 			xbdi->xbdi_ring.ring_n.req_cons,
-- 
2.5.2


[-- Attachment #15: xsa155-netbsd-xsa155-0005-netbsd-pciback-Operate-on-local-version-of-xen_pci_o.patch --]
[-- Type: application/octet-stream, Size: 2391 bytes --]

From 3f39e051b234b4bd8e36b820a932591afd6413b1 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 24 Nov 2015 02:57:22 +0000
Subject: [PATCH 5/5] netbsd/pciback: Operate on local version of xen_pci_op

Double fetch vulnerabilities happen when a variable is
fetched twice from shared memory but a security check is only
performed on the first fetched value.

The pciback_xenbus_evthandler function performs switch statements on the
op->size and op->cmd values, which are stored in shared memory.
Interestingly, this can result in a double fetch vulnerability depending
on the compiler optimizations performed.

This patch fixes it by saving the xen_pci_op command before
processing it. We also use 'barrier' to make sure that the
compiler does not perform any optimization.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/xen/xen/pciback.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/xen/xen/pciback.c b/arch/xen/xen/pciback.c
index 042c8c9..46c821c 100644
--- a/arch/xen/xen/pciback.c
+++ b/arch/xen/xen/pciback.c
@@ -188,6 +188,7 @@ struct pb_xenbus_instance {
 	/* communication with the domU */
         unsigned int pbx_evtchn; /* our even channel */
         struct xen_pci_sharedinfo *pbx_sh_info;
+        struct xen_pci_op op;
         grant_handle_t pbx_shinfo_handle; /* to unmap shared page */
 };
 
@@ -721,13 +722,16 @@ pciback_xenbus_evthandler(void * arg)
 {
 	struct pb_xenbus_instance *pbxi = arg;
 	struct pciback_pci_dev *pbd;
-	struct xen_pci_op *op = &pbxi->pbx_sh_info->op;
+	struct xen_pci_op *op = &pbxi->op;
 	u_int bus, dev, func;
 
 	hypervisor_clear_event(pbxi->pbx_evtchn);
 	if (xen_atomic_test_bit(&pbxi->pbx_sh_info->flags,
 	    _XEN_PCIF_active) == 0)
 		return 0;
+
+	memcpy(op, &pbxi->pbx_sh_info->op, sizeof (struct xen_pci_op));
+	barrier();
 	if (op->domain != 0) {
 		aprint_error("pciback: domain %d != 0", op->domain);
 		op->err = XEN_PCI_ERR_dev_not_found;
@@ -794,6 +798,8 @@ pciback_xenbus_evthandler(void * arg)
 		aprint_error("pciback: unknown cmd %d\n", op->cmd);
 		op->err = XEN_PCI_ERR_not_implemented;
 	}
+	pbxi->pbx_sh_info->op.value = op->value;
+	pbxi->pbx_sh_info->op.err = op->err;
 end:
 	xen_atomic_clear_bit(&pbxi->pbx_sh_info->flags, _XEN_PCIF_active);
 	hypervisor_notify_via_evtchn(pbxi->pbx_evtchn);
-- 
2.5.2


[-- Attachment #16: xsa155-qemu-qdisk-double-access.patch --]
[-- Type: application/octet-stream, Size: 1402 bytes --]

xen/blkif: Avoid double access to src->nr_segments

src is stored in shared memory and src->nr_segments is dereferenced
twice at the end of the function.  If a compiler decides to compile this
into two separate memory accesses then the size limitation could be
bypassed.

Fix it by removing the double access to src->nr_segments.

This is part of XSA-155.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
index 711b692..9e71e00 100644
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -85,8 +85,10 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
 		d->nr_sectors = s->nr_sectors;
 		return;
 	}
-	if (n > src->nr_segments)
-		n = src->nr_segments;
+	/* prevent the compiler from optimizing the code and using src->nr_segments instead */
+	barrier();
+	if (n > dst->nr_segments)
+		n = dst->nr_segments;
 	for (i = 0; i < n; i++)
 		dst->seg[i] = src->seg[i];
 }
@@ -106,8 +108,10 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
 		d->nr_sectors = s->nr_sectors;
 		return;
 	}
-	if (n > src->nr_segments)
-		n = src->nr_segments;
+	/* prevent the compiler from optimizing the code and using src->nr_segments instead */
+	barrier();
+	if (n > dst->nr_segments)
+		n = dst->nr_segments;
 	for (i = 0; i < n; i++)
 		dst->seg[i] = src->seg[i];
 }
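
The shape of this fix, sketched: nr_segments was copied into the private dst
earlier in the function, so after the barrier the loop bound is taken from
dst, keeping the bound and the copied value in agreement even if the frontend
rewrites src->nr_segments mid-function:

    dst->nr_segments = src->nr_segments;  /* single fetch into private dst */
    barrier();                            /* forbid re-reading src here */
    if (n > dst->nr_segments)             /* clamp against the copy... */
        n = dst->nr_segments;
    for (i = 0; i < n; i++)               /* ...so this loop cannot be run */
        dst->seg[i] = src->seg[i];        /* past the checked bound */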

[-- Attachment #17: xsa155-qemut-qdisk-double-access.patch --]
[-- Type: application/octet-stream, Size: 1804 bytes --]

From 27942b0cb2327e93deb12326bbe7b36c81f9fa7b Mon Sep 17 00:00:00 2001
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 20 Nov 2015 10:56:00 -0500
Subject: [PATCH] blkif: Avoid double access to src->nr_segments

src is stored in shared memory and src->nr_segments is dereferenced
twice at the end of the function.  If a compiler decides to compile this
into two separate memory accesses then the size limitation could be
bypassed.

Fix it by removing the double access to src->nr_segments.

This is part of XSA-155.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 hw/xen_blkif.h | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/xen_blkif.h b/hw/xen_blkif.h
index ca3a65b..eb29cb1 100644
--- a/hw/xen_blkif.h
+++ b/hw/xen_blkif.h
@@ -79,8 +79,10 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
-	if (n > src->nr_segments)
-		n = src->nr_segments;
+	/* prevent the compiler from optimizing the code and using src->nr_segments instead */
+	xen_mb();
+	if (n > dst->nr_segments)
+		n = dst->nr_segments;
 	for (i = 0; i < n; i++)
 		dst->seg[i] = src->seg[i];
 }
@@ -94,8 +96,10 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
-	if (n > src->nr_segments)
-		n = src->nr_segments;
+	/* prevent the compiler from optimizing the code and using src->nr_segments instead */
+	xen_mb();
+	if (n > dst->nr_segments)
+		n = dst->nr_segments;
 	for (i = 0; i < n; i++)
 		dst->seg[i] = src->seg[i];
 }
-- 
2.4.3


[-- Attachment #18: xsa155-qemut-xenfb.patch --]
[-- Type: application/octet-stream, Size: 1563 bytes --]

From 0ffd4547665d2fec648ab2c9ff856c5d9db9b07c Mon Sep 17 00:00:00 2001
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 20 Nov 2015 10:37:08 -0500
Subject: [PATCH 2/2] xenfb: avoid reading twice the same fields from the
 shared page

Reading the same field twice could give the guest a window of
opportunity to attack. In the case of event->type, gcc could compile
the switch statement into a jump table, effectively ending up reading
the type field multiple times.

This is part of XSA-155.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 hw/xenfb.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/hw/xenfb.c b/hw/xenfb.c
index 75b2bc2..369d45d 100644
--- a/hw/xenfb.c
+++ b/hw/xenfb.c
@@ -827,18 +827,20 @@ static void xenfb_invalidate(void *opaque)
 
 static void xenfb_handle_events(struct XenFB *xenfb)
 {
-    uint32_t prod, cons;
+    uint32_t prod, cons, out_cons;
     struct xenfb_page *page = xenfb->c.page;
 
     prod = page->out_prod;
-    if (prod == page->out_cons)
+    out_cons = page->out_cons;
+    if (prod == out_cons)
 	return;
     xen_rmb();		/* ensure we see ring contents up to prod */
-    for (cons = page->out_cons; cons != prod; cons++) {
+    for (cons = out_cons; cons != prod; cons++) {
 	union xenfb_out_event *event = &XENFB_OUT_RING_REF(page, cons);
+        uint8_t type = event->type;
 	int x, y, w, h;
 
-	switch (event->type) {
+	switch (type) {
 	case XENFB_TYPE_UPDATE:
 	    if (xenfb->up_count == UP_QUEUE)
 		xenfb->up_fullscreen = 1;
-- 
2.1.0


[-- Attachment #19: xsa155-qemu-xenfb.patch --]
[-- Type: application/octet-stream, Size: 1313 bytes --]

xenfb: avoid reading twice the same fields from the shared page

Reading the same field twice could give the guest a window of
opportunity to attack. In the case of event->type, gcc could compile
the switch statement into a jump table, effectively ending up reading
the type field multiple times.

This is part of XSA-155.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 5e324ef..4e2a27a 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -784,18 +784,20 @@ static void xenfb_invalidate(void *opaque)
 
 static void xenfb_handle_events(struct XenFB *xenfb)
 {
-    uint32_t prod, cons;
+    uint32_t prod, cons, out_cons;
     struct xenfb_page *page = xenfb->c.page;
 
     prod = page->out_prod;
-    if (prod == page->out_cons)
+    out_cons = page->out_cons;
+    if (prod == out_cons)
 	return;
     xen_rmb();		/* ensure we see ring contents up to prod */
-    for (cons = page->out_cons; cons != prod; cons++) {
+    for (cons = out_cons; cons != prod; cons++) {
 	union xenfb_out_event *event = &XENFB_OUT_RING_REF(page, cons);
+        uint8_t type = event->type;
 	int x, y, w, h;
 
-	switch (event->type) {
+	switch (type) {
 	case XENFB_TYPE_UPDATE:
 	    if (xenfb->up_count == UP_QUEUE)
 		xenfb->up_fullscreen = 1;

[-- Attachment #20: xsa155-xen-0001-xen-Add-RING_COPY_REQUEST.patch --]
[-- Type: application/octet-stream, Size: 2095 bytes --]

From 12b11658a9d6a654a1e7acbf2f2d56ce9a396c86 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 20 Nov 2015 11:59:05 -0500
Subject: [PATCH 1/3] xen: Add RING_COPY_REQUEST()

RING_GET_REQUEST() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected).  Safe usage of a request
generally requires taking a local copy.

Provide a RING_COPY_REQUEST() macro to use instead of
RING_GET_REQUEST() and an open-coded memcpy().  This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.

Use a volatile source to prevent the compiler from reordering or
omitting the copy.

This is part of XSA155.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v2: Add comment about GCC bug.
---
 xen/include/public/io/ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/include/public/io/ring.h b/xen/include/public/io/ring.h
index ba9401b..801c0da 100644
--- a/xen/include/public/io/ring.h
+++ b/xen/include/public/io/ring.h
@@ -212,6 +212,20 @@ typedef struct __name##_back_ring __name##_back_ring_t
 #define RING_GET_REQUEST(_r, _idx)                                      \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
 
+/*
+ * Get a local copy of a request.
+ *
+ * Use this in preference to RING_GET_REQUEST() so all processing is
+ * done on a local copy that cannot be modified by the other end.
+ *
+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+ * to be ineffective where _req is a struct which consists of only bitfields.
+ */
+#define RING_COPY_REQUEST(_r, _idx, _req) do {				\
+	/* Use volatile to force the copy into _req. */			\
+	*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx);	\
+} while (0)
+
 #define RING_GET_RESPONSE(_r, _idx)                                     \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
 
-- 
2.1.0
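
As a usage illustration, here is a hedged sketch of the intended
consumer pattern; consume_requests() and process_request() are
hypothetical names, and the real conversions appear in the blktap2
patch that follows. The request is copied into a private stack
variable, and the volatile cast in the macro's expansion keeps the
compiler from eliding or splitting that copy.

    /* Header locations as installed for Xen tools; other trees differ. */
    #include <xen/io/ring.h>
    #include <xen/io/blkif.h>

    /* Hypothetical handler standing in for real request processing. */
    static void process_request(blkif_request_t *req);

    static void consume_requests(blkif_back_ring_t *ring)
    {
        blkif_request_t req;                    /* private stack copy */
        RING_IDX rc = ring->req_cons;
        RING_IDX rp = ring->sring->req_prod;

        xen_rmb();          /* make ring contents visible up to rp */

        while (rc != rp) {
            RING_COPY_REQUEST(ring, rc, &req);  /* single guarded copy */
            ring->req_cons = ++rc;
            /* Only 'req' is inspected from here on; the frontend can
             * no longer alter fields after they have been validated. */
            process_request(&req);
        }
    }

Per the comment added by the patch, gcc bug 58145 can defeat the
volatile copy for a structure consisting purely of bitfields; the Xen
PV request structures are not laid out that way.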


[-- Attachment #21: xsa155-xen-0002-blktap2-Use-RING_COPY_REQUEST.patch --]
[-- Type: application/octet-stream, Size: 2439 bytes --]

From 851ffb4eea917e2708c912291dea4d133026c0ac Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 20 Nov 2015 12:16:02 -0500
Subject: [PATCH 2/3] blktap2: Use RING_COPY_REQUEST

Instead of RING_GET_REQUEST. Using a local copy of the
request (together with the proper memory barriers) means we
do not have to worry about the compiler optimizing the code
into a double fetch from the shared memory space.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
v2: Fix compile issues with tapdisk-vbd
---
 tools/blktap2/drivers/block-log.c   | 3 ++-
 tools/blktap2/drivers/tapdisk-vbd.c | 8 ++++----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/tools/blktap2/drivers/block-log.c b/tools/blktap2/drivers/block-log.c
index 5330cdc..5f3bd35 100644
--- a/tools/blktap2/drivers/block-log.c
+++ b/tools/blktap2/drivers/block-log.c
@@ -494,11 +494,12 @@ static int ctl_kick(struct tdlog_state* s, int fd)
   reqstart = s->bring.req_cons;
   reqend = s->sring->req_prod;
 
+  xen_mb();
   BDPRINTF("ctl: ring kicked (start = %u, end = %u)", reqstart, reqend);
 
   while (reqstart != reqend) {
     /* XXX actually submit these! */
-    memcpy(&req, RING_GET_REQUEST(&s->bring, reqstart), sizeof(req));
+    RING_COPY_REQUEST(&s->bring, reqstart, &req);
     BDPRINTF("ctl: read request %"PRIu64":%u", req.sector, req.count);
     s->bring.req_cons = ++reqstart;
 
diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index 6d1d94a..89ef9ed 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1555,7 +1555,7 @@ tapdisk_vbd_pull_ring_requests(td_vbd_t *vbd)
 	int idx;
 	RING_IDX rp, rc;
 	td_ring_t *ring;
-	blkif_request_t *req;
+	blkif_request_t req;
 	td_vbd_request_t *vreq;
 
 	ring = &vbd->ring;
@@ -1566,16 +1566,16 @@ tapdisk_vbd_pull_ring_requests(td_vbd_t *vbd)
 	xen_rmb();
 
 	for (rc = ring->fe_ring.req_cons; rc != rp; rc++) {
-		req = RING_GET_REQUEST(&ring->fe_ring, rc);
+		RING_COPY_REQUEST(&ring->fe_ring, rc, &req);
 		++ring->fe_ring.req_cons;
 
-		idx  = req->id;
+		idx  = req.id;
 		vreq = &vbd->request_list[idx];
 
 		ASSERT(list_empty(&vreq->next));
 		ASSERT(vreq->secs_pending == 0);
 
-		memcpy(&vreq->req, req, sizeof(blkif_request_t));
+		memcpy(&vreq->req, &req, sizeof(blkif_request_t));
 		vbd->received++;
 		vreq->vbd = vbd;
 
-- 
2.1.4


[-- Attachment #22: xsa155-xen-0003-libvchan-Read-prod-cons-only-once.patch --]
[-- Type: application/octet-stream, Size: 1578 bytes --]

From c1fce65e2b720684ea6ba76ae59921542bd154bb Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 20 Nov 2015 12:22:14 -0500
Subject: [PATCH 3/3] libvchan: Read prod/cons only once.

We must ensure that the prod/cons indexes are read only once
and that the compiler does not optimize the reads, that is,
split a single read into multiple instructions that feed the
later branch code. As such, insert barriers when fetching the
cons and prod indexes.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 tools/libvchan/io.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libvchan/io.c b/tools/libvchan/io.c
index 8a9629b..381cc05 100644
--- a/tools/libvchan/io.c
+++ b/tools/libvchan/io.c
@@ -117,6 +117,7 @@ static inline int send_notify(struct libxenvchan *ctrl, uint8_t bit)
 static inline int raw_get_data_ready(struct libxenvchan *ctrl)
 {
 	uint32_t ready = rd_prod(ctrl) - rd_cons(ctrl);
+	xen_mb(); /* Ensure 'ready' is read only once. */
 	if (ready > rd_ring_size(ctrl))
 		/* We have no way to return errors.  Locking up the ring is
 		 * better than the alternatives. */
@@ -158,6 +159,7 @@ int libxenvchan_data_ready(struct libxenvchan *ctrl)
 static inline int raw_get_buffer_space(struct libxenvchan *ctrl)
 {
 	uint32_t ready = wr_ring_size(ctrl) - (wr_prod(ctrl) - wr_cons(ctrl));
+	xen_mb(); /* Ensure 'ready' is read only once. */
 	if (ready > wr_ring_size(ctrl))
 		/* We have no way to return errors.  Locking up the ring is
 		 * better than the alternatives. */
-- 
2.1.0
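
The barrier here is doing compiler work as much as memory-ordering
work. Below is a minimal sketch of the hazard with hypothetical names
(ring_shared, data_ready): without the barrier, the compiler may fold
the subtraction into the branch and re-read the shared indexes after
the range check, so the value that was checked and the value that is
used can differ.

    #include <stdint.h>

    /* Hypothetical shared indexes, writable by the other end;
     * xen_mb() is assumed available as in the patched file. */
    struct ring_shared {
        uint32_t prod;
        uint32_t cons;
    };

    static uint32_t data_ready(struct ring_shared *s, uint32_t ring_size)
    {
        uint32_t ready = s->prod - s->cons;

        /* Without this, the compiler may re-read s->prod and s->cons
         * when evaluating the branch below, undoing the single read. */
        xen_mb();

        if (ready > ring_size)
            ready = ring_size;   /* clamp rather than trust shared data */
        return ready;
    }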


[-- Attachment #23: xsa155-xen44-0003-libvchan-Read-prod-cons-only-once.patch --]
[-- Type: application/octet-stream, Size: 1575 bytes --]

From ef86ad0b60fe179b1a6fa390e05c339fb44b9cc9 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 20 Nov 2015 12:22:14 -0500
Subject: [PATCH] libvchan: Read prod/cons only once.

We must ensure that the prod/cons indexes are read only once
and that the compiler does not optimize the reads, that is,
split a single read into multiple instructions that feed the
later branch code. As such, insert barriers when fetching the
cons and prod indexes.

This is part of XSA155.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 tools/libvchan/io.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libvchan/io.c b/tools/libvchan/io.c
index 804c63c..8b33f40 100644
--- a/tools/libvchan/io.c
+++ b/tools/libvchan/io.c
@@ -118,6 +118,7 @@ static inline int send_notify(struct libxenvchan *ctrl, uint8_t bit)
 static inline int raw_get_data_ready(struct libxenvchan *ctrl)
 {
 	uint32_t ready = rd_prod(ctrl) - rd_cons(ctrl);
+	xen_mb(); /* Ensure 'ready' is read only once. */
 	if (ready >= rd_ring_size(ctrl))
 		/* We have no way to return errors.  Locking up the ring is
 		 * better than the alternatives. */
@@ -159,6 +160,7 @@ int libxenvchan_data_ready(struct libxenvchan *ctrl)
 static inline int raw_get_buffer_space(struct libxenvchan *ctrl)
 {
 	uint32_t ready = wr_ring_size(ctrl) - (wr_prod(ctrl) - wr_cons(ctrl));
+	xen_mb(); /* Ensure 'ready' is read only once. */
 	if (ready > wr_ring_size(ctrl))
 		/* We have no way to return errors.  Locking up the ring is
 		 * better than the alternatives. */
-- 
2.1.4

