From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:49206) by lists.gnu.org
 with esmtp (Exim 4.71) (envelope-from ) id 1c9G5H-0000XI-P8
 for qemu-devel@nongnu.org; Tue, 22 Nov 2016 13:46:24 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
 (envelope-from ) id 1c9G5C-0001P5-Ss
 for qemu-devel@nongnu.org; Tue, 22 Nov 2016 13:46:23 -0500
Received: from mail.kernel.org ([198.145.29.136]:46762) by eggs.gnu.org
 with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71)
 (envelope-from ) id 1c9G5C-0001Og-Mz
 for qemu-devel@nongnu.org; Tue, 22 Nov 2016 13:46:18 -0500
From: Stefano Stabellini
Date: Tue, 22 Nov 2016 10:46:05 -0800
Message-Id: <1479840369-19503-1-git-send-email-sstabellini@kernel.org>
In-Reply-To:
References:
Subject: [Qemu-devel] [PULL 1/5] xen: fix ioreq handling
List-Id:
List-Unsubscribe:
List-Archive:
List-Post:
List-Help:
List-Subscribe:
To: stefanha@gmail.com
Cc: stefanha@redhat.com, peter.maydell@linaro.org, sstabellini@kernel.org,
 anthony.perard@citrix.com, xen-devel@lists.xenproject.org,
 qemu-devel@nongnu.org, Jan Beulich, Jan Beulich

From: Jan Beulich

Avoid double fetches and bounds check size to avoid overflowing
internal variables.

This is CVE-2016-9381 / XSA-197.

Reported-by: yanghongke
Signed-off-by: Jan Beulich
Reviewed-by: Stefano Stabellini
Signed-off-by: Stefano Stabellini
---
 xen-hvm.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/xen-hvm.c b/xen-hvm.c
index 150c7e7..99b8ee8 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -810,6 +810,10 @@ static void cpu_ioreq_pio(ioreq_t *req)
     trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
                         req->data, req->count, req->size);
 
+    if (req->size > sizeof(uint32_t)) {
+        hw_error("PIO: bad size (%u)", req->size);
+    }
+
     if (req->dir == IOREQ_READ) {
         if (!req->data_is_ptr) {
             req->data = do_inp(req->addr, req->size);
@@ -846,6 +850,10 @@ static void cpu_ioreq_move(ioreq_t *req)
     trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
                          req->data, req->count, req->size);
 
+    if (req->size > sizeof(req->data)) {
+        hw_error("MMIO: bad size (%u)", req->size);
+    }
+
     if (!req->data_is_ptr) {
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
@@ -1010,11 +1018,13 @@ static int handle_buffered_iopage(XenIOState *state)
         req.df = 1;
         req.type = buf_req->type;
         req.data_is_ptr = 0;
+        xen_rmb();
         qw = (req.size == 8);
         if (qw) {
             buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
                                            IOREQ_BUFFER_SLOT_NUM];
             req.data |= ((uint64_t)buf_req->data) << 32;
+            xen_rmb();
         }
 
         handle_ioreq(state, &req);
@@ -1045,7 +1055,11 @@ static void cpu_handle_ioreq(void *opaque)
 
     handle_buffered_iopage(state);
     if (req) {
-        handle_ioreq(state, req);
+        ioreq_t copy = *req;
+
+        xen_rmb();
+        handle_ioreq(state, &copy);
+        req->data = copy.data;
 
         if (req->state != STATE_IOREQ_INPROCESS) {
             fprintf(stderr, "Badness in I/O request ... not in service?!: "
-- 
1.9.1
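
[Note, not part of the patch above] The "double fetch" the commit message
refers to is the general hazard that a field read more than once from memory
shared with another party can change between the reads, so the value that is
bounds-checked may not be the value that is later used. The hunks above close
that window by snapshotting the request (and issuing xen_rmb()) before acting
on it. A rough, self-contained sketch of that snapshot-then-validate pattern
follows; it is an illustration only, all names (fake_ioreq, shared_slot,
handle_request) are made up and it is not QEMU code.

/* Illustration only: snapshot a request from shared memory, then
 * validate and use only the local copy, so no field is fetched twice. */
#include <stdint.h>
#include <stdio.h>

struct fake_ioreq {
    uint64_t data;
    uint32_t size;
};

/* Stand-in for the slot that the other side of the ring can update. */
static struct fake_ioreq shared_page;
static volatile struct fake_ioreq *shared_slot = &shared_page;

static void handle_request(void)
{
    struct fake_ioreq local;

    /* Fetch each field exactly once; everything below uses the local
     * copy, so the size that is bounds-checked is the size that is
     * used.  The real code issues xen_rmb() after the snapshot to
     * order it against the earlier request-state check. */
    local.data = shared_slot->data;
    local.size = shared_slot->size;

    if (local.size > sizeof(uint32_t)) {
        fprintf(stderr, "bad size (%u)\n", local.size);
        return;
    }
    /* ... act only on local.data / local.size from here on ... */
}

int main(void)
{
    shared_page.size = 4;
    shared_page.data = 0xab;
    handle_request();
    return 0;
}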