From: Fam Zheng <famz@redhat.com>
Date: Mon, 16 Nov 2015 14:10:36 +0800
Message-Id: <1447654236-2979-1-git-send-email-famz@redhat.com>
Subject: [Qemu-devel] [PATCH v2] virtio-blk: Fix double completion for werror=stop
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, lvivier@redhat.com, qemu-block@nongnu.org, pl@kamp.de,
    qemu-stable@nongnu.org, Stefan Hajnoczi, pbonzini@redhat.com,
    dgibson@redhat.com

When a request R is absorbed by request M, it is appended to the
"mr_next" queue led by M, and is completed together with M in
virtio_blk_rw_complete.

When the error policy is "stop" and M hits an I/O error, R also gets
prepended to the per-device DMA restart queue, which is retried when the
VM resumes. This leads to a double completion (showing up as memory
corruption or use-after-free). Adding R to the queue is superfluous;
only M needs to be in the queue.

Fix this by marking request R as "merged" and skipping it in
virtio_blk_handle_rw_error.

Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
---
v2: Don't lose the request in migration. [Paolo]

 hw/block/virtio-blk.c          | 7 +++++++
 include/hw/virtio/virtio-blk.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index e70fccf..5cdb06f 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -36,6 +36,7 @@ VirtIOBlockReq *virtio_blk_alloc_request(VirtIOBlock *s)
     req->in_len = 0;
     req->next = NULL;
     req->mr_next = NULL;
+    req->merged = false;
     return req;
 }
@@ -344,6 +345,7 @@ static inline void submit_requests(BlockBackend *blk, MultiReqBuffer *mrb,
     for (i = start + 1; i < start + num_reqs; i++) {
         qemu_iovec_concat(qiov, &mrb->reqs[i]->qiov, 0,
                           mrb->reqs[i]->qiov.size);
+        mrb->reqs[i]->merged = true;
         mrb->reqs[i - 1]->mr_next = mrb->reqs[i];
         nb_sectors += mrb->reqs[i]->qiov.size / BDRV_SECTOR_SIZE;
     }
@@ -511,6 +513,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
               - sizeof(struct virtio_blk_inhdr);
     iov_discard_back(in_iov, &in_num, sizeof(struct virtio_blk_inhdr));
 
+    if (req->merged) {
+        /* Enough for restarting a (migrated) merged request, no need to
+         * actually submit I/O. */
+        return;
+    }
     type = virtio_ldl_p(VIRTIO_DEVICE(req->dev), &req->out.type);
 
     /* VIRTIO_BLK_T_OUT defines the command direction. VIRTIO_BLK_T_BARRIER
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 6bf5905..db4adf4 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -70,6 +70,7 @@ typedef struct VirtIOBlockReq {
     size_t in_len;
    struct VirtIOBlockReq *next;
     struct VirtIOBlockReq *mr_next;
+    bool merged;
     BlockAcctCookie acct;
 } VirtIOBlockReq;
-- 
2.4.3