From: Paolo Bonzini <pbonzini@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: kwolf@redhat.com, famz@redhat.com,
zhang.zhanghailiang@huawei.com, qemu-devel@nongnu.org,
stefanha@redhat.com
Subject: Re: [Qemu-devel] [PATCH v2 for-2.3] virtio-blk: correctly dirty guest memory
Date: Thu, 02 Apr 2015 19:57:22 +0200
Message-ID: <551D8302.4080601@redhat.com>
In-Reply-To: <20150402195335-mutt-send-email-mst@redhat.com>
On 02/04/2015 19:54, Michael S. Tsirkin wrote:
> On Thu, Apr 02, 2015 at 07:50:44PM +0200, Paolo Bonzini wrote:
>> After qemu_iovec_destroy, the QEMUIOVector's size is zeroed, and the
>> zero size is ultimately used to compute virtqueue_push's len
>> argument.  Therefore, reads from virtio-blk devices did not
>> migrate their results correctly.  (Writes were okay.)
>>
>> Save the size in virtio_blk_handle_request, and use it when the request
>> is completed.
>>
>> Based on a patch by Wen Congyang.
>>
>> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>
> If you are touching this code anyway, maybe it makes
> sense to merge Rusty's virtio len patches?
> I didn't want them in 2.3 since they aren't critical,
> but we are changing these lines anyway, maybe make
> them correct?
My patch is a strict superset of Rusty's patch 2/2.  In fact, the first
version was very similar to his, but neither my v1 nor his patch covers
SCSI, flush, or get-serial requests.
Rusty's patch 1/2 only adds an assertion, I think. It's not critical
for 2.3.
Paolo
>> ---
>> hw/block/dataplane/virtio-blk.c | 3 +--
>> hw/block/virtio-blk.c | 13 ++++++++++++-
>> include/hw/virtio/virtio-blk.h | 1 +
>> 3 files changed, 14 insertions(+), 3 deletions(-)
>>
>> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
>> index cd41478..3db139b 100644
>> --- a/hw/block/dataplane/virtio-blk.c
>> +++ b/hw/block/dataplane/virtio-blk.c
>> @@ -77,8 +77,7 @@ static void complete_request_vring(VirtIOBlockReq *req, unsigned char status)
>> VirtIOBlockDataPlane *s = req->dev->dataplane;
>> stb_p(&req->in->status, status);
>>
>> - vring_push(s->vdev, &req->dev->dataplane->vring, &req->elem,
>> - req->qiov.size + sizeof(*req->in));
>> + vring_push(s->vdev, &req->dev->dataplane->vring, &req->elem, req->in_len);
>>
>> /* Suppress notification to guest by BH and its scheduled
>> * flag because requests are completed as a batch after io
>> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
>> index 000c38d..9546fd2 100644
>> --- a/hw/block/virtio-blk.c
>> +++ b/hw/block/virtio-blk.c
>> @@ -33,6 +33,7 @@ VirtIOBlockReq *virtio_blk_alloc_request(VirtIOBlock *s)
>> VirtIOBlockReq *req = g_slice_new(VirtIOBlockReq);
>> req->dev = s;
>> req->qiov.size = 0;
>> + req->in_len = 0;
>> req->next = NULL;
>> req->mr_next = NULL;
>> return req;
>> @@ -54,7 +55,7 @@ static void virtio_blk_complete_request(VirtIOBlockReq *req,
>> trace_virtio_blk_req_complete(req, status);
>>
>> stb_p(&req->in->status, status);
>> - virtqueue_push(s->vq, &req->elem, req->qiov.size + sizeof(*req->in));
>> + virtqueue_push(s->vq, &req->elem, req->in_len);
>> virtio_notify(vdev, s->vq);
>> }
>>
>> @@ -102,6 +103,14 @@ static void virtio_blk_rw_complete(void *opaque, int ret)
>> if (ret) {
>> int p = virtio_ldl_p(VIRTIO_DEVICE(req->dev), &req->out.type);
>> bool is_read = !(p & VIRTIO_BLK_T_OUT);
>> + /* Note that memory may be dirtied on read failure. If the
>> + * virtio request is not completed here, as is the case for
>> + * BLOCK_ERROR_ACTION_STOP, the memory may not be copied
>> + * correctly during live migration. While this is ugly,
>> + * it is acceptable because the device is free to write to
>> + * the memory until the request is completed (which will
>> + * happen on the other side of the migration).
>> + */
>> if (virtio_blk_handle_rw_error(req, -ret, is_read)) {
>> continue;
>> }
>> @@ -496,6 +505,8 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
>> exit(1);
>> }
>>
>> + /* We always touch the last byte, so just see how big in_iov is. */
>> + req->in_len = iov_size(in_iov, in_num);
>> req->in = (void *)in_iov[in_num - 1].iov_base
>> + in_iov[in_num - 1].iov_len
>> - sizeof(struct virtio_blk_inhdr);
>> diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
>> index b3ffcd9..6bf5905 100644
>> --- a/include/hw/virtio/virtio-blk.h
>> +++ b/include/hw/virtio/virtio-blk.h
>> @@ -67,6 +67,7 @@ typedef struct VirtIOBlockReq {
>> struct virtio_blk_inhdr *in;
>> struct virtio_blk_outhdr out;
>> QEMUIOVector qiov;
>> + size_t in_len;
>> struct VirtIOBlockReq *next;
>> struct VirtIOBlockReq *mr_next;
>> BlockAcctCookie acct;
>> --
>> 2.3.4
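
To make the failure mode described in the commit message concrete outside
the diff context, here is a standalone C sketch.  It is not QEMU code:
fake_qiov, inhdr and iov_size_of below are simplified stand-ins for
QEMUIOVector, struct virtio_blk_inhdr and iov_size.  It shows why a
completion length computed from qiov.size after the vector has been
destroyed collapses to the header size alone, and why recording the
writable length up front, as req->in_len does in the patch, keeps the data
bytes in the count.

/*
 * Standalone illustration (not QEMU code): the completion length must be
 * captured before the data vector is torn down, because destroying the
 * vector zeroes its size.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/* Simplified stand-in for QEMUIOVector: an iovec array plus a total size. */
struct fake_qiov {
    struct iovec *iov;
    int niov;
    size_t size;
};

/* Stand-in for the guest-visible status footer (struct virtio_blk_inhdr). */
struct inhdr {
    unsigned char status;
};

/* Stand-in for iov_size(): sum of all segment lengths. */
static size_t iov_size_of(const struct iovec *iov, unsigned int iov_cnt)
{
    size_t total = 0;
    for (unsigned int i = 0; i < iov_cnt; i++) {
        total += iov[i].iov_len;
    }
    return total;
}

static void fake_qiov_destroy(struct fake_qiov *qiov)
{
    free(qiov->iov);
    memset(qiov, 0, sizeof(*qiov));  /* size becomes 0, like qemu_iovec_destroy() */
}

int main(void)
{
    /* Device-writable descriptors of a read request: 512 bytes of data
     * plus the one-byte status footer. */
    static char data[512];
    static struct inhdr in;
    struct iovec in_iov[2] = {
        { .iov_base = data, .iov_len = sizeof(data) },
        { .iov_base = &in,  .iov_len = sizeof(in)   },
    };

    /* Fix: record the writable length up front, like req->in_len. */
    size_t in_len = iov_size_of(in_iov, 2);

    /* The data vector built for the actual I/O. */
    struct fake_qiov qiov = {
        .iov  = malloc(sizeof(struct iovec)),
        .niov = 1,
        .size = sizeof(data),
    };
    qiov.iov[0] = in_iov[0];

    /* The request completes; the data vector is destroyed first. */
    fake_qiov_destroy(&qiov);

    /* Old computation: qiov.size is now 0, so only sizeof(inhdr) would be
     * reported to virtqueue_push() and the 512 data bytes would never be
     * marked dirty for migration. */
    printf("old len = %zu (wrong)\n", qiov.size + sizeof(in));
    printf("new len = %zu (covers data + status)\n", in_len);
    return 0;
}

Because the saved length is taken from the whole device-writable in_iov
rather than from the data qiov, it also stays meaningful for requests that
never build a data vector at all, which matches the point made above about
SCSI, flush, and get-serial coverage.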
Thread overview (10+ messages):
2015-04-02 17:50 [Qemu-devel] [PATCH v2 for-2.3] virtio-blk: correctly dirty guest memory Paolo Bonzini
2015-04-02 17:54 ` Michael S. Tsirkin
2015-04-02 17:57 ` Paolo Bonzini [this message]
2015-04-02 18:46 ` Michael S. Tsirkin
2015-04-02 18:53 ` Paolo Bonzini
2015-04-02 19:17 ` Michael S. Tsirkin
2015-04-03 4:13 ` Li Zhijian
2015-04-07 14:11 ` Stefan Hajnoczi
2015-04-08 9:38 ` Stefan Hajnoczi
2015-04-08 9:42 ` Stefan Hajnoczi