From: Paolo Bonzini
Date: Thu, 13 Dec 2012 08:39:14 +0100
Message-ID: <50C98622.9050605@redhat.com>
In-Reply-To: <20121212211926.GA23087@redhat.com>
Subject: Re: [Qemu-devel] [PATCHv2] virtio: verify that all outstanding buffers are flushed
To: "Michael S. Tsirkin"
Cc: Stefan Hajnoczi, Anthony Liguori, Rusty Russell, qemu-devel@nongnu.org, Stefan Hajnoczi

On 12/12/2012 22:19, Michael S. Tsirkin wrote:
> On Wed, Dec 12, 2012 at 10:00:37PM +0100, Paolo Bonzini wrote:
>> On 12/12/2012 20:23, Michael S. Tsirkin wrote:
>>>>> Saving the inuse counter is useless. We need to know which requests
>>>>> are outstanding if we want to retry them on the remote.
>>>>
>>>> And that's what virtio-blk and virtio-scsi have been doing for years.
>>>
>>> I don't see it - all I see in save is virtio_save.
>>
>> static void virtio_blk_save(QEMUFile *f, void *opaque)
>> {
>>     VirtIOBlock *s = opaque;
>>     VirtIOBlockReq *req = s->rq;
>>
>>     virtio_save(&s->vdev, f);
>>
>>     while (req) {
>>         qemu_put_sbyte(f, 1);
>>         qemu_put_buffer(f, (unsigned char *)&req->elem, sizeof(req->elem));
>
> Ow. Does it really save VirtQueueElement?
>
> typedef struct VirtQueueElement
> {
>     unsigned int index;
>     unsigned int out_num;
>     unsigned int in_num;
>     hwaddr in_addr[VIRTQUEUE_MAX_SIZE];
>     hwaddr out_addr[VIRTQUEUE_MAX_SIZE];
>     struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
>     struct iovec out_sg[VIRTQUEUE_MAX_SIZE];
> } VirtQueueElement;
>
> Complete with pointers into qemu memory and all?

Yes, that sucks (also the endianness is totally broken), but the
pointers into QEMU memory are restored afterwards by remapping the
addresses.

Like Anthony, I'm not sure whether reloading the sglist from guest
memory is safe.  It would require re-validation of everything.  We
_can_ trust the remote.  Things like races on connecting to the
incoming-migration socket are best handled outside QEMU with a
firewall or a separate network that is not guest-accessible.  I'm
pretty sure that virtio-blk is the least of our worries here.

> Okay, so the only bug is inuse getting negative, right?
> So all we need to do is fix up the inuse value
> after restoring the outstanding requests - basically
> count the restored buffers and set inuse accordingly.

Yes, I agree.

Paolo
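
A minimal sketch of the fix-up Michael describes, assuming the existing
virtio-blk load path and a hypothetical virtio_queue_inc_inuse() helper
(the real inuse counter is private to struct VirtQueue in virtio.c),
could simply count requests as they are re-queued on the destination:

    /* Sketch only, not the actual QEMU implementation. */
    static int virtio_blk_load(QEMUFile *f, void *opaque, int version_id)
    {
        VirtIOBlock *s = opaque;
        int ret = virtio_load(&s->vdev, f);

        if (ret) {
            return ret;
        }

        while (qemu_get_sbyte(f)) {
            /* Re-read each outstanding request saved by virtio_blk_save()
             * and put it back on the retry list. */
            VirtIOBlockReq *req = virtio_blk_alloc_request(s);
            qemu_get_buffer(f, (unsigned char *)&req->elem, sizeof(req->elem));
            req->next = s->rq;
            s->rq = req;

            /* Count the restored buffer so the virtqueue's inuse value
             * matches the number of outstanding requests, instead of
             * going negative when they later complete.
             * virtio_queue_inc_inuse() is a hypothetical accessor. */
            virtio_queue_inc_inuse(s->vq);
        }

        return 0;
    }

The point of this shape is that inuse is derived from the restored
requests themselves rather than saved and reloaded separately, which is
why no extra counter is needed in the migration stream.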