Message-ID: <50C8C143.2010503@redhat.com>
In-Reply-To: <20121212171423.GB18597@redhat.com>
Date: Wed, 12 Dec 2012 18:39:15 +0100
From: Paolo Bonzini
To: "Michael S. Tsirkin"
Cc: Stefan Hajnoczi, Anthony Liguori, Rusty Russell, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCHv2] virtio: verify that all outstanding buffers are flushed

On 12/12/2012 18:14, Michael S. Tsirkin wrote:
> On Wed, Dec 12, 2012 at 05:51:51PM +0100, Paolo Bonzini wrote:
>> On 12/12/2012 17:37, Michael S. Tsirkin wrote:
>>>> You wrote "the only way to know head 1 is outstanding is because
>>>> backend has stored this info somewhere".  But the backend _is_
>>>> tracking it (by serializing and then restoring the VirtQueueElement),
>>>> and no leak happens because virtqueue_fill/flush will put the head on
>>>> the used ring sooner or later.
>>>
>>> If you did this before savevm, inuse would be 0.
>>
>> No, I won't.  I want a simple API that the device can call to keep
>> inuse up-to-date.  Perhaps a bit ugly compared to just saving inuse,
>> but it works.  Or are there other bits that need resyncing besides
>> inuse?  Bits that cannot be recovered from the existing migration data?
>
> Saving the inuse counter is useless.  We need to know which requests
> are outstanding if we want to retry them on the remote.

And that is what virtio-blk and virtio-scsi have been doing for years:
they store the whole VirtQueueElement, including the head index and the
scatter/gather lists.

Can you explain *why* the index is not enough to reconstruct the state
on the destination?  There may be bugs, and virtio_blk_load may need to
help, but that's okay.

>>> You said that at the point where we save state, some entries are
>>> outstanding.  It is too late to put head at that point.
>>
>> I don't want to put head on the source.  I want to put it on the
>> destination, when the request is completed.  Same as it is done now,
>> with bugfixes of course.  Are there any problems doing so, except that
>> inuse will not be up-to-date (easily fixed)?
>
> You have an outstanding request that is behind the last avail index.
> You do not want to complete it.  You migrate.  There is no way for the
> remote to understand that the request is outstanding.

The savevm callbacks know which requests are outstanding, and they pass
that information to the destination.  See virtio_blk_save and
virtio_blk_load.
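In outline, that pair of callbacks does something like the following.
This is a condensed sketch modeled on the 2012-era QEMU code, with
error handling and version checks omitted; virtqueue_mark_outstanding
is a hypothetical name standing in for the "simple API to keep inuse
up-to-date" proposed above.

/* Sketch: migrate virtio-blk's list of in-flight requests (s->rq). */
static void virtio_blk_save(QEMUFile *f, void *opaque)
{
    VirtIOBlock *s = opaque;
    VirtIOBlockReq *req;

    virtio_save(&s->vdev, f);               /* common virtio state,
                                               including last_avail_idx */
    for (req = s->rq; req; req = req->next) {
        qemu_put_sbyte(f, 1);               /* one more request follows */
        qemu_put_buffer(f, (unsigned char *)&req->elem,
                        sizeof(req->elem)); /* head index + sglists     */
    }
    qemu_put_sbyte(f, 0);                   /* end of the request list  */
}

static int virtio_blk_load(QEMUFile *f, void *opaque, int version_id)
{
    VirtIOBlock *s = opaque;

    virtio_load(&s->vdev, f);
    while (qemu_get_sbyte(f)) {
        VirtIOBlockReq *req = virtio_blk_alloc_request(s);

        qemu_get_buffer(f, (unsigned char *)&req->elem,
                        sizeof(req->elem));
        req->next = s->rq;
        s->rq = req;                        /* resubmitted when the VM
                                               starts running           */
        virtqueue_mark_outstanding(s->vq);  /* hypothetical: bump inuse
                                               so the virtio core knows
                                               this element is still in
                                               flight                   */
    }
    return 0;
}

When a retried request completes on the destination,
virtqueue_fill/virtqueue_flush use the head stored in req->elem.index
to put it on the used ring, which is the "no leak" argument above.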
What is not clear, and what you haven't explained, is how you get to a
bug in the handling of the avail ring.  What's wrong with this
explanation (reading "A n" as the guest adding head n to the avail
ring, and "U n" as the device putting it on the used ring):

  A 1
  A 2
  U 2
  A 2
  U 2
  A 2
  U 2
  A 2   <---
  U 2

where before the point marked with the arrow, the avail ring is

  1 2 2 2
  vring_avail_idx(vq) == 3
  last_avail_idx == 3

and after the point marked with the arrow, the avail ring is

  2 2 2 2
  vring_avail_idx(vq) == 4
  last_avail_idx == 3

?!?

>>>> It's not common, but you cannot block migration because you have an
>>>> I/O error.  Solving the error may involve migrating the guests away
>>>> from that host.
>>>
>>> No, you should complete with an error.
>>
>> Knowing that the request will fail, the admin will not be able to do
>> the migration, even if it would solve the error transparently.
>
> You are saying there's no way to complete all requests?

With an error, yes.  Transparently, after fixing the error (which may
involve migration), no.

Paolo
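A toy model of the bookkeeping being argued about, for readers
following the notation.  The A/U semantics, the separate pop step, and
the ring size are assumptions read out of the trace above, not details
from the thread.  It shows why, once last_avail_idx has moved past a
head, only the backend's own record of the request (the serialized
VirtQueueElement) can say that the head is still outstanding.

#include <stdio.h>
#include <stdint.h>

#define QSZ 4                       /* assumed ring size for this toy  */

static uint16_t ring[QSZ];          /* avail ring entries (heads)      */
static uint16_t avail_idx;          /* guest side: vring_avail_idx(vq) */
static uint16_t last_avail_idx;     /* device progress in avail ring   */
static unsigned inuse;              /* popped but not yet completed    */

static void A(uint16_t head)        /* "A n": guest offers head n      */
{
    ring[avail_idx % QSZ] = head;
    avail_idx++;
}

static uint16_t pop(void)           /* device picks up the next head   */
{
    inuse++;
    return ring[last_avail_idx++ % QSZ];
}

static void U(uint16_t head)        /* "U n": device completes head n  */
{
    (void)head;                     /* real code would put this head
                                       on the used ring here           */
    inuse--;
}

int main(void)
{
    A(1);
    A(2);
    uint16_t h1 = pop();            /* head 1: its I/O stays in flight */
    uint16_t h2 = pop();            /* head 2                          */
    U(h2);                          /* head 2 completes first          */

    /* The ring indexes alone no longer show that h1 is outstanding:  */
    printf("avail_idx=%u last_avail_idx=%u inuse=%u pending head=%u\n",
           avail_idx, last_avail_idx, inuse, h1);
    /* prints: avail_idx=2 last_avail_idx=2 inuse=1 pending head=1    */
    return 0;
}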