Message-ID: <5421A19A.9090201@gmail.com>
Date: Tue, 23 Sep 2014 18:36:42 +0200
From: Walid Nouri
Subject: Re: [Qemu-devel] Microcheckpointing: Memory-VCPU / Disk State consistency
To: Stefan Hajnoczi
Cc: kwolf@redhat.com, eddie.dong@intel.com, "Dr. David Alan Gilbert",
 "Michael R. Hines", qemu-devel@nongnu.org, Paolo Bonzini, yanghy@cn.fujitsu.com
In-Reply-To: <20140918135604.GB16227@stefanha-thinkpad.redhat.com>

On 18.09.2014 15:56, Stefan Hajnoczi wrote:
> There is the issue of request ordering (using write cache flushes). The
> secondary probably needs to perform requests in the same order and
> interleave cache flushes in the same way as the primary. Otherwise a
> power failure on the secondary could leave the disk in an invalid state
> that is impossible on the primary. So I'm just pointing out that cache
> flush operations matter, not just read/write.

To be honest, my thought was that drive-mirror handles all
block-device-specific problems, especially the cache flush requests for
write ordering. So my naive approach was to use existing functionality
as a kind of black-box transport mechanism and build on top of it. But
that does not seem to be possible for this subtle, tricky part of the
game.

This means the "block filter" on the secondary must ensure the commit
semantics. But to do that it must be able to interpret the
write-ordering semantics of a stream of write requests.

> The second, and bigger, point is that if disk commit holds back
> checkpoint commit it could be a significant performance problem due to
> the slow nature of disks.

You are completely right; this would raise the latency for the primary.
It could be avoided by changing the proposed protocol to write directly
on the primary and apply the updates to the secondary asynchronously.

> There are fancier solutions using either a journal or snapshots that
> provide data integrity without posing a performance bottleneck during
> the commit phase.
>
> The trick is to apply write requests as they come off the wire on the
> secondary but use a journal or snapshot mechanism to enforce commit
> semantics. That way the commit doesn't have to wait for writing out all
> the data to disk.
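
To check that I have understood the journal idea, here is how I
currently picture the secondary side, as a rough self-contained sketch
(plain C, not QEMU code; all names and the on-disk layout are made up):
writes coming off the wire are only appended to a journal, and a commit
record is written when the checkpoint commits, so the image never
reflects a state the primary has not reached.

/* Rough sketch of a write journal on the secondary (not QEMU code;
 * all names are made up).  Writes coming off the wire are appended to
 * a journal file without touching the disk image.  Only at checkpoint
 * commit is a commit record written; the idea is that a recovery pass
 * (not shown here) would replay only up to the last commit record. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum { REC_WRITE = 1, REC_COMMIT = 2 };

typedef struct {
    uint32_t type;    /* REC_WRITE or REC_COMMIT */
    uint32_t len;     /* payload length in bytes, 0 for commit records */
    uint64_t offset;  /* byte offset on the virtual disk */
} JournalRecord;

/* Append one incoming guest write to the journal. */
static int journal_append_write(FILE *jf, uint64_t offset,
                                const void *data, uint32_t len)
{
    JournalRecord rec = { REC_WRITE, len, offset };
    if (fwrite(&rec, sizeof(rec), 1, jf) != 1) {
        return -1;
    }
    if (fwrite(data, 1, len, jf) != len) {
        return -1;
    }
    return 0;
}

/* Called at checkpoint commit: push everything written so far, then
 * write the commit marker.  fflush() only hands the data to the OS;
 * a real implementation would also need fsync() before acknowledging
 * the commit. */
static int journal_commit(FILE *jf)
{
    JournalRecord rec = { REC_COMMIT, 0, 0 };
    if (fflush(jf) != 0) {
        return -1;
    }
    if (fwrite(&rec, sizeof(rec), 1, jf) != 1) {
        return -1;
    }
    return fflush(jf) == 0 ? 0 : -1;
}

int main(void)
{
    FILE *jf = fopen("mc-journal.bin", "wb");
    char buf[512];

    if (!jf) {
        return 1;
    }
    memset(buf, 0xab, sizeof(buf));
    journal_append_write(jf, 0, buf, sizeof(buf)); /* write arrives off the wire */
    journal_commit(jf);                            /* checkpoint commit point */
    fclose(jf);
    return 0;
}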

Wouldn't that mean sending some kind of protocol information with the
modified blocks, a barrier or something like that? Can you please
explain a little more what you meant?

> The details depend on the code and I don't remember everything well
> enough. Anyway, my mental model is:
>
> 1. The dirty bit is set *after* the primary has completed the write.
>    See bdrv_aligned_pwritev(). Therefore you cannot use the dirty
>    bitmap to query in-flight requests, instead you have to look at
>    bs->tracked_requests.
>
> 2. The mirror block job periodically scans the dirty bitmap (when there
>    is no rate-limit set it does this with no artifical delays) and
>    writes the dirty blocks.
>
> Given that cache flush requests probably need to be tracked too, maybe
> you need MC-specific block driver on the primary to monitor and control
> I/O requests.
>
> But I haven't thought this through and it's non-trivial so we need to
> break this down more.

Since drive-mirror lacks this functionality, one way (without changing
the drive-mirror code) might be an MC-specific mechanism on the primary.
This mechanism must respect write-ordering requests (such as forced
cache flushes and Force Unit Access writes) and send the corresponding
information for the stream of blocks to the secondary.

From what I have learned, I am assuming that most guest OS
filesystem/block layers follow an ordering interface based on SCSI. Is
that correct?

As those kinds of requests must be flagged in an I/O request by the
guest operating system, this should be possible. Do we have a chance to
access that information in a guest request?

If this is possible, does this information survive the journey through
the NBD server, or must there be another communication channel like the
QEMUFile approach of "block-migration.c"?

Walid
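
P.S. To make "the corresponding information for the stream of blocks" a
bit more concrete, this is the kind of per-request header I have in
mind. It is purely hypothetical (neither the NBD protocol nor any
existing QEMU wire format), just to show that a flush/FUA flag plus a
sequence number should be enough for the secondary to reproduce the
ordering barriers of the guest:

/* Purely hypothetical per-request header (not the NBD or QEMU wire
 * format) that a primary-side MC mechanism could send ahead of each
 * request so the secondary can reproduce the guest's ordering
 * barriers.  MC_REQ_FLUSH stands for a guest cache flush, MC_REQ_FUA
 * for a Force Unit Access write. */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htonl() */

enum {
    MC_REQ_WRITE = 0,   /* ordinary write, may be applied lazily */
    MC_REQ_FLUSH = 1,   /* secondary must drain and flush before going on */
    MC_REQ_FUA   = 2    /* write must be stable before it is acknowledged */
};

typedef struct {
    uint32_t type;    /* one of the MC_REQ_* values above */
    uint32_t len;     /* payload length in bytes (0 for a flush) */
    uint64_t offset;  /* byte offset on the virtual disk */
    uint64_t seq;     /* sequence number; secondary applies in this order */
} McReqHeader;

/* Serialize the header into a fixed 24-byte, big-endian wire form. */
static void mc_req_header_to_wire(const McReqHeader *h, uint8_t out[24])
{
    uint32_t v;

    v = htonl(h->type);                      memcpy(out +  0, &v, 4);
    v = htonl(h->len);                       memcpy(out +  4, &v, 4);
    v = htonl((uint32_t)(h->offset >> 32));  memcpy(out +  8, &v, 4);
    v = htonl((uint32_t)h->offset);          memcpy(out + 12, &v, 4);
    v = htonl((uint32_t)(h->seq >> 32));     memcpy(out + 16, &v, 4);
    v = htonl((uint32_t)h->seq);             memcpy(out + 20, &v, 4);
}

int main(void)
{
    /* Example: a flush barrier ordered after the fifth request. */
    McReqHeader flush = { MC_REQ_FLUSH, 0, 0, 5 };
    uint8_t wire[24];

    mc_req_header_to_wire(&flush, wire);
    return 0;
}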