Date: Thu, 26 Feb 2015 16:44:47 +0800
From: Fam Zheng
To: Wen Congyang
Cc: Kevin Wolf, Lai Jiangshan, Jiang Yunhong, Dong Eddie,
 "Dr. David Alan Gilbert", qemu devel, Gonglei, Stefan Hajnoczi,
 Paolo Bonzini, Yang Hongyang, jsnow@redhat.com, zhanghailiang
Subject: Re: [Qemu-devel] [RFC PATCH 01/14] docs: block replication's description
Message-ID: <20150226084447.GB5930@ad.nay.redhat.com>
In-Reply-To: <54EEBF7C.3010504@cn.fujitsu.com>

On Thu, 02/26 14:38, Wen Congyang wrote:
> On 02/25/2015 10:46 AM, Fam Zheng wrote:
> > On Tue, 02/24 15:50, Wen Congyang wrote:
> >> On 02/12/2015 04:44 PM, Fam Zheng wrote:
> >>> On Thu, 02/12 15:40, Wen Congyang wrote:
> >>>> On 02/12/2015 03:21 PM, Fam Zheng wrote:
> >>>>> Hi Congyang,
> >>>>>
> >>>>> On Thu, 02/12 11:07, Wen Congyang wrote:
> >>>>>> +== Workflow ==
> >>>>>> +The following is the image of block replication workflow:
> >>>>>> +
> >>>>>> +        +----------------------+            +------------------------+
> >>>>>> +        |Primary Write Requests|            |Secondary Write Requests|
> >>>>>> +        +----------------------+            +------------------------+
> >>>>>> +                  |                                       |
> >>>>>> +                  |                                      (4)
> >>>>>> +                  |                                       V
> >>>>>> +                  |                              /-------------\
> >>>>>> +                  |     Copy and Forward         |             |
> >>>>>> +                  |---------(1)----------+       | Disk Buffer |
> >>>>>> +                  |                      |       |             |
> >>>>>> +                  |                     (3)      \-------------/
> >>>>>> +                  |                 speculative      ^
> >>>>>> +                  |                write through    (2)
> >>>>>> +                  |                      |           |
> >>>>>> +                  V                      V           |
> >>>>>> +           +--------------+          +----------------+
> >>>>>> +           | Primary Disk |          | Secondary Disk |
> >>>>>> +           +--------------+          +----------------+
> >>>>>> +
> >>>>>> +    1) Primary write requests will be copied and forwarded to Secondary
> >>>>>> +       QEMU.
> >>>>>> +    2) Before Primary write requests are written to Secondary disk, the
> >>>>>> +       original sector content will be read from Secondary disk and
> >>>>>> +       buffered in the Disk buffer, but it will not overwrite the existing
> >>>>>> +       sector content in the Disk buffer.
> >>>>>
> >>>>> I'm a little confused by the tenses ("will be" versus "are") and terms. I am
> >>>>> reading them as "s/will be/are/g"
> >>>>>
> >>>>> Why do you need this buffer?
> >>>>
> >>>> We only sync the disk till next checkpoint. Before next checkpoint, secondary
> >>>> vm write to the buffer.
> >>>>
> >>>>>
> >>>>> If both primary and secondary write to the same sector, what is saved in the
> >>>>> buffer?
> >>>>
> >>>> The primary content will be written to the secondary disk, and the secondary content
> >>>> is saved in the buffer.
> >>>
> >>> I wonder if alternatively this is possible with an imaginary "writable backing
> >>> image" feature, as described below.
> >>>
> >>> When we have a normal backing chain,
> >>>
> >>>               {virtio-blk dev 'foo'}
> >>>                          |
> >>>                          |
> >>>                          |
> >>>     [base] <- [mid] <- (foo)
> >>>
> >>> Where [base] and [mid] are read only, (foo) is writable. When we add an overlay
> >>> to an existing image on top,
> >>>
> >>>               {virtio-blk dev 'foo'}        {virtio-blk dev 'bar'}
> >>>                          |                             |
> >>>                          |                             |
> >>>                          |                             |
> >>>     [base] <- [mid] <- (foo) <---------------------- (bar)
> >>>
> >>> It's important to make sure that writes to 'foo' doesn't break data for 'bar'.
> >>> We can utilize an automatic hidden drive-backup target:
> >>>
> >>>               {virtio-blk dev 'foo'}                             {virtio-blk dev 'bar'}
> >>>                          |                                                  |
> >>>                          |                                                  |
> >>>                          v                                                  v
> >>>
> >>>     [base] <- [mid] <- (foo) <----------------- (hidden target) <--------------- (bar)
> >>>
> >>>                          v                              ^
> >>>                          v                              ^
> >>>                          v                              ^
> >>>                          v                              ^
> >>>                          >>>> drive-backup sync=none >>>>
> >>>
> >>> So when guest writes to 'foo', the old data is moved to (hidden target), which
> >>> remains unchanged from (bar)'s PoV.
> >>>
> >>> The drive in the middle is called hidden because QEMU creates it automatically,
> >>> the naming is arbitrary.
> >>
> >> I don't understand this. In which function, the hidden target is created automatically?
> >>
> >
> > It's to be determined. This part is only in my mind :)
>
> What about this:
> -drive file=nbd-target,if=none,id=nbd-target0 \
> -drive file=active-disk,if=virtio,driver=qcow2,backing.file.filename=hidden-disk,backing.driver=qcow2,backing.backing=nbd-target0
>

It's close. I suppose backing.backing references another drive as its
backing_hd; in that case you cannot also have the backing.file.* options -
they conflict. It would be something along the lines of:

-drive file=nbd-target,if=none,id=nbd-target0 \
-drive file=hidden-disk,if=none,id=hidden0,backing.backing=nbd-target0 \
-drive file=active-disk,if=virtio,driver=qcow2,backing.backing=hidden0

Or, for simplicity, s/backing.backing=/backing=/g.

Yes, adding this "backing=$drive_id" option is also exactly what we expect in
order to support image fleecing, but we haven't figured out how to allow that
without breaking other QMP operations like block jobs, etc.

Fam
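
For illustration, the "drive-backup sync=none" step in the diagram above would
be issued over QMP roughly as shown below. This is only a sketch, not part of
the posted patches: the device name "foo" and the target file name
"hidden-disk.qcow2" are hypothetical, and it assumes the hidden image has
already been created, hence mode=existing.

    { "execute": "drive-backup",
      "arguments": { "device": "foo",
                     "target": "hidden-disk.qcow2",
                     "format": "qcow2",
                     "sync": "none",
                     "mode": "existing" } }

With sync=none, nothing is copied up front; only the old contents of sectors
that 'foo' is about to overwrite are copied into the target, which is what
keeps the data seen through (bar) unchanged.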