From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 25 Feb 2015 17:45:12 +0800
From: Fam Zheng
Message-ID: <20150225094512.GA1823@ad.nay.redhat.com>
References: <1423710438-14377-1-git-send-email-wency@cn.fujitsu.com> <1423710438-14377-2-git-send-email-wency@cn.fujitsu.com> <20150212072117.GB32554@ad.nay.redhat.com> <54DC58E6.7060608@cn.fujitsu.com> <20150212084435.GD32554@ad.nay.redhat.com> <54ED9179.7040602@cn.fujitsu.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <54ED9179.7040602@cn.fujitsu.com>
Subject: Re: [Qemu-devel] [RFC PATCH 01/14] docs: block replication's description
To: Wen Congyang
Cc: Kevin Wolf, Lai Jiangshan, Jiang Yunhong, Dong Eddie,
 "Dr. David Alan Gilbert", qemu devel, Gonglei, Stefan Hajnoczi,
 Paolo Bonzini, Yang Hongyang, jsnow@redhat.com, zhanghailiang

On Wed, 02/25 17:10, Wen Congyang wrote:
> On 02/12/2015 04:44 PM, Fam Zheng wrote:
> > On Thu, 02/12 15:40, Wen Congyang wrote:
> >> On 02/12/2015 03:21 PM, Fam Zheng wrote:
> >>> Hi Congyang,
> >>>
> >>> On Thu, 02/12 11:07, Wen Congyang wrote:
> >>>> +== Workflow ==
> >>>> +The following is the image of block replication workflow:
> >>>> +
> >>>> +        +----------------------+            +------------------------+
> >>>> +        |Primary Write Requests|            |Secondary Write Requests|
> >>>> +        +----------------------+            +------------------------+
> >>>> +                  |                                       |
> >>>> +                  |                                      (4)
> >>>> +                  |                                       V
> >>>> +                  |                              /-------------\
> >>>> +                  |      Copy and Forward        |             |
> >>>> +                  |---------(1)----------+       | Disk Buffer |
> >>>> +                  |                      |       |             |
> >>>> +                  |                     (3)      \-------------/
> >>>> +                  |                 speculative       ^
> >>>> +                  |                write through     (2)
> >>>> +                  |                      |            |
> >>>> +                  V                      V            |
> >>>> +           +--------------+       +----------------+
> >>>> +           | Primary Disk |       | Secondary Disk |
> >>>> +           +--------------+       +----------------+
> >>>> +
> >>>> +    1) Primary write requests will be copied and forwarded to Secondary
> >>>> +       QEMU.
> >>>> +    2) Before Primary write requests are written to Secondary disk, the
> >>>> +       original sector content will be read from Secondary disk and
> >>>> +       buffered in the Disk buffer, but it will not overwrite the existing
> >>>> +       sector content in the Disk buffer.
> >>>
> >>> I'm a little confused by the tenses ("will be" versus "are") and terms. I am
> >>> reading them as "s/will be/are/g"
> >>>
> >>> Why do you need this buffer?
> >>
> >> We only sync the disk till the next checkpoint. Before the next checkpoint, the
> >> secondary VM writes to the buffer.
> >>
> >>>
> >>> If both primary and secondary write to the same sector, what is saved in the
> >>> buffer?
> >>
> >> The primary content will be written to the secondary disk, and the secondary
> >> content is saved in the buffer.
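The buffering rules in the exchange above (steps 2 and 4 of the workflow, plus the answer about conflicting writes) can be sketched in a few lines of Python. This is only an illustrative model at sector granularity with in-memory dicts; the class and method names are hypothetical, not QEMU code.

```python
# Minimal sketch of the Disk Buffer semantics discussed above, assuming
# sector-granularity storage modeled as dicts. Names are hypothetical.

class SecondaryStorage:
    def __init__(self, disk):
        self.disk = dict(disk)   # secondary disk: sector -> content
        self.buffer = {}         # Disk Buffer: sector -> content

    def primary_write(self, sector, data):
        """Forwarded primary write (steps 2/3): speculative write-through."""
        # Save the original sector content into the buffer first, but never
        # overwrite content already present there (e.g. a secondary VM write).
        if sector not in self.buffer:
            self.buffer[sector] = self.disk.get(sector)
        self.disk[sector] = data

    def secondary_write(self, sector, data):
        """Secondary VM write (step 4): goes to the buffer, not the disk."""
        self.buffer[sector] = data

    def secondary_read(self, sector):
        """The secondary VM sees the buffer first, then the disk."""
        if sector in self.buffer:
            return self.buffer[sector]
        return self.disk.get(sector)


s = SecondaryStorage({0: b"old0", 1: b"old1"})
s.secondary_write(1, b"sec1")    # secondary writes sector 1 before primary
s.primary_write(0, b"pri0")      # primary writes land on the secondary disk
s.primary_write(1, b"pri1")
```

After this sequence the secondary disk holds the primary content for both sectors, while the secondary VM still reads its own write for sector 1 and the pre-checkpoint content for sector 0 — matching "the primary content will be written to the secondary disk, and the secondary content is saved in the buffer."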
> >
> > I wonder if alternatively this is possible with an imaginary "writable backing
> > image" feature, as described below.
> >
> > When we have a normal backing chain,
> >
> >                {virtio-blk dev 'foo'}
> >                          |
> >                          |
> >                          |
> >     [base] <- [mid] <- (foo)
> >
> > Where [base] and [mid] are read only, (foo) is writable. When we add an overlay
> > to an existing image on top,
> >
> >                {virtio-blk dev 'foo'}          {virtio-blk dev 'bar'}
> >                          |                               |
> >                          |                               |
> >                          |                               |
> >     [base] <- [mid] <- (foo) <---------------------- (bar)
> >
> > It's important to make sure that writes to 'foo' don't break data for 'bar'.
> > We can utilize an automatic hidden drive-backup target:
> >
> >                {virtio-blk dev 'foo'}          {virtio-blk dev 'bar'}
> >                          |                               |
> >                          |                               |
> >                          v                               v
> >
> >     [base] <- [mid] <- (foo) <----- (hidden target) <----- (bar)
> >
> >                          v                 ^
> >                          v                 ^
> >                          v                 ^
> >                          v                 ^
> >                          >>>> drive-backup sync=none >>>>
> >
> > So when the guest writes to 'foo', the old data is moved to (hidden target), which
> > remains unchanged from (bar)'s PoV.
> >
> > The drive in the middle is called hidden because QEMU creates it automatically;
> > the naming is arbitrary.
> >
> > It is interesting because it is a more generalized case of image fleecing,
> > where the (hidden target) is exposed via NBD server for data scanning (read
> > only) purposes.
> >
> > More interestingly, with the above facility, it is also possible to create a
> > guest-visible live snapshot (disk 'bar') of an existing device (disk 'foo') very
> > cheaply. Or call it a shadow copy if you will.
> >
> > Back to the COLO case, the configuration will be very similar:
> >
> >
> >      {primary wr}                                    {secondary vm}
> >            |                                               |
> >            |                                               |
> >            |                                               |
> >            v                                               v
> >
> > [what] <- [ever] <- (nbd target) <----- (hidden buf disk) <----- (active disk)
> >
> >                          v                    ^
> >                          v                    ^
> >                          v                    ^
> >                          v                    ^
> >                          >>>> drive-backup sync=none >>>>
>
> Why does the nbd target have the backing image [ever]?

It's not strictly necessary, depending on your VM disk configuration.
(For example, at the time of VM booting, your image already points to a backing
file, etc.)

Fam
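P.S. The copy-before-write behavior of the hidden drive-backup target described
above can also be sketched with a toy backing-chain model. This is only an
illustrative sketch assuming sector-granularity dicts; the `Image` class and
`write_with_backup` helper are hypothetical, not QEMU's block layer.

```python
# Toy model of a backing chain with a hidden copy-before-write target,
# mimicking "drive-backup sync=none": before a write lands on 'foo', the
# old data is copied into the hidden node so 'bar' keeps a stable view.
# All names are illustrative, not QEMU API.

class Image:
    def __init__(self, backing=None):
        self.data = {}          # this node's own sectors
        self.backing = backing  # backing image (read-only from our PoV)

    def read(self, sector):
        if sector in self.data:
            return self.data[sector]
        return self.backing.read(sector) if self.backing else None


def write_with_backup(active, hidden, sector, data):
    """Copy-before-write: preserve old contents in 'hidden', then write."""
    if sector not in hidden.data:
        hidden.data[sector] = active.read(sector)
    active.data[sector] = data


# [base] <- (foo) <- (hidden target) <- (bar)
base = Image()
base.data[0] = b"v1"
foo = Image(backing=base)
hidden = Image(backing=foo)
bar = Image(backing=hidden)

write_with_backup(foo, hidden, 0, b"v2")  # guest writes to 'foo'
```

After the write, reads through 'foo' see the new data while reads through
'bar' still resolve to the pre-write contents via the hidden target — the
property that makes writes to 'foo' safe for 'bar'.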