Date: Thu, 26 Feb 2015 18:02:39 +0800
From: Fam Zheng
Subject: Re: [Qemu-devel] [RFC PATCH 01/14] docs: block replication's description
To: Wen Congyang
Cc: Kevin Wolf, Lai Jiangshan, Jiang Yunhong, Dong Eddie, qemu devel,
    "Dr. David Alan Gilbert", Gonglei, Stefan Hajnoczi, Paolo Bonzini,
    Yang Hongyang, jsnow@redhat.com, zhanghailiang
Message-ID: <20150226100239.GE5930@ad.nay.redhat.com>
In-Reply-To: <54EEE25D.2060704@cn.fujitsu.com>

On Thu, 02/26 17:07, Wen Congyang wrote:
> On 02/26/2015 04:44 PM, Fam Zheng wrote:
> > On Thu, 02/26 14:38, Wen Congyang wrote:
> >> On 02/25/2015 10:46 AM, Fam Zheng wrote:
> >>> On Tue, 02/24 15:50, Wen Congyang wrote:
> >>>> On 02/12/2015 04:44 PM, Fam Zheng wrote:
> >>>>> On Thu, 02/12 15:40, Wen Congyang wrote:
> >>>>>> On 02/12/2015 03:21 PM, Fam Zheng wrote:
> >>>>>>> Hi Congyang,
> >>>>>>>
> >>>>>>> On Thu, 02/12 11:07, Wen Congyang wrote:
> >>>>>>>> +== Workflow ==
> >>>>>>>> +The following is the image of block replication workflow:
> >>>>>>>> +
> >>>>>>>> +        +----------------------+            +------------------------+
> >>>>>>>> +        |Primary Write Requests|            |Secondary Write Requests|
> >>>>>>>> +        +----------------------+            +------------------------+
> >>>>>>>> +                  |                                      |
> >>>>>>>> +                  |                                     (4)
> >>>>>>>> +                  |                                      V
> >>>>>>>> +                  |                             /-------------\
> >>>>>>>> +                  |     Copy and Forward        |             |
> >>>>>>>> +                  |---------(1)----------+      | Disk Buffer |
> >>>>>>>> +                  |                      |      |             |
> >>>>>>>> +                  |                     (3)     \-------------/
> >>>>>>>> +                  |                 speculative    ^
> >>>>>>>> +                  |                write through  (2)
> >>>>>>>> +                  |                      |         |
> >>>>>>>> +                  V                      V         |
> >>>>>>>> +          +--------------+      +----------------+
> >>>>>>>> +          | Primary Disk |      | Secondary Disk |
> >>>>>>>> +          +--------------+      +----------------+
> >>>>>>>> +
> >>>>>>>> +    1) Primary write requests will be copied and forwarded to Secondary
> >>>>>>>> +       QEMU.
> >>>>>>>> +    2) Before Primary write requests are written to Secondary disk, the
> >>>>>>>> +       original sector content will be read from Secondary disk and
> >>>>>>>> +       buffered in the Disk buffer, but it will not overwrite the existing
> >>>>>>>> +       sector content in the Disk buffer.
> >>>>>>>
> >>>>>>> I'm a little confused by the tenses ("will be" versus "are") and terms. I
> >>>>>>> am reading them as "s/will be/are/g".
> >>>>>>>
> >>>>>>> Why do you need this buffer?
> >>>>>>
> >>>>>> We only sync the disk at the next checkpoint. Before the next checkpoint,
> >>>>>> the secondary VM writes to the buffer.
> >>>>>>
> >>>>>>>
> >>>>>>> If both primary and secondary write to the same sector, what is saved in
> >>>>>>> the buffer?
> >>>>>>
> >>>>>> The primary content will be written to the secondary disk, and the
> >>>>>> secondary content is saved in the buffer.
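(To check that I am reading the buffer semantics right, here is a rough
sketch of what arrows (2)-(4) in the diagram imply. This is illustrative
only; the class name, the dict-per-device storage and the sector-level
granularity are all made up:)

    # Disk Buffer semantics on the Secondary between two checkpoints.
    class SecondaryStorage:
        def __init__(self):
            self.disk = {}      # the Secondary disk: sector -> data
            self.buffer = {}    # the Disk Buffer:    sector -> data

        def secondary_vm_write(self, sector, data):
            # (4) Secondary VM writes go into the Disk Buffer only.
            self.buffer[sector] = data

        def primary_write(self, sector, data):
            # (2) Save the original disk content into the buffer first,
            # but never overwrite a sector that is already buffered.
            if sector not in self.buffer:
                self.buffer[sector] = self.disk.get(sector, b'\0')
            # (3) Speculatively write the forwarded Primary data through.
            self.disk[sector] = data

        def secondary_vm_read(self, sector):
            # The Secondary VM sees the buffer first, then the disk, so it
            # never observes the Primary's post-checkpoint writes.
            return self.buffer.get(sector, self.disk.get(sector, b'\0'))

        def checkpoint(self):
            # At a checkpoint the speculative Secondary state is dropped;
            # the disk already holds the Primary's content.
            self.buffer.clear()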
> >>>>> I wonder if alternatively this is possible with an imaginary "writable
> >>>>> backing image" feature, as described below.
> >>>>>
> >>>>> When we have a normal backing chain,
> >>>>>
> >>>>>            {virtio-blk dev 'foo'}
> >>>>>                      |
> >>>>>                      |
> >>>>>                      |
> >>>>> [base] <- [mid] <- (foo)
> >>>>>
> >>>>> where [base] and [mid] are read only and (foo) is writable, and we add an
> >>>>> overlay on top of an existing image,
> >>>>>
> >>>>>            {virtio-blk dev 'foo'}        {virtio-blk dev 'bar'}
> >>>>>                      |                             |
> >>>>>                      |                             |
> >>>>>                      |                             |
> >>>>> [base] <- [mid] <- (foo) <---------------------- (bar)
> >>>>>
> >>>>> it's important to make sure that writes to 'foo' don't break the data for
> >>>>> 'bar'. We can utilize an automatic hidden drive-backup target:
> >>>>>
> >>>>>            {virtio-blk dev 'foo'}                                    {virtio-blk dev 'bar'}
> >>>>>                      |                                                         |
> >>>>>                      |                                                         |
> >>>>>                      v                                                         v
> >>>>>
> >>>>> [base] <- [mid] <- (foo) <----------------- (hidden target) <--------------- (bar)
> >>>>>
> >>>>>                      v                             ^
> >>>>>                      v                             ^
> >>>>>                      v                             ^
> >>>>>                      v                             ^
> >>>>>                      >>>> drive-backup sync=none >>>>
> >>>>>
> >>>>> So when the guest writes to 'foo', the old data is moved to (hidden
> >>>>> target), which remains unchanged from (bar)'s PoV.
> >>>>>
> >>>>> The drive in the middle is called hidden because QEMU creates it
> >>>>> automatically; the naming is arbitrary.
> >>>>
> >>>> I don't understand this. In which function is the hidden target created
> >>>> automatically?
> >>>>
> >>>
> >>> It's to be determined. This part is only in my mind :)
> >>
> >> What about this:
> >> -drive file=nbd-target,if=none,id=nbd-target0 \
> >> -drive file=active-disk,if=virtio,driver=qcow2,backing.file.filename=hidden-disk,backing.driver=qcow2,backing.backing=nbd-target0
> >>
> >
> > It's close. I suppose backing.backing references another drive as its
> > backing_hd; then you cannot have the other backing.file.* options - they
> > conflict. It would be something along the lines of:
> >
> > -drive file=nbd-target,if=none,id=nbd-target0 \
> > -drive file=hidden-disk,if=none,id=hidden0,backing.backing=nbd-target0 \
> > -drive file=active-disk,if=virtio,driver=qcow2,backing.backing=hidden0
> >
> > Or for simplicity, s/backing.backing=/backing=/g
>
> If we use backing=drive_id, backing.backing and backing.file.* do not
> conflict: backing.backing=$drive_id means that the backing file's backing
> file's id is $drive_id.

I see.
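(For clarity, the copy-before-write behavior that the drive-backup sync=none
job provides in such a chain can be sketched as below. This is illustrative
only: images are modeled as dicts of sector -> data, and all names and
values are made up.)

    class Image:
        def __init__(self, backing=None):
            self.data = {}          # sectors written into this image
            self.backing = backing  # next image in the backing chain

        def read(self, sector):
            # Reads fall through the backing chain until a hit is found.
            img = self
            while img is not None:
                if sector in img.data:
                    return img.data[sector]
                img = img.backing
            return None

    base = Image()
    foo = Image(backing=base)       # guest device 'foo' writes here
    foo.data[0] = b'old'

    # Attach (hidden target) on top of (foo), and (bar) on top of that.
    hidden = Image(backing=foo)
    bar = Image(backing=hidden)

    def guest_write_foo(sector, data):
        # drive-backup sync=none: before (foo) is overwritten, its current
        # content is copied up into (hidden target), once per sector.
        if sector not in hidden.data:
            hidden.data[sector] = foo.read(sector)
        foo.data[sector] = data

    guest_write_foo(0, b'new')
    assert foo.read(0) == b'new'    # 'foo' sees its own new data
    assert bar.read(0) == b'old'    # 'bar' still sees the attach-time data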
>
> >
> > Yes, adding this "backing=$drive_id" option is also exactly what we expect
> > in order to support image fleecing, but we haven't figured out how to
> > allow that without breaking other QMP operations like block jobs, etc.
>
> I don't understand this. In which cases will QMP operations be broken? Can
> you give me some examples?
>

I don't mean there is a fundamental stopper for this, but in order to relax
the assumption that "only the top BDS can have a BlockBackend", we need to
think through the whole block layer and add new, finer checks/restrictions
where necessary; otherwise allowing arbitrary backing references will be a
mess. Some random questions I'm now aware of:

1. nbd-target0 is writable here; without the drive-backup, hidden0 could be
   corrupted by writes to it. So there needs to be a new convention and
   invariant to follow.

2. In QMP, block-committing hidden0 into nbd-target0 or its backing file
   will corrupt data (from nbd-target0's perspective) - see the sketch in
   the P.S. below.

3. The implications of "change" and "eject" are unclear when there is a
   backing reference.

4. Can a drive be backing-referenced by more than one other drive?

Just two cents, and I still need to think about it systematically.

Fam
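P.S. To make the hazard in question 2 concrete, here is a sketch. It is
illustrative only: images are plain dicts of sector -> data, and all names
and values are made up.

    nbd_target0 = {0: b'primary-new'}    # current data replicated from the Primary
    hidden0 = {0: b'old-at-checkpoint'}  # original content saved by the backup job
    # active-disk (not shown) sits on top: active -> hidden0 -> nbd_target0

    # block-commit hidden0: merge its sectors down into its backing file.
    for sector, data in hidden0.items():
        nbd_target0[sector] = data
    hidden0.clear()

    # nbd-target0 now holds the stale checkpoint-time content instead of
    # the Primary's current data - i.e. it is corrupted from nbd-target0's
    # perspective.
    assert nbd_target0[0] == b'old-at-checkpoint'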