From: Dor Laor
Reply-To: dlaor@redhat.com
Date: Wed, 08 Feb 2012 10:49:22 +0200
Subject: Re: [Qemu-devel] [RFC PATCH] replication agent module
To: Ori Mamluk
Cc: Kevin Wolf, עודד קדם, תומר בן אור, qemu-devel@nongnu.org, Yair Kuszpet, Paolo Bonzini

On 02/08/2012 08:10 AM, Ori Mamluk wrote:
> On 07/02/2012 17:47, Paolo Bonzini wrote:
>> On 02/07/2012 03:48 PM, Ori Mamluk wrote:
>>>> The current streaming code in QEMU only deals with the former.
>>>> Streaming to a remote server would not be supported.
>>>>
>>> I need it at the same time. The rephub reads either the full volume
>>> or parts of it, and concurrently protects new IOs.
>>
>> Why can't QEMU itself stream the full volume in the background, and
>> send that together with any new I/O? Is it because the rephub knows
>> which parts are out-of-date and need recovery? In that case, as a
>> first approximation the rephub can pass the sector at which streaming
>> should start.
> Yes - it's because the rephub knows. The parts that need recovery may
> be a series of random IOs that were lost because of a network outage
> somewhere along the replication pipe.
> It's easy to think of it as a bitmap holding the not-yet-replicated
> IOs. The rephub occasionally reads those areas to 'sync' them, so in
> effect the rephub needs read access - it's not really to trigger
> streaming from an offset.
>>
>> But I'm also starting to wonder whether it would be simpler to use
>> existing replication code. DRBD is more feature-rich, and you can use
>> it over loopback or NBD devices (respectively raw and non-raw), and
>> also store the replication metadata in a file using the loopback
>> device. Ceph even has a userspace library and support within QEMU.
>>
> I think there are two immediate problems that drbd poses:
> 1. Our replication is not a simple mirror - it maintains history, i.e.
> you can recover to any point in time in the last X hours (usually 24)
> at a granularity of about 5 seconds.
> To be able to do that and keep the replica consistent we need to be
> notified of each IO.
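A minimal sketch of the bitmap idea above, with invented names and an
assumed 64 KiB grain size (illustration only, not actual rephub code):

  /* One bit per fixed-size grain of the protected volume; set bits
   * mark writes that still have to reach the rephub. */
  #include <stdint.h>
  #include <stdlib.h>

  #define GRAIN_SIZE    (64 * 1024)
  #define BITS_PER_LONG (8 * sizeof(unsigned long))

  typedef struct {
      unsigned long *bits;
      uint64_t nb_grains;
  } DirtyBitmap;

  static DirtyBitmap *dirty_bitmap_new(uint64_t volume_bytes)
  {
      DirtyBitmap *bm = malloc(sizeof(*bm));
      bm->nb_grains = (volume_bytes + GRAIN_SIZE - 1) / GRAIN_SIZE;
      bm->bits = calloc((bm->nb_grains + BITS_PER_LONG - 1) / BITS_PER_LONG,
                        sizeof(unsigned long));
      return bm;
  }

  /* Mark [offset, offset+len) as not yet replicated, e.g. while the
   * link to the rephub is down. */
  static void dirty_bitmap_set(DirtyBitmap *bm, uint64_t offset,
                               uint64_t len)
  {
      uint64_t g;
      for (g = offset / GRAIN_SIZE;
           g <= (offset + len - 1) / GRAIN_SIZE; g++) {
          bm->bits[g / BITS_PER_LONG] |= 1UL << (g % BITS_PER_LONG);
      }
  }

  /* The rephub later reads back exactly the grains that are set here
   * in order to 'sync' them. */
  static int dirty_bitmap_test(const DirtyBitmap *bm, uint64_t grain)
  {
      return (bm->bits[grain / BITS_PER_LONG] >>
              (grain % BITS_PER_LONG)) & 1;
  }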
Can you please elaborate some more on the exact details? In theory, you
could build a setup where the drbd (or nbd) copy on the destination
side writes to an intermediate image; every such write is trapped
locally on the destination, so you need not immediately propagate it to
the disk image the VM sees.

> 2. drbd is 'below' all the Qemu block layers - if the protected volume
> is qcow2 then drbd doesn't get the raw IOs, right?

That's one of the major caveats in drbd/iscsi/nbd - there is no support
for block level snapshots [1]. I wonder if the scsi protocol has
something like this, so we'd get efficient replication of qcow2/lvm
snapshots whose base is already shared. If we gain such functionality,
we'll benefit from it for a storage VM motion solution too.

Another issue w/ drbd is that a continuous backup solution requires
taking a consistent snapshot - calling a file system freeze and syncing
it w/ the current block IO transfer. Neither DRBD nor the other
protocols do that. Of course DRBD can be enhanced, but that would take
a lot more time.

A third requirement, similar to the above, is to group snapshots of
several VMs so that a consistent _cross-VM application view_ is
created. That demands some control over IO tagging.

To summarize, IMHO drbd (which I used successfully 6 years ago and
love) is not a drop-in replacement for this case. I recommend we either
fit the nbd/iscsi case and improve our VM storage motion along the way,
or, worst case, develop proprietary logic that lives outside of qemu
using an IO tapping interface, along the lines Ori outlined (see the
rough sketch below).

Thanks,
Dor

[1] Check the far too basic approach to snapshots:
http://www.drbd.org/users-guide/s-lvm-snapshots.html
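P.S. To make the IO tapping interface concrete, here is a rough sketch
of the shape such a hook could take. All the names are invented for
illustration; this is not an existing QEMU API:

  /* A tap sitting above the format driver (qcow2 etc.), so it sees
   * guest-visible writes rather than the format's own metadata IO. */
  #include <stdint.h>
  #include <stddef.h>

  /* Called once per completed guest write, in completion order. */
  typedef void (*rep_write_cb)(void *opaque, uint64_t offset,
                               const void *buf, size_t len);

  typedef struct RepTap {
      rep_write_cb cb;   /* forwards the IO to the rephub */
      void *opaque;      /* e.g. a connection to the rephub */
      uint64_t epoch;    /* bumped at group-wide freeze points, so IOs
                          * from several VMs can be tagged and grouped
                          * into one cross-VM consistent view */
  } RepTap;

  /* The block layer would call this from its write-completion path. */
  static void rep_tap_notify(RepTap *tap, uint64_t offset,
                             const void *buf, size_t len)
  {
      if (tap && tap->cb) {
          tap->cb(tap->opaque, offset, buf, len);
      }
  }

Tagging each forwarded IO with the current epoch is what would give the
rephub the control over IO tagging mentioned above for the cross-VM
consistency groups.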