From: Stefan Priebe
To: John Snow, qemu-devel
Cc: Timo Grodzinski, Qemu-block
Subject: Re: [Qemu-devel] drive-backup
Date: Fri, 26 Feb 2016 09:56:38 +0100
Message-ID: <56D01346.6030907@profihost.ag>
In-Reply-To: <56CF5BC0.3040600@redhat.com>
References: <56CB6DC2.8040106@profihost.ag> <56CB86D9.9030004@redhat.com>
 <56CEB21C.6020109@profihost.ag> <56CF5BC0.3040600@redhat.com>

On 02/25/2016 08:53 PM, John Snow wrote:
>
>
> On 02/25/2016 02:49 AM, Stefan Priebe - Profihost AG wrote:
>>
>> On 02/22/2016 11:08 PM, John Snow wrote:
>>>
>>>
>>> On 02/22/2016 03:21 PM, Stefan Priebe wrote:
>>>> Hello,
>>>>
>>>> is there any chance or hack to work with a bigger cluster size for
>>>> the drive-backup job?
>>>>
>>>> See:
>>>> http://git.qemu.org/?p=qemu.git;a=blob;f=block/backup.c;h=16105d40b193be9bb40346027bdf58e62b956a96;hb=98d2c6f2cd80afaa2dc10091f5e35a97c181e4f5
>>>>
>>>> This is very slow with ceph, maybe due to the 64K block size. I
>>>> would like to check whether this is faster with ceph's native block
>>>> size of 4 MB.
>>>>
>>>> Greets,
>>>> Stefan
>>>>
>>>
>>> It's hardcoded to 64K at the moment, but I am checking in a patch to
>>> round the cluster size up to the bigger of (64K,
>>> $target_cluster_size) in order to make sure that incremental backups
>>> in particular never copy a fraction of a cluster. As a side effect,
>>> the same round-up will happen for all modes (sync=top,none,full).
>>>
>>> If QEMU is aware of the target cluster size of 4 MB, this would
>>> immediately jump the copy size up to 4 MB clusters for you.
>>>
>>> See: https://lists.nongnu.org/archive/html/qemu-devel/2016-02/msg02839.html
>>
>> Thanks for your patches and thanks for your great answer. But our
>> problem is not the target but the source ;-) The target has a local
>> cache and doesn't care about the cluster size, but the source does.
>>
>> But it works fine if we change the default cluster size to 4 MB, so
>> it has pointed us in the right direction.
>>
>> Stefan
>>
>
> Ah, sorry, I misunderstood.
>
> It's easy to change anyway! I am in favor of adding a configurable
> parameter, as long as it respects the other constraints I mentioned.

Ah, great, and thanks!

Stefan

>
> --js
>
>>>
>>> Otherwise, after my trivial fix, you should find cluster_size to be
>>> a mutable concept, and perhaps one that you could introduce a
>>> runtime parameter for, if you can convince the necessary parties
>>> that it's needed in the API.
>>>
>>> You'd have to be careful in the case of incremental that all the
>>> various cluster sizes work well together:
>>>
>>> - Granularity of the bitmap (defaults to the cluster size of the
>>>   source, or 64K if unknown; can be configured to be arbitrarily
>>>   larger)
>>> - Cluster size of the source file (for qcow2, defaults to 64K)
>>> - Cluster size of the target file
>>> - Cluster size of the backup routine (currently always 64K)
>>>
>>> I think LCM(source_cluster_size, target_cluster_size,
>>> backup_cluster_size) = MAX of the three will always be a safe
>>> minimum: these cluster sizes are all powers of two, so each of them
>>> divides the largest one, and the LCM collapses to the MAX.
>>>
>>> Bitmap granularity is more flexible. It is most efficient when the
>>> bitmap granularity matches the backup cluster size, but it can cope
>>> with mismatches; if the granularity is smaller or larger than the
>>> backup cluster size, it generally means that more clean data is
>>> going to be transferred across the pipe.
>>>
>>> (Hmm, I wonder if it's worth checking in code to adjust a bitmap's
>>> granularity after it has been created, so people can easily
>>> experiment with performance tweaking here.)
>>>
>>> Let me know if any of my ramble sounds interesting :)
>>> --John
>>>
>
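
For the archives, a sketch of the workaround Stefan describes above
("change the default cluster size to 4 MB"), against the 2016-era
block/backup.c linked at the top of the thread. The define names are
recalled from that source and should be treated as an assumption, not
a quoted patch:

    /* block/backup.c: the backup job's copy unit is hardcoded via
     * these defines (names assumed from the 2016-era source).
     * Raising the shift from 16 to 22 turns the 64 KiB copy unit
     * into ceph/rbd's native 4 MiB object size. */
    #define BACKUP_CLUSTER_BITS 22                          /* was 16 */
    #define BACKUP_CLUSTER_SIZE (1 << BACKUP_CLUSTER_BITS)  /* 4 MiB */
    #define BACKUP_SECTORS_PER_CLUSTER \
        (BACKUP_CLUSTER_SIZE / BDRV_SECTOR_SIZE)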
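
And a minimal, self-contained sketch of why John's LCM = MAX shortcut
is safe: cluster sizes are powers of two, so each of the three sizes
divides the largest one, and the largest is already a common multiple.
The helper is hypothetical, not part of any posted patch:

    #include <stdint.h>

    /* Hypothetical helper: the smallest safe backup copy size.
     * For power-of-two inputs, LCM(a, b, c) == MAX(a, b, c),
     * because every smaller power of two divides the largest. */
    static int64_t backup_min_cluster_size(int64_t source_cluster,
                                           int64_t target_cluster,
                                           int64_t backup_cluster)
    {
        int64_t max = source_cluster;

        if (target_cluster > max) {
            max = target_cluster;
        }
        if (backup_cluster > max) {
            max = backup_cluster;
        }
        return max;
    }

    /* backup_min_cluster_size(64 << 10, 64 << 10, 4 << 20) == 4 MiB:
     * with a 4 MiB rbd image in the mix, the copy loop rounds up to
     * 4 MiB and never copies a fraction of an rbd object. */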