Message-ID: <52F2FD2B.9010504@ozlabs.ru>
Date: Thu, 06 Feb 2014 14:10:35 +1100
From: Alexey Kardashevskiy
To: Paolo Bonzini, "Dr. David Alan Gilbert"
Cc: "qemu-devel@nongnu.org", Alex Graf
Subject: Re: [Qemu-devel] migration: broken ram_save_pending
In-Reply-To: <52F26AC0.5040104@redhat.com>

On 02/06/2014 03:45 AM, Paolo Bonzini wrote:
> On 05/02/2014 17:42, Dr. David Alan Gilbert wrote:
>> Because:
>> * the code is still running and keeps redirtying a small handful of
>>   pages
>> * but because we've underestimated our available bandwidth we never
>>   stop it and just throw those pages across immediately
>
> Ok, I thought Alexey was saying we are not redirtying that handful of
> pages.

Every iteration we read the dirty map from KVM and send all dirty pages
across the stream.

> And in turn, this is because the max downtime we have is too low
> (especially with the default bandwidth of 32 MB/sec; that is also
> pretty low).

My understanding now is that in order to finish migration, QEMU waits for
the first 100ms window (BUFFER_DELAY) of continuously low traffic, but
because those pages get dirtied again every time we read the dirty map,
we transfer more in those 100ms than we are actually allowed (more than
32MB/s, i.e. 3.2MB per 100ms). So we transfer, transfer, transfer, detect
that we have transferred too much, delay, and only if max_size (calculated
from the actual transfer rate and the downtime) for the next iteration
happens, by luck, to be smaller than those 96 pages (uncompressed) do we
finish.

Increasing the speed and/or the downtime will help, but we would not need
that at all if migration did not assume that all 96 pages have to be sent
in full and instead had some smart way to detect that many of them are
empty (and therefore effectively compressed). Literally: move
is_zero_range() from ram_save_block() to migration_bitmap_sync() and store
that bit in some new pages_zero_map, for example. But does that make a lot
of sense?

-- 
Alexey
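
A minimal, self-contained sketch of the idea Alexey describes above (this
is not actual QEMU code; the names pages_zero_map, bitmap_sync_with_zero_scan
and estimate_pending_bytes, and the 8-byte zero-page wire cost, are
illustrative assumptions): detect zero pages once, when the dirty bitmap is
synced, and let the pending estimate count them at their tiny wire cost
instead of a full TARGET_PAGE_SIZE each.

/*
 * Sketch only: stand-alone illustration of keeping a "zero page" bitmap
 * alongside the dirty bitmap so the pending-bytes estimate is not
 * inflated by pages that will be sent (near) for free.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TARGET_PAGE_SIZE   4096
#define ZERO_PAGE_OVERHEAD 8   /* assumed wire cost of a zero page */

/* Stand-in for is_zero_range(): true if the page is all zero bytes. */
static bool page_is_zero(const uint8_t *page)
{
    for (size_t i = 0; i < TARGET_PAGE_SIZE; i++) {
        if (page[i]) {
            return false;
        }
    }
    return true;
}

/*
 * Hypothetical counterpart of migration_bitmap_sync(): while walking the
 * freshly synced dirty bitmap, also fill pages_zero_map.
 */
static void bitmap_sync_with_zero_scan(const uint8_t *ram,
                                       const bool *dirty_map,
                                       bool *pages_zero_map,
                                       size_t nb_pages)
{
    for (size_t i = 0; i < nb_pages; i++) {
        pages_zero_map[i] = dirty_map[i] &&
                            page_is_zero(ram + i * TARGET_PAGE_SIZE);
    }
}

/*
 * Hypothetical replacement for the ram_save_pending() estimate: zero pages
 * count at their wire cost rather than a full page.
 */
static uint64_t estimate_pending_bytes(const bool *dirty_map,
                                       const bool *pages_zero_map,
                                       size_t nb_pages)
{
    uint64_t pending = 0;
    for (size_t i = 0; i < nb_pages; i++) {
        if (!dirty_map[i]) {
            continue;
        }
        pending += pages_zero_map[i] ? ZERO_PAGE_OVERHEAD : TARGET_PAGE_SIZE;
    }
    return pending;
}

int main(void)
{
    enum { NB_PAGES = 96 };               /* the 96 pages from the discussion */
    static uint8_t ram[NB_PAGES * TARGET_PAGE_SIZE];
    bool dirty_map[NB_PAGES];
    bool pages_zero_map[NB_PAGES];

    /* All 96 pages are dirty, but only a couple actually contain data. */
    memset(dirty_map, true, sizeof(dirty_map));
    memset(ram + 3 * TARGET_PAGE_SIZE, 0xab, TARGET_PAGE_SIZE);
    memset(ram + 70 * TARGET_PAGE_SIZE, 0xcd, TARGET_PAGE_SIZE);

    bitmap_sync_with_zero_scan(ram, dirty_map, pages_zero_map, NB_PAGES);

    printf("naive pending:      %d bytes\n", NB_PAGES * TARGET_PAGE_SIZE);
    printf("zero-aware pending: %llu bytes\n",
           (unsigned long long)estimate_pending_bytes(dirty_map,
                                                      pages_zero_map,
                                                      NB_PAGES));
    return 0;
}

With a zero-aware estimate, the pending figure for a mostly-empty working
set drops from 96 * 4096 bytes to a few kilobytes, so it can fall below
max_size without relying on luck or on raising the bandwidth/downtime
limits.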