From: Paolo Bonzini
Date: Wed, 7 Oct 2015 11:44:01 +0200
Subject: Re: [Qemu-devel] [PATCH 6/8] migration: implementation of hook_ram_sync
Message-ID: <5614E961.7000900@redhat.com>
In-Reply-To: <1444198846-5383-7-git-send-email-den@openvz.org>
References: <5614531B.5080107@redhat.com> <1444198846-5383-1-git-send-email-den@openvz.org> <1444198846-5383-7-git-send-email-den@openvz.org>
To: "Denis V. Lunev"
Cc: Igor Redko, jsnow@redhat.com, qemu-devel@nongnu.org, annam@virtuozzo.com

On 07/10/2015 08:20, Denis V. Lunev wrote:
>
> All calls of this hook will be from ram_save_pending().
>
> At the first call of this hook we need to save the initial size of
> VM memory and put the migration thread to sleep for a decent period
> (the downtime, for example). During this period the guest would
> dirty memory.
>
> The second call is also the last. We estimate the dirty bytes rate
> assuming that the time between the two synchronizations of the
> dirty bitmap differs negligibly from the downtime.
>
> An alternative to this approach is receiving information about the
> size of the data “transmitted” through the transport.

This would use before_ram_iterate/after_ram_iterate, right?

> However, this way creates large time and memory overheads:
> 1/ Transmitted guest memory pages are copied to QEMUFile's buffer
>    (~8 sec per 4GB VM)

Note that they are not copied if you implement writev_buffer.

> 2/ Dirty memory pages are processed one by one (~60 msec per 4GB VM)

That, however, improves the accuracy, doesn't it?
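Just to check that I am reading the scheme right, something roughly
like this (all names below are made up for illustration, not the
actual code of the patch):

/*
 * Sketch of the two-call hook described above.  Called from
 * ram_save_pending() right after a dirty bitmap sync; 'dirty_bytes'
 * is what the sync just reported, 'downtime_us' is the configured
 * maximum downtime.
 */
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

static bool first_sync = true;
static uint64_t initial_ram_bytes;

static uint64_t estimate_dirty_rate(uint64_t dirty_bytes,
                                    uint64_t downtime_us)
{
    if (first_sync) {
        first_sync = false;
        initial_ram_bytes = dirty_bytes;  /* "initial size of VM memory" */
        usleep(downtime_us);              /* let the guest dirty pages */
        return 0;                         /* no estimate yet */
    }
    /*
     * Second (and last) call: the interval between the two bitmap
     * syncs is assumed to differ negligibly from the downtime, so
     * the bytes dirtied in between, divided by that window, give
     * the rate in bytes per second.
     */
    return dirty_bytes * 1000000 / downtime_us;
}

The accuracy of the estimate then hinges entirely on how close the
inter-sync interval really is to the downtime.

Paolo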