From mboxrd@z Thu Jan 1 00:00:00 1970
References: <5614531B.5080107@redhat.com> <1444198846-5383-1-git-send-email-den@openvz.org>
 <1444198846-5383-7-git-send-email-den@openvz.org> <5614E961.7000900@redhat.com>
From: "Denis V. Lunev"
Message-ID: <56169F2E.3000204@openvz.org>
Date: Thu, 8 Oct 2015 19:51:58 +0300
MIME-Version: 1.0
In-Reply-To: <5614E961.7000900@redhat.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Transfer-Encoding: 8bit
Subject: Re: [Qemu-devel] [PATCH 6/8] migration: implementation of hook_ram_sync
To: Paolo Bonzini
Cc: Igor Redko, jsnow@redhat.com, qemu-devel@nongnu.org,
 "Dr. David Alan Gilbert", annam@virtuozzo.com

On 10/07/2015 12:44 PM, Paolo Bonzini wrote:
>
> On 07/10/2015 08:20, Denis V. Lunev wrote:
>> All calls of this hook will be from ram_save_pending().
>>
>> At the first call of this hook we need to save the initial
>> size of VM memory and put the migration thread to sleep for
>> a decent period (the downtime, for example). During this period
>> the guest will dirty memory.
>>
>> At the second (and last) call we make our estimate of the dirty
>> byte rate, assuming that the time between the two synchronizations
>> of the dirty bitmap differs from the downtime negligibly.
>>
>> An alternative to this approach is receiving information about the
>> size of data “transmitted” through the transport.
> This would use before_ram_iterate/after_ram_iterate, right?
>
>> However, this
>> way creates large time and memory overheads:
>> 1/ Transmitted guest memory pages are copied to QEMUFile's buffer
>> (~8 sec per 4 GB VM)
> Note that they are not if you implement writev_buffer.

Yep, but we would have to set up an iovec entry for each page. Please
see below.

>> 2/ Dirty memory pages are processed one by one (~60 msec per 4 GB VM)
> That, however, improves the accuracy, doesn't it?
>
> Paolo

For the estimate we only need the number of dirtied pages per second,
as a count, so I do not think this would make a difference.

That said, the approach proposed by David in the letter below is much
better in terms of overhead; the figure given as (2) in the original
description, i.e. ~60 msec per 4 GB VM, was obtained that way. Sorry
that this was not clearly stated in the description.

Den
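
P.S. To make the two-call estimation scheme concrete, here is a minimal
sketch in plain C of what I mean. This is not the code from the patch;
the names (EstimateState, dirty_bytes_now, hook_ram_sync_estimate,
downtime_ms) are hypothetical placeholders, and in real QEMU the dirty
byte count would come from synchronizing the RAM dirty bitmap rather
than the stub used here.

/*
 * Hypothetical sketch of the two-call dirty-rate estimation discussed
 * above; not the patch code.  dirty_bytes_now() is a stub standing in
 * for "sync the dirty bitmap and count dirty bytes".
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

typedef struct {
    int calls;                 /* how many times the hook has fired */
    uint64_t first_bytes;      /* dirty bytes seen at the first call */
    struct timespec first_ts;  /* timestamp of the first call */
} EstimateState;

static uint64_t dirty_bytes_now(void)
{
    /* In QEMU this would come from the RAM dirty bitmap. */
    return 0;
}

static double elapsed_seconds(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

/*
 * Meant to be driven from ram_save_pending().  First call: remember a
 * baseline and sleep for roughly the expected downtime so the guest
 * can dirty memory.  Second call: derive bytes dirtied per second.
 */
static void hook_ram_sync_estimate(EstimateState *st, unsigned downtime_ms)
{
    if (st->calls++ == 0) {
        st->first_bytes = dirty_bytes_now();
        clock_gettime(CLOCK_MONOTONIC, &st->first_ts);
        usleep(downtime_ms * 1000);      /* let the guest dirty pages */
        return;
    }

    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    uint64_t delta = dirty_bytes_now() - st->first_bytes;
    double secs = elapsed_seconds(st->first_ts, now);
    printf("estimated dirty rate: %.0f bytes/s\n",
           secs > 0 ? (double)delta / secs : 0.0);
}

int main(void)
{
    EstimateState st = { 0 };
    hook_ram_sync_estimate(&st, 300);   /* first call: baseline + sleep */
    hook_ram_sync_estimate(&st, 300);   /* second call: print the rate */
    return 0;
}

The point of the sketch is only the bookkeeping: two calls, one
baseline plus sleep, one delta divided by elapsed time, which is why
the overhead stays independent of how the pages themselves are sent.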