From: Avi Kivity
Date: Thu, 02 Dec 2010 10:37:17 +0200
Subject: [Qemu-devel] Re: [PATCH 09/10] Exit loop if we have been there too long
To: Takuya Yoshikawa
Cc: Paolo Bonzini, kvm-devel, qemu-devel@nongnu.org, Juan Quintela
Message-ID: <4CF75ABD.2050108@redhat.com>
In-Reply-To: <20101202103136.7cf30ce8.yoshikawa.takuya@oss.ntt.co.jp>

On 12/02/2010 03:31 AM, Takuya Yoshikawa wrote:
> Thanks for the answers, Avi, Juan.
>
> Some FYI (not about the bottleneck):
>
> On Wed, 01 Dec 2010 14:35:57 +0200
> Avi Kivity wrote:
> >
> > > - how many dirty pages do we have to care about?
> > >
> > > With default values and assuming 1 Gigabit ethernet for ourselves, ~9.5MB of
> > > dirty pages to have only 30ms of downtime.
> >
> > 1Gb/s * 30ms = 100 MB/s * 30ms = 3 MB.
> >
> > 3MB / 4KB/page = 750 pages.
>
> Then the KVM-side processing is close to the theoretical goal!
>
> In my framebuffer test, I tested the
>
> nr_dirty_pages/npages = 576/4096
>
> case at a rate of 20 updates/s (1 update/50ms).
>
> Using the rmap optimization, write protection took only 46,718 TSC cycles.

Yes, using rmap to drive write protection with sparse dirty bitmaps really helps.

> Bitmap copy was not a problem, of course.
>
> The display kept working at this rate!
>
> My guess is that, within 1,000 dirty pages, kvm_vm_ioctl_get_dirty_log()
> can be processed within 200us or so, even for a large RAM slot
> - the rmap optimization depends mainly on nr_dirty_pages, not on npages.
>
> Avi, can you guess the properties of O(1) write protection?
> I want to test the rmap optimization taking these issues into account.

I think we should use O(1) write protection only if there is a large number of dirty pages. With a small number, using rmap guided by the previous dirty bitmap is faster.

So, under normal operation, where only the framebuffer is logged, we'd use rmap write protection; when enabling logging for live migration we'd use O(1) write protection, and after a few iterations, when the number of dirty pages drops, we'd switch back to rmap write protection.

> Of course, Kemari has to continue synchronization, and may see
> more dirty pages. That will be a future task!

There's yet another option: using dirty bits instead of write protection. Or perhaps using write protection in the upper page tables and dirty bits at the lowest level.
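
As a back-of-the-envelope check, here is the downtime budget arithmetic from above as a tiny standalone C program. The constants are just the example numbers from this thread (1Gb/s link rounded to 100MB/s, 30ms downtime target, 4KB pages); nothing is read from QEMU or KVM:

#include <stdio.h>

/*
 * Downtime budget sketch: how much data, and roughly how many pages,
 * can be sent during the final stop-and-copy phase.
 */
int main(void)
{
    const double link_bytes_per_sec = 100e6;  /* ~1Gb/s, rounded down to 100MB/s */
    const double max_downtime_sec   = 0.030;  /* 30 ms target downtime */
    const double page_size_bytes    = 4096.0; /* 4 KB pages */

    double budget_bytes = link_bytes_per_sec * max_downtime_sec;
    double budget_pages = budget_bytes / page_size_bytes;

    /* Prints ~3.0 MB and ~732 pages; the "750 pages" above rounds 4KB to 4000 bytes. */
    printf("final-phase budget: %.1f MB (~%.0f pages)\n",
           budget_bytes / 1e6, budget_pages);
    return 0;
}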
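
And a toy sketch of the switching policy described above. This is not KVM code; the helper names and the 1,000-page threshold are invented here purely for illustration, and the real cut-over point would have to be measured:

#include <stdio.h>

/*
 * Pick a write-protection strategy from the previous iteration's dirty count:
 * sparse bitmap -> walk the rmap of just the dirty pages,
 * dense bitmap (e.g. the first live-migration passes) -> bulk O(1) write protect.
 */
enum wp_method { WP_RMAP, WP_O1 };

#define WP_O1_THRESHOLD 1000UL  /* assumed cut-over, in dirty pages */

static enum wp_method pick_wp_method(unsigned long nr_dirty_pages)
{
    return nr_dirty_pages < WP_O1_THRESHOLD ? WP_RMAP : WP_O1;
}

int main(void)
{
    /* Framebuffer-only logging, like the 576/4096 test above. */
    printf("576 dirty pages    -> %s\n",
           pick_wp_method(576) == WP_RMAP ? "rmap write protect" : "O(1) write protect");
    /* Start of live migration: almost everything is dirty. */
    printf("200000 dirty pages -> %s\n",
           pick_wp_method(200000) == WP_RMAP ? "rmap write protect" : "O(1) write protect");
    return 0;
}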
--
error compiling committee.c: too many arguments to function