Subject: Re: [Qemu-devel] [PATCH 00/15] Make migration work with hotplug
From: Alex Williamson
To: Yoshiaki Tamura
Cc: jan.kiszka@siemens.com, qemu-devel@nongnu.org, armbru@redhat.com, paul@codesourcery.com, cam@cs.ualberta.ca, kraxel@redhat.com
Date: Thu, 24 Jun 2010 09:23:03 -0600
Message-ID: <1277392983.4669.2257.camel@x201>
In-Reply-To: <1277391870.4669.2251.camel@x201>
References: <20100624044046.16168.32804.stgit@localhost.localdomain> <1277391870.4669.2251.camel@x201>

On Thu, 2010-06-24 at 09:04 -0600, Alex Williamson wrote:
> On Thu, 2010-06-24 at 15:02 +0900, Yoshiaki Tamura wrote:
> >
> > Hi Alex,
> >
> > Is there additional overhead to saving RAM introduced by this series?
> > If so, how much?
>
> Yes, there is overhead, but it's typically quite small. If I migrate a
> 1G VM immediately after I boot to a login prompt (lots of zero pages), I
> get an overhead of 0.000076%. That's only 226 extra bytes over the
> 297164995 bytes otherwise transferred. If I build a kernel on the guest
> and migrate during the compilation, the overhead is 0.000019%. The
> overhead is tiny largely due to patch 12/15, which avoids sending the
> block name if we're working within the same block as the one sent
> previously. If I disable this optimization, the overhead goes up to
> 0.93% after boot and 0.26% during a kernel compile.
>
> Note that an x86 VM does a separate qemu_ram_alloc for memory above 4G,
> which means in bigger VMs we may end up needing to resend the ramblock
> name once in a while as we bounce between above and below 4G. Worst
> case for this could match the 0.26% above, but in my testing during a
> kernel compile, this seems to increase the overhead to 0.000026% on a 6G
> VM. I don't see any reason why we couldn't allocate all the RAM in a
> single qemu_ram_alloc call, so I'll add another patch to make that
> change (which will also shorten the name to "pc.ram" for even less
> overhead ;). Thanks,

FWIW, with this change, my migration during a kernel compile on the 6G VM
seems to be running at just 0.000019%-0.000020%, so that eliminates the
penalty for bigger-memory VMs. Thanks,

Alex
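
The optimization Alex describes from patch 12/15 boils down to a "continue"
flag in the per-page header: the RAMBlock name is sent only when the save
loop crosses into a different block, otherwise the destination reuses the
name it saw last. Below is a minimal C sketch of that idea, assuming
invented names (RAMBlock fields, FLAG_CONTINUE, the put_* helpers); it is
not the actual QEMU code, just an illustration of the framing.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FLAG_CONTINUE 0x1ULL  /* "same block as the previous page" marker */

typedef struct {
    const char *idstr;   /* block name, e.g. "pc.ram" */
    uint64_t length;
} RAMBlock;

/* Stand-ins for the migration stream primitives (host-endian for brevity). */
static void put_u64(FILE *f, uint64_t v)  { fwrite(&v, sizeof(v), 1, f); }
static void put_byte(FILE *f, uint8_t v)  { fwrite(&v, sizeof(v), 1, f); }
static void put_buf(FILE *f, const void *b, size_t n) { fwrite(b, 1, n, f); }

/* Write the per-page header; returns the header bytes added, so the caller
 * can account for overhead the way the percentages above do. */
static size_t save_page_header(FILE *f, const RAMBlock *block, uint64_t offset,
                               const RAMBlock **last_block)
{
    if (block == *last_block) {
        /* Same block as the previous page: offset word only, with the
         * continue bit set so the destination reuses the last name. */
        put_u64(f, offset | FLAG_CONTINUE);
        return 8;
    }

    /* Block changed: send the offset plus a length-prefixed block name. */
    uint8_t len = (uint8_t)strlen(block->idstr);
    put_u64(f, offset);
    put_byte(f, len);
    put_buf(f, block->idstr, len);
    *last_block = block;
    return 8 + 1 + (size_t)len;
}

int main(void)
{
    const RAMBlock ram = { "pc.ram", 1024ULL * 1024 * 1024 };
    const RAMBlock *last = NULL;
    size_t hdr = 0;
    FILE *f = fopen("/dev/null", "wb");
    if (!f) {
        return 1;
    }

    /* Only the first page pays for the name; later pages in the same block
     * cost just the 8-byte offset word each. */
    for (uint64_t off = 0; off < 4 * 4096; off += 4096) {
        hdr += save_page_header(f, &ram, off, &last);
    }
    printf("header bytes for 4 pages: %zu\n", hdr);  /* 15 + 3*8 = 39 */
    fclose(f);
    return 0;
}

As a rough sanity check on the figures in the mail: sending a length byte
plus a block name of around ten characters with every 4 KiB page costs on
the order of a few tenths of a percent of the data moved, which is in the
same ballpark as the 0.26% quoted for a kernel-compile migration with the
optimization disabled; with the flag, the name only goes out on block
switches, which is how the measured overhead drops to a few hundred bytes
per migration.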