From: Yoshiaki Tamura
Date: Fri, 13 May 2011 11:55:01 +0900
Subject: Re: [Qemu-devel] [PATCH] Add warmup phase for live migration of large memory apps
To: Isaku Yamahata
Cc: Stefan Hajnoczi, ohmura.kei@lab.ntt.co.jp, "Shribman, Aidan", qemu-devel@nongnu.org, Juan Quintela
In-Reply-To: <20110512105411.GH14575@valinux.co.jp>

2011/5/12 Isaku Yamahata :
> On Thu, May 12, 2011 at 12:39:22PM +0200, Juan Quintela wrote:
>> "Shribman, Aidan" wrote:
>> >> On Wed, May 11, 2011 at 8:58 AM, Shribman, Aidan wrote:
>> >> > From: Aidan Shribman
>> >> >
>> >> > [PATCH] Add warmup phase for live migration of large memory apps
>> >> >
>> >> > By invoking "migrate -w" we initiate a background live migration,
>> >> > transferring dirty pages continuously until the invocation of
>> >> > "migrate_end", which attempts to complete the live migration
>> >> > operation.
>> >>
>> >> What is the purpose of this patch?  How and when do I use it?
>> >>
>> >
>> > The warmup patch adds a non-converging background update of guest
>> > memory during live migration, so that when completion is requested
>> > (via the "migrate_end" command) we get a much faster response.  This
>> > is especially needed when running a payload of large enterprise
>> > applications with high memory demands.
>>
>> We should integrate this with Kemari (Kemari is doing something like
>> this, just that it has more requirements).  Isaku, do you have any
>> comments?
>
> Yoshi and Kei are familiar with Kemari.  Not me.  Cc'ed to them.

I think it's OK to have this feature by checking max_downtime == 0.
But I'm wondering: if users type commands like

  migrate_set_downtime 0
  migrate          # w/o -d

it'll lock the monitor forever in most cases.  So forcing users to set
-d, or doing it automatically inside when max_downtime == 0, seems
better to me.  Sorry if I'm missing the point...

Yoshi

>
>> >> BTW, what loads have you tested for this?
>>
>> If I set up an image with 1GB RAM and a DVD iso image, and do in the
>> guest:
>>
>>   while true; do find /media/cdrom -type f | xargs md5sum; done
>>
>> migration never converges with the current code (if you use more than
>> 1GB of memory, the whole DVD ends up cached inside the guest).
>>
>> So I see this as only useful for guests that are almost idle, and in
>> that case migration speed is not the biggest of your problems, no?
>>
>> Later, Juan.
>
> --
> yamahata
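
The check suggested above could look roughly like the sketch below, dropped
into the monitor's migrate handler.  It assumes the handler and helper names
of the qemu tree of this period (do_migrate(), migrate_max_downtime(),
qdict_get_try_bool(), monitor_printf()); the includes and exact wiring are
approximations for illustration, not the actual warmup patch.

    /* Sketch only: the proposed max_downtime == 0 check, not the real patch.
     * Header names and signatures are assumed from the qemu tree of this era.
     */
    #include "qemu-common.h"
    #include "migration.h"
    #include "monitor.h"
    #include "qdict.h"

    int do_migrate(Monitor *mon, const QDict *qdict, QObject **ret_data)
    {
        int detach = qdict_get_try_bool(qdict, "detach", 0);

        /* With max_downtime == 0 the migration can never converge on its
         * own, so a foreground "migrate" would hold the monitor forever.
         * Either reject the command and ask for -d, or silently treat it
         * as a detached (background) migration.
         */
        if (migrate_max_downtime() == 0 && !detach) {
            monitor_printf(mon, "downtime is 0: use 'migrate -d' and "
                           "complete the migration later\n");
            return -1;
        }

        /* ... existing code: parse the URI and start the outgoing migration ... */
        return 0;
    }

With something like this in place, "migrate_set_downtime 0" followed by a
plain "migrate" fails fast instead of blocking the monitor, and the user is
pointed at the background form ("migrate -d", or "migrate -w" plus a later
"migrate_end" in the terms of the warmup patch).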