From: Paolo Bonzini
Date: Fri, 02 Nov 2012 14:04:07 +0100
References: <20121102031011.GM27695@truffula.fritz.box>
In-Reply-To: <20121102031011.GM27695@truffula.fritz.box>
Subject: Re: [Qemu-devel] Testing migration under stress
To: qemu-devel@nongnu.org

On 02/11/2012 04:10, David Gibson wrote:
> Asking for some advice on the list.
>
> I have prototype savevm and migration support ready for the pseries
> machine. They seem to work under simple circumstances (an idle guest).
> To test them more extensively I've been attempting to perform live
> migrations (just over tcp to localhost) while the guest is active with
> something. In particular I've tried while using octave to do a matrix
> multiply (so exercising the FP unit), and my colleague Alexey has tried
> during some video encoding.
>
> However, in each of these cases, we've found that the migration only
> completes, and the source instance only stops, after the intensive
> workload has (just) finished. What I surmise is happening is that the
> workload is touching memory pages fast enough that the ram migration
> code never gets below the threshold needed to complete the migration
> until the guest is idle again.
>
> Does anyone have ideas for testing this better: workloads that are
> less likely to trigger this behaviour, or settings to tweak in the
> migration itself to make it more likely to complete while the workload
> is still active?

Have you set the migration speed (migrate_set_speed) to something
higher than the default 32MB/sec?

Paolo
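For readers landing on this thread: the knobs Paolo alludes to live in the QEMU monitor. A rough sketch (command names as in QEMU of this era; the target address and values are illustrative, not from the thread):

```
(qemu) migrate_set_speed 1G          # raise the bandwidth cap from the 32MB/s default
(qemu) migrate_set_downtime 2        # tolerate up to ~2s of downtime at completion
(qemu) migrate -d tcp:localhost:4444 # start the migration in the background
(qemu) info migrate                  # watch whether remaining ram converges
```

Raising the speed cap and/or the allowed downtime enlarges the dirty-page threshold at which the final stop-and-copy phase can trigger, which is exactly the convergence problem David describes.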
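On the "workloads less likely to trigger this" question: octave and video encoding dirty memory as fast as they can, so a workload that dirties pages at a *bounded* rate is easier to migrate under. A minimal in-guest sketch (page size, working-set size, and rate are assumptions for illustration, not values from the thread):

```python
import time

PAGE = 4096  # assumed guest page size

def dirty_pages(buf: bytearray, pages_per_sec: int, seconds: float) -> int:
    """Touch one byte per page at a bounded rate, cycling through buf.

    Keeping pages_per_sec below the migration bandwidth (in pages)
    lets the ram migration's remaining-dirty count converge.
    Returns the number of page touches performed.
    """
    npages = len(buf) // PAGE
    dirtied = 0
    i = 0
    deadline = time.time() + seconds
    while time.time() < deadline:
        start = time.time()
        for _ in range(pages_per_sec):
            buf[i * PAGE] ^= 1            # one write is enough to dirty the page
            i = (i + 1) % npages
            dirtied += 1
        elapsed = time.time() - start
        remaining = deadline - time.time()
        if elapsed < 1.0 and remaining > 0:
            time.sleep(min(1.0 - elapsed, remaining))
    return dirtied

if __name__ == "__main__":
    buf = bytearray(8 * 1024 * 1024)      # 8 MiB working set (illustrative)
    print(dirty_pages(buf, pages_per_sec=500, seconds=2.0))
```

Running this in the guest while migrating, and sweeping pages_per_sec upward, would also give a crude measurement of the dirty rate at which migration stops converging.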