From: Umesh Deshpande
Date: Thu, 25 Aug 2011 02:25:26 -0400
Subject: Re: [Qemu-devel] [RFC PATCH v5 0/4] Separate thread for VM migration
To: Anthony Liguori
Cc: pbonzini@redhat.com, mtosatti@redhat.com, qemu-devel@nongnu.org, kvm@vger.kernel.org, quintela@redhat.com
Message-ID: <4E55EAD6.4070803@redhat.com>
In-Reply-To: <4E55328A.8000203@codemonkey.ws>

Jitterd Test
I ran jitterd in a migrating VM with 8 GB of RAM, with and without the patch series:
./jitterd -f -m 1 -p 100 -r 40
That is, report any jitter greater than 400 ms over a 40-second interval.
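
For reference, the core of such a jitter measurement is just timing how much
longer a short sleep takes than requested; a minimal sketch (my illustration,
not jitterd's actual source):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long quantum_ns   = 1000000;    /* ask for 1 ms sleeps     */
        const long threshold_ns = 400000000;  /* report jitter > 400 ms  */
        long total_ms = 0, peak_ms = 0;
        struct timespec req = { 0, quantum_ns }, t0, t1;

        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            /* How much longer did the sleep take than requested? */
            long over_ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                         + (t1.tv_nsec - t0.tv_nsec) - quantum_ns;
            if (over_ns > threshold_ns) {
                long over_ms = over_ns / 1000000;
                total_ms += over_ms;
                if (over_ms > peak_ms)
                    peak_ms = over_ms;
                printf("jitter %ld ms (total %ld, peak %ld)\n",
                       over_ms, total_ms, peak_ms);
            }
        }
    }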

Jitter in ms, with the migration thread:
Run    Total (Peak)
1        No chatter
2        No chatter
3        No chatter
4        409 (360)

Jitter in ms, without the migration thread:
Run    Total (Peak)
1        4663 (2413)
2        643 (423)
3        1973 (1817)
4        3908 (3772)

Flood ping test: ping to the migrating VM from a third machine (data over 3 runs).
Latency (ms), ping to a non-migrating VM : Avg 0.156, Max 0.96
Latency (ms), with migration thread      : Avg 0.215, Max 280
Latency (ms), without migration thread   : Avg 6.47,  Max 4562
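
The flood ping itself would have been issued from the third machine with
ping's -f flag (which requires root), roughly as below; the exact invocation
is my assumption, and the target address is a placeholder:

    ping -f <guest-ip>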

- Umesh


On 08/24/2011 01:19 PM, Anthony Liguori wrote:
> On 08/23/2011 10:12 PM, Umesh Deshpande wrote:
>> The following patch series deals with VCPU and iothread starvation
>> during the migration of a guest. Currently the iothread is
>> responsible for performing the guest migration. It holds qemu_mutex
>> for the duration of the migration, which keeps VCPUs from entering
>> qemu mode and delays their return to the guest. The guest migration,
>> executed as an iohandler, also delays the execution of other
>> iohandlers. In the following patch series,
>
> Can you please include detailed performance data with and without
> this series?
>
> Perhaps runs of migration with jitterd running in the guest.
>
> Regards,
>
> Anthony Liguori
>
>>
>> The migration has been moved to a separate thread to reduce the
>> qemu_mutex contention and iohandler starvation.
>>
>> Umesh Deshpande (4):
>>   MRU ram block list
>>   migration thread mutex
>>   separate migration bitmap
>>   separate migration thread
>>
>>  arch_init.c         |   38 ++++++++++++----
>>  buffered_file.c     |   75 +++++++++++++++++--------------
>>  cpu-all.h           |   42 +++++++++++++++++
>>  exec.c              |   97 ++++++++++++++++++++++++++++++++++++++--
>>  migration.c         |  122 +++++++++++++++++++++++++++++---------------------
>>  migration.h         |    9 ++++
>>  qemu-common.h       |    2 +
>>  qemu-thread-posix.c |   10 ++++
>>  qemu-thread.h       |    1 +
>>  savevm.c            |    5 --
>>  10 files changed, 297 insertions(+), 104 deletions(-)
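
To make the quoted design concrete, here is a minimal sketch of the
separate-migration-thread idea; the names, helpers, and locking granularity
below are my illustration, not the actual patch code:

    /* Sketch only: hypothetical names, not the actual QEMU patches. */
    #include <pthread.h>
    #include <stdbool.h>

    typedef struct MigrationState MigrationState;   /* opaque here */

    extern pthread_mutex_t qemu_mutex;  /* the lock VCPU threads contend on */
    bool ram_save_iterate(MigrationState *s);       /* hypothetical: send one
                                                       chunk of dirty RAM    */
    void flush_migration_buffer(MigrationState *s); /* hypothetical: write the
                                                       buffered bytes out    */

    static void *migration_thread_fn(void *opaque)
    {
        MigrationState *s = opaque;
        bool done = false;

        while (!done) {
            /* Hold the global lock only while scanning and copying
             * dirty pages, so VCPU threads can acquire it between
             * iterations instead of stalling for the whole transfer. */
            pthread_mutex_lock(&qemu_mutex);
            done = ram_save_iterate(s);
            pthread_mutex_unlock(&qemu_mutex);

            /* The actual I/O happens with the lock dropped. */
            flush_migration_buffer(s);
        }
        return NULL;
    }

Before the series, the equivalent loop ran as an iohandler on the iothread
with qemu_mutex held throughout, which is what produced the multi-second
stalls in the numbers above.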


