Subject: [Qemu-devel] Fwd: Re: [RFC 0/7] Migration stats
From: Chegu Vinod
Date: Mon, 13 Aug 2012 08:29:41 -0700
To: qemu-devel@nongnu.org
Forwarding to the alias.
Thanks,
Vinod

-------- Original Message --------
Subject: Re: [RFC 0/7] Migration stats
Date: Mon, 13 Aug 2012 15:20:10 +0200
From: Juan Quintela <quintela@redhat.com>
Reply-To: <quintela@redhat.com>
To: Chegu Vinod <chegu_vinod@hp.com>
CC:

[ snip ]

>> - Prints the real downtime that we have had
>>
>>    Really, it prints the total downtime of the completion phase, but
>>    the real downtime also includes the last ram_iterate phase.
>>    Working on fixing that one.

Good one.
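The fix is mostly about where the timestamps are taken: start the clock
when the guest is paused and stop it only after the final ram_iterate
pass has gone out.  A rough sketch of just the timing arithmetic
(untested; stop_guest() and send_final_ram_pass() are stubs standing in
for the real migration code, not actual qemu functions):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* Monotonic clock in milliseconds. */
    static int64_t clock_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000LL;
    }

    /* Stubs standing in for the real migration steps. */
    static void stop_guest(void) { }
    static void send_final_ram_pass(void) { }

    int main(void)
    {
        int64_t t_stop = clock_ms();    /* guest is paused from here */
        stop_guest();
        send_final_ram_pass();          /* last ram_iterate pass must
                                           land inside the window */
        int64_t downtime_ms = clock_ms() - t_stop;
        printf("downtime: %lld ms\n", (long long)downtime_ms);
        return 0;
    }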


[...]

>> What do I want to know:
>>
>> - is there any stat that you want?  Once here, adding a new one should
>>    be easy.
>>

>
> A few suggestions :
>
> a) Total amount of time spent syncing up the dirty bitmap logs over
> the total duration of the migration.

I can add that one, it is not difficult.  Notice that in the future I
expect to do the syncs in smaller chunks (but that is pie in the sky).
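An accumulator around the sync call should be enough; a sketch of the
idea (untested, and bitmap_sync() here is just a stand-in for whatever
the real sync entry point is):

    #include <stdint.h>
    #include <time.h>

    /* Total time spent syncing, exported as the new stat. */
    static int64_t bitmap_sync_time_ms;

    static int64_t clock_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000LL;
    }

    static void bitmap_sync(void) { /* stand-in for the real sync */ }

    static void timed_bitmap_sync(void)
    {
        int64_t start = clock_ms();
        bitmap_sync();
        bitmap_sync_time_ms += clock_ms() - start;
    }

Doing the syncs in smaller chunks would not change this; each chunk
just adds its own slice to the same counter.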

> b) Actual [average?] bandwidth that was used, as compared to the
> allocated bandwidth ...  (I want to know how folks are observing
> near line rate on a 10Gig ... when I am not ...).

Printing the average bandwidth is easy.  The "hardware" number is
difficult to get from inside a single application.
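For the average it is just transferred bytes over wall-clock time,
something like (the counter names here are invented for illustration;
the real values would come from the existing transfer counters):

    #include <stdint.h>

    /* Average throughput in MB/s over the whole migration. */
    static double average_bandwidth_mbs(uint64_t total_bytes,
                                        int64_t total_time_ms)
    {
        if (total_time_ms <= 0) {
            return 0.0;
        }
        /* bytes per ms == kB/s; divide by 1000 again for MB/s */
        return (double)total_bytes / total_time_ms / 1000.0;
    }

Comparing that against the migrate_set_speed limit tells you how close
you got to the allocated bandwidth; how close the link itself is to
line rate is the part that has to come from outside the application.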

>
> I think it would be useful to know the approximate amount of [host]
> CPU time that gets used up by the migration-related thread(s) and any
> related host-side services (like servicing the I/O interrupts while
> driving traffic through the network).  I assume there are alternate
> methods to derive all of this (and we don't need to overload the
> migration stats?).

This one is not easy to do from inside qemu.  It is much easier to get
from the outside.  As far as I know, it is not easy to monitor CPU
usage from inside the very process whose usage we want to measure.
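From the outside it is mostly /proc arithmetic: sample utime/stime for
each migration thread (every tid under /proc/<pid>/task/) before and
after, and the delta is the CPU cost.  An example of the per-task read
(plain Linux /proc parsing, not qemu code):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* CPU seconds (user + system) consumed so far by one task,
     * read from /proc/<pid>/task/<tid>/stat. */
    static double task_cpu_seconds(pid_t pid, pid_t tid)
    {
        char path[64], buf[1024];
        unsigned long utime, stime;

        snprintf(path, sizeof(path), "/proc/%d/task/%d/stat",
                 (int)pid, (int)tid);
        FILE *f = fopen(path, "r");
        if (!f) {
            return -1.0;
        }
        size_t n = fread(buf, 1, sizeof(buf) - 1, f);
        fclose(f);
        buf[n] = '\0';

        /* The comm field is parenthesised and may contain spaces,
         * so scan from the last ')'.  Then skip the state char and
         * ten numeric fields; the next two are utime and stime. */
        char *p = strrchr(buf, ')');
        if (!p || sscanf(p + 2,
                         "%*c %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %lu %lu",
                         &utime, &stime) != 2) {
            return -1.0;
        }
        return (double)(utime + stime) / sysconf(_SC_CLK_TCK);
    }

The I/O-interrupt side (ksoftirqd, vhost threads and friends) can be
sampled the same way, which is why doing it outside qemu is easier.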

Thanks for the comments, Juan.