qemu-devel.nongnu.org archive mirror
From: Chegu Vinod <chegu_vinod@hp.com>
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] Fwd: Re: [RFC 0/7] Migration stats
Date: Mon, 13 Aug 2012 08:29:41 -0700	[thread overview]
Message-ID: <50291D65.1060904@hp.com> (raw)
In-Reply-To: <877gt3x71x.fsf@elfo.mitica>



Forwarding to the alias.
Thanks,
Vinod

-------- Original Message --------
Subject: 	Re: [RFC 0/7] Migration stats
Date: 	Mon, 13 Aug 2012 15:20:10 +0200
From: 	Juan Quintela <quintela@redhat.com>
Reply-To: 	<quintela@redhat.com>
To: 	Chegu Vinod <chegu_vinod@hp.com>
CC: 	


[ snip ]

>> - Prints the real downtime that we have had
>>
>>    really, it prints the total downtime of the complete phase, but the
>>    downtime also includes the last ram_iterate phase.  Working on
>>    fixing that one.

Good one.


[...]

>> What do I want to know:
>>
>> - is there any stat that you want?  Once here, adding a new one should
>>    be easy.
>>

>
> A few suggestions:
>
> a) Total amount of time spent sync'ng up dirty bitmap logs for the
> total duration of migration.

I can add that one, it is not difficult.  Notice that in the future I
expect to do the syncs in smaller chunks (but that is pie in the sky).

> b) Actual [average?] bandwidth that was used as compared to the
> allocated bandwidth ...  (I am wanting to know how folks are observing
> near line rate on a 10Gig...when I am not...).

Printing the average bandwidth is easy.  The "hardware" one is difficult
to get from inside one application.

>
> I think it would be useful to know the approximate amount of [host] cpu
> time that got used up by the migration related thread(s) and any
> related host side services (like servicing the I/O interrupts while
> driving traffic through the network). I assume there are alternate
> methods to derive all these (and we don't need to overload the
> migration stats?)

This one is not easy to do from inside qemu.  Much easier to get from
the outside.  As far as I know, it is not easy to monitor cpu usage from
inside the cpu that we want to measure.

Thanks for the comments, Juan.
