From: David Gibson <david@gibson.dropbear.id.au>
To: Juan Quintela <quintela@redhat.com>
Cc: aik@ozlabs.ru, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Testing migration under stress
Date: Mon, 5 Nov 2012 11:31:58 +1100
Message-ID: <20121105003158.GX27695@truffula.fritz.box>
In-Reply-To: <87sj8sgnku.fsf@elfo.mitica>

On Fri, Nov 02, 2012 at 02:07:45PM +0100, Juan Quintela wrote:
> David Gibson <david@gibson.dropbear.id.au> wrote:
> > Asking for some advice on the list.
> >
> > I have prototype savevm and migration support ready for the pseries
> > machine.  They seem to work under simple circumstances (idle guest).
> > To test them more extensively I've been attempting to perform live
> > migrations (just over tcp->localhost) while the guest is active with
> > something.  In particular I've tried it while using octave to do a
> > matrix multiply (so exercising the FP unit) and my colleague Alexey
> > has tried it during some video encoding.
> >
> > However, in each of these cases, we've found that the migration only
> > completes and the source instance only stops after the intensive
> > workload has (just) completed.  What I surmise is happening is that
> > the workload is touching memory pages fast enough that the ram
> > migration code is never getting below the threshold to complete the
> > migration until the guest is idle again.
> >
> > Does anyone have ideas for testing this better: workloads that are
> > less likely to trigger this behaviour, or settings to tweak in the
> > migration itself to make it more likely to complete while the
> > workload is still active?
> 
> You can:
> 
> migrate_set_downtime 2s (or so)
> 
> I normally run stress, and the migration keeps moving the memory that
> it dirties until it converges (this depends a lot on your networking).

Since I'm using tcp to localhost, it should be really fast, but it
doesn't seem to be :/.  I suspect there are some other bugs here.
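
For reference, the setup is roughly the following.  This is only a
sketch: the port, memory size and elided options are illustrative, not
my exact command lines.  As I understand it, the migration can only
finish once the estimated time to transfer the remaining dirty pages
drops below the allowed downtime, which is why the knob above matters.

  # destination qemu on the same host, waiting for the incoming stream
  qemu-system-ppc64 -M pseries -m 1024 [...] -incoming tcp:localhost:4444

  # on the source HMP monitor
  (qemu) migrate_set_downtime 2
  (qemu) migrate -d tcp:localhost:4444
  (qemu) info migrate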

> Doing anything that is really memory intensive is basically never going
> to converge.

Well, I didn't think the loads I chose would be memory-bound
(especially the video encode), but...
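
Following your stress suggestion, something like this looks like a more
controllable test: give stress a small, fixed working set so the dirty
rate stays below what tcp to localhost can move.  The sizes and values
here are illustrative guesses on my part, not tested numbers.

  # inside the guest: repeatedly dirty a bounded amount of memory
  stress --vm 1 --vm-bytes 128M --timeout 600

  # on the source HMP monitor: raise the limits, then watch progress
  (qemu) migrate_set_speed 1g
  (qemu) migrate_set_downtime 2
  (qemu) migrate -d tcp:localhost:4444
  (qemu) info migrate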

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

