From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Stefan Priebe <s.priebe@profihost.ag>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
qemu-devel <qemu-devel@nongnu.org>,
Alexandre DERUMIER <aderumier@odiso.com>,
owasserm@redhat.com
Subject: Re: [Qemu-devel] [pve-devel] QEMU Live Migration - swap_free: Bad swap file entry
Date: Fri, 14 Feb 2014 09:06:13 +0000
Message-ID: <20140214090612.GA2316@work-vm>
In-Reply-To: <52FD3696.8080103@profihost.ag>
* Stefan Priebe (s.priebe@profihost.ag) wrote:
>
> On 13.02.2014 21:06, Dr. David Alan Gilbert wrote:
> >* Stefan Priebe (s.priebe@profihost.ag) wrote:
> >>On 10.02.2014 17:07, Dr. David Alan Gilbert wrote:
> >>>* Stefan Priebe (s.priebe@profihost.ag) wrote:
> >>>>I could fix it by explicitly disabling XBZRLE - it seems it's
> >>>>automatically on if I do not set the migration caps to false.
> >>>>
> >>>>So it seems to be an XBZRLE bug.
> >>>
> >>>Stefan, can you give me some more info on your hardware and
> >>>migration setup? That stressapptest (which is a really nice
> >>>find!) really batters the memory, which means the migration
> >>>isn't converging for me, so I'm curious what your setup is.
> >>
> >>That one was developed by Google and has been known to me for a
> >>few years. Google found that memtest and co. are not good enough
> >>to stress-test memory.
> >
> >Hi Stefan,
> > I've just posted a patch to qemu-devel that fixes two bugs that
> >we found; I've only tried a small stressapptest run and it seems
> >to survive with them (where it didn't before); you might like to try
> >it if you're up for rebuilding qemu.
> >
> >It's the one entitled '[PATCH] Fix two XBZRLE corruption issues'
> >
> >I'll try and get a larger run done myself, but I'd be interested to
> >hear if it fixes it for you (or anyone else who hit the problem).
>
> Yes, it works fine - no crash now, but it's slower than without XBZRLE ;-)
>
> Without XBZRLE: I needed migrate_downtime 4 and it took around 60s
> With XBZRLE: I needed migrate_downtime 16 and it took around 240s
Hmm; how does that compare with the previous (broken) XBZRLE time?
(i.e. was XBZRLE always slower for you?)

If you're driving this from the HMP/command interface, then the output
of the

    info migrate

command at the end of each of those runs would be interesting.
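
For reference, a minimal HMP sequence along those lines (the destination
address and cache size below are just placeholder values):

    (qemu) migrate_set_capability xbzrle on
    (qemu) migrate_set_cache_size 256m
    (qemu) migrate_set_downtime 4
    (qemu) migrate -d tcp:desthost:4444
    (qemu) info migrate

With XBZRLE enabled, info migrate should also report the cache size and
counters such as xbzrle pages and cache misses, which would show whether
the cache is actually helping.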
Another thing you could try is changing the xbzrle_cache_zero_page
function that I added in arch_init.c so that it reads:
    static void xbzrle_cache_zero_page(ram_addr_t current_addr)
    {
        if (ram_bulk_stage || !migrate_use_xbzrle()) {
            return;
        }

        if (!cache_is_cached(XBZRLE.cache, current_addr)) {
            return;
        }

        /* We don't care if this fails to allocate a new cache page
         * as long as it updated an old one */
        cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE);
    }
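
For context: that helper only matters on the zero-page path of
ram_save_block(). Roughly, the caller looks like the sketch below; this
is from memory of the patched code, so names such as is_zero_range and
save_block_hdr may differ slightly in your tree:

    /* In ram_save_block(), once the scanned page turns out to be all
     * zeroes, we send a compressed zero page and must also tell the
     * XBZRLE cache, since a cached non-zero copy would now be stale. */
    if (is_zero_range(p, TARGET_PAGE_SIZE)) {
        acct_info.dup_pages++;
        bytes_sent = save_block_hdr(f, block, offset, cont,
                                    RAM_SAVE_FLAG_COMPRESS);
        qemu_put_byte(f, 0);
        bytes_sent++;
        xbzrle_cache_zero_page(current_addr);
    }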
Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK