From: Neil Skrypuch <neil@tembosocial.com>
To: qemu-devel@nongnu.org
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: [Qemu-devel] [regression] Clock jump on VM migration
Date: Thu, 07 Feb 2019 17:33:25 -0500
Message-ID: <2932080.UxbmD43V0u@neil>
We (ab)use migration + block mirroring to perform transparent, zero-downtime VM
backups; a rough sketch of driving this flow over QMP follows the list below. Basically:
1) do a block mirror of the source VM's disk
2) migrate the source VM to a destination VM using the disk copy
3) cancel the block mirroring
4) resume the source VM
5) shut down the destination VM gracefully and move the disk to backup
Note that both source and destination VMs are running on the same host and
same disk array.
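For illustration, below is a minimal C sketch of driving steps 1-4 over the
source VM's QMP monitor socket. The socket path, the drive name "drive0", the
mirror target path and the migration URI are made-up examples, and event
waiting (BLOCK_JOB_READY, migration completion), response parsing and error
handling are mostly omitted; it's a sketch of the flow, not our actual tooling.

/*
 * Sketch: steps 1-4 of the backup flow, driven over QMP.
 * The destination VM is assumed to have been started separately with
 * -incoming MIG_URI, using the mirror target as its disk.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define QMP_PATH "/run/qemu/source.qmp"          /* assumed QMP socket     */
#define MIG_URI  "unix:/run/qemu/migrate.sock"   /* assumed migration URI  */

static void qmp_send(int fd, const char *json)
{
    char buf[4096];
    ssize_t n;

    if (write(fd, json, strlen(json)) < 0 || write(fd, "\n", 1) < 0) {
        perror("write");
        return;
    }
    /* Naively drain whatever the monitor sends back (greeting/replies). */
    n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("reply: %s\n", buf);
    }
}

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, QMP_PATH, sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Capabilities negotiation must precede any other command. */
    qmp_send(fd, "{\"execute\": \"qmp_capabilities\"}");

    /* 1) mirror the source VM's disk to the backup copy */
    qmp_send(fd, "{\"execute\": \"drive-mirror\", \"arguments\": "
                 "{\"device\": \"drive0\", \"target\": \"/backup/copy.qcow2\", "
                 "\"sync\": \"full\"}}");
    /* ... wait for the BLOCK_JOB_READY event before migrating ... */

    /* 2) migrate to the destination VM that was started from the disk copy */
    qmp_send(fd, "{\"execute\": \"migrate\", \"arguments\": "
                 "{\"uri\": \"" MIG_URI "\"}}");
    /* ... wait for the migration to complete ... */

    /* 3) cancel the block mirroring */
    qmp_send(fd, "{\"execute\": \"block-job-cancel\", \"arguments\": "
                 "{\"device\": \"drive0\"}}");

    /* 4) resume the source VM (it is left paused once migration completes) */
    qmp_send(fd, "{\"execute\": \"cont\"}");

    close(fd);
    return 0;
}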
Relatively recently, the source VM's clock started jumping after step #4. The
specific amount of clock jump is generally around 1s, but sometimes as much as
2-3s. I was able to bisect this down to the following QEMU change:
commit dd577a26ff03b6829721b1ffbbf9e7c411b72378
Author: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri Apr 27 17:23:11 2018 +0100
block/file-posix: implement bdrv_co_invalidate_cache() on Linux
On Linux posix_fadvise(POSIX_FADV_DONTNEED) invalidates pages*. Use
this to drop page cache on the destination host during shared storage
migration. This way the destination host will read the latest copy of
the data and will not use stale data from the page cache.
The flow is as follows:
1. Source host writes out all dirty pages and inactivates drives.
2. QEMU_VM_EOF is sent on migration stream.
3. Destination host invalidates caches before accessing drives.
This patch enables live migration even with -drive cache.direct=off.
* Terms and conditions may apply, please see patch for details.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20180427162312.18583-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
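At its core, the mechanism the patch relies on is a flush followed by
posix_fadvise(POSIX_FADV_DONTNEED) on the image file. Here is a minimal
standalone sketch of that idea (not the actual QEMU code, which flushes
through bdrv_co_flush() and has more error handling):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Flush an open image file and ask the kernel to drop its page cache so
 * that subsequent reads come from the backing storage, not stale pages. */
static int drop_page_cache(int fd)
{
    int ret;

    /* Write back dirty pages first; DONTNEED does not touch dirty pages. */
    if (fdatasync(fd) < 0) {
        perror("fdatasync");
        return -1;
    }

    /* offset 0, len 0 means "the whole file". */
    ret = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    if (ret != 0) {
        fprintf(stderr, "posix_fadvise failed: %d\n", ret);
        return -1;
    }

    return 0;
}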
That patch went in for QEMU 3.0.0, and I can confirm this issue affects both
3.1.0 and 3.0.0, but not 2.12.0. Most testing has been done on kernel 4.20.5,
but I also confirmed the issue with 4.13.0.
Reproducing this issue is easy; it happens 100% of the time with a CentOS 7
guest (other guests not tested), and NTP notices the clock jump quite soon
after migration.
We are seeing this issue across our entire fleet of VMs, but the specific VM
I've been testing with has a 20G disk and 1.5G of RAM.
To further debug this issue, I made the following changes:
diff --git a/block/file-posix.c b/block/file-posix.c
index 07bbdab953..4724b543df 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2570,6 +2570,7 @@ static void coroutine_fn raw_co_invalidate_cache(BlockDriverState *bs,
 {
     BDRVRawState *s = bs->opaque;
     int ret;
+    struct timeval t;

     ret = fd_open(bs);
     if (ret < 0) {
@@ -2581,6 +2582,8 @@ static void coroutine_fn raw_co_invalidate_cache(BlockDriverState *bs,
         return; /* No host kernel page cache */
     }

+    gettimeofday(&t, NULL);
+    printf("before: %d.%d\n", (int) t.tv_sec, (int) t.tv_usec);
 #if defined(__linux__)
     /* This sets the scene for the next syscall... */
     ret = bdrv_co_flush(bs);
@@ -2610,6 +2613,8 @@ static void coroutine_fn raw_co_invalidate_cache(BlockDriverState *bs,
      * configurations that should not cause errors.
      */
 #endif /* !__linux__ */
+    gettimeofday(&t, NULL);
+    printf("after: %d.%d\n", (int) t.tv_sec, (int) t.tv_usec);
 }

 static coroutine_fn int
In two separate runs, this instrumentation produced the following:
before: 1549567702.412048
after: 1549567703.295500
-> clock jump: 949ms
before: 1549576767.707454
after: 1549576768.584981
-> clock jump: 941ms
The clock jump numbers above are from NTP, but you can see that they are quite
close to the amount of time spent in raw_co_invalidate_cache. So, it looks
like flushing the cache is just taking a long time and stalling the guest,
which causes the clock jump. This isn't too surprising as the entire disk
image was just written as part of the block mirror and would likely still be
in the cache.
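As a sanity check of that theory, a small standalone harness along these lines
(program name and paths are just examples) can be pointed at the freshly
written mirror target to see how long the flush + cache-drop pair takes
outside of QEMU:

/*
 * Sketch: time fdatasync() + posix_fadvise(POSIX_FADV_DONTNEED) on a file.
 * Usage (example): ./droptime /backup/copy.qcow2
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    struct timeval before, after;
    long elapsed_ms;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <image-file>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    gettimeofday(&before, NULL);
    fdatasync(fd);                                   /* write back dirty pages */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);    /* drop the page cache    */
    gettimeofday(&after, NULL);

    elapsed_ms = (after.tv_sec - before.tv_sec) * 1000
               + (after.tv_usec - before.tv_usec) / 1000;
    printf("flush + drop took %ld ms\n", elapsed_ms);

    close(fd);
    return 0;
}

On a host where the 20G image was just written out by the block mirror, I'd
expect this to report something in the same ballpark as the before/after
timestamps above.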
I see the use case for this feature, but I don't think it applies here, as
we're not technically using shared storage. I believe an option to toggle this
behaviour on/off, and/or some heuristic for deciding whether it should be
enabled by default, would be in order here.
- Neil