From: Kevin Wolf <kwolf@redhat.com>
To: Quentin Grolleau <quentin.grolleau@ovhcloud.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>, qemu-block@nongnu.org
Subject: Re: [raw] Guest stuck during live live-migration
Date: Mon, 23 Nov 2020 13:25:26 +0100 [thread overview]
Message-ID: <20201123122526.GC5317@merkur.fritz.box> (raw)
In-Reply-To: <e6f25c7e67ce4cfea5e01e4e46f0a3d8@ovhcloud.com>
[ Cc: qemu-block ]
On 23.11.2020 at 10:36, Quentin Grolleau wrote:
> Hello,
>
> In our company, we host a large number of VMs behind OpenStack (so libvirt/qemu).
> The large majority of our VMs run with local data only, stored on NVMe, and most of them use raw disks.
>
> With QEMU 4.0 (and even with older versions) we see strange live-migration behaviour:
First of all, 4.0 is relatively old. Generally it is worth retrying with
the most recent code (git master or 5.2.0-rc2) before having a closer
look at problems, because it is frustrating to spend considerable time
debugging an issue and then find out it has already been fixed a year
ago.
> - some VMs live-migrate at very high speed without issue (> 6 Gbps)
> - some VMs are running correctly, but migrate at a strangely low speed (3 Gbps)
> - some VMs migrate at a very low speed (1 Gbps, sometimes less) and during the migration the guest is completely I/O stuck
>
> When this issue happens the VM is completely blocked; iostat in the VM shows a latency of 30 seconds.
Can you get the stack backtraces of all QEMU threads while the VM is
blocked (e.g. with gdb or pstack)?
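For example, something along these lines should work while the guest is
stuck (assuming the usual qemu-system-x86_64 process name; adjust the
PID lookup to your setup):

    $ gdb -p "$(pidof qemu-system-x86_64)" -batch -ex 'thread apply all bt'

or, more lightweight:

    $ pstack "$(pidof qemu-system-x86_64)"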
> At first we thought it was related to a hardware issue, so we checked by comparing different hardware, but no issue was found there.
>
> So one of my colleagues had the idea to limit, with "tc", the bandwidth on the interface the migration went through, and it worked: the VM didn't lose any ping nor get I/O stuck.
> Important point: once the VM has been migrated (with the limitation) one time, if we migrate it again right after, the migration is done at full speed (8-9 Gbps) without freezing the VM.
Since you say you're using local storage, I assume that you're doing
both a VM live migration and storage migration at the same time. These
are separate connections; storage is migrated using an NBD connection.
Did you limit the bandwidth for both connections, or if it was just one
of them, which one was it?
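For testing, libvirt should also let you throttle the two sides
separately, roughly like this (the domain name, the disk name vda and
the bandwidth values are just placeholders):

    $ virsh migrate-setspeed <domain> --bandwidth 500   # RAM migration, MiB/s
    $ virsh blockjob <domain> vda --bandwidth 500       # storage mirror job, MiB/s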
> It only happens on existing VMs; we tried to reproduce with a fresh instance with exactly the same specs and nothing happened.
>
> We tried to replicate the workload inside the VM but there was no way to reproduce the case, so it was not related to the workload nor to the server that hosts the VM.
>
> So we thought about the disk of the instance : the raw file.
>
> We also tried to run "strace -c" on the process during the live migration and it was doing a lot of "lseek".
>
> And we found this:
> https://lists.gnu.org/archive/html/qemu-devel/2017-02/msg00462.html
This case is different in that it used qcow2 (which should behave much
better today).
It also used ZFS, which you didn't mention. Is the problematic image
stored on ZFS? If not, which filesystem is it?
> So I rebuilt QEMU with this patch and the live migration went well, at high speed and with no VM freeze
> ( https://github.com/qemu/qemu/blob/master/block/file-posix.c#L2601 )
>
> Do you have a way to avoid the "lseek" mechanism, as it consumes a lot of resources to find the holes in the disk and doesn't leave any for the VM?
If you can provide the stack trace during the hang, we might be able to
tell why we're even trying to find holes.
Please also provide your QEMU command line.
At the moment, my assumption is that this is during a mirror block job
which is migrating the disk to your destination server. Not looking for
holes would mean that a sparse source file would become fully allocated
on the destination, which is usually not wanted (also we would
potentially transfer a lot more data over the network).
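As a side note, you can check how much of the image is actually
allocated, e.g. with:

    $ qemu-img map --output=json disk

or simply by comparing "du -sh disk" against the file size.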
Can you give us a snippet from your strace that shows the individual
lseek syscalls? Depending on which ranges are queried, maybe we could
optimise things by caching the previous result.
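If you don't have it handy, something like this (attached to the
running QEMU process) would show the individual calls, including the
queried offsets and how long each call takes:

    $ strace -f -T -e trace=lseek -p "$(pidof qemu-system-x86_64)"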
Also, a final remark: I know of some cases (on XFS) where lseeks were
slow because the image file was heavily fragmented. Defragmenting the
file resolved the problem, so this may be another thing to try.
On XFS, newer QEMU versions set an extent size hint on newly created
image files (during qemu-img create), which can reduce fragmentation
considerably.
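For example (with "disk" as the image file), something like this could
be used to check and address fragmentation:

    $ xfs_bmap -v disk | wc -l      # rough measure of the number of extents
    $ xfs_fsr -v disk               # defragment the file in place
    $ xfs_io -c 'extsize 1m' disk   # extent size hint; usually only effective
                                    # while the file is still empty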
Kevin
> Server hosting the VM:
> - Dual-Xeon hosts with NVMe storage and a 10 Gbps network card
> - QEMU 4.0 and libvirt 5.4
> - Kernel 4.18.0.25
>
> Guest having the issue:
> - raw image with Debian 8
>
> Here is the qemu-img info on the disk:
> > qemu-img info disk
> image: disk
> file format: raw
> virtual size: 400G (429496729600 bytes)
> disk size: 400G
>
>
> Quentin GROLLEAU
>