From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Claudio Fontana <cfontana@suse.de>
Cc: libvir-list@redhat.com, andrea.righi@canonical.com,
Jiri Denemark <jdenemar@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [libvirt RFC] virFile: new VIR_FILE_WRAPPER_BIG_PIPE to improve performance
Date: Fri, 25 Mar 2022 11:29:11 +0000
Message-ID: <Yj2nh1LRZ54BXuds@redhat.com>
In-Reply-To: <737974fa-905c-d171-05b0-ec4df42bc762@suse.de>
On Fri, Mar 18, 2022 at 02:34:29PM +0100, Claudio Fontana wrote:
> On 3/17/22 4:03 PM, Dr. David Alan Gilbert wrote:
> > * Claudio Fontana (cfontana@suse.de) wrote:
> >> On 3/17/22 2:41 PM, Claudio Fontana wrote:
> >>> On 3/17/22 11:25 AM, Daniel P. Berrangé wrote:
> >>>> On Thu, Mar 17, 2022 at 11:12:11AM +0100, Claudio Fontana wrote:
> >>>>> On 3/16/22 1:17 PM, Claudio Fontana wrote:
> >>>>>> On 3/14/22 6:48 PM, Daniel P. Berrangé wrote:
> >>>>>>> On Mon, Mar 14, 2022 at 06:38:31PM +0100, Claudio Fontana wrote:
> >>>>>>>> On 3/14/22 6:17 PM, Daniel P. Berrangé wrote:
> >>>>>>>>> On Sat, Mar 12, 2022 at 05:30:01PM +0100, Claudio Fontana wrote:
> >>>>>>>>>> the first user is the qemu driver,
> >>>>>>>>>>
> >>>>>>>>>> virsh save/resume would slow to a crawl with the default pipe size (64k).
> >>>>>>>>>>
> >>>>>>>>>> This improves throughput by roughly 400%.
> >>>>>>>>>>
> >>>>>>>>>> Going through io_helper still seems to incur some penalty (~15%-ish)
> >>>>>>>>>> compared with direct qemu migration to an nc socket to a file.
> >>>>>>>>>>
> >>>>>>>>>> Signed-off-by: Claudio Fontana <cfontana@suse.de>
> >>>>>>>>>> ---
> >>>>>>>>>> src/qemu/qemu_driver.c | 6 +++---
> >>>>>>>>>> src/qemu/qemu_saveimage.c | 11 ++++++-----
> >>>>>>>>>> src/util/virfile.c | 12 ++++++++++++
> >>>>>>>>>> src/util/virfile.h | 1 +
> >>>>>>>>>> 4 files changed, 22 insertions(+), 8 deletions(-)
> >>>>>>>>>>
> >>>>>>>>>> Hello, I initially thought this to be a qemu performance issue,
> >>>>>>>>>> so you can find the discussion about this in qemu-devel:
> >>>>>>>>>>
> >>>>>>>>>> "Re: bad virsh save /dev/null performance (600 MiB/s max)"
> >>>>>>>>>>
> >>>>>>>>>> https://lists.gnu.org/archive/html/qemu-devel/2022-03/msg03142.html
> >>>>
> >>>>
> >>>>> Current results show the average maximum throughput when migrating
> >>>>> to /dev/null for each FdWrapper pipe size (as reported by QEMU QMP
> >>>>> "query-migrate"; tests repeated 5 times for each size).
> >>>>> VM size is 60G, with most of the memory effectively touched before
> >>>>> migration by a user application that allocates and fills all memory
> >>>>> with pseudorandom data.
> >>>>>
> >>>>> 64K: 5200 Mbps (current situation)
> >>>>> 128K: 5800 Mbps
> >>>>> 256K: 20900 Mbps
> >>>>> 512K: 21600 Mbps
> >>>>> 1M: 22800 Mbps
> >>>>> 2M: 22800 Mbps
> >>>>> 4M: 22400 Mbps
> >>>>> 8M: 22500 Mbps
> >>>>> 16M: 22800 Mbps
> >>>>> 32M: 22900 Mbps
> >>>>> 64M: 22900 Mbps
> >>>>> 128M: 22800 Mbps
> >>>>>
> >>>>> The above is the throughput of the patched libvirt with various pipe sizes for the FdWrapper.
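(For illustration only: a minimal sketch of the kind of memory-touching
guest workload described above. This is hypothetical code, not the actual
test program used in these runs.)

  /* Hypothetical sketch: allocate a large buffer and fill it with
   * pseudorandom data so the migration stream cannot benefit from
   * zero-page detection. */
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      size_t len = (size_t)60 << 30;          /* ~60 GiB, as in the test */
      unsigned char *buf = malloc(len);

      if (!buf)
          return 1;
      srandom(42);
      for (size_t off = 0; off < len; off += sizeof(long)) {
          long r = random();
          memcpy(buf + off, &r, sizeof(r));   /* touch every page */
      }
      pause();                                /* hold memory while migrating */
      return 0;
  }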
> >>>>
> >>>> Ok, it's bouncing around within the noise after 1 MB. So I'd suggest that
> >>>> libvirt attempt to raise the pipe limit to 1 MB by default, but
> >>>> not try to go higher.
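(A minimal sketch of such a pipe-size bump, using a hypothetical helper;
the actual change lives in src/util/virfile.c per the diffstat above.)

  /* Hypothetical sketch: grow a pipe buffer to 1 MiB via F_SETPIPE_SZ,
   * keeping the 64k default if the kernel refuses the request
   * (e.g. due to /proc/sys/fs/pipe-max-size). */
  #define _GNU_SOURCE
  #include <fcntl.h>

  static void try_grow_pipe(int fd)
  {
      if (fcntl(fd, F_SETPIPE_SZ, 1024 * 1024) < 0) {
          /* non-fatal: the pipe keeps its default size */
      }
  }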
> >>>>
> >>>>> As for the theoretical limit for the libvirt architecture,
> >>>>> I ran a qemu migration directly issuing the appropriate QMP
> >>>>> commands, setting the same migration parameters as per libvirt,
> >>>>> and then migrating to a socket netcatted to /dev/null via
> >>>>> {"execute": "migrate", "arguments": { "uri", "unix:///tmp/netcat.sock" } } :
> >>>>>
> >>>>> QMP: 37000 Mbps
> >>>>
> >>>>> So although the pipe size improves things (in particular, the
> >>>>> large jump happens at the 256K size, though 1M seems a very good value),
> >>>>> there is still a second bottleneck in there somewhere that
> >>>>> accounts for a loss of ~14200 Mbps in throughput.
> >>
> >>
> >> Interesting addition: I ran a quick test on a system with faster CPUs and larger VM sizes, up to 200GB,
> >> and the difference in throughput between libvirt and qemu is basically the same, ~14500 Mbps:
> >>
> >> ~50000 Mbps qemu to netcat socket to /dev/null
> >> ~35500 Mbps virsh save to /dev/null
> >>
> >> So it does not seem to be proportional to CPU speed (not a totally fair comparison, because the VM sizes are different).
> >
> > It might be limited by RAM or cache bandwidth though, given the extra copy.
>
> I was thinking about sendfile(2) in the iohelper, but that probably
> can't work, as the input fd is a socket; I am getting EINVAL.
Yep, sendfile() requires the input to be an mmap'able FD,
and the output to be a socket.

Try splice() instead, which merely requires one end to be a
pipe; the other end can be any FD AFAIK.
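(A rough sketch of a splice()-based copy loop, using a hypothetical helper
that pulls from an arbitrary source FD into the iohelper's pipe; not the
actual libvirt code.)

  /* Hypothetical sketch: move up to 1 MiB from an arbitrary fd into a
   * pipe without bouncing the data through a userspace buffer; splice()
   * only requires that one side of the transfer is a pipe. */
  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <unistd.h>

  static ssize_t splice_chunk(int from_fd, int pipe_wr_fd)
  {
      ssize_t n;

      do {
          n = splice(from_fd, NULL, pipe_wr_fd, NULL,
                     1024 * 1024, SPLICE_F_MOVE | SPLICE_F_MORE);
      } while (n < 0 && errno == EINTR);

      return n;   /* 0 on EOF, -1 with errno set on error */
  }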
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|